Kriging#

import geolime as geo
import numpy as np
from pyproj import CRS
import pyvista as pv
import pandas as pd

pv.set_jupyter_backend("html")
geo.Project().set_crs(CRS("EPSG:26909"))
dh_tom = geo.read_file("../data/dh_tom_comp_capped.geo")
tmw_sol = geo.solid_from_dxf("TMW", "../data/tom_west_blob_small.dxf")

Grid Creation#

To create the interpolation grid, called a BlockModel in GeoLime, we first create a Voxel, which is a plain parallelepiped. We then convert it into a BlockModel, which allows us to remove the blocks outside the domain.

The Voxel is created from the drillhole bounds, which can be accessed either through the minimum and maximum of each coordinate or through the bounds() method.

# using the minimum and maximum of the Z coordinates
z_max = np.max(dh_tom["Z_B"])
z_min = np.min(dh_tom["Z_E"])
# or equivalently using the bounds() method
z_min = dh_tom.bounds()[0][2]
z_max = dh_tom.bounds()[1][2]
z_min, z_max
(883.8832451628475, 1545.2686973526997)

As the bounds are floating-point values, it is best to round them. Using the numpy round method, we can round to the nearest ten using (-1), the nearest hundred using (-2), etc.

z_min.round(-1)
880.0
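
For instance, rounding z_min to the nearest hundred:

z_min.round(-2)
900.0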

For safety, we also shift the origin of the Voxel slightly to ensure total coverage of the data.

Here we create a Voxel with a cell size of 15 x 15 x 5.

vx = geo.Voxel(
    name="Grid3D",
    shape=[50, 100, (z_max - z_min) // 5],  # number of cells along X, Y and Z
    origin=[  # rounded bounds shifted by 5 m for full coverage
        dh_tom.bounds()[0][0].round(-1) - 5,
        dh_tom.bounds()[0][1].round(-1) - 5,
        z_min.round(-1) - 5
    ],
    axis=[
        [15, 0, 0],
        [0, 15, 0],
        [0, 0, 5],
    ]
)

bm = geo.BlockModel(name="Block3D", xyz=vx.coords())
bm.element_count()
660000
geo.plot_2d(
    [
        {"georef_object": vx, "property": "Z", "agg_method":"first"},
        {"georef_object": dh_tom, "property": "Zn_pct", "agg_method":'sum'},
    ],
    width=400, 
    height=650,
    colorscale='viridis'
)

Unnecessary blocks removal#

To flag each cell of the BlockModel in the solid we can use the method

tmw_sol.contains(bm)

which will create a region in_TMW in the BlockModel. A cell is considered inside the solid if its centroid is inside the solid.

We can also use the function compute_block_proportion_inside_solid, which computes the proportion of each cell inside the solid based on the given discretisation. A discretisation of (1, 1, 1) is equivalent to the method explained above, except that the result is a new property of 0s and 1s instead of a new region.

geo.compute_block_proportion_inside_solid(
    grid_object=bm, 
    solid=tmw_sol, 
    block_proportion_name='domain_prop', # name of the new property
    block_discretization=(2, 2, 1) # each block discretised into 2 x 2 x 1 sub-points, so proportions come in quarters
)
set(bm['domain_prop'])
{0.0, 0.25, 0.5, 0.75, 1.0}

We can then transform the property into a region using a condition such as an equality or a threshold.

bm.set_region(
    name="Ore",
    data="domain_prop > 0"
)
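
As an illustration, a stricter threshold keeps only the blocks that are at least half inside the solid (the region name Ore_majority is purely illustrative):

bm.set_region(
    name="Ore_majority",
    data="domain_prop >= 0.5"
)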

Cells in a specific region can then be kept or removed.

bm.keep_only_cells('Ore')
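
The element count can be checked again to confirm that the blocks outside the domain have been dropped:

bm.element_count()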

Plotting for visual QC#

geo.plot_2d(
    [
        {"georef_object": bm, "property": "Z", "agg_method":"first"},
        {"georef_object": dh_tom, "property": "Z_M", "agg_method":'first'},
    ],
    width=400, 
    height=650,
    colorscale='viridis'
)
bm.set_property(name="cell_size_z", data=f"{bm.axis()[2]}")
geo.plot_2d(
    [
        {"georef_object": bm, "property": "cell_size_z", "agg_method":"sum"},
    ],
    width=400, 
    height=650,
    colorscale='viridis'
)
dh_pv = dh_tom.to_pyvista("Zn_pct")
bm_pv = bm.to_pyvista()
tmw_sol_pv = tmw_sol.to_pyvista()
p = pv.Plotter()

p.add_mesh(dh_pv.tube(radius=10))
p.add_mesh(bm_pv,show_edges=True, opacity=0.7)
p.add_mesh(tmw_sol_pv, style='wireframe')
p.show()

Covariance and Neighborhood setup#

A covariance and a neighborhood are necessary to initialise a Kriging estimator. Here we define a covariance based on the variography analysis and an appropriate neighborhood strategy.

cov = (
    0.13 * geo.Nugget() 
    + 0.51 * geo.Spherical(metric=geo.Metric(scales=[60., 85., 10.], angles=[145.0, 45.0, 20.0]))
    + 0.34 * geo.Spherical(metric=geo.Metric(scales=[120., 90., 30.], angles=[145.0, 45.0, 20.0]))
)

neigh = geo.MaxPerCategoryNeighborhood(
    dimension=3,
    metric=geo.Metric(
        angles=[145, 45, 10],
        scales=[60, 60, 15]
    ),
    min_n=2,
    max_n=12,
    max_per_category=4,
    category=dh_tom.holeids()
)

Estimator Setup#

GeoLime estimators are first initialised with the necessary parameters. For Kriging, a covariance and a neighborhood are needed.

estimator = geo.OrdinaryKriging(covariance_model=cov, neighborhood_model=neigh)

Now let’s imagine we have a list of grades sharing the same covariance model; we can use a loop to perform multiple kriging runs at once.

You can extend this with nested loops if you have multiple domains and multiple kriging passes, as sketched after the loop below.

for attrs in ['Zn_pct', 'Pb_pct']:
    estimator.solve(
        obj=dh_tom,
        obj_region=f"{attrs}.notna()",
        obj_attribute=attrs,
        support=bm,
        support_region=None,
        support_attribute=f"{attrs}_krig",
        discr=(3, 3, 3)
    )
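
As a sketch of the nested-loop idea mentioned above, assuming each domain has its own region on the block model (here only the Ore region created earlier) and that support_region accepts a region name, a multi-domain, multi-pass run could look like this (pass names and attribute suffixes are illustrative):

passes = {"pass1": estimator}  # additional passes with relaxed neighborhoods could be added here
for domain in ["Ore"]:  # one entry per domain region
    for attrs in ["Zn_pct", "Pb_pct"]:
        for pass_name, pass_estimator in passes.items():
            pass_estimator.solve(
                obj=dh_tom,
                obj_region=f"{attrs}.notna()",
                obj_attribute=attrs,
                support=bm,
                support_region=domain,
                support_attribute=f"{attrs}_{domain}_{pass_name}_krig",
                discr=(3, 3, 3)
            )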

Kriging Visual QC#

geo.swath_plot(
    [
        {"obj":bm, "attribute":"Zn_pct_krig", "region":"Zn_pct_krig.notna()", "swath_interval":30, "axis":"Y", "color": "blue"},
        {"obj":dh_tom, "attribute":"Zn_pct", "region":"Zn_pct.notna()", "swath_interval":30, "axis":"Y_M", "color": "red"}
    ]
)

Using a rough density estimate for the block model, we can then compute the grade-tonnage curves.

bm.set_property(name='density', data='3')
grades = np.linspace(start=0, stop=15, num=10)
geo.gtc(grid=bm, attribute='Zn_pct_krig', density='density', grades=grades)

Tonnage and mean grade values can be extracted using the following functions.

tonnage = geo.gtc_plotting.compute_selectivity_tonnage(grid=bm, attribute='Zn_pct_krig', density='density', grades=grades)
tonnage
array([63659250., 44917875., 32764500., 19571625.,  8633250.,  3476250.,
        1191375.,   617625.,   381375.,   124875.])
mean_grade = geo.gtc_plotting.compute_selectivity_mean_grade(grid=bm, attribute='Zn_pct_krig', density='density', grades=grades)

They can then be merged into a DataFrame for a nicer display.

df = pd.DataFrame({'grades':grades, 'tonnage':tonnage, 'mean grade':mean_grade})
df
      grades     tonnage  mean grade
0   0.000000  63659250.0    3.651597
1   1.666667  44917875.0    4.953531
2   3.333333  32764500.0    5.857524
3   5.000000  19571625.0    6.969873
4   6.666667   8633250.0    8.501978
5   8.333333   3476250.0   10.148578
6  10.000000   1191375.0   12.393275
7  11.666667    617625.0   13.856538
8  13.333333    381375.0   14.801636
9  15.000000    124875.0   16.446541

For reporting, decimal values can be rounded.

df['grades'] = df['grades'].round(2)
df['tonnage'] = df['tonnage'].round(-3).astype(int)
df
   grades   tonnage  mean grade
0    0.00  63659000    3.651597
1    1.67  44918000    4.953531
2    3.33  32764000    5.857524
3    5.00  19572000    6.969873
4    6.67   8633000    8.501978
5    8.33   3476000   10.148578
6   10.00   1191000   12.393275
7   11.67    618000   13.856538
8   13.33    381000   14.801636
9   15.00    125000   16.446541

IDW for sanity check#

Most of the time we also perform an IDW estimation as a sanity check. In the same manner as the Kriging, we first initialise the IDW estimator with the corresponding ellipsoid and power, then solve it.

estimator = geo.IDW(
    metric=geo.Metric(
        angles=[145.0, 45.0, 20.0],
        scales=[60, 60, 15]
    ),
    power=2,
    neighbors_number=10
)
estimator.solve(
    obj=dh_tom,
    obj_region="Zn_pct.notna()",
    obj_attribute="Zn_pct",
    support=bm,
    support_region=None,
    support_attribute="Zn_idw",
)
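
The IDW result can then be compared against the kriging estimate and the drillhole data with the same swath-plot QC as above (assuming the solve above has populated Zn_idw):

geo.swath_plot(
    [
        {"obj":bm, "attribute":"Zn_pct_krig", "region":"Zn_pct_krig.notna()", "swath_interval":30, "axis":"Y", "color": "blue"},
        {"obj":bm, "attribute":"Zn_idw", "region":"Zn_idw.notna()", "swath_interval":30, "axis":"Y", "color": "green"},
        {"obj":dh_tom, "attribute":"Zn_pct", "region":"Zn_pct.notna()", "swath_interval":30, "axis":"Y_M", "color": "red"}
    ]
)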
metric = geo.Metric(
    angles=[145.0, -45.0, 20.0],
    scales=[3 * 60, 2 * 60, 3 * 15]
)
dh_pv = dh_tom.to_pyvista("Zn_pct")
bm_pv = bm.to_pyvista("Zn_pct_krig", "Zn_pct_krig.notna()")
ellipsoid = metric.to_pyvista(center=dh_tom.bounds()[1])
p = pv.Plotter()
p.add_mesh(dh_pv.tube(radius=10))
p.add_mesh(bm_pv,show_edges=True, opacity=1)
p.add_mesh(ellipsoid, style='wireframe')
p.remove_scalar_bar("Zn_pct_krig")
p.show()
slices = bm_pv.slice_orthogonal(x=442050, y=7003791, z=1380)
p = pv.Plotter()
p.add_mesh(dh_pv.tube(radius=10))
p.add_mesh(slices,show_edges=True, opacity=1)
p.remove_scalar_bar("Zn_pct_krig")
p.show()
bm.to_file("../data/block_model.geo")

Reporting#

Results can then be stored in a nested dictionary for resource classification.

tonnage_zn = geo.gtc_plotting.compute_selectivity_tonnage(
    grid=bm, 
    attribute='Zn_pct_krig', 
    density='density', 
    grades=grades
)
tonnage_pb = geo.gtc_plotting.compute_selectivity_tonnage(
    grid=bm, 
    attribute='Pb_pct_krig', 
    density='density', 
    grades=grades
)
resource_dict = {}
resource_dict['TMZ'] = {}
resource_dict['TMZ']['Zn_krig'] = {}
resource_dict['TMZ']['Pb_krig'] = {}
for i, elt in enumerate(grades):
    resource_dict['TMZ']['Zn_krig'][elt] = tonnage_zn[i]
    resource_dict['TMZ']['Pb_krig'][elt] = tonnage_pb[i]    
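
Equivalently, the inner dictionaries can be built directly with zip instead of the index loop:

resource_dict = {
    'TMZ': {
        'Zn_krig': dict(zip(grades, tonnage_zn)),
        'Pb_krig': dict(zip(grades, tonnage_pb)),
    }
}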

The dictionary can be converted into a DataFrame and exported to an Excel file to keep the merged cells.

# tuple keys (domain, attribute) become a MultiIndex once transposed
final_classif = pd.DataFrame.from_dict(
    {(i, j): resource_dict[i][j]
     for i in resource_dict.keys()
     for j in resource_dict[i].keys()},
).T
final_classif
              0.000000    1.666667    3.333333    5.000000   6.666667   8.333333   10.000000  11.666667  13.333333  15.000000
TMZ Zn_krig   63659250.0  44917875.0  32764500.0  19571625.0  8633250.0  3476250.0  1191375.0  617625.0   381375.0   124875.0
    Pb_krig   63598500.0  18191250.0  9473625.0   4768875.0   2571750.0  1285875.0  492750.0   320625.0   138375.0   77625.0
final_classif.to_excel('../data/gtc_report.xlsx', engine='xlsxwriter')