Leapfrog Reference Manual


  • The User's Guide and tutorials do not attempt to explain all features. Only those relevant to the particular workflow step are explained.

    The Reference Manual is intended to give details of many features and functionalities of Leapfrog not covered in the tutorials.

    Once a collar and survey file have been imported, interval measurement tables can be added at any time. To do this, right-click on the Drillhole-Data object in the project tree and select Add Interval Table from the menu:

    This will open the Add Interval Tables dialog:

    Note that adding a collar or a survey table is not allowed. Click the Add button to import an interval table.

    The Import Table dialog will then appear. Proceed as described in the Importing Drillhole Data tutorial.

    Leapfrog uses interpolation to determine the value of a continuous variable, such as grade, between the measured data samples. If the data is both regularly and adequately sampled, you will find the different interpolants will produce similar results. In mining, however, it is rarely the case that data is so abundant and input from the geologist is required to ensure the interpolations produce geologically reasonable results. There are six choices that underpin how the interpolation is performed and, consequently, how the quantity of interest is estimated at points away from the data samples:

    One way Leapfrog differs from many direct methods is that rather than attempting to produce an exact interpolation, it produces an interpolation that is accurate to a user-specified accuracy. Doing this enables Leapfrog to solve large problems quickly and efficiently.

    Setting the Accuracy

    Although there is a temptation to set the accuracy as low as possible, there is little point in specifying an accuracy significantly smaller than the errors in the measured data. For example, if grade values are specified to two decimal places, setting the accuracy to 0.001 is more than adequate. Smaller values will cause the interpolation procedure to run more slowly and degrade the interpolation result. For example, when recording to two decimals the range 0.035 to 0.045 will be recorded as 0.04. There is little point in asking Leapfrog to match a value to plus or minus 0.000001 when intrinsically that value is only accurate to plus or minus 0.005.

    Leapfrog estimates the accuracy from the data values by taking a fraction of the smallest difference between measured data values.
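    The heuristic just described can be sketched in a few lines. This is our own illustration, not Leapfrog's code, and the fraction of 1/100 is an assumed value:

```python
def estimate_accuracy(values, fraction=0.01):
    """Estimate accuracy as a fraction of the smallest difference
    between distinct measured data values (the fraction is an assumption)."""
    distinct = sorted(set(values))
    if len(distinct) < 2:
        return fraction  # degenerate case: no differences to measure
    smallest_diff = min(b - a for a, b in zip(distinct, distinct[1:]))
    return fraction * smallest_diff

# Grades recorded to two decimal places differ by multiples of 0.01,
# so the estimate falls well below the +/-0.005 rounding error in the data.
print(round(estimate_accuracy([0.04, 0.05, 0.11, 0.27]), 6))  # 0.0001
```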


    Add Interval Table


    Advanced Interpolation Settings

    1. Accuracy

    2. Variogram models

    3. Modelling the underlying variation

    4. Anisotropy

    5. Data transformation

    6. Nugget

    Page 1 of 106 - Reference Manual

    04-07-2011, file://C:\TEMP\~hhB4B2.htm

  • Variogram Model

    In Leapfrog the interpolated value at a point is the weighted sum of the data points added to a smooth estimate of the underlying distribution of the data. This is equivalent to conventional Kriging. Leapfrog differs from most Kriging implementations in its choice of Variogram models.

    One of the fundamental difficulties in interpolating data is the problem of determining a suitable range. A finite range means that any point in space that is more than the range away from a data sample will have an interpolated value that is either zero or an estimate of the mean value. Often this is an advantage, as it is intuitively reasonable to expect that an interpolation becomes less reliable further from the data.

    However, often the range is not known a priori and the data sampling is highly irregular. In such a case, a basis function with an infinite range can produce a better result. The linear variogram is an example of just such a model and as a consequence it is the default interpolation method inside Leapfrog. A data set interpolated with a linear variogram is independent of axis units, and will produce identical results if the data coordinates are given in meters or millimetres.
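    The unit independence of the linear variogram can be demonstrated with a minimal one-dimensional sketch (our own, not Leapfrog's implementation): scaling all coordinates scales the distance matrix, the solved weights scale inversely, and the predictions are unchanged.

```python
import numpy as np

def linear_rbf_fit_predict(x, y, x_query):
    # Basis matrix of pairwise distances |xi - xj| (linear variogram basis).
    A = np.abs(x[:, None] - x[None, :])
    w = np.linalg.solve(A, y)                  # basis-function weights
    return np.abs(x_query[:, None] - x[None, :]) @ w

x = np.array([0.0, 10.0, 25.0, 40.0])          # sample locations, e.g. metres
y = np.array([1.2, 0.7, 2.0, 0.4])             # grades
q = np.array([5.0, 18.0, 33.0])                # query locations

metres = linear_rbf_fit_predict(x, y, q)
millimetres = linear_rbf_fit_predict(x * 1000.0, y, q * 1000.0)
print(np.allclose(metres, millimetres))        # True
```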

    It is important to realise that even if a variogram has infinite range, the behaviour near data samples is determined substantially by the values of that data and can be controlled using the nugget value. Beware that when using a linear variogram, artefacts may occur in parts of the isosurfaces far away from data values. These can be removed either by choosing another variogram model or by clipping the isosurfaces to a minimum distance from the data.

    Appropriate choice of variogram model and associated parameter settings can be crucial for successful modelling. Therefore, before going into the various options in Leapfrog, first a little background on variograms. The following variogram represents the variance (gamma, γ) of sample values vs. distance, following the popular spherical basis function.

    The "sill" defines the upper bound of the variance. At distances less than the "range", γ shows quasi-linear behaviour, and it stabilises at the sill beyond the "range". Roughly speaking, having a "sill" limits the influence of a value to within the specified "range".

    The "nugget" (effect) is the expected variance when two different samples are very close. This is greater than or equal to zero and less than the sill. If samples taken at two very close locations are very different, the nugget becomes a large positive value. When the nugget is non-zero, the variogram is discontinuous at the origin. The nugget effect implies that values have a high fluctuation over very short distances.
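    The quantities above can be made concrete with the standard spherical variogram formula from geostatistics (a textbook form, not quoted from Leapfrog):

```python
def spherical_variogram(h, nugget, sill, vrange):
    """Variance between samples separated by distance h."""
    if h == 0:
        return 0.0                 # a sample compared with itself
    if h >= vrange:
        return sill                # stabilised at the sill beyond the range
    r = h / vrange
    return nugget + (sill - nugget) * (1.5 * r - 0.5 * r ** 3)

print(spherical_variogram(0.0, 0.1, 1.0, 100.0))    # 0.0
print(spherical_variogram(1.0, 0.1, 1.0, 100.0))    # ~0.113: jumps just above the nugget
print(spherical_variogram(250.0, 0.1, 1.0, 100.0))  # 1.0, the sill
```

A non-zero nugget makes the curve discontinuous at the origin, exactly as described above.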

    Leapfrog provides four variogram models: Linear, Multi-Quadric, Spheroidal and Generalised Cauchy. One variogram model might perform better than the others for a particular data set.


  • 1. Linear variogram (default) A useful general-purpose interpolant for sparsely and/or irregularly sampled data. It is not bounded, i.e. there is no sill.

    2. Multi-Quadric [Hardy (1971)] In earlier versions of Leapfrog, this model was referred to as "Generalised Multiquadric". It shows exponential growth but a flat slope around the origin. This is a simple way of smoothing the linear model's sharp changes of slope and rounding the corners (i.e. it smooths the derivatives). The "scale" parameter is the radius of curvature at x = 0 and controls the smoothness. The alpha (α) parameter determines the growth rate. Users may specify alpha (α) and scale. The function is given as follows:

    φ(x) = (x² + c²)^(α/2), where α = 1, 3 or 5 and c = scale

    As there is no sill, both the linear and multi-quadric models tend to connect across larger intervals that would have been disconnected had a different model (e.g. Cauchy, Spheroidal) been used. If you want high connectivity, a linear or multi-quadric variogram is a suitable choice.

    However, both variograms may suffer from blowouts at data extremities. While the Multi-Quadric model produces a smoother interpolation than the linear model, it is more susceptible to blowouts. If you observe this problem, consider providing a small nugget value or switching to one of the following two models.

    3. Spheroidal An interpolant that approximates the spherical basis function used in Kriging. Instead of having an exactly finite range, the function dies off rapidly to zero outside the specified range. The grade shells produced by this function are in general very similar to those produced by Kriging (spherical basis function) close to the data values, but the shells are less prone to artefacts when the grade shell is distant from a measured data point. A high alpha (α) leads to fast growth, approaching the sill quickly. Roughly speaking, the spheroidal model shows the behaviour of the linear model at the origin, while the rest of its shape is reminiscent of the Generalised Cauchy model.

    4. Generalised Cauchy Also known as the Inverse Multi-Quadric. Particularly suitable for smooth data such as gravity or magnetic field data. This model is flat at the origin and asymptotically approaches the sill. Users may specify "sill", "scale" and "alpha (α)". The function is given as follows:

    γ(x) = sill × (1 - c^α (x² + c²)^(-α/2)), where α = 1, 3, 5, 7 or 9 and c = scale
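    The two formulas can be transcribed directly. The sketch below is our own, with parameter names following the text (α = growth rate, c = scale):

```python
def multi_quadric(x, alpha=1, c=1.0):
    # phi(x) = (x^2 + c^2)^(alpha/2): no sill, grows without bound
    return (x * x + c * c) ** (alpha / 2)

def generalised_cauchy(x, sill=1.0, alpha=3, c=1.0):
    # gamma(x) = sill * (1 - c^alpha * (x^2 + c^2)^(-alpha/2)):
    # zero and flat at the origin, asymptotically approaches the sill
    return sill * (1.0 - c ** alpha * (x * x + c * c) ** (-alpha / 2))

print(generalised_cauchy(0.0))                # 0.0 at the origin
print(round(generalised_cauchy(1000.0), 6))   # 1.0: near the sill far away
print(multi_quadric(3.0, alpha=1, c=4.0))     # 5.0: unbounded growth, no sill
```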


  • The variogram approaches the sill at a pace determined by the alpha (α) and c parameters. Varying the sill does not make a noticeable difference to the interpolation results.

    Higher values for the range allow the surface to expand further from the known points. As a result, there is a higher chance of the surface connecting to neighbouring surfaces. Similarly, a lower alpha (α) means the model is slower to reach the "sill", which also makes neighbouring surfaces more likely to connect.

    Modelling the Underlying Drift

    The underlying drift is a model of the grade distribution in terms of a simple deterministic model such as a zero, constant, linear or quadratic variation. Away from data samples, the interpolant will tend towards the value predicted by the underlying drift. This has a direct analogy with Kriging: Simple and Ordinary Kriging differ in that the latter estimates the mean of the data samples whereas the former assumes a zero mean. Leapfrog enables the user to use higher-order models, such as a linear or quadratic variation across the data, when this is appropriate.
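    A toy one-dimensional sketch (ours, not Leapfrog's algorithm) of the idea: fit a simple deterministic drift (here a constant, the data mean, as in Ordinary Kriging), interpolate only the residuals with a basis that dies off with distance, and the prediction tends to the drift value away from the samples.

```python
import numpy as np

def predict(x, y, xq, basis_range=10.0):
    drift = y.mean()                         # constant drift model
    resid = y - drift
    basis = lambda d: np.exp(-(d / basis_range) ** 2)  # decays away from data
    A = basis(np.abs(x[:, None] - x[None, :]))
    w = np.linalg.solve(A, resid)            # weights for the residual part
    return drift + basis(np.abs(xq - x)) @ w

x = np.array([0.0, 5.0, 10.0])
y = np.array([1.0, 3.0, 2.0])
print(round(predict(x, y, 0.0), 6))     # 1.0 -- honours the data
print(round(predict(x, y, 500.0), 6))   # 2.0 -- the mean, far from all samples
```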

    Anisotropy

    In an isotropic world, the influence of an isolated data point on the interpolation is symmetric in all directions. Thus the isosurfaces formed around an isolated data point will appear to be spheres. It is often the case that data is not isotropic, for example in a vein. Here, it is expected that the influence of a data point in a vein should extend further in the direction parallel to the vein than in the direction perpendicular to it. This behaviour is achieved in Leapfrog using anisotropy. If anisotropy is defined, a data point no longer influences the interpolant uniformly in all directions but does so in the form of an ellipsoid. This is particularly useful in circumstances where the geologist wants grade shells to link along a direction defined by, for example, a fault.

    In order to preserve the volume, the ranges used in the anisotropy are scaled to maintain unit volume. Thus, only the ratio of the lengths is important. Specifying an ellipsoid ratio of 1:1:10 will produce a result identical to specifying an ellipsoid ratio of 0.1:0.1:1.
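    A sketch of that normalisation (our own illustration): divide each ratio by the geometric mean so the product is one.

```python
def normalise_ratios(a, b, c):
    k = (a * b * c) ** (1.0 / 3.0)   # geometric mean of the three ratios
    return (a / k, b / k, c / k)

r1 = normalise_ratios(1, 1, 10)
r2 = normalise_ratios(0.1, 0.1, 1)
# Both inputs normalise to the same scalings, so they give identical results.
print(all(abs(u - v) < 1e-9 for u, v in zip(r1, r2)))  # True
```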

    The ellipsoid ratios are mapped onto the axes defined by the dip, dip-azimuth and pitch in the following manner. The Max scaling is applied along the axis defined by the pitch line (pitch-axis). The Min scaling is applied to the axis perpendicular to the plane defined by the dip and dip-azimuth (pole-axis). The Intermediate scaling is applied to the axis that is perpendicular to the axes defined by the pitch and pole.

    In practice, setting the anisotropy is most easily done in Leapfrog using the moving plane.

    Data Transformation

    One of the problems with modelling grade values arises from samples with extreme values. An interpolant that uses a weighted sum of the data will place far too much emphasis on what are essentially exceptional values. The solution to this problem is to apply a nonlinear transformation to the data to reduce the emphasis of exceptional values. Leapfrog provides two grade transformation methods, namely Logarithmic and Gaussian. Both preserve the ordering of data values, so that if the value of a sample is higher or lower than another before transformation, the same relationship will exist after transformation.

    The Gaussian transform modifies the distribution of the data values to conform as closely as possible to a Gaussian Bell curve. Because the grade value distribution is often skewed, (for example, a large number of low values) this transformation cannot be done exactly.

    The logarithmic transform uses the logarithm to compress the data values to a smaller range. In order to avoid issues with taking the logarithm of zero or negative numbers a constant is added to the data to make the minimum value positive. After the logarithm is taken, a constant is added so the minimum of the data is equal to the specified post-log minimum. Flexibility in choosing the pre-log minimum is provided since increasing this value away from zero can be used to reduce the effect of the logarithmic transformation on the resultant isosurfaces.
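    The transform described above can be sketched as follows (our own reading of the description; the parameter names `pre_log_min` and `post_log_min` are ours):

```python
import math

def log_transform(values, pre_log_min=0.001, post_log_min=0.0):
    shift = pre_log_min - min(values)             # make the minimum positive
    logged = [math.log(v + shift) for v in values]
    offset = post_log_min - min(logged)           # pin the post-log minimum
    return [v + offset for v in logged]

data = [0.0, 0.02, 0.5, 9.7]        # skewed grades with one extreme value
out = log_transform(data)
print(round(min(out), 6))           # 0.0 -- equals the post-log minimum
print(out == sorted(out))           # True -- ordering of values is preserved
```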

    Pressing the "Show Histogram" button will show the histogram of the data with the specified transformation. Show Histogram should also be pressed to update the histogram after any changes to the transformation parameters.

    When isosurfacing transformed data, the threshold value is also transformed. This ensures that an isosurface at a threshold of 0.4 will still


    pass through data samples whose value is 0.4. What will change, however, is the behaviour of the isosurface away from the samples.

  • Nugget

    The nugget represents a local anomaly in the grade values; that is, a value substantially different from the value that would be predicted at that point from the data around it. In Leapfrog, nugget behaviour is most commonly seen in the form of pin-cushion distortions of the isosurfaces near data points. Block models that are based on smooth interpolants are also affected by this pincushion effect, although it may not be as visible to the user.

    The pincushion effect can be reduced by adding or increasing the nugget value in the variogram. This effectively places more emphasis on the average values of the surrounding samples, and less on the actual data point. It is important to note that when nugget is non-zero an isosurface of a given value may no longer touch a sample of that value. How far it deviates from the sample is an indication of how different that data sample is from what would be predicted from its neighbours.

    Note that the pincushion effect can also be caused by incorrect specification of a deposit's anisotropy.

    Exporting multiple items at once may be done by using the Batch Export command from the Project menu as shown below.

    The Select Objects To Export dialog will appear showing the project tree. Select any objects you want to export by ticking the check-boxes and clicking OK as shown below.


    Batch Export

    Selecting Export from the Project menu


  • Multiple objects may be selected at once by right-clicking on a row and choosing Select Children. This will select all the children of a given row but not the row itself. This allows you to select all the grade shells of an interpolant in one go as shown below.

    Exporting a points (grey points) object will export the points and all the associated values at once, including any points without associated values. Exporting a values object (coloured points) will export only the selected values and their points.

    The Batch Export dialog is then displayed.

    The Batch Export dialog lists the objects to export, along with a header row for each object type selected.

    To change the file name for an object double-click on the cell in the Save As column and enter a new name.

    To change the export file format click on the Format column and select a new format from the combo-box. To change the format for all objects of a type, set the format in the header row.

    To change the export folder click on the Folder column and type in a new directory or click the button to open a file chooser dialog as shown below.

    To change the export folder for all objects of a type, set the folder in the header row.

    To change the export folder for all objects, use the text box at the bottom of the dialog or click the Browse button.

    Some GMP products do not allow spaces in filenames. To prevent spaces in the exported filenames, un-tick the Allow spaces in filenames checkbox.

    A Boolean operation on two meshes (or isosurfaces) computes the intersection, union or subtraction of one mesh from another. To demonstrate this operation, we compute the intersection of two meshes, cu 0.61 and m_assays Buffer 47.0.


    Boolean Mesh


  • We have two isosurfaces, cu 0.61 and m_assays Buffer 47.0, as shown in the project tree:

    Here we refer to both the isosurface and the mesh objects as 'meshes'. To compute the intersection of two meshes, right-click on one of the meshes in the project tree and select the New Boolean Mesh option.

    A mesh object (listed under the Meshes object in the project tree) can be derived from an isosurface by extracting mesh parts (see screenshot above). Alternatively, you can export an isosurface to a mesh file (*.msh) and then import it back into Leapfrog as a mesh object. For details, refer to Extract Mesh Parts in the Reference Manual.

    In the Boolean Mesh window, the mesh you right-clicked on is already specified as the first mesh:


  • The default operation is Intersect. Other available operations include Union, First minus Second and Second minus First. The result of the Boolean mesh operation will be placed under the mesh you selected to initiate the process, but you can change this using the Place under list.

    To select the second mesh, click on the Second Mesh button. The Select Mesh window that appears lists all the available meshes (including both isosurfaces and mesh objects):

    Select the second mesh, in this case m_assays Buffer 47.0, and click OK. Back in the Boolean Mesh window, both meshes are now specified:

    Notice that the default name has been updated automatically. Click OK to proceed.

    The new mesh has been added under the isosurface cu 0.61:


  • Press Shift+Ctrl+R to run the process. When the operation is complete, view cu 0.61 Intersect m_assays Buffer 47.0. When other meshes are cleared from the scene, the intersecting mesh looks like the one below:

    Compare with the two original meshes and confirm that the correct intersection is obtained.

    Boolean Mesh vs. Domaining

    If you are not familiar with the domaining technique covered in the Domaining Tutorial, you may skip the following.

    A Boolean mesh not only offers intersection, but also provides A union B, A-B and B-A operations, where A and B refer to the first and the second mesh respectively.

    Where the intersection of two meshes is concerned, a boolean mesh operation is similar to domaining. The essence of the domaining technique, "clipping a mesh by a domain", is to obtain the intersection of the mesh and the domain.

    While the following two results are very similar, the boolean mesh and the domain are computed slightly differently. This results in subtle differences. In short, the Boolean mesh produces sharper boundaries, whereas the boundaries produced by the domain are more jagged (or chamfered).


  • However, it is possible to produce sharper edges with domaining. When you specify the domain for an isosurface, you can select an Exact Intersection option.

    There are three options:

    Off: The default. It trims the edge of the mesh if a triangle on the boundary intersects the domain. As a result, the edge may be jagged or chamfered.

    Standard: Most of the isosurface is computed at the specified resolution, but the boolean mesh is used to compute the edges.

    MultiRes: The entire isosurface is computed using a multi-resolution solution. The edge will be very smooth and fine. However, isosurfacing with this option will be considerably slower.

    If you select Standard for Exact Intersection, the isosurface clipped by the domain will be identical to the intersection computed by the boolean mesh. (Slight differences may occur depending on the order of operations.)

    Intersection by Boolean mesh produces sharp boundaries; clipping by a domain shows jagged (or chamfered) boundaries.

    Boolean Operations and the Direction of a Mesh

    In Leapfrog, a mesh has a positive side and a negative side, which affects the results of Boolean operations carried out on meshes.

    A Boolean operation on two meshes acts on the positive part of the space divided by each mesh. The following table illustrates the result of Boolean operations on closed meshes, where red is the positive side and blue is the negative side:

    The table's rows are the operations (Union, Intersect, First minus Second, Second minus First) and its columns the mesh orientations: both surfaces positive toward the inside, one positive surface toward the outside, and both surfaces positive toward the outside.


  • To create a bounding box, right-click on the Bounding Boxes folder in the project tree and select Define Bounding Box from the menu:

    This displays the Define Bounding Box window:

    When the Define Bounding Box window opens it defaults to a bounding box calculated from the project extents, that is, from all the locations, polyline and mesh objects in the project. The project extents box can be recalculated at any time by clicking the From Projects Extents button.

    There are two types of bounding boxes:

    To specify a fixed bounding box, check the Fixed Bounding Box radio button and type the required extents in the Minimum and Maximum columns.

    To copy the extents from a locations object to the fixed bounding box area, click on the Object Bounding Box radio button, then select the required object from the Locations drop-down box. Set the - Margin and + Margin as required, click on the Fixed Bounding Box radio button, then on the Copy Extents button.

    To specify an object bounding box, check the Object Bounding Box radio button and type the required extents in the - Margin and + Margin columns.

    To specify the actual extents of an object bounding box, click on the Fixed Bounding Box radio button and set the extents in the Minimum



    Bounding Boxes

    A Fixed Bounding Box does not depend on any other object and will not change unless the user edits it directly. An Object Bounding Box surrounds an object, enlarged by the specified margin. The bounding box will change when the locations of the object it surrounds change.


    and Maximum columns. Click on the Object Bounding Box radio button, then on the Copy Extents button. If the specified extents would result in a negative margin, the margin is set to zero instead.

    Example

    We will edit the m_assay points bounding box to have a minimum corner at (3500, 7000, 120) and a maximum corner at (5000, 8000, 1200).

    1. Double-click on the m_assay bounding box to open the Edit Bounding Box dialog as shown below:

    2. Click on the Fixed Bounding Box radio button:

    3. Now type in the desired extents for the bounding box: (3500, 7000, 200) in the Minimum column and (5000, 8000, 800) in the Maximum column and click on the Object Bounding Box radio button:


  • 4. Now click on the Copy Extents button. The margins will update as shown below. Now click OK to save changes.

    5. Now rerun all the grade shells that depended on the bounding box. Here are the new Au grade shells. The bounding box is now large enough that it does not clip the Au 0.48 grade shell. In the following screenshot, the yellow and green isosurfaces are the ones with the new and the old bounding boxes respectively.


  • To set a default bounding box, right-click on the Bounding Boxes folder and select Set Default Bounding Box:

    The Set Default Bounding Box window will appear:

    This window displays all bounding boxes currently defined for the project, together with an additional project-wide option.


    Set Default Bounding Box


  • If you select this additional option, the project as a whole will be used as the default bounding box.

    Select the required default bounding box and click OK. The default bounding box is indicated in the project tree by the blue bounding box icon:

    You can also set the default bounding box by right-clicking on the bounding box you wish to use, then ticking the Default box:

    Point data is grouped into three folders based on the type of data the points represent: Numeric Data, Boundaries and Topography. If some data appears in the wrong folder it can be moved to another using the Change Data Type command.

    To change the data type of a points object right-click on the points and select Change Data Type from the menu as shown below:


    Changing Data Types


  • The Select New Type dialog will appear. Select the desired folder, in this case Boundary, from the Geological Type drop down list and click OK.

    The points object and all its children will be moved to the selected folder as shown below:

    Changing the data type of a points object will change the data type of any subsets of the points selected in a domain.

    Combined interpolants are weighted linear combinations of other interpolants. Given interpolants f and g with weights w1 and w2 respectively, the value of the combined interpolant is given by:

    c(x) = w1f(x) + w2g(x).
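    As a minimal sketch (with stand-in functions, not real interpolant objects), the weighted combination and a ratio-preserving weight normalisation look like this:

```python
def combine(interpolants, weights):
    """Return c(x) = w1*f1(x) + w2*f2(x) + ... as a callable."""
    def c(x):
        return sum(w * f(x) for f, w in zip(interpolants, weights))
    return c

def normalise(weights):
    # Scale weights so their sum is one while keeping their ratios.
    total = sum(weights)
    return [w / total for w in weights]

f = lambda x: 2.0 * x        # stand-ins for two RBF interpolants
g = lambda x: x + 1.0
c = combine([f, g], [0.5, 0.5])
print(c(3.0))                # 0.5*6.0 + 0.5*4.0 = 5.0
print(normalise([2.0, 6.0])) # [0.25, 0.75]
```

With weights -1 and 1, the same construction yields a difference of two interpolants, as in the vein-thickness example in this section.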

    Suppose you have imported the Demo drillhole sets in tutorials\Demo and followed the instructions given in Vein Modelling. The Combine Interpolants command is found by right-clicking on an interpolant object in the project tree and selecting Combine Interpolants from the menu:


    Combined Interpolants


  • The Select Interpolants dialog will then appear.

    Select two or more interpolants and click OK to display the Combined Interpolant dialog.

    To add more interpolants, click the Add button to redisplay the Select Interpolants dialog.

    To remove an interpolant, select the interpolant in the list and click the Remove button.

    To change a weight, double-click on the desired number (or select the desired row and hit Space), and type the new value - hit Enter to finish editing.

    The Normalize button will scale all the weights so their sum is one (1) whilst maintaining the ratios between them.

    Fill in the Name text box and click OK to create the new combined interpolant which will run automatically. Combined interpolants are placed in their own folder which will appear if it does not already exist. They may be used like any other RBF interpolant.


  • Example

    We will use combined interpolants to display the thickness of the vein shown below. From the B1_vein footwall interpolant, select Combine Interpolants.

    Select B1_vein footwall offset values and B1_vein hangingwall offset values. To find the thickness of the vein we combine these two interpolants.

    Ensure that the two weights are -1 and 1 and click OK to create the new interpolant.

    Right-click on the mesh from which the vein was made - B1_vein footwall Surface in this instance - and select the Evaluate command. Select the combined interpolant just created - vein thickness in this instance - as shown below and click OK.


  • Running the evaluation and displaying gives the following result:

    Blue regions indicate thin parts and red regions indicate thick parts. More information about the thickness can be obtained from the evaluation's properties or by changing the Colouring.

    The vein thickness interpolant must be evaluated on the mesh from which the vein was made - not on the vein mesh. Evaluating the interpolant on the vein mesh will not give you the thickness at that point.

    The thickness evaluation is automatically computed if the vein is created by the New Vein function.

    The Assay Compositing dialog allows you to perform fixed-length compositing of assay data. This dialog can be accessed by right-clicking on the assay table of the imported drillhole data.


    Composite Assays


  • The dialog is composed of three tabs, Compositing, Volume and Output Columns:

    Compositing

    Compositing Method

    No compositing: Apart from the actions for special assay values, no processing of the input data is done.

    Fixed Length: All intervals are processed to the fixed composite length. Note that the interval at the end of a drillhole may be shorter than the composite length. If the last interval is longer than the specified minimum length, it will be kept; otherwise it will be discarded.

    Special Assay Values

    Under Special Assay Values will be listed any meanings that have been associated with special assay values in the table (non-numeric or negative values), along with two standard values - Blank (empty or NULL value) and Missing (no row in database).

    For each type of interval you can Omit (leave empty), Replace it with a fixed value or set it to a Background value depending on the assay column. The background values used for each assay column are specified in the Assay Background Values list.
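    The Fixed Length method can be sketched for a single drillhole of (from, to, grade) intervals. This is an illustrative simplification, not Leapfrog's exact algorithm: it uses length-weighted averaging and simply discards a too-short final composite rather than redistributing its length.

```python
def composite_fixed_length(intervals, length, min_length):
    start = min(f for f, _, _ in intervals)
    end = max(t for _, t, _ in intervals)
    out, a = [], start
    while a < end:
        b = min(a + length, end)
        total = weight = 0.0
        for f, t, g in intervals:
            overlap = max(0.0, min(t, b) - max(f, a))  # length inside [a, b]
            total += g * overlap
            weight += overlap
        if (b - a) >= min_length and weight > 0:
            out.append((a, b, total / weight))         # length-weighted grade
        a = b
    return out

intervals = [(0.0, 15.0, 1.0), (15.0, 30.0, 3.0), (30.0, 45.0, 2.0)]
print(composite_fixed_length(intervals, 20.0, 10.0))
# [(0.0, 20.0, 1.5), (20.0, 40.0, 2.5)] -- the final 5 m piece is discarded
```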


  • We examine how the compositing option affects the result. The following snapshots show cu grades for a portion of the m_assays data.

    The first snapshot shows the original, non-composited, intervals of cu.

    Original intervals

    Let us select the No Compositing method. This will only perform processing for the special cases. Select the Replace action for Below Detection and give 1.5 (just for illustrative purposes; in practice, the value for below detection is very low). Notice that the result remains mostly unchanged, apart from the short interval that appears to have a value below detection (inside the red rectangle).


  • No compositing. Below Detection changed to 1.5

    We now select Fixed Length with Composite Length 20.0 and Minimum Length 10.0. Notice that all the intervals are exactly 20.0 long. The exceptions are those at the start and the end. If an interval is shorter than the minimum length, it is discarded and its length is distributed between the intervals at the start and the end. The grade of an interval is the average value. The replaced value 1.5 for the below-detection case is no longer distinctively shown, but it contributes to yielding a higher average.

    Fixed Length Compositing with Length 20.0 and Minimum length 10.0.

    Volume

    In the Volume tab, you can specify where to composite. By default, compositing is performed everywhere, but you can choose to composite only the inside of a region or the results of a query filter. If you have regions or query filters available, they will show in the dialog. Otherwise, they can be created using the Composite Region and New Query Filter commands respectively. Suppose we have a composited region MX_composite created from m_assays by including the zone MX but excluding others:


    We composite with the same Fixed Length settings as above, but limit compositing to the inside of the region MX_composite. The red bars below represent the portions inside this region. Only the data within this region will be composited.

    MX_composite (showing "included" regions only)

    The result should be similar to the one below.


    Output Columns

    While all columns are composited by default, you can choose which columns to include or exclude in the Output Columns tab.

    This topic supplements the tutorial on Compositing Regions.

    The Composite Region dialog (shown below) is used for modelling spatial regions. These regions could represent a particular lithological type (or group of types), mineralization, high grade zones or any other region of interest. The result is stored in a region table, which is an interval table with one measurement column called 'interest'. The interest is 1 for intervals inside the region and 0 otherwise.



    The left-hand side of the dialog is used to select which intervals are to be included in the region. The right-hand side allows you to specify the processing steps to apply to the intervals selected on the left.

    Intervals to include in the region can be selected using a query filter, by specifying a list of category values to include (e.g. lithology values), or by specifying a set of category values previously grouped together using a partition.

    Using a Query Filter

    Select Query Filter from the Define region using combo-box (the Category Column parameters will be removed). Then select the desired query from the Query filter to use combo-box, as shown below:

    If there are no query filters defined this option is not available.

    Using a Category Column

    Select Category Column from the Define region using combo-box. Then select a column from the Column to use combo-box, as shown below:

    If there are no category columns in the table, this option is not available.

    You can work directly with values in the selected column or you can work with previously defined partition groups by selecting a partition from the Partition combo-box.

    Using the left mouse button, drag the intervals you want to model from the Exclude column to the Include column. Use the Ignore column for dykes or other (younger) intrusions that you wish to ignore.

    Exclude vs. Ignore

    Let us consider the following diagram showing three lithologies, A, B and C, where we wish to model lithology A.

    If no processing is required, consider using Partitions or Query Filters instead of creating a composite region.



    Clearly A must be included.

    If B and C are both excluded, Leapfrog will model A as shown below.

    On the other hand, if C is ignored (and B excluded), all occurrences of A-C-A down a drillhole are replaced with A-A-A and all occurrences of A-C-B down a drillhole are replaced with A-B (the contact point is the midpoint of the removed C interval). Effectively C will be completely ignored as if it were non-existent and Leapfrog will model A as shown below.

    Missing Intervals

    Missing intervals (sometimes known as 'implicitly missing intervals') can be treated in the same way as other intervals: included, ignored or excluded. Ignored is recommended in most situations, except when there are large unsampled stretches of drillhole. This can happen, for example, when the ore lies below a lot of ground rock.

    Processing Types

    Leapfrog provides five ways to process the drillhole data when you composite a region.

    We describe the details of each processing type and observe how each affects the following scene: the original drillhole data showing the MX zone only.


    1. Window filter

    2. Fill short gaps

    3. Remove short intervals

    4. Extract single vein

    5. Longest interval only


    Window Filter

    The window filter quickly determines whether an interval should be included in or excluded from the composited region. The decision is based on three parameters: Width, Interest Percentage and Conservatism.

    The Width parameter specifies the width of the window. If the proportion of interest intervals (MX in this example) within the window is higher than the Interest Percentage, the filter includes these intervals. Otherwise, they are removed and will not appear in the resulting composited region.

    The following series of images show the effect of varying the parameters. The translucent white cylinders are the processed region intervals, and the red cylinders are the original interest intervals.

    Width=1, Interest percentage=50%, Conservatism=50%

    m_assays (zone showing MX only)


    Width=40, Interest percentage=50%, Conservatism=50%

    A high value for Interest percentage would make the filter strict, and may improve the alignment between the output and the input.

    Width=40, Interest percentage=90%, Conservatism=50%


    Conservatism controls the strictness in determining the boundary of the filtered intervals.

    After the filtering, the region intervals' endpoints will not usually match any of the original interval endpoints. This is not particularly desirable, so a filtered interval may need to extend its endpoints to an adjacent interval.

    If a filtered interval happens to have an endpoint lying within an interest interval, the two will be merged and the endpoint will be extended to the endpoint of the interest interval.

    On the other hand, if a filtered interval endpoint lies within a non-interest interval, the composite region result will include the original (non-interest) interval when the overlap between the filtered interval and the original interval is more than Conservatism percent.

    A high value for Conservatism will remove poorly-aligned intervals. For example, if Conservatism is 100%, no non-interest areas touching the filtered boundary will be included, resulting in a smaller volume. If Conservatism is 0.1%, (almost) all non-interest areas touching the filtered boundary will be included, resulting in a larger volume.
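    The core inclusion test can be sketched as follows. This is an assumed reading of the Width and Interest Percentage parameters, not Leapfrog's actual algorithm; the Conservatism-based endpoint adjustment described above is omitted.

```python
def window_includes(intervals, window_start, width, interest_pct):
    """intervals: (start, end, is_interest) tuples down one drillhole.

    Returns True if the interest intervals occupy more than interest_pct
    percent of the window [window_start, window_start + width].
    """
    w0, w1 = window_start, window_start + width
    interest_len = sum(
        min(e, w1) - max(s, w0)  # overlap of this interval with the window
        for s, e, is_interest in intervals
        if is_interest and s < w1 and e > w0
    )
    return interest_len / width * 100.0 > interest_pct
```

    With 6 m of interest in a 10 m window, the intervals pass a 50% threshold but fail a 90% one, which matches the behaviour shown in the snapshots: raising Interest percentage makes the filter stricter.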


    Fill Short Gaps

    Intervals separated by a gap shorter than the specified distance can be merged.

    To demonstrate this, composite a region with Fill gaps shorter than 60.

    In the scene below, intervals less than 60 m apart are joined.

    Remove Short Intervals

    Intervals shorter than a specified length can also be removed.

    To demonstrate this, composite a region with Remove intervals shorter than 50. The short red intervals not overlapped by the white translucent intervals are those excluded by the filter.
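    Both of these filters are simple to express on sorted (start, end) intervals. The sketch below shows one plausible implementation; the exact Leapfrog behaviour at boundaries is not documented here.

```python
def fill_short_gaps(intervals, max_gap):
    """Merge sorted (start, end) intervals separated by a gap shorter than max_gap."""
    merged = [list(intervals[0])]
    for s, e in intervals[1:]:
        if s - merged[-1][1] < max_gap:
            merged[-1][1] = max(merged[-1][1], e)  # close the gap by extending
        else:
            merged.append([s, e])
    return [tuple(iv) for iv in merged]

def remove_short_intervals(intervals, min_length):
    """Drop intervals shorter than min_length."""
    return [(s, e) for s, e in intervals if e - s >= min_length]
```

    For example, fill_short_gaps([(0, 10), (50, 60), (200, 210)], 60) merges the first two intervals into (0, 60), since they are only 40 apart.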

    Extract Single Vein

    With this option, each drillhole will have only one interval, spanning from its first interest interval to its last and filling all the gaps in between, forming one long continuous interval.


    Longest Continuous Interval Only

    With this option, each drillhole will have one interval: the longest continuous interval only. This is useful for extracting veins. The red intervals not overlapped by the white translucent intervals are those excluded by the filter.

    Composited regions can themselves be composited: right-click on a region table in the Project tree and select Composite Region. This means that the different types of filters can be applied sequentially. For example, you may apply the remove-intervals filter to the region you composited with the window filter.

    Handling Missing/Ignored Intervals

    Missing Intervals

    In practice, it is not uncommon for a drillhole to contain some intervals with no values. For correct modelling, you should specify how these 'missing intervals' will be processed. Based on your domain knowledge and analysis, they can be included, excluded or ignored.

    Convert Ignored Intervals


    If Yes is selected, Leapfrog will convert the ignored intervals to either included or excluded by comparing them with their adjacent intervals. For example, when an ignored interval lies between two included intervals, it is converted to an included one. When an ignored interval is sandwiched between one interval of each type, the ignored interval will be split into two and each portion will be converted to the type of its neighbouring interval.

    Otherwise, i.e. if No is selected, the ignored intervals are left ignored.
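    The conversion rule can be sketched on a per-drillhole list of interval classifications. The labels 'in', 'out' and 'ignore' are illustrative; note that Leapfrog splits a sandwiched interval at a point, whereas this whole-label sketch simply assigns each half of an ignored run to the nearer neighbour.

```python
def convert_ignored(labels):
    """labels: interval classifications down one drillhole ('in', 'out', 'ignore').

    Sketch of the assumed conversion: each run of ignored intervals takes the
    classification of its neighbours; when the two neighbours differ, the
    first half of the run follows the upper neighbour and the second half the
    lower one. Assumes at least one classified interval exists.
    """
    out = list(labels)
    i, n = 0, len(out)
    while i < n:
        if out[i] != "ignore":
            i += 1
            continue
        j = i
        while j < n and out[j] == "ignore":  # find the end of the ignored run
            j += 1
        left = out[i - 1] if i > 0 else None
        right = out[j] if j < n else None
        for k in range(i, j):
            if left is None or left == right:
                out[k] = right
            elif right is None:
                out[k] = left
            else:
                out[k] = left if k - i < (j - i + 1) // 2 else right
        i = j
    return out
```

    For instance, an ignored run between an included and an excluded interval is split, with each half converted to match its neighbour.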

    Meshes may be offset by a constant distance using the Constant Offset command.

    The Constant Offset command may be found by right-clicking on any mesh type object in the project tree and selecting Constant Offset:

    The Constant Offset Mesh dialog will then be displayed:

    Select an Offset Distance. Positive values offset towards higher grade for grade shells and to the red side of boundary meshes. Use negative values to offset in the other direction.

    Select a Quality level. A low quality offset will run faster but will not be completely accurate around detailed areas and is more likely to miss small parts. A high quality offset will take longer but will be accurate around detailed areas and will offset small parts correctly. Below are time comparisons for a mesh with 16 500 vertices.

    Setting a value for Ignore parts less than ignores parts smaller than the threshold value in the offsetting process. Small parts are often not interesting and do not offset well unless a very high quality is used. If you set this to 0 then set the Quality to 1.00. Click OK to create the offset mesh. Three objects will appear under the Offset Interpolants sub-folder.


    Quality:                              0.25     0.50     0.75     1.00
    Time taken for Interpolation step:    2.7 sec  7.4 sec  16.3 sec  21.6 sec


    The first object, Cu 0.61 offset by 1.0, is the values object defining the offset mesh, which is then interpolated by an RBF to obtain the actual offset mesh, Cu 0.61 offset by 1.0 Surface. To edit the offset distance or other parameters, double-click on the offset values object (Cu 0.61 offset by 1.0 in this instance).

    Example

    This example demonstrates how problems with small parts can manifest. We will offset the same grade shell by 30 m with all small parts included and a quality of 0.15, as shown below.

    This results in the following surface, which has missed two of the internal parts (among others), as shown below.


    Increasing the quality will catch the missing parts, giving:

    To create a 2D slice, you first need to add the slicer to the scene. To do this, activate the slicer by clicking the slicer button on the scene toolbar. Manipulate the slicer as described in the Section View Manipulation tutorial. Position the slicer in the scene to represent the slice you wish to create.

    Next, right-click on the Images and Slices folder and select Make 2D Slice:



    The Name window will appear. Type a name for the new section, then click OK.

    A window will be displayed showing all the domains available in the project:

    If you wish to include any of the listed domains, tick the required box. Then click OK. The new section will appear in the project tree in the Images and Slices folder. To view the section, drag it into the scene or right-click on it and select View.

    When loading date and timestamp columns, you can specify the date and time format used.

    Format strings are case sensitive. The following directives can be used in a date or timestamp format string:



    Directive                      Placeholder for
    YY                             Year without century [00-99].
    YYYY                           Year with century.
    MM                             Month as a number [1-12].
    MMM                            Abbreviated month name.
    MMMM                           Full month name.
    DD                             Day of the month as a number [1-31].
    DDD                            Abbreviated weekday name.
    DDDD                           Full weekday name.
    hh                             Hour as a number [0-23]; [0-12] if the 'pm' directive is specified.
    mm                             Minute as a number [00-59].
    ss                             Second as a number [00-59].
    pm                             AM or PM placeholder.
    \Y, \M, \D, \h, \m, \s, \p     A literal Y, M, D, h, m, s or p.
    \\                             A literal \.


    To generate the complement of a domain, right-click on the domain and select Define Complement:

    The complement will be generated and will appear in the project tree under the Domains folder:

    You can then view the complement properties and modify the complement in the same way you would any other domain.

    The Define Complement function is useful in dividing a larger volume into smaller ones. For example, say we wish to use a fault surface to divide a volume into two separate volumes:

    The first step is to use the Define Sub Domain function to create a new sub-domain, the first of the smaller volumes:

    Examples:

    Example date                Matching format string
    3 November 2006             DD MMMM YYYY
    3/11/06                     DD/MM/YY
    Nov 3, 2006                 MMM DD, YYYY
    on 3-Nov-2006               on DD-MMM-YYYY
    Tuesday, 11 November 2006   DDDD, DD MMMM YYYY
    Date: 3 Nov 06              \Date: DD MMM YY

    Example time stamp                    Matching format string
    2006-11-03 14:35:00                   YYYY-MM-DD hh:mm:ss
    2006-11-03 02:35:00 pm                YYYY-MM-DD hh:mm:ss pm
    Tuesday, 11 November 2006 at 2:35pm   DDDD, DD MMMM YYYY at hh:mmpm
    20061103143500                        YYYYMMDDhhmmss
    2:35pm on Tue 3 Nov 06                hh:mmpm on DDD DD MMM YY
    Date: 3 Nov 06 Time: 14:35            \Date: DD MMM YY Ti\me: hh:mm
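    As a cross-check of the directive table, the sketch below translates these Leapfrog-style format strings into Python strptime patterns. The directive set and escape rule come from the table above; the translation itself is only an illustration, not part of Leapfrog.

```python
from datetime import datetime

def to_strptime(fmt):
    """Translate a Leapfrog-style date format string into a strptime pattern."""
    # hh means 0-12 when a 'pm' directive is present, otherwise 0-23
    hour = "%I" if "pm" in fmt else "%H"
    directives = [
        ("YYYY", "%Y"), ("YY", "%y"),
        ("MMMM", "%B"), ("MMM", "%b"), ("MM", "%m"),
        ("DDDD", "%A"), ("DDD", "%a"), ("DD", "%d"),
        ("hh", hour), ("mm", "%M"), ("ss", "%S"), ("pm", "%p"),
    ]
    out, i = "", 0
    while i < len(fmt):
        if fmt[i] == "\\" and i + 1 < len(fmt):  # \Y etc. are literals
            out += fmt[i + 1]
            i += 2
            continue
        for d, code in directives:  # longest directives are listed first
            if fmt.startswith(d, i):
                out += code
                i += len(d)
                break
        else:
            out += fmt[i]
            i += 1
    return out

print(to_strptime("DD MMM YYYY"))  # %d %b %Y
print(datetime.strptime("3 November 2006", to_strptime("DD MMMM YYYY")))
```

    Checking the longer directives first (YYYY before YY, MMMM before MMM) keeps the matching unambiguous.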



    Next, right-click on the new sub-domain in the project tree and select Define Complement. When the result is added to the scene together with the sub-domain, you can see that the original volume has been divided in two:

    The scene window can be detached from the main window and promoted to a stand-alone window. This is especially convenient if you have two or more display screens and wish to have the scene window maximised on one screen and the main window as a controller on the other.

    To detach the viewer, select View > Detach Viewer from the main menu. Go to the scene window and press F11 for full-screen display. To return the scene window to the main window, simply press the Esc key.



  • Domains are simply regions of space. There are no restrictions on the size or shape of a domain. Domains may be infinite in extent (e.g. everywhere below the topography) or of finite extent (e.g. high grade region). Domains may contain multiple regions that are disconnected from each other or they could be a single connected region.

    Domain boundaries can be defined using polyline surfaces, grade values, minimum distance values, boundary surfaces, bounding boxes and other domains. Only one bounding box per domain is permitted.

    See the Domaining Tutorial for instructions on how to add boundaries and set thresholds.

    Intersection and Union

    When the Intersection option at the top of the dialog is selected (the default), all the conditions are required to be true at a point for the point to be inside the domain (logical AND). Consider the dialog encountered in the Domaining Tutorial, reproduced below:

    This domain is all the points where "Topo Subset Rbf is less than or equal to zero" AND "Distance to Marvin (Isotropic) is less than or equal to 150". (Since Topo Subset Rbf is zero at the topography surface, the first phrase means "below the ground").

    When the Union option is selected, the domain is defined to be all points that satisfy any one of the conditions. Consider the same example with union selected:



    This domain is all the points where "Topo Subset Rbf is less than or equal to zero" OR "Distance to Marvin (Isotropic) is less than or equal to 150". The boundary of the domain looks like this (note that the triangles on the bounding box are turned off by default; therefore, the points below ground in the bounding box are not shown):

    This domain is points above the ground that are within 150m of the Marvin data and also points further than 150m from Marvin that are underground.
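    The intersection/union logic is plain boolean algebra over the individual conditions. In the sketch below, the two conditions are illustrative stand-ins for the dialog's conditions (a flat topography and a toy distance test), not Leapfrog functions:

```python
def in_domain(point, conditions, mode="intersection"):
    """Intersection requires every condition to hold (logical AND);
    union requires at least one (logical OR)."""
    results = [cond(point) for cond in conditions]
    return all(results) if mode == "intersection" else any(results)

# Stand-ins for "Topo Subset Rbf <= 0" and "Distance to Marvin <= 150":
def below_ground(p):
    return p[2] <= 0.0         # assume a flat topography at z = 0

def near_marvin(p):
    return abs(p[0]) <= 150.0  # toy one-dimensional distance condition

p = (100.0, 0.0, -5.0)  # underground and within 150 m
in_domain(p, [below_ground, near_marvin], "intersection")          # True
in_domain((100.0, 0.0, 5.0), [below_ground, near_marvin], "union") # True: one condition holds
```

    The same point can fail the intersection but pass the union, which is exactly the difference between the two scenes above.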

    Parent Domains

    As mentioned above, domains can reference other domains. Suppose we have a domain Ground that is defined as "Topo Subset Rbf is less than or equal to zero", and a second domain that uses the inside of Ground as one of its boundaries.

    This domain would normally appear in the Domains folder. To make it appear under the Ground domain in the project tree, select Ground from the Parent domain drop-down list. The parent domain is indicated by a pink background in the list.

    This only works when adding the Inside of a domain. This ensures that child domains in the project tree are subsets of the parent domain.

    Setting the parent is not required, but allows flexibility in the layout of the domain objects in the project. A domain's parent can be changed without recalculation, provided the parent remains in (or was already in) the list of boundaries.

    Drawing Commands

    These are available in drawing mode, that is, when the drawing toolbar is visible and one of the drawing buttons is selected.

    Editing Commands

    These are available in editing mode, that is, when the drawing toolbar is visible and the edit button is selected.


    Drawing mode:

    Mouse/Keyboard                    Action Taken
    Left click                        Draws a point or a node with straight edges
    Left drag                         Draws a point with a normal or a node with a smooth tangent
    Right click                       Terminate current polyline
    Shift+Left click                  Rotate camera
    Double left click                 Terminate current polyline
    Left click on contour endpoint    Close polyline
    Ctrl-Z                            Undo last drawing or edit command. Note: this may change the mode from drawing to editing.

    Editing mode:

    Mouse/Keyboard                    Action Taken
    Left click                        Selects segment, node or point under cursor. When on a selected segment, it selects the entire polyline
    Delete                            Delete selected segment, node or point
    Ctrl+Left drag                    On a node or point: moves node or point. On a segment: adds a node
    Alt+Left drag                     On a node without tangents: adds a smoothing tangent. On a node with tangents: moves node
    Double left click                 Select entire polyline
    Ctrl-Z                            Undo last drawing or edit command. Note: this may change the mode from editing to drawing.


    When editing a polyline, the nodes will normally move in the section plane that the polyline was drawn in. However, if the angle between the current view direction and the section plane is less than 35 degrees, the nodes will move in the current viewing plane instead.

    The Extract Mesh Parts command allows you to create a mesh from selected connected parts of an existing mesh or isosurface.

    To begin the process, right-click on any mesh or isosurface in the project tree:

    When Extract Mesh Parts is clicked, the following dialog will appear.

    The largest part is initially selected. The mesh parts may be sorted either by Volume or by Area by clicking the heading of the respective column.

    To select all the parts click the Select All button.

    To de-select all the parts click the Remove All button.

    Inside-Out parts have negative volume. These are the blobs you can see inside the large shell in the picture above. To remove them, click the Remove Inside-Out button.
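    The inside-out test relies on the signed volume of a closed triangle mesh, which can be computed by summing signed tetrahedron volumes over the faces (the divergence theorem). The sketch below shows the standard computation; it illustrates why reversed parts report negative volume, and is not Leapfrog's code.

```python
def signed_volume(vertices, triangles):
    """vertices: (x, y, z) tuples; triangles: index triples with consistent winding.

    Positive for outward-oriented closed meshes, negative for inside-out parts.
    """
    v = 0.0
    for a, b, c in triangles:
        (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = vertices[a], vertices[b], vertices[c]
        # signed volume of the tetrahedron (origin, a, b, c)
        v += (x1 * (y2 * z3 - y3 * z2)
              - x2 * (y1 * z3 - y3 * z1)
              + x3 * (y1 * z2 - y2 * z1)) / 6.0
    return v
```

    Flipping the winding of every face negates the result, which is exactly the inside-out case removed by the Remove Inside-Out button.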



    To remove parts smaller than a given size, first click the Select All button, then select the last item you want to keep in the list box and click the Remove Below Current button, as shown below.

    Click OK to create the mesh. It will be placed in the Meshes folder:

    Meshes created in this way are not connected to the mesh they were created from. Changes to the original mesh will not be reflected in the selected parts.

    Here is the result of selecting all non-negative volumes.

    From the imported drillhole data set, you can retrieve several types of pointset objects, including assay points, volume points, vein walls and contact points.

    Assay Points



    The basic techniques for extracting assay points are covered in the Extract Assay Points tutorial.

    Background Regions

    When you apply a filter that separates the points of interest from the rest, the filtered-out areas are referred to as background regions.

    The points in the background regions are not of particular interest, but they cannot simply be discarded in an attempt to reduce the number of points for more efficient processing. If they are discarded, the background regions will be seen as blanks, and when you interpolate the remaining points, the result can potentially be inaccurate. Instead, Leapfrog allows you to 'implant' a small number of points with a fixed (preferably low) grade in the background regions.

    The following example illustrates how this works.

    When assay points of m_assays are generated without a filter, the cu grade is shown as below.

    Right-click on m_assays and select Extract Points > Assay Points:

    Go to the Background Regions tab and enable the "Create fewer points when" option:

    You can create a value filter inside this dialog, or opt to choose an available filter if you have created one previously. If there is no available filter, the "The following criteria is" option will be greyed out.


    Here, we create a value filter inside the dialog: points with cu grades less than 0.100 form the background regions.

    Leapfrog will remove all the points in the background regions, but will place new points with grade 0.01 (as specified in the Background Value field) every 50.0 m (as per Distance between points).

    Note that points of cu grade above 0.1 will remain unaffected. Enter the Name m_assays_cu_above_0.1 and click OK.

    Set the selection to display m_assays_cu_above_0.1. As you can see, background values of cu 0.01 are displayed every 50 m.

    Comparing the new result with the original (no filter), the number of points has been reduced from 9182 to 8352.

    The isosurface cu 0.61 obtained from m_assays_cu_above_0.1 almost precisely coincides with the one from the original m_assays. This suggests that a properly set background region improves processing efficiency without compromising accuracy.
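    For one drillhole, the background thinning described above might look like the following sketch. Only the cut-off, background value and spacing come from the dialog; the rule for where implanted points are allowed to sit is an assumption.

```python
def thin_background(points, cutoff=0.1, background_value=0.01, spacing=50.0):
    """points: sorted (depth_m, grade) samples down one drillhole.

    Keeps points at or above the cut-off and implants sparse points of a
    fixed low grade through the background regions.
    """
    kept = [(d, g) for d, g in points if g >= cutoff]
    background = []
    depth = points[0][0]
    end = points[-1][0]
    while depth <= end:
        # implant only where no kept point is nearby (assumed rule)
        if all(abs(depth - kd) > spacing / 2 for kd, _ in kept):
            background.append((depth, background_value))
        depth += spacing
    return kept + background
```

    Interest points above the cut-off survive unchanged, while the long low-grade stretches are represented by a handful of fixed-grade points.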


    Isosurface of cu 0.61 (m_assays_cu_above_0.1)

    Volume Points

    Lithology data is typically composed of non-numeric values and is not suitable for interpolation. Leapfrog applies the following idea to create volume points, a numeric representation of the lithology data. Refer to the Generating Volume Points tutorial.

    Suppose the area of interest is composed of three lithological layers, A, B and C, and we wish to build a 3D model of layer B. There are 5 drillholes:

    In this drawing, there are four boundary points in each drillhole, and weight 0 is assigned to each point. Intervals denoted by (+) are those to be included; those denoted by (-) are excluded.

    "Exclude" vs. "Ignore"

    Let us consider the following diagram showing three layers, A, B and C, where we wish to model layer A.

    Obviously A must be included. For B and C, you need to decide whether to Exclude or Ignore them.

    Leapfrog requires that at least one layer be excluded.

    If B and C are both excluded, Leapfrog considers A as separated into two blocks.

    On the other hand, if B is excluded and C is ignored, the drillhole data containing C will be completely ignored as if it were non-existent. When Leapfrog performs an interpolation, the space occupied by C will be filled in by the nearest lithology type. In this case, A is likely to be seen as a single continuous block.


    Surface Offset Distance and Internal Fill Spacing

    For more realistic 3D models, Leapfrog creates many artificial points and distributes them between two boundary points.

    When lithology points are generated, users can specify two parameters, Surface Offset Distance and Background Fill Spacing.

    In the following simple drawing of a drillhole, let us suppose we wish to include the blue interval for 3D modelling.

    The top and bottom ends of the interval are adjusted by the value specified by Surface Offset Distance (offset for short).

    Points a and d are given weight 0. The remaining interval between a and d is divided into segments of size "spacing", which is specified by Background Fill Spacing. This creates the new points b and c.

    If the remaining interval is not a multiple of "spacing", Leapfrog automatically adjusts "spacing" to an appropriate value.

    The weight of these artificial points is determined by the distance from the closest boundary point (possibly a boundary point from another drillhole). The greater the distance, the higher the weight assigned to the point.

    The default values for the offset and spacing will suffice in most situations. A smaller value for the spacing means higher resolution and, therefore, slightly smoother surfaces. However, computation will take slightly longer.

    A higher offset value may have a subtle effect: it might make the anisotropic interpolation slightly more pronounced.
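    The offset-and-fill scheme can be sketched for a single included interval. This is an assumed reconstruction: real weights depend on the closest boundary point across all drillholes, whereas this single-hole sketch weights each interior point by its distance to the nearer end of its own interval.

```python
def fill_points(top, bottom, offset=1.0, spacing=5.0):
    """Sketch of volume-point generation for one included interval.

    The interval ends are pulled in by `offset`; the remainder is divided
    into roughly `spacing`-sized segments, with the spacing adjusted so the
    points fit evenly. End points get weight 0; interior points are weighted
    by distance to the nearer end (single-hole simplification).
    """
    a, d = top + offset, bottom - offset
    n = max(1, round((d - a) / spacing))  # automatic spacing adjustment
    step = (d - a) / n
    points = []
    for i in range(n + 1):
        depth = a + i * step
        weight = min(depth - a, d - depth)
        points.append((depth, weight))
    return points
```

    For a 22 m interval with a 1.0 m offset and 5.0 m spacing, this yields points at depths 1, 6, 11, 16 and 21 with weights 0, 5, 10, 5 and 0.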

    Missing Intervals In practice, it is not uncommon for a drillhole to contain some intervals without values. For correct modelling, you should specify how these 'missing intervals' will be processed. Based on your domain knowledge and analysis, they can be included, excluded or ignored.

    Convert Ignored Intervals If Yes is selected, Leapfrog will convert the ignored intervals to either included or excluded by comparing them with their adjacent intervals.


    For example, when an ignored interval lies between two included intervals, it is converted to an included one. When an ignored interval is 'sandwiched' between one interval of each type, the ignored interval will be split into two and each sub-interval will be converted to the type of its neighbouring interval.

    Otherwise, i.e. if No is selected, ignored intervals remain ignored.

    Vein Walls

    Extracting vein walls is a little more advanced and is explained as part of a separate topic, Vein Modelling.

    Contact Points

    Volume points are numerical representations of non-numeric lithology data, which makes them suitable for Leapfrog's FastRBF engine to create a 3D surface. However, volume points are not particularly strong at outlining the boundary between two contacting layers. Therefore, Leapfrog offers an alternative called contact points. Contact points define the boundary between two lithology layers.

    Right-click on the table that contains the lithology data (in this case, m_assays) and select Extract Points > Contact Points.

    We generate the contact points between layer MX and PM by setting the parameters accordingly and clicking OK:

    Now display MX-PM contacts under Boundaries and resize the point radius to get a scene similar to the one below. These points sit between MX (red) and PM (blue).


    You can interpolate MX-PM contacts in the usual way to create a surface:

    FastRBF (developed by Applied Research Associates New Zealand) allows scattered 2D and 3D data sets to be described by a single mathematical function, a Radial Basis Function (RBF).

    The resulting function and its gradient can be evaluated anywhere, for example, on a grid or on a surface. RBFs are a natural way to interpolate scattered data particularly when the data samples do not lie on a regular grid and when the sampling density varies.

    The ability to fit an RBF to large data sets has previously been considered impractical for data sets consisting of more than a few thousand points.

    FastRBF overcomes these computational limitations and allows millions of measurements to be modelled by a single RBF on a desktop PC.

    After numeric data is loaded into Leapfrog (directly imported or generated from drillhole data), users may create a value filter that collects points with a grade within a specified range. For example, you can select points with Cu grade greater than 0.7:

    Filter Creation: Cu >= 0.7

    In the Project tree, select the field name (e.g. Cu, Au) for which you wish to create a filter, and right-click. This brings up a context menu.

    Select Filter Values:



    The Filter Values dialog pops up. You can specify the lower bound of the grade. The default setting of the lower-bound filter is greater than or equal to the minimum grade found. Type in 0.7 to replace the default value 0.01. Notice that the filter name "cu >= 0.7" is created automatically. You are free to customise the name, but once modified, it is no longer automatically updated. To finish, click OK.

    This creates a filtered point set cu >= 0.7 under m_assays. This filtered point set is regarded as an independent numeric data set. You can interpolate values, distances etc. just as you can with an ordinary numeric data set such as Cu.

    Enable Upperbound Filter

    If you wish to define an upper bound for the grade, tick the 'and' check box to enable the upper-bound setting. The maximum grade found, 3.22, is the current upper bound. The name "cu in [0.7, 3.22]" is produced automatically. The delimiters "[" and "(" represent ">=" and ">" respectively.

    Modifying a Filter

    Double-click on Cu >= 0.700 in the project tree, or right-click and choose Filter Values.

    This will bring up the Filter Values dialog again and let you modify the filter settings.

    When a project is very large, finding objects in the project tree can be difficult. In such cases, you can search in the project tree using the Find box above the project tree:



    You can limit the search to a specific folder or choose "All" to search the whole project tree.

    Note that the term you're searching for does not need to be complete.

    You can also find objects in other parts of the Leapfrog application. In these cases, press Ctrl+F and type the keyword you wish to search for in the dialog that appears:

Introduction Two of the strengths of Leapfrog are the fast computation of the boundaries of three-dimensional grade shells and the ability to easily visualise the ore distribution described by these grade shells. Once a model has been obtained, it can be useful to make an approximate estimate of the total mineral within a deposit before committing to a rigorous geostatistical analysis. The following describes how to create such an estimate using Leapfrog; the approach must be used with an awareness of its limitations, but provided it is used carefully, useful estimates can be obtained.

    The basic approach is shown in Figure 1 in two dimensions. The contours illustrate the boundaries of the quantity to be estimated at different thresholds. This may correspond to grade, but the procedure is quite generic. To avoid clouding the basic procedure with scaling factors that vary depending on the type of geological or chemical units, the following discussion assumes the boundaries represent the annual rainfall in metres, and the areas of the regions are given in square metres.


    Grade Estimation

    Illustrating the estimation procedure in two-dimensions.


  • A very conservative estimate of the total water falling in regions with a rainfall above 0.5m per year would be to calculate:

    Estimate 1: 0.5*(Area of A).

    Clearly this is going to be an underestimate because there are regions within A where the rainfall is higher. A better estimate would be to take:

    Estimate 2: 0.5*(Area of A - Area of B) + 0.6*(Area of B - Area of C - Area of D) + 0.7*(Area of C + Area of D)

    A further improvement would be to recognize that the average grade in the region of A excluding the subregion B would probably be closer to (0.5+0.6)/2 = 0.55.

    Estimate 3: (0.5+0.6)/2*(Area of A - Area of B) + (0.6+0.7)/2*(Area of B - Area of C - Area of D) + 0.7*(Area of C + Area of D)

    Leapfrog uses the calculation in Estimate 3.
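The three estimates can be sketched numerically. The Python below is purely illustrative: the region areas are made-up values for the hypothetical rainfall regions A to D, not anything computed by Leapfrog.

```python
# Hypothetical region areas in square metres (A contains B; B contains C and D).
area_A, area_B, area_C, area_D = 1000.0, 400.0, 120.0, 80.0

# Estimate 1: weight everything inside A by the lowest contour value.
est1 = 0.5 * area_A

# Estimate 2: weight each band by its lower contour value.
est2 = (0.5 * (area_A - area_B)
        + 0.6 * (area_B - area_C - area_D)
        + 0.7 * (area_C + area_D))

# Estimate 3 (the calculation Leapfrog uses): weight each band by the
# average of its bounding contour values; the innermost regions keep the
# highest contour value.
est3 = ((0.5 + 0.6) / 2 * (area_A - area_B)
        + (0.6 + 0.7) / 2 * (area_B - area_C - area_D)
        + 0.7 * (area_C + area_D))
```

With these areas, each successive estimate is larger (500, 560, 600), reflecting how the later estimates recover the mineral under-counted by the cruder weightings.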

It is worth noting that estimating the ore above the highest contour is an example of extrapolation rather than interpolation and needs special care, since it is not easy to estimate an average grade with which to weight this volume. This presents special problems when estimating the metal contained in nuggets, discussed below.

Estimation in Leapfrog Grade shell volumes are listed in the Grade Shells tab of the grade interpolant properties dialog. See the Isosurfacing Tutorial for more details.

    The major factors that need to be considered when estimating grade are:

If the contours are incorrect, the estimate will simply be wrong. It is critically important that the user is confident the contours faithfully represent the data. Interpolation in Leapfrog can be interpreted as a form of Kriging. Like Kriging, it can produce ballooning of the isosurfaces in regions of sparse data, which results in a significant over-estimate of the mineral.

Fortunately, ballooning is visually obvious, as is apparent above, and Leapfrog provides a number of tools to remove this effect. The two most common approaches are to limit the regions to within a finite distance of the data, or to define a domain boundary. It is the user's responsibility to define what is geologically reasonable; the quality of the regions defined by the user within Leapfrog directly determines the quality of the estimate. Leapfrog calculates rapidly, so it is not difficult to try a range of assumptions and assess their effects.

In the rainfall example, the user needs to determine how many contours are sufficient to represent the rainfall distribution. This can be done by the practical application of what mathematicians refer to as taking limits: in the limit of very finely spaced contours, the estimate can be expected to converge to the true value. In practice, the number of contours is doubled and the estimate recomputed. Thus, the region between 0.5 and 0.6 would be divided into two regions, one between 0.5 and 0.55 and one between 0.55 and 0.6. The difference between the sum of these two estimates and the original estimate for the region between 0.5 and 0.6 gives an idea of the error in the original estimate. If the difference is too large, or the user has doubts about its validity, the operation needs to be repeated.

A similar procedure can be used to verify an appropriate resolution for an isosurface in Leapfrog. Halving the resolution should reduce the error due to approximating the true surface with triangles to roughly a quarter of its previous value. Again, the user needs to check the isosurfaces visually, as this rule of thumb may not apply at very coarse resolutions.
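The contour-doubling check described above can be sketched as follows. The band areas are hypothetical values which, in practice, would be re-measured from the refined contours; the helper function is illustrative, not part of Leapfrog.

```python
def banded_estimate(bands):
    """bands: list of (lower, upper, area) tuples for each contour band.
    Each band is weighted by the average of its bounding contour values."""
    return sum((lo + hi) / 2 * area for lo, hi, area in bands)

# Original estimate for the 0.5-0.6 band (hypothetical area).
coarse = banded_estimate([(0.5, 0.6, 600.0)])

# Refined estimate: the band is split at 0.55 and the two sub-band areas
# re-measured (again hypothetical; they sum to the original area).
fine = banded_estimate([(0.5, 0.55, 350.0), (0.55, 0.6, 250.0)])

# The difference indicates the error in the coarse estimate.
error_indication = abs(fine - coarse)
```

Here the coarse estimate is 330 and the refined estimate 327.5, so the indicated error is 2.5, small enough that further subdivision is probably unnecessary.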

1. Do the contours adequately describe the distribution of the mineral?

2. Are the contours adequately approximated?

An example showing ballooning in Leapfrog. The data set used is Au 0.48 from the m_assays data set.


• The estimation procedure can be summarised as a two-stage process: first, the generation of a model or models; second, the varying of doubtful parameters in the models, either to confirm that they do not change the results significantly or to estimate the range of possible results.

    Nugget Distributions The approach described above does not work well when the distribution is poorly represented by an isosurface.

For example, in real-world mining it often happens that a significant proportion of the mineral is contained in nuggets. In this case, Leapfrog will generate small isosurfaces around the nuggets encountered in the drillholes, but it cannot generate isosurfaces around nuggets between drillholes.

Missing these isosurfaces can cause a significant underestimate of the total mineral deposit. Deposits prone to this problem can be identified by looking at the histogram of the grade, which will decay slowly at high values, as in Figure 3.

    An isosurface taken at a high grade threshold (Figure 4) is also typical of a grade distribution with high nugget.

    There is no simple way of solving this problem, which again essentially reduces to one of defining a volume in which the nuggets occur and estimating an effective mineral density from the measured probability distribution of the grade within this volume. Leapfrog provides the tools to help the user to define the volume, however, the estimation of the effective nugget density is still a topic of research.

If you view a table's Properties, the Histogram tab provides the statistical characteristics of the data.

    If the table contains several columns, you may select the column for which a histogram will be generated. For example, the histogram for Au is generated as shown in the following screenshot.

    The histogram of a deposit.

    A grade shell computed in Leapfrog for a deposit with significant nugget.


    Histogram


• You can adjust the Bin count (the number of intervals in the histogram). The default is 50, as shown above. The following figure shows the result with a bin count of 25. Type 25 and press Enter to update the histogram.

A semi-log histogram of the data values can be produced by ticking the Semilog X check box. This is particularly helpful when a large proportion of the population is concentrated in low-valued bins.
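For readers who want to experiment outside Leapfrog, the binning behind the Bin count setting can be sketched in a few lines of Python. This is not Leapfrog's code; the lognormal sample data is made up, chosen because its long tail is exactly the case where the Semilog X option helps.

```python
import random

random.seed(1)
# Synthetic long-tailed "grade" data, as is typical of assay values.
grades = [random.lognormvariate(0, 1) for _ in range(1000)]

def histogram(values, bin_count):
    """Count values into bin_count equal-width intervals over the data range."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bin_count
    counts = [0] * bin_count
    for v in values:
        # Clamp the maximum value into the last bin.
        i = min(int((v - lo) / width), bin_count - 1)
        counts[i] += 1
    return counts

# Halving the bin count merges adjacent intervals into wider bins;
# every sample still falls in exactly one bin.
counts_50 = histogram(grades, 50)
counts_25 = histogram(grades, 25)
```

Printing either count list shows most samples piled into the first few bins, which is why a logarithmic X axis spreads the picture out.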

    Columns of an interval table that have not been imported during the drillhole data import can be added at any time.

To demonstrate this, we import the column Cxcu, which was previously omitted when importing the assay table m_assays. Right-click on m_assays and select Add Interval Column:


    Import Column


  • This brings up the Open Interval Measurement File dialog. Choose M-Assays.DAT. The following dialog will appear.

    Let us have a close look at this dialog first.

Column Summary The panel on the right gives a list of column summaries. Note that four columns, Hole, from, to and Sample, are highlighted and their action is "Match". This means that all four columns will be used as the key to identify the matching row in Leapfrog.

    In this case, the Sample column itself provides a unique row key, so importing all 4 columns is not necessary. While it does no harm in terms of correct operation, the column import will be inefficient.

So we use only the Sample column here and do not import the other three columns. Change the Import As field of Hole, from and to to Not Imported, as shown below.

    Note that this action would have been unnecessary if the Sample column had been selected as the Unique Row ID during the original drillhole import. Only the Sample column would have been highlighted in that case.


• Select Additional Columns Suppose we attempt to import three columns, cu, Cxcu and Au, by selecting "Assay" in their "Import As" fields as shown below.

    The Action field of cu and Au would appear as "Match", meaning that they have been previously imported, and will be used for "matching". In contrast, the Action field of Cxcu should appear as "Import", indicating that this column will be a new addition to the database.

Revert cu and Au to "Not Imported". We already have the Sample column for matching, so extra matching columns are unnecessary.

    In some cases, you may wish to import the same column again. As long as you assign a different name, Leapfrog allows this.

    Click on the Finish button to import the selected columns. The new column, Cxcu, will appear in the Processing Tasks list and will run automatically.

Exercise. The new column, however, appears to contain some errors. Fix them following the methods described in the Fixing Errors part of the Drillhole Data Import tutorial.

Meshes in various formats can be imported into Leapfrog. The list of recognised formats is given in the Export Tutorial.

    Follow the steps below to import a mesh.

You are expected to have completed the Exporting Meshes tutorial. It is assumed that you have the cu 1.0 (Linear Isotropic)_tr.asc file.


    Import Meshes


  • Mesh Importing Basics Right-click on the Meshes folder in the Project pane, and select Import. Browse to the mesh file and select it.

While the mesh is being imported, you will see progress similar to that shown below.

    When the mesh is successfully imported, you will see the imported mesh located in the Meshes folder, ready to be displayed.

Importing a Mesh in Elevation Format There is an extra step when importing a mesh in elevation format (*.adf, *.asc). After selecting the mesh file to import, the Filter Elevation Data dialog appears so that you can specify a bounding box. This option is particularly useful when importing a huge topography mesh that contains a large area that is not needed: a properly set bounding box clips the unnecessary portion from the mesh during import.

When no bounding box is available, the option is disabled, as in the first screenshot above. We import the same mesh twice, with and without a bounding box. The bounding box eastern_half specified above includes the eastern half of the original mesh and filters out the rest. The bounding box can be extended by setting the Everything within field.

Both meshes and the bounding box eastern_half are displayed below. As expected, elevation_example_with_bbox covers only the eastern half of the original mesh, the part that lies inside the bounding box.

    No Bounding Box

    With Bounding Box


• The clipping takes place only on the East (X)-North (Y) plane. Points with high elevation (Z) that lie above the bounding box will not be removed.
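A minimal sketch of this X-Y-only clipping, assuming points are simple (x, y, z) tuples; the function name, bounding-box ranges and sample points are hypothetical, not Leapfrog's own code.

```python
def clip_xy(points, x_range, y_range):
    """Keep points whose East (X) and North (Y) fall in the given ranges.
    Elevation (Z) is deliberately ignored, matching the behaviour above."""
    (xmin, xmax), (ymin, ymax) = x_range, y_range
    return [(x, y, z) for x, y, z in points
            if xmin <= x <= xmax and ymin <= y <= ymax]

points = [(10.0, 5.0, 100.0), (50.0, 5.0, 9999.0), (90.0, 5.0, 100.0)]

# Keep only the "eastern half" (x >= 40). Note the very high point at
# z = 9999 survives, because Z plays no part in the test.
kept = clip_xy(points, (40.0, 100.0), (0.0, 10.0))
```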

    Leapfrog can import polylines from many formats including:

    To import a polyline select Import > Polylines from the Project menu or right-click on the Polylines folder in the project tree and select Import from the menu.

    Navigate to the desired directory, select the polyline file and click the Open button.

If the polyline file is in Gocad or DXF format, importing starts immediately; for all other formats, the Polyline Import dialog is displayed as shown below:


    Importing Polylines

Datamine (*.asc)
Surpac String (*.str)
Gemcom (*.asc)
Micromine (*.asc, *.str)
MineSight (*.srg)
FracSIS (*.txt)
Gocad (*.pl, *.ts)
AutoDesk DXF (*.dxf)
Leapfrog (*.csv, *.txt)


• If the polyline file is in one of the standard formats listed above, the default settings can be used and the Import button may be pressed immediately.

    Specifying Polyline Import Parameters Two pieces of information are required to import a polyline:

    The vertex coordinate columns are selected by clicking on the heading at the top of a column and selecting one of East (X), North (Y) or Elev (Z) from the menu that appears.

    Polyline sections may be separated in three ways:

    The Gemcom and Surpac formats use rows that do not contain a vertex. A Gemcom format polyline is shown below:

1. The columns that contain the polyline vertex coordinates

2. How the polyline sections are separated in the file

    1. By rows that do not contain a vertex. These rows either start with a special value or are blank. (Use the option Row: Row starts with)

    2. By numbering each section and specifying the section identifier with each vertex. (Use option Column: Column values are polyline identifiers)

    3. By flagging the first vertex of each section with a special value. (Use option Column: Start new polyline on value)


  • Gemcom uses empty lines so the text-box Row starts with is empty. Lines that do not contain a vertex are highlighted in green with a red line through them.

Here is an example of a Surpac polyline; the separator lines start with 0:

    The Datamine format uses polyline section identifiers to separate polyline sections. An example is shown below:


  • Note that the first column has been assigned to Polyline Separator, to tell Leapfrog which column the section identifiers are in. The first row of each section is shown in green; rows 17 and 25 in the example above.

    The Micromine polyline format includes a vertex index for each section and so new sections are flagged with an index of 1 as shown below:

    Note that the fourth column has been assigned to Polyline Separator, to tell Leapfrog which column the vertex indices are in. The Start new polyline on value text-box has been set to 1 to start sections at vertices with index 1. The first row of each section is shown in green; rows 1 and 7 in the example above.

Any ASCII polyline format that separates polyline sections in one of these ways can be imported into Leapfrog.
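As a sketch of the third separation method (Column: Start new polyline on value), the following hypothetical Python splits Micromine-style rows into sections whenever the vertex index returns to 1. The function and data are illustrative, not part of Leapfrog.

```python
def split_sections(rows, index_column, start_value):
    """Split rows into sections, starting a new section whenever the
    value in index_column equals start_value."""
    sections, current = [], []
    for row in rows:
        if row[index_column] == start_value and current:
            sections.append(current)   # flush the completed section
            current = []
        current.append(row)
    if current:
        sections.append(current)       # flush the final section
    return sections

# Micromine-style rows: (x, y, z, vertex_index); index 1 starts a section.
rows = [(0, 0, 0, 1), (1, 0, 0, 2), (2, 0, 0, 3),
        (5, 5, 0, 1), (6, 5, 0, 2)]
sections = split_sections(rows, index_column=3, start_value=1)
```

The other two separation methods (blank/flagged separator rows, and per-vertex section identifiers) differ only in the condition used to decide where one section ends and the next begins.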

    The following keyboard shortcuts apply when the specified part of the application has the focus. To move focus from one area to another left-click in the area where you want the focus to be.

    Application Window


    Keyboard Commands


  • Project Tree

    Scene

Key Combination: Command

F8: Toggle project tree visibility
F9: Toggle the shape list visibility
F10: Display the menu from the menubar; then use the arrow keys to change menus and navigate menu items, and press Enter to make a selection
F11: Unsplit scene
Alt-F11: Split scene across top
Ctrl-F11: Split scene at right
Ctrl-S: Save the project
Ctrl-R: Run
Shift+Ctrl-R: Run All
Ctrl-Q: Quit Leapfrog

Key Combination: Command

Page Up, Page Down: Tree navigation
Ctrl-O or Enter: Open current object (some objects)
F2: Rename current object (some objects)
Alt-Enter: View properties for current object (some objects)
Delete: Delete current object (some objects)
Insert: Copy current object (interpolants only)
Ctrl-F: Search for text in the tree; the tree is expanded as required to display matching rows
Keypad +: Expand branch one level
Shift-Keypad +: Expand entire branch
Keypad -: Collapse branch to current position

Key Combination: Command

Arrow keys: Rotate the camera. Hold down the Shift key for smaller steps.
Alt + arrow keys: Pan the camera. Hold down the Shift key for smaller steps.
Page-Up, Page-Down: Zoom in and out respectively. Hold down the Shift key for smaller steps.
Home: Reset the camera view
Ctrl-Home: Reset the camera view and the slicing and moving planes


  • Shape List

    Assay and lithology data are often recorded in separate files. In such cases, there will be separate tables for assay and lithology in Leapfrog. Indeed, even when both assay and lithology data are in the same file, importing the file twice (importing only assay columns the first time and lithologies second) can be beneficial. However, having separate tables makes it difficult to explore relationships between the measurements in each table.

To get around this, Leapfrog merges all imported interval tables into a table called merged_intervals. This allows you to create queries that reference both assay and lithology values.

    How tables are merged


N, S, E, W: Set the view direction to North, South, East or West respectively
U, D: Set the view direction to Up or Down (plan view) respectively
O, P: Set the view type to Orthographic projection or Perspective respectively
Comma (,), Period (.): Move the slicing plane backwards and forwards by the current step distance. Caution: this works even when the slicing plane is turned off; you just won't see the result until the slicing plane is turned on
L: Set the view to look down on the slicing plane
Shift-L: Look at the slicing plane from the rear
Ctrl-B: Bookmark the current view position
B: Restore the previously bookmarked view

Key Combination: Command

Arrow keys: List row navigation
Delete: Remove highlighted objects from the scene


    Merged Intervals Table


• The drawing above illustrates assay values for a hole composed of 7 intervals, and lithology values for the same hole composed of 4 intervals (shown as 4 different colours). The merged_intervals table uses the from and to depths from all tables and, for the example above, consists of 10 intervals. It has both assay and lithology values associated with each interval.

    Example The drillhole data given in the directory tutorials\Demo\ has a separate set of assay and lithology files. The lithology table has columns holeid, from, to and litho. The assay table has columns holeid, sampleid, from, to and Grade.

    After import, you should be able to find the automatically generated merged_intervals table as shown below.

    Double-click on the merged_intervals and see the table contents.

    The holeid, from and to columns are calculated from both the assay and lithology tables. The collar_id column is Leapfrog's internal identifier for the given holeid. The sampleid and Grade columns are from the assay table and the litho column comes from the lithology table.


  • If all the interval tables originated from the same file the merged_intervals table will be identical to the original file, except for a possible reordering of columns.
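The merging rule described above — take every from/to breakpoint from all tables, then carry each table's value onto the finer intervals — can be sketched as follows. This is an illustrative approximation, not Leapfrog's actual implementation; the helper names and sample data are hypothetical.

```python
def merge_intervals(assay, litho):
    """Merge two interval tables, each a list of (frm, to, value),
    onto the union of their from/to breakpoints."""
    breaks = sorted({d for frm, to, _ in assay + litho for d in (frm, to)})

    def value_at(table, frm, to):
        # Find the value covering the midpoint of the merged interval.
        mid = (frm + to) / 2
        for f, t, v in table:
            if f <= mid < t:
                return v
        return None

    return [(f, t, value_at(assay, f, t), value_at(litho, f, t))
            for f, t in zip(breaks, breaks[1:])]

assay = [(0, 2, 1.2), (2, 4, 0.8)]
litho = [(0, 3, "QZ"), (3, 4, "GN")]
merged = merge_intervals(assay, litho)
```

Here the breakpoints 0, 2, 3 and 4 produce three merged intervals, each carrying both an assay grade and a lithology code.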

    The Merge Objects command allows you to combine multiple Locations, Polyline or Polyline Values objects into a single Locations object.

This feature may be used to augment measured data with your own interpretation. This is useful for modelling boundaries of any sort.

    The Merge Objects command may be found by right-clicking on a Locations, Values, Polyline or Polyline-Values object in the Project tree and selecting Merge Objects from the menu as shown below.

    The Merge Objects selection dialog is displayed. Select at least two objects from the tree using the check boxes, as shown below, and click OK.

    The Merge Objects dialog is then shown.


    Merging Objects


  • The members of the merged points are displayed in the Object list.

To add more objects, click the Add button; this redisplays the Merge Objects selection dialog described above.

    To remove an object, click on its name in the Object list and click the Remove button.

    In the event of two objects having identical points with differing values, the value from the object appearing last in the list takes precedence.
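This precedence rule can be illustrated with a small sketch, treating each object as a mapping from point coordinates to values; the function name and data are hypothetical.

```python
def merge_point_values(objects):
    """Merge point sets; later objects overwrite earlier ones at
    identical points, matching the precedence rule described above."""
    merged = {}
    for obj in objects:      # iteration order gives later objects precedence
        merged.update(obj)
    return merged

first = {(0.0, 0.0, 0.0): 1.5, (1.0, 0.0, 0.0): 2.0}
second = {(0.0, 0.0, 0.0): 9.9}   # same point, differing value

# The value from the object appearing last in the list wins at (0, 0, 0).
merged = merge_point_values([first, second])
```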

To move an object up or down in the list, click the arrow buttons. This also changes the order of the objects in the default name.

    Click OK to create the merged points object. The new merged points object will appear in the same folder as the first Points object or in the Boundaries folder if all members are polylines. It will run automatically.

    Example 1: Below is the Marvin tutorial data's topography. There is an area in the foreground with no sampled data. Suppose we know there is a dip in the topography there but don't have survey data available. We can draw the dip with a polyline and then merge it with the existing points.

    Here is the polyline representing the dip

    Here is the merged points object along with its interpolating surface.


• Example 2: In some infrequent cases, the Interpolate Surface command will return the error "Could not determine surface from points", or the resulting surface will simply be wrong. This happens when Leapfrog cannot determine which direction the surface should take through any of the points, or when Leapfrog gets the surface direction wrong at some of the points. Let us suppose this is the case with the Marvin topography.

    Start a new polyline and draw some points with lines pointing outward from the surface as shown below. Ensure you are using Draw on Object mode so that the polyline points lie exactly on the existing data. The lines are drawn in the viewing plane so check also that the view is perpendicular to the surface you are defining.

    Here is another view of the same data.


• When you have sparsely covered most of the surface with the polyline, save it. Right-click on the points and choose Merge Objects.

    Select the Points Off Surface Values shown under the polyline you have just drawn, as shown below.

    Click OK and then click O