Isatis 2013 Technical References


  • ISATIS 2013

    Technical References

  • Published, sold and distributed by GEOVARIANCES
    49 bis Av. Franklin Roosevelt, BP 91, 77212 Avon Cedex, France
    http://www.geovariances.com

    Isatis release 2013, February 2013

    Contributing authors: Catherine Bleins, Matthieu Bourges, Jacques Deraisme, François Geffroy, Nicolas Jeannée, Ophélie Lemarchand, Sébastien Perseval, Frédéric Rambert, Didier Renard, Yves Touffait, Laurent Wagner

    All Rights Reserved 1993-2013 GEOVARIANCES

    No part of the material protected by this copyright notice may be reproduced or utilized in any form or by any means, including photocopying, recording, or by any information storage and retrieval system, without written permission from the copyright owner.

  • Table of Contents

    Introduction

    1 Hints on Learning Isatis
    2 Getting Help

    Generalities

    3 Structure Identification in the Intrinsic Case
      3.1 The Experimental Variability Functions
      3.2 Variogram Model
      3.3 The Automatic Sill Fitting Procedure
    4 Non-stationary Modeling
      4.1 Unique Neighborhood
      4.2 Moving Neighborhood
      4.3 Case of External Drift(s)
    5 Quick Interpolations
      5.1 Inverse Distances
      5.2 Least Square Polynomial Fit
      5.3 Moving Projected Slope
      5.4 Discrete Splines
      5.5 Bilinear Grid Interpolation
    6 Grid Transformations
      6.1 List of the Grid Transformations
      6.2 Filters
    7 Linear Estimation
      7.1 Ordinary Kriging (Intrinsic Case)
      7.2 Simple Kriging (Stationary Case with Known Mean)
      7.3 Kriging of One Variable in the IRF-k Case
      7.4 Drift Estimation
      7.5 Estimation of a Drift Coefficient
      7.6 Kriging with External Drift
      7.7 Unique Neighborhood Case
      7.8 Filtering Model Components
      7.9 Factorial Kriging
      7.10 Block Kriging
      7.11 Polygon Kriging
      7.12 Gradient Estimation
      7.13 Kriging Several Variables Linked through Partial Derivatives
      7.14 Kriging with Inequalities
      7.15 Kriging with Measurement Error
      7.16 Lognormal Kriging
      7.17 Cokriging
      7.18 Extended Collocated Cokriging
    8 Gaussian Transformation: the Anamorphosis
      8.1 Modeling and Variable Transformation
      8.2 Histogram Modeling and Block Support Correction
      8.3 Variogram on Raw and Gaussian Variables
    9 Non Linear Estimation
      9.1 Indicator Kriging
      9.2 Probability from Conditional Expectation
      9.3 Disjunctive Kriging
      9.4 Uniform Conditioning
      9.5 Service Variables
      9.6 Confidence Intervals

    Simulations

    10 Turning Bands Simulations
      10.1 Principle
      10.2 Non Conditional Simulation
      10.3 Conditioning
    11 Truncated Gaussian Simulations
    12 Plurigaussian Simulations
      12.1 Principle
      12.2 Variography
      12.3 Simulation
      12.4 Implementation
    13 Impala's Multiple-Point Statistics
    14 Fractal Simulations
      14.1 Principle
      14.2 Midpoint Displacement Method
      14.3 Interpolation Method
      14.4 Spectral Synthesis
    15 Annealing Simulations
    16 Spill Point Calculation
      16.1 Introduction
      16.2 Basic Principle
      16.3 Maximum Reservoir Thickness Constraint
      16.4 The "Forbidden types" of control points
      16.5 Limits of the algorithm
      16.6 Converting Unknown volumes into Inside ones
    17 Multivariate Recoverable Resources Models
      17.1 Theoretical reminders on the Discrete Gaussian model applied to Uniform Conditioning
      17.2 Theoretical reminders on the Discrete Gaussian model applied to block simulations
    18 Localized Uniform Conditioning
      18.1 Algorithm
    19 Skin algorithm

    Isatoil

    20 Isatoil
      20.1 Data description
      20.2 Workflow
      20.3 Modelling the geological structure
      20.4 Modelling the petrophysical parameters

  • Introduction

  • 1 Hints on Learning Isatis

    The Beginner's Guide, the On-Line documentation tools and the Case Studies Manual are the main ways to get started with Isatis.

    Using the Beginner's Guide to learn common tasks

    This Beginner's Guide is a great place to start if you are new to Isatis. Find in this guide a quick overview of the package and several tutorials to learn how to work with the main Isatis objects. The Getting Started With Geostatistics part teaches you the basics of geostatistics and guides you from exploratory data analysis and variography to kriging and simulations.

    Browsing the On-Line documentation tools

    Isatis offers a comprehensive On-Line Help describing the entire set of parameters that appears in the user interface.

    If you need help on how to run a particular Isatis application, just press F1 within the window to start the On-Line Help system. You get a short recall of the technique, the algorithm implemented in Isatis, and a detailed description of all the parameters.

    Technical references are available within the On-Line Help System. They present details about the methodology and the underlying theory and equations. These technical references are available in PDF format and may be displayed on the screen or printed.

    A compiled version of all the Isatis technical references is also available for your convenience: just click on Technical References on the top bar of any On-Line Help window.

    Going through geostatistical workflows with the Case Studies

    A set of case studies is developed in the Case Studies manual. The Case Studies are mainly designed:

    - for new users, to get familiar with Isatis and give some leading lines to carry a study through,

    - for all users, to improve their geostatistical knowledge by presenting detailed geostatistical workflows.

    Basically, each case study describes how to carry out some specific calculations in Isatis as precisely as possible. You may either:

    - replay by yourself the case study proposed in the manual, as all the data sets are installed on your disk together with the software,

    - or just be guided by the descriptions and apply the workflow to your own datasets.

  • 2 Getting Help

    You have three options for getting help while using Isatis: the On-Line Help system, the Frequently Asked Questions and the Technical Support team ([email protected]).

    Using the On-Line Help System

    Isatis software offers a comprehensive On-Line Help System to complement this Beginner's Guide. The On-Line Help describes the whole set of parameters that appears in the user interface. To use help, choose Help > Help in the main Isatis window or press F1 from any Isatis window.

    Table of Contents and Index - These facilities will help you navigate through the On-Line Help System. They are available on the top bar of the main On-Line Help window.

    FAQ - A local copy of the Isatis Frequently Asked Questions is available on your system. Just click on FAQ on the top bar of the main On-Line Help window.

    Support site - You may also directly access the Geovariances Support Web site to check for updates of the software or recent Frequently Asked Questions.

    Register - Directly access the Ask for Registration section of the Geovariances Support Web site in order to get a personal login and password and access the restricted download area.

    Technical References - Short technical references are available from the On-Line Help System: they present more detailed information about the geostatistical methodologies that have been implemented in the software and their underlying theory. In particular, you will find the mathematical formulae used in the main algorithms.

    Accessing Geovariances Technical Support

    No matter where in the world you are, our professional support team manages to help you out, keeping in mind the time and quality imperatives of your projects. Whatever your problem is (software installation, Isatis use, advanced geostatistical advice...), feel free to contact us:

    [email protected]

    If your message concerns an urgent operational problem, feel free to contact the Help Desk by phone: +33 (0)1 60 74 91 00.

    Using Web-Based Resources

    You can access the Support section of the Geovariances Web site from the main On-Line Help menu of Isatis: just click on Support Site.

    Geovariances Web site - Visit www.geovariances.com to find articles and publications, check the latest Frequently Asked Questions and be informed about coming training sessions.

    Isatis-release mailing list - Send an e-mail to [email protected] to be registered on the isatis-release mailing list. You will be informed about Isatis updates, new releases and features.

    Register - Get personal login and password and access the restricted download area.

    Check for Updates - Check the Geovariances Web site for Isatis updates.

  • Generalities

  • 3 Structure Identification in the Intrinsic Case

    This page constitutes an add-on to the Users Guide for:

    - Statistics / Exploratory Data Analysis
    - Statistics / Variogram Fitting

    This technical reference reviews the main tools available in Isatis to describe the spatial variability (regularity, continuity, ...) of the variable(s) of interest, commonly referred to as the "Structure", in the Intrinsic Case.

  • 3.1 The Experimental Variability Functions

    Though the variogram is the classical tool to measure the variability of a variable as a function of the distance, several other two-point statistics exist. Let us review them through their equations and their graphs on a given data set. In the following, n designates the number of pairs of data separated by the considered distance, and Z1 and Z2 stand for the values of the variable at the two data points constituting a pair, Z2 being considered to be at a distance of +h from Z1. In the formulae below, Σ denotes the sum over the n pairs. Moreover:

    - mZ is the mean over the whole data set,
    - σZ² is the variance over the whole data set,
    - m+Z is the mean calculated over the first points of the pairs (head),
    - m-Z is the mean calculated over the second points of the pairs (tail),
    - σ+Z is the standard deviation calculated over the head points,
    - σ-Z is the standard deviation calculated over the tail points.

    3.1.1 Univariate case

    The Transitive Covariogram

    G(h) = Σ Z1 Z2

    (fig. 3.1-1)

    The Variogram

    γ(h) = 1/(2n) Σ (Z1 - Z2)²

    (fig. 3.1-2)

    The Covariance (centered)

    C(h) = 1/n Σ (Z1 - mZ) (Z2 - mZ)

    (fig. 3.1-3)

    The Non-Centered Covariance

    1/n Σ Z1 Z2

    (fig. 3.1-4)

    The Non-Ergodic Covariance

    1/n Σ (Z1 - m+Z) (Z2 - m-Z)

    (fig. 3.1-5)

    The Correlogram

    1/n Σ (Z1 - mZ) (Z2 - mZ) / σZ²

    (fig. 3.1-6)

    The Non-Ergodic Correlogram

    1/n Σ (Z1 - m+Z) (Z2 - m-Z) / (σ+Z σ-Z)

    (fig. 3.1-7)

    The Madogram (First Order Variogram)

    1/(2n) Σ |Z1 - Z2|

    (fig. 3.1-8)

    The Rodogram (1/2 Order Variogram)

    1/(2n) Σ |Z1 - Z2|^(1/2)

    (fig. 3.1-9)

    The Relative Variogram

    1/(2n) Σ (Z1 - Z2)² / mZ²

    (fig. 3.1-10)

    The Non-Ergodic Relative Variogram

    1/(2n) Σ (Z1 - Z2)² / ((m+Z + m-Z) / 2)²

    (fig. 3.1-11)

    The Pairwise Relative Variogram

    1/(2n) Σ (Z1 - Z2)² / ((Z1 + Z2) / 2)²

    (fig. 3.1-12)

    Although the interest of the madogram and the rodogram, as compared to the variogram, is quite obvious (at least graphically), since they tend to smooth out the function, the user must always keep in mind that the only tool that corresponds to the statement of kriging (namely minimizing a variance) is the variogram. This is particularly obvious when looking at the variability values (measured along the vertical axis) on the different figures, remembering that the experimental variance of the data is represented as a dashed line on the variogram picture.
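    As an illustration, the univariate statistics above can be sketched in a few lines of Python. This is not Isatis code: `pair_stats` is a hypothetical helper, and the pairs are assumed to be already grouped into one distance class.

    ```python
    def pair_stats(pairs):
        """Experimental two-point statistics for one distance class.

        `pairs` is a list of (z_head, z_tail) couples separated by the lag.
        Returns the variogram, madogram, rodogram and centered covariance,
        following the definitions above (illustrative sketch only; the mean
        is taken over the pair values rather than the whole data set).
        """
        n = len(pairs)
        m = sum(z1 + z2 for z1, z2 in pairs) / (2 * n)   # mean over the pairs
        variogram = sum((z1 - z2) ** 2 for z1, z2 in pairs) / (2 * n)
        madogram = sum(abs(z1 - z2) for z1, z2 in pairs) / (2 * n)
        rodogram = sum(abs(z1 - z2) ** 0.5 for z1, z2 in pairs) / (2 * n)
        covariance = sum((z1 - m) * (z2 - m) for z1, z2 in pairs) / n
        return variogram, madogram, rodogram, covariance

    pairs = [(1.0, 2.0), (2.0, 4.0), (3.0, 3.5)]
    v, md, rd, cv = pair_stats(pairs)
    ```

    Note how the madogram and rodogram values are much smaller than the variogram value for the same pairs, which is the smoothing effect mentioned above.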

    3.1.2 Weighted Variability Functions

    It can be of interest to take weights into account during the computation of the variability functions. These weights can for instance be derived from declustering; in this case, their integration is expected to compensate a potential bias in the estimation of the experimental function from clustered data. For further information about these weighted variograms, see for instance Rivoirard J. (2000), Weighted Variograms, in Geostats 2000, W. Kleingeld and D. Krige (eds), Vol. 1, pp. 145-155.

    For instance, the weights ωi, i = 1, ..., N are integrated in the weighted experimental variogram equation in the following way:

    γω(h) = 1/2 Σ ω1 ω2 (Z1 - Z2)² / Σ ω1 ω2    (eq. 3.1-1)

    where ω1 and ω2 are the weights attached to the head and tail points of each pair.

    The other experimental functions are obtained in a similar way.
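    A minimal sketch of this weighted variogram in Python (hypothetical helper name; the weights would typically come from a declustering step):

    ```python
    def weighted_variogram(pairs):
        """Weighted experimental variogram for one distance class.

        `pairs` holds (z_head, z_tail, w_head, w_tail) tuples, where the
        weights are per-point weights such as declustering weights.
        """
        num = sum(w1 * w2 * (z1 - z2) ** 2 for z1, z2, w1, w2 in pairs)
        den = 2.0 * sum(w1 * w2 for _, _, w1, w2 in pairs)
        return num / den
    ```

    With all weights equal to 1, this reduces to the usual 1/(2n) sum of squared increments.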

    3.1.3 Multivariate case

    In the multivariate case, kriging requires a multivariate model. The variograms of each variable are usually designated as "simple" variograms, whereas the variograms between two variables are called cross-variograms.

    We will now describe, through their equations, the extension given to the statistical tools listed in the previous section for the multivariate case. We will designate the first variable by Z and the second by Y; mZ and mY refer to their respective means over the whole field, m+Z and m+Y to their means for the head points, and m-Z and m-Y to their means for the tail points.

    The Transitive Cross-Covariogram

    GZY(h) = Σ Z1 Y2

    (fig. 3.1-1)

    The Cross-Variogram

    γZY(h) = 1/(2n) Σ (Z1 - Z2) (Y1 - Y2)

    (fig. 3.1-2)

    The Cross-Covariance (centered)

    1/n Σ (Z1 - mZ) (Y2 - mY)

    (fig. 3.1-3)

    The Non-Centered Cross-Covariance

    1/n Σ Z1 Y2

    (fig. 3.1-4)

    The Non-Ergodic Cross-Covariance

    1/n Σ (Z1 - m+Z) (Y2 - m-Y)

    (fig. 3.1-5)

    The Cross-Correlogram

    1/n Σ (Z1 - mZ) (Y2 - mY) / (σZ σY)

    (fig. 3.1-6)

    The Non-Ergodic Cross-Correlogram

    1/n Σ (Z1 - m+Z) (Y2 - m-Y) / (σ+Z σ-Y)

    (fig. 3.1-7)

    The Cross-Madogram

    1/(2n) Σ |Z1 - Z2|^(1/2) |Y1 - Y2|^(1/2)

    (fig. 3.1-8)

    The Cross-Rodogram

    1/(2n) Σ |Z1 - Z2|^(1/4) |Y1 - Y2|^(1/4)

    (fig. 3.1-9)

    The Relative Cross-Variogram

    1/(2n) Σ (Z1 - Z2) (Y1 - Y2) / (mZ mY)

    (fig. 3.1-10)

    The Non-Ergodic Relative Cross-Variogram

    1/(2n) Σ (Z1 - Z2) (Y1 - Y2) / [((m+Z + m-Z) / 2) ((m+Y + m-Y) / 2)]

    (fig. 3.1-11)

    The Pairwise Relative Cross-Variogram

    1/(2n) Σ (Z1 - Z2) (Y1 - Y2) / [((Z1 + Z2) / 2) ((Y1 + Y2) / 2)]

    (fig. 3.1-12)

    This time most of the curves are no longer symmetrical. In the case of the covariance, it is even convenient to split it into its odd and even parts as represented below. If h designates the distance (vector) between the two data points constituting a pair, we then consider:

    The Even Part of the Covariance

    1/2 [CZY(h) + CZY(-h)]

    (fig. 3.1-13)

    The Odd Part of the Covariance

    1/2 [CZY(h) - CZY(-h)]

    (fig. 3.1-14)
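    Numerically, the even and odd parts are obtained from cross-covariance values computed for the oriented lags +h and -h. The sketch below uses hypothetical helper names, with Z1 and Y2 denoting head and tail values of the oriented pairs:

    ```python
    def cross_covariance(pairs, mz, my):
        """Centered cross-covariance for one oriented lag.

        `pairs` holds (z_head, y_tail) couples; mz and my are the means of
        Z and Y over the field.
        """
        n = len(pairs)
        return sum((z - mz) * (y - my) for z, y in pairs) / n

    def even_odd_parts(c_plus_h, c_minus_h):
        """Split C_ZY into its even and odd parts, as in the text."""
        even = 0.5 * (c_plus_h + c_minus_h)
        odd = 0.5 * (c_plus_h - c_minus_h)
        return even, odd

    # a delay effect between Z and Y shows up as a non-zero odd part
    even, odd = even_odd_parts(0.8, 0.2)
    ```

    Here the asymmetry between C_ZY(+h) = 0.8 and C_ZY(-h) = 0.2 produces an odd part of 0.3, the signature of a delay effect.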

    Note - The cross-covariance function is a more powerful tool than the cross-variogram in terms of structural analysis, as it allows the identification of delay effects. However, since it necessitates stronger hypotheses (stationarity, estimation of means), it is not really used in the estimation steps. In fact, the cross-variogram can be derived from the covariance as follows:

    γZY(h) = CZY(0) - 1/2 [CZY(h) + CZY(-h)]

    and it is therefore similar to the even part of the covariance. All the information carried by the odd part of the covariance is simply ignored.

    A last remark concerns the presence of information on all variables at the same data points: this property is known as isotopy. The opposite case is heterotopy: one variable (at least) is not defined at all the data points.

    The kriging procedure in the multivariate case can cope nicely with the heterotopic case. Nevertheless, in the meantime one has to calculate cross-variograms, which can obviously be established from the common information only. This consideration is damaging in a strongly heterotopic case, where the structure, only inferred on a small part of the information, is used for a procedure which possibly operates on the whole data set.

    3.1.4 Variogram Transformations

    Several transformations based on variogram calculations (in the generic sense) are also provided:

    The ratio between the cross-variogram and one of the simple variograms:

    (fig. 3.1-1)

    When this ratio is constant, the variable corresponding to the simple variogram is "self-krigeable". This means that in the isotopic case (both variables measured at the same locations) the kriging of this variable is equal to its cokriging. This property can be extended to more than 2 variables: the ratio should be considered for any pair of variables which includes the self-krigeable variable.

    The ratio between the square root of the variogram and the madogram:

    (fig. 3.1-2)

    This ratio is constant and equal to sqrt(π) for a standard normal variable, when its pairs satisfy the hypothesis of binormality. A similar result is obtained in the case of a bigamma hypothesis.
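    This sqrt(π) result can be checked with a small Monte Carlo experiment on binormal pairs (an illustrative sketch, not Isatis code; the correlation value and sample size are arbitrary choices):

    ```python
    import math
    import random

    # Monte Carlo check of the sqrt(pi) ratio under the binormality hypothesis
    random.seed(0)
    rho, n = 0.6, 200_000          # pair correlation and number of simulated pairs
    sum_sq = sum_abs = 0.0
    for _ in range(n):
        u = random.gauss(0.0, 1.0)                                       # head value
        v = rho * u + math.sqrt(1 - rho * rho) * random.gauss(0.0, 1.0)  # tail value
        d = u - v
        sum_sq += d * d
        sum_abs += abs(d)
    gamma = sum_sq / (2 * n)       # experimental variogram value
    nu = sum_abs / (2 * n)         # experimental madogram value
    ratio = math.sqrt(gamma) / nu  # close to sqrt(pi), about 1.77
    ```

    The ratio does not depend on the chosen correlation: changing rho changes gamma and nu individually, but sqrt(gamma)/nu stays near sqrt(π).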

    The ratio between the variogram and the madogram:

    (fig. 3.1-3)

    If the data obeys a mosaic model with tiles identically and independently valuated, this ratio is constant.

    The ratio between the cross-variogram and the square root of the product of the two simple variograms:

    (fig. 3.1-4)

    When two variables are in intrinsic correlation, the two simple variograms and the cross-variogram are proportional to the same basic variogram. This means that this ratio, in the case of intrinsic correlation, must be constant. When two variables are in intrinsic correlation, cokriging and kriging are equivalent in the isotopic case.

  • 3.2 Variogram Model

    3.2.1 Basic Structures

    The following pages illustrate all the basic structures available in Isatis to fit a variogram model on an experimental variogram. Each basic structure is described by:

    - its name,

    - its mathematical expression, which involves:

      - A coefficient which gives the order of magnitude of the variability along the vertical axis (homogeneous to the variance). In the case of bounded functions (covariances), this value is simply the level of the plateau reached and is called the sill. The same concept has been kept even for the non-bounded functions, and we continue to call it sill for convenience. The interest of this value is that it always comes as a multiplicative coefficient and therefore can be calculated using automatic procedures, as explained further. The sill is equal to "C" in the following models.

      - A parameter which affects the horizontal axis by normalizing the distances: hence the name of scale factor. This term avoids having to normalize the space where the variable is defined beforehand (for example when data are given in microns whereas the field extends over several kilometers). This scale factor is also linked to the physical parameter of the selected basic function. When the function is bounded, it reaches a constant level (sill) or even changes its expression after a given distance: this distance value is the range (or correlation distance in statistical language) and is equal to the scale factor. For the bounded functions where the sill is reached asymptotically, the scale factor corresponds to the distance where the function reaches 95% of the sill (also called practical range). For functions where the sill is reached asymptotically in a sinusoidal way (hole-effect variogram), the scale factor is the distance from which the variation of the function does not exceed 5% around the sill value.

        This is why, in the variogram formulae, we systematically introduce the coefficient ν (norm) which gives the relationship between the Scale Factor (SF) and the parameter a:

        SF = ν a

        For homogeneity of the notations, the norm ν and the parameter a are kept even for the functions which depend on a single parameter (linear variogram for example): the only interest is to manipulate distances "standardized" by the scaling factor and therefore to reduce the risk of numerical instabilities. Finally, the scale factor is used in case of anisotropy. For bounded functions, it is easy to say that the variable is anisotropic if the range varies with the direction. This concept is generalized to any basic function using the scale factor, which depends on the direction, in the calculation of the distance.

      - A third parameter required by some particular basic structures.

    - a chart representing the shape of the function for various values of the parameters,

    - a non-conditional simulation performed on a 100 x 100 grid. As this technique systematically leads to a normal outcome (hence symmetrical), we have painted positive values in black and negative ones in white, except for the linear model where the median is used as a threshold.

    Spherical Variogram

    γ(h) = C (3h/(2a) - h³/(2a³))   for h ≤ a
    γ(h) = C                       for h > a
    ν = 1

    (eq. 3.2-1)

    (fig. 3.2-1) Variograms (SF=1.,2.,3.,4.,5.,6.,7.,8.,9.,10.) & Simulation (SF=10.)

    Exponential Variogram

    γ(h) = C (1 - exp(-h/a))
    ν = 2.996

    (eq. 3.2-2)

    (fig. 3.2-2) Variograms (SF=1.,2.,3.,4.,5.,6.,7.,8.,9.,10.) & Simulation (SF=10.)
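    The meaning of the norm can be verified numerically: evaluating the exponential model at h = SF should give about 95% of the sill, since the scale factor is the practical range. A short sketch (the value 2.996 is the norm quoted above for this model):

    ```python
    import math

    C, SF = 1.0, 10.0
    nu = 2.996                 # norm of the exponential model
    a = SF / nu                # parameter used inside the formula, from SF = nu * a
    gamma_at_sf = C * (1 - math.exp(-SF / a))
    # gamma_at_sf is about 0.95 * C: the scale factor is the practical range
    ```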

    Gaussian Variogram

    γ(h) = C (1 - exp(-(h/a)²))
    ν = 1.731

    (eq. 3.2-3)

    (fig. 3.2-3) Variograms (SF=1.,2.,3.,4.,5.,6.,7.,8.,9.,10.) & Simulation (SF=10.)

    Cubic Variogram

    γ(h) = C (7(h/a)² - 35/4 (h/a)³ + 7/2 (h/a)⁵ - 3/4 (h/a)⁷)   for h ≤ a
    γ(h) = C                                                     for h > a
    ν = 1

    (eq. 3.2-4)

    (fig. 3.2-4) Variograms (SF=1.,2.,3.,4.,5.,6.,7.,8.,9.,10.) & Simulation (SF=10.)
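    The bounded models above translate directly into code. This is an illustrative sketch (not Isatis code), with h, a and C in consistent units and a being the parameter, not the scale factor:

    ```python
    import math

    def spherical(h, C=1.0, a=1.0):
        # reaches the sill C exactly at the range a
        if h >= a:
            return C
        r = h / a
        return C * (1.5 * r - 0.5 * r ** 3)

    def exponential(h, C=1.0, a=1.0):
        # reaches the sill asymptotically; practical range about 3a
        return C * (1 - math.exp(-h / a))

    def gaussian(h, C=1.0, a=1.0):
        # parabolic behaviour at the origin (very regular variable)
        return C * (1 - math.exp(-(h / a) ** 2))

    def cubic(h, C=1.0, a=1.0):
        # like the spherical, bounded with an actual range a
        if h >= a:
            return C
        r = h / a
        return C * (7 * r ** 2 - 8.75 * r ** 3 + 3.5 * r ** 5 - 0.75 * r ** 7)
    ```

    A quick sanity check is that the polynomial models evaluate exactly to C at h = a, matching the piecewise definitions.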

    Cardinal Sine Variogram

    γ(h) = C (1 - sin(h/a) / (h/a))
    ν = 20.371

    (eq. 3.2-5)

    (fig. 3.2-5) Variograms (SF=1.,5.,10.,15.,20.,25.) & Simulation (SF=25.)

    Stable Variogram

    γ(h) = C (1 - exp(-(h/a)^α))
    ν = 3^(1/α)

    (eq. 3.2-6)

    (fig. 3.2-6) Variograms (SF=8. & α = .25, .50, .75, 1., 1.25, 1.5, 1.75, 2.)

    Note - The technique for simulating stable variograms is not implemented in the Turning Bands method.

    Gamma Variogram

    γ(h) = C (1 - 1 / (1 + h/a)^α)   (α > 0)
    ν = 20^(1/α) - 1

    (eq. 3.2-7)

    (fig. 3.2-7) Variograms (SF=8. & α = .5,1.,2.,5.,10.,20.) & Simulation (SF=10. & α = 2.)

    Note - For α = 1, this model is called the hyperbolic model.

    J-Bessel Variogram

    γ(h) = C (1 - 2^α Γ(α+1) Jα(h/a) / (h/a)^α)   (α > d/2 - 1)
    ν = 1

    (eq. 3.2-8)

    where (from Chilès J.P. & Delfiner P., 1999, Geostatistics: Modeling Spatial Uncertainty, Wiley Series in Probability and Statistics, New York):

    - the Gamma function is defined for α > 0 by (Euler's integral)

      Γ(α) = ∫₀^∞ e^(-u) u^(α-1) du   (eq. 3.2-9)

    - the Bessel function of the first kind with index α is defined by the development

      Jα(x) = (x/2)^α Σ_(k≥0) (-1)^k / (k! Γ(k+α+1)) (x/2)^(2k)   (eq. 3.2-10)

    - the modified Bessel function of the first kind, used below, is defined by

      Iα(x) = (x/2)^α Σ_(k≥0) 1 / (k! Γ(k+α+1)) (x/2)^(2k)   (eq. 3.2-11)

    - the modified Bessel function of the second kind, used in the K-Bessel variogram hereafter, is defined by

      Kα(x) = π/2 · (I₋α(x) - Iα(x)) / sin(απ)   (eq. 3.2-12)

    (fig. 3.2-8) Variograms (SF=1. & α = .5,1.,2.,3.,4.,5.) & Simulation (SF=1. & α = 1)

    K-Bessel Variogram

    γ(h) = C (1 - (h/a)^α / (2^(α-1) Γ(α)) Kα(h/a))   (α > 0)
    ν = 1

    (eq. 3.2-13)

    (fig. 3.2-9) Variograms (SF=1. & α = .1,.5,1.,2.,5.,10.) & Simulation (SF=1. & α = 1.)
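    As a consistency check on the formulae above: for α = 1/2 the modified Bessel function of the second kind has the closed form K_(1/2)(x) = sqrt(π/(2x)) e^(-x), and the K-Bessel model then reduces exactly to the exponential model. A sketch covering only this special case (hypothetical helper names):

    ```python
    import math

    def k_half(x):
        # closed form of the modified Bessel function of the second kind, index 1/2
        return math.sqrt(math.pi / (2 * x)) * math.exp(-x)

    def k_bessel_variogram(h, C=1.0, a=1.0, alpha=0.5):
        # K-Bessel model; this sketch only supports alpha = 0.5, via k_half
        u = h / a
        return C * (1 - u ** alpha / (2 ** (alpha - 1) * math.gamma(alpha)) * k_half(u))
    ```

    For any h, the value coincides with C (1 - exp(-h/a)), i.e. the exponential variogram.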

    Exponential Cosine (Hole Effect Model)

    C(h) = exp(-|h_xy|/a₁) exp(-|h_z|/a₁) cos(2π h_z/a₂)   (h ∈ Rⁿ, a₁ > 0, a₂ > 0)

    Note that C(h) is a covariance in R² if and only if a₂ ≥ 2π a₁, and in R³ if and only if a₂ ≥ 2π a₁ √3 (in Chilès, Delfiner, Geostatistics, 1999).

    Note - This model cannot be used in the Turning Bands simulations.

    Generalized Cauchy Variogram

    γ(h) = C (1 - (1 + (h/a)²)^(-α))   (α > 0)
    ν = sqrt(20^(1/α) - 1)

    (eq. 3.2-14)

    (fig. 3.2-10) Variograms (SF=10. & α = .1,.5,1.,2.,5.,10.) & Simulation (SF=10. & α = 1.)
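    The norms quoted for the gamma and generalized Cauchy models follow directly from the 95% definition of the practical range, which can be verified numerically (illustrative sketch, hypothetical helper names):

    ```python
    def gamma_norm(alpha):
        # norm of the gamma model: distance (in units of a) at 95% of the sill,
        # obtained by solving 1 - (1 + nu)**(-alpha) = 0.95
        return 20 ** (1 / alpha) - 1

    def cauchy_norm(alpha):
        # norm of the generalized Cauchy model, from the same 95% definition:
        # 1 - (1 + nu**2)**(-alpha) = 0.95
        return (20 ** (1 / alpha) - 1) ** 0.5
    ```

    Plugging either norm back into its model indeed returns 95% of the sill at h/a = ν.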

    Linear Variogram

    γ(h) = C h/a
    ν = 1

    (eq. 3.2-15)

    (fig. 3.2-11) Variogram (SF=5.) & Simulation (SF=5.)

    Power Variogram

    γ(h) = C (h/a)^α   (0 < α < 2)

    (eq. 3.2-16)

    (fig. 3.2-1) Nugget Effect + Spherical (10km, 4km)

    - same range, different sills: zonal anisotropy

      This describes a phenomenon whose variability is larger in one direction than in the orthogonal one. This is typically the case for the vertical orientation through a "layer cake" deposit, by opposition to any horizontal orientation. No geometric correction will reduce this dissimilarity.

    (fig. 3.2-2) Nugget Effect + Spherical (N/A, 4km)

    - Practical calculations

    The anisotropy consists of a rotation and the ranges along the different axes of the rotated system. The rotation can be defined either globally or for each basic structure.

    In the 2D case, for one basic structure, and if "u" and "v" designate the two components of the distance vector in the rotated system, we first calculate the equivalent distance:

      d^2 = \left(\frac{u}{a_u}\right)^2 + \left(\frac{v}{a_v}\right)^2   (eq. 3.2-1)

    where a_u and a_v are the ranges of the model along the two rotated axes.

    Then this distance is used directly in the isotropic variogram expression, where the range is normalized to 1.

    In the case of geometric anisotropy, the ratio a_u/a_v corresponds to the ratio between the two main axes of the anisotropy ellipse.

    For zonal anisotropy, the contribution of the distance component along one of the rotated axes is discarded: this is obtained by setting the corresponding range to "infinity".

    Obviously, in nature, both anisotropies can be present simultaneously.

    Finally, the setup of any anisotropy requires the definition of a coordinate system: the system carrying the anisotropy ellipsoid in the case of geometric (or elliptic) anisotropy, or the system carrying the direction or plane of zonal anisotropy.

    This new system is defined by one rotation angle in 2D, or by 3 angles (dip, azimuth and plunge) in 3D. It is possible to attach the anisotropy rotation system globally or individually to each one of the nested basic structures. This possibility leads to an enormous variety of different textures.
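    The equivalent-distance mechanism can be sketched in a few lines; the spherical expression with range normalized to 1 is the standard one, and the function names below are ours, not Isatis':

    ```python
    import math

    def aniso_distance(hx, hy, angle_deg, au, av):
        """Equivalent distance (eq. 3.2-1): rotate the separation vector into the
        anisotropy system, then scale each component by its range."""
        t = math.radians(angle_deg)
        u = hx * math.cos(t) + hy * math.sin(t)
        v = -hx * math.sin(t) + hy * math.cos(t)
        # zonal anisotropy: pass av = float('inf') to discard the v component
        return math.sqrt((u / au) ** 2 + (v / av) ** 2)

    def spherical(d, sill=1.0):
        """Isotropic spherical variogram with range normalized to 1."""
        if d >= 1.0:
            return sill
        return sill * (1.5 * d - 0.5 * d ** 3)

    # Geometric anisotropy: ranges 10 km along u, 4 km along v (no rotation)
    g_u = spherical(aniso_distance(10.0, 0.0, 0.0, 10.0, 4.0))   # at the u-range -> sill
    g_v = spherical(aniso_distance(0.0, 2.0, 0.0, 10.0, 4.0))    # at half the v-range
    # Zonal anisotropy: the u component no longer contributes
    g_z = spherical(aniso_distance(100.0, 1.0, 0.0, float('inf'), 4.0))
    ```

    Attaching a different rotation angle to each nested structure, as described above, simply means calling `aniso_distance` with a different `angle_deg` per structure before summing the contributions.
    
    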

    3.2.3 Integral Ranges

    The integral range is the value of the following integral (only defined for bounded covariances):

      A = \int_{R^n} C(h) \, dh   (eq. 3.2-1)

    A is a function of the dimension n of the space.

    The following table gives the integral ranges of the main basic structures when the sill C is set to 1, with b = a (the scale factor SF) and \alpha the shape parameter. For the Gamma, J-Bessel and Generalized Cauchy models, the integral range is +inf when the condition on \alpha is not fulfilled.

    Structure       1-D                                    2-D                                  3-D
    -------------   ------------------------------------   ----------------------------------   --------------------------------------------
    Nugget Effect   0                                      0                                    0
    Exponential     2b                                     2 pi b^2                             8 pi b^3
    Spherical       3b/4                                   (pi/5) b^2                           (pi/6) b^3
    Gaussian        sqrt(pi) b                             pi b^2                               pi^(3/2) b^3
    Cardinal Sine   pi b                                   +inf                                 +inf
    Stable          2b Gamma(1/alpha + 1)                  pi b^2 Gamma(2/alpha + 1)            (4/3) pi b^3 Gamma(3/alpha + 1)
    Gamma           2b/(alpha-1)                           2 pi b^2/[(alpha-1)(alpha-2)]        8 pi b^3/[(alpha-1)(alpha-2)(alpha-3)]
                    (alpha > 1)                            (alpha > 2)                          (alpha > 3)
    J-Bessel        2 sqrt(pi) b Gamma(alpha+1)            4 pi alpha b^2                       8 pi^(3/2) b^3 Gamma(alpha+1)/Gamma(alpha-1/2)
                    / Gamma(alpha+1/2)  (alpha > 1/2)      (alpha > 3/2)                        (alpha > 5/2)
    K-Bessel        2 sqrt(pi) b Gamma(alpha+1/2)          4 pi alpha b^2                       8 pi^(3/2) b^3 Gamma(alpha+3/2)/Gamma(alpha)
                    / Gamma(alpha)
    Gen. Cauchy     sqrt(pi) b Gamma(alpha-1/2)            pi b^2/(alpha-1)                     pi^(3/2) b^3 Gamma(alpha-3/2)/Gamma(alpha)
                    / Gamma(alpha)  (alpha > 1/2)          (alpha > 1)                          (alpha > 3/2)
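    The entries of this table can be checked numerically; for instance the exponential row, with a rough midpoint rule over the radius (a hedged pure-Python sketch, not part of Isatis):

    ```python
    import math

    def integral_range_numeric(cov, dim, upper=120.0, n=120000):
        """Midpoint-rule estimate of A = integral of C(h) over R^dim for an
        isotropic covariance C given as a function of the radius."""
        step = upper / n
        total = 0.0
        for i in range(n):
            r = (i + 0.5) * step
            c = cov(r)
            if dim == 1:
                total += 2.0 * c * step                    # both half-lines
            elif dim == 2:
                total += 2.0 * math.pi * r * c * step      # ring element 2*pi*r*dr
            else:
                total += 4.0 * math.pi * r * r * c * step  # shell element 4*pi*r^2*dr
        return total

    b = 3.0
    expo = lambda r: math.exp(-r / b)
    print(round(integral_range_numeric(expo, 1), 3))   # ~ 2b = 6.0
    print(round(integral_range_numeric(expo, 2), 2))   # ~ 2*pi*b^2 = 56.55
    print(round(integral_range_numeric(expo, 3), 1))   # ~ 8*pi*b^3 = 678.6
    ```
    
    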

    3.2.4 Convolution

    If we know that the measured variable Z is the result of a convolution p applied on the underlying variable Y:

      Z = Y * p   (eq. 3.2-1)

    we can demonstrate that the variogram of Z can be deduced from the variogram of Y as follows:

      \gamma_Z = \gamma_Y * P \quad \text{with} \quad P = p * \check{p}   (eq. 3.2-2)

    where \check{p}(u) = p(-u) is the transposed convolution function. Therefore, if the convolution function is fully determined (its type and the corresponding parameters), specifying a model for Y will lead to the corresponding model for Z.

    3.2.5 Incrementation

    In order to introduce the concept of incrementation, we must recall the link between the variogram and the covariance:

      \gamma(h) = C(0) - C(h)   (eq. 3.2-1)

    where \gamma(h) is calculated as the variance of the smallest possible increment:

      \gamma(h) = \frac{1}{2} Var\left[ Z(x+h) - Z(x) \right]   (eq. 3.2-2)

    We can then introduce the generalized variogram \Gamma(h) as the variance of the increment of order (k+1):

      \Gamma(h) = \frac{1}{M_k} Var\left[ \sum_{q=0}^{k+1} (-1)^q C_{k+1}^q \, Z\big(x + (k+1-q)h\big) \right]   (eq. 3.2-3)

    where M_k = C_{2k+2}^{k+1}, which requires the data to be located along regularly spaced alignments.

    The scaling factor M_k is there to ensure that, in the case of a pure nugget effect:

      \Gamma(h) = 0 \text{ if } h = 0, \qquad \Gamma(h) = C0 \text{ if } h \neq 0   (eq. 3.2-4)

    The benefit of the incrementation is that the generalized variogram can be derived using the generalized covariance:

      \Gamma(h) = \frac{1}{M_k} \sum_{p=-(k+1)}^{k+1} (-1)^p C_{2k+2}^{k+1+p} \, K(ph)   (eq. 3.2-5)

    Then, we make explicit the relationships between \Gamma(h) and K(h) for several orders k:

      k = 0:   \Gamma(h) = K(0) - K(h)
      k = 1:   \Gamma(h) = K(0) - \frac{4}{3} K(h) + \frac{1}{3} K(2h)
      k = 2:   \Gamma(h) = K(0) - \frac{3}{2} K(h) + \frac{3}{5} K(2h) - \frac{1}{10} K(3h)

    Generally speaking, we can say that the shape of \Gamma(h) is not modified when compared to that of K(h):
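    Equation (eq. 3.2-5) can be checked mechanically against the k = 0, 1, 2 expansions above; a short Python sketch (the function name is ours):

    ```python
    from fractions import Fraction
    from math import comb

    def gen_variogram_coeffs(k):
        """Coefficients of K(0), K(h), ..., K((k+1)h) in the generalized variogram
        of order k, from eq. 3.2-5: (1/M_k) sum_p (-1)^p C(2k+2, k+1+p) K(p h)."""
        m_k = comb(2 * k + 2, k + 1)
        coeffs = [Fraction(0)] * (k + 2)
        for p in range(-(k + 1), k + 2):
            sign = -1 if p % 2 else 1
            coeffs[abs(p)] += Fraction(sign * comb(2 * k + 2, k + 1 + p), m_k)
        return coeffs  # coeffs[j] multiplies K(j*h)

    # k = 1 reproduces K(0) - (4/3) K(h) + (1/3) K(2h)
    print(gen_variogram_coeffs(1))
    ```
    
    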

    - if K(h) is a standard covariance (range a and sill C), \Gamma(h) reaches the same sill C for the same range a; only its shape is slightly different.

    - if K(h) is a generalized covariance of the |h|^\lambda type, then \Gamma(h) is of the same type: the only difference comes from its coefficient, which is multiplied by:

      \frac{1}{M_k} \sum_{p=-(k+1)}^{k+1} (-1)^{p+1} C_{2k+2}^{k+1+p} \, |p|^\lambda   (eq. 3.2-6)

    3.2.6 Multivariate Case

    When several variables are considered simultaneously, we work within the scope of the Linear Model of Coregionalization, which corresponds to a rather crude hypothesis, although it has been used satisfactorily in a very large number of cases.

    In this model, every variable is expressed as a linear combination of the same elementary components or factors. Therefore all simple and cross-variograms can be expressed as linear combinations of the same basic structures (i.e. the variograms of the factors). The covariance model is then defined by the list of the nested normalized basic structures (sill = 1) and the matrix of the sills (square, symmetrical and whose dimension is equal to the number of variables): each element b_p^{ij} is the sill of the cross-variogram between variables "i" and "j" (or the sill b_p^{ii} of the variogram of variable "i" when i = j) for the basic structure "p".

    Note - The cross-covariance value at the origin may be badly defined in the heterotopic case, or even undefined in the fully heterotopic case. It is possible to specify the values of the simple and cross-covariances at the origin, using for instance the knowledge of the variance-covariance matrix coming from another dataset.

    3.3 The Automatic Sill Fitting Procedure

    Isatis uses an original algorithm to fit a univariate or a multivariate model of coregionalization to the experimental variograms. The algorithm, called Multi Scale P.C.A., has been developed by C. Lajaunie (see Lajaunie C., Béhaxétéguy J.P., "Elaboration d'un programme d'ajustement semi-automatique d'un modèle de corégionalisation - Théorie", Technical report N21/89/G, Paris: ENSMP, 1989, 6p). This technique can be used, once the set of basic structures has been defined, in order to establish the matrix of sills.

    It obviously also works for a single variable. Nevertheless, we must note that it can only be used to infer the sill coefficients of the model, but does not help for the other types of parameters, such as:



    - the number and types of basic structures,

    - for each one of them, the range or third coefficient (if any), and finally the anisotropy.

    This is why the term "automatic fitting" is somewhat abusive.

    Considering a set of N second order stationary regionalized random functions Zi(x), we wish to establish the multivariate model taking into account all the simple and cross covariances Cij(h).

    If the variables Zi(x) are intrinsic, the covariances no longer exist and the model must then be derived from the simple and cross variograms \gamma_{ij}(h). Nevertheless, this chapter will be developed in the stationary case.

    A well known result is that the matrix of the b_p^{ij} for each basic structure p must be positive semi-definite in order to ensure the positiveness of the variance of any linear combination of the random variables Zi(x).

    In order to build this linear model of coregionalization, we assume that the variables Zi are decomposed on a basis of random variables generically denoted Y, stationary and orthogonal. These variables are regrouped in P groups of random functions Y_p^k characterized by the same covariance Cp(h) called the basic structure. The count of variables within each group is equal to the number of variables N. We will then write:

      Z_i(x) = \sum_{p=1}^{P} \sum_{k=1}^{N} a_p^{ik} \, Y_p^k(x)   (eq. 3.3-1)

    The coefficients a_p^{ik} are the coefficients of the linear model. The covariance between two variables Zi and Zj can be written:

      C_{ij}(h) = \sum_{p=1}^{P} \sum_{k=1}^{N} a_p^{ik} a_p^{jk} \, C_p(h)   (eq. 3.3-2)

    which can also be considered as:

      C_{ij}(h) = \sum_{p=1}^{P} b_p^{ij} \, C_p(h)   (eq. 3.3-3)


    Obviously the terms b_p^{ij} = \sum_{k=1}^{N} a_p^{ik} a_p^{jk}, homogeneous to sills, are symmetric, and the matrices B_p whose generic terms are b_p^{ij} are symmetric, positive semi-definite: they correspond to the variance-covariance matrix for each basic structure.

    3.3.1 Procedure

    Assuming that the number of basic structures P, as well as all the characteristics of each basic model C_p(h), are defined, the procedure determines all the coefficients a_p^{ik} and derives the variance-covariance matrices.

    Starting from the experimental simple and cross-covariances C^*_{ij}(h_u) on a set of U lags h_u, the procedure tries to minimize the quantity:

      \sum_{i,j} \sum_{u=1}^{U} \left[ C^*_{ij}(h_u) - C_{ij}(h_u) \right]^2 \omega(h_u)   (eq. 3.3-1)

    where \omega(h_u) is a weighting function chosen in order to reduce the importance of the lags with few pairs, and to increase the weight of the first lags corresponding to short distances. For more information on the choice of these weights, the user should refer to the next paragraph.

    Each matrix B_p is decomposed as:

      B_p = X_p \Lambda_p X_p^T   (eq. 3.3-2)

    where X_p is the matrix composed of the normalized eigen vectors and \Lambda_p is the diagonal matrix of the eigen values. Instead of minimizing (eq. 3.3-1) under the constraints that B_p is positive definite, we prefer writing that:

      b_p^{ij} = \sum_{k=1}^{N} a_p^{ik} a_p^{jk}   (eq. 3.3-3)

    imposing that each coefficient:

      a_p^{ik} = \sqrt{\lambda_p^k} \; x_p^{ik}   (eq. 3.3-4)


    where \lambda_p^k is the k-th term of the diagonal of \Lambda_p and x_p^{.k} is the k-th vector of the matrix X_p. This hypothesis ensures that the matrix B_p is positive definite.

    Equation (eq. 3.3-1) can now be reformulated:

      \sum_{i,j} \sum_{u=1}^{U} \left[ C^*_{ij}(h_u) - \sum_{p=1}^{P} \sum_{k=1}^{N} a_p^{ik} a_p^{jk} C_p(h_u) \right]^2 \omega(h_u)   (eq. 3.3-5)

    Without losing generality, we can impose the orthogonality constraints:

      \sum_{k=1}^{N} a_p^{ik} a_p^{jk} = 0 \quad (i \neq j)   (eq. 3.3-6)

    If we introduce the terms:

      K_{ij} = \sum_{u=1}^{U} \left[ C^*_{ij}(h_u) \right]^2 \omega(h_u)

      T_{pq} = \sum_{u=1}^{U} C_p(h_u) C_q(h_u) \, \omega(h_u)

      A_{ij}^p = \sum_{u=1}^{U} C_p(h_u) C^*_{ij}(h_u) \, \omega(h_u)   (eq. 3.3-7)

    the criterion (eq. 3.3-5) becomes:

      \sum_{i,j} K_{ij} + \sum_{i,j,p,q,k,l} a_p^{ik} a_p^{jk} a_q^{il} a_q^{jl} \, T_{pq} - 2 \sum_{i,j,p,k} a_p^{ik} a_p^{jk} \, A_{ij}^p   (eq. 3.3-8)

    By differentiation against each a_p^{ik}, we obtain, for each i, k and p:

      \sum_{j,l,q} a_p^{jk} a_q^{il} a_q^{jl} \, T_{pq} = \sum_{j} a_p^{jk} \, A_{ij}^p   (eq. 3.3-9)

    We shall describe the case of a single structure first, before reviewing the more general case of several nested basic structures.


    3.3.1.1 Case of a Single Basic Structure

    As the number of basic structures is reduced to 1, the indices p and q are omitted in the set of equations (eq. 3.3-9):

      \sum_{j,l} a^{jk} a^{il} a^{jl} \, T = \sum_{j} a^{jk} A^{ij} \quad \forall i, k   (eq. 3.3-1)

    Using the orthogonality constraints, the only non-zero term in the left-hand side of the equality is obtained when j = i:

      a^{ik} \sum_{l} (a^{il})^2 \, T = \sum_{j} a^{jk} A^{ij} \quad \forall i, k   (eq. 3.3-2)

    If we introduce:

      P^i = \sum_{l} (a^{il})^2   (eq. 3.3-3)

    then:

      a^{ik} P^i \, T = \sum_{j} a^{jk} A^{ij} \quad \forall i, k   (eq. 3.3-4)

    This leads to an eigen vector problem. If we denote respectively by \mu_k and x^{ik} the eigen values and the corresponding normalized eigen vectors, then:

      a^{ik} = \sqrt{\frac{\mu_k}{T}} \, x^{ik} \quad \text{if } \mu_k > 0, \qquad a^{ik} = 0 \quad \text{if } \mu_k \leq 0   (eq. 3.3-5)

    The minimum of the criterion is then equal to:

      \sum_{i,j} K_{ij} - \sum_{k \in K} \frac{(\mu_k)^2}{T}   (eq. 3.3-6)

    where K designates the set of indices corresponding to positive eigen values.

    This result will now be generalized to the case of several nested basic structures.


    3.3.1.2 Case of Several Basic Structures

    The procedure is iterative and consists in optimizing each basic structure in turn, taking into account the structures already optimized. The following flow chart describes one iteration:

    1. Loop on each basic structure p = 1, ..., P.

       If we define:

         K_{ij}^p(h) = C^*_{ij}(h) - \sum_{q \neq p} b_q^{ij} C_q(h)   (eq. 3.3-1)

       we optimize the a_p^{ik} in the equation:

         \sum_{i,j,u} \left[ K_{ij}^p(h_u) - \sum_{k} a_p^{ik} a_p^{jk} C_p(h_u) \right]^2 \omega(h_u)   (eq. 3.3-2)

       We then set, due to the orthogonality constraints:

         b_p^{ij} = \sum_{k} a_p^{ik} a_p^{jk}   (eq. 3.3-3)

    2. Improvement of the solution by selecting the coefficients m_p which minimize:

         \sum_{i,j,u} \left[ C^*_{ij}(h_u) - \sum_{p} m_p b_p^{ij} C_p(h_u) \right]^2 \omega(h_u)   (eq. 3.3-4)

       If m_p is positive, we update the results of step (1):

         b_p^{ij} \leftarrow b_p^{ij} \, m_p, \qquad a_p^{ik} \leftarrow a_p^{ik} \, \sqrt{m_p}   (eq. 3.3-5)

       Return to step (1).

    Step (2) is used to equalize the weight of each basic structure, as the first structure processed in step (1) has more influence than the next ones.

    The coefficients m_q are the solution of the linear system:

      \sum_{q} m_q \sum_{i,j} b_p^{ij} b_q^{ij} \, T_{pq} = \sum_{i,j} b_p^{ij} A_{ij}^p \quad \forall p   (eq. 3.3-6)

    Note - This procedure ensures that the criterion converges but does not imply that the b_p^{ij} converge.

    3.3.2 Choice of the Weights

    The principle of the Automatic Sill Fitting procedure is to minimize the distance between the experimental value of a variogram lag and the corresponding value of the model. This minimization is performed giving different weights to different lags. The determination of these weights depends on one of the four following rules:

    - Each lag of each direction has the same weight.

    - The weight for each lag of each direction is proportional to the total number of pairs for all the lags of this direction.

    - The weight for each lag of each direction is proportional to the number of pairs and inversely proportional to the average distance of the lag.

    - The weight for each lag of each direction is inversely proportional to the number of lags in this direction.
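    In the simplest setting — a single variable and a single basic structure — the weighted minimization above reduces to a one-coefficient least-squares problem; a hedged sketch of that special case (not the Multi Scale P.C.A. algorithm itself, and with our own function names):

    ```python
    import math

    def fit_sill(lags, gamma_exp, basic, weights=None):
        """Weighted least-squares sill b* minimizing
        sum_u w(h_u) * [gamma*(h_u) - b * g1(h_u)]^2,
        where g1 is the basic structure with unit sill."""
        if weights is None:
            weights = [1.0] * len(lags)
        g1 = [basic(h) for h in lags]
        num = sum(w * ge * g for w, ge, g in zip(weights, gamma_exp, g1))
        den = sum(w * g * g for w, g in zip(weights, g1))
        return num / den

    # Synthetic experimental variogram drawn from an exponential model of sill 2.5
    lags = [0.5, 1.0, 1.5, 2.0, 3.0, 5.0]
    model = lambda h: 1.0 - math.exp(-h)      # unit-sill exponential, scale 1
    gamma_exp = [2.5 * model(h) for h in lags]
    b = fit_sill(lags, gamma_exp, model)      # recovers 2.5
    ```

    Any of the four weighting rules above can be plugged in through the `weights` argument.
    
    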

    3.3.3 Printout of the Linear Model of Coregionalization

    This paragraph illustrates a typical printout for a model established for two variables called "Pb" and "Zn":

       Model : Covariance part
       =======================
       Number of variables = 2
       - Variable 1 : Pb
       - Variable 2 : Zn

    and fitted using a linear combination of two basic structures:

    - an exponential variogram with a scale factor of 2.5km (practical range),
    - a linear variogram (Order-1 G.C.) with a scale factor of 1km.

       Number of basic structures = 2
       S1 : Exponential - Scale = 2.50km
         Variance-Covariance matrix :
                      Variable 1   Variable 2
         Variable 1       1.1347       0.5334
         Variable 2       0.5334       1.8167
         Decomposition into factors (normalized eigen vectors) :
                      Variable 1   Variable 2
         Factor 1         0.6975       1.2737
         Factor 2         0.8051      -0.4409
         Decomposition into eigen vectors (whose variance is eigen values) :
                      Variable 1   Variable 2   Eigen Val.   Var. Perc.
         E.Vect 1         0.4803       0.8771       2.1087        71.45
         E.Vect 2         0.8771      -0.4803       0.8426        28.55
       S2 : Order-1 G.C. - Scale = 1km
         Variance-Covariance matrix :
                      Variable 1   Variable 2
         Variable 1       0.2562       0.0927
         Variable 2       0.0927       0.1224
         Decomposition into factors (normalized eigen vectors) :
                      Variable 1   Variable 2
         Factor 1         0.4906       0.2508
         Factor 2        -0.1246       0.2438
         Decomposition into eigen vectors (whose variance is eigen values) :
                      Variable 1   Variable 2   Eigen Val.   Var. Perc.
         E.Vect 1         0.8904       0.4552       0.3036        80.20
         E.Vect 2        -0.4552       0.8904       0.0750        19.80

    For each basic structure, the printout contains the following information:

    In the Variance-Covariance matrix, the sill of the simple variogram for the first variable "Pb" and for the exponential basic structure is equal to 1.1347. This sill is equal to 1.8167 for the second variable "Zn" and the same exponential basic structure. The cross-variogram has a sill of 0.5334. These values correspond to the matrix of the b_p^{ij} for the first basic structure.

    This Variance-Covariance matrix is decomposed into the orthogonal normalized vectors Y1 and Y2. In this example and for the first basic structure, we can read that:

      Pb = 0.6975 \, Y1 + 0.8051 \, Y2
      Zn = 1.2737 \, Y1 - 0.4409 \, Y2   (eq. 3.3-1)

    These coefficients are the coefficients a_p^{ik} of the procedure described beforehand, and one can check, for example, for the first basic structure (p = 1):

      b_1^{11} = (a_1^{11})^2 + (a_1^{12})^2 : \quad 1.1347 = (0.6975)^2 + (0.8051)^2
      b_1^{22} = (a_1^{21})^2 + (a_1^{22})^2 : \quad 1.8167 = (1.2737)^2 + (-0.4409)^2
      b_1^{12} = a_1^{11} a_1^{21} + a_1^{12} a_1^{22} : \quad 0.5334 = (0.6975)(1.2737) + (0.8051)(-0.4409)   (eq. 3.3-2)

    The last array corresponds to the decomposition into eigen values and eigen vectors. For example:

      a_1^{11} = \sqrt{\lambda_1^1} \, x_1^{11} : \quad 0.6975 = \sqrt{2.1087} \times (0.4803)
      a_1^{12} = \sqrt{\lambda_1^2} \, x_1^{12} : \quad 0.8051 = \sqrt{0.8426} \times (0.8771)
      a_1^{21} = \sqrt{\lambda_1^1} \, x_1^{21} : \quad 1.2737 = \sqrt{2.1087} \times (0.8771)
      a_1^{22} = \sqrt{\lambda_1^2} \, x_1^{22} : \quad -0.4409 = \sqrt{0.8426} \times (-0.4803)   (eq. 3.3-3)

    We can easily check that the vectors x_1^{.1} and x_1^{.2} are orthogonal and normalized.

    Each eigen vector corresponds to a line of the array and is attached to an eigen value. They are displayed by decreasing order of the eigen values. As the variance-covariance matrix is positive definite, the eigen values are positive or null. Their sum is equal to the trace of the matrix, and it makes sense to express each of them as a percentage of the total trace. This value is called "Var. Perc.".
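    The figures of this printout can be re-derived from the variance-covariance matrix alone; a small NumPy check (eigen vectors are only defined up to their sign):

    ```python
    import numpy as np

    # Variance-covariance matrix of the exponential structure, as printed
    B = np.array([[1.1347, 0.5334],
                  [0.5334, 1.8167]])

    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1]                 # display by decreasing eigen value
    vals, vecs = vals[order], vecs[:, order]

    A = vecs * np.sqrt(vals)                       # a_p^{ik} = sqrt(lambda_k) * x^{ik}

    print(np.round(vals, 4))                       # eigen values, ~ [2.1087, 0.8426]
    print(np.round(100.0 * vals / vals.sum(), 2))  # "Var. Perc.", ~ [71.45, 28.55]
    print(np.round(A, 4))                          # loadings, ~ [[0.6975, 0.8051],
                                                   #              [1.2737, -0.4409]] up to sign
    ```
    
    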


    4 Non-stationary Modeling

    This page constitutes an add-on to the User Guide for Statistics / Non-stationary Modeling.

    This technical reference describes the non-stationary variogram modeling approach, where both the Drift and the Covariance part of the Structure are directly derived in a calculation procedure.

    In the non-stationary case (the variable shows either a global trend or local drifts), the correct tool can no longer be the variogram, as we must deal with variables presenting much larger fluctuations. Generalized covariances are used instead. As they can be specified only when the drift hypotheses are given, a Non-stationary Model is constituted of both the drift and the generalized covariance parameters.

    The general framework used for the non-stationary case is known as the Intrinsic Random Functions of order k (IRF-k for short). In this scope, the structural analysis is split into two steps:

    - determination of the degree of the polynomial drift;
    - inference of the optimal generalized covariance compatible with the degree of the drift.

    The procedure described hereafter only concerns the univariate aspect. It is, however, developed to enable the use of the external drift feature.


    4.1 Unique Neighborhood

    4.1.1 Determination of the Degree of the Drift

    The principle is to consider that the random variable Z(x) is only constituted of the drift, which corresponds to a large scale function with regard to the size of the neighborhood. This function is usually modeled as a low order polynomial:

      Z(x) = m(x) = \sum_{l=1}^{K} a_l \, f^l(x)   (eq. 4.1-1)

    where:

    - f^l(x) denotes the basic monomials,
    - a_l are the unknown coefficients,
    - K represents the number of monomials and is related to the degree of the polynomial through the dimension of the space.

    The procedure consists in a cross-validation criterion, assuming that the best (order of the) drift is the one which results in the smallest average error. Cross-validation is a generic name for the process which in turn considers one data point (called the target), removes it and estimates it from the remaining neighboring information. The cross-validation error is the difference between the known and the estimated values. When the theoretical variance of estimation is available, the previous error can be divided by the estimation standard deviation.

    The estimation m*(x) is obtained through a least squares procedure, the main lines of which are recalled here. If Z_\alpha designates the neighboring information, we wish to minimize:

      \sum_{\alpha} \left[ Z_\alpha - m(x_\alpha) \right]^2   (eq. 4.1-2)

    Replacing m(x) by its expansion:

      \sum_{\alpha} Z_\alpha^2 - 2 \sum_{l} a_l \sum_{\alpha} f^l_\alpha Z_\alpha + \sum_{l,m} a_l a_m \sum_{\alpha} f^l_\alpha f^m_\alpha   (eq. 4.1-3)

    which must be minimized against each unknown a_l:

      \sum_{m} a_m \sum_{\alpha} f^l_\alpha f^m_\alpha = \sum_{\alpha} f^l_\alpha Z_\alpha \quad \forall l   (eq. 4.1-4)

    In matrix notation:

      (F^T F) \, A = F^T Z   (eq. 4.1-5)

    The principle in this drift identification phase consists in selecting data points as targets, fitting the polynomials for several order assumptions based on their neighboring information, and deriving the minimum square errors for each assumption. The optimal drift assumption is the one which produces, on average, the smallest error variance.

    The drawback of this method is its lack of robustness against possible outliers. As a matter of fact, an outlier will produce large variances whatever the degree of the polynomial and will reduce the discrepancy between results.

    A more efficient criterion, for each target point, is to rank the least squares errors for the various polynomial orders. The first rank is assigned to the order producing the smallest error, the second rank to the second smallest one, and so on. These ranks are finally averaged over the different target points, and the smallest averaged rank corresponds to the optimal degree of the drift.
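    The ranking criterion can be sketched as follows, assuming for simplicity 1-D coordinates and a leave-one-out neighborhood (a toy illustration with our own names, not the Isatis implementation):

    ```python
    import numpy as np

    def rank_drift_degrees(x, z, degrees=(0, 1, 2)):
        """Leave-one-out cross-validation of the polynomial drift degree: each point
        is re-estimated from the others by least squares for every candidate degree;
        the squared errors are ranked per target and the ranks averaged."""
        avg_rank = {d: 0.0 for d in degrees}
        n = len(x)
        for i in range(n):
            mask = np.arange(n) != i
            errs = sorted(
                (float((np.polyval(np.polyfit(x[mask], z[mask], d), x[i]) - z[i]) ** 2), d)
                for d in degrees
            )
            for rank, (_, d) in enumerate(errs, start=1):
                avg_rank[d] += rank / n
        return avg_rank  # the smallest average rank points to the optimal degree

    x = np.linspace(0.0, 1.0, 12)
    z = 1.0 + 2.0 * x + 5.0 * x ** 2      # noise-free quadratic drift
    ranks = rank_drift_degrees(x, z)
    best = min(ranks, key=ranks.get)      # degree 2
    ```
    
    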

    4.1.2 Inference of the Covariance

    Here again, we consider the generic form of the generalized covariance:

      K(h) = \sum_{p} b_p \, K_p(h)   (eq. 4.1-1)

    where the K_p(h) correspond to predefined basic structures.

    The idea consists in finding the coefficients b_p but, this time, among a class of quadratic estimators:

      \hat{b}_p = Z^T A_p Z   (eq. 4.1-2)

    using systematically all the information available.

    The principle of the method is based on the MINQUE theory (Rao), which has been rewritten in terms of generalized covariances.

    Let Z be a vector random variable following the usual decomposition:

      Z = X\beta + U   (eq. 4.1-3)

    Let us first review the MINQUE approach. The covariance matrix of Z can be expanded on a basis of authorized basic models:

      Cov(Z, Z) = \theta_1^2 V_1 + \ldots + \theta_r^2 V_r   (eq. 4.1-4)

    introducing the variance components \theta_p^2. We can estimate them using a quadratic form:


      \hat{\theta}_p^2 = Z^T A_p Z   (eq. 4.1-5)

    where the following conditions are satisfied on the matrix A_p:

    1. Invariance: A_p X = 0 (X is the drift matrix whose columns are the monomials of the coordinates)

    2. Unbiasedness: Tr(A_p V_q) = \delta_{pq}

    3. Optimality: ||A_p||^2_V = Tr(A_p V A_p V) minimum, where V is a covariance matrix used as a norm.

    Rao suggested defining V as a linear combination of the V_p:

      V = \sum_{p} \gamma_p^2 \, V_p   (eq. 4.1-6)

    The MINQUE is reached when the coefficients \gamma_p^2 coincide with the variance components \theta_p^2, but this is precisely what we are after.

    Using a matrix T of increments of the data Z (i.e. such that TX = 0), we can express A_p as:

      A_p = T^T S_p T

    and check that the norm V is only involved through:

      W = T V T^T

    If A and B designate real symmetric n \times n matrices, we define the scalar product:

      \langle A, B \rangle_V = Tr(A V B V)   (eq. 4.1-7)

    If A and B satisfy the invariance conditions, then we can find respectively S_A and S_B such that:

      A = T^T S_A T, \qquad B = T^T S_B T   (eq. 4.1-8)

    Then:

      \langle A, B \rangle_V = Tr(S_A W S_B W)   (eq. 4.1-9)


    which defines a scalar product on the (n-k) x (n-k) matrices, if k designates the number of drift terms.

    With these notations, we can reformulate the MINQUE theory:

    (eq. 4.1-10)

    (eq. 4.1-11)

    - The unbiasedness condition leads to:

    (eq. 4.1-12)

    We introduce the following notations:

    (eq. 4.1-13)

    then:

    (eq. 4.1-14)

    - The optimality condition:

    (eq. 4.1-15)

    (eq. 4.1-16)

    (eq. 4.1-17)

    (eq. 4.1-18)


    If H designates the subspace spanned by the H_i, the optimality condition implies that S_p belongs to this space and can be written:

      S_p = \sum_{i} \rho_p^i \, H_i   (eq. 4.1-19)

    The unbiasedness conditions can be written:

    (eq. 4.1-20)

    This system has solutions as soon as the matrix H (H(i,j) = ) is non-singular.

    When the coefficients \rho_p^i have been calculated, the matrices S_p and A_p are determined, and finally the value of \hat{b}_p is obtained.

    These coefficients must then be replaced in the formulation of the norm V and therefore in W. This leads to new matrices H_i and to new estimates of the coefficients \hat{b}_p. The procedure is iterated until the estimates of \hat{b}_p have reached a stable position.

    Still, there is no guarantee that the estimate satisfies the consistency conditions for K to be a valid generalized covariance.

    It can be demonstrated, however, that the coefficients linked to a single basic structure covariance lead to positive results, which produce authorized generalized covariances.

    The procedure then resembles the one used in the moving neighborhood case. All the possible combinations are tested, and the ones which lead to non-authorized generalized covariances are dropped.

    In order to select the optimal generalized covariance, a cross-validation test is performed, and the model which leads to the standardized error closest to 1 is finally retained.

    4.2 Moving Neighborhood

    This time, the procedure is quite different from the one used with a Unique Neighborhood. It consists in finding the optimal generalized covariance, knowing the degree of the drift.

    4.2.1 Determination of the Degree of the Drift

    The procedure consists in finding the optimal drift, considered as the large scale drift with regard to the (half) size of the neighborhood. As a matter of fact, each sample is considered in turn as the seed for the neighborhood search. This neighborhood is then split into two rings: the closest samples to the seed belong to the ring numbered 1, the other samples to ring number 2.

    As for the Unique Neighborhood case, the determination is based on a cross-validation procedure. All the data from ring 1 are used to fit the functions corresponding to the different drift hypotheses. Each datum of ring 2 is used to check the quality of the fit. Then the roles of both rings are inverted. The best fit corresponds to the minimal average variance of the cross-validation errors, or, for a more robust solution, to the minimal re-estimation rank. The final drift identification only considers the results obtained when testing data of ring 2 against drift trials fitted on samples from ring 1.

    4.2.2 Constitution of ALC-k

    We can then consider that the resulting model is constituted of the drift that we have just inferred, completed by a covariance function reduced to a pure nugget effect, the value of which is equal to the variance of the cross-validation errors.

    The value of the polynomial at the test data point (denoted by the index "0") is:

      m^*(x_0) = \sum_{\alpha} \lambda_\alpha Z_\alpha   (eq. 4.2-1)

    This establishes that this estimate is a linear combination of the neighboring data. The set of weights is given by:

      \lambda = F \, (F^T F)^{-1} f_0   (eq. 4.2-2)

    As the residual from the least squares polynomial of order k coincides with a kriging estimation using a pure nugget effect in the scope of the intrinsic random functions of order k, and as the nugget effect is an authorized model for any degree k of the drift, then:

      Z(x_0) - \sum_{\alpha} \lambda_\alpha Z_\alpha   (eq. 4.2-3)


    is an authorized linear combination of the points \{(Z_\alpha), (Z_0)\} with the corresponding weights \{(\lambda_\alpha), (-1)\}.

    We have found a convenient way to generate one set of weights which, given a set of points, constitutes an authorized linear combination of order k (ALC-k).
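    The construction can be sketched as follows: the least-squares weights at a target x0 form, together with the weight -1 at x0, a combination that filters any polynomial of degree <= k. A 1-D toy sketch (the function name and sample points are ours):

    ```python
    import numpy as np

    def alc_weights(x_neigh, x0, k):
        """Weights lambda of the degree-k least-squares polynomial estimate at x0:
        m*(x0) = f(x0)^T (F^T F)^{-1} F^T Z = sum_a lambda_a Z_a."""
        F = np.vander(np.asarray(x_neigh, dtype=float), k + 1, increasing=True)
        f0 = np.vander(np.array([float(x0)]), k + 1, increasing=True)[0]
        return f0 @ np.linalg.solve(F.T @ F, F.T)

    xs = np.array([0.0, 1.0, 2.0, 4.0])
    lam = alc_weights(xs, 3.0, k=1)

    # The increment Z(x0) - sum_a lambda_a Z(x_a) cancels any polynomial of
    # degree <= k, which makes it an ALC-k:
    for f in (lambda t: 1.0 + 0.0 * t, lambda t: 2.0 - 3.0 * t):
        residual = f(3.0) - lam @ f(xs)    # numerically zero
    ```
    
    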

    4.2.3 Inference of the Covariance

    The procedure is a cross-validation technique performed using the two rings of samples as defined when determining the optimal degree of the drift. Each datum of ring 1 is considered together with all the data in ring 2: they constitute a measure. Similarly, each datum of ring 2 is considered with all the data of ring 1. Finally, one neighborhood, centered on a seed data point, which contains 2N data points, leads to (a maximum of) 2N measures. The first task is to calculate the weights that must be attached to each point of the measure in order to constitute an authorized linear combination of order k.

    Now the order k of the random function is known, since it comes from the inference performed in the previous step. The obvious constraint is that the number of points contained in a measure must be larger than the number of terms of the drift to be filtered.

    A simple way to calculate these weights is obtained through the least squares fitting of polynomials of order k.

    We will now apply the famous "Existence and Uniqueness Theorem" to complete the inference of the generalized covariance. It says that for any ALC-k, we can write:

      Var\left( \sum_{\alpha} \lambda_\alpha Z_\alpha \right) = \sum_{\alpha} \sum_{\beta} \lambda_\alpha \lambda_\beta K_{\alpha\beta}   (eq. 4.2-1)

    introducing the generalized covariance K(h), where K_{\alpha\beta} designates the value of this function K for the distance between points x_\alpha and x_\beta.

    We assume that the generalized covariance K(h) that we are looking for is a linear combination of a given set of generic basic structures K_p(h), the coefficients b_p (equivalent to sills) of which still need to be determined:

      K(h) = \sum_{p} b_p \, K_p(h)   (eq. 4.2-2)

    We use the theorem for each one of the measures previously established, which we denote by using the index "m":

      Var\left( \sum_{\alpha} \lambda_\alpha^m Z_\alpha \right) = \sum_{\alpha,\beta} \lambda_\alpha^m \lambda_\beta^m K_{\alpha\beta} = \sum_{p} b_p \sum_{\alpha,\beta} \lambda_\alpha^m \lambda_\beta^m K_{p,\alpha\beta}   (eq. 4.2-3)

    If we assume that each generic basic structure K_p(h) is entirely determined with a sill equal to 1, each quantity:

      \sum_{\alpha,\beta} \lambda_\alpha^m \lambda_\beta^m K_{p,\alpha\beta}   (eq. 4.2-4)

    as well as the quantity:

      \left( \sum_{\alpha} \lambda_\alpha^m Z_\alpha \right)^2   (eq. 4.2-5)

    are known.

    Then the problem is to find the coefficients b_p such that:

      \left( \sum_{\alpha} \lambda_\alpha^m Z_\alpha \right)^2 \approx \sum_{p} b_p \sum_{\alpha,\beta} \lambda_\alpha^m \lambda_\beta^m K_{p,\alpha\beta}   (eq. 4.2-6)

    for all the measures generated around each test data point. This is a multivariate linear regression problem that we can solve by minimizing:

      \sum_{m} \frac{1}{\sigma_m^2} \left[ \left( \sum_{\alpha} \lambda_\alpha^m Z_\alpha \right)^2 - \sum_{p} b_p \sum_{\alpha,\beta} \lambda_\alpha^m \lambda_\beta^m K_{p,\alpha\beta} \right]^2   (eq. 4.2-7)

    The term \sigma_m^2 is a normation weight introduced to reduce the influence of the ALC-k with a large variance. Unfortunately, this variance is equal to:

      \sigma_m^2 = 2 \left[ \sum_{p} b_p \sum_{\alpha,\beta} \lambda_\alpha^m \lambda_\beta^m K_{p,\alpha\beta} \right]^2   (eq. 4.2-8)

    which depends on the very coefficients that we are looking for. This calls for an iterative procedure.

    Moreover we wish to obtain a generalized covariance as a linear combination of the basic struc-

    tures. As each one of the basic structures individually is authorized, we are in fact looking for a set of weights which are positive or null. We can demonstrate that, in certain circumstances, some coef-ficients may be slightly negative. But in order to ensure a larger flexibility to this automatic proce-dure, we simply ignore this possibility. We should however perform regression under the positiveness constraints. Instead we prefer to calculate all the possible regressions with one non-zero coefficient only, then with two non-zero coefficients, and so on ... Each one of these regres-sions is called a subproblem.

    As mentioned before, each subproblem is treated using an iterative procedure in order to reach a correct normation weight.

The principle is to initialize all the non-zero coefficients of the subproblem to 1. We can then derive an initial value for the normation weights ω_m. Using these initial weights, we solve the regression subproblem and derive new coefficients, from which we obtain new values of the normation weights. The iteration is stopped when the coefficients b_p remain unchanged between two consecutive iterations.

    We must still check that the solution is authorized as the resulting coefficients, although stable, may still be negative. The non-authorized solutions are discarded.

In any case, it can easily be seen that the monovariate regressions always lead to authorized solutions.

Let us assume that the generalized covariance is reduced to one basic structure:

K(h) = b \, K_0(h)    (eq. 4.2-9)

The single unknown is the coefficient b, which is obtained by minimizing:

\sum_m \omega_m \Big[ (\lambda_m Z)^2 - b \, \lambda_m K_0 \lambda_m \Big]^2    (eq. 4.2-10)

The solution is obviously:

b^* = \frac{\sum_m \omega_m \, (\lambda_m Z)^2 \, \lambda_m K_0 \lambda_m}{\sum_m \omega_m \, (\lambda_m K_0 \lambda_m)^2}    (eq. 4.2-11)

As λ_m Z is an ALC-k, the term λ_m K_0 λ_m corresponds to the variance of the ALC-k and is therefore positive. We can check that b* ≥ 0.

We have obtained several authorized sets of coefficients, each set being the optimal solution of the corresponding subproblem. We must now compare these results. The objective criterion is to compare the ratio between the experimental and the theoretical variance:

\frac{\sum_m \omega_m \, (\lambda_m Z)^2}{\sum_m \omega_m \sum_p b_p^* \, \lambda_m K_p \lambda_m}    (eq. 4.2-12)

    The closer this ratio is to 1, the better the result.
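As an illustration, the iterative procedure described above can be sketched as follows. This is a minimal sketch, not Isatis code: the function and array names are ours, and the choice of normation weights ω_m = 1/σ_m² (inverse squared theoretical variance) is our reading of the text.

```python
import numpy as np

def fit_subproblem(lz2, kpm, n_iter=100, tol=1e-10):
    """Iterative fit of the coefficients b_p for one subproblem (sketch).

    lz2 : (M,) experimental squared ALC-k values (lambda_m Z)^2
    kpm : (M, P) theoretical terms lambda_m K_p lambda_m, one column per
          basic structure retained in the subproblem (unit sills)
    """
    M, P = kpm.shape
    b = np.ones(P)                       # initialize the coefficients to 1
    for _ in range(n_iter):
        var = kpm @ b                    # theoretical variance of each ALC-k
        w = 1.0 / var**2                 # normation weights (our assumption)
        # weighted least squares on (eq. 4.2-7)
        sw = np.sqrt(w)
        b_new, *_ = np.linalg.lstsq(sw[:, None] * kpm, sw * lz2, rcond=None)
        if np.max(np.abs(b_new - b)) < tol:
            b = b_new
            break
        b = b_new
    return b
```

In practice the returned coefficients must still be checked for positivity, the non-authorized solutions being discarded as explained above.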


4.3 Case of External Drift(s)

The principle of the external drift technique is to replace the large scale drift function, previously modelled as a low order polynomial, by a combination of a few deterministic functions f_l known over the whole field. In practical terms, however, the first constant monomial universality condition is always kept, and some of the other traditional monomials can also be used, so that the drift can now be expanded as follows:

E[Z(x)] = m(x) = a_0 + \sum_l a_l f_l(x)    (eq. 4.3-1)

where the f_l denote both standard monomials and external deterministic functions. Once this new decomposition has been stated, the determination of the number of terms in the drift expansion, as well as of the corresponding generalized covariance, is similar to the procedure explained in the previous paragraph.

Nevertheless some additional remarks need to be made.

The inference (as well as the kriging procedure) will not work properly as soon as some of the basic drift functions and the data locations are linearly dependent.

In the case of a standard polynomial drift, these cases are directly linked to the geometry of the data points: a first order IRF will fail if all the neighboring data points are located on a line; a second order IRF will fail if they belong to any quadric such as a circle, an ellipse or a set of two lines.

In the case of external drift(s), this condition involves the values of these deterministic functions at the data points and is not always easy to check. In particular, we can imagine the case where only the external drift is used and where the function is constant for all the samples of a (moving) neighborhood: this property, together with the universality condition, will produce an instability in the inference of the model or in its use via the kriging procedure.

Another concern is the degree that we can attribute to the IRF when the drift is represented by one or several external functions. As an illustration, we could imagine using two external functions corresponding respectively to the first and second coordinates of the data. This would turn the target variable into an IRF-1 and would therefore authorize the fitting of generalized covariances such as K(h) = |h|^3. As a general rule we consider that the presence of an external drift function does not modify the degree of the IRF, which can only be determined using the standard monomials: this is a conservative position, as we recall that a generalized covariance that can be used for an IRF(k) can always be used for an IRF(k+1).


5 Quick Interpolations

This page constitutes an add-on to the User's Guide for Interpolate / Interpolation / Quick Interpolation.

The term Quick Interpolation is used to characterize an estimation technique that does not require any explicit model of the spatial structure. These techniques usually correspond to very basic estimation algorithms, widespread in the literature. For the sake of simplicity, only univariate estimation techniques are proposed.


5.1 Inverse Distances

The estimation is a linear combination of the neighboring information:

Z^* = \sum_i \lambda_i Z_i    (eq. 5.1-1)

The weight attached to each piece of information is inversely proportional to its distance to the target, raised to a given power p:

\lambda_i = \frac{1 / d_i^p}{\sum_j 1 / d_j^p}    (eq. 5.1-2)

If the smallest distance is smaller than a given threshold, the value of the corresponding sample is simply copied to the target point.
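A minimal sketch of this estimator (function and variable names are ours): the weights follow eq. 5.1-2, and a sample closer to the target than d_min is copied directly.

```python
import numpy as np

def idw_estimate(coords, values, target, p=2.0, d_min=1e-10):
    """Inverse-distance estimate at `target` (illustrative sketch).

    coords : (N, dim) sample coordinates; values : (N,) sample values.
    """
    d = np.linalg.norm(coords - target, axis=1)
    i = np.argmin(d)
    if d[i] < d_min:                 # target (almost) on a sample: copy it
        return float(values[i])
    w = 1.0 / d**p                   # weights proportional to 1/d^p
    w /= w.sum()                     # normalize so the weights sum to 1
    return float(w @ values)
```

With p = 2 and two samples at distances 0.25 and 0.75, the closer sample receives 90% of the weight, illustrating the strong screening effect of large powers.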


5.2 Least Square Polynomial Fit

The neighboring data are used in order to fit a polynomial expression of a degree specified by the user.

If f_\alpha^l designates the value of the monomial f_l at the data point x_\alpha, the least square system is written:

\sum_\alpha \Big[ Z_\alpha - \sum_l a_l f_\alpha^l \Big]^2 \quad \text{minimum}    (eq. 5.2-1)

which leads to the following linear system:

\sum_l a_l \sum_\alpha f_\alpha^l f_\alpha^{l'} = \sum_\alpha Z_\alpha f_\alpha^{l'} \quad \forall \, l'    (eq. 5.2-2)

When the coefficients a_l of the polynomial expansion are obtained, the estimation is:

Z^* = \sum_l a_l f_0^l    (eq. 5.2-3)

where f_0^l designates the value of each monomial at the target location.
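The normal equations (eq. 5.2-2) can be solved directly with a least-squares routine. A sketch for a 2-D polynomial drift (function and variable names are ours):

```python
import numpy as np

def lsq_poly_estimate(x, y, z, x0, y0, degree=1):
    """Fit a 2-D polynomial of the given degree to the neighboring data
    by least squares, then evaluate it at the target (x0, y0). Sketch only."""
    # monomials f_l(x, y) = x^i * y^j with i + j <= degree
    powers = [(i, j) for i in range(degree + 1)
                     for j in range(degree + 1 - i)]
    F = np.column_stack([x**i * y**j for i, j in powers])   # design matrix
    a, *_ = np.linalg.lstsq(F, z, rcond=None)               # coefficients a_l
    f0 = np.array([x0**i * y0**j for i, j in powers])       # monomials at target
    return float(f0 @ a)
```

When the data lie exactly on a polynomial of the chosen degree, the fit (and hence the estimate) is exact.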


5.3 Moving Projected Slope

The idea is to consider the data samples 3 by 3. Each triplet of samples defines a plane, whose value at the target location gives the plane-estimation related to that triplet. The estimated value is obtained by averaging the estimations given by all the possible triplets of the neighborhood. This can also be expressed as a linear combination of the data, but the weights are more difficult to establish.
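A literal sketch of the idea (illustrative only, names are ours): fit the plane through every triplet of neighbors and average the plane values at the target, skipping collinear triplets that define no unique plane.

```python
import numpy as np
from itertools import combinations

def moving_projected_slope(coords, values, target):
    """Average, over every triplet of neighbors, the value at `target`
    of the plane passing through the triplet (sketch)."""
    estimates = []
    for (i, j, k) in combinations(range(len(values)), 3):
        # plane z = a*x + b*y + c through the three samples
        A = np.array([[coords[i][0], coords[i][1], 1.0],
                      [coords[j][0], coords[j][1], 1.0],
                      [coords[k][0], coords[k][1], 1.0]])
        z = np.array([values[i], values[j], values[k]])
        try:
            a, b, c = np.linalg.solve(A, z)
        except np.linalg.LinAlgError:
            continue                  # collinear triplet: no unique plane
        estimates.append(a * target[0] + b * target[1] + c)
    return float(np.mean(estimates))
```

When the data themselves lie on a single plane, every triplet returns the same plane and the estimator is exact.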


5.4 Discrete Splines

The interested reader can find references on this technique in Mallet J.L., Automatic Contouring in Presence of Discontinuities (in Verly et al., eds., Geostatistics for Natural Resources Characterization, Part 2, Reidel, 1984). The method has only been implemented on regular grids.

The global roughness is obtained as a combination of the following constraints, defined in 2D for an interpolator φ(x, y):

l If we interpolate the top of a geological stratigraphic layer, as such layers are generally nearly horizontal, it is wise to require that:

R_1(\varphi) = \Big( \frac{\partial \varphi}{\partial x} \Big)^2 \quad \text{and} \quad R_2(\varphi) = \Big( \frac{\partial \varphi}{\partial y} \Big)^2 \quad \text{are minimum}    (eq. 5.4-1)

l If we consider the layer as an elastic beam that has been deformed under the action of geological stresses, it is known that shearing stresses in the layer are proportional to second order derivatives. At any point where the shearing stresses exceed a given threshold, rupture will occur. For this reason, it is wise to impose the following condition at any point where no discontinuity exists:

R_3(\varphi) = \Big( \frac{\partial^2 \varphi}{\partial x^2} \Big)^2, \quad R_4(\varphi) = \Big( \frac{\partial^2 \varphi}{\partial y^2} \Big)^2 \quad \text{and} \quad R_5(\varphi) = \Big( \frac{\partial^2 \varphi}{\partial x \, \partial y} \Big)^2 \quad \text{are minimum}    (eq. 5.4-2)

The global roughness can be established as follows:

R(\varphi) = \theta \, \{ R_1(\varphi) + R_2(\varphi) \} + (1 - \theta) \, \{ R_3(\varphi) + R_4(\varphi) + R_5(\varphi) \}    (eq. 5.4-3)

where θ is a real number belonging to the interval [0, 1].

Practice has shown that the term R_5(φ) has little influence on the result. For this reason, it is often dropped from the global criterion.

Finally, as we are dealing with values located on a regular grid, we replace the partial derivatives by their digital approximations:


\frac{\partial \varphi}{\partial x}(i,j) = \varphi(i+1,j) - \varphi(i-1,j) \qquad \frac{\partial \varphi}{\partial y}(i,j) = \varphi(i,j+1) - \varphi(i,j-1)

\frac{\partial^2 \varphi}{\partial x^2}(i,j) = \varphi(i+1,j) - 2\varphi(i,j) + \varphi(i-1,j)

\frac{\partial^2 \varphi}{\partial y^2}(i,j) = \varphi(i,j+1) - 2\varphi(i,j) + \varphi(i,j-1)

\frac{\partial^2 \varphi}{\partial x \, \partial y}(i,j) = \varphi(i+1,j+1) - \varphi(i-1,j+1) - \varphi(i+1,j-1) + \varphi(i-1,j-1)    (eq. 5.4-4)

Due to this limited neighborhood for the constraints, we can minimize the global roughness by an iterative process, using the Gauss-Seidel method.
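A minimal sketch of the Gauss-Seidel relaxation, under simplifying assumptions of ours: we take θ = 1 and one-sided differences φ(i+1,j) − φ(i,j), for which the stationarity condition at a free node is simply the average of its four neighbors; boundary nodes are left untouched for brevity. The full criterion with the second-derivative terms follows the same sweeping pattern.

```python
import numpy as np

def gauss_seidel_spline(grid, fixed, n_sweeps=500):
    """Sketch of discrete-spline relaxation with theta = 1: minimizing
    R1 + R2 with one-sided differences makes each free node the average
    of its four neighbors, reached by in-place Gauss-Seidel sweeps.
    `fixed` is a boolean mask marking the data nodes to keep unchanged."""
    phi = grid.astype(float).copy()
    ny, nx = phi.shape
    for _ in range(n_sweeps):
        for i in range(1, ny - 1):
            for j in range(1, nx - 1):
                if not fixed[i, j]:
                    phi[i, j] = 0.25 * (phi[i - 1, j] + phi[i + 1, j]
                                        + phi[i, j - 1] + phi[i, j + 1])
    return phi
```

Because each sweep reuses the values already updated in the same pass, convergence is faster than with a Jacobi-style update on a copy of the grid.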


5.5 Bilinear Grid Interpolation

When the data are defined on a regular grid, we can derive the value of a sample using the bilinear interpolation method as soon as the sample is surrounded by four grid nodes: (fig. 5.5-1)

If x and y denote the offsets of the sample from the grid node (i, j), and Δx, Δy the grid meshes:

Z^* = \frac{y}{\Delta y} \Big[ \frac{x}{\Delta x} \, Z(i+1, j+1) + \Big( 1 - \frac{x}{\Delta x} \Big) Z(i, j+1) \Big] + \Big( 1 - \frac{y}{\Delta y} \Big) \Big[ \frac{x}{\Delta x} \, Z(i+1, j) + \Big( 1 - \frac{x}{\Delta x} \Big) Z(i, j) \Big]    (eq. 5.5-1)

We can check that the bilinear technique is an exact interpolator, as when x = y = 0:

Z^* = Z(i, j)    (eq. 5.5-2)
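Eq. 5.5-1 in code form (a sketch; tx = x/Δx and ty = y/Δy are the normalized offsets, and the argument names are ours):

```python
def bilinear(z00, z10, z01, z11, tx, ty):
    """Bilinear interpolation inside one grid cell (eq. 5.5-1).

    z00 is the value at node (i, j), z10 at (i+1, j), z01 at (i, j+1),
    z11 at (i+1, j+1); (tx, ty) are the target's offsets from node (i, j)
    normalized by the mesh, with 0 <= tx, ty <= 1."""
    return (ty * (tx * z11 + (1 - tx) * z01)
            + (1 - ty) * (tx * z10 + (1 - tx) * z00))
```

At tx = ty = 0 the formula returns z00 exactly, which is the exact-interpolation property of eq. 5.5-2.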


6 Grid Transformations

This page constitutes an add-on to the On-Line Help for: Interpolate / Interpolation / Grid Operator / Tools / Grid or Line Smoothing.

    Except for the Grid filters, located in the Tools / Grid or Line Smoothing window and discussed in the last section, all the Grid Transformations can be found in Interpolate / Interpolation / Grid Operator and are performed on two different variable types:

l The real variables (sometimes called colored variables), which correspond to any numeric variable, no matter how many bits the information is coded on,

    l The binary variables which correspond to selection variables.

    Any binary variable can be considered as a real variable; the converse is obviously wrong.

    The specificity of these transformations is the use of two other sets of information:

l The threshold interval: it consists of a pair of values defining a semi-open interval of the type [a, b[. This threshold interval is used as a cutoff in order to transform a real variable into its indicator (which is a binary variable).

l The structuring element: it consists of three parameters defining the extension of the neighborhood, expressed in terms of pixels. Each dimension is entered as the radius of the ball by which the target pixel is dilated: when the radius is null, the target pixel is considered alone; when the radius is equal to 1, the neighborhood extension is 3 pixels, etc.

    An additional flag distinguishes the type of the structuring element: cross or block. The following scheme gives an example of a 2-D structuring element with radius of 1 along X (horizontal) and 2 along Y (vertical). The left side corresponds to a cross type and the right side to a block type.

    (fig. 6.0-1)
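As an illustration, the two shapes can be generated as boolean masks. This is a sketch of ours: the function name is hypothetical, and reading the cross type as the two axis-aligned arms through the target pixel is our interpretation of the scheme above.

```python
import numpy as np

def structuring_element(rx, ry, block=True):
    """Build a 2-D structuring element as a boolean mask of shape
    (2*ry + 1, 2*rx + 1): 'block' keeps the full rectangle, 'cross'
    keeps only the pixels aligned with the target along X or Y."""
    h, w = 2 * ry + 1, 2 * rx + 1
    if block:
        return np.ones((h, w), dtype=bool)
    s = np.zeros((h, w), dtype=bool)
    s[ry, :] = True      # horizontal arm through the target pixel
    s[:, rx] = True      # vertical arm through the target pixel
    return s
```

With radius 1 along X and 2 along Y, the cross keeps 7 of the 15 pixels of the corresponding block.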


When considering a target cell located on the edge of the grid, the structuring element is reduced to only those nodes which belong to the field: this produces an edge effect.


6.1 List of the Grid Transformations

The transformations are illustrated on a grid of 100 by 100 pixels, filled with two multigaussian isotropic simulations called the initial simulations. We will also use a binary version of these simulations, obtained by coding as 1 all the positive values and as 0 the negative ones: these will be called the binary simulations. (fig. 6.1-1)

The previous figure presents the two initial simulations in the upper part and the corresponding binary simulations in the bottom part. The initial simulations have been generated (using the Turning Bands method) in order to reproduce:

- a spherical variogram on the left side
- a gaussian variogram on the right side

Both variograms have the same scale factor (10 pixels) and the same variance. Each transformation will be presented using one of the previous simulations (either in its initial or binary form) on the left and the result of the transformation on the right.


In this paragraph, the types of the arguments and the results of the grid transformations are specified using the following coding:

m v: binary variable
m w: real or colored variable
m s: structuring element
m t: threshold interval

    l v = real2binary(w)

    converts the real variable w into the binary variable v. The principle is that the output variable is set to 1 (true) as soon as the corresponding input variable is different from zero.

    l w = binary2real(v) converts the binary variable v into the real variable w.

    l v = thresh(w,t) transforms the real variable w into its indicator v through the cutoff interval t. A sample is set to 1 if it belongs to the cutoff interval and to 0 otherwise.

    l v2 = erosion(s,v1)

    performs the erosion on the input binary image v1, using the structuring element s, storing the result in the binary image v2. A grain is transformed into a pore if there is at least one pore in its neighborhood, defined by the structuring element. The next figure shows an erosion with a cross structuring element (size 1).

    (fig. 6.1-2)
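The rule above (a grain becomes a pore as soon as one pore falls under the structuring element) can be coded directly, including the clipping of the element at the edges of the field. This is an illustrative sketch of ours, not Isatis code:

```python
import numpy as np

def erosion(v, s):
    """Binary erosion of image v by an odd-sized boolean structuring
    element s: a grain (1) becomes a pore (0) if any pore lies under
    the element. At the edges, only the part of the element inside
    the field is used (edge effect)."""
    ny, nx = v.shape
    ry, rx = s.shape[0] // 2, s.shape[1] // 2
    out = np.zeros_like(v)
    for i in range(ny):
        for j in range(nx):
            ok = True
            for di in range(-ry, ry + 1):
                for dj in range(-rx, rx + 1):
                    if (s[di + ry, dj + rx]
                            and 0 <= i + di < ny and 0 <= j + dj < nx
                            and v[i + di, j + dj] == 0):
                        ok = False
            out[i, j] = 1 if ok else 0
    return out
```

The dilation described next is obtained by the same loop with the roles of grain and pore exchanged.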


    l v2 = dilation(s,v1)

v2 is the binary image resulting from the dilation of the binary image v1 using the structuring element s. A pore is replaced by a grain if there is at least one grain in its neighborhood, defined by the structuring element. The next figure shows a dilation with a cross structuring element (size 1). (fig. 6.1-3)

    l v2 = opening(s,v1)

v2 is the binary image resulting from the opening of the binary image v1 using the structuring element s. It is equivalent to an erosion followed by a dilation, using the same structuring element. The next figure shows an opening with a cross structuring element (size 1).

(fig. 6.1-4)

    l v2 = closing(s,v1)

v2 is the binary image resulting from the closing of the binary image v1 using the structuring element s. It is equivalent to a dilation followed by an erosion, using the same structuring element. The next figure shows a closing with a cross structuring element (size 1).

    (fig. 6.1-5)

    l v3 = intersect(v1,v2)

    v3 is the binary image resulting from the intersection of two binary images v1 and v2. A pixel is considered as a grain if it belongs to the grain in both initial images.


    l v3 = union(v1,v2)

    v3 is the binary image resulting from the union of two binary images v1 and v2. A pixel is con-sidered as a grain if it belongs to the grain in one of the initial images at least.

    l v2 = negation(v1)

v2 is the binary image where the grains and the pores of the binary image v1 have been inverted.

l w2 = gradx(w1)

w2 is the real image which corresponds to the partial derivative of the initial real image w1 along the X axis, obtained by comparing the pixels on each side of the target node. Practically, on a 2D grid:

w_2(i_x, i_y) = \frac{w_1(i_x+1, i_y) - w_1(i_x-1, i_y)}{2 \, dx}    (eq. 6.1-1)

l w2 = grad_xm(w1)

w2 is the real image which corresponds to the partial derivative of the initial real image w1 along the X axis, obtained by comparing the value at the target with the previous adjacent pixel. Practically, on a 2D grid:

w_2(i_x, i_y) = \frac{w_1(i_x, i_y) - w_1(i_x-1, i_y)}{dx}    (eq. 6.1-2)

l w2 = grad_xp(w1)

w2 is the real image which corresponds to the partial derivative of the initial real image w1 along the X axis, obtained by comparing the value at the target with the next adjacent pixel. Practically, on a 2D grid:

w_2(i_x, i_y) = \frac{w_1(i_x+1, i_y) - w_1(i_x, i_y)}{dx}    (eq. 6.1-3)

Note - The rightmost vertical column of the image is arbitrarily set to pore (edge effect).

The next figure represents the gradient along the X axis of the initial (real) simulation.

(fig. 6.1-6)

l w2 = grady(w1)

w2 is the real image which corresponds to the partial derivative of the initial real image w1 along the Y axis, obtained by comparing the pixels on each side of the target node. Practically, on a 2D grid:

w_2(i_x, i_y) = \frac{w_1(i_x, i_y+1) - w_1(i_x, i_y-1)}{2 \, dy}    (eq. 6.1-4)

l w2 = grad_ym(w1)

w2 is the real image which corresponds to the partial derivative of the initial real image w1 along the Y axis, obtained by comparing the value at the target with the previous adjacent pixel. Practically, on a 2D grid:

w_2(i_x, i_y) = \frac{w_1(i_x, i_y) - w_1(i_x, i_y-1)}{dy}    (eq. 6.1-5)

l w2 = grad_yp(w1)

w2 is the real image which corresponds to the partial derivative of the initial real image w1 along the Y axis, obtained by comparing the value at the target with the next adjacent pixel. Practically, on a 2D grid:

w_2(i_x, i_y) = \frac{w_1(i_x, i_y+1) - w_1(i_x, i_y)}{dy}    (eq. 6.1-6)

Note - The upper line of the image is arbitrarily set to pore (edge effect).

The next figure represents the gradient along the Y axis of the initial (real) simulation. (fig. 6.1-7)
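The centered derivatives gradx (eq. 6.1-1) and grady (eq. 6.1-4) can be sketched with array slicing, assuming the image is stored as a NumPy array indexed w[iy, ix]; where a neighbor is missing we return NaN rather than the manual's pore convention (both are just markers for the edge effect).

```python
import numpy as np

def gradx(w, dx=1.0):
    """Centered X-derivative of a grid image (eq. 6.1-1); the first and
    last columns, where a neighbor is missing, are returned as NaN."""
    out = np.full_like(w, np.nan, dtype=float)
    out[:, 1:-1] = (w[:, 2:] - w[:, :-2]) / (2.0 * dx)
    return out

def grady(w, dy=1.0):
    """Centered Y-derivative (eq. 6.1-4), with the same edge convention."""
    out = np.full_like(w, np.nan, dtype=float)
    out[1:-1, :] = (w[2:, :] - w[:-2, :]) / (2.0 * dy)
    return out
```

The one-sided variants grad_xm/grad_xp and grad_ym/grad_yp follow by shifting the slices by one pixel and dropping the factor 2.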

l w2 = gradz(w1)

w2 is the real image which corresponds to the partial derivative of the initial real image w1 along the Z axis, obtained by comparing the pixels on each side of the target node. Practically, on a 2D grid:

w_2(i_x, i_z) = \frac{w_1(i_x, i_z+1) - w_1(i_x, i_z-1)}{2 \, dz}    (eq. 6.1-7)

l w2 = grad_zm(w1)

w2 is the real image which corresponds to the partial derivative of the initial real image w1 along the Z axis, obtained by comparing the value at the target with the previous adjacent pixel. Practically, on a 2D grid:

w_2(i_x, i_z) = \frac{w_1(i_x, i_z) - w_1(i_x, i_z-1)}{dz}    (eq. 6.1-8)

l w2 = grad_zp(w1)

w2 is the real image which corresponds to the partial derivative of the initial real image w1 along the Z axis, obtained by comparing the value at the target with the next adjacent pixel. Practically, on a 2D grid:

w_2(i_x, i_z) = \frac{w_1(i_x, i_z+1) - w_1(i_x, i_z)}{dz}    (eq. 6.1-9)

l w2 = laplacian(w1)

w2 is the real image which corresponds to the laplacian of the initial image w1. Practically, on a 2D grid:

w_2(i_x, i_y) = \frac{w_1(i_x+1, i_y) - 2 w_1(i_x, i_y) + w_1(i_x-1, i_y)}{dx^2} + \frac{w_1(i_x, i_y+1) - 2 w_1(i_x, i_y) + w_1(i_x, i_y-1)}{dy^2}    (eq. 6.1-10)

Note - The one pixel thick frame of the image is arbitrarily set to pore (edge effect).

The next figure represents the laplacian of the initial (real) simulation. (fig. 6.1-8)

l w4 = divergence(w1,w2,w3)

w4 is the real image which corresponds to the divergence of a 3-D field, whose components are expressed respectively by w1 along X, w2 along Y and w3 along Z. Practically, on a 3D grid:

w_4(i_x, i_y, i_z) = \frac{w_1(i_x+1, i_y, i_z) - w_1(i_x, i_y, i_z)}{dx} + \frac{w_2(i_x, i_y+1, i_z) - w_2(i_x, i_y, i_z)}{dy} + \frac{w_3(i_x, i_y, i_z+1) - w_3(i_x, i_y, i_z)}{dz}    (eq. 6.1-11)

l w4 = rotx(w1,w2,w3)

w4 is the real image which corresponds to the component along X of the rotational of a 3D field, whose components are expressed respectively by w1 along X, w2 along Y and w3 along Z. Practically, on a 3D grid:

w_4(i_x, i_y, i_z) = \frac{w_3(i_x, i_y+1, i_z) - w_3(i_x, i_y, i_z)}{dy} - \frac{w_2(i_x, i_y, i_z+1) - w_2(i_x, i_y, i_z)}{dz}    (eq. 6.1-12)

l w4 = roty(w1,w2,w3)

w4 is the real image which corresponds to the component along Y of the rotational of a 3D field, whose components are expressed respectively by w1 along X, w2 along Y and w3 along Z. Practically, on a 3D grid:

w_4(i_x, i_y, i_z) = \frac{w_1(i_x, i_y, i_z+1) - w_1(i_x, i_y, i_z)}{dz} - \frac{w_3(i_x+1, i_y, i_z) - w_3(i_x, i_y, i_z)}{dx}    (eq. 6.1-13)

l w4 = rotz(w1,w2,w3)

w4 is the real image which corresponds to the component along Z of the rotational of a 3D field, whose components are expressed respectively by w1 along X, w2 along Y and w3 along Z. Practically, on a 3D grid:

w_4(i_x, i_y, i_z) = \frac{w_2(i_x+1, i_y, i_z) - w_2(i_x, i_y, i_z)}{dx} - \frac{w_1(i_x, i_y+1, i_z) - w_1(i_x, i_y, i_z)}{dy}    (eq. 6.1-14)

l w2 = gradient(w1)

w2 is the real image containing the modulus of the 2D gradient of w1.

Practically, on a 2D grid:

w_2(i_x, i_y) = \sqrt{ \left( \frac{w_1(i_x+1, i_y) - w_1(i_x, i_y)}{dx} \right)^2 + \left( \frac{w_1(i_x, i_y+1) - w_1(i_x, i_y)}{dy} \right)^2 }    (eq. 6.1-15)

(fig. 6.1-9)

l w2 = azimuth2d(w1)

w2 is the real image containing the azimuth (in radians) of the 2D gradient of w1. Practically, on a 2D grid:

w_2(i_x, i_y) = \operatorname{atan} \left( \frac{w_1(i_x, i_y+1) - w_1(i_x, i_y)}{dy}, \; \frac{w_1(i_x+1, i_y) - w_1(i_x, i_y)}{dx} \right)    (eq. 6.1-16)

(fig. 6.1-10)
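The divergence of eq. 6.1-11 reduces to forward differences along each axis; a sketch for arrays indexed w[ix, iy, iz] (the name and the trimming convention are ours; the rotational components of eqs. 6.1-12 to 6.1-14 follow the same pattern with a minus sign between two such terms):

```python
import numpy as np

def divergence(w1, w2, w3, dx=1.0, dy=1.0, dz=1.0):
    """Forward-difference divergence of a 3-D field (eq. 6.1-11).

    w1, w2, w3 are the field components along X, Y and Z, indexed
    [ix, iy, iz]; the last slice along each axis is trimmed, which
    mirrors the edge effect described in the text."""
    return ((w1[1:, :-1, :-1] - w1[:-1, :-1, :-1]) / dx
            + (w2[:-1, 1:, :-1] - w2[:-1, :-1, :-1]) / dy
            + (w3[:-1, :-1, 1:] - w3[:-1, :-1, :-1]) / dz)
```

For the linear field (x, 2y, 3z) the divergence is the constant 1 + 2 + 3 = 6, which makes a convenient sanity check.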

l w = labelling_cross(v)

w is the real image which contains the ranks (or labels) attached to each grain component that can be distinguished in the binary image v. A grain component is the union of all the grain pixels that can be connected using a grain path; here two grains are connected as soon as they share a common face. The labels are strictly positive quantities such that two pixels belong to the same grain component if and only if they have the same label. The grain component labels are ordered so that the largest component receives the label 1, the second largest the label 2, and so on. The pore is given the label 0. In the following figure, only the first 14 largest components are represented separately (using different colors); all the smallest