TRANSCRIPT
19/09/2018
Glenhaven 3D Passive Seismic Survey CTSCo
Seismic event detection and localization using matched-field processing of very small tremor-type
events.
This report covers the data processing of the passive seismic data recorded at the Surat Basin in the
Wandoan Area, Queensland, Australia. This report addresses the following section of the above referenced
project:
1. Detection and localization of microseismic events.
The report presents a new method that we developed for the detection and localization of microseismic
events recorded on a dense array, and it is methodological in nature. For this purpose, we focused on the
continuous seismic data recorded during the first night of the active seismic survey and used an initial,
simple homogeneous velocity model suitable for detecting and locating events at the depth of the reservoir.
We provide a brief interpretation of the results and show that the approach is feasible; the results
for the entire dataset will be presented and interpreted in the final report.
Executive summary:
This report focuses on data recorded during the first night of the seismic survey. We used a
subset of 1,440 receivers out of the 10,050 available receivers to retrieve the spatial distribution of
microseisms. We used an initial, simple homogeneous velocity model representative of the
average velocity of waves travelling from the reservoir to the surface. This velocity model will be
replaced by a more reliable vertically and horizontally varying velocity model for the analysis and
interpretation of the entire dataset, which will be presented in the final report.
No strong seismic events (events with sufficient signal-to-noise ratio for seismic phases to be
identified) were detected in the study area with conventional triggering algorithms. However,
when using advanced techniques that exploit the dense seismic array and constructive
interference, some clusters of subsurface ambient seismic noise sources are apparent in the survey
area. We refrain from interpreting these structures until the final results have been achieved.
Introduction
Carbon capture and storage (CCS) is considered a high-potential method for reducing the amount
of CO2 emitted into the atmosphere. However, as it is still a relatively novel approach, the possible
consequences of underground CO2 injection are not yet fully understood.
It is therefore crucial to estimate the site and reservoir properties before the beginning of the storage
process (e.g., Lepore and Ghose, 2015; Arts et al., 2004; Arts et al., 2008; Xue and Ohsumi, 2004; Carcione
et al., 2006). The importance of imaging possible fractures and local seismicity in the area is related to the
risk that subsurface injection could trigger seismicity. This risk has been well documented (e.g., Raleigh
et al., 1976). Moreover, recent studies have shown an increase in seismicity due to injected wastewater
volumes and hydraulic fracturing in, for example, the mid-continental USA (Ellsworth, 2013; Wang et al.,
2016).
In the case of hydraulic fracturing, microseismic monitoring has shown that fractures can be complex,
with extended fracture lengths. Hydraulic fractures often interact with pre-existing
fractures in the reservoir, which can create an entire fracture network (Maxwell et al., 2011). Also,
injections of hydraulic fracturing fluid can change the loading conditions on pre-existing faults,
even if there is no direct hydrological connection (Ellsworth, 2013).
This is why it is important to study the area of a future CCS site in detail, including possible
microseismic activity and the distribution of fractures in the reservoir. The aim of this project is to detect
fault and fracture zones that may affect the CO2 injections, in particular fractures in the reservoir
(Precipice Sandstone), the basement and other shallow reservoirs, and to detect leakage zones in the seals
(Evergreen and other shallow formations).
In this project, we analyzed the passive seismic data recorded during the dense Glenhaven 3D seismic
survey. The passive data were recorded continuously on 10,050 seismic stations for 5 days. The active survey
was recorded during the daytime. In this study, we focus on "calm periods", i.e., the nights when there was
no active Vibroseis™ seismic shooting. Our aim for the second part of the project is to see if there is any
local microseismic activity in the area and whether those events can be used to reveal local geological
features, such as faults or fractures. The dense seismic dataset presents computational challenges. In order
to establish the best procedure to process the data, we focus our attention on one night of data and only use
a subset of stations for this report. Additionally, a homogeneous velocity model that is suitable to detect
and locate events at the depth of the reservoir is used to simplify the procedure. Due to these simplifications, the
report is intended to be methodological in nature, while the final report will present the results obtained
for the full dataset using a realistic velocity model. However, even with these simplifications, the results
shown in this report indicate interesting features of the reservoir that we believe will be of value to
CTSCo.
Motivation
Recent developments in seismic data acquisition hardware and computational power have enabled
geophysicists to deploy and analyse dense, wireless and autonomous seismic monitoring networks. In
geophysics, spatially dense arrays enhance the spatial and spectral characterization of the various body
waves and surface waves propagating in the medium. However, the growing volume of recorded seismic
data and continuous need for monitoring and imaging zones of seismic hazard require the development of
automated methods.
Traditionally, localization methods were based on P-wave and/or S-wave first arrivals and required
accurate phase picking. To ease this procedure, automatic picking techniques were developed
(Allen, 1982; Bai and Kennett, 2000; Saragiotis et al., 2002). However, in surface microseismic monitoring,
the signal-to-noise ratio (SNR) is often low and the phase arrivals are not easily recognizable. This issue
can be overcome with full-waveform-based techniques, which exploit the complete waveform information.
The waveform-based methods can be divided into two approaches: reverse time imaging (McMechan, 1982;
Fink et al., 2000; Kawakatsu and Montagner, 2008; Larmat et al., 2008) and diffraction stack imaging (Kao
and Shan, 2004; Chambers et al., 2010; Liao et al., 2012; Lacazette et al., 2013). Reverse time imaging
back-propagates the wavefield from each virtual source (receiver) to the point where the maximum
energy focuses (e.g., Steiner et al., 2008). It involves a numerical solution of the wave equation, which
can be computationally challenging depending on the source mechanism. The second approach, diffraction
stack imaging, consists of delaying and summing the observed seismic waveforms. This method is easier
to implement, although its accuracy depends on the radiation patterns imposed by the source mechanism
(e.g., Trojanowski and Eisner, 2017).
In particular, amplitude stacking migration methods need to correct for polarity changes. Different
approaches have been proposed to normalize the amplitudes: absolute amplitudes (e.g., Kao and Shan,
2004), squared amplitudes of the signal (Gajewski et al., 2007), the ratio of the short-term average to the
long-term average (Drew et al., 2005) and signal envelopes (Gharti et al., 2010). A joint inversion for the
seismic moment tensor and localization/detection increases the resolution (e.g., Gharti et al., 2011),
although it can be computationally demanding.
A relative approach can simplify the localization procedure, especially regarding the complexity of the
velocity model. Relative methods exploit waveform similarities with previously detected events. They are
often based on a cross-correlation of a pre-located 'master event' or 'template' with the data (e.g.,
Evernden, 1969; Deichmann and Garcia-Fernandez, 1992; Got et al., 1994). The relative cross-
correlation-based approach has been successfully applied in microseismic monitoring (e.g., Roux et al.,
2014; Grechka, 2015; Grigoli, 2016), though it requires a priori information in the form of a template
(master event). Moreover, it only allows relative localization in the proximity of the template location.
However, absolute cross-correlation-based methods have also been developed to evaluate the coherency
between wavefields. In particular, Matched-Field Processing (MFP; Kuperman and Turek, 1997), which
is similar to "cross-correlation based beamforming" (e.g., Ruigrok et al., 2017), was successfully used by
Corciulo et al. (2012) to localize microseismic sources at the exploration scale using ambient noise data.
Also, Wang et al. (2012) used MFP to detect and locate microearthquakes in a geothermal field.
In the following, we propose a new workflow for the detection and localization of microseismic events.
We use MFP, and in particular the Bartlett processor, to automatically detect and localize noise sources at
depth with minimal a priori information. This array-processing-based approach brings together
elements from diffraction stack imaging and reverse time imaging, and benefits from patch-array
acquisition (Roux et al., 2014). We show that when adequate preprocessing and parameterization of the
Bartlett algorithm are used with a simple velocity model, it is possible to retrieve information about the
different zones that generate microseismic noise and to characterize their spatial and temporal evolution.
The proposed approach is an automatic procedure that results in numerous detections, which allows us to
image and monitor the area of interest. It can also be generalized to different geophysical contexts, such
as the localization of natural earthquakes or the monitoring of volcanic activity.
Method: Matched-Field Processing
Matched-field processing is an array-processing technique that allows for the localization of low-amplitude,
incoherent noise sources. MFP has proven useful not only in ocean acoustics, but also for monitoring geyser
activity (Cros et al., 2011; Vandemeulebrouck et al., 2013), in an exploration context (Corciulo et al., 2012;
Chmiel et al., 2016) and in geothermal event detection (Wang et al., 2012). Matched-field processing is an
extension of conventional beamforming. Beamforming is a general term for summing phase-shifted
waveforms, and it can be interpreted as a spatial filter. It is based on phase-delay measurements, for which
spatial coherence is required. Note that diffraction stacking can also be regarded as a beamforming
operation in the time domain.
MFP is classically performed in the frequency domain, which makes it similar to Time Reversal Source
imaging (e.g., Kawakatsu and Montagner 2008), as both operations are based on phase conjugation
principles (Kuperman et al., 1998). Time-reversal processing is a back-propagation operation which
benefits from spatial reciprocity in the propagation medium (Zhang et al., 2010). In practice, MFP brings
together elements from diffraction stack imaging and reverse time imaging. Indeed, MFP can be applied
to any complex medium, provided the signals recorded on nearby sensors are spatially coherent. MFP is a forward
propagation process, which involves placing a “trial source” at each point of a search grid, computing
model-based Green’s functions on the receiver array and then correlating the modeled Green’s functions
with recorded signals. In the present application, MFP is limited to the phase match between the frequency-
domain modeled Green’s functions and the Fourier transform of time-windowed data, with the ambition
to transform the MFP technique into an automated detection / localization procedure.
MFP can also be compared to semblance, although semblance measures the phase and amplitude coherency
of a signal, whereas our application of MFP measures only the phase coherence and disregards the amplitude.
Phase-only coherence is generally accepted to perform better in areas where the signal-to-noise ratio of the
signal to be detected and located is less than one (i.e., buried in the noise).
MFP consists of the following steps:
1. The continuous records are divided into short, overlapping time windows.
2. Each time window is Fourier transformed, and the cross-spectral density matrix (CSDM) is formed from the data vector over all of the receivers.
3. For each trial source position, a model-based Green's function (replica vector) is computed on the receiver array.
4. The replica is matched to the CSDM with the Bartlett processor, giving an ambiguity map over the set of trial positions.
The output value of the ambiguity map is bounded between 0 and 1. The maximum value of 1 can be
obtained only in the case of: a) a perfect match between the data and the model; b) the absence of
incoherent noise; and c) an isotropic expression of the source radiation pattern on the array, which also
implies that the source is spatially localized and its lateral extension is smaller than the wavelength. As all
of these conditions are rarely fulfilled, we should only interpret relative changes of the Bartlett amplitude
instead of absolute values.
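These properties can be illustrated with a minimal, phase-only Bartlett sketch in Python. The array geometry, function names and numerical values below are illustrative assumptions, not the production code used for this report:

```python
import numpy as np

def bartlett_output(data_fft, coords, trial_src, vp, freq):
    """Phase-only Bartlett processor output for one frequency.

    data_fft : complex spectra at one frequency, shape (n_receivers,)
    coords   : receiver coordinates, shape (n_receivers, 3)
    trial_src: trial source position (x, y, z)
    vp       : homogeneous P-wave velocity (m/s); freq in Hz
    """
    # Normalize data to unit amplitude (phase-only matching)
    d = data_fft / np.abs(data_fft)
    # Cross-spectral density matrix (rank-1 here; averaged over snapshots in practice)
    K = np.outer(d, d.conj())
    # Modeled Green's function: pure phase delay in a homogeneous medium
    dist = np.linalg.norm(coords - np.asarray(trial_src), axis=1)
    g = np.exp(-2j * np.pi * freq * dist / vp)
    w = g / np.linalg.norm(g)  # unit-norm replica vector
    # Bartlett output, bounded between 0 and 1
    return np.real(w.conj() @ K @ w) / len(d)

# Perfect match: data generated by the same homogeneous model
coords = np.random.default_rng(0).uniform(0, 2000, (40, 3))
src = (1000.0, 1000.0, 1000.0)
dist = np.linalg.norm(coords - np.asarray(src), axis=1)
data = np.exp(-2j * np.pi * 20.0 * dist / 3000.0)
print(bartlett_output(data, coords, src, 3000.0, 20.0))  # ~1.0 at the true source
```

With a perfect model match and no incoherent noise, the output reaches its upper bound of 1, in line with conditions a)-c) above; real data fall well below this bound.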
Higher spatial resolution can be obtained with adaptive MFP techniques. However, these algorithms
are more sensitive to incoherent noise and velocity model errors, as they require inversion of the CSDM
(Jensen et al., 2011). Using a homogeneous velocity model for the detection and localization of downhole
noise favours the use of the Bartlett processor, due to its stability and robustness. More complex Green's
functions can of course be used as forward models (e.g., velocity-gradient or layered media). However, the
idea behind the present study is to define an automatic and simple-to-use approach that provides information
about heterogeneities in a reservoir with minimal a priori information.
To localize each microseismic source, we propose a minimization algorithm that relies on the
downhill simplex search method (Nelder-Mead optimization) of Nelder and Mead (1965) and Lagarias et
al. (1998), based on a constrained multivariable nonlinear programming solver for the position s(x, y, z).
As the exploration of the solution space is characterized by a certain level of randomness, the optimization
algorithm should be resistant to local minima. In particular, the choice of the starting point of the algorithm
might influence its performance. We discuss the choice of the starting point in the following section.
The final source position s0 is defined by the maximum output of the MFP optimization, as in equation
4:

s0 = arg max_s B(s)    (4)

where B is the Bartlett output.
This heuristic approach rapidly explores the search space stochastically, so it does not depend on the
grid size or the spatial step of the grid. From there, we develop an automatic detection and localization
workflow.
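The optimization step can be sketched with SciPy's Nelder-Mead implementation on a toy ambiguity surface. The function `locate_source`, the boundary-rejection rule and all numerical values here are illustrative assumptions, not the report's actual implementation:

```python
import numpy as np
from scipy.optimize import minimize

def locate_source(bartlett, starts, bounds, max_iter=600):
    """Maximize a Bartlett-like output over s = (x, y, z) with Nelder-Mead,
    trying several starting points and keeping the best interior solution.

    bartlett : callable returning the output value at a position
    starts   : list of starting positions
    bounds   : ((xmin, xmax), (ymin, ymax), (zmin, zmax)) search zone
    """
    best = None
    for s0 in starts:
        res = minimize(lambda s: -bartlett(s), np.asarray(s0, float),
                       method="Nelder-Mead",
                       options={"maxiter": max_iter, "xatol": 1.0, "fatol": 1e-10})
        # Reject solutions that land on the search-zone boundary
        on_edge = any(np.isclose(v, lim, atol=1.0)
                      for v, (lo, hi) in zip(res.x, bounds) for lim in (lo, hi))
        if not on_edge and (best is None or res.fun < best.fun):
            best = res
    return None if best is None else best.x

# Toy ambiguity surface peaked at (500, 500, 1000)
peak = np.array([500.0, 500.0, 1000.0])
b = lambda s: np.exp(-np.sum((np.asarray(s) - peak) ** 2) / 1e5)
starts = [(100.0, 100.0, 900.0), (800.0, 200.0, 1100.0), (300.0, 700.0, 1000.0)]
pos = locate_source(b, starts, ((-2000, 2000), (-2000, 2000), (750, 1250)))
print(np.round(pos))  # close to the true peak at (500, 500, 1000)
```

In the actual workflow the objective is the Bartlett output of equation 3 rather than this toy Gaussian, and the restart-on-boundary rule described in the next section replaces the simple rejection used here.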
The potential biases associated with the homogeneous velocity model, the source mechanism correction
and the minimization algorithm are discussed in more detail in the following sections.
Potential bias due to source mechanism and homogeneous velocity model approximations
In the present MFP approach, we use an acoustic approximation without amplitude correction for the
source mechanism. Many studies have shown that this omission can reduce the number of detections and
cause mislocations of events (e.g., Trojanowski and Eisner, 2017). In Chmiel et al. (2018), we showed
the effect of the polarity and of the homogeneous velocity model on the detections. As expected, we observed
a strong misfit in localization and an important drop in MFP amplitudes in the least favourable configuration,
for example when half of the patches have a positive polarity and half a negative polarity. Second, the
homogeneous velocity model is also a significant simplification of the medium, as it does not account for
its layered structure, possible lateral variations or near-surface effects. In this case, the localization
errors increased slowly, with a misfit in localization that reaches at most one wavelength. For both tests,
we observed that the horizontal misfit in localization (offset) is less influenced by the changes in polarity
and velocity model.
However, we underline the fact that the "acoustic" approximation drastically simplifies and accelerates the
detection and localization algorithm, which is essential for an automated localization procedure
on long-duration, multi-sensor recordings (as described in the discussion of Roux et al., 2014). At worst,
these MFP results can be used as a first-order approximation of the dominant noise source localization in
and around the fractured zone. The outputs of the simple automated detection method could then be used
to define a subset of data samples and trial location points to process at a more sophisticated level with
more complex algorithms and velocity models, thus reducing the overall computational effort while retaining
the final precision.
It is important to note that the detection capability and accuracy of this processing method are similar to
those of other matched-field processing methods (such as the source scanning algorithm (SSA), conventional
beamforming or cross-correlation beamforming (CCBF)), but two important considerations make the method
computationally much more efficient than the others. Firstly, instead of searching every single time sample
as a possible event initiation time (as in the source scanning algorithm), time is split into windows. This
immediately reduces the number of computations by a factor of more than 100. Secondly, by examining the
coherence of the signals in the frequency domain, the processing speed is further improved.
In fact, the method we implement is mathematically equivalent to cross-correlation beamforming in the
time domain with a Dirac delta Green's function approximation. As shown by Ruigrok et al. (2017),
cross-correlation beamforming performs slightly better than conventional beamforming. These methods are
mathematically equivalent to the source scanning algorithm if the auto-correlations are included in the
analysis, and the exclusion of the auto-correlation functions improves the robustness of the method to
incoherent noise (Ruigrok et al., 2017).
In our opinion, this processing method will give the best results and can also be implemented to process
the entire dataset in a reasonable time (~1×10^5 core-hours, compared to ~1×10^7 core-hours for
conventional beamforming). On a 100-core machine, our method would take around 1 month to process all
of the data, compared to roughly 10 years for conventional beamforming.
In the following processing, we use a homogeneous P-wave velocity model extracted from the
velocity cube obtained from the conventional 3D processing of the survey data. The homogeneous velocity
model (Vp = 3000 m/s) is calculated as the average of the velocity cube over depth, Easting and
Northing.
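This averaging step can be sketched as follows. The velocity cube below is synthetic (illustrative values only); only the averaging operation mirrors the text:

```python
import numpy as np

# Hypothetical velocity cube Vp(Easting, Northing, depth), standing in for
# the cube produced by conventional 3D processing of the survey data
rng = np.random.default_rng(1)
vp_cube = 3000.0 + 200.0 * rng.standard_normal((50, 50, 40))

# Homogeneous model: the mean of the cube over depth and Easting/Northing,
# i.e. the mean over all three axes
vp_homogeneous = float(vp_cube.mean())
print(round(vp_homogeneous, 1))  # close to 3000 m/s for this synthetic cube
```

In practice the average could also be weighted (e.g., by travel path through each layer), but the simple mean matches the first-order model used in this report.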
Data analysis

The dataset consists of 10,050 1C (vertical component only) stations (Figure 1) installed over an area
of 10 km2. However, to perform the first tests and develop an appropriate methodology, we decided to use
a subset of 1,440 receivers. In our previous studies, we noticed that an acquisition with a patch-array
configuration (Roux et al., 2014; Chmiel et al., 2016; Chmiel et al., 2018) opens new possibilities for event
detection and localization. The specificity of patch acquisition lies in its multi-scale approach: it ensures a
strong local spatial coherence of the ambient noise measured on a single patch array while its coverage
extends to the array scale.
First, it should be noted that considerable time was spent trying to detect "conventional" seismic events, in
other words, events that produce sufficient ground motions (signal-to-noise ratio) for the seismic phases
to be picked. For this purpose, we investigated traditional detection methods such as the short-term average
over long-term average (STA/LTA) ratio and kurtosis-based methods. We found that no events
were detected near the study area, but regional seismic events were recorded (such as an M6.9 event that
occurred in PNG, as shown in Milestone 1).
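A basic STA/LTA detector of the kind referred to above can be sketched as follows. The window lengths, sampling rate and synthetic trace are illustrative assumptions, not the parameters used on the survey data:

```python
import numpy as np

def sta_lta(signal, fs, sta_win=0.5, lta_win=10.0):
    """Classic STA/LTA trigger: ratio of the short-term to the long-term
    moving average of the squared signal (energy)."""
    n_sta = int(sta_win * fs)
    n_lta = int(lta_win * fs)
    energy = signal.astype(float) ** 2
    smooth = lambda n: np.convolve(energy, np.ones(n) / n, mode="same")
    sta, lta = smooth(n_sta), smooth(n_lta)
    return sta / np.maximum(lta, 1e-20)

# Synthetic noise with one impulsive event: the ratio spikes at the event
rng = np.random.default_rng(2)
fs = 500.0
trace = rng.standard_normal(int(60 * fs))   # 60 s of noise
trace[15000:15050] += 20.0                  # impulsive event at t = 30 s
ratio = sta_lta(trace, fs)
print(np.argmax(ratio) / fs)                # trigger time near t = 30 s
```

For weak tremor-like signals buried in noise, as in this survey, the ratio never rises significantly above its background level, which is why the array-based MFP approach is needed.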
This was to be expected, since there is no known driving force for seismic activity here and no
stimulation (such as injection) has been performed. This is in contrast to other studies where MFP
has been applied. Even though the background seismic activity is clearly very low, it does not mean that
there are no detectable and locatable seismic signals being generated at depth. Any geological discontinuity
can produce faint tremor-like signals that can be detected by using a dense array and a suitable processing
method. The detection and location of these types of subsurface noise signals can also be useful, as they can
illuminate pre-existing fractures in the reservoir.
We conducted exhaustive testing on synthetic data to confirm that the method and array geometry are
capable of detecting signals far below the background noise level. An example is shown in Figure 2, where
a synthetic signal is masked by noise nearly 1,000 times more energetic. Nevertheless, the MFP method is
still able to easily locate the correct source, because the random background noise does not constructively
interfere.
Figure 1: 10,050 1-C stations in Glenhaven.
Figure 2: Left: Synthetic example showing a signal that is superimposed by noise ~830 times as energetic. Right: calculating the Bartlett amplitude at each location shows that the method is easily capable of detecting the weak signal, whereas the random noise destructively interferes. The red dots indicate the locations of the patch centres.
To make sure that the detection algorithm works as intended, we implemented the method with a P-wave
velocity corresponding to events located at the surface. The first and most obvious test is the detection of
signals from the vibrating truck; however, the signal-to-noise ratio of this signal is much too strong to
confirm our ability to detect weak signals. Instead, we show that cars can be located frequently (see
Figure 3). Here we see a car moving on the highway to the north of the array. Based on the vehicle's
location over time, we can estimate that the vehicle is travelling at around 70 km/h. The ground motions
produced by the vehicle are far below the background noise level, but MFP methods can easily detect the
pressure-wave signals generated by the vehicle. Since the vehicle is outside the extent of the array, there is
location uncertainty associated with the locations of the signals from the car (which is why the signals do
not exactly co-locate with the road).
In order to benefit from a "patch-array approach", we divide the whole network in Glenhaven into a set of
small sub-arrays, so-called "patches". One patch consists of 40 stations (e.g., two lines of 20 stations). In
total, we constructed 272 patches (some patches have overlapping stations). Inside one patch, the inter-
station distance is 100 m in the y direction and 10 m in the x direction, which results in patch dimensions
of roughly 200 m × 200 m. This is similar to forming a 200-m grid of receiver bins across the entire
survey and then beamforming within each bin. Some modification is required to compensate for irregularities
of the network geometry, and this is accommodated internally in the grid, with the easternmost bins
retaining a full complement of receivers and an adjacent "wedge" of thin cells accommodating the changes
in network geometry. Figure 4 presents the division of the Glenhaven array into patches. Each patch is
represented in a different color. Previous studies showed that the patch array ensures the coherence of
subsurface signals, while decreasing the coherence of surface sources (Chmiel et al., 2018).

Figure 3: Detection of a car travelling on the highway to the north of the array. The car generates a strong signal that is easy to locate with this method when a velocity corresponding to the surface is used. By looking at the location of the signal over time, we can see that the car is travelling at roughly 70 km/h.
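The grouping of receivers into patches can be sketched as follows, for an idealized regular layout. The helper `make_patches` and the toy geometry are illustrative assumptions; the real Glenhaven geometry requires the wedge-cell adjustments described above:

```python
import numpy as np

def make_patches(coords, n_lines=2, stations_per_line=20):
    """Group receivers into sub-arrays ("patches") of n_lines x
    stations_per_line stations, assuming coords are ordered line by line
    (a simplifying assumption for this sketch)."""
    per_patch = n_lines * stations_per_line
    n_patches = len(coords) // per_patch
    return [coords[i * per_patch:(i + 1) * per_patch] for i in range(n_patches)]

# Hypothetical regular layout: lines 100 m apart in y, 10 m spacing in x
x = np.arange(0, 200, 10.0)           # 20 stations per line
y = np.arange(0, 400, 100.0)          # 4 lines
coords = np.array([(xi, yi) for yi in y for xi in x])
patches = make_patches(coords)
print(len(patches), len(patches[0]))  # 2 patches of 40 stations each
```

Each patch then contributes one locally coherent sub-array to the joint data vector used in the MFP processing.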
Next, we choose a subset of 36 patches, as presented in Figure 5. In total, we use 1,440 stations. This
configuration allows us to sub-sample the network while maintaining almost the same aperture as the
original array. It also reduces the computational requirements compared to the original network, while
still giving us access to the locally dense information and the phase coherence. For data processing based
on phase coherence, this configuration is more suitable than keeping 1,440 equally distributed stations.
The distance between neighboring patches is around 300 m in the y direction and 500 m in the x direction
on average.
Figure 4: The division of the Glenhaven array into “patch-array” configuration.
r = λF/D    (5)

where r is the transverse focal spot size, F the distance from the source position (focal length)
and D the total aperture of the array.

By using all of the patches together (D ≈ 3 km), we reduce the size of the focal spot down to r ≈ λ/2 = 75 m
(Eq. 5 with λ ≈ 150 m for body waves at 20 Hz and F ≈ 1.2 km), which corresponds to the optimal diffraction
limit in wave physics.
In this study, we do not have any strong a priori information that could provide a satisfactory starting
point for the minimization algorithm. Moreover, our first tests, based on classical detection algorithms such
as the short-term average over long-term average ratio (STA/LTA), showed that there seems to be no strong
seismic activity present in the area. In order to enhance the convergence of the algorithm towards a global
minimum, we: a) choose 7 starting points of the algorithm in the study area (Figure 6); and b) for each
iteration, take as a starting point the best point from the previous iteration. This means that each iteration
returns a position equal to or better than that of the previous one, so that the algorithm converges rapidly
towards a set of optimal localizations (Chaovalitwongse et al., 2008; Singer and Singer, 1999). If the output
point is located at the limit of the search zone (horizontal limits: green square in Figure 6; vertical limits:
750 m - 1250 m depth), the search restarts at the initial starting point. For each search, the maximum number
of iterations is set to 600.

Figure 5: The selection of the patch arrays used for the detection and localization during the first studied night.
In the following, we describe an automated procedure developed from the combination of the MFP
algorithm and the optimization algorithm in order to detect all of the microseismic events at depth in and
around the reservoir area during 7 hours of recorded data (between 23:00:00, 26/06/2015 and 06:00:00,
27/06/2015 local time).
The MFP is performed with overlapping time windows of 2.5 s, with an overlap of 40%. For each time
window, the CSDM is calculated from the multi-patch data vector that contains 36 patches (NP = 1,440
receivers in total). Using the patches in a joint manner, i.e., one data vector for all of the receivers, gives one
vector d(t) recorded over the NP receivers r1, ..., rNP (N elements within each of P patches). The
modeled Green's function obtained with the homogeneous velocity model (vp = 3000 m/s) is matched to the
CSDM using a Bartlett processor (Eq. 3). The detection and localization were performed in the (10-30) Hz
frequency band using an incoherent average over Q = 10 frequencies. This frequency band was chosen to
suppress surface waves, which dominate the signal up to 10 Hz.

Figure 6: The starting points of the minimization algorithm.
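The windowing and CSDM construction can be sketched as follows. The function name, sampling rate and synthetic data are illustrative assumptions; the phase-only normalization and the window/band parameters follow the text:

```python
import numpy as np

def csdm_windows(data, fs, win_s=2.5, overlap=0.4, band=(10.0, 30.0), n_freq=10):
    """Cross-spectral density matrices for overlapping time windows.

    data : array (n_receivers, n_samples) of continuous records
    Returns one CSDM per selected frequency per window; the Bartlett
    outputs are then averaged incoherently over the selected frequencies."""
    n_win = int(win_s * fs)
    step = int(n_win * (1.0 - overlap))
    freqs = np.fft.rfftfreq(n_win, d=1.0 / fs)
    # Pick n_freq frequencies spread across the band
    in_band = np.where((freqs >= band[0]) & (freqs <= band[1]))[0]
    sel = in_band[np.linspace(0, len(in_band) - 1, n_freq).astype(int)]
    out = []
    for start in range(0, data.shape[1] - n_win + 1, step):
        spec = np.fft.rfft(data[:, start:start + n_win], axis=1)[:, sel]
        spec /= np.abs(spec)  # phase-only (amplitude disregarded)
        # One rank-1 CSDM per frequency: K = d d^H
        out.append([np.outer(d, d.conj()) for d in spec.T])
    return out, freqs[sel]

rng = np.random.default_rng(3)
data = rng.standard_normal((36, int(10 * 500)))  # 36 channels, 10 s at 500 Hz
csdms, freqs = csdm_windows(data, 500.0)
print(len(csdms), len(freqs))  # 6 windows, 10 selected frequencies
```

In the real processing the data vector has 1,440 channels rather than the 36 used in this toy example, and the CSDM is matched against the modeled Green's functions with the Bartlett processor of Eq. 3.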
The optimization algorithm looks for a position (x0; y0; z0) that maximizes the Bartlett output within
specified limits. The horizontal limits are large: 4107 m × 3884 m (the green square in Figure 6). The
vertical limits (for which the resolution is poorer) are restricted to events at depths zc - 250 m < z0 < zc +
250 m, where zc = 1000 m (the depth of the Precipice Sandstone, the principal reservoir). In the following,
we use a positive-downwards depth convention (i.e., the receivers' depth (ground surface) is ~ -256 m).
The first criterion for rejecting false detections is purely geometrical. Here, we assume that the algorithm
has correctly converged if the optimal location (x0; y0; z0) is within the search zone (localizations at the
borders are excluded).
In order to correctly interpret the spatial and temporal MFP outputs, all of the localizations are first
projected onto a regular grid. Each cell of the grid has dimensions of 20 m × 20 m × 5 m. Note that this
projection also partially corrects possible misfits in location due to the use of the homogeneous velocity
model and the absence of a source mechanism correction. Figure 7 shows all of the MFP outputs over the
recording period summed in each cell of the grid (horizontal view; vertical slice).
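This projection can be sketched with NumPy's `histogramdd`. The grid origin, extent and synthetic localizations below are illustrative assumptions; the 20 m × 20 m × 5 m cell size follows the text:

```python
import numpy as np

def grid_project(locs, amps, cell=(20.0, 20.0, 5.0), origin=(0.0, 0.0, 750.0),
                 shape=(200, 200, 100)):
    """Sum MFP output amplitudes of all localizations into a regular grid
    with 20 m x 20 m x 5 m cells (hypothetical origin and extent)."""
    edges = [origin[i] + np.arange(shape[i] + 1) * cell[i] for i in range(3)]
    grid, _ = np.histogramdd(locs, bins=edges, weights=amps)
    return grid

# Synthetic localizations and Bartlett amplitudes inside the search zone
rng = np.random.default_rng(4)
locs = np.column_stack([rng.uniform(0, 4000, 500),
                        rng.uniform(0, 4000, 500),
                        rng.uniform(750, 1250, 500)])
amps = rng.uniform(0.001, 0.01, 500)
grid = grid_project(locs, amps)
print(grid.shape)                              # (200, 200, 100)
print(np.isclose(grid.sum(), amps.sum()))      # no amplitude is lost
```

Summing amplitudes per cell, rather than simply counting detections, preserves the relative weighting of stronger detections in the final maps.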
Figure 7 reveals a few areas of higher downhole activity. We are looking for clustering of events that
might reveal the positions of faults or fractures at the depth of the reservoir.
Figure 8 shows the normalized distribution of the amplitudes of all of the MFP detections in the (10-30) Hz
frequency band. There are single detections with a strong Bartlett amplitude (up to 0.8), but for the sake
of representation, Figure 8 is limited to Bartlett output values up to 0.02. The weak Bartlett values are
mostly due to: (1) low signal-to-noise ratios of weak microseismic events in each 2.5-s selected time
window; (2) low spatial coherence of weak microseismic events over the large number of receivers spread
over the 36 patches; and (3) the combination of the MFP outputs over a large frequency band with different
levels of coherence. The MFP amplitude distribution (Figure 8) can be used to further improve the final
MFP localization image by selecting detections above a certain output threshold.
In order to do so, we fit a theoretical Rayleigh distribution (Strutt, 1945) to the experimental
distribution. The Rayleigh distribution is an example of a continuous, non-Gaussian distribution. It was
introduced for the analysis of the interference of harmonic oscillations with random phases. We use a two-
parameter Rayleigh distribution (Khan et al., 2010), whose density function is given by:

f(x; γ, σ) = ((x - γ)/σ^2) exp(-(x - γ)^2/(2σ^2)) for x ≥ γ,

where γ is the location parameter and σ the scale parameter.

Figure 7: Detection and localization of microseismic events using 36 patches with a homogeneous velocity model and the Bartlett MFP algorithm. Horizontal map and two vertical cross-sections. The MFP outputs over the time period are summed in each cell of the grid with 20 m × 20 m × 5 m unit size.
By using the mean of the fitted distribution as a threshold, we eliminate 38% of weak detections and 1%
of unusually strong detections, which results in the updated MFP detection and localization maps presented
in Figure 10. Figure 9 shows the theoretical cumulative Rayleigh distribution fitted to the experimental
distribution. We keep 61% of the measurements, with MFP amplitudes > 0.0026 and < 0.006. Moreover,
we choose to represent only cells with more than 4 detections.
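The fitting and thresholding step can be sketched with SciPy. The amplitudes below are synthetic; the 38%/1% and 61% figures quoted in the text come from the real data and are not reproduced by this toy example:

```python
import numpy as np
from scipy.stats import rayleigh

# Synthetic stand-in for the MFP amplitude distribution: a two-parameter
# (location + scale) Rayleigh sample (illustrative parameters only)
rng = np.random.default_rng(5)
amps = rayleigh.rvs(loc=0.001, scale=0.002, size=20000, random_state=rng)

# Fit location and scale, then threshold on the fitted mean and drop
# the strongest 1% of detections, mirroring the procedure in the text
loc, scale = rayleigh.fit(amps)
low = rayleigh.mean(loc=loc, scale=scale)         # lower threshold
high = rayleigh.ppf(0.99, loc=loc, scale=scale)   # 99th percentile cut-off

kept = amps[(amps > low) & (amps < high)]
print(round(len(kept) / len(amps), 2))  # fraction of detections retained
```

Selecting detections between the fitted mean and the 99th percentile suppresses both the diffuse weak background and the rare outliers before the maps are rebuilt.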
Note that, as is classical with MFP algorithms, the length of the selected time window (2.5 s) is
large compared to the duration of an actual microseismic event. In general, microseismic events are
impulsive signals in time, with durations of approximately 0.1 s. In the present
case, the choice of 2.5 s as the unit time of the MFP processing is a compromise between the
prohibitive computing time associated with smaller time windows and the unsatisfactory signal-to-noise
ratio of longer time windows. By using 2.5-s time windows in the MFP processing, we can localize
tremor-like incoherent signals whose long-duration shape is very different from that of impulsive coherent
microseismic events. This means that clusters of actual, point-like microseismic events can be detected,
and also other downhole signals (for example long-period, long-duration seismic events; Das and Zoback,
2013).
Figure 8: The experimental histogram of MFP output amplitudes (in green) with the fitted theoretical Rayleigh distribution (in red). The mode (Bartlett MFP output = 0.0026) and the 99th percentile are presented in blue. By using the mean of the fitted distribution as a threshold, we eliminate 38% of weak detections and 1% of unusually strong detections; the updated results are presented in Figure 10.
Figure 9: The theoretical cumulative Rayleigh distribution fitted to the experimental distribution.
In order to focus on zones of clustering, we perform additional processing of the results. We exclude cells that do not show clustering, i.e., cells with no neighbouring detections. Next, we smooth the image and represent the contours of the detection and localization maps. The resulting image of the zone is presented in Figure 11.
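A minimal sketch of this post-processing, assuming the detections have already been binned into a 2-D grid of counts (the grid size, cluster positions, and smoothing width below are made up for illustration):

```python
# Drop isolated cells (no neighbouring detections), then smooth the map
# before contouring. Grid and counts are synthetic.
import numpy as np
from scipy import ndimage

counts = np.zeros((20, 20))
counts[5:8, 5:8] = 5     # a small cluster of detections
counts[15, 15] = 6       # an isolated detection

# Count occupied 8-neighbours of every cell.
occupied = (counts > 0).astype(int)
kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
neighbours = ndimage.convolve(occupied, kernel, mode="constant", cval=0)

# Keep a cell only if it has at least one occupied neighbour.
kept = np.where((occupied == 1) & (neighbours > 0), counts, 0.0)

# Gaussian smoothing; contours would then be drawn from `smoothed`.
smoothed = ndimage.gaussian_filter(kept, sigma=1.0)
print(kept[15, 15], kept[6, 6])   # isolated cell removed, cluster kept
```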
Figure 10: Updated detection and localization maps (horizontal maps and vertical cross-sections) for microseismic events after the use of a detection threshold and the exclusion of cells with a single detection. To be compared with the original maps in Figure 6.
We observe a few zones where detections cluster, which might indicate possible fractures in the reservoir. The biggest spot, in the northern part of the area, corresponds to the position of one of the wellheads: Polaris 223 (Figure 12). There are at least two possible explanations: a) water drainage from the shallow layers causes a drop in pore pressure and changes the stress field in the reservoir, leading to the creation of fractures; b) the pump is active during the first night and generates body waves that are then reflected/refracted and detected by our algorithm.
However, we still observe some other zones of clustering. The detected events cluster in space and seem to reveal the shape of possible structures at depth. Moreover, the high sensitivity of MFP allows us to obtain the temporal evolution of the microseismic activity. In Figure 13, we present the temporal evolution of the MFP amplitudes in one of the noise zones (marked by a red square in Figure 12 and presented in Figure 14a). The curve was filtered with a zero-phase filter over a 30-s window. We observe an increase in the MFP amplitudes (i.e., an increase in spatial coherence of the received signals) between 03:00 and 05:00 on 27/06/2015 (the 4th to 6th hour of the monitoring).
Figure 11: Interpretation of the detection and localization maps using the contour function.
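The zero-phase smoothing of the amplitude curve can be sketched as below; interpreting the "30-s window" as a 12-sample moving average (one MFP value per 2.5-s window) and the synthetic time series are both assumptions for illustration.

```python
# Zero-phase smoothing of an MFP-amplitude time series: a moving average
# applied forward and backward (filtfilt) leaves no phase lag.
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
t = np.arange(0.0, 8 * 3600.0, 2.5)      # one MFP amplitude per 2.5-s window
amp = 0.003 + 0.0005 * rng.standard_normal(t.size)
amp[t > 4 * 3600] += 0.001               # synthetic rise after the 4th hour

win = np.ones(12) / 12.0                 # 12 samples x 2.5 s = 30-s window
smooth = signal.filtfilt(win, [1.0], amp)
```

Applying the same filter forward and backward doubles the effective smoothing but cancels the phase delay, so the timing of the amplitude increase is preserved.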
Figure 12: Results of detection and localization using MFP superimposed on the aerial photo of the area. The red rectangle marks a zone of microseismic activity, presented in Figure 14.
Figure 14 shows an example of a detected event in the analysed zone (Figure 14a). The position of the detection is marked with a pink star in Figure 14a (MFP amplitude = 0.0057). Figure 14b presents the corresponding seismic signal recorded on 201 receivers located in the proximity of the detection (the receivers are marked with red dots in Figure 14a). The signal is filtered and whitened in the 10-30 Hz frequency band. The traces are aligned and stacked using propagation times calculated within the homogeneous velocity model (3000 m/s) with respect to the position found with MFP. The stack of the traces reveals two possible candidates, at 0.5 s (in purple) and 1.7 s (in blue). This analysis confirms that we detect coherent events buried in noise and illustrates the weak nature of the signals we are detecting and locating.
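The align-and-stack check can be sketched as follows under the homogeneous 3000 m/s model; the receiver layout, sampling interval, noise level, and event time are synthetic assumptions, not the survey values.

```python
# Align traces by modelled travel time from the MFP position, then stack;
# a weak coherent arrival emerges from the noise. All data are synthetic.
import numpy as np

v, dt = 3000.0, 0.004        # homogeneous velocity model; assumed sampling
n, nt = 201, 1000            # receivers and samples per trace

rng = np.random.default_rng(0)
receivers = rng.uniform(-500.0, 500.0, size=(n, 2))   # surface positions (m)
source = np.array([0.0, 0.0, 1500.0])                 # MFP-located position

# Travel time from the located position to each receiver.
dist = np.sqrt(np.sum((receivers - source[:2]) ** 2, axis=1) + source[2] ** 2)
tt = dist / v

# Synthetic traces: unit noise plus a weak spike at the predicted arrival.
onset = 0.5                                           # event time in window (s)
traces = rng.normal(0.0, 1.0, size=(n, nt))
for i in range(n):
    traces[i, int((onset + tt[i]) / dt)] += 3.0

# Shift each trace back by its modelled travel time, then stack.
aligned = np.array([np.roll(traces[i], -int(tt[i] / dt)) for i in range(n)])
stack = aligned.mean(axis=0)
print(round(np.argmax(np.abs(stack)) * dt, 2), "s")   # peak near 0.5 s
```

On a single trace the spike is far below the noise level; averaging 201 moveout-corrected traces suppresses the incoherent noise by roughly a factor of 14 and makes the arrival visible, which is the same mechanism the MFP stack exploits.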
Figure 13: The temporal evolution curve of the noise zone (marked as a red rectangle in Figure 12) in the 10-30 Hz frequency band.
Figure 14: (a) Zoom on the analysed microseismic activity zone and detection. (b) Seismic signal recorded on a set of 201 receivers (red dots in (a)). The signals are aligned and stacked using propagation times calculated within the homogeneous velocity model with respect to the position found with MFP.
Interpretation
Interpretations are reserved for the final report, after a realistic 3D velocity model has been implemented and all the data have been processed.
Conclusions
This report provided a detailed blueprint for processing continuous seismic data recorded on a dense array to detect and locate microseismic events for CCS site characterisation. First, we used conventional triggering algorithms to search for clear and obvious seismic events. No seismic events were detected during the study period, which indicates that the area is not very seismically active. After confirming this, we used matched-field processing to search for subsurface ambient seismic noise sources.
The processing focussed on data recorded during the first night of the seismic survey, used a sub-set of 1,440 receivers, and used a simple homogeneous velocity model representative of the average velocity of signals travelling from the reservoir to the surface. This simple velocity model enabled us to test various methods and define the best strategy to achieve reliable detections. We will then apply a more reliable horizontally and vertically varying velocity model for the analysis and interpretation of the entire dataset, which will be presented in the final report.
This simple model causes some loss of location accuracy and detection sensitivity; however, the results indicate that clusters of subsurface ambient seismic noise sources are present in the survey area. These clusters could indicate pre-existing faults or fractures.
If pre-existing faults/fractures are present in the survey area, it will be important to take these into consideration for the modelling of CO2 plume migration and for determining the most stable injection location. However, at this stage it is too early to interpret the findings.
The next steps will consist of: 1. processing the four remaining nights; 2. using the entire array for the detection and localisation of events; 3. using at least a 1D velocity model (depth-varying and laterally homogeneous). The results for the entire dataset will be shown and interpreted in the final report.
Bibliography
Allen, R., 1982, Automatic phase pickers: Their present use and future prospects: Bulletin of the Seismological Society of America, 72 (6B), S225-S242.
Arts, R., O. Eiken, R.A. Chadwick, P. Zweigel, L. Van Der Meer, B. Zinszner, 2004.
Monitoring of CO2 injected at Sleipner using time-lapse seismic data, Energy, 29, 1383–1392.
Arts, R., R.A. Chadwick, O. Eiken, S. Thibeau, S. Nooner, 2008, Ten years' experience of
monitoring CO2 injection in the Utsira Sand at Sleipner, offshore Norway, First Break, 26, 65–
72.
Bai, C.-Y., and B. Kennett, 2001, Journal of Seismology, 5, 217.
https://doi.org/10.1023/A:1011436421196
Born, M., and E. Wolf, 1999, Principles of Optics: Electromagnetic Theory of Propagation,
Interference and Diffraction of Light (7th Edition). Cambridge University Press.
Carcione, J.M., S. Picotti, D. Gei, G. Rossi, 2006, Physics and seismic modeling for monitoring
CO2 storage. Pure Appl. Geophys., 163, 175–207.
Chambers, K., J. M. Kendall, and O. Barkved, 2010, Investigation of induced microseismicity
at Valhall using the life of field seismic array: The Leading Edge, 29, 290–295. doi:
10.1190/1.3353725.
Chmiel, M., P. Roux, and T. Bardainne, 2016, Extraction of phase and group velocities from
ambient surface noise in a patch-array configuration: Geophysics, 81, no. 6, KS231-KS240.
Chmiel, M., P. Roux, and T. Bardainne, 2018, High-sensitivity Microseismic Monitoring:
automatic detection and localization using Matched-Field Processing and dense patch arrays:
Geophysics, in submission.
Corciulo, M., P. Roux, M. Campillo, D. Dubucq, and W. A. Kuperman, 2012, Multiscale
matched-field processing for noise-source localization in exploration geophysics: Geophysics, 77,
no. 5, KS33–KS41, doi: 10.1190/geo2011-0438.1.
Cros, E., P. Roux, J. Vandemeulebrouck, and S. Kedar, 2011, Locating hydrothermal acoustic sources at Old Faithful Geyser using matched field processing: Geophysical Journal International, 187, 385-393, doi: 10.1111/gji.2011.187.
Das, I., and M. D. Zoback, 2013, Long-period, long-duration seismic events during hydraulic stimulation of shale and tight-gas reservoirs — part 1: Waveform characteristics: Geophysics, 78, no. 6, KS107-KS118.
Deichmann, N., and M. Garcia-Fernandez, 1992, Rupture Geometry from High-Precision
Relative Hypocentre Locations of Microearthquake Clusters: Geophysical Journal International,
110, 501–517.
Drew, J., D. Leslie, P. Armstrong, and G. Michaud, 2005, Automated microseismic event detection and location by continuous spatial mapping: 2005 SPE Annual Technical Conference and Exhibition, Dallas, USA, SPE95513.
Ellsworth, W., 2013, Injection-Induced Earthquakes: Science, 341, no. 6142, 1225942, doi: 10.1126/science.1225942.
Evernden, J. F., 1969a, Precision of epicenters obtained by small numbers of world-wide stations: Bulletin of the Seismological Society of America, 59, 1365-1398.
Evernden, J. F., 1969b, Identification of earthquakes and explosions by use of teleseismic data: Journal of Geophysical Research, 74, 3828-3856.
Fink, M., D. Cassereau, A. Derode, C. Prada, P. Roux, M. Tanter, J.-L. Thomas, and F. Wu, 2000, Time-reversed acoustics: Reports on Progress in Physics, 63(12).
Gajewski, D., D. Anikiev, B. Kashtan, E. Tessmer, and C. Vanelle, 2007, Localization of seismic events by diffraction stacking: 77th SEG meeting, San Antonio, USA, Expanded Abstracts, 1287-1291.
Gharti, H. N., V. Oye, M. Roth, and D. Kühn, 2010, Automated microearthquake location using envelope stacking and robust global optimization: Geophysics, 75(4), MA27-MA46.
Gharti, H. N., V. Oye, D. Kühn, and P. Zhao, 2011, Simultaneous microearthquake location and moment-tensor estimation using time-reversal imaging: 81st SEG meeting, San Antonio, USA, 1632-1637.
Got, J.-L., J. Fréchet, and F. W. Klein, 1994, Deep fault plane geometry inferred from multiplet relative relocation beneath the south flank of Kilauea: Journal of Geophysical Research, 99, 15375-15386.
Grechka, V., A. De La Pena, E. Schissele-Rebel, E. Auger, and P.-F. Roux, 2015, Relative
location of microseismicity: Geophysics, 80, no. 6, WC1–WC9, doi: 10.1190/geo2014-0617.1.
Grigoli, F., S. Cesca, L. Krieger, M. Kriegerowski, S Gammaldi, J. Horalek, E. Priolo, T.
Dahm, 2016, Automated microseismic event location using Master-Event Waveform Stacking:
Scientific Reports, Nature Publishing Group, 6, 25744, doi: 10.1038/srep25744.
Jensen, F. B., W. A. Kuperman, M. Porter, and H. Schmidt, 2011, Computational Ocean Acoustics: Springer.
Khan, H. M., S. B. Provost, and A. Singh, 2010, Predictive inference from a two-parameter Rayleigh life model given a doubly censored sample: Communications in Statistics—Theory and Methods, 39(7), 1237-1246.
Kao, H., and S.-J. Shan, 2004, The Source-Scanning Algorithm: mapping the distribution of seismic sources in time and space: Geophysical Journal International, 157(2), 589-594. https://doi.org/10.1111/j.1365-246X.2004.02276.x
Kawakatsu, H., and J. Montagner, 2008, Time-reversal seismic-source imaging and moment-tensor inversion: Geophysical Journal International, 175, 686-688. doi: 10.1111/j.1365-246X.2008.03926.x
Kuperman, W. A., and G. Turek, 1997, Matched field acoustics: Mechanical Systems and
Signal Processing, 11, 141–148, doi: 10.1006/mssp.1996.0066.
Kuperman, W. A., W. S. Hodgkiss, H. C. Song, et al., 1998, Phase conjugation in the ocean: Experimental demonstration of an acoustic time-reversal mirror: Journal of the Acoustical Society of America, 103, 25-40.
Lacazette, A., J. Vermilye, S. Fereja, Ch. Sicking, 2013, Ambient Fracture Imaging: A New
Passive Seismic Method, Unconventional Resources Technology Conference held in Denver,
Colorado, USA, 12-14 August 2013.
Lagarias, J. C., J. A. Reeds, M. H. Wright, and P. E. Wright, 1998, Convergence properties of the Nelder-Mead simplex method in low dimensions: SIAM Journal on Optimization, 9(1), 112-147.
Larmat, C., J. Tromp, Q. Liu, and J.-P. Montagner, 2008, Time reversal location of glacial earthquakes: Journal of Geophysical Research: Solid Earth.
Liao, Y.-C., H. Kao, A. Rosenberger, S.-K. Hsu, and B.-S. Huang, 2012, Delineating complex spatiotemporal distribution of earthquake aftershocks: An improved source-scanning algorithm: Geophysical Journal International, 189(3), 1753-1770.
Lepore, S., and R. Ghose, 2015, Carbon capture and storage reservoir properties from poroelastic inversion: A numerical evaluation: Journal of Applied Geophysics, 122, 181-191.
Maxwell, S. C., M. Jones, D. Cho, and M. Norton, 2011, Understanding hydraulic fracture variability through integration of microseismicity and reservoir characterization: EAGE Extended Abstract, Third Passive Seismic Workshop - Actively Passive!, 27-30 March 2011, Athens, Greece, PAS03.
McMechan, G. A., 1983, Migration by extrapolation of time-dependent boundary values: Geophysical Prospecting, 31, 413-420.
Nelder, J. A., and R. Mead, 1965, A simplex method for function minimization: The Computer Journal, 7(4), 308-313. https://doi.org/10.1093/comjnl/7.4.308
Raleigh, C. B., J. H. Healy, and J. D. Bredehoeft, 1976, An experiment in earthquake control at Rangely, Colorado: Science, 191(4233), 1230-1237.
Roux, P.-F., J. Kostadinovic, T. Bardainne, E. Rebel, M. Chmiel, M. Van Parys, R. Macault,
and L. Pignot, 2014, Increasing the accuracy of microseismic monitoring using surface patch
arrays and a novel processing approach: First Break, 32, 95–101.
Ruigrok, E., S. Gibbons, and K. Wapenaar, 2017, Cross-correlation beamforming: Journal of Seismology, 21(3), 495-508.
Saragiotis, C. D., L. J. Hadjileontiadis, and S. M. Panas, 2002, PAI-S/K: A robust automatic seismic P phase arrival identification scheme: IEEE Transactions on Geoscience and Remote Sensing, 40, 1395-1404.
Steiner, B., E. H. Saenger, and S. M. Schmalholz, 2008, Time reverse modeling of low-
frequency microtremors: Application to hydrocarbon reservoir localization, Geophysical
Research Letters, 35, L03307, doi: 10.1029/2007GL032097.
Strutt, J. W. (Lord Rayleigh), 1945, The Theory of Sound, Vol. 1, 2nd ed., Dover Publications,
New York.
Trojanowski, J., and Eisner, L., 2017, Comparison of migration‐based location and detection
methods for microseismic events, Geophysical Prospecting, 65, 47–63.
Vandemeulebrouck, J., P. Roux, and E. Cros, 2013, The plumbing of Old Faithful geyser revealed by hydrothermal tremor: Geophysical Research Letters, 40, 1989-1993.
Wang, J., D. Templeton, and D. Harris, 2012, A New Method for Event Detection and Location
- Matched Field Processing Application to the Salton Sea Geothermal Field, Search and Discovery
Article #40946, Adapted from extended abstract prepared in conjunction with oral presentation at
AAPG Annual Convention and Exhibition, Long Beach, California, April 22-25, 2012.
Wang, P., M. J. Small, W. Harbert, and M. Pozzi, 2016, A Bayesian approach for assessing seismic transitions associated with wastewater injections: Bulletin of the Seismological Society of America, 106, 832-845, doi: 10.1785/0120150200.
Xue, Z., and T. Ohsumi, 2004, Seismic wave monitoring of CO2 migration in water-saturated porous sandstone: Exploration Geophysics, 35, 25-32.