BERT Tutorial
In this tutorial we demonstrate how to do ERT inversion using the software package BERT. Some small but instructive examples, all real field cases, are discussed to show how the different options in the configuration file can be used to yield case-specific inversion results.
The examples start from 2d inversion of surface measurements with and without topography. The same is later demonstrated in 3d, where mesh generation becomes more of an issue. We show how to include structural information and how buried electrodes are handled.
Also, measurements on closed objects, such as trees, humans, soil columns and model tanks, are shown. Finally we show how to handle time-lapse resistivity measurements. The user is invited to follow along by processing the data in the examples directory.
Contents
1. Introduction
1.1. BERT 1 and 2, DCFEMLib, GIMLi - history and names
1.2. Options and commands
3. Other 2D geometries
3.1. 2D ERT with topography
3.2. 2D cross-hole data
3.3. Closed 2d geometry - tree tomography
4. 3D geometries
4.1. Flat-earth surface measurements
4.2. 3D Topography
4.3. 3D-Crosshole measurements
4.4. Closed 3D geometries
4.4.1. 3d model tanks
6. Time-lapse ERT
6.1. Strategies
6.2. Processing and options
6.3. Crosshole timelapse measurements
6.4. Soil column measurements
E. User stories/HowTos
E.1. How to do 2d inversion with external topographic information
E.2. 2D inversion with given 3d coordinates
E.3. User-defined regular 2d mesh
E.4. How to invert 1d resistivity in a 2D domain
E.5. Inversion on user-defined regular 3d mesh
E.6. 3D inversion and visualization of 2d profiles in a 3d topography
E.7. Use a Hydrus3D mesh of a soil column for forward calculation
E.8. How to use Hydrus2D simulations for synthetic data inversion
E.9. Simulate CEM ring electrodes in tilted boreholes
E.10. How to define arbitrary geometries, boundaries and regions using an external mesh generator (Gmsh)
1. Introduction
1.1. BERT 1 and 2, DCFEMLib, GIMLi - history and names
Direct current electrical measurements are used in a wide range of applications such as medical
imaging, geophysical surface or subsurface measurements or the investigation of trees and soil
probes. This inverse problem is known under the terms ERT (electrical resistivity tomography), ERI (... imaging), EIT (... impedance tomography) or DC resistivity inversion. The aim of our software is to provide an extremely flexible solution for all of these.
Its main advantage is the possibility to work on arbitrary geometries. Therefore we decided to consistently use unstructured¹ finite element meshes for the forward calculation as well as for the parameter identification. By the use of triangles (2d) and tetrahedra (3d) we can follow any prior geometry, probe shape or structural information we have about the subsurface. Due to this generality we decided to call it BERT - Boundless Electrical Resistivity Tomography. The name BERT can be associated with the following software generations.
BERT version 1 (released 2009-2013) was part of the C++ library DCFEMLib - Direct Current Finite Element Method Library, which is still contained in the BERT source code. Parts of the DCFEMLib-based tools for mesh generation are still in use but will be replaced by Python tools soon. From version 2 on (starting 2010), BERT is based on GIMLi - Generalized Inversion and Modelling Library, a multi-method C++/Python library for various geophysical methods. Please have a look at http://www.pygimli.org for details on how to retrieve, build and use the code. GIMLi and BERT are licensed under the GPL (GNU General Public License)². Our vision is to give back to the academic community what we have learned, without letting companies exploit it, which is why we have never invested in a graphical user interface.
The basic theory and technology of BERT are described by Günther et al. (2006) and are based on the finite element modelling techniques discussed by Rücker et al. (2006). First developed for the usual point electrodes, the finite element modelling was later extended to arbitrary electrode shapes using the complete electrode model (Rücker and Günther, 2011) and to long electrodes using the shunt electrode model (Ronczka et al., 2015). Generally, the inversion is based on the smoothness-constrained Gauss-Newton scheme described by Günther et al. (2006). It was later formulated as a flexible minimization and regularization scheme described in detail by Rücker (2010).
¹ Unstructured or irregular means that there is no order or rule of shape for the elements; regular discretizations with quadrangles, hexahedra or prisms are just special variants.
² See https://www.gnu.org/licenses/gpl.html for license conditions.
The default inversion scheme is a triple-grid scheme. Most inversion algorithms use a dual-grid scheme, i.e. the forward calculation is carried out on a mesh that is finer than the inversion mesh. We added a third grid in order to use a secondary field approach and thus obtain a very fast forward calculation. Figure 1 shows the three grids: on a coarse, resolution-dependent grid the parameters are defined. On a globally refined and prolonged mesh the forward calculation is done. And a very fine primary mesh is used to calculate the primary potentials (for a homogeneous subsurface), but only once, directly after mesh creation.
The overall scheme is visualised in Figure 2. It starts with the generation of the three meshes. Then the primary potentials are calculated and interpolated onto the secondary mesh. From this, geometric factors are derived, yielding the apparent resistivity, and the sensitivity matrix is created for the homogeneous case. Finally the actual inversion is carried out: an inverse sub-problem is used to update the resistivity model, a forward calculation is carried out and checked against the data. The latter is repeated until the data are fitted well or the process stagnates.
BERT is available under Linux and Windows, either as pre-compiled binaries or as self-compiled code. The paths to the binaries and the library must be known, e.g. by setting
$ export PATH=$PATH:/c/Software/BERT
for Windows, or under Linux
$ export PATH=$PATH:/path/to/BERT/bin
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/BERT/lib
BERT comes along with the Python modules pyGIMLi and pyBERT that need to be found by setting the variable PYTHONPATH to their location (without the pygimli or pybert at the end!). All variables can be set permanently in a .bashrc file, either for all users in the directory where BERT resides, or for the current user in the home directory.
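A minimal .bashrc sketch, assuming BERT was unpacked to /path/to/BERT and the Python modules live in /path/to/BERT/python (both paths are placeholders to adapt):
# hypothetical install location - adapt to your system
export PATH=$PATH:/path/to/BERT/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/BERT/lib
# directory that CONTAINS the pygimli and pybert packages, not the packages themselves
export PYTHONPATH=$PYTHONPATH:/path/to/BERT/python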
Figure 2: The BERT inversion scheme, from Günther et al. (2006): the geometrical information (topography, electrodes, data) is used to prepare the actual inversion (constraints, sensitivity, forward check, solution).
1.2. Options and commands
The inversion is controlled by a configuration file consisting of KEY=value lines; as in bash, everything behind the #-sign is ignored and can be used for comments. Note that the keys must be uppercase⁶. There is only one mandatory key: the DATAFILE key holding the name of the data file. Other important keys are DIMENSION (2 or 3) and TOPOGRAPHY (0 or 1, meaning false or true). We suggest creating a new directory for each problem, or also for different strategies to solve it. The data file must be in the unified resistivity.net format (see www.resistivity.net?unidata) or any of the supported instrument formats (see below, Python required!).
For a list of possible options (with default values) see Appendix D or call bert opts. However, only a few of them are frequently used⁷. In order to create a new project there are special commands for the individual tasks, namely bertNew2d, bertNew2dTopo, bertNew3d, bertNew3dTopo, bertNew2dCirc and bertNew3dCyl for the cases 2d/3d with or without topography and for circle/cylinder geometry. For example,
$ bertNew2d datafile.dat > bert.cfg
creates a new configuration file bert.cfg with the lines DATAFILE=datafile.dat, DIMENSION=2 and TOPOGRAPHY=0, but also adds a lot of possible options for this case with an explanation, most of them inactive/commented out by a # character. All bertNew commands try to convert instrument data files with the endings tx0 (4-point light), txt (Resecs ASCII output), bin (Syscal binary file) and flw (Geotom data) to BERT input and add a .dat ending to the original file name. The user can now (or later) change the options and run single steps or the whole inversion by
$ bert cfgfile commands
where cfgfile is the configuration file and commands can consist of the following:
all runs everything; this is probably the first step in most (at least small) cases
meshs just creates the meshes, to check them first (suggested for bigger problems)
nomeshs does everything else but the meshes (after a successful mesh generation)
pot calculates the primary potentials and interpolates them onto the secondary mesh (this was formerly divided into the two jobs primPot and interpolate)
filter filters the input data using data limits, adds the geometric factor and estimates errors
calc only filtering and inversion (no change of mesh), e.g. after changing inversion options
save saves all important results (model & response for each iteration, log file, cfg file, meshes) in a directory called result<date>_<time>
mrproper deletes all stuff except the input and result directories (frees disk space completely)
Plotting functions (2D only, with Python):
show shows the inversion result or another model vector (2D and 3D)
mkpdf generates a pdf of the resulting resistivity or another model vector (2D)
Note that these can be used in the main directory, also while the inversion is still running, or in a result directory.
⁶ There is only one exception: cMin/cMax for the colour scale. We might drop case-sensitivity.
⁷ For changing default options permanently, we suggest creating a file $HOME/.bertrc. Typical entries are, e.g., SENSMATMAXMEM=3000 (available memory in MB), or favourite mesh options.
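A typical session, using only the commands listed above (bert.cfg is assumed to already exist in the working directory), might look like this:
$ bert bert.cfg meshs     # generate the meshes and inspect them first
$ bert bert.cfg nomeshs   # primary potentials, filtering and inversion
$ bert bert.cfg show      # look at the resulting model
$ bert bert.cfg save      # archive model, response, logs and meshes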
DATAFILE=gallery.dat
DIMENSION=2
PARADX=0.25 # size (in electrode spacings) of cells at the surface
PARA2DQUALITY=34.0 # defines how fast the mesh is growing (33-fast,35-slow)
The resulting file bert.cfg holds the data file, the dimension, some default options and some
possible options that are deactivated by the hash. Before running an inversion we want to
have a look at the data by calling⁸
$ bert bert.cfg showdata
Figure 3 shows the data, i.e. the apparent resistivity pseudosection. DD denotes the dipole-dipole array and the number its spacing. You can also look at the errors by replacing showdata with showerror.
The simplest inversion run is started by
$ bert bert.cfg all
and comprises mesh generation and actual inversion. The output, which is also stored in the
file bert.log, ends with
4: Model: min = 69.271; max = 756.813
4: Response: min = 85.2701; max = 364.752
4: rms/rrms(data, Response) = 3.8314/1.71759%
4: chi^2(data, Response, error, log) = 1.7069
4: Phi = 198.001+19.2525*20=583.052
In each iteration (number at the beginning) the minimum and maximum values are given for the model and its forward response. Additionally, the inversion reports several measures of the data fit: an absolute root-mean-square misfit (rms) in Ωm, a relative rms (rrms), the error-weighted chi-square fit χ² = Φd/N and the objective function Φ = Φd + λΦm, consisting of the data misfit Φd = Σi ((di − fi(m))/εi)² plus the regularization parameter λ (here 20) times the model roughness Φm⁹.
Obviously the data are fit with a good rrms of about 1.7%, which is however not completely within the error model (χ² = 1 would mean a perfect fit).
$ bert bert.cfg show
shows the inversion result displayed in Figure 4 using default options. It clearly images the cavity at about 20 m and another resistive anomaly whose origin is not completely clear from 2d measurements. See section 4.1 for the inversion of a 3d data set.
⁸ Plotting with any show command will only work if Python is installed (and found via the PATH environment variable or defined by PYTHON) and the modules pygimli and pybert are found via PYTHONPATH.
⁹ See Günther et al. (2006) for details and definitions.
The model shows a rising, well-conducting loamy overburden over a medium conductor, in which two distinct resistive bodies are embedded. The central one is the well-known mining gallery; the origin of the other is not completely clear. One can click into the model to get some information on a model cell. The colour scale is chosen automatically using 3% inter-percentile values (this can be changed by setting the variable INTERPERC) unless it is explicitly specified in the cfg file by the keywords cMin and cMax. The keyword USECOVERAGE (0 or 1, default 1) defines whether the coverage is used for alpha-shading the model. The model plot can be saved using
$ bert bert.cfg mkpdf
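The plot appearance can be steered by a few keys in the cfg file; a sketch with assumed example values (cMin/cMax in Ωm):
cMin=50        # lower colour scale limit (example value)
cMax=1000      # upper colour scale limit (example value)
INTERPERC=3    # inter-percentile value for the automatic colour scale
USECOVERAGE=1  # alpha-shade the model by coverage (default)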
A visual inspection of the data fit can be obtained by calling
$ bert bert.cfg showfit
We see hardly any visual difference between the measured and modelled data. However, the misfit plot shows some systematic layering, hinting that there might still be some information left in the data. Ideally, the misfit plot is an uncorrelated random distribution. To save the result for further use, we use the command
$ bert bert.cfg save
It saves all input files, meshes, log files and results (vectors, vtk and pdf files) to a result folder.
We can later go to this folder and do some visualization (or even start a new inversion). save also cleans the working directory from temporary files. If you want to clean up manually, call
$ bert bert.cfg clean (cleans only the working directory) or
$ bert bert.cfg veryclean (deletes also the mesh and potential directories).
$ bert bert.cfg pack even compresses all result folders.
Figure 5: Measured apparent resistivity (top), model response (center) and percentage data misfit (bottom).
Figure 6: Result for a regularization parameter LAMBDA of 200 (left) and 2 (right).
Figure 6 shows the results for the two regularization strengths. The one for λ = 200 is clearly over-smoothed and cannot fit the data appropriately (χ² = 7.6, rrms = 3.1%). The value λ = 2 can fit the data (it stops at χ² = 0.8, rrms = 1.2%) and could be an acceptable model as well. Even lower values will lead to oscillations in the model. By default, the regularization parameter remains constant; however, in some cases it can be beneficial to start with a higher value and to decrease it in every iteration (say by 20%) using LAMBDADECREASE=0.8. The regularization strength can also be optimized in two ways: one is to adjust it such that the data are fit exactly within error (χ² = 1) by using CHI1OPT=1. This is particularly interesting for synthetic data. Another option is to find the maximum curvature of the L-curve for a certain range of lambda values, as described by Günther et al. (2006), using LAMBDAOPT=1.
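The corresponding cfg entries, with the default LAMBDA=20 and the alternatives commented out, could look like this:
LAMBDA=20            # constant regularization strength (default)
#LAMBDADECREASE=0.8  # decrease lambda by 20% in every iteration
#CHI1OPT=1           # adjust lambda such that chi^2 = 1
#LAMBDAOPT=1         # L-curve optimization over a range of lambda values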
Note that LAMBDA controls the smoothness relative to the data misfit term, in which the data errors serve as weighting. Higher errors correspond to lower regularization and vice versa. Note that stacking errors from instruments are not appropriate measures of how well we can fit the data. A way to estimate errors is the analysis of reciprocal data, as explained by Udphuay et al. (2011) and references therein. In the absence of errors in the data file, an error estimate is made using a fixed percentage (INPUTERRLEVEL) and a voltage error (INPUTERRVOLTAGE). If the current is not in the file, a value of 100 mA is assumed. For different current strengths the voltage error has to be adapted. If there are errors in the file, they can be discarded in favour of a new error estimate by using OVERRIDEERROR=1.
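A sketch of the corresponding error settings; the percentage is the default from Appendix D, the voltage error of 100 µV is an assumed example value, and the resulting relative error is roughly INPUTERRLEVEL percent plus INPUTERRVOLTAGE divided by the measured voltage:
INPUTERRLEVEL=3          # relative error level in percent
INPUTERRVOLTAGE=100e-6   # absolute voltage error in V (here 100 µV, assumed)
#OVERRIDEERROR=1         # discard the errors given in the data file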
Note that an L2-norm (squared) minimization assumes a Gaussian distribution of the (error-weighted) misfit. In the case of significant outliers in the data one can use ROBUSTDATA=1, an L1-norm scheme based on iteratively reweighted least squares (IRLS) after Claerbout and Muir (1973). However, one can easily lose resolution, which is why it is not activated by default; it should be used after observing systematic outliers in the data.
In many cases the default regularization strength will provide a good start, provided the error estimate matches the data quality. We recommend having a look at the distribution of the misfit (Fig. 5c). In almost all surface measurements there is a need for anisotropic regularization, i.e. for a lower penalty on vertical contrasts due to the predominant layering (also supported by the lower vertical resolution of the method). The variable ZWEIGHT defines the relative weight for a purely vertical boundary in the model. For arbitrarily oriented boundaries (as they occur for triangular and tetrahedral discretizations) the weights are computed as described by Coscia et al. (2011).
Figure 7 shows the result for a vertical weight of 0.4, which probably improves the interpretation a bit. As a result, the layering in the misfit function disappeared. Note that reducing ZWEIGHT from the default value of 1 decreases the overall smoothness and can lead to over-fitting by small-scale anomalies. In these cases the LAMBDA value should be increased. In order to overcome the smooth transitions in the model, one can use BLOCKYMODEL=1, i.e. an L1-norm minimization scheme for the model. Just like the ROBUSTDATA option it uses IRLS, here for weighting the individual model gradients such that stronger contrasts can occur.
So far we used first-order smoothness constraints, i.e. the constraint type CONSTRAINTTYPE of 1 (default). One can also use second-order smoothness (2), i.e. the curvature of the model is minimized. Zeroth-order smoothness (0) means that only the norm of the model difference to the reference model is minimized. This can easily lead to strange oscillations in the model and is therefore only recommended if a very good starting model is known. Other possibilities comprise a mix of minimum-length constraints with first- or second-order smoothness using the values 10 and 20.
We observe two bodies in Figure 8: an inner one in red (the parameter domain with the marker 2), and a much bigger outer one with the marker 1, which is needed for the forward calculation. The depth of the parameter domain is by default determined automatically based on 1d sensitivity studies (by calling the command paradepth gallery.cfg), but can be adjusted using the key PARADEPTH. Sometimes it is advisable to increase the default value a bit. The value of PARABOUNDARY defines how far (in % of the electrode extension, default 5) the boundary lies outside of the electrodes (red dots). Between the electrodes there are auxiliary points (in black) that define the mesh refinement at the surface, which is controlled by the key PARADX. A value of 0.5 means one additional point (as in Fig. 8), 0.3 means two points. Lower values mean that two points are set to the left and right of each electrode (if EQUIDISTBOUNDARY=0). If EQUIDISTBOUNDARY=1 is set (default), 1/PARADX is rounded to a natural number and equidistant points are set. The standard value for PARADX is 0.25 for flat topography and 0.33 with topography.
The coarseness towards the boundary is controlled by the mesh quality PARA2DQUALITY, which denotes a minimum angle. The higher the quality, the more regular the triangles become and the higher the numerical accuracy, but at the cost of an increasing number of cells and thus run-time. In Triangle (Shewchuk, 1996) version 1.6, our favoured 2d mesh generator, the range goes from 25-30 (bad quality) to 34-35 (good quality). We use a default starting value of 34, but one can opt to increase it towards 35. Another way of avoiding a too coarse mesh is to limit the maximum cell size by setting PARAMAXCELLSIZE (in m²). Alternatively, the parameter PARAEDGELENGTH can be used to compute the area of a regular triangle that is then used as PARAMAXCELLSIZE. We recommend editing the cfg file and calling bert bert.cfg showmesh until the user is satisfied.
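A sketch of the mesh-related keys discussed above; the PARADEPTH and PARAMAXCELLSIZE values are made-up examples, the others are the defaults used in this section:
PARADX=0.25          # surface refinement in electrode spacings
PARA2DQUALITY=34     # minimum angle: 33 (coarse, fast) ... 35 (fine, slow)
#PARADEPTH=15        # depth of the parameter domain in m (example value)
#PARABOUNDARY=5      # lateral extension in % of the electrode spread (default)
#PARAMAXCELLSIZE=1   # maximum cell area in m2 (example value)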
In Figure 9 the resulting parameter meshes for different settings are displayed. Clearly, too low mesh qualities can lead to large and coarse triangles at the lower boundary; we recommend values between 34 and 34.6. Restricting the maximum cell size can help, but may also create artificially small triangles somewhere in the model. For the outer model part there is the switch BOUNDARY, measured in multiples of the electrode extension, i.e. a value of 5 means that in our case there is 5×40 m of space in the -x, +x and -z directions.
Figure 9: Meshes with different parameters: dx=0.25 & q=34 (top left, default), dx=0.25 & q=33 (top right), dx=0.25 & q=34.5 (center left), dx=0.25 & q=34, maxA=1 (center right), dx=0.5 & q=34 (bottom left), dx=0.5 & q=34.5, maxA=1 (bottom right).
Regular (quadrangle) meshes are also supported, although rectangles cannot easily follow arbitrary geometries or electrode positions, and particularly the outer boundary becomes inefficient with rectangles. Therefore the regular parameter meshes are surrounded by triangles. Regular meshes can easily be used by setting GRID=1 (Python+pygimli needed). In addition to PARADX and PARADEPTH, the key PARADZ can be used to define the first layer thickness (otherwise equal to PARADX). The key LAYERS defines the number of layers (by default 11), created in such a way that the layer thickness increases linearly. Figure 10 shows the result of the default grid options, which is very comparable to Figs. 6b or 7.
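A sketch of the grid-related keys; the PARADZ value is a made-up example:
GRID=1       # regular quadrangle parameter mesh (Python+pygimli needed)
#PARADZ=0.3  # thickness of the first layer (example value; default: PARADX)
#LAYERS=11   # number of layers, thickness increasing linearly (default 11)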
total chargeability representing the integral of the decay curve (in ms)
integral chargeability of a time window normalized by gate length & voltage (in mV/V)
3. Other 2D geometries
3.1. 2D ERT with topography
In 2d inversion, topography is easily integrated by setting the heights of the electrodes. All the rest should be done automatically; if necessary, additional electrodes must be inserted. However, rarely are all electrodes measured topographically; often it is sufficient to have a few points. Note that at the current stage BERT requires the topographical information in the first section of the file, not at the end. For this case we recommend the use of DC2dInvRes (Günther, 2007), which will roll the positions along the surface. For this task, use Data:Save Ohm file and specify whether the x values are along the measuring tape or true x coordinates.
Additionally, the key PRIMP2MESH (default 1) determines whether the primary potentials are computed with quadratic shape functions (P2 refinement). As stated by Rücker et al. (2006), the necessary refinement is about a/10 for a P2 mesh and a/100 for a P1 mesh. Usually the points at the surface are linearly interpolated; SPLINEBOUNDARY=1 forces a spline interpolation, which is useful for round geometries or smooth topography.
After computing the primary potentials, including their interpolation onto the secondary forward mesh (done by bert bert.cfg pot), there are two files, primaryPot/pot.ohm and primaryPot/pot.collect, describing the potentials for unit conductivity for the chosen array and for all electrode combinations, respectively. These are used to compute the geometric factor and the apparent resistivity (bert bert.cfg filter). By calling bert bert.cfg topoeff one can see the topographic effect (Rücker et al., 2006), i.e. the ratio of the geometric factors with and without topography, t = Gflat · utopo(σ = 1 S/m). bert bert.cfg showtopocorr additionally displays the apparent resistivity based on the flat and the topographic geometric factor (see Fig. 11). Several anomalies can be explained solely by topographical undulations. The resistivity distribution shows a conductive interior and a resistive hard pan, as discussed by Günther et al. (2006).
Figure 11: Raw apparent resistivity data (in Ωm), topographic effect and inversion result of the slag dump site.
We use PARADX to refine the model at the electrodes, adding a node between each pair of the 0.1 m separated electrodes by setting PARADX=0.05¹¹. Due to the layering, we use ZWEIGHT=0.1, a larger LAMBDA=100 and ROBUSTDATA=1. Figure 12 shows the obtained resistivity distribution at the very beginning of a tracer experiment.
In BERT 2, electrodes do not have to be mesh nodes any more. However, for surface measurements it almost always makes sense to use their positions as nodes when constructing the mesh. This is different in cross-hole geometries. Similar to the script polyFlatWorld there is a script polyFreeWorld that replaces the former in the cfg file.
Since there are too few constraints for Triangle, the resulting mesh is very coarse over the whole area. We could place some points to ensure locally small triangles and use a good mesh quality for slowly increasing edge lengths. Alternatively we can (globally) prescribe a maximum triangle area, e.g. by using the option PARAEDGELENGTH, from which the area of a regular triangle is computed. By setting this value to 0.05 we obtain a fine enough mesh independent of the electrode positions.
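A sketch of the corresponding cfg entries; the PARAEDGELENGTH value is the one used in this example, and the PARAGEOMETRY line assumes that polyFreeWorld simply replaces polyFlatWorld there:
PARAGEOMETRY=polyFreeWorld   # electrodes are not forced to be mesh nodes
PARAEDGELENGTH=0.05          # edge length of a regular triangle defining the maximum cell area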
The third way is to create a regular mesh with either GRID=1 or a user-defined Python script
(examples/inversion/2dhx/reg/mymesh.py). Figure 13 shows the two alternative results. In
general the resistivity distribution is very similar, i.e. the good conductor at depth and the resistor at the top left. However, there are differences in the small-scale aquifer anomalies.
Figure 13: Inversion results on alternative mesh types: free electrodes (left) and rectangular mesh (right).
¹¹ Note that, different from 2d surface measurements, PARADX is treated as an absolute value (in m) by polyFlatWorld.
3.3. Closed 2d geometry - tree tomography
ERT on a hollow lime tree
For 2d bodies the electrodes usually cover the whole boundary and the PLC can easily be formed by a closed polygon. For tree geometries a dedicated GUI named TreeBERT (formerly DC2dTree) was created, making it easy to process the data visually. EIT on trees has been successfully established to investigate tree decay (Martin and Günther, 2013). The example in examples/circle/tree was measured and provided by Niels Hoffmann, formerly HAWK Göttingen. It represents a lime tree, measured with 24 steel electrodes plugged into the bark. Dipole-dipole measurements were carried out using a Geotom instrument.
The configuration file reads as follows
DATAFILE=hollow_limetree.ohm
DIMENSION=2
TOPOGRAPHY=1 # activates the primary mesh
CYLINDER=1 # defines a closed geometry
SURFACESMOOTH=1 # makes a nicer surface
EQUIDISTBOUNDARY=1 # equidistant refinement
PARADX=0.2 # 5 segments between the electrodes
PARA2DQUALITY=34.8 # very good quality, almost the upper limit
SPLINEBOUNDARY=1 # round geometry
PRIMDX_R=0.001 # refinement of primary mesh in radial direction
LAMBDA=10 # regularisation strength
BLOCKYMODEL=1 # enhance contrasts by robust (L1) methods
For this case, the equidistant refinement, the use of splines and a high mesh quality ensure a nice mesh with a round boundary. The primary refinement is done in the radial direction. Additionally, we used the blocky (L1) model constraints in order to obtain a clearer contrast of the high-resistivity anomaly.
Figure 14: Tree cut (left), inversion result (center) and overlay (right).
After the measurements the tree was cut and revealed a cavity inside, caused by decay. Figure 14 shows a photograph, the inversion result and an overlay of both. Clearly the cavity is marked by high resistivity that is in almost perfect accordance with the photo.
ERT and IP on a healthy oak
The data provided in examples/inversion/circle/oak/ are described by Martin and Günther (2013), who also give details on both the measurement and the inversion. Measurements were taken at a standing healthy oak (section HI in the paper) in both summer and winter. We take the summer measurements (hp5-s.dat) and invert them with the given cfg file hp5-s.cfg. Looking into the output, which is also stored in bert.log, we see the following: after inverting the DC data, dcinv looks for the IP data and would stop if there were none (min/max = 0). Then the IP data are inverted; by default this output is silent, so that we do not see the individual IP iterations (as that would hinder looking at the DC data fit), but only the last one.
In this case the model (imaginary resistivity) lies between some milliohms and several ohms, corresponding to phase values of more than 100 mrad. The data fit is 12%; however, this measure might not be meaningful when IP data close to zero are present. The chi-square error is based on an assumed phase error of 1 mrad. The resulting phase image can be shown by
$ bert hp5-s.cfg show phase.vector or exported to pdf using
$ bert hp5-s.cfg mkpdf phase.vector.
The default regularization strength (the LAMBDA value is taken for both) was in this case good for both the DC and the IP inversion. In general, different values might be appropriate. If the parameter LAMBDAIP is given, this value will be used for the IP inversion only. Note that a rerun of the calculation (calc) is then needed. More flexible possibilities for IP inversion are available through the Python classes Resistivity (for single DC/IP inversion) and SIPdata (for spectral IP inversion).
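A sketch of the relevant keys; the LAMBDAIP value is a made-up example:
LAMBDA=10     # used for the DC inversion, and also for IP if LAMBDAIP is not set
#LAMBDAIP=30  # separate regularization strength for the IP inversion (example value)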
Figure 15: Resistivity (left) and phase (right) inversion result for the healthy oak tree, measured in summer.
4. 3D geometries
3D surface measurements
3D surface measurements can be carried out in several variants:
1. Layout of an electrode grid. However, due to the limited number of electrodes, grids are restricted to small areas.
In any case, the electrode positions and measurements must be defined according to the unified data format. The data for the first two types can easily be organized by hand. For number 3 (and 2) we suggest preparing 2d files and writing a profile file consisting of lines with the 2d file name and x-y pairs of points describing where the line runs. This file can be read into DC3dInvRes (Günther, 2008) and used to write the 3d file. In case of topography it is best to do the tape correction on the 2d files before, using DC2dInvRes and Export Ohm.
The most flexible element in 3d is the tetrahedron. The tetrahedralization is done by a mesh generator. Our primary choice is TetGen (Si, 2008), a free and versatile quality mesh generator (Si, 2015); however, you can also use the mesh generator Gmsh (Geuzaine and Remacle, 2009) as described in the Appendix. The quality measure is different from 2d and describes a radius-to-edge ratio; note that small values indicate higher quality. Appropriate values for the (primary field) forward calculation are 1.12 to 1.2; for the inverse (and thus secondary) mesh, values of 1.2-1.5 are appropriate. The corresponding keys are PRIM3DQUALITY and PARA3DQUALITY.
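A sketch of the 3d quality keys with assumed example values inside the recommended ranges:
PRIM3DQUALITY=1.15   # radius-to-edge ratio of the primary (forward) mesh
PARA3DQUALITY=1.3    # radius-to-edge ratio of the parameter (inversion) mesh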
$ bertNew3d gallery.dat > bert.cfg
The inversion is then fully run (with the command all), converging to a chi-square misfit of about 2 (rrms = 4-5%). In order to fit the data better, the regularization parameter is decreased using LAMBDA=5, which leads to a relative rms error of about 3% (χ² ≈ 1). Another way is to use an anisotropic regularization (ZWEIGHT below 1), also leading to χ² ≈ 1.
The result is saved and converted to a vtk file¹² using
$ bert bert.cfg show
Figure 16: Inversion result of the 3d gallery data set using a smoothed iso-surface of 650 Ωm and a Plane Clip; the red spheres are the electrodes used.
Figure 16 shows a ParaView visualisation that has been created by the following steps: i) Cell Data To Point Data, ii) Clip by Scalar 650 (Ωm), iii) Extract Surface, iv) Smooth Surface, v) another Clip based on Cell2Point with a Plane, vi) representation of the input as Outline and Cube Axes. The colour bar is logarithmic with a manual range of 100-1000 Ωm. The electrodes have been included as a point vtk file and displayed by Glyph as spheres of radius 0.05. After some exercise the reader will be able to create nice images and plots and to calculate results such as extensions or volumes of geological bodies.
4.2. 3D Topography
The definition of a 3d topography is much more complicated than in 2d, where every shape can be described by a simple polygon. The input PLC consists of faces instead of edges and the resulting poly file has a similar but different format¹³. Generally the procedure is the following: i) create a flat surface mesh, ii) interpolate heights from the topographic information, iii) make a small box (inversion mesh) and a large box (forward mesh) around it, iv) refine if necessary, and v) create the mesh using TetGen. The whole procedure is explained in more detail by Udphuay et al. (2011) for a steep cliff.
For specifying topography, there are two different ways:
the electrodes in the data file have an elevation and all other points are interpolated, or
there is a digital elevation model (DEM) or at least a list of measured topo points (in a 3-column file containing x, y and z).
¹² The vtk file can be displayed in the 3d software ParaView, see http://www.paraview.org
¹³ See http://tetgen.berlios.de
Whereas the first case is sufficient for smooth topography and/or dense electrode coverage, the latter is more general. The topographic points are Delaunay-triangulated and for every point of the meshes, including the electrodes, the elevation is linearly interpolated. Therefore, electrodes with measured elevations should be included in the topo file as well to make sure their z values are correct. We specify this topographical list by the line TOPOPOINTS=filename.
In examples/inversion/examples/acucar there is a project measured by the Federal Institute for Geosciences and Natural Resources (BGR) Hannover (M. Furche¹⁴). The site is an old slag dump whose topography is reminiscent of the Sugarloaf in Rio. Two resistivity profiles have been measured crossing the top of the isolated hill. Another profile was realised around the hill at a more or less constant elevation. Although this is not as dense a sampling as an electrode grid, it should be sufficient to obtain a rough image.
In addition to the electrodes, some topographical points have been measured and put into the file points.xyz. So we create a new project using
$ bertNew3dTopo acucar.ohm > bert.cfg
and add the line TOPOPOINTS=points.xyz to the cfg file. If we now call bert bert.cfg meshs we see that the mesh does not show the hill, since the topography overrides the electrode elevations. Therefore we have to add the electrode definitions (lines 3-230) to the topography file and then see the hill (Figure 17 left). However, due to the point density the electrode line appears as a sharp edge, which is not really the truth but sufficient in this case.
In other cases we might have a digital elevation model. In order to demonstrate this on the same example, we created one by cubic interpolation of the available points on a regular grid of 2 m spacing. In order to avoid interpolation errors between the electrodes, we created a polygon file poly.xyz for the three profiles¹⁵ and introduce it by TOPOPOLY=poly.xyz. Figure 17 shows the surface mesh of both variants. The sharp edges have now disappeared.
Figure 17: Surface mesh for the point-wise topographic information (left) and the digital ele-
vation model (right), the electrodes are shown as red points.
Finally the inversion result is visualised in Figure 18. It shows a conductive interior of the slag
dump and different sediments at the surface, e.g. a resistive top. Of course the data coverage
is low between the profiles and at the model boundaries. Therefore the model becomes more
or less interpolated by the smoothness constraints.
¹⁴ Now at the Leibniz Institute for Applied Geophysics.
¹⁵ Several polygons are separated by a blank line.
Figure 18: Inversion result of the 3dtopo case.
4.4. Closed 3D geometries
Closed geometries are actually easier than open ones since we do not need a mesh prolongation and two different regions. However, since the whole boundary is of Neumann type, we must ensure two additional conditions that are not necessary in the open case:
The current cannot vanish at infinity; therefore we must use dipole sources, e.g. by means of a reference current node.
Since only derivatives are present in the boundary value problem, we must make the forward solution unique, e.g. by adding a reference potential node whose potential is forced to zero.
We create an empty cfg file (or use bertNew3dCyl) with the lines
DATAFILE=soil_column.dat
DIMENSION=3
TOPOGRAPHY=1
CYLINDER=1 # ensures the closed geometry
¹⁶ Since the electrodes do not show a significant extension compared to the column size, we did not put the points onto the surface but moved them 1 cm inside.
We add our PLC script to the PARAGEOMETRY variable such that mesh/mesh.poly will be
created by it.
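The corresponding entry could look as follows; the script name is purely hypothetical, any script that writes mesh/mesh.poly will do:
PARAGEOMETRY=makeColumnPoly.py  # hypothetical user script creating mesh/mesh.poly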
PRIMDX=0.01
PRIMP2MESH=1
Figure 20 shows the course from the mesh input via the parameter mesh to the final result.
Figure 20: PLC (left), parameter mesh (center) and inversion result (right) of the soil column
experiment.
The file bedrock.xz contains the course of the boundary as x-z pairs. We now include this file in the configuration file using the INTERFACE option. In order to compare the results with and without the structure, we call
$ bertNew2d bedrock.dat > bert.cfg
$ bert bert.cfg all show
$ echo INTERFACE=bedrock.xz >> bert.cfg
$ bert bert.cfg all show
Figure 21: Resistivity distribution without (left) and with (right) structural information.
Figure 21 shows the subsurface images without and with the structural information. Obviously the additional information leads to a much clearer image of the subsurface. At most positions there is a sharp resistivity contrast at the boundary. However, at some positions there is either a difference to the velocity result or the refraction result is ambiguous.
Several interfaces may be present in the INTERFACE file, separated by a blank line. Structural constraints may also come from borehole descriptions indicating a layer at a certain depth. This results in several small lines extending to the sides, depending on the lateral representativity of the boreholes. Three-dimensional interfaces are point lists that are triangulated and included as facets in the model.
#no start Ctype MC zWeight Trans lBound uBound
0 100 1 1 0.2 log 50 1000
1 30 0 0.2 1 log 10 200
#no single start
2 1 100
#no background
-1 1
#inter-region
0 2 0.2
The number No can also be a * for all regions and should lead the token list. The other tokens define values for the numbered region.
Ctype constraint type (1/2 - smoothness of 1st/2nd order, 0 - minimum length, 10/20 - 1st/2nd order mixed with zeroth order)
background this region is background and is filled with the resistivity of the neighbouring regions for the forward calculation; for a non-Neumann domain without a region file the region with the lowest number is the background by default
inter-region two regions are decoupled by default (constraint weight is zero), but can be coupled this way
In the above example there are two main regions with different starting values, valid ranges and constraints. Another region is treated as constant and is weakly coupled to region 0. Finally, there is one background region.
1. We start as for a topographic case and generate the meshing input: bert bert.cfg domain
3. We need to add the water surface by an edge between the left and the right shore (the innermost points with zero altitude). A view into the poly file shows that they are represented by the points 3 and 138. So we add another edge at the end of the edge list (line 303) by inserting the line 151 3 138 -1 (number n1 n2 edgemarker) and increase the number of edges in line 152 from 151 to 152.
4. Finally we add a region marker somewhere in the lake with marker 1 (not inverted) by appending the line 2 50 -1 1 0.0 (number x y marker maxTriangleSize) and increasing the number of regions from 2 to 3.
Figure 22: Representation of the input PLC for the lake case.
Figure 22 shows a section of the input PLC after calling bert bert.cfg showpoly. As usual,
we have the outer region with marker 1 and the inner region with marker 2, additionally we
have the marker 3 for the lake. By default the outer region is a background region in case
several regions are available without a region file.
We use the following options and the inversion converges at 1 < χ² < 2. Figure 23 shows the resulting resistivity distribution. The lake sediments generally show very low resistivities, even below the measured value of 22.5 Ωm.
¹⁷ See the Triangle page http://www.cs.cmu.edu/~quake/triangle.research.html for a description of the poly file format.
Figure 23: Inversion result of the water case using two regions that are automatically decoupled.
The regions are automatically decoupled (no smoothness constraints between them) as in the last example. Figure 24a shows the inversion result using the normal treatment (by setting the region marker of the lake to 2 or by using polyFlatWorld as PARAGEOMETRY), i.e. smoothness constraints across the whole model. We see a lot of structure related to the lake bottom and an unrealistically high water resistivity. Now we start treating the regions differently using a region file:
REGIONFILE=region.control
We first define the water region (marker 3) as a single-parameter region and the subsurface as a normal inversion region with smoothness constraints that enhance vertical structures, a range of 10-1000 Ωm and a starting resistivity of 100 Ωm:
#No start Trans zWeight Ctype lBound uBound
2 100 log 0.1 1 10 1000
#No single start
3 1 22.5
#No background
1 1
Figure 24b shows the result. The main image is similar, but the oscillations at the lake bottom are gone. The resistivity increase at the bottom disappeared, revealing more information about the deeper layers. However, the resistivity of the lake obtains values of about 16 Ωm, which is lower than the expected value of 22.5 Ωm measured at the side, even though we used it as starting value.
We could narrow down the resistivity of the lake by extending the line for region 3 with lower and upper resistivity bounds:
#No single start Trans lBound uBound
3 1 22.5 log 22 23
Whereas a single region is represented by one unknown, one can also associate it with a fixed value, thus making it a background region (i.e. removing it from the inversion completely).
#No fix
3 22.5
Whereas a normal background (region 1) is filled up (prolongated) from the inversion model with the neighbouring resistivity, this region is filled with the fixed value.
The result, shown in Figure 24c, fits the data equally well. Both results are obviously equivalent; in this full-space problem the lake resistivity cannot be obtained independently and has to be supplied as additional information.
Figure 24: Inversion results with different options (top to bottom): a) smoothness everywhere, b) lake as single-parameter region, c) lake as fixed region, d) inter-region constraints between lake and lake bottom, e) like d but with variable water body.
Alternatively, we can assume that the lake bottom sediments have a resistivity very similar to that of the water. Therefore we remove the fixed value again and introduce inter-region constraints between the two regions.
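The corresponding region-file entry could look as follows; this is a sketch based on the inter-region syntax shown earlier, assuming the markers 2 (subsurface) and 3 (water) and a coupling weight of 0.5, i.e. a factor of 2 weaker than normal:
#inter-region
2 3 0.5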
So the otherwise decoupled regions are connected by smoothness constraints with a strength that is a factor of 2 weaker than normal. The result, fitting the data equally well, is shown in Figure 24d. It is very similar to Fig. 24c and thus supports the measured water conductivity.
On the other hand, it is not clear from our simple probes that the water resistivity is really constant. So we can treat the water as a normal region with proper upper and lower bounds. In order to avoid placing our target structure in the water layer, we increase the model control (a relative regularization parameter) for that region by a factor of 2. The inter-region constraints are kept.
The result (Fig. 24e), however, is not very different from the others, but it demonstrates the flexibility of the region approach.
6. Time-lapse ERT
We are often interested in ongoing physical processes and use ERT for monitoring experiments. An arbitrary number of subsequent data sets can be processed by writing their file names into a text file and passing it via TIMESTEPS=filename.
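A sketch of such a setup; the frame file names are made up:
# in bert.cfg
TIMESTEPS=timesteps.txt
# timesteps.txt contains one data file per line, e.g.
frame_001.dat
frame_002.dat
frame_003.dat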
6.1. Strategies
There is a large number of different strategies for doing time-lapse inversion and accordingly
a large number of associated keys.
1. The easiest is an independent inversion, which can be done using a simple shell script that just overwrites DATAFILE. However, artifacts arise very frequently, especially when looking at the changes (ratios).
2. Another variant is ratio or quotient inversion, i.e. calculating the data ratio and inverting it as apparent resistivity (Schütze et al., 2002). However, this approach is valid only for small contrasts since it does not take the sensitivity distribution as a function of the real model into account. BERT 1 used this approach by default; here it can still be used by setting RATIOSTEP=1.
3. There is the class of reference-model-based schemes, where for each frame a full minimization is done, but the models are constrained by taking the model of either the first (default) or the preceding (TIMELAPSESTEPMODEL=1) frame as reference. This is the most general scheme since the measuring arrays and even the electrode positions can vary over time. Therefore it is the default method.
4. A special scheme is the so-called difference inversion after LaBrecque and Yang (2001). It is based on the observation that a significant amount of systematic error is present in all time steps. Therefore, if TIMELAPSEREMOVEMISFIT=1, the data dk of the k-th frame are corrected by the misfit of the first frame (d0), so that systematic errors common to all frames cancel out. The corresponding keys are summarized in the sketch after this list.
5. From an inversion point of view, the most rigorous method is a full discretization in time and a simultaneous inversion. This is done efficiently using block matrices, but it is up to now available only in the Python managers.
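A sketch collecting the keys mentioned in the strategies above (all strategy switches commented out; enable at most one of them):
TIMESTEPS=timesteps.txt     # list of data files, one per line
#RATIOSTEP=1                # ratio/quotient inversion (BERT 1 behaviour)
#TIMELAPSESTEPMODEL=1       # use the preceding model as reference instead of the first
#TIMELAPSEREMOVEMISFIT=1    # difference inversion after LaBrecque and Yang (2001)
#LAMBDATIMELAPSE=20         # separate lambda for the time steps (example value)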
The resulting binary files hold both absolute values and relative changes with respect to the first data set. For 2D inversions, a multi-page pdf file of the absolute resistivities or their ratios is created with the target mkpdf followed by one of the binary files, e.g.
$ bert cfg mkpdf modelAbs.bmat
creates a multi-page 2d result pdf.
Figure 25: Inversion results 3 hours (upper left), 7 hours (upper right), 12 hours (lower left) and 16.5 hours (lower right) after tracer injection.
Mostly, one is interested in the changes (ratios) and wants to test several algorithms, which is
done for different settings in Fig. 26.
From the inversions with regularization orders 0, 1 and 2 (lines 3-5), the classical smoothness constraints (1st/2nd order with slightly decreased vertical weights) exhibit the largest effects, but also the largest artifacts above and below the tracer. Minimum-length regularization of the model difference does not show such artifacts but produces increases at the electrodes, interrupting the tracer shape. If (isotropic) smoothness and pure deviation are combined (last line), the fewest artifacts are observed, but the shape of the tracer remains interrupted. Further tuning may lead to even nicer images; however, it is not clear beforehand which method is best and what reality looks like.
Figure 26: Resistivity ratio for the time steps 4 hours (left) and 12 hours (right) and inversion schemes: individual inversion, step-wise constrained and difference inversion using constraint orders 0, 1, 2 and 0+2.
Figure 27: Relative resistivity difference (in %) for the repeated measurements at about 2, 4,
6, 10 and 16 hours after irrigation.
Acknowledgements
We would like to thank all the people who provided the very instructive data and user stories: Folker Donner (formerly TU Freiberg), Markus Furche and Ulla Noell (BGR Hannover), Thomas Schicht (K-UTec GmbH Sondershausen), Niels Hoffmann (formerly HAWK Göttingen), Oliver Kuras (British Geological Survey), Joseph Doetsch (ETH Zurich), Ilaria Coscia (formerly ETH Zurich) and Sarah Garré (Uni Liège, Gembloux). Furthermore we acknowledge all the users and testers of BERT who made the software what it is now: a powerful expert tool.
References
Bazin, S. and Pfaffhuber, A. (2013). Mapping of quick clay by Electrical Resistivity Tomography under structural constraint. Journal of Applied Geophysics.
Bechtold, M., Vanderborght, J., Weihermüller, L., Herbst, M., Günther, T., Ippisch, O., Kasteel, R., and Vereecken, H. (2012). Upward Transport in a Three-Dimensional Heterogeneous Laboratory Soil under Evaporation Conditions. Vadose Zone Journal, 11(2).
Claerbout, J. F. and Muir, F. (1973). Robust modeling with erratic data. Geophysics, 38(1):826-844.
Coscia, I., Greenhalgh, S., Linde, N., Doetsch, J., Marescot, L., Günther, T., and Green, A. (2011). 3D crosshole apparent resistivity static inversion and monitoring of a coupled river-aquifer system. Geophysics, 76(2):G49-59.
Doetsch, J., Coscia, I., Greenhalgh, S., Linde, N., Green, A., and Günther, T. (2010). The borehole-fluid effect in electrical resistivity imaging. Geophysics, 75(4):F107-F114.
Garré, S., Günther, T., Diels, J., and Vanderborght, J. (2012). Evaluating Experimental Design of ERT for Soil Moisture Monitoring in Contour Hedgerow Intercropping Systems. Vadose Zone Journal, 11(4).
Geuzaine, C. and Remacle, J.-F. (2009). Gmsh: a three-dimensional finite element mesh generator with built-in pre- and post-processing facilities. International Journal for Numerical Methods in Engineering, 79:1309-1331.
Günther, T., Martin, T., and Rücker, C. (2016). Spectral inversion of SIP field data using pyGIMLi/BERT. In 4th International Workshop on Induced Polarization.
Günther, T. (2002-2007). DC2dInvRes - Direct Current 2d Inversion and Resolution. resistivity.net productions, http://dc2dinvres.resistivity.net.
Günther, T. (2003-2008). DC3dInvRes - Direct Current 3d Inversion and Resolution. resistivity.net productions, http://dc3dinvres.resistivity.net.
Günther, T. (2004). Inversion Methods and Resolution Analysis for the 2D/3D Reconstruction of Resistivity Structures from DC Measurements. PhD thesis, University of Mining and Technology Freiberg. Available at http://fridolin.tu-freiberg.de.
Günther, T. and Martin, T. (2016). Spectral two-dimensional inversion of frequency-domain induced polarisation data from a mining slag heap. Journal of Applied Geophysics, in press.
Günther, T. and Rücker, C. (2006). A general approach for introducing structural information - from constraints to joint inversion. In Ext. Abstract, EAGE Near Surface Geophysics Workshop, 3.-6.9.2006, Helsinki (Finland).
Günther, T., Rücker, C., and Spitzer, K. (2006). 3-d modeling and inversion of DC resistivity data incorporating topography - Part II: Inversion. Geophys. J. Int., 166(2):506-517.
Kuras, O., Pritchard, J., Meldrum, P. I., Chambers, J. E., Wilkinson, P. B., Ogilvy, R. D., and Wealthall, G. P. (2009). Monitoring hydraulic processes with Automated time-Lapse Electrical Resistivity Tomography (ALERT). Comptes Rendus Geoscience - Special issue on Hydrogeophysics, 341(10-11):868-885.
LaBrecque, D. J. and Yang, X. (2001). Difference Inversion of ERT Data: a Fast Inversion Method for 3-D in Situ Monitoring. J. Environ. Eng. Geophys., 6:83.
Martin, T. and Günther, T. (2013). Complex resistivity tomography (CRT) for fungus detection on standing oak trees. European Journal of Forest Research, 132(5):1-12.
Rücker, C. (2010). Advanced Electrical Resistivity Modelling and Inversion using Unstructured Discretization. PhD thesis, University of Leipzig. In preparation.
Rücker, C. and Günther, T. (2011). The simulation of finite ERT electrodes using the complete electrode model. Geophysics, 76(4):F227-F238.
Rücker, C., Günther, T., and Spitzer, K. (2006). 3-d modeling and inversion of DC resistivity data incorporating topography - Part I: Modeling. Geophys. J. Int., 166(2):495-505.
Schütze, C., Friedel, S., and Jacobs, F. (2002). Detection of three-dimensional transport processes in porous aquifers using geoelectrical quotient tomography. Eur. J. of Env. and Engin. Geophysics, 7:3-19.
Si, H. (2015). TetGen, a Delaunay-Based Quality Tetrahedral Mesh Generator. ACM Transactions on Mathematical Software, 41(2):1-36.
Udphuay, S., Günther, T., Everett, M., Warden, R., and Briaud, J.-L. (2011). Three-dimensional resistivity tomography in extreme coastal terrain amidst dense cultural signals: application to cliff stability assessment at the historic D-Day site. Geophys. J. Int., 185:201-220.
2.0.14 26.11.2015 GelMon edition
2.0.13 08.10.2015 (first working Resistivity Python class)
2.0.12 25.09.2015 from here only 64bit with WinPython3.4
2.0.11 14.09.2015 first 64bit version with Python3.4-64bit
2.0.11 14.08.2015 only 32bit version with Python3.4-32bit
2.0.10 09.07.2015 also Win64 with own Python64bit
2.0.9 18.05.2015
2.0.8 21.03.2015
2.0.7 16.12.2014
2.0.6 17.10.2014
2.0.5 03.06.2014
2.0.4 02.12.2013
2.0.3 08.08.2013 also Win64 with Python32bit
2.0.2 17.04.2013
BERT 1 (2004-2008-2013)
1.3.2 05.04.2013 final Win 32bit version
1.3.1 22.11.2013 also (final) Win64 version
1.3.0 30.05.2012 no more new features
1.2.5 30.08.2011 also first Win64 version
1.2.4 10.05.2011 Linux
1.2.3 21.04.2011 Linux
1.2.2 11.01.2011
1.2.1 03.12.2010
1.2.0 28.10.2010 Linux
1.1.0 25.06.2010 (also BERT2 beta versions starting)
1.1.1 27.10.2010
1.0.3 17.11.2009
pre-versions 2006-2009
more constraint types (zeroth, first and second order plus combinations of them) and a more stable way of anisotropic regularization
use of different electrode types: nodes, surfaces (CEM electrodes) or node-free points (or a mix of different types)
lower/upper resistivity bounds independent of the data (in BERT 1 they were also applied to ρa)
Changes in commands
primPot/interpolate → pot - Formerly the primary potentials were computed and interpolated in two separate steps. Now the potentials are interpolated onto the forward mesh on-the-fly unless otherwise stated by KEEPPRIMPOT=1.
correct → filter - As the command correct (called before calc) was leading to confusion, the command filter now transforms the raw data into the desired inversion input including apparent resistivity, geometric factor and error. Additionally, filtering is done using the MIN/MAX keywords. For time-lapse problems filter will homogenize the data arrays of the individual time steps. In future versions an automated normal-reciprocal analysis can be called if the data allow for that.
calc/calcSensM → calc - The calcSensM command is not needed any more and is fully replaced by calc. Recalculation of the Jacobian is the default.
New commands
sensOnly - calculate only the Jacobian matrix (for sensitivity or resolution studies)
version - prints out the version
show, showmesh, showdata, showerror, showfit, mkpdf - plots of different stuff (in case
of 2d geometry)
mkpdf, mkmeshpdf - automated pdf generation
bertNew2D/2DTopo/2DCirc - CFG file generators for 2d cases (flat,topo,circle)
bertNew3D/3DTopo/3DCyl - CFG file generators for 3d cases (flat,topo,tank)
dcinv - actual inversion routine (see dcinv -h)
dcmod - forward modelling for synthetic (e.g. primary) potentials (see dcmod -h)
dcedit - filter raw data using numerical geometric factors
Mesh creation and alteration:
#
# Data settings
#
OVERRIDEERROR=0          # overrides given errors with INPUTERRLEVEL and INPUTERRVOLTAGE
INPUTERRLEVEL=3          # input error level (in percent) if no error given (dcedit -p but still -e)
INPUTERRVOLTAGE=100e-6   # input voltage error (V) if no error given
KMIN=-9e99               # minimum geometric factor (BERT2)
KMAX=9e99                # maximum geometric factor (BERT2)
RMIN=0                   # minimum apparent resistivity (BERT2)
RMAX=9e99                # maximum apparent resistivity (BERT2)
IPMIN=-9e99              # minimum IP value (BERT2)
IPMAX=9e99               # maximum IP value (BERT2)
ERRMAX=9e99              # maximum error estimate (BERT2)
DOWNWEIGHT=0             # downweight instead of deleting data to keep data structure (BERT2)
ABSRHOA=0                # force all apparent resistivities to be positive
#
# Inversion settings
#
CONSTRAINT=1             # constraint type (1/2 first/second order, 0 min. length, 10/20 mix)
LAMBDA=20                # regularization strength
ZWEIGHT=1                # weight for purely vertical gradients (BERT2)
LAMBDAOPT=0              # optimize lambda by using L-curve
CHI1OPT=0                # optimize lambda such that CHI2=1 (BERT2)
REGIONFILE=              # region file for inversion options for each region (BERT2)
BLOCKYMODEL=0            # iteratively (L1) reweighted model roughness
ROBUSTDATA=0             # iteratively (L1) reweighted data misfit
LOWERBOUND=0.0           # lower resistivity bound (logarithmic barrier)
UPPERBOUND=0.0           # upper resistivity bound (0.0 = deactivated)
RECALCJACOBIAN=1         # recalculate Jacobian in each iteration step (default in BERT2)
MAXITER=20               # maximum number of iteration steps
STARTMODEL=              # starting model, either a value (homogeneous model) or a file name holding a vector
RESOLUTIONFILE=          # file with indices/positions to compute resolution kernels for (BERT2)
LAMBDADECREASE=1         # decrease lambda in each iteration by factor (e.g. 0.8)
LOCALREGULARIZATION=0    # local regularization (constrain only model update instead of model)
#SINGVALUE=1             # potential value at electrodes, for sensitivity (BERT1)
SENSMATDROPTOL=0         # drop tolerance for Jacobian values (sparse storage)
SENSMATMAXMEM=2000       # available memory in MB for pre-allocation of sparse Jacobian
#
# Time-lapse inversion settings
#
LAMBDATIMELAPSE=         # separate lambda value for time steps (BERT2)
TIMELAPSECONSTRAINT=1    # constraint type (see CONSTRAINT)
TIMELAPSEREGIONFILE=     # region file (see REGIONFILE)
TIMELAPSESTEPMODEL=0     # use preceding model as reference instead of first
FASTTIMELAPSE=0          # no Jacobian recalculation for time steps (only identical arrays) (BERT2)
RATIOSTEP=0              # very simple (low-contrast) and fast method
TIMELAPSEREMOVEMISFIT=0  # difference inversion after LaBrecque et al. (1996) (BERT2)
#
# Mesh settings
#
PARAMAXCELLSIZE=0        # maximum cell size volume (m3) (DIMENSION=3); area (m2) (DIMENSION=2) for para mesh
PARAEDGELENGTH=0         # compute PARAMAXCELLSIZE by volume of regular triangle/tetrahedron (easier to control)
PRIMMAXCELLSIZE=0        # maximum cell size volume (m3) (DIMENSION=3); area (m2) (DIMENSION=2) for prim mesh
PARADEPTH=0              # maximum depth of parameter domain in meter (0 = automatic estimation by 1d coverage)
PARABOUNDARY=5           # boundary around electrodes in parameter domain (percent)
SPLINEBOUNDARY=0         # spline circle boundary instead of piecewise linear interpolation
EQUIDISTBOUNDARY=1       # equidistant refined space between electrodes (2d) (default in BERT2)
BOUNDARY=500             # size of boundary area around parameter domain
MESHGEN=tetgen           # command or location of 3d mesh generator
TETGENTOLERANCE=1e-12    # tetgen tolerance limit
TETGENPRESERVEBOUNDARY=0 # tetgen should suppress splitting of boundary facets or segments
LOOPTETGEN=0             # use tetgen itself to slightly improve mesh quality by repeated tetgen calls
PARADX=0.0               # refinement for para mesh (values >0.5 will be forced to 0.5)
GRID=0                   # generate regular (quadrilateral or hexahedral) mesh (only TOPOGRAPHY=0, BERT2)
LAYERS=10                # number of layers to use for grid (for GRID=1 only)
PARADZ=1                 # layer thickness to use for grid (for GRID=1 only)
PRIMDX=0.1               # electrode refinement (prim mesh) (note: relative in 2d, absolute in 3d)
PRIMDX_R=0.0             # electrode refinement (prim mesh) dr in center direction (overrides PRIMDX)
PARA2DQUALITY=33.0       # parameter grid (from 25 (very bad) to 35 (good))
PRIM2DQUALITY=33.4       # primary grid (from 25 (very bad) to 35 (good))
PARA3DQUALITY=1.5        # parameter grid (from 1.11 (good) to 2 (bad))
PRIM3DQUALITY=1.2        # primary grid (from 1.11 (good) to 2 (bad))
SURFACEQUALITY=30        # quality of topographical surface grid (from 20 (bad) to 35 (good))
SURFACEMAXTRISIZE=0.0    # maximal triangle area of parametric surface grid
SURFACESMOOTH=0          # improve quality of topographical surface grid
ICDROPTOL=0.0            # if the number of nodes exceeds 200k, a drop tolerance is set for the ICCG solver (only BERT1)
TOTALPOTENTIAL=0         # don't use secondary potential approach (i.e. for CEM meshes)
OMITSECREFINE=0          # omit global refinement for forward calculation (for fine parameter meshes)
#SECMESHREFINE=1         # the opposite (BERT1)
SECP2MESH=0              # use quadratic shape functions in forward mesh (BERT2)
PRIMP2MESH=1             # use primary p2 mesh (default in BERT2)
#
# Directory settings
#
MESHBASENAME=mesh        # basename for mesh files
DIRMESHS=mesh            # directory name for mesh files
DIRPOT=primaryPot        # directory name for primary and interpolated potentials
DIRPRIMPOT=potentials    # subdirectory name for primary potentials (obsolete)
DIRINTERPOLPOT=interpolated # subdirectory name for interpolated potentials (obsolete)
DIRSENS=sensM            # directory name for sensitivity matrix
OLDPRIMMESHSTYLE=0       # alternative way to create primary mesh (for internal use only)
#
# Data filtering options
#
KMIN=-9e99               # minimum geometric factor
KMAX=9e99                # maximum geometric factor
RMIN=0                   # minimum apparent resistivity
RMAX=9e99                # maximum apparent resistivity
IPMIN=-9e99              # minimum apparent phase
IPMAX=9e99               # maximum apparent phase
ERRMAX=9e99              # maximum error estimate
#
# Plotting settings (2d plots with pytripatch)
#
PYTHON=python            # command or location of Python executable
PYTRIPATCH=pytripatch    # command and location of pytripatch (2d mesh plotting)
USECOVERAGE=1            # use coverage for alpha shading
SHOWELECTRODES=1         # show electrode positions as black dots
INTERPERC=3              # use 3% inter-percentiles for model display with pytripatch
cMin=                    # minimum for colour scale
cMax=                    # maximum for colour scale
Permanently active options can be saved in a file .bertrc in the home directory (user-specific) or in the folder where bert is located (for all users). This is particularly interesting for SENSMATMAXMEM (recommended: 50-80% of the available RAM in MB), the number of CPUs to use in parallel (BERTTHREADS), or the Python executable (PYTHON).
E. User stories/HowTos
Sometimes users have very specific task requirements, mostly concerning geometries, that are to be incorporated. We present solutions, usually as Python or bash scripts, that go beyond the usual documentation; the corresponding input files are located in sub-folders of doc/HowTo. Some of them are actually workarounds that are to be integrated into following releases of BERT 2 via new keywords, but they can still show users how to create solutions in general. Others are user stories that demand quite complicated approaches that will never be integrated into BERT directly. We hope that the existing stories show you how to create solutions by combining the existing ones. If you have an interesting case that is already solved, or one that cannot be solved easily but could be of interest for many users, don't hesitate to contact us so that we can integrate it.
Note that the solutions mainly deal with creating appropriate meshes and are written as shell scripts, Python scripts or a combination of both. Hence, basic programming knowledge is required to understand them. Here we present the main components part by part; for the full scripts see the files in the indicated directories. Usually we skip some general parts such as the importing of libraries, usually:
import pygimli as pg
import numpy as np
import matplotlib.pyplot as plt
For a complete list of the available functions, see the API documentation of GIMLi/pygimli located in doc/html after calling doxygen with the prepared cfg file, as well as the documentation of Numpy/SciPy (http://www.scipy.org) or Matplotlib (note that pylab includes numpy and matplotlib). The main data type to work with is the GIMLi::RVector, which is almost compatible with np.array.
E.1. How to do 2d inversion with external topographic information
Solution: the data have to be split and transformed to the standard format.
TODO: generate a pygimli-only solution
Before, the usual recommendation was to use DC2dInvRes and its option Save-Ohm-File, which does the tape correction. Now we solve the problem in a shell script and invoke Python directly. First we extract the number of electrodes and data and write the first parts (electrodes + data) of the file such that it can be read by pygimli. Moreover, we read the topography list and export it to another file:
infile=profile-topo.dat
datfile=profile.dat
outfile=profile.ohm
topofile=topo.xz
read line < $infile
nel=${line%%#*}
line=`head -n $[nel+3] $infile | tail -n 1`
ndata=${line%%#*}
head -n $[nel+$ndata+4] $infile > $datfile
line=`head -n $[nel+$ndata+5] $infile | tail -n 1`
ntopo=${line%%#*}
echo $nel $ndata $ntopo
tail -n $ntopo $infile > $topofile
We now switch to Python, read in the file and extract the electrode positions. We compute the tape distance along the profile and use the topography to interpolate it onto the electrode positions.
data = pg.DataContainer("$datfile")
print(data)
x, z = np.loadtxt("$topofile", unpack=True)
tt = np.hstack((0., np.cumsum(np.sqrt(np.diff(x)**2 + np.diff(z)**2))))
xe = [el.pos()[0] for el in data.electrodes()]
z = np.interp(xe, x, z)
However, usually the electrode positions are given as tape distances and not as real x coordinates. Therefore we have to account for their heights such that the tape distance remains constant. Afterwards we set
x = xe[0] + np.hstack((0., np.cumsum(np.sqrt(np.diff(xe)**2 - np.diff(z)**2))))
de = np.sqrt(np.diff(x)**2 + np.diff(z)**2)
for i in range(data.sensorCount()):
    data.setSensorPosition(i, pg.RVector3(x[i], 0.0, z[i]))
data.save("$outfile", "a b m n u i R", "x z")
That's it! With the created file we can do a very classical inversion.
E.2. 2D inversion with given 3d coordinates
1. Read data file and fit a profile line through the electrodes
Input could be:
1a) a 2d data file with 3d coordinates (x, y, z) for each electrode
1b) a 2d flat-earth data file and an additional GPS positions file
We assume the latter, more general, case with GPS positions for some (not necessarily all) electrodes in a separate file gps.txt (profile distance, height, northing and easting).
tt, hh, yy, xx = np.loadtxt("gps.txt", unpack=True)
# tape distance, height, northing and easting
After reading and plotting the coordinates we see typical errors of up to a few meters due
to GPS inaccuracies, which we want to eliminate. Therefore we fit a smooth curve by using
harmonic functions implemented in the pygimli function harmfit. From the resulting x and y
we compute the distance t along the tape, which is used for inversion.
x = harmfit(xx, tt, nCoefficients=10)[0]
y = harmfit(yy, tt, nCoefficients=10)[0]
z = harmfit(hh, tt, nCoefficients=15)[0]
t = np.hstack((0., np.cumsum(np.sqrt(np.diff(x)**2 + np.diff(y)**2))))  # 2d distance
The number of coefficients nCoefficients defines the complexity of the fitted curve and should be large enough to describe the course, but not too large in order to avoid small-scale oscillations. Experimenting with different numbers will easily give good values for any data.
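For example, one could compare a few settings visually before fixing the value (a hedged sketch; the import location of harmfit is assumed here):
import numpy as np
import matplotlib.pyplot as plt
from pygimli.utils import harmfit   # assumed module path for harmfit

tt, hh, yy, xx = np.loadtxt("gps.txt", unpack=True)
plt.plot(tt, hh, "bx", label="measured heights")
for nc in (5, 15, 30):
    zfit = harmfit(hh, tt, nCoefficients=nc)[0]   # same call pattern as above
    plt.plot(tt, zfit, label="nCoefficients=%d" % nc)
plt.legend()
plt.show()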
plt.plot(tt, hh, 'bx', tt, z, 'r-')
plt.plot(xx, yy, 'bx', x, y, 'b-')
Figure 28 shows how the coordinates are approximated.
Figure 28: Measured positions (blue) and approximated profile (red) along the tape (top) and in the x-y plane.
2a. Project positions onto the profile
We now load the data container and interpolate the (tape) coordinates xe onto the 2d positions t and z. We then run through all electrodes and change their 3d positions to the 2d positions before saving the data to a 2d file.
data = pg.DataContainer("profile.dat")
xe = [pos[0] for pos in data.sensorPositions()]
x2d = np.interp(xe, tt, t)
z2d = np.interp(xe, tt, z)
for i in range(data.sensorCount()):
    data.setSensorPosition(i, pg.RVector3(x2d[i], 0.0, z2d[i]))
data.save("profile2d.ohm", "a b m n u i", "x z")
Furthermore we create a map from the 2d to the 3d coordinates for result projection.
POS = np.vstack((x2d, x, y))
np.savetxt("pos.map", POS.T)
The result is then displayed in Figure 29. Doing so for several profiles, they can easily be displayed together in ParaView or exported to other programs such as GoCad or any GIS software.
nb = 2
x = pg.asvector(np.arange(xmin - dx*nb, xmax + dx*(nb+1), dx))
z = pg.asvector(np.arange(-np.ceil(zmax/dx), 1.) * dx)
c.setMarker(2)  # set all markers to 2
Figure 30: Created mixed mesh (left) and inversion result using regular meshes
Regular meshes may be advantageous over unstructured meshes if either (i) a predominant layering should be better imaged or (ii) cells with constant size are required, e.g. for crosshole measurements.
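A hedged sketch of how the fragments above could be assembled into a regular 2d parameter grid; note that the create2DGrid call is only assumed here as the 2d counterpart of the create3DGrid call used in section E.5, and all numeric values are placeholders:
import numpy as np
import pygimli as pg

dx, nb = 1.0, 2                   # assumed example values: cell size and number of boundary cells
xmin, xmax, zmax = 0., 47., 10.   # assumed example extents of the electrode layout
x = pg.asvector(np.arange(xmin - dx * nb, xmax + dx * (nb + 1), dx))
z = pg.asvector(np.arange(-np.ceil(zmax / dx), 1.) * dx)
mesh = pg.Mesh(2)
mesh.create2DGrid(x, z)           # assumed 2d counterpart of create3DGrid (see section E.5)
for c in mesh.cells():
    c.setMarker(2)                # set all markers to 2, as in the snippet above
print(mesh)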
Note that a similar function exists with create3DMesh. For an appropriate outer space we insert this mesh into a big box and fill the intermediate space with triangles using appendTriangleBoundary:
mesh2 = appendTriangleBoundary(mesh, 70., 70., marker=0, isSubSurface=True)
print(mesh, mesh2)
marker2 = mesh2.cellMarker()
showMesh(mesh2, data=marker2, linear=True)
mesh2.save("mesh/mesh.bms")
The Python script is applied using PARAMESH=mkmesh.py. Additionally, we need to specify a region file to treat each layer as a single region of constant resistivity and the outside as background. Between the layers, smoothness constraints are defined using the inter-region constraints:
#No single trans lBound uBound
* 1 log 1 300
#No background
0 1
#Inter-region
* * 1
The figure shows the inversion result derived from a data set using vertically buried electrodes in the freshwater-saltwater interface below the North Sea island Borkum. Since the data fit is at about 1%, we presume that the lithology is just a rough image of fine layering. Below the last clay layer, the salt water with resistivities below 1 Ωm starts.
E.5. Inversion on user-defined regular 3d mesh
(not really stable due to tetgen face creation)
doc/howto/regular mesh3d
Task: Use a regular 3D (FD-like) discretization for inversion with BERT
Problem: Create a mixed mesh of tetrahedra (inversion part) and hexahedra (outer box)
BERT 2 supports hexahedral finite element computation either natively or by dissecting each hexahedron into 5 or 6 tetrahedra. Whereas the first variant is restricted to orthogonal hexahedra, the latter can be used for a deformed regular mesh, e.g. with surface topography. For inversion in unbounded domains, an outer box around the inversion domain is needed to ensure the correctness of the boundary conditions and thus accuracy. If the meshes are prolonged regularly, we obtain very ugly cells with bad numerical behaviour and a huge number of nodes in the forward computation. Therefore we want to prolong the mesh with tetrahedra.
A regular mesh can be created in pygimli using built-in functions. First, vectors are created
(specify min/max/dx) and used to create a mesh:
x = np.arange(xmin - dx*nb, xmax + dx*(nb+1), dx)
y = np.arange(ymin - dx*nb, ymax + dx*(nb+1), dx)
z = np.arange(-np.ceil(zmax/dx), 1.) * dx
mesh = pg.Mesh(3)
mesh.create3DGrid(x, y, z)
A second mesh is created by H2 refinement of the hexahedra into tetrahedra. The outer mesh boundaries, which have only one neighbouring cell, are set to marker 1 to be identified later.
mesh2 = pg.Mesh(3)              # new 3d mesh
mesh2.createH2Mesh(mesh)        # refined
print(mesh, mesh2)              # display node/cell/boundary numbers
for b in mesh2.boundaries():
    if not (b.leftCell() and b.rightCell()):
        b.setMarker(1)
mesh.save("para")
mesh.exportVTK("para")
Finally we extract the outer boundaries of the mesh and save it as poly file:
poly = pg.Mesh(3)
poly.createMeshByBoundaries(mesh2, mesh2.findBoundaryByMarker(1))
poly.exportAsTetgenPolyFile("paraBoundary")
We now create a world (big box with FE modelling boundary markers), merge it with the
parameter outer boundary and add a hole marker in the middle (in bash):
polyCreateWorld -x 100 -y 100 -z 50 world     # make big box
polyMerge world paraBoundary worldSurface     # can take a while
polyAddVIP -H -x 0 -y 0 -z -0.1 worldSurface  # hole marker in the middle
The PLC now contains the faces of both and can be meshed with moderate quality:
tetgen -pazVACq2 worldSurface                           # result will be worldSurface.1.*
meshconvert -vBDM -it -o worldBoundary worldSurface.1   # convert
We now extract the outer surface of the resulting mesh by its boundary marker -2:
worldBoundary = pg.Mesh("worldBoundary.bms")
worldPoly = pg.Mesh(3)
worldPoly.createMeshByBoundaries(worldBoundary,
                                 worldBoundary.findBoundaryByMarker(-2, 0))
worldPoly.exportAsTetgenPolyFile("worldBoundary.poly")
and obtain a triangulated surface mesh of the outer box without the inner one. Therefore we have to merge both surface meshes:
$ polyMerge worldBoundary paraBoundary allBoundary
Note that the latter procedure can take really long since intersections between all face pairs have to be checked. We can avoid this by adding the option -N to polyMerge, because we know there are no intersections. As a side effect, the nodes at the inner box boundary are doubled and have to be removed, which we can accomplish with the script readmypolyfile.py:
Poly = pg.Mesh(3)
f = open("allBoundary.poly", "r")
line1 = f.readline()
nnodes = int(line1.split()[0])
nodes = []
for i in range(nnodes):
    pos = f.readline().split()
    p = pg.RVector3(float(pos[1]), float(pos[2]), float(pos[3]))
    n = Poly.createNodeWithCheck(p)
    nodes.append(n)
line2 = f.readline()
nfaces = int(line2.split()[0])
for i in range(nfaces):
    bla = f.readline()
    ind = f.readline().split()
    fa = Poly.createTriangleFace(nodes[int(ind[1])], nodes[int(ind[2])],
                                 nodes[int(ind[3])], 0)
    fa.setMarker(-2)  # for outer boundary (mixed boundary conditions)
f.close()
Poly.exportAsTetgenPolyFile("test.poly")
The resulting test.poly can now be meshed using tetgen -Y (keep faces):
$ tetgen -pazVACY -q2 test
If there are errors due to intersecting facets they can be identified using
$ tetgen -d test
$ meshconvert -V -it -o wrong test.1
We observed that tetgen 1.4.3 does a good job, in contrast to 1.4.2. Check which version you use (tetgen -h) and make sure you use 1.4.3. If tetgen meshes correctly, we have two meshes, which have to be merged:
Outer = pg.Mesh("right.bms")   # outer box (world)
Inner = pg.Mesh("para.bms")    # inner box (parameters)
print(Outer, Inner)
# set all outer cells to 1 and inner cells to 2
for c in Outer.cells(): c.setMarker(1)
for c in Inner.cells():
    c.setMarker(2)
    Outer.copyCell(c)
print(Outer)
Outer.save("mesh")
Outer.exportVTK("mesh")
And there we go. The resulting mesh.bms can be used for inversion easily just by specifying
PARAMESH=mesh.bms in the cfg file. The whole script is either in doall.sh calling the individual
python files or in makeall.sh, where the python commands are directly invoked.
Figure 32: Boundary of the inner 3D grid (red faces and blue lines) and of the outer box (green
lines)
Note that the cells in the interior have the marker 2, i.e. they describe one region as usual in inversion, however with tetrahedral cells since we divided each hexahedron. In order to avoid this, we do not set the marker and keep the subsequently increasing numbers from create3DGrid, so that every hexahedron forms one region of 5 tetrahedra. By specifying the single attribute in a region file
#No single trans
* 1 log
#No background
0 1
#Inter-region
* * 1
we actually invert for hexahedra with smoothness constraints between each other. Besides the
REGIONFILE=region.control keyword, SECMESHREFINE=0 is useful to avoid refinement of the
already refined cells.
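A hedged sketch of this variant of the merging loop shown above (assuming that copyCell keeps the copied cell's marker, so that the numbering assigned by create3DGrid is preserved):
import pygimli as pg

Outer = pg.Mesh("right.bms")   # outer box (world), as above
Inner = pg.Mesh("para.bms")    # inner box (parameters), as above
for c in Outer.cells():
    c.setMarker(1)             # outer cells stay one background region
for c in Inner.cells():
    Outer.copyCell(c)          # do NOT overwrite the marker (assumed to be kept by copyCell)
Outer.save("mesh")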
E.6. 3D inversion and visualization of 2d profiles in a 3d topography
Since true 3d measurements (a grid of electrodes) are often prohibitive due to the limited number of electrodes, usually pseudo-3d data, i.e. 2d profiles (not necessarily parallel or perpendicular), are measured. For georeferencing, total stations yield a digital elevation model and the course of the profiles as a polygon of 2 or more points. The location of the profiles is stored in a pro file (used e.g. by DC3dInvRes) with a line for each 2d profile looking like:
filename_of_2d_file x1 y1 x2 y2 ...
First we read in the filenames and points:
XL, YL, FILES = [], [], []
fid = open("kilmore2011.pro")
for line in fid:
    FILES.append(line.split()[0])
    XL.append(np.double(line.split()[1::2]))
    YL.append(np.double(line.split()[2::2]))
fid.close()
Next, we read in the DEM and make a Delaunay triangulation
import matplotlib.tri as mtri
x, y, z = np.loadtxt("topo.xyz", unpack=True)
tri = mtri.Triangulation(x, y)
interplin = mtri.LinearTriInterpolator(tri, z)
We create a new 3d mesh with the DEM points as nodes and the triangulation as cells:
for x1, y1, z1 in zip(x, y, z):
    mesh.createNode(x1, y1, z1)
for v in tri.triangles:
    mesh.createCell(pg.Triangle(mesh.node(int(v[0])),
                                mesh.node(int(v[1])), mesh.node(int(v[2]))))
For a 2d inversion we need the direction along the profile. The tape-true coordinate mapping
is saved to some temporary file that will later be used.
x2d = np.hstack((0., np.cumsum(np.sqrt(np.diff(xe)**2 + np.diff(ye)**2))))
ALL = np.vstack((te, xe, ye, ze))
np.savetxt("positions/" + datfile.replace(".dat", ".txyz"), ALL.T)
First we want to generate a valid 2d file using the latter coordinate, saving exactly the fields read in:
for i in range(data.sensorCount()):
    data.setSensorPosition(i, pg.RVector3(x2d[i], 0.0, zn[i]))
filename2d = "2dfiles/" + datfile.replace(".dat", "2d.dat")
data.save(filename2d, data.inputFormatString())
Next, we set the coordinates to the real position and subsequently add the data to a previously
created empty data container.
for i in range(data.sensorCount()):
    data.setSensorPosition(i, pg.RVector3(xe[i], ye[i], ze[i]))
vtkfile = datfile.replace(".dat", ".vtk")
mesh = pg.Mesh(vtkfile)
xm = pg.x(mesh.positions())  # 2d x
zn = pg.y(mesh.positions())  # z
Then we interpolate from the 2d coordinates to the 3d coordinates. In contrast to numpy.interp we also need to extrapolate, since the mesh usually extends beyond the electrodes. For this reason we write a simple interpolation-extrapolation function:
def myextrap(x, xp, yp):
    """numpy.interp function with linear extrapolation"""
    y = np.interp(x, xp, yp)
    y = np.where(x < xp[0], yp[0] + (x - xp[0]) * (yp[0] - yp[1]) / (xp[0] - xp[1]), y)
    y = np.where(x > xp[-1], yp[-1] + (x - xp[-1]) * (yp[-1] - yp[-2]) / (xp[-1] - xp[-2]), y)
    return y
and use this function for determining x and y positions of the mesh nodes
xn = myextrap(xm, x2d, x)
yn = myextrap(xm, x2d, y)
Finally we set the positions of the mesh nodes and export the vtk.
for i in range(mesh.nodeCount()):
    mesh.node(i).setPos(pg.RVector3(xn[i], yn[i], zn[i]))
mesh.exportVTK("3dvtk/" + vtkfile)
Now all vtk files in the folder 3dvtk can be loaded into ParaView to form a fence diagram. Additionally, the result from the 3d inversion can be loaded and compared to the 2d inversions. If the topography does not show a large curvature, the 2d inversions could alternatively be done with the original (tape measure) files, and the back-transformation will add the topography:
xn = myextrap(xm, t, x)
yn = myextrap(xm, t, y)
zn = zn + myextrap(tn, t, z)
for i in range(mesh.nodeCount()):
    mesh.node(i).setPos(pg.RVector3(xn[i], yn[i], zn[i]))
The resulting vtk files can be viewed together in ParaView and their visualization is very easy
to control by grouping the files. For even more convenience, the meshes can be joined into a
single vtk. Finally, the result looks like in Figure 33.
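A hedged sketch of such a joining step, reusing the copyCell-based merging shown earlier; the file names are placeholders and cell data such as resistivity would have to be transferred separately:
import glob
import pygimli as pg

merged = pg.Mesh(3)
for fname in sorted(glob.glob("3dvtk/*.vtk")):
    part = pg.Mesh(fname)             # load one profile mesh
    for cell in part.cells():
        merged.copyCell(cell)         # copy all cells into a common mesh
merged.exportVTK("allprofiles")
print(merged)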
E.7. Use a Hydrus3D mesh of a soil column for forward calculation
With the monitoring of hydraulic processes using ERT, the demand arises for an efficient forward calculation for a given water content and thus resistivity. Hydrus3D uses meshes of tetrahedra that are usually regularly arranged. For an appropriate forward calculation, boundaries far away from the parameters are needed. A further regular prolongation would cause a huge increase in the number of nodes and thus runtime. Therefore the parameter box is surrounded by unstructured tetrahedra whose resistivity is determined using parameter prolongation, as done in inversion. The problem is very similar to the one of creating regular 3d meshes for inversion, except that (a) the input is already a tetrahedral mesh and (b) the output must be defined such that a forward calculation using dcmod can be called directly. According to (a), the first part of creating the mesh and refining it is omitted. Instead, the Hydrus mesh is imported and saved using the following pygimli script. (Unzip MESHTRIA.TXT.gz using gzip -d to obtain the Hydrus3D mesh.)
mesh = pg.Mesh(3)                   # new mesh
f = open("MESHTRIA.TXT", "r")
for i in range(6):                  # read first 6 lines
    line1 = f.readline()
line1 = f.readline()
line1 = f.readline()
cells = []
for ci in range(ncells):            # read all cells and append in mesh
    pos = f.readline().split()
    i, j, k, l = int(pos[1]), int(pos[2]), int(pos[3]), int(pos[4])
    c = mesh.createTetrahedron(nodes[i-1], nodes[j-1], nodes[k-1], nodes[l-1])
    cells.append(c)
f.close()
poly = pg.Mesh(3)
poly.createMeshByBoundaries(mesh, mesh.findBoundaryByMarker(1))
poly.exportAsTetgenPolyFile("paraBoundary")
The rest is just as in the other example, i.e. Part 2 (create the world and mesh it to obtain the boundary) and Part 3 (merge the surfaces of the inner and outer box and mesh them together), except that the markers for the individual cells are not constantly 2 (inversion) but counted starting from 1.
# merge inner mesh into outer
for m, c in enumerate(Inner.cells()):       # run through all cells
    nodes = pg.stdVectorNodes()
    for i, n in enumerate(c.nodes()):       # add nodes if not existent
        nodes.append(Outer.createNodeWithCheck(n.pos()))
    Outer.createCell(nodes, m + 1)
Note further that the cells in the outer box keep the marker 0. In order to make a forward
calculation we first need to map the markers into resistivities by creating a two-column rho.map
file starting with 0 0 (marker zero is mapped into 0 resistivity, which means filling up with
neighboring values):
res = ...  # load from file
ind = np.arange(len(res) + 1)
A = np.vstack((ind, np.hstack((0, res)))).T
f = open("rho.map", "w")
for row in A:
    f.write("%d" % row[0] + "\t" + "%f" % row[1] + "\n")
f.close()
The final call to dcmod then reads
$ dcmod -v -S -a rho.map -s datafile mesh.bms
E.8. How to use Hydrus2D simulations for synthetic data inversion
Solution:
Figure 34 shows the mesh and the given resistivity distribution as obtained from a hydraulic simulation and application of Archie's law.
First we read in MESHTRIA.TXT using the function readHydrus2dMesh; see pygimli/utils/mesh.py for the code as an example of how to read ASCII files. Then we create neighbouring information and number the markers of all cells beginning with 1. With showMesh(mesh) we can display it (from pygimli.viewer import showMesh).
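A minimal sketch of these first steps (the exact signature and import path of readHydrus2dMesh are assumed here):
import pygimli as pg
from pygimli.utils import readHydrus2dMesh   # assumed import path (see pygimli/utils/mesh.py)
from pygimli.viewer import showMesh

mesh = readHydrus2dMesh("MESHTRIA.TXT")      # read the Hydrus2D mesh
mesh.createNeighbourInfos()                  # create neighbouring information
for i, c in enumerate(mesh.cells()):
    c.setMarker(i + 1)                       # number the cell markers beginning with 1
showMesh(mesh)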
import pygimli as pg
xm = np.mean(x)
ym = np.mean(y)
angles = np.array(np.angle(x - xm + (y - ym) * 1j))
ind = np.argsort(angles)
We create an empty mesh, create the boundary nodes and connect them by creating edges:
poly = pg.Mesh(2)
for i in range(len(ind)):
    poly.createNode(bNodes[ind[i]].pos())
for id in range(0, poly.nodeCount()):
    poly.createEdge(poly.node(id), poly.node((id + 1) % poly.nodeCount()),
                    pg.MARKER_BOUND_HOMOGEN_NEUMANN)
We now extract the four corners to construct the outer box.
y1b = min(y[x == min(x)])
y1t = max(y[x == min(x)])
y2b = min(y[x == max(x)])
y2t = max(y[x == max(x)])
x2t = max(x[y == max(y)])
xbound, ybound = 500., 500.
n1 = poly.createNode(pg.RVector3(min(x) - xbound, y1t, 0.0))
n2 = poly.createNode(pg.RVector3(min(x) - xbound, y1b - ybound, 0.0))
n3 = poly.createNode(pg.RVector3(max(x) + xbound, y2b - ybound, 0.0))
n4 = poly.createNode(pg.RVector3(x2t + xbound, max(y), 0.0))
The four corners are connected with each other and with the two upper corners of the inner
box:
poly.createEdge(n1, n2, pg.MARKER_BOUND_MIXED)
poly.createEdge(n2, n3, pg.MARKER_BOUND_MIXED)
poly.createEdge(n3, n4, pg.MARKER_BOUND_MIXED)
i1 = int(np.nonzero((x[ind] == min(x)) & (y[ind] == y1t))[0][0])
i2 = int(np.nonzero((x[ind] == x2t) & (y[ind] == max(y)))[0][0])
poly.createEdge(n1, poly.node(i1), pg.MARKER_BOUND_HOMOGEN_NEUMANN)
poly.createEdge(poly.node(i2), n4, pg.MARKER_BOUND_HOMOGEN_NEUMANN)
We can output the poly file as vtk and show it:
poly.exportVTK("out.poly")
showMesh(poly)
We create a new triangle object, put a marker in and generate the mesh:
tri = pg.TriangleWrapper(poly)                  # new triangle object
tri.addRegionMarkerTmp(0, pg.RVector3(mesh.xmin() + 1.,
                                      mesh.ymin() + 1.), 1)
tri.setSwitches("-pzeAfaq34")                   # plc, zero index, attribute, quality 34
mesh2 = pg.Mesh(2)
tri.generate(mesh2)
Finally we copy all cells from the original mesh into the obtained mesh:
for cell in mesh.cells():
    mesh2.copyCell(cell)
showMesh(mesh2)
mesh2.save("mesh.bms")
Since a forward calculation requires higher accuracy, we decide to do it using total potential calculation on a quadratic mesh:
mesh3 = pg.Mesh(2)
mesh3.createP2Mesh(mesh2)
mesh3.save("meshFor.bms")
print(mesh3)
Figure 35: Input for meshing (red and green lines) and combined mesh.
Now we come to the actual forward calculation: The resistivity information is retrieved from
a VTK file by importing:
vtk = pg.Mesh(2)
vtk.importVTK("resistivity.vtk")
res = np.array(vtk.exportData("Resistivity"))
The resistivity vector is brought into a map file using a std C++ map with the number (marker) in the first column (key) and the resistivity in the second:
nr = np.arange(len(res)) + 1
resmap = np.vstack((nr.T, res.T)).T
cellMap = pg.stdMapF_F()
for key, val in resmap:
    cellMap[key] = val
The resulting resistivity map is applied and empty cells, i.e. cells in the created outer region
(marker 0) obtain resistivity by prolongation:
mesh3.mapCellAttributes(cellMap)
mesh3.fillEmptyCells(mesh3.findCellByAttribute(0.0), -1.0)
We now load the scheme file (a data file without actual data), create a new modelling operator and calculate the response. The resulting voltages u are treated as resistances R.
data = pg.DataContainer("hydrus2d.shm")
f = pg.DCMultiElectrodeModelling(mesh3, data, True)
f.calculate(data)
data.set("r", data.u())
data.save("out.dat", "a b m n r")
Now we can do inversion of the data set by using the bert script. The problem here is that
we have both surface and subsurface electrodes which is not yet automatically covered by bert
(but will be soon). Therefore we create a dummy data file with only the surface electrodes,
use it for mesh generation and invert the synthetic data after adding some Gaussian noise to
it:
nsel=36   # number of surface electrodes
ERR=1
infile=out.dat
outfile=out0.dat
datfile=syn.dat
# create dummy data file with only surface electrodes
echo $nsel > $outfile
head -n 2 $infile | tail -n 1 >> $outfile
head -n $[nsel+2] $infile | tail -n $nsel | tac >> $outfile
echo 0 >> $outfile
# create cfg file for generating meshes
bertNew2DTopo $outfile > bert0.cfg
echo PARADEPTH=300 >> bert0.cfg
echo PARADX=0.2 >> bert0.cfg
# do only the meshes with the dummy file
bert bert0.cfg meshs
# noisify synthetic data with Gaussian noise
dcedit -vSEN -e $ERR -u 0 -o $datfile $infile
# create normal cfg file and do inversion
bertNew2DTopo $datfile > bert.cfg
echo LAMBDA=300 >> bert.cfg
echo ZWEIGHT=0.1 >> bert.cfg
echo BLOCKYMODEL=1 >> bert.cfg
# cp mesh.bms mesh/   # use Hydrus mesh for inversion
bert bert.cfg pot calc
Figure 36 shows the obtained inversion result, which is close to the synthetic model. The very
low contrasts in the model and limited number of electrodes prevent a better reconstruction.
In case of realistic heterogeneities such a result could only be retrieved by a time-lapse scheme,
where all systematic effects and small-scale structures will be cancelled out.
E.9. Simulate CEM ring electrodes in tilted boreholes
doc/howto/tilted boreholes
Contribution: Laure Beff (UC Louvain)
Task: Simulate the real geometric factors of ring electrodes using CEM-FEM
Problem: Define a mesh that contains tilted borehole tools with ring electrodes
The borehole probes are cylinders with metal ring electrodes at certain positions. Cylinders
are approximated by regular prisms (e.g. 6 segments), which can be created by
polyCreateCube -Z -s 6 cube
creating a 1x1x1 m cylinder centered at the origin. The space of the cylinder is excluded from meshing by adding a hole marker using -H. The probe tool itself consists of a sequence of non-electrode pieces and electrodes, which are subsequently merged with each other; after translation to the borehole position they are merged with the big box (the world in which the simulation is done) with the typical boundary conditions:
polyCreateWorld -x $SIZE -y $SIZE -z $SIZE -m 1 $MESH   # big box = world
Beforehand, some lengths are stored in scalar and vector variables, e.g.
RAD=0.045                                          # radius of borehole
DIST=(0.103 0.152 0.202 0.252 0.302 0.353 0.100)   # electrode distances
At the beginning, we create prototypes of the two different cylinders, shift them such that the top is at zero and scale them to the correct radius:
polyCreateCube -H -Z -s $SEG cyl   # create (closed) unit cylinder
polyTranslate -z -0.5 cyl          # shift to upper boundary = surface
polyScale -x $RAD -y $RAD cyl      # scale to correct radius
Since the distances differ, their heights will be scaled later; the electrode prototype is now created by
cp cyl.poly elec0.poly      # unit electrode
polyScale -z $DRING elec0   # right height
Each borehole b is constructed at the origin by starting with a cylinder element
cp cylC.poly borehole.poly   # uppermost piece
A variable z holds the current depth and is initialized with the distance to the first electrode,
which is retrieved from the vector PD by using
z=${PD[$b]}
After each element is added, z is increased by the individual length using awk:
z=`echo $z $DRING | awk '{ print $1 + $2 }'`   # output in z
The standard electrode is copied and translated, and an electrode marker $ELM is associated, which is later used to identify the CEM electrode (starting from -10000 downwards). Finally the electrode is merged with the borehole:
cp elec0.poly elec.poly
polyAddVIP -B -m $ELM elec
polyTranslate -z $z elec
polyMerge borehole elec borehole
The same is done with the pieces between the electrodes, i.e. scale, translate and merge. The inner loop
for e in {0..6}
do
...
done
runs over the electrodes and the outer one (b) over the borehole tools. Before merging with the world, the electrodes are translated to their positions. However, the boreholes are tilted by a specific angle (also stored in the vector ROT). The rotation around the y axis (i.e. y remains constant) is done by
polyRotate -R -y ${ROT[$b]} borehole   # R for rad (default = deg)
The uppermost face (of each of the 6 electrodes) intersects the earth's surface and must therefore be moved to z=0. Therefore a copy of the borehole poly file is created and the lines with the points close to the surface are changed accordingly using head and tail commands:
head -n1 borehole.poly > newborehole.poly   # keep line 1, change 2+3
head -n3 borehole.poly | tail -n2 |
  awk '{ print $1 "\t" $2 "\t" $3 "\t" 0 "\t" $5 }' >> newborehole.poly
head -n6 borehole.poly | tail -n3 >> newborehole.poly   # keep 4-6 etc.
The problem is that the rectangular side faces of the uppermost segment are not coplanar anymore due to the movement. One solution could be a more sophisticated positioning. Another way is to dissect all rectangles into triangles, which are coplanar by definition. This is achieved by transforming the very first piece (the beginning piece) to the common STL format (which uses triangles) and then back to POLY using polyConvert:
polyConvert -S borehole.poly
polyConvert -P borehole.stl
Unfortunately, the hole marker is lost in the STL file, so it must be re-added:
polyAddVIP -H -x 0 -y 0 -z -0.01 borehole
Note that all PLCs can be converted into VTK (and viewed in ParaView) using
polyConvert -V file.poly
When looking at the PLC, we see that the lower face of the upper piece is dissected, whereas the upper face of the adjacent electrode at the same position still has 6 nodes. Therefore we create all the electrode cylinders with the option -O so that the top and bottom faces are omitted. As a consequence, we have a valid mesh input, which can be meshed using tetgen and transformed into BMS/MESH/VTK format files using:
tetgen -pazVACq1.15 $MESH                 # mesh the PLC, result: $MESH.1.*
meshconvert -vBDMV -it -o $MESH $MESH.1   # convert to BMS/MESH/VTK
To achieve high accuracy we further transform the mesh to quadratic elements using -p
meshconvert -vBD -p -o ${MESH}P $MESH.bms
Note that if there are meshing errors the problematic zones can be investigated using
tetgen -d meshfile
and the resulting mesh only contains the intersecting or non-coplanar elements. If there are additional surface electrodes, they can be added using polyAddVIP:
polyAddVIP -f surfaceelec.xyz -m -99 $MESH
They should be locally refined to a/10, where a is the electrode distance:
polyRefineVIPS -z -0.015 $MESH
The whole script reads as below.
Now we want to model with the script and investigate the difference to point electrodes. First
we run dcmod with -1 and the scheme file applied
dcmod -v -1 -s datafile mesh/meshP.bms   # outputs num.ohm
Note that although dcmod tries to find electrodes by their position, there is a priority: CEM electrodes are searched first, then node electrodes, and then free electrodes. In case of small distances between the different types a wrong association may occur, resulting in free electrodes. We therefore recommend sorting the data such that CEM electrodes always come before node and free electrodes. Next, we create a file containing the analytical geometric factors using
dcedit -f "a b m n k" -o geom.dat datafile
Finally we use this file to filter the numerical results with these geometric factors.
dcedit -vSB -c geom.dat -o eleff.dat num.ohm
As the modeled resistance is R = 1/k_num, the resulting apparent resistivity ρa = k_ana · R = k_ana/k_num is the electrode effect.
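As a small numeric illustration of this relation (all values are made up):
import numpy as np

k_ana = np.array([24.1, 36.5, 51.0])   # analytical geometric factors (made-up values)
k_num = np.array([23.6, 36.9, 50.2])   # numerical factors from the CEM simulation (made-up values)
R = 1. / k_num                         # modeled resistances for unit resistivity
rhoa = k_ana * R                       # = k_ana / k_num, the electrode effect
print(rhoa)                            # deviations from 1 quantify the electrode effect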
E.10. How to define arbitrary geometries, boundaries and regions using an external mesh generator (Gmsh)
Task: Construct a mesh with arbitrary geometry, boundaries and regions for computations in BERT.
Problem: For complex geometries, mesh construction using the poly tools can be cumbersome and lacks straightforward visual inspection.
Solution: Create the mesh in Gmsh, a 3D finite element mesh generator with parametric input
and advanced visualization capabilities, and convert it to BERT for subsequent modeling and
inversion.
When the scientific task requires a complex finite-element discretization (e.g. incorporation of structural information, usage of a complete electrode model (CEM), etc.), external meshing tools with visualization capabilities may be the option of choice for some users. In general, the bindings provided by pygimli allow interfacing any external mesh generation software.
This HowTo presents an example using Gmsh (Geuzaine and Remacle, 2009). Gmsh allows for parametric input, i.e. physical boundaries and regions (and any other input) can be specified interactively using the graphical user interface or Gmsh's own scripting language. A lot of thorough tutorials can be found on the Gmsh website (geuz.org/gmsh) or elsewhere. Here, a crosshole ERT example with geological a priori information is presented with a focus on the usage in BERT.
Geometry We start with the definition of several points to lay out the main geometry. A point is created via the graphical user interface as illustrated in Fig. 37.
Figure 37: Steps to create a point via the graphical user interface.
We create a large domain to solve the forward problem and specify the coordinates as well as a characteristic length (Fig. 38) to constrain the relative size of the mesh elements at that point (this is also useful for near-electrode refinement, for example). In the geometry file being created, this step corresponds to the following command (during this HowTo we will switch between the GUI input and the scripting language; Gmsh's reload button allows for quick and straightforward interaction between both modes):
Point(1) = {-5000, 0, 0, cl1};
Note that it is convenient to replace the characteristic length by a variable. Subsequent to the definition of the corner points, we can set up the boundaries by connecting the points created, as shown in Fig. 39.
The corresponding command in the script to connect points 1 and 5 would look like this:
Line(1) = {1, 5};
Obviously, we also have to define the electrode positions:
For i In {0:9}
  Point(newp) = {15, 0, -4*(i+2), cl2}; // Borehole 1
  Point(newp) = {35, 0, -4*(i+2), cl2}; // Borehole 2
EndFor
Similarly, we define a set of points describing a geological body and connect them with a spline
curve:
Figure 39: Setting the parameters of a point.
When the surfaces (or volumes in 3D) have been defined, the mesh can be generated by simply clicking the 2D button in the mesh menu (Mesh - 2D). As you will notice, the electrodes are not located on node points, as they do not lay out any geometric feature. To change this, we can embed them in the surfaces (Mesh - Define - Embedded Points) or directly in the script via:
Point{10, 12, 14, ..., 17, 23, 25, 27} In Surface{106};
Point{18, 19, 21} In Surface{104}; // electrodes within the target
Boundaries and regions Since Gmsh allows for parametric input, we can finally specify the
boundary conditions and region marker. This is done in the Physical Groups section under
Geometry. The group numbers can be changed within the script. Number 1 is assigned to a
Neumann-type boundary condition and number 2 to a mixed one.
Physical Line(1) = {3, 2, 1}; // Free surface
Physical Line(2) = {4, 5, 6}; // Mixed boundary conditions
The indices of the regions will directly map to the region marker in BERT.
Physical Surface(1) = {102}; // Outer region
Physical Surface(2) = {106}; // Inversion region
Physical Surface(3) = {104}; // Geological body
Finally, we assign all electrodes to a Physical Group with the marker 99.
Physical Point(99) = {9, 11, ..., 24, 26, 28}; // Setting electrode marker (99)
That's it! Now, you can re-run the meshing algorithm and save the result.
Note that in addition to the characteristic length at each point, there are many different ways
to constrain the element size (in general or locally) and the resulting mesh quality, which will
not be discussed here.
Import to BERT Any Gmsh output (2D and 3D) can be imported using pygimli and subse-
quently saved to the binary format used in BERT:
from pygimli.mesh import readGmsh
mesh = readGmsh("mesh.msh", verbose=True)
mesh.save("mesh.bms")
The resulting *.bms file can be directly passed to dcmod and dcinv:
dcmod -vS -a attribute.dat -s dipole.shm -o dipole.ohm mesh_mod.bms
dcedit -vSBN -e 1 -u 0 "filter=k>1e4:A" -o dipole.data dipole.ohm
dcinv -vJS -l 50 -p mesh_inv.bms dipole.data
For the sake of illustration, the example presented was chosen to be simple and two-dimensional, although Gmsh and the import function provided allow for much more.