Danomics is a multi-well reservoir characterization and mapping tool designed to scale petrophysical interpretations to large datasets (1000s of wells). This is done via a combination of automation, establishing a petrophysical blueprint, integrating spatial data, and providing the necessary tools to QC interpretations.
The Key Well
In Danomics users designate one of their wells to be a “key well”. The method and parameter selections used in this well are used to build a petrophysical blueprint that can then be applied to all of the subsequent wells in a project, allowing each well to have its own custom interpretation. The interpretations in non-key wells can then be modified and updated as needed as part of the project QC (typically performed at the end of each module).
Integrating Spatial Data
Users can integrate spatial data in their interpretations by using grids and spatial tables to populate parameters. This allows the user to spatially vary the value of parameters across a project in a predictable and repeatable way.
For example, a user may have a map of source rock maturity (e.g., Vro) that they want to use in their TOC calculation. Values can be extracted from this map and used to populate that parameter on a zone-by-zone basis.
Spatial tables and grids are very similar but there are differences. A grid is essentially a map file of data that has been gridded using a specified algorithm. A spatial table is a series of points (X location, Y location, Value) that are not gridded, but instead are used to triangulate the values for any wells that fall between those points.
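The triangulation described above can be illustrated with a small sketch. This is not Danomics' internal code; it simply shows, for one triangle of spatial-table points, how a value at a well location between those points can be interpolated with barycentric weights:

```python
import numpy as np

def barycentric_interpolate(tri_xy, tri_vals, well_xy):
    """Interpolate a parameter at a well location from three spatial-table
    points (X, Y, Value) by barycentric weighting, mirroring how a spatial
    table triangulates values for wells that fall between its points."""
    a, b, c = (np.asarray(p, dtype=float) for p in tri_xy)
    p = np.asarray(well_xy, dtype=float)
    # Solve p = wa*a + wb*b + wc*c subject to wa + wb + wc = 1.
    T = np.column_stack([a - c, b - c])
    wa, wb = np.linalg.solve(T, p - c)
    wc = 1.0 - wa - wb
    return wa * tri_vals[0] + wb * tri_vals[1] + wc * tri_vals[2]
```

A well exactly on one of the table points recovers that point's value; a well in the interior gets a distance-weighted blend of the three.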
By default, Danomics performs evaluations on a top-to-top basis. That is, if Zones A, B, and C are present, it will calculate from the top of Zone A to the top of Zone B and from the top of Zone B to the top of Zone C. If Zone B is absent, it will then calculate from the top of Zone A to the top of Zone C. Although this behavior is acceptable if there is a complete tops dataset and consistent stratigraphy, for larger projects where an apples-to-apples comparison is needed users should define a stratigraphic column.
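The top-to-top behavior described above can be sketched as follows. This is an illustrative helper, not Danomics' actual API; absent zones are simply omitted from the tops dictionary, and the last zone is skipped because it has no top below it to act as a base:

```python
def toptotop_intervals(tops):
    """Build calculation intervals on a top-to-top basis: each present zone
    runs from its own top down to the next present top. `tops` maps zone
    name -> measured depth; absent zones are left out of the dict."""
    present = sorted(tops.items(), key=lambda kv: kv[1])
    return [(name, top, present[i + 1][1])
            for i, (name, top) in enumerate(present[:-1])]
```

With A, B, and C present this yields A-to-B and B-to-C intervals; drop B from the dict and A runs straight to the top of C, exactly the fallback described above.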
In Danomics the Zone Editor allows users to set a top and a base for each petrophysical zone. Multiple tops or bases can be designated in order of priority. Zones can be activated or deactivated as desired. Setting zones allows for consistency across an area and generally makes the interpretation and QC more straightforward.
Interpretations can be QC’d using a combination of maps, cross-sections, and parameter data tables. All of these methods allow users to visually inspect results so that errant results can either be fixed or omitted from the analysis.
There are pre-built maps and cross-sections that correspond to each of the interpretation modules. It is recommended that the QC be performed at the end of each module as this typically results in an overall faster turnaround time for an interpretation.
Data Input / Output
Danomics accepts the following common data types:
- Well headers can be loaded in CSV, Excel, or IHS 297 formats. These are required if you want to map the data or use cross-sections.
- Well logs are loaded as a zipped folder of LAS files.
- Well Formation Tops can be loaded in CSV or Excel formats. Tops can be modified/picked in the platform.
- Well Production can be loaded in CSV, Excel, or IHS 298 formats. These are required for the DCA Tool.
- Points data can be loaded in CSV or Excel formats. The most common use of points datasets is for core data.
- Spatial Tables can be loaded in CSV or Excel format.
- Shape files can be loaded as *.shp files with associated projection files.
Data that is specific to a well must have a UWI or API number – this is how all data is tied together in the platform. The UWI/API must be identical across all data sets.
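Because the UWI/API must be identical across datasets, it is worth normalizing identifiers before loading. The sketch below is one common approach (not a Danomics feature): strip punctuation and pad to a fixed width. The 14-digit width is an assumption; use whatever convention your datasets share.

```python
def normalize_uwi(uwi, width=14):
    """Normalize a UWI/API string so the same well matches across datasets:
    keep only digits, then left-pad with zeros to a fixed width."""
    digits = "".join(ch for ch in str(uwi) if ch.isdigit())
    return digits.zfill(width)
```

Applied to every input file before loading, this makes "42-123-45678-00-00" and "42123456780000" resolve to the same well.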
Petrophysical results can be exported as a summary spreadsheet and the user can select what curves are to be exported in LAS format.
Danomics Petrophysics is divided into a number of modules that are accessed by the Module Selection dropdown menu. The core modules described below are generally meant to be run in order; however, there is no strict requirement to do so, and the user can move back and forth between modules seamlessly. The advanced modules are typically meant to be run after the core modules, as they build on their functionality, but once again this is not required.
The Setup module allows users to set a few basic options with a number of data conditioning tasks occurring behind the scenes. Users set a “Calculation window”, which dictates the interval over which the curve data will be drawn, and a “Stats Window”, which determines the interval from which many of the basic statistics used throughout the workflow are drawn.
This module performs the following operations automatically with no user input required:
- Curve aliasing is performed utilizing a pre-built (user extensible) alias table of approximately 7500 curve mnemonics.
- Curve unit conversion is performed by using a combination of well log header data and curve statistics to convert units to those used in the curve calculations.
- Curve family conversion is performed so that curves of the same type are converted to the same reference space (e.g., density porosity to bulk density or sonic porosity to travel time).
- Curve lithology standardization is performed to put curves in a common lithological reference space (e.g., density porosity in sand/lime/dolomite units are converted separately, neutron curves are converted to a limestone reference, etc.).
- Curve compositing is performed by evaluating all of the available aliases and selecting the first non-null value encountered, in the order in which aliases are listed.
At the end of this module the user should have a complete set of curves that are consistent with respect to names, units, and lithological reference.
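The compositing rule above (first non-null value, in alias priority order) can be sketched in a few lines. This is an illustration of the rule, not the platform's implementation; NaN stands in for null samples:

```python
import numpy as np

def composite_curve(aliases):
    """Composite a curve by taking, sample by sample, the first non-null
    value across the aliased curves in priority order. `aliases` is a list
    of equal-length arrays ordered by alias priority; NaN marks nulls."""
    out = np.full_like(np.asarray(aliases[0], dtype=float), np.nan)
    for curve in aliases:
        curve = np.asarray(curve, dtype=float)
        # Fill only samples still null, preserving higher-priority picks.
        mask = np.isnan(out) & ~np.isnan(curve)
        out[mask] = curve[mask]
    return out
```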
The Curve Normalization module gives the user options to normalize curves using four different methodologies. These include:
- Simple Shift. This allows the user to shift a curve in a target well, relative to a key well, such that the two curves align at a given percentile value (e.g., align the curves on the p50 value).
- Simple Scale. This allows the user to scale a curve in a target well, relative to a key well, such that the two curves align at two given percentile values (e.g., align the curves so that p10 and p90 values in the target well match those of the key well).
- Advanced Scale. This allows the user to scale a curve in a target well, relative to a key well, such that the two curves align at three given percentile values (e.g., align the curves so that p10, p50 and p90 values in the target well match those of the key well).
- Scaling to Fixed Range. This allows the user to scale a curve to a fixed range. For example, the user designates the p10 to have a value of 30 and the p90 to have a value of 150.
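The two-point ("Simple Scale") case can be written as a linear rescaling between percentile anchors. This sketch assumes the p10/p90 anchors from the example above; it is not Danomics' code, just the arithmetic behind the method:

```python
import numpy as np

def simple_scale(target, key, lo=10, hi=90):
    """Two-point normalization sketch: linearly rescale the target well's
    curve so its p-lo and p-hi percentiles match those of the key well."""
    t_lo, t_hi = np.percentile(target, [lo, hi])
    k_lo, k_hi = np.percentile(key, [lo, hi])
    return k_lo + (np.asarray(target, dtype=float) - t_lo) * (k_hi - k_lo) / (t_hi - t_lo)
```

A Simple Shift is the special case where only one anchor is used and the slope is held at 1; the Advanced Scale adds a third anchor and applies the rescaling piecewise.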
The user also has the option to indicate if “auto-eval” should be used. Auto-eval is the process of parameter remapping. This is where the software evaluates how parameters were picked relative to the statistical distribution of a well, and then attempts to replicate that parameter choice in other wells. For example, if you choose to use GR auto-eval, then the GR clean and clay parameters for Vclay calculations will be mapped to their statistical distribution in the key well, and then remapped to the same points in the target wells. This can essentially be considered the inverse of normalization (e.g., we are changing how we select parameters instead of changing the curve data we select the parameters on).
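The parameter remapping idea can be sketched as follows. This is an illustration of the concept, not the platform's implementation: find the percentile rank of the picked parameter in the key well's distribution, then read off the target well's value at that same rank:

```python
import numpy as np

def remap_parameter(param, key_curve, target_curve):
    """Auto-eval sketch: locate the picked parameter's percentile rank in
    the key well's curve distribution, then return the target well's value
    at that same percentile rank."""
    key_sorted = np.sort(np.asarray(key_curve, dtype=float))
    rank = np.searchsorted(key_sorted, param) / (len(key_sorted) - 1) * 100.0
    return np.percentile(target_curve, min(rank, 100.0))
```

If the GR clay pick sits at the p95 of the key well's GR distribution, the remapped pick lands at the p95 of each target well's distribution.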
Badhole ID & Repair
The Badhole ID & Repair module allows users to flag and repair washout for the density, photoelectric, neutron, and sonic curves. Washout for the density and photoelectric curves is flagged together since they are measured by the same instrumentation. These can be flagged by setting cutoffs on the:
- Borehole rugosity
- Density correction curve
- Bulk density curve
Washout for the Neutron and Sonic curves can be flagged by setting a threshold value for those curves individually.
Users can pad the badhole flag using a lowpass filter to expand the badhole flag to adjacent samples that may be errant but were not flagged.
Once an interval is flagged as badhole, the options include taking no action, nulling out the flagged data, or repairing the curve. Repairing the curve can be done via multiple linear regression (MLR) or via random forest (RF). With the repair options, every potential model of the designated type is built and tested, and the best model (as determined by the minimum sum of squared errors) is applied.
The repaired curve can also be blended with the original curve to create a smoother transition between repaired and original curve intervals.
Curve despiking is also available so that users can remove spikes that fall outside of a given number of standard deviations.
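A standard-deviation despike of the kind described above can be sketched like this. The 5-sample moving-median baseline is an assumption for illustration; Danomics' actual baseline and window are not specified here:

```python
import numpy as np

def despike(curve, n_std=3.0):
    """Despiking sketch: null out samples whose deviation from a 5-sample
    moving median exceeds n_std standard deviations of that deviation."""
    x = np.asarray(curve, dtype=float)
    padded = np.pad(x, 2, mode="edge")
    baseline = np.median(np.lib.stride_tricks.sliding_window_view(padded, 5), axis=1)
    resid = x - baseline
    out = x.copy()
    out[np.abs(resid) > n_std * np.std(resid)] = np.nan
    return out
```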
The Clay Volume module provides a means of calculating the clay volume of each interval using a variety of methodologies. Methods include:
- Single Clay Indicators
  - Gamma Ray (Linear, Larionov Young, Larionov Old, Clavier, Steiber)
  - Spontaneous Potential
- Dual Clay Indicators
  - Neutron Density
  - Sonic Density
  - Neutron Sonic
The final clay volume calculation can then be taken either as the minimum or average of the selected methods.
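The gamma-ray transforms listed above are standard published models, sketched below from their usual forms (IGR is the linear gamma-ray index; the clean and clay picks are user parameters). This is a reference sketch, not Danomics' code:

```python
import numpy as np

def vclay_from_gr(gr, gr_clean, gr_clay, method="linear"):
    """Gamma-ray clay volume via the standard published transforms.
    IGR = (GR - GRclean) / (GRclay - GRclean), clipped to [0, 1]."""
    igr = np.clip((np.asarray(gr, dtype=float) - gr_clean) / (gr_clay - gr_clean), 0.0, 1.0)
    if method == "linear":
        return igr
    if method == "larionov_young":   # Larionov (Tertiary rocks)
        return 0.083 * (2.0 ** (3.7 * igr) - 1.0)
    if method == "larionov_old":     # Larionov (older rocks)
        return 0.33 * (2.0 ** (2.0 * igr) - 1.0)
    if method == "clavier":
        return 1.7 - np.sqrt(3.38 - (igr + 0.7) ** 2)
    if method == "steiber":
        return igr / (3.0 - 2.0 * igr)
    raise ValueError(method)
```

All of the nonlinear models return less clay than the linear index for intermediate IGR values, which is why the final clay volume is often taken as the minimum of the selected methods.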
The TOC Analysis module provides a means of calculating the total organic carbon in weight fraction and the kerogen in volume fraction. TOC can be calculated by the following methods:
- Modified Passey
- Passey Overlay methods
  - Passey Sonic
  - Passey Neutron
  - Passey Density
- RhoB Avg. Method (an average of 12 Schmoker-like models)
The TOC weight fraction is converted to kerogen volume. This can be done either via the Danomics model, Schlumberger model, or via a standard conversion.
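The Passey sonic overlay is based on the published ΔlogR relation, sketched below. Baseline resistivity and sonic values and the LOM maturity are interpreter picks; this is the textbook formula, not Danomics' specific implementation:

```python
import numpy as np

def toc_passey_sonic(rt, dt, rt_base, dt_base, lom):
    """Passey (1990) delta-log-R sketch for the sonic overlay:
    dlogR = log10(Rt / Rt_base) + 0.02 * (dt - dt_base)
    TOC (wt%) = dlogR * 10 ** (2.297 - 0.1688 * LOM)"""
    dlogr = np.log10(np.asarray(rt, dtype=float) / rt_base) \
        + 0.02 * (np.asarray(dt, dtype=float) - dt_base)
    return dlogr * 10.0 ** (2.297 - 0.1688 * lom)
```

On the baseline (no separation) the relation returns zero TOC, as expected for a lean, non-source interval.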
The Mineral Inversion module allows users to calculate the mineral volumes within the well. The mineral volumes are then used to calculate both a grain density and a direct estimate of the porosity (via a summation of the fluid phases).
The quality of the inversion can be evaluated by comparing the values of the predicted curves from the mineral assemblage to the actual curves and by looking at the total and average misfit between the predicted and actual curves.
Users can customize their interpretations by setting the mineral components and properties, fluid components and properties, and curves to be used on a zone-by-zone basis.
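At its core, a mineral inversion of this kind solves a linear system at each depth: endpoint log responses of the chosen components times their unknown volume fractions should reproduce the measured logs. The sketch below shows the unconstrained least-squares core with a unity row forcing volumes to sum to one; production solvers add constraints such as non-negativity, which this sketch omits:

```python
import numpy as np

def invert_minerals(responses, measured):
    """Mineral-inversion sketch: solve R @ v = m in a least-squares sense.
    Columns of `responses` are endpoint log responses of each component
    (minerals plus fluid); `measured` holds the log values at one depth;
    the returned v contains the volume fractions. A row of ones appended
    to R enforces sum(v) = 1."""
    R = np.vstack([responses, np.ones(responses.shape[1])])
    m = np.append(measured, 1.0)
    v, *_ = np.linalg.lstsq(R, m, rcond=None)
    return v
```

The misfit between `R @ v` and the measured logs is exactly the kind of predicted-versus-actual comparison described above for evaluating inversion quality.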
The Porosity Interpretation module provides users a means of calculating the total and effective porosity. Porosity can be calculated by the following methods:
- Mineral Inversion
- Mineral Inversion (strict)
- Grain Density
- RhoMaa-UMaa
The Mineral Inversion porosity uses the grain density from the mineral inversion module, but a user-entered value for the fluid density. The Mineral Inversion (strict) porosity uses the summation of fluids from the mineral inversion module results. The Grain Density porosity uses a deterministically calculated grain density solution, while the RhoMaa-UMaa method uses a grain density derived from the calculated volumes of quartz, calcite, and dolomite from a ternary solution. The porosity solutions can be TOC corrected (not applicable to the mineral inversion, since TOC is implicitly addressed in the mineralogy solution).
The Total Porosity can be converted to an effective porosity by providing a clay porosity estimate.
The Sw Interpretation module provides users a means of calculating the water saturation. Available methods include:
- Archie PhiE
- Archie PhiT
- Modified Simandoux
- Dual Water
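The Archie methods in the list above follow the classic relation, sketched here with the traditional default exponents (in practice a, m, and n are zone parameters):

```python
def archie_sw(rt, phi, rw, a=1.0, m=2.0, n=2.0):
    """Archie water saturation: Sw = ((a * Rw) / (phi**m * Rt)) ** (1/n).
    `phi` is the porosity used by the variant (PhiE or PhiT)."""
    return ((a * rw) / (phi ** m * rt)) ** (1.0 / n)
```

The PhiE and PhiT variants differ only in which porosity is supplied; Modified Simandoux and Dual Water add clay-conductivity terms to handle shaly formations.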
There are also several conventional permeability correlations available including:
- Wyllie Rose
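The Wyllie-Rose correlation is commonly written with a tunable constant, as sketched below. The C = 250 default is the frequently quoted medium-gravity-oil value; treat both the constant and the functional form as assumptions to calibrate against core data:

```python
def perm_wyllie_rose(phi, swirr, c=250.0):
    """Wyllie-Rose-type permeability sketch: sqrt(k) = C * phi**3 / Swirr,
    i.e. k = (C * phi**3 / Swirr) ** 2, with k in millidarcies and phi,
    Swirr as fractions. C is a calibration constant."""
    return (c * phi ** 3 / swirr) ** 2
```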
The Cutoffs module allows the user to flag gross reservoir, net reservoir, and net pay. The user can designate as many criteria as desired. A minimum thickness for flagging intervals can also be applied.
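The cutoff-and-minimum-thickness logic can be sketched sample-wise: flag every sample that passes all criteria, then drop flagged runs shorter than the minimum thickness. All cutoff values below are illustrative, and `min_samples` stands in for a minimum thickness expressed in depth samples:

```python
import numpy as np

def flag_net(phi, vclay, sw, phi_min=0.06, vclay_max=0.4, sw_max=0.6, min_samples=3):
    """Cutoff-flagging sketch: a sample is flagged when it passes all
    criteria; flagged runs thinner than `min_samples` are then removed,
    mimicking a minimum-thickness rule."""
    flag = (np.asarray(phi) >= phi_min) \
        & (np.asarray(vclay) <= vclay_max) \
        & (np.asarray(sw) <= sw_max)
    out = flag.copy()
    # Find run boundaries in the flag and drop runs that are too thin.
    edges = np.flatnonzero(np.diff(np.r_[0, flag.astype(int), 0]))
    for start, stop in zip(edges[::2], edges[1::2]):
        if stop - start < min_samples:
            out[start:stop] = False
    return out
```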
The Lithofacies Modelling module allows users to create lithofacies clusters via a k-means methodology for n clusters using the selected curves.
The Volumetrics module lets users calculate the oil and gas in place for a given unit area. The user can either choose to enter a formation volume factor or to have one calculated using the Vasquez-Beggs and Hall-Yarborough correlations for oil and gas, respectively.
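The per-unit-area calculation follows the classic oilfield volumetric relation, sketched below for one acre of area. A user-entered Bo stands in here for the Vasquez-Beggs correlation mentioned above:

```python
def ooip_per_acre(h_ft, phi, sw, bo):
    """Oil-in-place sketch for one acre of drainage area:
    OOIP [STB] = 7758 * A[acres] * h[ft] * phi * (1 - Sw) / Bo,
    where 7758 converts acre-ft to barrels."""
    return 7758.0 * 1.0 * h_ft * phi * (1.0 - sw) / bo
```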
The Pore Pressure module allows users to calculate the reservoir pressure using the following methods:
- Eaton Sonic
- Modified Eaton Sonic
- Eaton Resistivity
- Modified Eaton Resistivity
- Popielski Sphi-Dphi
An overpressure curve is also calculated based on water gradient information.
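The Eaton sonic method follows the standard gradient-form relation, sketched below. The classic exponent is 3; tuning that exponent is effectively what the "Modified Eaton" variants do:

```python
def eaton_sonic_pp(ob_grad, hydro_grad, dt_normal, dt_observed, exponent=3.0):
    """Eaton sonic pore-pressure sketch in gradient form:
    PP = OB - (OB - Phydro) * (dt_normal / dt_observed) ** exponent.
    Slower-than-normal sonic (dt_observed > dt_normal) raises the
    predicted pore pressure above hydrostatic."""
    return ob_grad - (ob_grad - hydro_grad) * (dt_normal / dt_observed) ** exponent
```

The resistivity variants use the same structure with a resistivity ratio (observed over normal) in place of the travel-time ratio.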
The Geomechanics module provides calculations for the brittleness via:
- Simpleton (Danomics)
- Jarvie et al.
- Wang and Gale
It also calculates rock properties such as:
- Young’s modulus
- Bulk modulus
- Shear modulus
- Poisson’s ratio
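The rock properties listed above follow standard isotropic elasticity from sonic velocities and density. The sketch below uses SI units (m/s, kg/m3, Pa); these are the textbook dynamic-moduli relations, not Danomics-specific formulas:

```python
def dynamic_moduli(vp, vs, rho):
    """Dynamic elastic moduli from Vp, Vs, and bulk density:
    mu = rho * Vs**2, K = rho * (Vp**2 - 4/3 * Vs**2),
    then E and Poisson's ratio from K and mu."""
    mu = rho * vs ** 2                                   # shear modulus
    k = rho * (vp ** 2 - 4.0 / 3.0 * vs ** 2)            # bulk modulus
    e = 9.0 * k * mu / (3.0 * k + mu)                    # Young's modulus
    nu = (3.0 * k - 2.0 * mu) / (2.0 * (3.0 * k + mu))   # Poisson's ratio
    return e, k, mu, nu
```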
The module also allows for the user to designate reservoir and frac barrier criteria to help visualize intervals that may be accessed during a stimulation.
1D Geomechanical Earth Model
The 1D GEM module allows users to convert from a dynamic to static Young’s modulus, determine the unconfined compressive strength, and calculate the internal friction angle. If the minimum and maximum horizontal stresses are set and deviation information is provided, then mudweight windows can be determined.
Shear Log Modelling
The Shear log modelling module allows users to generate a synthetic shear log via the Greenberg-Castagna models for a given lithology from the compressional sonic log. This is especially useful when paired with the geomechanics modules as many wells do not have shear logs available.
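For a single lithology the Greenberg-Castagna prediction is a published polynomial in Vp. The sketch below shows the widely quoted sandstone line (velocities in km/s); other lithologies use their own coefficients, and mixed lithologies take a weighted combination:

```python
def vs_greenberg_castagna_sand(vp_kms):
    """Greenberg-Castagna sandstone relation sketch:
    Vs = 0.80416 * Vp - 0.85588, with Vp and Vs in km/s."""
    return 0.80416 * vp_kms - 0.85588
```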
Fluid Substitution Modelling
This module allows users to calculate the change in shear log and Vp/Vs ratios with changing water saturations. It uses the Batzle-Wang correlations for determining fluid properties.
Cased Hole Interpretation
The Cased Hole interpretation module allows users to calculate saturations using the available sigma logs.