
When Danomics set out to incorporate machine learning into our petrophysical workflow, we had the following objectives:

  • Increase the data available for use in petrophysical calculations
  • Eliminate the need for users to tune correlation parameters zone by zone for empirically based correlations such as the Gardner relation (see the sketch after this list)
  • Improve the quality of interpretations by providing high-quality repaired curves
  • Reduce overall analysis time by significantly cutting the time spent QC’ing results in badhole intervals
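To make the second point concrete, here is a minimal sketch of the Gardner relation, which estimates bulk density from acoustic velocity. The default coefficients below (a = 0.31, b = 0.25 for velocity in m/s and density in g/cc) are the commonly cited values; in practice they must be re-tuned zone by zone and lithology by lithology, which is exactly the manual step we want to eliminate.

```python
def gardner_density(velocity_mps, a=0.31, b=0.25):
    """Estimate bulk density (g/cc) from acoustic velocity (m/s)
    using the Gardner relation: rho = a * V**b."""
    return a * velocity_mps ** b

# Example: a 3000 m/s interval gives ~2.29 g/cc with the defaults,
# but a different zone may need different values of a and b.
print(gardner_density(3000.0))
```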

We do this by providing a simple visual interface that lets users flag badhole by multiple criteria: choose the curves you want to flag and the criteria that meet your needs. Our backend then automatically performs the correction using either a multiple linear regression (MLR) or random forest based repair.
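The sketch below illustrates the flag-and-repair idea in scikit-learn terms; it is an assumption-laden illustration, not Danomics’ actual implementation. It flags samples where the caliper reads more than a chosen tolerance over bit size, trains a random forest on the remaining in-gauge samples, and predicts replacement values in the flagged interval. The curve choices and the 1-inch tolerance are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def repair_curve(inputs, target, caliper, bit_size, tol_in=1.0):
    """Return a copy of `target` with badhole samples replaced by
    random-forest predictions from the other logs.

    inputs   : (n_samples, n_logs) array of predictor curves (e.g. GR, RES)
    target   : (n_samples,) curve to repair (e.g. RHOB)
    caliper  : (n_samples,) caliper log, inches
    bit_size : (n_samples,) bit size, inches
    """
    bad = caliper > bit_size + tol_in           # simple washout criterion
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(inputs[~bad], target[~bad])       # learn from in-gauge hole
    repaired = target.copy()
    repaired[bad] = model.predict(inputs[bad])  # fill flagged samples
    return repaired, bad
```

An MLR-based repair would swap in `sklearn.linear_model.LinearRegression` with the same fit/predict pattern.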

Our work has shown that this methodology significantly reduces the amount of well footage that can’t be interpreted due to washout. For example, in a study of 675 wells our approach allowed users to generate quality interpretations over 97.7% of the intervals flagged as badhole. The repair is also fast and accurate: in blind tests our curve repair achieves a mean absolute error (MAE) of less than 1% and takes less than 0.1 seconds per well.
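For readers curious how a blind test like this could be scored, here is one way to set it up: hold out whole wells, predict the target curve on them, and report MAE as a percentage of the curve’s mean. The per-well grouping and the percentage normalization are assumptions about the metric, not a description of our exact test protocol.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupShuffleSplit

def blind_test_mae_pct(inputs, target, well_ids, random_state=0):
    """Train on ~80% of wells, predict on held-out wells,
    and return MAE as a percentage of the held-out mean."""
    split = GroupShuffleSplit(n_splits=1, test_size=0.2,
                              random_state=random_state)
    train, test = next(split.split(inputs, target, groups=well_ids))
    model = RandomForestRegressor(n_estimators=100,
                                  random_state=random_state)
    model.fit(inputs[train], target[train])
    pred = model.predict(inputs[test])
    mae = np.mean(np.abs(pred - target[test]))
    return 100.0 * mae / np.mean(target[test])
```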

Best of all, this is just a small part of the data preparation and conditioning that Danomics does for you. We alias your curves, standardize units, cast everything into the correct matrix space, composite your curves, provide tools for normalization, and now offer best-in-class curve repair.

Categories: Petrophysics