Flows are how Danomics handles many of the repetitive tasks that are part of your day-to-day workflows. Whatever you are trying to accomplish, there's a Flow for that! (Or at least there will be; we are building out new Flows every day.) Flows will be a new concept to many of you. The idea is borrowed from the seismic processing world: complex, repetitive tasks are really a series of many small steps performed along the way. And because these tasks are repetitive, we don't want to duplicate our efforts over and over again, so we build a process that handles them for us.

The best way to understand this is through a few basic examples.

Example 1: Smoothing a Grid

A common task in geoscience interpretation is generating a smoothed grid of some property to use in presentations, to use as an input to other interpretations, or to support decisions such as where to drill. Regardless of the use case, the steps we take are very similar. For example, to make a smoothed map of average Vclay by zone we would need to do something like the following:

  1. Interpret the clay volume for each zone (for however many wells are in your data set)
  2. Calculate an average of the clay volume over each zone
  3. Filter out errant values and outliers
  4. Generate a grid of the clay volume for each zone
  5. Smooth the grid of the clay volume for each zone
  6. Save the grids

The above is a six-step process, with each step building on the output of the previous one, and it is easy to imagine updates that involve many more steps along the way. If you make a change in the first step (say you update the interpretation for a single well), you have to click through every subsequent step all over again. That is a problem not just in terms of button clicks, but also in ensuring that the process is repeated consistently.
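To make the repetition concrete, here is a minimal Python sketch of what steps 3 through 6 might look like if you scripted them by hand for a single zone, assuming the per-well zone averages from steps 1 and 2 have already been exported. The file names, gridding method, and smoothing radius are purely illustrative assumptions, not anything built into Danomics:

```python
import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import gaussian_filter

# Hypothetical input: one row per well with x, y, and the zone-averaged
# clay volume already computed in steps 1 and 2.
x, y, vclay = np.loadtxt("zone_a_avg_vclay.csv", delimiter=",", unpack=True)

# Step 3: filter out errant values and outliers (Vclay must fall in 0-1).
valid = (vclay >= 0.0) & (vclay <= 1.0)
x, y, vclay = x[valid], y[valid], vclay[valid]

# Step 4: grid the averaged clay volume onto a regular mesh.
xi, yi = np.meshgrid(np.linspace(x.min(), x.max(), 200),
                     np.linspace(y.min(), y.max(), 200))
grid = griddata((x, y), vclay, (xi, yi), method="linear")

# Step 5: smooth the grid (Gaussian filter; the radius is illustrative).
smoothed = gaussian_filter(np.nan_to_num(grid), sigma=3)

# Step 6: save both grids for later use.
np.save("zone_a_vclay_grid.npy", grid)
np.save("zone_a_vclay_grid_smoothed.npy", smoothed)
```

Every run of this script, for every zone and every update, has to reproduce these exact choices; that is the bookkeeping a Flow takes care of for you.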

Flows allow you to perform this process without having to recreate each of the subsequent steps. An example Flow for this might look like the following sequence of steps:

Here is an explanation of what we are doing in each step:

  • LogInput: This tells the Flow which log database we are using.
  • CpiLogCalc: In this step we tell the Flow which property we want (Average Vclay), which CPI file (petrophysical interpretation) to use, which tops file to use, and which well headers to use.
  • PointsSelect: In this step we tell the Flow which zone to use.
  • PointsToGrid: In this step we tell the Flow how to make the grid (which gridding algorithm, etc).
  • GridOutput: In this step we (optionally) save a copy of the pre-smoothed grid for reference.
  • GridSmooth: In this step we apply a smoothing algorithm to the grid we constructed.
  • GridOutput: In this step we save a copy of the smoothed grid for future use.

Now, if we were to update our petrophysical interpretation and wanted to see how that affected our resultant clay volume maps, all we would need to do is re-run our Flow. This, of course, is not limited to clay volume; we could do the same for any property.
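Flows are built in the Danomics interface rather than written as code, but the underlying idea is simply an ordered chain of steps in which each step consumes the previous step's output. Purely to illustrate that idea (none of the function names below are part of the Danomics API, and the step bodies are placeholders), a Python analogue of the Flow above might look like this:

```python
def run_flow(steps, data=None):
    # Each step receives the previous step's output, just like a Flow.
    for step in steps:
        data = step(data)
    return data

# Placeholder steps named after the Flow steps described above; in a real
# Flow each one would do the actual work (read logs, grid, smooth, save).
def log_input(_):           return "log database"
def cpi_log_calc(logs):     return f"average Vclay from {logs}"
def points_select(points):  return f"{points}, selected zone only"
def points_to_grid(points): return f"grid of {points}"
def grid_smooth(grid):      return f"smoothed {grid}"
def grid_output(grid):      print("saved:", grid); return grid

vclay_flow = [log_input, cpi_log_calc, points_select, points_to_grid,
              grid_output, grid_smooth, grid_output]

# Updating the upstream interpretation just means re-running the chain:
run_flow(vclay_flow)
```

The placeholder bodies are not the point; the shape is. Change the settings in any one step and the same single call regenerates everything downstream of it.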

Example 2: Curve Editing

Imagine you work with a lot of seismic data, and as part of this you need to clean up the sonic logs across a number of different projects so that you can use them to generate synthetic seismograms. That may involve operations such as the following:

  1. Reading in the log database
  2. Removing bad data caused by digitization errors
  3. Finding data that falls outside 2.5 standard deviations
  4. Creating a smoothed version of the sonic curve
  5. Replacing the data that is outside 2.5 standard deviations with the smoothed curve
  6. Saving the new edited logs out

If you did this separately for each project, you would risk being inconsistent from project to project if you didn't remember exactly how you performed each step (for example, how exactly did you do the smoothing in step 4?). Batch operations like this are exactly what Flows are designed for. And if you later update your database or decide to perform the data replacement at a different threshold, you simply make the change in one place and re-run the process. Your Flow for this might look like the following:

  • LogInput: You select the database of well logs to use.
  • Python: You delete or replace data that falls outside of a physically reasonable range.
  • Python: You calculate the standard deviation of the data.
  • Python: You create a smoothed version of the sonic curve.
  • Python: You replace the data outside your standard deviation threshold with the smoothed curve.
  • LogOutput: You output the final, edited log for use.
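As a rough illustration of what those Python steps might contain, here is a minimal sketch of the despike-and-replace logic for a single sonic curve. The 2.5-standard-deviation threshold matches the example above; the valid-value range, smoothing window, and function name are illustrative assumptions:

```python
import numpy as np

def clean_sonic(dt, valid_range=(40.0, 240.0), n_std=2.5, window=21):
    """Despike a sonic curve (dt in us/ft) and replace flagged samples
    with a smoothed version of the curve. Thresholds are illustrative."""
    dt = dt.astype(float)

    # Remove obvious digitization errors: values outside a physically
    # plausible range are treated as missing.
    dt[(dt < valid_range[0]) | (dt > valid_range[1])] = np.nan

    # Build a smoothed version of the curve (simple moving average).
    filled = np.where(np.isnan(dt), np.nanmean(dt), dt)
    smoothed = np.convolve(filled, np.ones(window) / window, mode="same")

    # Flag samples more than n_std standard deviations from the mean and
    # replace them (and the removed samples) with the smoothed curve.
    outliers = np.abs(dt - np.nanmean(dt)) > n_std * np.nanstd(dt)
    replace = outliers | np.isnan(dt)
    dt[replace] = smoothed[replace]
    return dt
```

The LogOutput step would then write the returned curve back out for use in your synthetics workflow.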

You could even stack on an additional Flow for generating the wavelet, convolving it with the newly edited sonic curve, and producing the synthetic seismogram.

Flows for Everything

If you take the time to study the work products you generate during a project, you will see how much repetitive work is involved and how many ways Flows can streamline and simplify your interpretation process. Here are some of the areas where we see Flows being especially useful:

  • Building out consistent sets of maps for a project
  • Performing operations on maps such as smoothing, constraining values, etc.
  • Identifying and eliminating outliers from results
  • Renaming well logs en masse (data management)
  • Processing deviation surveys
  • Calculating lateral lengths and the spacing between wells
  • Predicting missing well logs via machine learning
  • Extracting values from grids or seismic

This list could go on for pages, especially as we move to provide users with ways to run their own custom code in Python. Combined with the existing petrophysics, DCA, and mapping capabilities, Flows give you a way to take your interpretations to the next level.
