Building and Publishing Python-based Flows
Danomics Flows let users write their own tools in Python. These tools range from simple scripts that calculate a single property to complex tools with several user options.
Python tools can be published, which makes them appear as if they were native Danomics tools and makes them available to coworkers in the same organization.
Note: Many of Danomics' built-in Flow tools are written in Python (e.g., NullRepeatedLogSamples and the machine learning tools Train and Predict). The code for these tools is visible to users and can be used as a reference when writing your own.
Publishing a Python Flow Tool
When building Flows it is relatively common to use a Python tool to process data. In many cases these tools are useful to others in your company, and you may want to make them broadly available. Let’s consider an example Flow tool that calculates clay volume from gamma ray, with user options for the clean and clay parameters (in reality you would be far better served doing this in the CPI, but it is a useful illustration of the different components involved).
Once we are done, the final tool will look like this:
The code that generates this is as follows:
For the purposes of publishing the Python tool, the key part is the “name:” field in the tool definition. This name should be unique. To share the tool with other users, click the icon to the right of the Code tab. This opens a “Save as tool” dialog.
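The exact tool-definition syntax is best taken from the built-in tools mentioned above. Purely as a hedged sketch of the underlying calculation in this example (the function name and default parameter values here are illustrative, not part of the Danomics API), a linear gamma-ray clay-volume estimate might look like:

```python
import numpy as np

def clay_volume_from_gr(gr, gr_clean=20.0, gr_clay=120.0):
    """Linear gamma-ray index used as a simple clay-volume estimate.

    gr       : array of gamma ray values (API units)
    gr_clean : user option for the clean (sand) baseline
    gr_clay  : user option for the clay (shale) baseline
    """
    gr = np.asarray(gr, dtype=float)
    vclay = (gr - gr_clean) / (gr_clay - gr_clean)
    # Clamp to the physical range [0, 1]; null samples (NaN) pass through.
    return np.clip(vclay, 0.0, 1.0)

# Example usage with a few sample gamma ray readings.
print(clay_volume_from_gr([15.0, 60.0, 140.0]))  # -> [0.  0.4 1. ]
```

In the published tool, the clean and clay baselines would be exposed as the user options shown in the screenshot above, so coworkers can adjust them without editing the code.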
Once the tool has been published it will appear in the Flow tools folder in the File Navigation menu on the left-hand side. When added to a Flow it will appear under its published name, as shown here.
Use Cases
We have seen companies apply custom-built Python tools in a number of ways. Here are some examples:
- Using Python to deploy machine learning models developed by in-house data science and software teams.
- Performing custom curve renaming and unit conversions based on a company's internal standards for archiving data (see the sketch after this list).
- Performing proprietary calculations for operations such as log cleanup or petrophysical calculations.
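As a rough illustration of the curve-renaming and unit-conversion use case (the mnemonics, conversion factors, and function name below are made up for the example, not any company's actual standard), such a tool might apply a simple mapping:

```python
# Hypothetical rename map and unit conversions for an archiving standard.
RENAME_MAP = {"GR_RAW": "GR", "DEPT_FT": "DEPTH"}
UNIT_FACTORS = {"DEPTH": 0.3048}  # feet -> metres for the renamed depth curve

def standardize_curves(curves):
    """Rename curves and convert units according to the maps above.

    curves : dict mapping curve mnemonic -> list of sample values
    """
    out = {}
    for name, samples in curves.items():
        new_name = RENAME_MAP.get(name, name)
        factor = UNIT_FACTORS.get(new_name, 1.0)
        out[new_name] = [s * factor for s in samples]
    return out

# Example: a depth curve recorded in feet and a raw gamma ray curve.
print(standardize_curves({"DEPT_FT": [1000.0, 1000.5], "GR_RAW": [45.0, 52.0]}))
```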
The ability to publish tools within an organization enables companies to take their work out of R&D and put it into the hands of users. Contact us to learn more.