In the past decade, the oil and gas industry has experienced a significant increase in the uptake of primary and secondary instrumentation that makes use of smart transmitter technologies. These devices are capable of outputting large datasets over industrial networks such as WirelessHART, Modbus, Foundation Fieldbus and Profibus. The data itself can contain process values relating to the device’s primary function, e.g., fluid flow measurement, as well as diagnostic information used to assess device performance or gain secondary information on the process stream.
Historically, this data has been used by metering technicians and commissioning engineers for maintenance and quick checks, to ensure that the device is performing as expected before integration into a facility supervisory control and data acquisition (SCADA) system. Many facilities also use the diagnostic values for simple range checking and alarming to indicate when a given parameter has gone out of acceptable conditions. When implemented correctly, this information can alert facility operators to potential problems, allowing for targeted investigation and preventative maintenance.
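The range checking and alarming described above can be sketched in a few lines of code. The following is a minimal illustration only; the diagnostic parameter names and limit values are hypothetical and do not correspond to any specific device:

```python
# Hypothetical diagnostic range check: flag any parameter outside its
# accepted (low, high) bounds. Names and limits are illustrative only.
LIMITS = {
    "drive_gain_pct": (0.0, 15.0),       # e.g., a Coriolis drive gain
    "electronics_temp_c": (-20.0, 70.0),
    "signal_to_noise_db": (20.0, 120.0),
}

def check_diagnostics(reading: dict) -> list:
    """Return alarm messages for each parameter outside its accepted range."""
    alarms = []
    for name, value in reading.items():
        low, high = LIMITS.get(name, (float("-inf"), float("inf")))
        if not (low <= value <= high):
            alarms.append(f"{name}={value} outside [{low}, {high}]")
    return alarms

reading = {"drive_gain_pct": 22.4, "electronics_temp_c": 35.0}
print(check_diagnostics(reading))  # flags the out-of-range drive gain
```

In practice such checks run continuously in the SCADA layer, with the alarm routed to facility operators for targeted investigation.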
There is now an increasing interest within industry in accessing and logging these diagnostic process values via software packages that employ machine learning and advanced mathematical modeling techniques to automatically interpret device performance, identifying correlations between diagnostic parameters across multiple sensors and process control equipment. Such a system gives end users access to a new level of facility performance analysis and therefore has the potential to streamline decision-making with regard to production and maintenance spending.
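At its simplest, identifying a correlation between two logged diagnostic streams amounts to computing a correlation coefficient. The sketch below uses a plain Pearson correlation on hypothetical data (both the parameter choice and the values are invented for illustration); real packages apply far richer models across many devices at once:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical logged diagnostics from two devices over five scan cycles:
drive_gain = [4.1, 4.3, 4.2, 4.8, 5.0]    # flowmeter diagnostic
pump_speed = [1200, 1230, 1215, 1290, 1320]  # process equipment value
r = pearson(drive_gain, pump_speed)
print(round(r, 3))
```

A coefficient near 1 here would suggest the meter diagnostic is tracking pump behavior rather than indicating a fault in the meter itself, which is exactly the kind of cross-device insight these software packages aim to surface automatically.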
A more specific example is the financial and operational desire to move toward a system that embraces condition-based calibration (CBC) as opposed to time-based calibration (TBC) on devices such as flowmeters.
For example, with a TBC method there is potential to stop facility operation unnecessarily to calibrate a flowmeter, which in reality has not deviated from its required operating parameters. The combined costs of meter calibration, pipe fitting, electrical isolation/connection and facility downtime can be significant depending on the specifics of the facility in question. Conversely, it is also possible for a meter to have deviated from its expected performance envelope and it may not be due for recalibration for a significant period of time, resulting in fluid measurement errors that may have significant financial consequences to the facility operators.
A CBC schedule has the potential to reduce these types of operating costs by allowing facilities to develop more dynamic operating patterns that are based on continuous automated diagnostic analysis of facility and meter performance. By logging key meter diagnostic values in tandem with standard device outputs and comparing them to known baseline conditions, it is possible to determine whether a flowmeter is operating within specification. Additionally, with enough historical information on a specific device, it is possible to predict calibration drift over time. If CBC is implemented in place of TBC, the irregular calibration intervals make planning more challenging, so this predictive capability becomes crucial to allow effective planning to continue.
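The drift prediction mentioned above can be illustrated with a simple least-squares trend fit on historical calibration errors. This is a minimal sketch under strong assumptions (linear drift, clean data, hypothetical error values); real systems would use more sophisticated models and uncertainty estimates:

```python
# Hypothetical sketch: fit a linear trend to historical calibration errors
# and estimate when drift will cross a tolerance limit. Data is invented.

def fit_line(ts, errs):
    """Ordinary least-squares fit err = a + b*t; returns (a, b)."""
    n = len(ts)
    mt, me = sum(ts) / n, sum(errs) / n
    b = sum((t - mt) * (e - me) for t, e in zip(ts, errs)) \
        / sum((t - mt) ** 2 for t in ts)
    return me - b * mt, b

def days_until_limit(ts, errs, limit):
    """Days from t=0 until the fitted drift reaches the tolerance limit."""
    a, b = fit_line(ts, errs)
    return (limit - a) / b if b > 0 else float("inf")

# Days since last calibration vs. measured error (% of reading):
days = [0, 90, 180, 270, 360]
error = [0.02, 0.05, 0.09, 0.12, 0.16]
print(round(days_until_limit(days, error, limit=0.25), 1))
```

With such an estimate in hand, a recalibration can be scheduled before the tolerance is breached, rather than at a fixed calendar interval.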
Figure 1. Overview of a typical digital communications infrastructure
Depending on the metering technology, the parameters for diagnostic interpretation can vary considerably. For instance, a Coriolis flowmeter produces different diagnostic data from an ultrasonic meter due to the differing underlying physics of operation. Both metering technologies also have different installation requirements and environmental conditions to consider. For example, external influences such as vibration and ambient temperature can affect the quality of data output from Coriolis meters.
It is therefore crucial that facility operators considering a move to a CBC system first obtain credible scientific data, both meter- and facility-specific, to ensure that any resulting operational decisions are based on quantifiable evidence.
Potential variables associated with a large production facility include valve configurations, pipe bends, and temperature and pressure effects. When these variables are combined with the variations in meter design, it becomes clear that implementing a reliable CBC system that has full user confidence is no small task. This is currently one of the key reasons that TBC methods are still widely used in industry.
Modern technology providing a path
Factors that are gradually increasing the uptake of CBC-based facility maintenance patterns are the continued growth and adoption of cloud-based computing and data storage, as well as affordable computing power required for complex modeling and prediction. The standardization of digital communication protocols, as well as individual manufacturers supporting the integration of their devices into cross-platform packages, have also allowed for a number of unique and application-specific software solutions to be developed that support CBC facility operation.
The principle of condition-based calibration and monitoring is a component of a much larger concept, broadly referred to as “digital oilfield.” The overall aim of this concept is to optimize facility operating costs by streamlining areas such as maintenance, staff scheduling, production and data analysis. The exact parameters of a digital oilfield system are largely influenced by the specifics of the facility it is to monitor.
The specification and commissioning of such a system requires an in-depth understanding of the facility’s electronic, electrical and mechanical design as well as its normal operating requirements and capabilities. When predictive information is initially generated, it should be validated by staff with relevant knowledge and experience before key decisions are made based on the data. Over time, through multiple tuning iterations, confidence in the data builds, and the facility can start to adopt an efficient and intelligent decision-making process as opposed to a regimented and potentially inefficient one.
Research is currently underway in multiple industry and academic sectors with the aim of helping end users build confidence in the types of systems described in this article. Manufacturers of instrumentation, flowmeters and diagnostic software packages are, in some documented cases, supporting this endeavor. This level of interaction between researcher, end user and manufacturer is key to building overall competence and confidence in identifying useful data for informing operational decisions.
The U.K.’s National Standards for flow and density measurement, operated by TUV SUD NEL, are currently tackling such research areas. Using their flow laboratories, which rely on multiple industry-standard digital networks, TUV SUD NEL aims to develop correlations between the data output from field devices and their operational efficiency. Parameters such as device age and structural integrity, facility ambient conditions and fluid properties will be considered, as well as broader facility components such as pump speeds and valve positions.
Additionally, many companies are in the process of analyzing the historical big datasets associated with the specifics of their operation. This is not limited to the oil and gas industry. Sectors such as food production, retail, automotive, etc., are undertaking digitization strategies with the aim of getting to grips with the subtleties and unrealized potential in the historical and live data that they hold.
With the ever-growing interest in logging and understanding diagnostic data, it is reasonable to suggest that the day is coming when facility operators and technology end users in general have the confidence to fully switch over from traditional time-based calibrations to automated and intelligently led condition-based calibrations.