Implementing a condition monitoring system: Guidelines and Questions for selecting the right technology partner
Reliability is among the hottest topics in industry today. With the advent of the Industrial Internet of Things, big-data-driven decision making and plantwide visibility, the attention of operators and corporate managers has shifted from a few macro-indicators of performance to optimizing operations at the grass-roots level wherever possible. One may ask how all this relates to reliability, which has been a central focus of plant maintenance for decades. The reason is simple: reliability is the core enabler that makes all of the above possible.
The recent shift in technology has enabled insight into plant performance at a level hitherto unheard of. Where process failures could once be detected only after examining process output, industrial sensors – tracking heat, vibration, emitted gases, temperature, and sound – can now detect whether a process is heading toward an unfavourable outcome, giving operators time and options to reduce losses and recover from such anomalies. Reliability plays a central role in ensuring that the process equipment doesn’t break down, and that the sensors tracking process parameters, the algorithms predicting process output, and the control systems running it all stay true to their functions.
What is reliability improvement?
At its simplest, reliability is the likelihood that a component or system will perform its desired function without failing prematurely. For control systems, reliability means the controllers keep functioning accurately and without fail; instrumentation does not fail or drift out of calibration unpredictably; the control network remains functional and connected; and workstations for operators and engineers are available and responsive when needed.
Reliability improvement has thus become another buzzword that plant managers need to take seriously. Newer and cheaper technology consistently outperforms its clunkier predecessors – be it sensors, instruments or controllers – and more and more competitors are adopting it, seeing productivity improvements of over 10% with minimal infrastructure additions. Staying out of this apparent rat race is no longer an option for plants that want to remain competitive. We recently worked with an ammonia production plant in the Middle East going through that very scenario.
Case in point: Ammonia Plant
This plant aimed to add 20% additional output per day (amounting to several hundred metric tons more ammonia), which it hoped to achieve through a reliability improvement program. To summarize its state of affairs: the plant operated below full capacity on dated control systems, with limited control equipment and instrumentation. The control systems in particular faced obsolescence, having been phased out by their manufacturer; they suffered from spare-part unavailability, dated firmware, and limited performance.
The reliability improvement program aimed to bring the plant closer to its maximum operating capacity. This would be achieved through increasing operating parameters closer to their limits while employing advanced monitoring and sensors to ensure early detection of incidents and failures. The idea here was to employ a higher density of sensors to monitor the plant and proactively prevent downtime and breakdown while the plant operates at increased loads.
The backbone of this reliability improvement program was upgrading the plantwide control, safety and instrumentation systems, which would be responsible for monitoring critical parameters. Plant operators and maintenance managers would use this additional information to gauge the likelihood and frequency of events like overheating, corrosion, breakdowns, over/underloading and excessive vibration in all operating equipment. This information, in turn, processed through calculations and algorithms, could be used to assess exactly when to rectify, repair, retrofit, or replace equipment.
To understand how reliability is affected by control systems, it is worthwhile to understand what control systems do in a plant.
Process control is a widely used term for the role a typical programmable logic controller plays for a process unit or system, and it means exactly what the term suggests: controlling a process. Process controllers are employed in continuous process industries, while controllers used for discrete or batch processes are called batch controllers.
Industrial processes are complex and intricate, and often require continuous control and monitoring to ensure results are exactly as desired.
Take ammonia production, for example, which starts with steam reforming, or steam-methane reforming (SMR). The first step of steam reforming – the reforming itself – needs to be done at high pressure and temperature, typically around 20-30 atm and 700-1100˚C (1300-2000˚F), for the steam to react with natural gas, producing hydrogen and carbon monoxide. This is done in the presence of a nickel catalyst. Sensors measure the pressure and temperature during the process and feed their readings to the controller, which determines whether the values are within their reference range. When a parameter goes too low or too high, the controller executes the part of its programming that adjusts the process input to return the process to ideal conditions. This adjustment could be to the burner feed for temperature, or to the input valve for steam pressure. Once the sensors detect that process conditions are back to ideal, the controller readjusts the input to ensure there’s no overcompensation. This forms what is called a closed control loop.
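The loop just described can be sketched in a few lines of code. This is a minimal illustration only – the setpoint, tolerance and gain values are invented, and a real process controller would run logic like this as ladder logic or function blocks on a PLC rather than in Python:

```python
# Minimal sketch of a closed control loop: read a sensor, compare the value
# against a reference range, and nudge the process input back toward the
# setpoint. All numbers here are illustrative, not real plant values.

TEMP_SETPOINT_C = 900.0   # target reformer temperature (assumed)
TEMP_TOLERANCE_C = 25.0   # acceptable band around the setpoint (assumed)
GAIN = 0.01               # proportional gain for the burner-feed adjustment

def control_step(measured_temp_c: float, burner_feed: float) -> float:
    """One pass of the loop: adjust the burner feed toward the setpoint."""
    error = TEMP_SETPOINT_C - measured_temp_c
    if abs(error) <= TEMP_TOLERANCE_C:
        return burner_feed             # within range: leave the input alone
    return burner_feed + GAIN * error  # too cold -> more feed; too hot -> less

# Example: temperature has drifted low, so the feed is increased slightly.
feed = control_step(measured_temp_c=850.0, burner_feed=1.0)
```

In practice the "adjust" branch would be a full PID calculation; a single proportional term is used here only to keep the sketch readable.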
These closed control loops are designed for every process taking place at a plant. For the ammonia plant, control loops monitor the steam reforming train – including the water-gas shift and the pressure-swing adsorption – the Haber-Bosch process itself, and all supporting systems like compressors and boilers that interact directly with the process vessels and equipment. The process control system is a key component of these closed control loops, automatically checking each process parameter against a setpoint and adjusting it as necessary.
A distributed control system also oversees the safety systems and auxiliary systems like gas/smoke/fire detection, perimeter security, power management etc., while a process control system primarily focuses on process vessels and systems.
So, while the basic task of a process control unit is to monitor all these control loops and ensure the predefined process parameters are met, these systems were traditionally configured to a relatively loose set of parameters – typically a significant margin below the absolute operating limits. This made the automated processes less likely to cause catastrophic failures and gave operators more time to react to issues. A key limitation here was the process controller’s limited memory and processing capacity.
What this meant was that while a process vessel or piece of equipment could operate at 1000˚C for twelve hours, it would have been conservatively programmed to run at 900˚C for eight hours, owing to the uncertainty in measuring temperature accurately, the chance of some sensors drifting out of calibration, and the inability of the controller to take more input and perform more complex calculations. The vessel could still be configured to operate closer to its operating constraints, but that would run the risk of overheating, process instability and even catastrophic failure.
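This derating logic amounts to stacking every source of uncertainty into a safety margin and subtracting it from the absolute limit. The sketch below illustrates the idea; the individual margin values are assumptions chosen only to reproduce the 900˚C and 950˚C figures used in this article:

```python
def safe_setpoint(absolute_limit_c: float,
                  sensor_uncertainty_c: float,
                  calibration_drift_c: float,
                  control_lag_c: float) -> float:
    """Derate the operating setpoint by every source of uncertainty, so the
    true temperature stays below the absolute limit even in the worst case.
    The margin breakdown is an illustrative assumption, not plant data."""
    margin = sensor_uncertainty_c + calibration_drift_c + control_lag_c
    return absolute_limit_c - margin

# Legacy instrumentation: large stacked margins force a conservative setpoint.
legacy = safe_setpoint(1000.0, 50.0, 30.0, 20.0)   # 900.0
# Modern instrumentation: tighter margins allow operation closer to the limit.
modern = safe_setpoint(1000.0, 15.0, 10.0, 25.0)   # 950.0
```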
Where does reliability improvement come in?
So far, the discussion has revolved around explaining process control and the constraints of dated control systems. How reliability improvement relates to all this may still seem unclear. To make that connection, some of the advances in control systems and associated technologies are worth mentioning first, as these also feature among the improvements we made possible at the ammonia plant.
Modern control systems support significantly more advanced algorithms at far higher speeds than older systems. This enables the controller to react more efficiently to process changes and take proactive and reactive measures accordingly. Another highlight of newer control systems is their increased input capacity, taking in data from more sensors and instruments than ever before.
With newer manufacturing processes, higher memory availability, and multiple programming logic checks, the chance of processors failing or causing shutdowns due to internal errors has been reduced significantly. A further step taken to reduce the impact of failures is modular redundancy, where multiple controllers configured for the same process unit are installed in a redundant configuration, so that if one fails, another can take over the process without halting anything. For this ammonia plant, we provided redundant controllers as well as redundant input and output modules with 20% spare capacity for future additions.
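The failover behaviour behind modular redundancy can be sketched simply: a primary and a standby are both capable of running the loop, and a dispatcher hands the work to whichever is healthy. Real systems use hardware heartbeats and state synchronization; the health flag below is a stand-in for that mechanism:

```python
# Sketch of redundant controllers: if the primary stops responding, the
# standby takes over without halting the process. The boolean health flag
# is an assumed stand-in for a real heartbeat/diagnostic mechanism.

class Controller:
    def __init__(self, name: str):
        self.name = name
        self.healthy = True   # set False by diagnostics on internal error

    def step(self, measurement: float) -> str:
        return f"{self.name} handled measurement {measurement}"

def run_step(primary: Controller, standby: Controller, measurement: float) -> str:
    """Dispatch one control step, failing over if the primary is down."""
    active = primary if primary.healthy else standby
    return active.step(measurement)

a, b = Controller("primary"), Controller("standby")
run_step(a, b, 873.0)    # primary handles the step
a.healthy = False        # simulated fault in the primary
run_step(a, b, 874.0)    # standby takes over without a halt
```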
What all this means is that where a process vessel was once monitored by one or two temperature sensors, a process controller can now monitor the temperatures inside, outside, at the inlet, at the outlet, at the catalyst, and at the top, centre and bottom of the vessel, giving better insight into expected yield and into what steps could be taken to improve it.
Another significant advancement has been in data historians. These programs take data from the process controller and record it over time. This historical data is often used by additional programs, installed at operator or engineering workstations, to predict equipment performance and determine things a process controller cannot. For example, a compressor experiencing increasing vibration over time could indicate an upcoming bearing or shaft failure, which could then be addressed in the next routine maintenance. Because the vibrations still oscillate within the operating parameters configured in the controller, however, the controller alone would only detect the problem once the machine is too close to breakdown, requiring an emergency repair and added downtime for maintenance.
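The compressor example above boils down to trend detection on historian data: the readings never cross the controller's alarm limit, but their slope over time tells a different story. A minimal sketch, with made-up readings and thresholds:

```python
# Sketch of historian-based trend detection: flag a slow vibration rise
# that a threshold-only controller would miss. Data and limits are invented.

def linear_slope(values: list[float]) -> float:
    """Least-squares slope of evenly spaced samples (units per sample)."""
    n = len(values)
    mean_x = (n - 1) / 2
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

ALARM_LIMIT = 10.0      # the controller only alarms above this level (assumed)
TREND_LIMIT = 0.2       # slope that warrants early maintenance (assumed)

vibration = [4.0, 4.3, 4.7, 5.2, 5.8, 6.5]   # mm/s, weekly historian samples

slope = linear_slope(vibration)
if max(vibration) < ALARM_LIMIT and slope > TREND_LIMIT:
    print("Trend warning: schedule bearing inspection at next maintenance")
```

Every reading sits comfortably below the alarm limit, so the controller stays silent; the historian analysis catches the rising slope instead.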
Lastly, there is the improvement in sensor technology. Precision manufacturing, newer composite materials and machine-handled calibration have given rise to a generation of sensors and instrumentation that perform much more reliably and consistently. On top of that, software algorithms can now perform soft-sensing, a technique that estimates expected sensor values based on other parameters and neighbouring sensor values. Soft-sensing helps determine what values to expect in the absence of sensors and helps identify sensors that aren’t performing as expected. Modern sensors and soft-sensing techniques build the foundation for reliability improvement in process control. This vast amount of generated data holds the key to ensuring plant operators are aware of every possible issue before it becomes a nuisance.
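The soft-sensing idea can be illustrated with a toy model: estimate what one sensor *should* read from correlated neighbours, then flag it when the actual reading strays too far. The sensor names, weights and tolerance below are all assumptions standing in for a model fitted on historian data:

```python
# Sketch of soft-sensing: a weighted combination of neighbouring sensors
# predicts an expected value, which is then used for plausibility checks.
# Weights are assumed stand-ins for a regression fitted on historian data.

def soft_sensor_estimate(neighbour_readings: dict[str, float],
                         weights: dict[str, float]) -> float:
    """Estimate a sensor value from correlated neighbouring sensors."""
    return sum(weights[name] * value
               for name, value in neighbour_readings.items())

def is_suspect(actual: float, estimated: float, tolerance: float) -> bool:
    """Flag a sensor whose reading strays from its soft-sensed estimate."""
    return abs(actual - estimated) > tolerance

readings = {"inlet_temp": 880.0, "outlet_temp": 920.0}   # hypothetical tags
weights = {"inlet_temp": 0.5, "outlet_temp": 0.5}        # assumed model

estimate = soft_sensor_estimate(readings, weights)       # 900.0
is_suspect(actual=903.0, estimated=estimate, tolerance=10.0)   # plausible
is_suspect(actual=940.0, estimated=estimate, tolerance=10.0)   # flag it
```

The same estimate also serves as a substitute reading where no physical sensor is installed, which is the other use of soft-sensing mentioned above.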
With this newfound information, operators can safely reconfigure a vessel – originally designed to operate at 1000˚C for 12 hours but conservatively run at 900˚C for 8 hours – to now run at 950˚C for over 10 hours without continuously worrying about potential failures. The reliability improvement achieved from upgrading sensors, process controllers and operator stations helps the plant work in a highly optimized environment, improving yield, reducing failures and compliance issues, and increasing overall profitability.
Ahmed Habib, Marketing Manager
INTECH Process Automation