
New Roles for Process Historians

By Wayne Matthews, Yokogawa Marex

At one time, process historians were simply software packages used to store and present real-time process data. Today, however, historians are being used to do much more.

Process historians are being used to store ever-increasing amounts of Big Data originating from a much wider variety of sources, including control and monitoring, laboratory information management, ERP and asset management systems. They are also being used to transform this and other data into actionable information to implement and improve equipment diagnostics, maintenance, safety, alarms, production, performance and other process plant activities. Finally, historians play an integral role in enterprise integration, acting as a hub that distributes this information throughout the enterprise in various formats via company intranets, the internet and the cloud.

In this article, we'll explore the changing role of process historians and provide application examples to illustrate how these concepts are being used in process plants worldwide.

A Deluge of Data

At one time, historians were fed data primarily by the process control system, usually a distributed control system (DCS). Older historians relied on a proprietary database, which often limited user access and made it difficult to interface with other plant information systems.

Today, thanks to more universal and open interface technologies—such as OPC, Ethernet and SQL—many more external systems can send and access historian data (Figure 1).


Figure 1: Data can flow into and out of a modern historian from a variety of sources.

Advances in IIoT, including the increasing popularity of ISA100 wireless instrumentation, provide another source of data. If a plant has 100 ISA100 transmitters, each broadcasting once every six seconds, the historian receives some 60,000 updates every hour, each potentially carrying the process variable along with diagnostics, alarms and events.

Modern historians no longer rely on a simple proprietary database stored on a local computer to deal with this Big Data. New SQL-based deployments support cloud, centralized and decentralized architectures, and can consolidate data from local historians up to corporate-level historians.

Various data-handling techniques are employed to deal with this Big Data, including filtering, time-stamping, and combinations of flat-file and relational databases. For example, many of the data points coming from wireless instruments rarely change, so the historian can store only those variables that move outside a pre-defined range, a form of reporting by exception. Devices themselves are also becoming more intelligent, and with edge computing only the data that matters is passed on to the historian.
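
As a minimal sketch of this kind of exception reporting, the following Python function (an illustration only, not any particular historian's API) forwards a reading to storage only when it moves outside a pre-defined deadband:

from dataclasses import dataclass
from datetime import datetime
from typing import Dict, Iterable, Iterator

@dataclass
class Sample:
    tag: str            # instrument tag, e.g. a wireless transmitter
    timestamp: datetime
    value: float

def filter_by_exception(samples: Iterable[Sample], deadband: float) -> Iterator[Sample]:
    """Yield only samples that move outside a pre-defined deadband around
    the last stored value, i.e. reporting by exception."""
    last_stored: Dict[str, float] = {}
    for sample in samples:
        previous = last_stored.get(sample.tag)
        if previous is None or abs(sample.value - previous) > deadband:
            last_stored[sample.tag] = sample.value
            yield sample  # only this reading is passed on to the historian

In an edge computing arrangement, logic of this kind runs in the device or gateway itself, so that only the significant changes ever reach the historian.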

Turning Data into Actionable Information

In the past, data from historians was primarily used to analyze processes and control functions. Engineers would write software to generate trends and graphs, and then try to visually analyze this process historian data to spot anomalies and areas for improvement. Spreadsheet software was the tool of choice, but it took a lot of work and expertise to make this general-purpose tool perform such a specialized task. In many cases, data scientists had to be engaged to assist.

Now that data in modern historians is more readily available through standard SQL structures and open interfaces, it is possible to use commercially available software for analysis and reporting, removing the need for specialist skills and making it easier to integrate with corporate standards.
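
As a simple illustration of that openness, the sketch below pulls a day of history for one tag into a standard analysis library over a generic ODBC connection. The server, table and column names here are assumptions for illustration, not the actual schema of any particular historian.

import pandas as pd
import pyodbc  # generic ODBC access; the connection string is site-specific

# Hypothetical connection and schema, for illustration only
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=historian;DATABASE=PlantData;Trusted_Connection=yes"
)

query = """
SELECT TagName, SampleTime, Value
FROM ProcessSamples                        -- illustrative table name
WHERE TagName = ?
  AND SampleTime >= DATEADD(hour, -24, GETDATE())
ORDER BY SampleTime
"""

# Load the last 24 hours of one tag for trending, reporting or further analysis
df = pd.read_sql(query, conn, params=["FIC-101.PV"])
print(df.describe())

Because the query is plain SQL, the same data is just as easily reached from reporting tools, spreadsheets or business intelligence packages that follow corporate standards.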

Historian suppliers and system integrators have created a wide variety of specialized analysis packages (Figure 2) for alarm management, safety system monitoring, asset management, mass balancing, off-sites management, and power and energy consumption. In many cases, these application packages were developed jointly by the supplier and a user to solve specific end-user problems and challenges, and then generalized into a solution available to other users.


Figure 2: Specialized packages, such as this alarm reporting software in Yokogawa’s Exaquantum historian, can be used with any control system.

Distributing Information

Thanks to the cloud, IIoT and open interfaces, historian data can now be made available to anyone, and to any software system, with the proper access credentials.

As shown below in the applications section, historian data can be viewed locally, globally and from centralized locations. For example, a dashboard display for an alarm monitoring application allows engineers to “drill down” into alarm activities. Clicking one of the colored discs in a heat map might take the user to the chattering alarms report, pre-filtered for the day of interest. The user can continue to drill down until the raw alarms and events are displayed, which can be used to perform root cause analysis and to see filtered events before, during and after an alarm incident.

This capability allows engineers at a central location to monitor conditions from company sites all over the world—but it also allows engineers at each plant to see the same data as it applies to their specific plant.
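
As an illustration of the kind of check that sits behind a chattering alarms report, the sketch below flags any alarm that re-annunciates more than a set number of times within a rolling window. The window and threshold are arbitrary assumptions, not Exaquantum's actual algorithm.

from collections import defaultdict, deque
from datetime import datetime, timedelta

def find_chattering_alarms(events, window=timedelta(minutes=10), threshold=5):
    """Return the alarm tags that annunciate more than `threshold` times
    within any rolling `window`, a common chattering-alarm heuristic."""
    recent = defaultdict(deque)   # tag -> timestamps inside the current window
    chattering = set()
    for tag, occurred_at in sorted(events, key=lambda e: e[1]):
        times = recent[tag]
        times.append(occurred_at)
        while times and occurred_at - times[0] > window:
            times.popleft()
        if len(times) > threshold:
            chattering.add(tag)
    return chattering

# Example: one tag annunciating every 30 seconds is flagged as chattering
start = datetime(2024, 1, 1, 8, 0, 0)
events = [("PT-204.HI", start + timedelta(seconds=30 * i)) for i in range(12)]
print(find_chattering_alarms(events))   # {'PT-204.HI'}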

What’s more, users are no longer limited to accessing historian data via a local PC. Today, data can be viewed on any desktop PC, laptop PC, smartphone or tablet connected to the corporate intranet or to the internet (Figure 3).


Figure 3: Yokogawa’s Exaquantum historian can be accessed from a tablet computer.

What’s possible with today’s historians? With the availability of application packages and open interfaces, virtually any kind of analysis of plant data is now possible, as shown in the examples below.

Alarm Management at a Gas Field

In Europe, a large facility extracting gas from dozens of widely dispersed well heads and tank farms is fully automated. Each well head or tank farm is linked to a central control and monitoring center to form one of the largest distributed control systems in the world.

The facility, with more than 750,000 potential alarms, required an alarm management solution to ensure safe, effective and efficient operation. The alarm management solution had to be usable by all plant personnel, including operators, engineers and managers. Alarm reporting and analysis would be used to help identify and eliminate faulty and incorrectly configured alarms, reducing the number of alarms presented to an operator.

The well heads and tank farms are linked to the central monitoring and control center via seven Web servers. An Exaquantum process historian monitors the 750,000 potential alarms, analyzes how operators respond, and produces reports (Figure 4) at the end of each shift to show:

  • Standing alarms at the end of a shift
  • Top 10 alarms by number of occurrences during the shift
  • Alarms suppressed by an operator at the end of the shift
  • Alarms in calibration at the end of the shift
  • Mean alarm rate for the shift
  • Alarm rate distribution by the clock hour


Figure 4: The Yokogawa Exaquantum Alarm Reporting and Analysis dashboard is used at a European gas field to summarize activity on 750,000 possible alarms.
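
As a simple illustration of how one of the shift metrics listed above, the top 10 alarms by number of occurrences, might be derived from an alarm and event journal, the sketch below uses pandas; the column names are assumptions rather than the actual Exaquantum schema.

import pandas as pd

def top_alarms_for_shift(journal: pd.DataFrame, shift_start, shift_end, n=10) -> pd.DataFrame:
    """Count occurrences of each alarm tag during one shift and return the top n.

    `journal` is assumed to have 'TagName', 'EventTime' and 'EventType' columns."""
    shift = journal[
        (journal["EventTime"] >= shift_start)
        & (journal["EventTime"] < shift_end)
        & (journal["EventType"] == "ALARM")
    ]
    return (
        shift.groupby("TagName")
        .size()
        .sort_values(ascending=False)
        .head(n)
        .rename("Occurrences")
        .reset_index()
    )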

Weekly and monthly summary reports are available for management and planning meetings. The alarm management solution conforms to EEMUA 191 and ANSI/ISA 18.2 guidelines.

Valve Travel Times

At a large oil production facility, reports on high-integrity pressure protection system (HIPPS) activations were being generated by a custom application package running on a legacy DCS that needed to be replaced. The company wanted a standardized solution independent of the DCS, so that it could be implemented across many locations with minimal configuration required at each site.

Yokogawa developed a DCS-independent solution based on an Exaquantum historian, using Sequence of Events (SOE) data received from the HIPPS system to generate HIPPS activation and travel time reports.

This solution has been deployed at two sites so far, each with a different DCS. At one site, the SOE data is collected through an OPC server connected directly to the historian. At the other site, SOE data is collected through the Exaquantum Remote Data Synchronization application that provides a robust communication method to address low bandwidth and intermittent network connections.

All HIPPS activation and configuration data is stored in the process historian, with additional tables used to store intermediate report data. Reports at both sites are created with Microsoft Reporting Services. The reports are also available in PDF format to facilitate review and distribution among multiple teams.
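
A sketch of the underlying travel-time calculation is shown below. It assumes SOE records arrive as (valve, event name, timestamp) tuples in chronological order; the event names are hypothetical and will differ between systems.

from datetime import datetime

def valve_travel_times(soe_events):
    """Pair each close command with the following closed-limit-switch event
    for the same valve and return the travel time in seconds."""
    pending = {}        # valve id -> timestamp of the last close command
    travel_times = {}   # valve id -> travel time in seconds
    for valve_id, event_name, occurred_at in soe_events:
        if event_name == "CLOSE_COMMAND":
            pending[valve_id] = occurred_at
        elif event_name == "CLOSED_LIMIT" and valve_id in pending:
            travel_times[valve_id] = (occurred_at - pending.pop(valve_id)).total_seconds()
    return travel_times

soe = [
    ("HIPPS-XV-101", "CLOSE_COMMAND", datetime(2024, 1, 1, 12, 0, 0)),
    ("HIPPS-XV-101", "CLOSED_LIMIT",  datetime(2024, 1, 1, 12, 0, 2)),
]
print(valve_travel_times(soe))   # {'HIPPS-XV-101': 2.0}

Travel times derived in this way can then be tabulated in the activation reports described above.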

Analyzing Alarms, Events, and PID Control Data

An agribusiness group wanted to analyze alarms and PID controller performance for ten of its production plants worldwide from a cloud-hosted centralized control center. The group needed to analyze alarm performance and improve the operation of its PID controllers from headquarters, while continuing to access plant information locally. The ten plants had control systems from five different vendors. An Exaquantum historian was deployed in a cloud-hosted environment (Figure 5) to provide the required solution.


Figure 5: This cloud-based Yokogawa Exaquantum historian acquires data from five different types of control systems in ten plants worldwide, and produces alarms & events and PID controller performance reports.

Data from all plants is consolidated in a single cloud-based historian via a secure and reliable data transfer method. The historian collects process data and alarms & events from each plant’s control system, enabling alarms and events to be collected and monitored for each plant, including generation of reports and key performance indicator (KPI) data for further analysis at the centralized control center.

PID loop performance and Alarms & Events are displayed in a combined web-based dashboard, providing a single display of key data for each site. The global plant management team now has a complete overview of each plant’s status, and can analyze performance and identify operational improvement opportunities.

Analyzing plant data across multiple sites enables processes and equipment to be benchmarked using real plant data for comparison and assessment. This information is also used to establish best-practice processes, which are applied across all sites to help identify and set attainable targets and to evaluate overall performance.

Measuring Safety System Performance

An ultra-deep-water production platform required a solution to monitor and record all safety KPIs along with related safety and performance information. It was important to provide a repository for all trips and operational statistics, covering all safety instrumented systems (SIS), to measure actual performance against the original safety design.

The Exaquantum historian located offshore acquires data from the platform’s DCS, safety instrumented systems, emergency shutdown systems, and fire and gas systems. The historian stores all relevant safety-related data, and all key safety-related KPIs and statistics are validated and available in a single location. Without this solution, it would be very difficult and time consuming to consolidate, view and analyze this data.

Safety systems must be proof tested at various intervals to demonstrate that each component or subsystem, such as a valve, is operational. The historian can show that actual activations were successful and can be used to verify that the safety system is working correctly while the plant is operating, so further proof tests requiring a shutdown are not needed, reducing the number of plant start-ups and shutdowns required for scheduled testing.

This proof testing and safety trip data provides design input for layer of protection analysis (LOPA) and safety case updates, showing the true demand rates and helping identify scenarios and the frequencies of events. For each safety loop, statistical data and reports illustrate the true demand placed on that loop over a number of years. This historical data provides actual field demand rates, and can be used to validate the safety case against the original design assumptions, which were based on calculations.
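
As a simple illustration of the demand-rate figure behind such comparisons, assuming the trip timestamps for one safety loop have already been extracted from the historian:

from datetime import datetime

def demand_rate_per_year(trip_timestamps, period_start: datetime, period_end: datetime) -> float:
    """Return the observed demand rate (demands per year) for one safety loop,
    given the trip timestamps recorded by the historian over the period."""
    demands = [t for t in trip_timestamps if period_start <= t <= period_end]
    years = (period_end - period_start).days / 365.25
    return len(demands) / years

# Example: three recorded trips over five years is roughly 0.6 demands per year,
# a figure that can be compared against the demand rate assumed in the LOPA.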

Summary

Traditional historians were proprietary, closed systems used primarily to view and visually analyze process performance at a single site. Today’s modern historians are open systems that work with any DCS or other data source, store data in open SQL Server databases, locally or in the cloud, and make data available to anyone, anywhere, provided they have the proper security credentials.

Modern historians can also perform a variety of functions such as alarm analysis, safety system verification, PID analysis, energy monitoring and much more. And instead of requiring companies to develop their own analysis software or hire data scientists, modern historians can be provided with or linked to specialized analysis application software.

About the Author

Wayne Matthews is a computer science graduate who has worked for Yokogawa Marex for more than 20 years. He has a wealth of experience in the development and deployment of manufacturing execution and plant information management systems, and has overseen the creation of a comprehensive portfolio of software solutions, particularly in alarm, safety and production management.
