Lean Six Sigma is among the best-known approaches to transforming organizations so that they are data-driven and continuously improving. Where Lean addresses process flow and waste, Six Sigma focuses on variation and design. These complementary disciplines are aimed at promoting “business and operational excellence.” Even if it’s not fully adopted, the basic method and philosophy of Lean Six Sigma can be applied to everyday activities to achieve higher levels of operational excellence.
One of the challenges operational-excellence-focused executives face, however, is getting their entire organization to contribute to and participate in continuous improvement programs. Most companies have multiple ongoing projects but only a handful of trained team members who are proficient enough in statistical methods to validate cases before and after implementing planned improvements. Allowing process and asset specialists to contribute to these projects would dramatically accelerate the operational improvements needed to meet organizational goals. Here, the use of time-series-based advanced analytics can help organizations achieve their desired outcomes – better and faster.
A way forward
Six Sigma projects typically follow a methodology inspired by Deming’s plan-do-check-act cycle. This methodology consists of five phases: define, measure, analyze, improve, and control; it’s also known as the DMAIC cycle.
At first glance, this structured approach seems like a perfect fit for continuous data-driven improvement within the organization, and therefore for strong integration of the Six Sigma philosophy into daily operations. The tools and methods currently used within the DMAIC cycle, however, have limitations for Six Sigma stakeholders both in the plant and at the central level.
Often, continuous-improvement experts serve the organization from a central operational excellence center, and with few experts in this field, bottlenecks may arise. Additionally, many projects concern production performance, where asset and process expertise is required. If those subject-matter experts are unfamiliar with statistical project approaches, projects may end up going unfinished.
Additional impacts for the organization and everyone involved include:
- Underuse of the local process expertise (plant level)
- Missed improvement opportunities
- Long project cycles
- Poor collaboration between plant-level and central stakeholders
- Potential financial losses for the organization
The structure of the DMAIC cycle is well-suited for data-driven analysis, but the tooling is not currently up to the challenge. What is needed is a method for providing a common way of analyzing process data across central and plant-level team members that will significantly lower the threshold for starting improvement projects, especially with the data-gathering, modeling, and analysis components.
Self-service industrial analytics is a new approach to industrial process data analytics for users throughout an organization. The approach combines the elements needed to visualize a process historian’s time-series data, overlay similar historical patterns, and enrich the result with context captured by engineers and operators. Furthermore, unlike traditional approaches, performing this analysis doesn’t require the skill set of a data scientist or black-belt expertise, because the user is always presented with easy-to-interpret results.
Key elements of a self-service industrial analytics platform to look for include:
- A system that brings together deep knowledge of both process operations and data analytics techniques to gain value from the operational data already collected. Such a system will minimize the need for specialized data scientists or complex, engineering-intensive data modeling and can turn human intelligence into machine intelligence.
- A model-free predictive process analytics (discovery, diagnostic, and predictive) tool that complements and augments rather than replaces existing historian information architectures.
- A system that supports cost-efficient virtualized deployment, offers plug-and-play functionality within the available infrastructure, and can evolve smoothly into a fully scalable setup that fits corporate Big Data initiatives and global environments.
Based on these key elements, self-service industrial analytics fills exactly the gap that exists today in supporting each phase of the DMAIC cycle. Empowering local subject-matter experts with advanced analytics tools enables them to contribute to operational excellence goals, whether a project is small in scope or has a longer-term focus.
How exactly does self-service industrial analytics fit into the DMAIC cycle?
1. Define phase
The main goal of the define phase is to identify the nature and scope of the problem/project and establish priorities based on the cause-effect relationship.
Analyzing the cause-effect relationship of a symptomatic upset starts with rigorous data preparation. Self-service analytics provides an easy way to answer the question, “Which process conditions are representative of my symptom at hand?” Process data is searchable and includes context information from the various sources and stakeholders involved in the production process, creating a full and clear picture of the problem. Because prioritization usually involves assessing business impact, it may also be necessary to calculate relevant KPIs.
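To make the idea concrete, this kind of value-based search can be sketched in Python with pandas. The tag names, thresholds, units, and synthetic data below are all hypothetical illustrations, not any particular platform’s API:

```python
import numpy as np
import pandas as pd

# Hypothetical historian extract: one column per tag, indexed by timestamp.
rng = np.random.default_rng(seed=7)
idx = pd.date_range("2024-01-01", periods=1000, freq="min")
data = pd.DataFrame({
    "reactor_temp": rng.normal(180, 5, 1000),  # degC (assumed tag)
    "feed_flow": rng.normal(40, 3, 1000),      # m3/h (assumed tag)
}, index=idx)

# "Which process conditions are representative of my symptom?" --
# a value-based search for periods where temperature ran high
# while feed flow dropped below its normal band.
symptom = data[(data["reactor_temp"] > 185) & (data["feed_flow"] < 37)]

# A simple KPI for prioritization: fraction of production time affected.
time_affected = len(symptom) / len(data)
print(f"{time_affected:.1%} of the period matches the symptom")
```

In a real project the DataFrame would be fed by a live historian connection, and the returned periods would be enriched with operator annotations before any KPI is reported.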
2. Measure phase
After the problem/project is defined, scoped, and prioritized, baseline performance and objectives need to be established. Doing so requires access to all kinds of production-relevant tags, so live connections to various data sources are crucial. To create an accurate baseline, either process expertise is needed or contextual information must be provided for assessing similar situations.
A search engine for sensor-generated data that includes context information plays a central role here, empowering local and central process engineers to quickly retrieve similar situations. The complete relevant history of the recorded data can then be used to capture a baseline performance for the situation.
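A baseline capture of this kind can be sketched as follows; the tag, the “good operation” windows, and the three-sigma control limits are assumptions chosen for illustration:

```python
import numpy as np
import pandas as pd

# Hypothetical tag history; in practice this comes from the historian.
rng = np.random.default_rng(seed=1)
idx = pd.date_range("2023-01-01", periods=5000, freq="min")
quality = pd.Series(rng.normal(98.5, 0.4, 5000), index=idx, name="purity_pct")

# Suppose a contextual search returned these "good operation" windows
# (hypothetical timestamps); the baseline is computed from them only.
good_windows = [("2023-01-01 10:00", "2023-01-01 18:00"),
                ("2023-01-02 02:00", "2023-01-02 09:00")]
baseline_data = pd.concat(quality.loc[s:e] for s, e in good_windows)

# Save the baseline as simple statistics plus control limits.
baseline = {
    "mean": baseline_data.mean(),
    "lcl": baseline_data.mean() - 3 * baseline_data.std(),
    "ucl": baseline_data.mean() + 3 * baseline_data.std(),
}
```

The saved `baseline` dictionary stands in for whatever persistence mechanism a real platform would offer; the point is that the baseline is built only from periods the engineers have judged representative.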
3. Analyze phase
The analyze phase is all about determining the root cause(s) that factor into meeting the project’s objective. Allowing for time shifts is vital both for hypothesis testing and for generating new insights from historical data. The former requires a predefined, smaller set of tags, whereas the latter must be conducted across a large, possibly undefined, set of tags. A self-service analytics platform needs to provide live connections to all relevant tags from various sources and present stakeholders with interpretable results in an iterative way. The results should not be a “black box,” but instead easy to grasp and therefore trusted and acted upon.
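The time-shift idea can be illustrated with a small cross-correlation sketch. Both tags below are synthetic: an upstream disturbance is deliberately planted 15 samples ahead of the symptom, and the lag search recovers it:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=3)
n = 2000
# Hypothetical tags: an upstream disturbance that shows up in the
# symptom tag roughly 15 samples later, plus measurement noise.
upstream = pd.Series(rng.normal(0, 1, n))
symptom = upstream.shift(15).fillna(0) * 0.8 + rng.normal(0, 0.2, n)

# Hypothesis testing with time shifts: correlate the symptom against
# the candidate cause over a range of lags and keep the strongest.
corrs = {lag: symptom.corr(upstream.shift(lag)) for lag in range(60)}
best_lag = max(corrs, key=lambda k: abs(corrs[k]))
print(f"strongest relation at lag {best_lag} (r = {corrs[best_lag]:.2f})")
```

An interpretable output of this form (a lag and a correlation strength per candidate tag) is exactly the opposite of a black-box model: an engineer can sanity-check the lag against known process dead times before trusting it.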
4. Improve phase
During the improve phase, the focus is on testing and evaluating a possible solution, such as altering the control concept of a column to see whether previously occurring flooding behavior is eliminated. In the process industry, this is usually done while production is running. A well-suited self-service industrial analytics platform should offer straightforward creation of real-time performance monitors, with the stored baseline available for use as needed.
5. Control phase
The control phase provides benefits that can be reaped for years to come, on both local and global scales. The analytics here focus on online data monitoring (similar to the improve phase), including a baseline if needed, plus the ability to monitor against the signatures of early indicators identified in the analyze phase. This enables preventive warnings that help avoid the upset altogether. A self-service industrial analytics solution needs to provide the ability to mark and monitor signature patterns in real time, building a solid foundation for following through on insights gained.
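Signature-pattern monitoring of this kind can be approximated with a z-normalized distance check, a common building block of time-series pattern matching. The signature shape and threshold below are hypothetical:

```python
import numpy as np

def znorm(x):
    """Z-normalize so that only the shape of the pattern matters,
    not its absolute level or amplitude."""
    x = np.asarray(x, dtype=float)
    s = x.std()
    return (x - x.mean()) / s if s > 0 else x - x.mean()

def matches_signature(window, signature, threshold=1.0):
    """Flag when a live window resembles a saved early-indicator
    pattern: z-normalized Euclidean distance below a threshold."""
    d = np.linalg.norm(znorm(window) - znorm(signature))
    return d / np.sqrt(len(signature)) < threshold

# Saved signature of an early indicator (hypothetical ramp-up shape).
signature = [0, 1, 2, 4, 7, 11]

# Streaming check: a similar ramp triggers, a flat trend does not.
print(matches_signature([10, 11, 12, 14, 17, 21], signature))
print(matches_signature([10, 10, 10, 10, 10, 10], signature))
```

In a production monitor, `matches_signature` would run over a sliding window of live tag data and raise the preventive warning when the early-indicator shape reappears.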
Using self-service industrial analytics to support the DMAIC cycle can yield various organizational benefits. Because deep statistical expertise is no longer a prerequisite, many more individuals can start contributing to continuous improvement projects. It might even help people get certified more quickly as green or black belts, and executives responsible for operational excellence may be able to change the organization’s culture and meet targets that much faster.
Furthermore, using self-service analytics can result in more projects being executed per year, bringing faster results in areas such as reducing a facility’s carbon footprint, improving quality, and reducing waste. The contributing subject-matter experts will proactively generate new insights using the extensive capabilities of the self-service industrial analytics platform. Finally, the improvements achieved for one asset, production line, or plant can provide the basis for sweeping organizational changes.
Fully embracing the power of self-service industrial analytics should lead to an organizational shift toward data-driven process improvements aided by empowered subject-matter experts.