Quantifying Latency in Batch Processing Using Seeq

Quickly Find the Root Cause and Solution for Every Operational Delay

February 16, 2021

In batch processing operations, the combination of numerous concurrent and independent steps can lead to bottlenecks, causing the process to pause and wait for a downstream operation to finish before the preceding steps can move forward. This introduces latent time to the cycle and lengthens the time required to complete each batch.

Understanding and measuring this latency in your process can be an impactful project justification metric, as well as a critical KPI when measuring OpEx performance. When viewed individually, each of these waiting periods may seem brief or unimportant, but totalizing these periods into a single number often tells a more meaningful, and sometimes costly, story about your process.

Lost time analysis for batch processes can be easily completed in Seeq using just a few of the point-and-click tools in Workbench Analysis. Once these periods are identified, Seeq can be used to find the root cause and solution for each delay.

Identifying Batches

This analysis requires identifying batch cycles and forming capsules to represent each one. If you’re lucky, your historian may already have a signal that tracks the batch number or ID. If so, all you need to do is use .toCondition in Seeq Formula to create a condition for the batches, and each new batch will then get its own capsule when the batch ID changes.
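
To make the logic concrete, here is a minimal sketch in pandas (not Seeq Formula) of what a condition built from a batch ID signal represents; the tag name, IDs, and sample times are hypothetical. Each run of a constant ID becomes one start/end interval, analogous to a capsule.

```python
import pandas as pd

# Hypothetical batch ID signal sampled every four hours (values are illustrative only).
batch_id = pd.Series(
    ["B-101", "B-101", "B-102", "B-102", "B-102", "B-103"],
    index=pd.date_range("2021-02-01", periods=6, freq="4h"),
    name="batch_id",
)

# Start a new "capsule" whenever the batch ID changes, mirroring the effect of
# building a condition from a Batch_ID signal with .toCondition().
run = (batch_id != batch_id.shift()).cumsum()
timestamps = batch_id.index.to_series()
capsules = pd.DataFrame({
    "batch_id": batch_id.groupby(run).first(),
    "start": timestamps.groupby(run).min(),
    "end": timestamps.groupby(run).max(),  # last sample of the run; in Seeq the
}).reset_index(drop=True)                  # capsule would extend to the next ID change

print(capsules)
```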

If you have a “Batch_ID” type tag in your historian, chances are there are also tags for individual batch processes/operations/phases. Taking the following analysis a step further for specific portions of a batch can help zero in on unit constraints, providing added precision when quantifying and addressing issues.

If the historian doesn’t have a clear signal or event frame pre-defined for each batch, you need to be a bit more creative in identifying the batch cycles. Any batch process has repeatable parameters—such as tank levels, temperature signals, agitator amp readings, valve open/close status, etc.—indicating the end of one step in a batch and the start of another.

Expanding on these cues, Seeq can be used to easily create capsules representing each batch using tools such as Value Search, the rate of change of a process variable (a derivative), or even a simple timer created in Seeq Formula. If the batch phases are more complicated to define, Seeq Formula has options for calculating derivatives, timers, and other values.
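
The same idea, sketched outside of Seeq for a threshold-based definition: the agitator amps signal and the 5 A cutoff below are assumptions chosen only to illustrate a Value Search-style rule.

```python
import pandas as pd

# Hypothetical agitator amp readings sampled every 30 minutes.
amps = pd.Series(
    [0.2, 6.1, 6.3, 6.0, 0.3, 0.1, 5.9, 6.2, 0.2],
    index=pd.date_range("2021-02-01", periods=9, freq="30min"),
)

running = amps > 5.0                      # "a batch is running while amps > 5 A"
run = (running != running.shift()).cumsum()
timestamps = amps.index.to_series()

periods = pd.DataFrame({
    "is_batch": running.groupby(run).first(),
    "start": timestamps.groupby(run).min(),
    "end": timestamps.groupby(run).max(),
})
batches = periods[periods["is_batch"]].reset_index(drop=True)
print(batches)

# A rate-of-change cue works the same way: for example, amps.diff() could flag the
# sharp drop at the end of a step instead of a fixed threshold.
```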

Taken as a whole, using process parameters to define the batch isolates steps such as cooling down, heating up, or charging a reactant. If there is a period of the batch that doesn’t fall into one of these operations, it may be indicative of latency. These gaps, once identified using these same steps, could be the basis for a separate process investigation or an OEE improvement project. With spreadsheets or other general-purpose tools, scaling batch definitions across time is extremely tedious, but in Seeq it is as simple as expanding the Date Range.

Once the batches are identified as unique capsules, the duration of each capsule can be viewed in the Capsules Pane in the lower right corner of the display. You can sort by duration to quickly identify the shortest batch (Figure 1).

Figure 1: Sorting Capsules by Duration
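
Conceptually, that sort is just an ordering of the capsule durations. A small illustration with made-up durations, in pandas rather than the Capsules Pane:

```python
import pandas as pd

# Hypothetical per-batch durations, in hours, standing in for the capsule durations.
durations = pd.Series([9.8, 11.2, 10.5, 12.9, 10.1, 11.7], name="hours")

print(durations.sort_values())                    # shortest batch first
print("fastest batch:", durations.min(), "hours")
```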

Once batches are sorted by duration, you can start to explore the nuances among batches that are causing the differences in batch times. Some will inevitably be shorter or longer than others. Why is one batch faster than the rest? What factors, in particular, set the fastest batch apart? That question makes it ripe for further analysis, but for the time being we will concern ourselves only with its duration. Seeq makes visualizing the distribution of batch times easy using the Histogram tool (Figure 2).

Figure 2: Batch Time Histogram

The histogram provides context to the subject matter expert, showing them where a particular batch stands with respect to the others. It helps answer the question: “How fast or how slow was this particular batch?”
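
For readers who want to see the distribution outside of Seeq, the same kind of view can be roughed out with matplotlib; the durations below are placeholders.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical batch durations (hours), binned into a quick distribution plot
# in the spirit of Figure 2.
durations = pd.Series([9.8, 11.2, 10.5, 12.9, 10.1, 11.7, 10.9, 13.4, 10.3])

durations.plot.hist(bins=5, edgecolor="black")
plt.xlabel("Batch duration (h)")
plt.ylabel("Number of batches")
plt.show()
```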

Benchmarking Batches

Once we have identified our batches, the next step is to select a benchmark. One way to do this is by selecting the fastest batch, but some OEE/OpEx guidance suggests benchmarking using the 85th percentile. In this case, think of that as a target batch time that is 15% longer than the fastest batch.

To execute this in Seeq, use the Formula tool with your chosen benchmark in mind: the afterStart() function creates a Best Batch condition whose capsules begin when the batch capsules begin and last for the desired duration, whether that is the fastest batch time or your selected benchmark. Because the analysis lives in Seeq, the results and visualizations can be monitored in near real time.
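
The arithmetic behind the target capsules can be sketched as follows; the batch starts and durations are hypothetical, and the 1.15 factor reflects the “15% longer than the fastest batch” rule of thumb above.

```python
import pandas as pd

# Hypothetical batch starts and durations.
batches = pd.DataFrame({
    "start": pd.to_datetime(["2021-02-01 00:00", "2021-02-01 12:00", "2021-02-02 01:00"]),
    "duration": pd.to_timedelta([9.8, 11.2, 12.9], unit="h"),
})
batches["end"] = batches["start"] + batches["duration"]

target = batches["duration"].min() * 1.15     # benchmark batch time

# Analogue of the afterStart() step: a target capsule that begins with each batch
# and lasts exactly the benchmark duration.
batches["target_end"] = batches["start"] + target
print(batches[["start", "end", "target_end"]])
```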

At this point, your analysis should show two conditions presented as layers of capsules that start at the same time. Most, if not all, of your batch capsules should be longer than the target batch time capsule. Identifying, isolating, and then aggregating these differences in capsule length will form the remainder of the analysis.

Isolating Batch Processes

Using the Composite Condition tool, you can easily isolate this difference and create a condition for it. The tool’s built-in logic creates a third condition from the two conditions you already have; for this case, select the “Outside” logic. Whenever a batch runs over the target time, the delta is then automatically captured as its own capsule, shown below as the “Lost Time” condition with red capsules for each event (Figure 3).
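
Conceptually, the “Outside” logic amounts to keeping, for each batch, the slice between the end of the target capsule and the end of the batch capsule. A sketch with the same hypothetical starts, durations, and benchmark as before:

```python
import pandas as pd

# Hypothetical batch starts and durations, with the benchmark carried over from
# the previous sketch.
batches = pd.DataFrame({
    "start": pd.to_datetime(["2021-02-01 00:00", "2021-02-01 12:00", "2021-02-02 01:00"]),
    "duration": pd.to_timedelta([9.8, 11.2, 12.9], unit="h"),
})
target = batches["duration"].min() * 1.15

batches["end"] = batches["start"] + batches["duration"]
batches["target_end"] = batches["start"] + target

# Lost-time "capsule": from target_end to end, but only when the batch overruns the target.
overrun = batches["end"] - batches["target_end"]
batches["lost_time"] = overrun.where(overrun > pd.Timedelta(0), pd.Timedelta(0))
print(batches[["start", "end", "lost_time"]])
```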

Aggregating Batches

Aggregating the newly minted condition allows you to see the total latent time in the batches over the course of the display range (Figure 4).

Figure 4: Aggregated Process Latency

Creating this metric is quick and easy using Signal from Condition. This takes the lost time condition and creates a single number showing the total lost time over the given time range. By simply changing the time range or period of interest, you can now quickly visualize the amount of process latency.

For instance, setting the time range to a year will immediately totalize the lost time for the year. Then using process knowledge and subject matter expertise, the lost time can be translated into a dollar amount using Formula.
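
The totalization and the dollar translation are simple arithmetic once the lost-time capsules exist; the lost-time values and the cost per production hour below are placeholders only.

```python
import pandas as pd

# Hypothetical lost-time durations for a handful of batches.
lost_time = pd.Series(pd.to_timedelta([1.4, 0.0, 2.7, 0.9, 3.1], unit="h"))

total_hours = lost_time.sum() / pd.Timedelta(hours=1)
COST_PER_HOUR = 2500.0                    # assumed value of one production hour, $/h

print(f"total lost time: {total_hours:.1f} h")
print(f"estimated opportunity: ${total_hours * COST_PER_HOUR:,.0f}")
```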

Turning Batch Process Insight into Action & Next Steps

Latent time for a single batch may seem inconsequential, but aggregation will create a concise metric showing the total lost time over the time range. This value can then be used to calculate the total opportunity in terms of increased production, as well as higher OEE.

Often, subject matter experts or unit process engineers are aware of the issues holding back production; they know the bottlenecks. However, the exact nature of these issues has previously been very difficult to quantify using a general-purpose tool such as Excel. The data has always been there, but accessing it in a meaningful way was challenging.

Seeq changes that: building this type of analysis is quick and simple, while the insights it provides are powerful. These findings can be used to create project justifications, ROI calculations, or other metrics that plant leadership needs to see before greenlighting the capital spend required to address the bottleneck. In the best case, the fix might only require changing a program in a controller, correcting an instrument’s calibration, or performing maintenance on a control valve.

Once the lost time in a batch process has been quantified, identifying the cause of the bottleneck requires a deep dive. Once you’ve calculated the distribution of overall batch times, think about the distribution of times for individual steps within the batch. Are you heating or cooling limited? Is the feed rate or filtration time delaying the batch? As the subject matter expert for your process, you are best positioned to ask and answer these types of questions.

Seeq’s chain and capsule views can help compare and contrast signal profiles across batches. This can highlight anomalies negatively affecting batch durations and help determine what sets your best batches apart from the rest. Often, you might have an idea what the constraint is, but now with Seeq you are empowered to use advanced analytics to make data-driven decisions.

Deliver Continuous Improvements in Quality, Yield, and Asset Availability Metrics

As pharmaceutical manufacturers realize the need to digitally transform their operations, industry leaders including Merck, Lonza, Roche, Abbott, and Bristol-Myers Squibb have deployed advanced data analytics to accelerate production improvement and increase profitability. Learn from pharmaceutical and life science industry experts as they share how advanced analytics has helped turn data into insights in a recent Seeq webinar.

Download the webinar today to learn how leading life science organizations are using Seeq to:

  • Rapidly investigate process and quality data
  • Improve batch consistency and identify deviation causality
  • Reduce costs by improving operational efficiency
  • Collaborate to create and share process knowledge