On the FC + CVX line, many moving parts have to work together: clippers that cut veneer, transports that move boards, lift tables that raise and lower stacks, and units that rotate stacks into the right position. If any of these motions becomes slow or unstable, the whole line can stop. Typical problems are worn chains and guides, lift tables that struggle to move to the top position, or rotation units that keep hitting their end switches several times before they stop. When a drive finally breaks, production stops unexpectedly, repairs are urgent and expensive, and the plant risks missing delivery dates.
This predictive maintenance use case is about noticing those problems early, long before something breaks. Instead of waiting for a drive to fail or relying only on basic alarms, the system watches how each motion behaves in everyday work: how long a movement takes, how often the machine has to “retry” the same move, how frequently creep speed is used, and whether there are any repeated temperature or pressure warnings. From these simple signals, it builds an easy‑to‑understand health score and risk level for every important drive and motion axis. Maintenance teams can then plan checks, lubrication, adjustments, or part replacements in calm, scheduled time windows, and production can see which components are becoming risky in the next days and weeks. Because all of this is based mainly on signals that the PLC already provides, the factory can move from running equipment “until it breaks” to managing it proactively, without needing a lot of new hardware from day one.
To understand how healthy each drive or motion axis is, the system starts from data that the line already collects during normal work. It reads basic drive status (running, stopped, fault), the commands sent to move a component and the feedback that it has reached its position, signals from limit switches on lift tables and rotation units, and protection events such as over‑temperature or electrical faults. It also looks at pressure and compressed‑air signals that affect how smoothly motions run, plus simple context like how many hours each drive has worked, how many cycles it has completed, and when the last maintenance was done. Together, this gives a detailed diary of what each component has been doing over time.
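As a rough sketch, one entry in this diary could be represented as a small record like the following; all field and component names here are illustrative assumptions rather than the actual PLC tag names:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class MotionRecord:
    """One logged movement of a drive or motion axis, assembled from PLC signals."""
    component_id: str        # e.g. "lift_table_2" (hypothetical naming)
    timestamp: datetime      # when the move command was issued
    command: str             # e.g. "raise", "lower", "rotate_90"
    cycle_time_s: float      # seconds from command until the limit switch is reached
    retries: int             # how often the same move had to be re-attempted
    creep_speed_used: bool   # whether the drive fell back to creep speed
    fault_code: str | None   # protection event, e.g. over-temperature, if any
    air_pressure_bar: float  # compressed-air pressure during the move
    operating_hours: float   # cumulative running hours of the drive
    cycle_count: int         # cumulative completed cycles
```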

From this diary, the system calculates easy‑to‑interpret health indicators. For every movement, it measures how long it takes from the command to the moment the limit switch is reached, and tracks whether this “cycle time” is slowly getting longer week by week. It counts how often a drive needs to retry the same motion, how frequently creep speed is used instead of normal speed, and how often over‑temperature or abnormal pressure events occur. It also checks whether problems tend to happen when compressed air is unstable, which can be a simple but important cause of irregular motion. These factors are combined into a small set of numbers that describe how “hard” each drive is working compared to how it behaved in the past.
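A minimal sketch of this condensation step, assuming the MotionRecord structure above and an illustrative 5.5 bar air‑pressure limit, might look like this:

```python
import statistics

def health_indicators(records):
    """Condense a window of MotionRecords for one component into a few
    easy-to-read numbers; assumes the window is non-empty."""
    n = len(records)
    return {
        # median move duration in the window, compared later to the baseline
        "median_cycle_time_s": statistics.median(r.cycle_time_s for r in records),
        # share of moves that needed at least one retry
        "retry_rate": sum(1 for r in records if r.retries > 0) / n,
        # share of moves that fell back to creep speed
        "creep_rate": sum(1 for r in records if r.creep_speed_used) / n,
        # protection events (over-temperature, abnormal pressure, ...) per move
        "stress_event_rate": sum(1 for r in records if r.fault_code) / n,
        # share of moves executed while compressed air was below a limit
        "low_air_share": sum(1 for r in records if r.air_pressure_bar < 5.5) / n,
    }
```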
All of these indicators are stored per component and shown as a health profile that changes over time. For each drive or axis, the system calculates an anomaly score that becomes higher when motions are getting slower, retries are more frequent, or stress events are piling up. In the dashboard, this appears as a table where every component has a health percentage, a risk level (for example, low, medium, or high), and a short recommendation such as “inspect guides and lubrication” or “monitor retries during shift A.” This gives maintenance and production teams a quick, visual way to see which parts of the line are still in good shape and which ones are starting to behave unusually and should be checked soon.
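One hedged way to combine these indicators into a health percentage and risk level is sketched below; the weights and band boundaries are illustrative placeholders, not tuned values from the real system:

```python
def health_score(ind, baseline):
    """Turn the indicators into a 0-100 health percentage and a risk level.

    `baseline` holds the same indicators from a known-healthy period; the
    weights below are illustrative, not calibrated."""
    slowdown = max(0.0, ind["median_cycle_time_s"] / baseline["median_cycle_time_s"] - 1.0)
    anomaly = (
        2.0 * slowdown                    # slower cycles weigh most
        + 1.0 * ind["retry_rate"]         # repeated attempts at the same move
        + 0.5 * ind["creep_rate"]         # frequent fallback to creep speed
        + 1.5 * ind["stress_event_rate"]  # over-temperature and similar events
    )
    health = max(0.0, 100.0 * (1.0 - anomaly))
    risk = "low" if health > 80 else "medium" if health > 50 else "high"
    return round(health, 1), risk
```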

Once basic health indicators are in place, the system looks for clear signs that a drive or motion axis is starting to behave differently from normal. For each component it first learns a baseline: typical cycle times, usual number of retries, and normal motion sequences for different products and speeds. Around this baseline it sets simple thresholds, for example “up to 10–15% slower than usual is still acceptable.” Every new movement is then compared to this baseline; if a cycle is much slower or the sequence of signals is unusual, it is marked as an anomaly. The more often this happens, and the stronger the deviations, the higher the anomaly score for that component becomes, making it stand out in the analytics.
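Assuming the same record structure as above, the baseline‑and‑threshold logic could be sketched roughly like this (the 15% tolerance mirrors the 10–15% band mentioned above):

```python
from collections import defaultdict
import statistics

def learn_baselines(records):
    """Learn a per-(component, command) baseline median cycle time from a
    period of known-good operation."""
    groups = defaultdict(list)
    for r in records:
        groups[(r.component_id, r.command)].append(r.cycle_time_s)
    return {key: statistics.median(times) for key, times in groups.items()}

def is_anomalous(record, baselines, tolerance=0.15):
    """Flag a move that is more than `tolerance` slower than its baseline."""
    base = baselines.get((record.component_id, record.command))
    return base is not None and record.cycle_time_s > base * (1.0 + tolerance)
```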
These repeated anomalies form what the system calls precursors: early patterns that often appear days or weeks before a real breakdown. Examples include a lift table that increasingly needs two or three tries to reach the top position, a transport axis whose cycles are slowly getting longer, or a clamp that shows repeated over‑temperature events during the same shift. Each precursor is tied to a specific component and clearly described in the interface, so teams can see not just that “something is wrong” but also where and how it shows up in the motion. The “Failure Precursors & Abnormal Behaviour” view groups these events by component and time, helping engineers quickly see which parts of the line are generating the most warning signs and what type of unusual behaviour they show.
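Conceptually, the grouping behind that view can be as simple as counting (component, pattern) pairs; the event format below is an assumption for illustration:

```python
from collections import Counter

def precursor_summary(anomaly_events):
    """Group anomaly events by component and pattern, mirroring how the
    'Failure Precursors & Abnormal Behaviour' view organises warning signs.

    `anomaly_events` is assumed to be a list of (component_id, pattern)
    tuples such as ("lift_table_2", "repeated_retries")."""
    # Components with the most frequent patterns float to the top.
    return Counter(anomaly_events).most_common()
```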
Where there is enough history, the system goes one step further and estimates how much time is likely left before a failure becomes very likely. For every past breakdown or major repair, it looks back at the weeks leading up to the event and records how health indicators and anomaly scores changed during that period. By comparing many such histories with periods where no failure occurred, the models learn typical “approach paths” to failure. Using this knowledge, the system can then estimate, for example, that a certain lift table has a high chance of failing within the next two weeks, or that a transport axis probably has a few hundred operating hours left before the risk becomes critical. These time‑to‑failure or remaining useful life estimates are never exact, but they give maintenance a practical planning horizon and help prioritise which components should be inspected or repaired first.
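As a hedged illustration of this idea, a simple classifier can be trained on windowed indicator trends labelled with whether a failure followed; the features, tiny dataset and two‑week horizon below are invented for the sketch, not taken from the real models:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row holds indicator trends for one component-week: cycle-time slowdown,
# retry rate and stress-event rate. The label says whether a failure followed
# within the next two weeks. All values here are invented for the sketch.
X = np.array([
    [0.02, 0.01, 0.00],   # stable behaviour       -> no failure followed
    [0.05, 0.03, 0.00],
    [0.18, 0.10, 0.02],   # slowdown plus retries  -> failure followed
    [0.25, 0.15, 0.05],
])
y = np.array([0, 0, 1, 1])

model = LogisticRegression().fit(X, y)

# Estimated probability that a component currently showing a 12% slowdown,
# an 8% retry rate and occasional stress events fails within two weeks.
p_fail = model.predict_proba([[0.12, 0.08, 0.01]])[0, 1]
print(f"estimated two-week failure risk: {p_fail:.0%}")
```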

The maintenance planner, alerts, and workflows are the point where predictive analytics becomes something the whole team can actually use every day. In the planner view, each important drive or motion axis is shown with a simple curve of predicted failure risk over the next several days, based on its recent behaviour and estimated remaining useful life. Instead of a long list of technical parameters, planners see which components are low, medium, or high risk, and when that risk is likely to cross a warning line. For high‑risk items, the system suggests specific intervention windows such as “inspect during Friday night cleaning stop” or “replace during Sunday maintenance shutdown,” making it much easier to align work with production plans and avoid unpleasant surprises during busy periods.
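A toy projection of how such a risk curve and its warning‑line crossing might be computed is shown below; the compound‑growth model and numbers are assumptions, not the system's actual forecasting method:

```python
def risk_forecast(current_risk, daily_growth, days=7, warning_level=0.5):
    """Project a component's failure risk a few days ahead and report when it
    would cross the planner's warning line.

    `daily_growth` is assumed to come from the recent trend of the anomaly
    score; the simple compound-growth model is illustrative only."""
    curve = [min(1.0, current_risk * (1.0 + daily_growth) ** d) for d in range(days + 1)]
    crossing = next((d for d, r in enumerate(curve) if r >= warning_level), None)
    return curve, crossing

curve, crossing = risk_forecast(current_risk=0.25, daily_growth=0.15)
if crossing is not None:
    print(f"risk crosses the warning line in ~{crossing} days; plan a window before then")
```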

Alerts translate these forecasts into timely prompts for operators and maintenance teams. The system continuously checks whether a component’s anomaly score, cycle‑time increase, or modelled failure risk goes beyond agreed thresholds. When it does, it generates clear, human‑readable messages like “Stack rotation axis: rapidly increasing retries and slower cycles – check limit switches and mechanical play” instead of cryptic fault codes. Different alert levels can be configured so that minor deviations only appear in the dashboard, while serious trends trigger on‑screen warnings, emails, or tickets in the maintenance system. In the configuration view, engineers can fine‑tune settings such as how far ahead to forecast, what counts as a “high” anomaly score, and which input signals should influence the models, ensuring the system matches the plant’s tolerance for risk and noise.
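The shape of such a configuration and alert‑level mapping might look roughly like this; all names and thresholds are illustrative assumptions:

```python
# Illustrative configuration; the names and thresholds are assumptions that
# mirror the settings described above (forecast horizon, what counts as
# "high", which input signals feed the models).
ALERT_CONFIG = {
    "forecast_horizon_days": 7,
    "anomaly_score_high": 0.6,
    "cycle_time_increase_warn": 0.15,  # 15% slower than baseline
    "inputs": ["cycle_time", "retries", "creep_speed", "fault_codes"],
}

def classify_alert(anomaly_score, cycle_time_increase, cfg=ALERT_CONFIG):
    """Map deviations to alert levels: dashboard-only, on-screen warning,
    or a ticket in the maintenance system."""
    if anomaly_score >= cfg["anomaly_score_high"]:
        return "ticket"      # serious trend: raise a maintenance ticket
    if cycle_time_increase >= cfg["cycle_time_increase_warn"]:
        return "warning"     # visible on-screen prompt for operators
    return "dashboard"       # minor deviation, shown in the dashboard only
```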

A key part of this loop is what happens after maintenance is done. When technicians inspect or repair a drive—lubricate guides, adjust chain tension, replace a sensor—they record the action and the component in the system. The analytics then recognises this moment as a reset point: it expects cycle times to shorten again, retries to decrease, and stress events to drop. If this improvement appears, the models update their idea of what “healthy” behaviour looks like for that component, and future risk estimates become more accurate. If problems persist, the system can continue to flag the issue, indicating that a deeper root cause may still be unresolved. Over time, this feedback loop builds a living memory of how each drive and axis behaves before and after different types of interventions, which thresholds are too sensitive or too relaxed, and which recommendations are most effective. The result is a planner and alert system that not only helps prevent breakdowns, but also keeps learning from the plant’s own experience, making predictive maintenance a natural part of everyday work on the FC + CVX line.
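A minimal sketch of the reset step, reusing the record fields from the earlier sketches, could relearn baselines from post‑repair moves only:

```python
import statistics
from collections import defaultdict

def apply_maintenance_reset(records, maintenance_time):
    """Relearn per-(component, command) baselines from moves recorded after a
    maintenance action, so that 'healthy' reflects the fresh state.

    If post-repair indicators do not improve, new moves keep being flagged
    against the relearned baseline, hinting at an unresolved root cause."""
    post = [r for r in records if r.timestamp > maintenance_time]
    groups = defaultdict(list)
    for r in post:
        groups[(r.component_id, r.command)].append(r.cycle_time_s)
    return {key: statistics.median(times) for key, times in groups.items()}
```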
The predictive maintenance system for drives and motion axes on the FC + CVX line has already changed how the line is run in everyday production. By continuously tracking cycle times, retries and stress signals, it has helped to spot problems early, so that lift tables, transports, clippers and rotation units can be checked and repaired before they suddenly fail. As a result, there are fewer unexpected breakdowns, emergency repair jobs are shorter and less frequent, and overall line availability is higher and more stable across the week. Production teams experience fewer last‑minute interruptions during critical orders, and maintenance can spend its time on the components that truly need attention instead of spreading effort thinly across the whole line.
Just as important is the shift in maintenance philosophy that this use case enables. Instead of following a fixed calendar—checking every drive after a set number of months whether it needs it or not—the plant can base decisions on the actual condition and risk level of each component. Drives with low risk can safely stay in service longer, while those showing clear warning signs are scheduled for inspection or replacement during the next suitable window. Over time, this condition‑based, risk‑driven approach reduces unnecessary work, makes better use of parts and labour, and builds trust between production and maintenance because decisions are grounded in transparent data rather than guesswork.
Looking ahead, the same approach can be strengthened and expanded. Where it makes sense, additional sensors such as vibration, current or temperature probes can be added to the most critical or hard‑to‑access drives to capture even earlier and more sensitive signs of wear. The predictive models that have been developed for clippers, transports, lift tables and rotation axes can be adapted to other equipment types on the FC + CVX line and on neighbouring lines, step by step building a wider predictive layer for the whole plant. With each new data source and each confirmed prediction, the system becomes more accurate and more valuable, helping the factory move toward a future where most maintenance is planned, breakdowns are rare exceptions, and the mechanical backbone of production quietly does its job in the background.