In its True Cost of Downtime 2022 report, a major technology conglomerate reveals that the cost of an unplanned lost hour runs anywhere from $39,000 for producers of fast-moving consumer goods to $2 million for automotive manufacturers. Even more concerning, the trend has worsened over the past few years.1
The overwhelming majority of managers surveyed by the conglomerate use some form of condition monitoring as part of their predictive maintenance program. But these efforts may be futile if power quality isn’t a routine part of the analysis.
This article highlights a few of the hidden catalysts that commonly wreak havoc on operations and drive manufacturing downtime in commercial and industrial environments. Spoiler alert: Each of these triggers is tied to power quality, and each is relatively easy to remedy with the right tools. Let’s dig in.
Scenario 1: Frequent drive failure in a pipeline booster pump
The basic drive-monitoring features typically built into medium-voltage drive systems are limited in scope and don’t produce information in real time.
What’s missing? In this customer’s case, managers couldn’t see that temperatures were trending high in the low-voltage side of the drive. When the drive reached a critical temperature, it would go into fault, bringing the motor to a halt for several hours — adding up to thousands of dollars in lost productivity.
Preserving uptime with granular power quality analysis: Remote drive monitoring was implemented to 1) detect the subtle temperature changes that accompany voltage fluctuations and 2) preemptively alert operators to perform a controlled shutdown to cool the unit before a fault occurred. This new insight also revealed the need for HVAC maintenance to lower the ambient temperature.
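To make the approach concrete, the alerting logic can be sketched as a simple rolling-window trend monitor. This is a hypothetical illustration, not the vendor’s implementation; the class name, the thresholds (alert at 75 °C, fault at 85 °C), and the projection horizon are all assumptions chosen for the example:

```python
from collections import deque


class TempTrendMonitor:
    """Rolling-window drive temperature monitor with a pre-fault alert.

    Hypothetical values: the drive faults at 85 C, so we alert at 75 C,
    or sooner if the short-term trend projects a fault within
    `horizon_s` seconds.
    """

    def __init__(self, alert_c=75.0, fault_c=85.0, window=10, horizon_s=600):
        self.alert_c = alert_c
        self.fault_c = fault_c
        self.horizon_s = horizon_s
        self.samples = deque(maxlen=window)  # (timestamp_s, temp_c) pairs

    def add_sample(self, t_s, temp_c):
        self.samples.append((t_s, temp_c))

    def slope_c_per_s(self):
        """Least-squares temperature slope over the window (C per second)."""
        n = len(self.samples)
        if n < 2:
            return 0.0
        ts = [t for t, _ in self.samples]
        mt = sum(ts) / n
        mx = sum(x for _, x in self.samples) / n
        num = sum((t - mt) * (x - mx) for t, x in self.samples)
        den = sum((t - mt) ** 2 for t in ts)
        return num / den if den else 0.0

    def should_alert(self):
        """Alert on an absolute threshold, or on a projected fault."""
        if not self.samples:
            return False
        _, temp_c = self.samples[-1]
        if temp_c >= self.alert_c:
            return True
        slope = self.slope_c_per_s()
        # Project the trend forward: would we cross the fault
        # temperature within the horizon?
        return slope > 0 and temp_c + slope * self.horizon_s >= self.fault_c
```

Projecting the slope forward lets operators act on a rising trend well before the drive’s hard fault threshold trips, which is the essence of the preemptive alerting described above.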
Check out this application note for additional details on what can be learned from comprehensive power quality-focused drive monitoring.
Scenario 2: Staying sensitive to power quality issues in the semiconductor fab
For an industry where production lead times may extend to a full year, manufacturing downtime from a malfunctioning or stalled tool can cost a company millions. What’s more, and often unbeknownst to operators, equipment may consume more power than it needs: bad news for large fabs, where electricity already accounts for 30% of operating costs.
What’s missing? Power quality events frequently occur in semiconductor fabrication, and they often cause tool malfunctions and failures. However, these events are difficult to detect without proper instrumentation. Lacking this visibility, fab operators often assume that their equipment problems are caused by a faulty part. This results in delays and costly service calls that yield no resolution, because the problem was misidentified as something inherently wrong with the tool. Additionally, if a root cause can’t be confirmed, warranty claim disputes may arise between the fab and the tool manufacturer.
Preserving uptime with granular power quality analysis: Power quality monitoring and compliance assessment make it easy to prove, or rule out, bad power as the source of poor equipment performance.
- The PQ1 Power Quality Sensor integrated into facility power mains identifies voltage sags, swells, interruptions, and impulses as short as 500 nanoseconds. Relay contact outputs can send signals to operations software applications for error logs, alarm processing and more.
- For more complex analysis, the PQube®3 power analyzer produces automatic, time-stamped reports to correlate a power event with a tool malfunction. It monitors up to 4 three-phase loads along with temperature, humidity and mechanical shocks. It also records the peak current actually required by a given tool, which often turns out to be less than what the fab operator has allocated — a “quick win” opportunity to lower energy costs.
- The nature of semiconductor fabrication creates conditions that can affect tools and tool components. SEMI F47 tool testing certifies that a tool can ride through various voltage sags. In addition, many fabs now require SEMI E6 (Part 12) compliance, which provides detailed information about electricity consumption per wafer, power factor, harmonics and more.
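The sag and swell detection performed by instruments like those described above can be illustrated with a minimal sketch that classifies RMS voltage readings against the magnitude bands defined in IEEE 1159 (interruption below 0.1 pu, sag between 0.1 and 0.9 pu, swell above 1.1 pu). The function names and the 120 V nominal default are illustrative assumptions; real instruments perform far more sophisticated, sub-cycle detection and add duration-based categories:

```python
def classify_rms_event(v_rms, v_nominal=120.0):
    """Classify one RMS reading against IEEE 1159 magnitude bands.

    Returns 'interruption' (<0.1 pu), 'sag' (0.1-0.9 pu),
    'normal' (0.9-1.1 pu), or 'swell' (>1.1 pu). Duration-based
    categories (instantaneous/momentary/temporary) are omitted.
    """
    pu = v_rms / v_nominal
    if pu < 0.1:
        return "interruption"
    if pu < 0.9:
        return "sag"
    if pu <= 1.1:
        return "normal"
    return "swell"


def find_events(rms_samples, v_nominal=120.0):
    """Group consecutive out-of-band readings into (label, start, end) events."""
    events, current = [], None
    for i, v in enumerate(rms_samples):
        label = classify_rms_event(v, v_nominal)
        if label != "normal":
            if current and current[0] == label:
                current = (label, current[1], i)  # extend the open event
            else:
                if current:
                    events.append(current)
                current = (label, i, i)  # start a new event
        elif current:
            events.append(current)
            current = None
    if current:
        events.append(current)
    return events
```

Time-stamped event records like these are what make it possible to correlate a voltage sag with a tool malfunction, and to prove or rule out bad power as the root cause.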
Learn more about power quality issues in semiconductor fab here.
Scenario 3: Data loss — the downtime you can’t put a price on
Without clean, uninterrupted, conditioned power, there is no data center. To safeguard availability, operators focus on physical infrastructure security and the power and cooling infrastructure systems that support IT equipment.2 But availability — and more specifically what constitutes “downtime” — can be misleading.
What’s missing? Modern data center technology is subject to random transients and other power quality issues that can cause data errors and loss. But as long as the power stays on, these are not considered downtime events.2 In fact, these events remain invisible to data center managers — and pose severe risk to the business — unless comprehensive power quality monitoring is part of power management.
Preserving uptime with granular power quality analysis: One of the most important tests of data center availability lies in the waveforms supplied to its electronics. Voltage and current spikes, sudden voltage drops, frequency variations, harmonic distortion — these can occur momentarily or for sustained periods, and they change how electronics perform. Some of these impacts are visible but attributed to other causes, as when an instantaneous computer glitch is blamed on a loose power cord.
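Harmonic distortion, one of the waveform problems mentioned above, is commonly quantified as total harmonic distortion (THD): the RMS of the harmonic components relative to the fundamental. The sketch below computes each harmonic’s magnitude with a single-bin DFT and assumes the sample window spans exactly one fundamental cycle; the function names and harmonic limit are illustrative assumptions, not any particular analyzer’s method:

```python
import cmath
import math


def harmonic_magnitude(samples, k):
    """Peak magnitude of the k-th harmonic via a single-bin DFT.

    Assumes `samples` covers exactly one fundamental cycle, so the
    k-th harmonic completes k cycles in the window.
    """
    n = len(samples)
    acc = sum(x * cmath.exp(-2j * math.pi * k * i / n)
              for i, x in enumerate(samples))
    return 2 * abs(acc) / n


def thd(samples, max_harmonic=13):
    """Total harmonic distortion: RMS of harmonics 2..max over the fundamental."""
    fundamental = harmonic_magnitude(samples, 1)
    distortion = math.sqrt(sum(harmonic_magnitude(samples, k) ** 2
                               for k in range(2, max_harmonic + 1)))
    return distortion / fundamental
```

For example, a voltage wave carrying a fifth harmonic at 5% of the fundamental yields a THD of 0.05; tracking this figure over time is one way an analyzer flags the distortion that degrades sensitive electronics.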
Power quality analyzers placed in strategic points across the facility will keep data center managers apprised of changes in power quality, in real time. Here are a few locations to focus on, at a minimum:
- A single power quality analyzer with multichannel inputs can be installed directly into the floor-level power distribution unit (PDU) to measure the 480 V input from the UPS and the 208/120 V PDU output.
- A multichannel power quality analyzer can also be installed in the data center’s main distribution panel. This monitors the total power of the facility and separately measures power consumed by the cooling system and the UPS that support IT equipment.
- At times, poor power quality originates on the utility side of the meter. This can be assessed by placing a single power quality analyzer at the output of the automatic transfer switch (ATS) to examine the quality of power from both the incoming utility and the backup generator.
Ideally, data generated from each of these points will be pulled into a cloud-based monitoring platform for facility-wide analysis and reporting. This produces analytics useful for proactive strategies such as maximizing UPS capacity, prioritizing capital investments, and more.
Read up on the kinds of power quality threats that can corrupt your data center, and specifics on addressing them.
Any of this sound familiar?
Those unexplained equipment malfunctions may be the result of poor power quality. We can help you detect and correct it. Reach out to start a conversation.
1 Siemens Industry Inc., The True Cost of Downtime 2022, published 2023
2 Data Center Frontier Special Report, Understanding the Importance of Power Quality in the Data Center: Finding the Ghost in the Machine, 2020