BlogPost-01 — Attacks Against Process Control Systems (Revised)

Note: LSU logo slide removed; numbering starts from actual content slide.

Slide-by-Slide Descriptions (with Screenshots)

Slide 1

Introduces the paper’s focus—industrial Process Control Systems (PCS)—and frames security as a safety and reliability problem rather than only an IT problem. It clarifies that cyber events can propagate through control loops and physical plants, so risk must be quantified in terms of process variables, constraints, and consequences. The slide lists authors, venue, and the central question: how to combine risk assessment, detection, and response into a coherent workflow.

Slide 2

Defines PCS (a.k.a. ICS/SCADA/DCS) and shows examples (chemical plants, power systems, water treatment). The key point is that feedback control and actuation distinguish PCS from enterprise IT: attacks directly influence actuators and sensors. The slide sets terminology for controllers, HMIs, historians, PLCs/RTUs, networks, and plant equipment.

Slide 3

Explains why this matters: outages and safety incidents have multi-million-dollar impact and can damage equipment or threaten human safety. Unlike IT data leaks, PCS incidents manifest as physical deviations (pressure surges, temperature runaways). This motivates risk-driven prioritization of defenses and fast anomaly detection to stay within safe operating regions.

Slide 4

Uses Stuxnet and similar incidents to demonstrate how subtle integrity manipulation can evade simple alarms while steering a process toward damage. The lesson: sophisticated attackers can be process-aware. Therefore, detection must leverage physics/estimation—purely network-signature approaches are insufficient against stealthy plant-level attacks.

Slide 5

States contributions: (i) a risk model that evaluates expected loss from different attack types and targets; (ii) a physics-based residual detector (model + CUSUM) tuned to minimize false alarms while catching slow stealthy drifts; (iii) an automatic response policy that substitutes trusted estimates for compromised measurements until human operators intervene.

Slide 6

Roadmap of the talk/paper: background and threat model → risk quantification → detection design → response algorithm → case study and results → discussion, limits, and future work. This helps the reader anticipate how the three pillars (risk, detection, response) interlock.

Slide 7

Surveys the threat landscape with real incidents (e.g., TRITON, Ukrainian grid, Oldsmar water facility). Emphasis: many breaches begin as IT footholds but only become catastrophic when the attacker reaches the control loop. We enumerate attack surfaces: sensors, actuators, controller logic, setpoints, and communications (wired/wireless).

Slide 8

Contrasts PCS with IT: legacy protocols (often no auth), real-time determinism, safety interlocks, and strict availability constraints. Patching and restarts are expensive; false positives can trip production. Therefore detectors must be lightweight, interpretable, and tolerant to benign disturbances and model mismatch.

Slide 9

Introduces the three focus areas: (1) risk assessment to prioritize limited security budgets; (2) detection using residuals that reflect physical consistency; (3) response that safely biases the loop toward stability while preserving operator authority. The slide visually shows how these modules connect in a loop around the plant model.

Slide 10

Formalizes risk as expected loss R = E[L(x,u,attack)] over attack scenarios and process states, with loss capturing safety, quality, and downtime. A practical takeaway is ranking sensors by marginal risk contribution: how much additional loss occurs if that sensor is compromised. This ranking drives redundancy or hardening decisions.
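
The ranking idea can be sketched numerically. The sensor names, attack probabilities, and loss figures below are illustrative placeholders, not values from the paper:

```python
# Sketch of risk ranking by marginal loss contribution.
# Per-sensor attack probability and conditional expected loss are assumed inputs
# (in practice they come from threat modeling and plant simulation).
scenarios = {
    "y5_pressure":    {"p_attack": 0.02, "loss_if_attacked": 500_000},
    "y7_temperature": {"p_attack": 0.03, "loss_if_attacked": 120_000},
    "y10_flow":       {"p_attack": 0.05, "loss_if_attacked": 30_000},
}

# Marginal risk contribution: R_i = p_i * E[L | sensor i compromised]
risk = {name: d["p_attack"] * d["loss_if_attacked"] for name, d in scenarios.items()}

# Rank sensors to prioritize redundancy and hardening budgets
ranking = sorted(risk, key=risk.get, reverse=True)
print(ranking)  # highest marginal risk first
```

Note that a high-probability attack on a low-criticality sensor can still rank below a rarer attack on a safety-critical one, which is exactly the prioritization the slide argues for.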

Slide 11

Differentiates integrity attacks (bias, drift, replay) from DoS (dropout, delay, substitution). The paper argues integrity attacks on high-criticality sensors often dominate risk because the loop consumes the falsified input continuously, while short DoS can be buffered or estimated. This shapes the detector’s emphasis.

Slide 12

Describes the Tennessee–Eastman (TE) benchmark: a complex chemical process with nonlinearity and multiple operating modes. The authors use TE to get repeatable, community-accepted evaluations. Key controlled and measured variables are listed, along with constraints and typical disturbances.

Slide 13

Shows sensor criticality analysis: pressure measurement y5 ranks as most safety-critical because violations rapidly escalate to relief events or shutdowns. The team quantifies how much process risk rises when each sensor is attacked, guiding where to add redundancy and aggressive monitoring.

Slide 14

Argues for physics-aware detection: rather than relying on network signatures, compare measured outputs with model-predicted outputs to form residuals. If residuals exhibit bias or persistent structure beyond nominal noise, an attack is suspected. This naturally covers both IT-origin and insider threats.

Slide 15

Presents the linearized (or gain-scheduled) model around an operating point: ẋ = Ax + Bu + w, y = Cx + v. An observer (e.g., Kalman/Luenberger) forecasts ŷ; residual r = y − ŷ captures inconsistencies. The method accepts modest model error but relies on residual statistics being stable in normal operation.
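
In discrete time, one observer/residual step can be sketched as follows; the matrices and observer gain here are small illustrative stand-ins, not the paper's plant model:

```python
import numpy as np

# Discrete-time analogue of x' = Ax + Bu + w, y = Cx + v with a Luenberger
# observer; A, B, C and the gain L below are illustrative assumptions.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.1],
              [0.05]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5],
              [0.2]])  # observer gain (assumed to give stable error dynamics)

def observer_step(x_hat, u, y):
    """One observer update: predict the output, form the residual, correct."""
    y_hat = C @ x_hat          # model-predicted output
    r = y - y_hat              # residual fed to the detector
    x_hat_next = A @ x_hat + B @ u + L @ r
    return x_hat_next, r

# With a zero initial estimate, the first residual equals the measurement.
x_hat = np.zeros((2, 1))
x_hat, r = observer_step(x_hat, np.array([[1.0]]), np.array([[0.2]]))
```

A Kalman filter would compute L from the noise covariances instead of fixing it, but the residual definition r = y − ŷ is the same either way.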

Slide 16

Defines the CUSUM statistic S_k = max(0, S_{k-1} + r_k − ν), with alarm threshold h; as written this one-sided form only accumulates positive deviations, so in practice it is driven by the residual magnitude |r_k| (or run as a pair of one-sided tests) to catch drifts of either sign. Intuition: slow drifts accumulate until S_k surpasses h even when each r_k looks innocuous on its own. The offset ν discounts benign bias and noise, trading false-alarm rate against detection delay.
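
A minimal sketch of the statistic, run here on the residual magnitude |r_k| so drifts of either sign accumulate; the ν and h values are arbitrary illustrative choices:

```python
def cusum_update(S_prev, r_k, nu):
    """One CUSUM step: S_k = max(0, S_{k-1} + |r_k| - nu)."""
    return max(0.0, S_prev + abs(r_k) - nu)

def first_alarm(residuals, nu=0.5, h=5.0):
    """Index of the first sample where S_k crosses threshold h, else None."""
    S = 0.0
    for k, r in enumerate(residuals):
        S = cusum_update(S, r, nu)
        if S > h:
            return k
    return None

# A slow drift: each residual exceeds the offset by only 0.1,
# yet the statistic accumulates and alarms after roughly 50 samples.
drift_alarm = first_alarm([0.6] * 100)
```

A single-sample threshold test would never fire on this drift, which is the blind spot CUSUM closes.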

Slide 17

Discusses tuning: choose ν from the residual mean under nominal conditions; pick h to meet a target in-control average run length (ARL0), i.e., an acceptably low false-alarm rate. Practical tuning uses historical data and small validation attacks to ensure timely detection of relevant attack magnitudes while ignoring setpoint steps and mode changes.
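
One way to operationalize this is to sweep h over attack-free residual data until the empirical false-alarm rate meets a target; this is a sketch of the idea, not the paper's exact procedure:

```python
def false_alarm_rate(nominal_residuals, nu, h):
    """Fraction of nominal (attack-free) samples at which the CUSUM alarms."""
    S, alarms = 0.0, 0
    for r in nominal_residuals:
        S = max(0.0, S + abs(r) - nu)
        if S > h:
            alarms += 1
            S = 0.0  # reset after each alarm
    return alarms / len(nominal_residuals)

def pick_threshold(nominal_residuals, nu, target_rate=0.001, step=0.5):
    """Smallest h on a coarse grid whose nominal false-alarm rate meets target."""
    h = step
    while false_alarm_rate(nominal_residuals, nu, h) > target_rate:
        h += step
    return h

# Nominal residuals that stay inside the offset never accumulate,
# so the smallest candidate threshold already satisfies the target.
h_star = pick_threshold([0.1, -0.1] * 500, nu=0.2)
```

The same harness can then replay small validation attacks to check that the chosen h still detects them with acceptable delay.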

Slide 18

Shows detection performance curves and timelines: integrity attacks cause steady growth in S_k until alarm, while DoS may produce intermittent spikes buffered by the estimator. The trade-off: tighter thresholds catch attacks faster but risk false alarms under disturbances. Results favor moderate ν and h calibrated by plant variance.

Slide 19

Classifies stealthy attacks: (i) step bias small enough to be hidden by variance; (ii) geometric drift that increases slowly; (iii) coordinated multi-sensor changes that preserve key invariants. The slide explains how joint residual monitoring across sensors reduces such blind spots.

Slide 20

Explains stealth outcomes: attackers that know the model can shape signals to mimic expected dynamics. The defense is diversity—multi-rate sampling, cross-variable constraints, and watchdog models (linear + nonlinear). The paper evaluates how much stealth budget remains after multi-sensor CUSUM.
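
Joint monitoring can be sketched as parallel per-sensor CUSUM statistics with a combined alarm rule; the per-channel offsets/thresholds and the particular joint rule below are illustrative assumptions, not the paper's exact scheme:

```python
class MultiCusum:
    """Parallel per-sensor CUSUM statistics with a simple joint alarm rule."""

    def __init__(self, nus, hs):
        self.nus, self.hs = nus, hs
        self.S = [0.0] * len(nus)

    def step(self, residuals):
        """Update every channel; alarm if any single channel trips, or if total
        evidence across channels exceeds the summed thresholds (a crude guard
        against coordinated attacks that keep each channel just sub-threshold)."""
        for i, r in enumerate(residuals):
            self.S[i] = max(0.0, self.S[i] + abs(r) - self.nus[i])
        per_channel = any(s > h for s, h in zip(self.S, self.hs))
        joint = sum(self.S) > sum(self.hs)
        return per_channel or joint

det = MultiCusum(nus=[0.5, 0.5], hs=[2.0, 2.0])
```

The joint term is what shrinks the stealth budget: an attacker must now keep the sum of accumulated deviations small, not just each channel individually.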

Slide 21

Introduces the automatic response: when a sensor alarm triggers, switch that channel to a trusted estimate (observer output, redundant sensor, or filtered proxy). Control continues with the substituted measurement while operators validate and inspect. This limits immediate process risk without drastic shutdowns.
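
The substitution itself amounts to a per-channel switch; the channel names and values here are illustrative:

```python
def select_measurements(y_raw, y_est, alarmed):
    """Feed the controller the trusted estimate on alarmed channels and the
    raw measurement elsewhere (sketch of the substitution response)."""
    return [est if bad else raw for raw, est, bad in zip(y_raw, y_est, alarmed)]

y_raw   = [101.3, 4.2, 350.0]    # measured pressure, flow, temperature (illustrative)
y_est   = [100.8, 4.1, 349.5]    # trusted observer estimates
alarmed = [True, False, False]   # pressure channel flagged by the detector

y_used = select_measurements(y_raw, y_est, alarmed)  # → [100.8, 4.2, 350.0]
```

Because only the flagged channel is swapped, the loop keeps using real measurements wherever they are still trusted, limiting the impact of a false alarm.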

Slide 22

Evaluates response effectiveness: under strong bias or drift, substitution holds variables within constraints long enough for human action. Graphs show overshoot reduction and fewer safety-limit violations compared to doing nothing or naive shutdown. The method is conservative but buys time safely.

Slide 23

Clarifies human-in-the-loop: alarms are routed to operators with suggested actions and a timer. If multiple alarms occur, a policy determines priority (e.g., pressure > flow). The interface must explain residuals and thresholds in plain terms to avoid alarm fatigue.

Slide 24

Summarizes key takeaways: integrity attacks on critical sensors dominate risk; residual-CUSUM is effective and tunable; automatic substitution is a robust bridge response. Risk ranking guides where to invest in redundancy and hardened communication.

Slide 25

Outlines limitations: model mismatch in highly nonlinear regions, operating-mode switches, sensor drift and aging, and coordinated attacks designed with knowledge of the observer. Additional validation is needed on high-speed plants and under maintenance events.

Slide 26

Discusses broader impact: the framework provides a blueprint for plants lacking deep security benches—start with risk ranking, implement residual detection, and define clear response SOPs. Even partial deployment improves resilience if focused on top-critical sensors.

Slide 27

Lists open challenges: online model adaptation, combining physics with ML (autoencoders/LSTM/KalmanNet), attack attribution, and formal guarantees for response safety. Hardware-in-the-loop and field pilots are the next steps to verify scalability.

Slide 28

Provides a 2023–2025 snapshot of related work: physics-informed learning, federated detection across plants, and secure state estimation under sparse attacks. The slide positions the paper relative to emerging trends.

Slide 29

Citation slide with full reference and identifiers. Ensures traceability and encourages reproducibility by pointing to TE process simulators and open-source detection code when available.

Slide 30

Questions slide invites discussion on cost-benefit of redundancy, threshold governance, and integration with safety instrumented systems (SIS).

Summary

This work presents a unified workflow for securing process control systems that couples risk assessment, physics-aware detection, and automatic response. Risk is quantified as expected loss under plausible attack scenarios to prioritize protection of high-criticality sensors (notably pressure). Detection compares measured outputs with model-based predictions to form residuals and uses a CUSUM statistic to surface slow, stealthy manipulations. Upon an alarm, a conservative response replaces compromised measurements with trusted estimates to keep the plant within constraints until operators intervene.

Conclusion

Discussion

General (Professional) Discussion

Our Team’s Discussion & Takeaways