Designing for Signal Detection in Complex Pipeline Systems

Applying Human Factors to Improve Operational Awareness in Flow-Cal Measurement Systems

In complex operational environments, performance is not just about data—it is about how effectively people can interpret that data and act with confidence.

During my work on pipeline measurement systems at Flow-Cal, I focused on how users detect anomalies in high-volume, time-based datasets—identifying subtle shifts in pressure, flow rate, and gas composition that could signal operational issues or financial loss.

While the system was built to present data, the real challenge was supporting how users actually worked: recognizing patterns, validating signals across time, and separating meaningful anomalies from normal variation.

This approach closely aligns with Human and Organizational Performance—designing for real-world behavior, acknowledging that error is part of complex systems, and ensuring the system supports better decisions, not just better displays.

Context: A System Built for Data, Not for Decisions

In pipeline operations, measurement systems serve as the backbone for both operational monitoring and financial accountability. Flow-Cal was designed to aggregate high-frequency data—captured multiple times per second—and roll it into usable time intervals (1, 15, 30, and 60 minutes). This data informed critical decisions around flow rate, pressure stability, and gas composition.
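
The rollup described above can be sketched in a few lines. This is a minimal illustration, not Flow-Cal's actual implementation; the function name and the interval-alignment rule are assumptions for the example.

```python
from datetime import datetime

def roll_up(readings, interval_minutes):
    """Aggregate (timestamp, value) samples into fixed-interval averages.

    `readings` is a list of (datetime, float) pairs. Interval boundaries
    are aligned to the top of the hour, which works for any interval that
    divides 60 (1, 15, 30, or 60 minutes).
    """
    buckets = {}
    for ts, value in readings:
        # Floor the timestamp to the start of its interval.
        minute = (ts.minute // interval_minutes) * interval_minutes
        key = ts.replace(minute=minute, second=0, microsecond=0)
        buckets.setdefault(key, []).append(value)
    # One averaged value per interval, in chronological order.
    return {k: sum(v) / len(v) for k, v in sorted(buckets.items())}
```

Thirty one-minute samples, for example, collapse into two 15-minute averages.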

At a surface level, the system worked. It collected, stored, and displayed enormous volumes of data with precision.

But the reality of work in this environment was far more complex.

Users were not simply reviewing data—they were searching for meaning across miles of pipeline, often after the fact, trying to determine whether a subtle deviation represented a real issue or simply noise in the system. The interface required them to scroll through dense tables of numbers, manually identifying anomalies across time.

The system supported data access.
It did not support decision-making.

The Problem: When “Work as Imagined” Breaks Down

The system had been designed around an implicit assumption: that users would analyze structured datasets in a linear, methodical way.

In practice, that assumption did not hold.

Operators and analysts were:

  • Jumping between time intervals to validate patterns
  • Cross-referencing multiple variables (pressure, flow rate, composition)
  • Mentally filtering out false alarms
  • Relying on experience to determine what “looked wrong”

This created a fundamental gap between:

  • Work as imagined (reviewing clean datasets)
  • Work as done (pattern recognition under uncertainty)

That gap introduced cognitive strain, slowed decision-making, and increased the risk of both missed anomalies and false positives.

Applying HOP Principles Through Human Factors

1. Error is Normal → Designing for Signal vs. Noise

In this environment, “error” was not a user failure—it was an inevitable outcome of:

  • High data volume
  • Temporal complexity
  • Indirect indicators (no single definitive signal)

Users were expected to detect meaningful anomalies buried within normal system variability.

Rather than asking users to “be more accurate,” the design approach shifted to:

Supporting human pattern recognition by making anomalies visible, not hidden within raw data.

This led to design directions such as:

  • Highlighting deviations from expected ranges
  • Surfacing trends over time rather than isolated data points
  • Reducing reliance on manual scanning
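
As a sketch of the first of these directions, a rolling z-score can flag values that deviate sharply from the recent trend rather than asking users to scan for them. The window size and threshold below are illustrative defaults, not Flow-Cal parameters.

```python
import statistics

def flag_deviations(values, window=12, threshold=3.0):
    """Flag points that deviate sharply from the recent trend.

    Each value is compared against the mean and standard deviation of
    the preceding `window` samples (a simple rolling z-score).
    """
    flags = []
    for i, v in enumerate(values):
        history = values[max(0, i - window):i]
        if len(history) < 2:
            # Not enough history to judge deviation yet.
            flags.append(False)
            continue
        mean = statistics.fmean(history)
        stdev = statistics.stdev(history)
        # A flat history tolerates no change at all.
        flags.append(abs(v - mean) > threshold * stdev if stdev else v != mean)
    return flags
```

A spike against an otherwise stable series is flagged; ordinary variation is not.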

Errors were treated as symptoms of system design, not individual shortcomings.

2. Blame Fixes Nothing → Understanding Workarounds as Expertise

Users had developed their own strategies to cope with the system:

  • Scanning for “outlier” values based on experience
  • Cross-checking across multiple time resolutions
  • Ignoring known sources of false alarms

From a traditional lens, these might be labeled as inefficiencies.

From a Human Factors and HOP perspective, they were adaptive behaviors—evidence of how the system actually functioned in practice.

The goal was not to eliminate these behaviors, but to understand and support them.

Design efforts focused on:

  • Making cross-time comparisons easier
  • Reducing the need to mentally reconcile datasets
  • Supporting the heuristics users were already applying
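
The cross-resolution check analysts performed by hand can be approximated in code: average the fine-grained series into a coarser view and see whether a suspected anomaly survives. This is a crude stand-in for that heuristic, with illustrative names and thresholds.

```python
def persists_at_coarser_view(fine_values, coarse_factor, threshold):
    """Check whether a spike visible at fine resolution survives averaging.

    Averages `fine_values` in groups of `coarse_factor` samples and
    reports whether the coarse view still exceeds `threshold`. A
    sustained shift persists; a one-sample blip averages away.
    """
    coarse = [
        sum(fine_values[i:i + coarse_factor]) / coarse_factor
        for i in range(0, len(fine_values) - coarse_factor + 1, coarse_factor)
    ]
    return any(v > threshold for v in coarse)
```

A ten-minute elevated run still shows up in the 15-minute view; a single-sample spike does not.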

3. Learning is Vital → Capturing Work as Done

The core of this work was grounded in research:

  • Interviews with domain experts
  • Observation of real-world workflows
  • Analysis of how anomalies were identified and validated

What emerged was not a single workflow, but a set of situational strategies that shifted depending on:

  • Type of anomaly
  • Time pressure
  • Confidence in the data

These insights reframed the problem:

Users were not analyzing data—they were conducting investigations.

This shift informed a move away from static data tables toward more exploratory, decision-support-oriented designs.

4. Context Drives Behavior → Designing for Operational Reality

User behavior was shaped by:

  • The scale of the system (miles of pipeline)
  • The time delay between data collection and review
  • The financial impact of missed anomalies (e.g., gas purity loss)
  • The cost of false alarms

These constraints created a constant tension:

  • Investigate too much → inefficiency
  • Investigate too little → risk of loss

Design solutions needed to operate within that tension.

This led to a focus on:

  • Prioritization of high-risk signals
  • Contextual cues tied to operational thresholds
  • Flexible views that allowed users to zoom between time scales
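
The prioritization idea can be sketched as a simple weighted ranking: score each flagged signal by its deviation magnitude and the relative impact of the variable involved, then surface the highest-risk signals first. The field names and weights below are hypothetical.

```python
def prioritize(signals, weights):
    """Rank flagged signals so the highest-risk ones surface first.

    `signals` is a list of dicts with 'variable' and 'deviation' (in
    standard deviations); `weights` maps each variable to a relative
    operational/financial impact factor.
    """
    def score(sig):
        # Unlisted variables default to a neutral weight of 1.0.
        return abs(sig["deviation"]) * weights.get(sig["variable"], 1.0)
    return sorted(signals, key=score, reverse=True)
```

With gas composition weighted more heavily than pressure, a smaller composition deviation can still outrank a larger pressure one.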

The system began to reflect the decision context, not just the data structure.

5. How You Respond Matters → System-Level Impact

The outcome of this work was not just a redesigned interface, but a shift in how the system supported operational performance.

By aligning the system with how users actually worked:

  • Time to identify anomalies decreased
  • Confidence in distinguishing real issues from noise improved
  • Reliance on manual workarounds was reduced

Most importantly, the system moved from being a repository of data to a tool for operational awareness and decision support.

Closing Reflection: Human Factors as HOP in Practice

While this work was conducted under the banner of Human Factors and UX Research, it directly reflects the principles of Human and Organizational Performance.

At its core, the effort was not about improving an interface—it was about:

  • Understanding how system conditions shape behavior
  • Designing for real-world work, not ideal workflows
  • Supporting human judgment in complex environments

The result was not just better usability, but improved system performance.