Designing for Safety: Human Factors in Space-Based Medical Systems

In safety-critical environments, usability is not about convenience—it is about preventing harm.

During my time at Lockheed Martin supporting NASA programs, I worked on human factors challenges that closely parallel modern medical device design, including real-time monitoring of astronaut vitals during extravehicular activities (EVAs) and interface design concepts for lunar medical procedures.

While these systems operated in space, the underlying challenge was the same as in medtech: ensuring that users can interpret information correctly and act without error under pressure.

My approach aligned closely with principles later formalized in standards such as ISO 14971 and IEC 62366-1: identifying risk, understanding user interaction, and designing to mitigate use-related hazards.

The Challenge

Design interfaces for medical monitoring and procedural guidance in environments where:

  • Cognitive load is high (EVA, mission stress)
  • Physical constraints limit interaction (gloves, limited mobility)
  • Delayed or incorrect interpretation of data can lead to serious harm
  • Real-time decisions are required with limited support

The challenge was not just usability—it was ensuring that critical information could be perceived, interpreted, and acted on without error.

My Role

As a Senior Human Factors Engineer in Space Life Sciences:

  • Led human factors and usability evaluation efforts
  • Conducted research on how astronauts interpret and act on physiological data
  • Evaluated interface concepts for clarity, prioritization, and error potential
  • Collaborated with engineers, medical experts, and mission stakeholders

1. Risk Identification (ISO 14971 Mindset)

  • Identified potential hazards:
    • Misinterpretation of vitals data
    • Delayed recognition of critical conditions
    • Incorrect procedural steps
  • Mapped:
    • Hazard → User interaction → Potential harm
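The hazard → user interaction → potential harm mapping above can be sketched as a simple data structure. This is a minimal illustration, not the actual tooling used on the program; the specific hazards, severity and probability values, and the 1–5 scales are hypothetical, though the severity-times-probability ranking mirrors the risk-matrix style of estimation common in ISO 14971-based analyses.

```python
from dataclasses import dataclass

@dataclass
class UseHazard:
    """One row of a hazard -> user interaction -> potential harm mapping."""
    hazard: str            # e.g. misinterpretation of vitals data
    user_interaction: str  # the task step where the hazard arises
    potential_harm: str    # clinical or mission consequence
    severity: int          # hypothetical scale: 1 (negligible) .. 5 (catastrophic)
    probability: int       # hypothetical scale: 1 (improbable) .. 5 (frequent)

    @property
    def risk_score(self) -> int:
        # Simple severity x probability estimate, risk-matrix style
        return self.severity * self.probability

# Illustrative entries only -- not actual program data
hazards = [
    UseHazard("Misinterpretation of vitals data",
              "Reading the physiological display during EVA",
              "Delayed treatment of a deteriorating crew member",
              severity=4, probability=3),
    UseHazard("Incorrect procedural step",
              "Executing a medical checklist under time pressure",
              "Wrong intervention applied",
              severity=5, probability=2),
]

# Rank hazards so mitigation effort targets the highest risks first
ranked = sorted(hazards, key=lambda h: h.risk_score, reverse=True)
```

A table like this makes the traceability explicit: every design mitigation can point back to the interaction and harm it addresses.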

2. Use-Related Hazard Analysis (IEC 62366-1 Thinking)

  • Identified critical tasks:
    • Monitoring physiological signals
    • Responding to alerts
    • Executing medical procedures
  • Evaluated:
    • Where confusion could occur
    • Where users might hesitate or make incorrect decisions
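One way to picture the critical-task screening above is to tag each task with its foreseeable use errors and worst-case harm, then filter for the tasks where a use error could lead to serious harm. The tasks, error descriptions, severity values, and threshold below are all hypothetical, chosen only to show the shape of the analysis.

```python
# Hypothetical task records: (task, foreseeable use errors, worst-case harm severity 1-5)
tasks = [
    ("Monitoring physiological signals",
     ["confusing two similar waveforms", "missing a trend change"], 4),
    ("Responding to alerts",
     ["dismissing an alarm as nuisance", "hesitating on an ambiguous cue"], 5),
    ("Adjusting display brightness",
     ["setting an uncomfortable level"], 1),
]

SEVERITY_THRESHOLD = 3  # assumed cut-off for "could lead to serious harm"

# Critical tasks: those whose use errors could lead to serious harm,
# in the sense IEC 62366-1 uses the term
critical = [(name, errors) for name, errors, sev in tasks
            if sev >= SEVERITY_THRESHOLD]
```

The value of this framing is that formative testing effort concentrates on the short list of critical tasks rather than spreading evenly across every interaction.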

3. Design for Risk Mitigation

Focused on reducing cognitive and interaction risk through:

  • Clear hierarchy of information (what matters now)
  • Improved signal visibility and differentiation
  • Reduction of ambiguity in controls and feedback
  • Designing for constrained interaction (gloves, limited dexterity)
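The "what matters now" hierarchy described above can be sketched as a prioritization rule: abnormal readings surface first, ordered by criticality, and nominal values are suppressed unless everything is nominal. The parameters, values, and three-level criticality scale are illustrative assumptions, not the actual display logic.

```python
# Hypothetical live readings: (parameter, value, criticality 0=nominal..2=critical)
readings = [
    ("Heart rate", "142 bpm", 2),
    ("Suit pressure", "4.3 psi", 1),
    ("Core temperature", "37.0 C", 0),
    ("CO2 partial pressure", "2.1 mmHg", 0),
]

# "What matters now": abnormal items first, most critical at the top
prioritized = sorted((r for r in readings if r[2] > 0),
                     key=lambda r: r[2], reverse=True)

# Fall back to the full list only when nothing is abnormal,
# so the display never shows an empty panel
display = prioritized or readings
```

Filtering before sorting is a deliberate choice here: under high cognitive load, removing nominal values reduces scan time more than any amount of visual styling.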

Design decisions were driven by reducing the likelihood of use error, not just improving efficiency.

4. Formative Evaluation (Iterative Testing)

  • Conducted usability testing in simulated environments
  • Observed:
    • Decision-making under pressure
    • Misinterpretations of data
    • Delays in response
  • Iterated designs based on findings
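The delays and hesitations observed in formative sessions lend themselves to simple quantitative summaries. The sketch below, with invented participant logs and an assumed hesitation threshold, shows the kind of alert-to-response metric that can flag where an interface ambiguity is costing time.

```python
# Hypothetical session log: (participant, alert onset in s, response in s)
trials = [
    ("P1", 10.0, 12.4),
    ("P2", 10.0, 18.9),
    ("P3", 10.0, 11.7),
]

HESITATION_THRESHOLD_S = 5.0  # assumed cut-off for a "delayed" response

# Response delay per trial, and the mean across participants
delays = [response - alert for _, alert, response in trials]
mean_delay = sum(delays) / len(delays)

# Participants whose delay suggests hesitation worth investigating
delayed = [p for p, alert, response in trials
           if response - alert > HESITATION_THRESHOLD_S]
```

Even a coarse metric like this makes iteration concrete: if a redesign does not shrink the delayed list across sessions, the ambiguity has not actually been removed.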

5. Validation Mindset (Summative Framing)

The work emphasized:

  • Can users correctly interpret critical data?
  • Can they act without hesitation or confusion?
  • Are failure points minimized?

Key Insights 

  • Many risks were not technical—they were perceptual and cognitive
  • Small interface ambiguities led to measurable hesitation
  • Prioritization of information was more critical than volume of information
  • Designing for constrained environments forced clarity and simplicity

Outcomes 

  • Improved clarity and prioritization of physiological data displays
  • Reduced ambiguity in user interpretation during simulated tasks
  • Identified critical failure points early in the design process
  • Influenced interface design decisions in safety-critical workflows

This work contributed to safer, more reliable interaction models in high-risk environments.