Data Visualization & System Design Insight (Global Earthquakes, 1997–2014)

Designing for Preparedness: Translating Data Into Actionable System Insights


overview

This project examines how large-scale event data can be transformed into clarity, foresight, and actionable decision support — the same principles driving responsible AI and human-centered system architecture.

Using a dataset originally sourced from the U.S. Geological Survey (USGS) Earthquake Catalog, I led the acquisition, preprocessing, and visualization design to explore how raw environmental data could reveal patterns of global vulnerability and resilience.

Our focus was on identifying when and where the most catastrophic earthquakes occurred worldwide. After constraining the dataset to the 1997–2014 window, we found that magnitude-8.0-and-above events were extremely rare. Expanding the filters to include magnitude 7.0+ “severe” earthquakes provided a more instructive view of systemic behavior — showing that significant seismic activity occurred regularly and was geographically concentrated along a few persistent fault zones.

These visualizations ultimately highlighted how data scale and filtering choices shape interpretability. By toggling magnitude thresholds interactively, users could observe how “catastrophe” redefines itself depending on the lens applied — a concept that parallels AI model tuning, confidence thresholds, and governance boundaries in my later system-design work.
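
To make the threshold effect concrete outside of Tableau, the same framing question can be sketched in a few lines of pandas against the public USGS catalog. This is an illustrative sketch, not the project's actual pipeline: the endpoint and parameters are the standard USGS FDSN event service, the column names ("time", "mag") are those returned by its CSV format, and the printed counts are whatever the live query returns.

```python
import pandas as pd

# Pull a comparable 1997–2014, magnitude-6.0+ slice directly from the public
# USGS FDSN event service (the project itself used Tableau's sample repository).
USGS_QUERY = (
    "https://earthquake.usgs.gov/fdsnws/event/1/query"
    "?format=csv&starttime=1997-01-01&endtime=2014-12-31&minmagnitude=6"
)
quakes = pd.read_csv(USGS_QUERY, parse_dates=["time"])

# Same data, three lenses: how "catastrophe" is framed depends on the threshold.
for threshold, label in [(8.0, "catastrophic"), (7.0, "severe"), (6.0, "all recorded")]:
    subset = quakes[quakes["mag"] >= threshold]
    print(f"mag >= {threshold} ({label}): {len(subset)} events "
          f"({len(subset) / len(quakes):.1%} of the catalog)")
```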

While the subject matter centers on seismic activity, the methodological goal mirrors my ongoing UX and AI systems practice: building transparent, interpretable pipelines that help humans make safer, faster, and more informed decisions.

 
 

My Role

I led the data acquisition and visualization strategy for this project, guiding critical design decisions that shaped interpretability and insight.

  • Expanded the filtering criteria beyond catastrophic (8.0+) events to include severe (7.0+) thresholds, enabling a more statistically meaningful analysis, and opted for interactive filters so users could surface every recorded event in the timeframe (6.0+) if desired.

  • Redesigned the primary visualization from a limited geographic plot to a layered bar graph and interactive tree map, improving readability and user engagement.

  • Balanced accuracy and accessibility, selecting visual formats that translated complex seismic data into actionable context for preparedness and system-design insight.

  • Coordinated the full analysis flow — from query design and data structuring to final storytelling in Tableau Public.

 

approach

  1. Data Acquisition & Query Design

    Queried and imported USGS earthquake event data from Tableau’s public repository.

    Scoped variables to focus on location, magnitude, and date-time for meaningful temporal and spatial analysis.

  2. Data Preparation

    Filtered out dimensions that did not contribute to the analysis and implemented a plain-language labeling schema for usability and readability (sketched in code after this list).

    Identified inconsistent naming conventions for earthquake locations and visualized their effect on interpretability.

  3. Visualization & Story Structure

    Designed interactive Tableau story dashboards to let users explore severity, frequency, and impact by region and timeframe.

    Emphasized accessibility and clarity: consistent scales, restrained color palette, and strong contrast for pattern visibility.

  4. Analysis & System Relevance

    Revealed global clusters of seismic volatility and regional outliers.

    Translated patterns into lessons about data integrity, event monitoring, and early-warning system architecture — principles foundational to any domain involving risk management or healthcare triage.
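
The preparation steps above were carried out in Tableau, but the same scoping, labeling, and normalization logic can be sketched in pandas. The sketch below continues from the `quakes` DataFrame in the earlier example; the severity labels mirror the thresholds used throughout this project, and the regular expressions are simplified assumptions about USGS-style "place" strings rather than the project's exact cleaning rules.

```python
# 1. Scope to the variables needed for temporal and spatial analysis.
quakes = quakes[["time", "latitude", "longitude", "mag", "place"]].copy()

# 2. Plain-language severity labels, mirroring the thresholds used in the story.
def severity(mag: float) -> str:
    if mag >= 8.0:
        return "Catastrophic (8.0+)"
    if mag >= 7.0:
        return "Severe (7.0-7.9)"
    return "Moderate (6.0-6.9)"

quakes["Severity"] = quakes["mag"].apply(severity)

# 3. Collapse inconsistent location strings toward a comparable region key,
#    e.g. "off the east coast of Honshu, Japan" -> "Honshu, Japan".
quakes["Region"] = (
    quakes["place"]
    .str.replace(r"^\d+\s*km\s+[NSEW]+\s+of\s+", "", regex=True)  # "123 km NNE of ..."
    .str.replace(r"^(off|near)( the)?( \w+)? coast of\s+", "", case=False, regex=True)
    .str.strip()
)
```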

 

Analysis & System Relevance

  • Identified temporal clustering within 2011, when multiple high-magnitude earthquakes (including a magnitude 9.0 event near Honshu, Japan) occurred within a short window, illustrating how overlapping seismic activity can magnify regional risk and systemic strain.

  • Observed that every year containing a catastrophic (8.0+) event also included at least one additional severe (7.0+) earthquake, highlighting the compounding nature of high-energy release periods.

  • Found that severe earthquakes (7.0+) occurred consistently across all years within the dataset, forming a baseline of global volatility against which catastrophic events stand out as anomalies rather than trends.

  • Confirmed regional clustering along the Pacific Ring of Fire, with recurring epicenters near Sumatra, Indonesia, and Honshu, Japan, underscoring the predictive and analytical value of geographic patterning in environmental system design.

  • Documented inconsistent location labeling and metadata gaps that disrupted geographic aggregation, emphasizing the necessity of data normalization, consistent schema, and transparent labeling standards for any system intended to support decision-making or automated response (illustrated in the sketch following this list).

  • Translated these analytical outcomes into broader lessons on data integrity, pattern recognition, and early-warning architecture—principles equally foundational to healthcare systems, risk governance, and AI oversight frameworks.
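
As a quick illustration of the labeling issue flagged above, the raw location strings for a single region can be surfaced directly. This is a hypothetical check against the USGS-style catalog used in the earlier sketches, not output from the project's Tableau workbook.

```python
# Surface the many raw spellings that refer to the same region; without
# normalization, each variant aggregates as a separate "location".
honshu_variants = quakes.loc[
    quakes["place"].str.contains("Honshu", case=False, na=False), "place"
].value_counts()
print(honshu_variants.head(10))
```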

 

Key Metrics & Interactivity:

  • Magnitude Range: 6.0–9.1 (moderate to catastrophic)

  • Time Frame: 1997–2014

  • Primary Variables: magnitude, date/time, and geographic distribution of earthquakes

  • Derived Metrics (sketched in code after this list):

    Total count of earthquakes per year, showing annual frequency and volatility across global activity

    Top regions by occurrence and average magnitude, highlighting recurring epicenters such as Honshu, Japan, Sumatra, Indonesia, and the Vanuatu–Fiji region, regardless of the magnitude filter applied

    Relative distribution of event magnitudes, revealing variability in seismic intensity over time

  • Interactivity: The Tableau story incorporates live magnitude filters (6.0–9.1), allowing users to dynamically adjust thresholds and observe how perceived frequency, severity, and regional concentration change with framing
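
For readers who prefer code to dashboards, the derived metrics above reduce to two small aggregations. The sketch continues from the prepared DataFrame in the earlier examples and is illustrative only; the Tableau story computes these measures interactively rather than in Python.

```python
# Annual frequency: total earthquakes per year across 1997–2014.
quakes["Year"] = quakes["time"].dt.year
per_year = quakes.groupby("Year").size().rename("events")

# Top regions by occurrence, with the average magnitude observed in each.
top_regions = (
    quakes.groupby("Region")
    .agg(events=("mag", "size"), avg_magnitude=("mag", "mean"))
    .sort_values("events", ascending=False)
    .head(10)
)

print(per_year)
print(top_regions.round(2))
```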

Key Insights

  • Magnitude thresholds shape interpretability. When the dataset was limited to catastrophic (Mag 8+) events, the visual narrative collapsed — too few events for meaningful pattern recognition. Expanding to severe (Mag 7+) revealed consistent clusters across the Pacific Ring of Fire and the Indonesian subduction zones, emphasizing how framing and filtering directly affect insight quality.

  • Temporal density exposes system vulnerability. Distinct surges in seismic frequency around 2004–2011 mirrored major regional disasters, illustrating how clustering can signal infrastructure stress periods — a transferable lesson for real-time monitoring systems.

  • Visual hierarchy determines understanding. Replacing a sparse geographic scatter with layered bar graphs and tree maps clarified proportional impact and global distribution, demonstrating how structural clarity converts noise into signal.

  • Data literacy is resilience. Empowering users to explore variable thresholds (6+, 7+, 8+) turns a static dataset into an adaptive decision tool — an approach that parallels the adaptive governance principles used in my AI systems work.

 

outcomes

The final Tableau story reframed seismic records as a living model of preparedness, showing how scale, frequency, and distribution interact over time.

By refining thresholds, labeling, and visual hierarchy, the dataset evolved from static observation into a decision-support framework — one that could inform early-warning design, resource allocation, or resilience modeling.

Though this analysis focused on environmental systems, the methodology scales to healthcare accessibility, AI governance, and civic infrastructure — domains where risk, complexity, and human safety converge.

 

Reflection

This project reinforced my belief that data visualization is not an aesthetic exercise; it is a form of system governance.

When structured well, information becomes empathy: it enables institutions, technologies, and humans to respond with precision rather than panic.

That conviction continues to drive my graduate research and independent work such as Project Arqos, where I apply the same logic to autonomous systems so that intelligence serves stability, accountability, and human trust.

 

Next

Project Arqos: Governed Autonomy Framework