Digesting Censorship Data: 5 Insights for Dashboard Design

On behalf of the Blocky project, we set out to understand how people use censorship data—from journalists to researchers to policy makers to advocates. This article distills those findings into five key insights that shaped our design work with Blocky.

Blocky is a data dashboard that offers real-time checks on the availability of domains in China, and we wanted to explore how it could become most useful to people working on China issues. We interviewed eight people and exchanged insights with them. What emerged was a set of themes about how censorship data is understood, shared, and acted upon.

1. The Audience Divide

Censorship data is deeply technical. Terms are often unfamiliar, and interpreting test results usually requires expertise. As a result, we observed an audience divide:

  • Experts (researchers, data translators, technologists) who can navigate datasets like OONI or Censored Planet directly

  • Non-experts (journalists, advocates, policy makers) who often rely on experts to explain the data

Journalists rarely dig into the platforms alone. Instead, they reach out to toolmakers or researchers for help making sense of the results. For Blocky to be valuable, it needs to bridge this divide—offering features that guide non-technical users while still serving expert needs.

2. People Generate Different Types of Knowledge

We found that people use censorship data to answer questions at three levels of depth:

  • Deep Knowledge: Academics and researchers working with raw data over long timelines. Their outputs are peer-reviewed papers or technical reports. Example: comparing methodologies across platforms like OONI and Censored Planet.

  • Contextual Knowledge: Advocates, embassies, policy makers, or organizations asking: Which news websites are currently blocked? Their outputs might be reports for negotiations, organizational campaigns, or news articles.

  • Specific Knowledge: Rapid-response checks: Is this URL blocked right now? Was it blocked during the 2024 elections? Journalists, publishers, and advocacy groups often need this quick validation.

These three layers show how the same data can support very different outcomes—from policy arguments to breaking news headlines.

3. Methodology Shapes Meaning

Understanding censorship data requires knowing how the tests were run. A simple color code like “yellow” can mean different things across tools:

  • OONI: Tests are run on individual devices. “Yellow” often means the result needs expert review to confirm whether it indicates blocking.

  • Blocky: Tests are run in controlled environments. “Yellow” indicates partial blocking (e.g. only in some regions, or censorship still in transition).

This highlights why dashboards must make methodology transparent—without it, data risks being misunderstood or misused.
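To make this concrete, here is a minimal sketch of one way a dashboard could keep methodology attached to every result, so that a “yellow” from one tool is never read as a “yellow” from another. The names here are hypothetical illustrations, not Blocky’s or OONI’s actual APIs or data formats:

```python
# A minimal sketch, assuming hypothetical names (not Blocky's or OONI's
# actual API): attach methodology context to every raw status color.
from dataclasses import dataclass

@dataclass
class Measurement:
    url: str
    status: str       # "green" | "yellow" | "red"
    source: str       # which tool produced the result
    methodology: str  # how the test was run
    meaning: str      # what this status means under that methodology

def annotate(url: str, status: str, source: str) -> Measurement:
    """Pair a raw status color with its methodology (hypothetical helper)."""
    if source == "OONI":
        return Measurement(
            url, status, source,
            methodology="tests run on individual devices",
            meaning="yellow: needs expert review to confirm blocking",
        )
    if source == "Blocky":
        return Measurement(
            url, status, source,
            methodology="tests run in controlled environments",
            meaning="yellow: partial blocking (some regions, or in transition)",
        )
    raise ValueError(f"unknown source: {source!r}")

print(annotate("https://example.com", "yellow", "OONI").meaning)
```

The design choice is simple but consequential: the status color never travels without its methodology, so a non-expert reading the dashboard always sees what a result means under the test that produced it.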

4. Censorship Tracks With National Events

Domain blocking and unblocking often align with external events:

  • Political meetings or elections

  • New regulations or laws

  • High-profile cultural flashpoints (e.g. China blocking Winnie the Pooh references after comparisons to Xi Jinping)

Advocates and policy researchers pay close attention to these correlations. For them, a dashboard isn’t just about listing blocked sites—it’s about connecting when and why censorship happens.
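As a sketch of what surfacing those correlations could look like (using invented data and helper names, not a feature Blocky necessarily ships), a dashboard could flag blocking events that fall within a window around known national events:

```python
# A minimal sketch with hypothetical data: flag domain-blocking events
# that fall near known national events, so analysts can spot candidate
# correlations at a glance.
from datetime import date, timedelta

# (date blocked, domain) -- illustrative entries, not real measurements
block_events = [
    (date(2024, 1, 10), "news.example.org"),
    (date(2024, 3, 6), "forum.example.net"),
]

# (date, description) -- e.g. elections, political meetings, new regulations
national_events = [
    (date(2024, 1, 13), "election"),
    (date(2024, 3, 5), "annual political meeting"),
]

WINDOW = timedelta(days=7)  # how close counts as "aligned"

for blocked_on, domain in block_events:
    for event_date, label in national_events:
        if abs(blocked_on - event_date) <= WINDOW:
            print(f"{domain} blocked {blocked_on}, near {label} ({event_date})")
```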

5. The Density of Censorship Requires a Starting Point

In heavily censored environments like China, so much content is blocked that nothing feels surprising. For newcomers, the landscape can be overwhelming. A useful dashboard must help users orient themselves—offering entry points into the data and surfacing patterns across levels of censorship.

Censorship happens across two levels:

  • Macro: Entire websites or services blocked.

  • Micro: Keywords or specific pieces of content censored (e.g. posts removed from Weibo, keywords flagged in WeChat).

Blocky brings transparency to the URLs being blocked, focusing on the macro level of censorship. It is one tool within a wider ecosystem, and it plays a critical role in helping people make sense of an opaque system.

Closing

Censorship in China is complex, pervasive, and ever-changing. By studying how journalists, advocates, and researchers work with existing datasets, we learned that effective dashboards need to:

  • Make technical data accessible to non-experts

  • Serve multiple layers of inquiry (deep, contextual, specific)

  • Be transparent about methodology

  • Highlight correlations with real-world events

  • Offer starting points in dense censorship environments

Acknowledgements

We’d like to extend a special thanks to our research participants and the GreatFire team.

This research and article were led and produced by Carrie Winfrey and José René Gutiérrez.
