Guardian reporter Spencer Ackerman broke the initial story of Chicago police detentions at Homan Square in February 2015. Shortly thereafter, the Guardian filed suit against the Chicago Police Department, seeking records of the individuals taken to the facility. During the summer of last year, it finally became clear that the CPD would be compelled to release some of this data.
The first batch of disclosures included 3,500 records, featuring names, ages, offenses, and various other data points. Another 3,500+ records were released in a second batch of disclosures several weeks later, compelled through the ongoing lawsuit. That’s when the Guardian’s US interactive team was called in to help make sense of this large data dump.
After the first record release, the team spent several weeks on data analysis, as well as producing a few static graphics for an opening story. When it became clear that the second batch of several-thousand records was forthcoming, Guardian editors decided it was important to do a bigger, explanatory interactive project. The goal of that project, which turned into “Homan Square: a portrait of Chicago’s detainees”, would be to illustrate the full scale of detentions at Homan Square and serve as a primer for readers not yet familiar with Ackerman’s reporting.
Because this is such a complex story, we wanted to walk readers through what we know about Homan Square step-by-step — but using the more natural act of scrolling, rather than clicking.
We decided early on that we wanted to reinforce the sheer quantity of people known to have been detained at the facility by rendering a “cloud” of documents representing all 7,000+ arrest records. But we soon realized that a document-centric visual representation wouldn’t work, both for conceptual reasons and because of data constraints. (For one, we don’t let readers read the arrest records, because we’re not revealing details of individual cases.) It was also important for us to remind people that numbers are about human stories, and that pixels in data visualizations often represent people, so we chose to represent the arrestees with abstract but more human icons.
From there, the idea of rearranging these icons into different visualizations felt incredibly natural.
At first, readers see a floating cloud of abstract portraits, but as they scroll from one section or “scene” to the next, that same cloud is transformed into a visual representation of statistics about Homan Square: the number of people detained at the facility, the race of detainees, where they were detained, and a timeline showing when they were detained.
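The mechanics of this kind of scroll-driven storytelling can be sketched in a few lines. This is a hypothetical illustration, not the Guardian’s actual code: given the reader’s scroll position and the pixel offsets where each “scene” begins, it returns the active scene and a 0-to-1 progress value that could drive the transition into the next layout.

```javascript
// Hypothetical sketch of a scroll-to-scene mapping. `sceneOffsets` is an
// ascending list of the pixel offsets at which each scene starts.
function sceneAt(scrollY, sceneOffsets) {
  // Find the last scene whose start offset we've scrolled past
  let index = 0;
  for (let i = 0; i < sceneOffsets.length; i++) {
    if (scrollY >= sceneOffsets[i]) index = i;
  }
  const start = sceneOffsets[index];
  const end = sceneOffsets[index + 1];
  // Progress through the current scene, clamped to [0, 1];
  // the final scene is always "complete"
  const progress =
    end === undefined
      ? 1
      : Math.min(1, Math.max(0, (scrollY - start) / (end - start)));
  return { index, progress };
}

// Example: a quarter of the way through the second of three scenes
const state = sceneAt(150, [0, 100, 300]); // { index: 1, progress: 0.25 }
```

In a real page, a scroll listener would feed the current `scrollY` into a function like this on each animation frame, and `progress` would parameterize the animation between layouts.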
We also highlight two cases where we are able to put names and at least one face to the abstract forms. This serves as another reminder that real people are behind the statistics.
Persistent visualizations that adapt to where you are in the story can be really powerful ways of helping readers understand the numbers, compared to embedding a series of discrete charts.
To make this piece, the interactive team worked closely with reporters Ackerman and Zach Stafford.
There were many challenges to executing this piece, starting with the data itself. The records received from the Chicago Police Department were not delivered in a machine-readable format, which made them tedious to digitize and analyze. Some key values were also missing from many files, including the race of large batches of detainees. We hired researchers in Chicago to pull race data manually from arrest records to complete the dataset.
Because Homan Square has a reputation as a place where police pressure people into becoming informants, it would have been irresponsible to compromise the identities of these individuals. Once we started working to visualize the data, we had to figure out how to keep the records anonymous while also reminding readers that there are real people behind the numbers. Since we weren’t using photographs of the arrestees, we hired an illustrator, Oliver Munday, to make a series of abstract portraits to represent the people held at Homan Square.
There was also a significant technical challenge. Browsers – especially mobile browsers – struggle to animate more than a few hundred elements smoothly, and we had 7,185. We used the THREE.js WebGL library to create a ‘scene’ in which the document cloud could live in three dimensions – WebGL allows you to draw things more quickly at the cost of more difficult-to-write code – but the effort of transitioning from one layout to another was still too great for slower devices. Creating custom animation code and manipulating blobs of binary data, as a game developer might, gave us the performance boost we needed for the story to work on mobile. We created a framework for adding an SVG annotation layer atop the WebGL canvas in order to add things like text and axes where necessary.
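To give a flavor of the typed-array approach described above, here is a minimal, assumed sketch (not the production code): each layout is precomputed as a flat `Float32Array` of `[x, y, z, x, y, z, …]` positions, and each frame interpolates between two layouts into a third buffer that a WebGL vertex attribute can read directly, avoiding per-element DOM or object overhead.

```javascript
// Illustrative sketch: linearly interpolate every particle position between
// two precomputed layouts. `from`, `to`, and `out` are Float32Arrays of the
// same length, holding x/y/z triples for all 7,185 records.
function lerpPositions(from, to, out, t) {
  for (let i = 0; i < from.length; i++) {
    out[i] = from[i] + (to[i] - from[i]) * t;
  }
  return out;
}

// Example: one point moving from (0, 0, 0) toward (10, 20, 30), halfway there
const from = new Float32Array([0, 0, 0]);
const to = new Float32Array([10, 20, 30]);
const out = new Float32Array(3);
lerpPositions(from, to, out, 0.5); // out is now [5, 10, 15]
```

The key performance property is that the hot loop touches only contiguous binary buffers, which is exactly the shape of data the GPU wants; in a THREE.js scene the `out` buffer would back a position attribute flagged for re-upload each frame.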
For the map, we built our own custom map renderer that generated an image from public domain OpenStreetMap data, in order to get the exact look we wanted.
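The heart of any such renderer is projecting geographic coordinates onto the screen. As a hedged sketch under standard assumptions (the piece doesn’t detail the Guardian’s implementation), this converts OpenStreetMap longitude/latitude pairs into Web Mercator pixel coordinates at a given zoom level, after which streets and boundaries can be drawn to a canvas as ordinary 2D paths:

```javascript
// Project a lon/lat pair into Web Mercator pixel coordinates.
// At zoom z, the whole world maps onto a square of 256 * 2^z pixels.
function project(lon, lat, zoom) {
  const scale = 256 * Math.pow(2, zoom); // world size in pixels at this zoom
  const x = ((lon + 180) / 360) * scale;
  const latRad = (lat * Math.PI) / 180;
  const y =
    ((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) *
    scale;
  return { x, y };
}

// Example: (0°, 0°) lands at the center of the 256px world at zoom 0
project(0, 0, 0); // { x: 128, y: 128 }
```

Rendering a static image this way, rather than embedding a slippy-map library, is what makes it possible to control every visual detail of the basemap.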
The layout changes according to the screen on which you’re viewing the story: on desktop, the graphic elements fill the screen with a text overlay to one side, while on mobile devices the text flows beneath a canvas that fills the top half of the screen – an unusual and hard-to-implement approach that we settled on after many design iterations and user testing sessions.