Algorithms increasingly pervade our lives – influencing the news we see, the prices of items we buy and, if we’re caught up in the criminal justice system, whether we’ll be jailed and for how long. Yet algorithms are frequently opaque and thus not subject to accountability. Basic questions have often gone unexplored, such as whether algorithms discriminate against some groups and advantage others. Those are the kinds of questions we decided to investigate to help readers understand how algorithms work.
We created a series of explainers that used a combination of text, video, crowdsourcing and even custom-built tools that readers could download.
Facebook was one of the companies we focused on, since its algorithms wield enormous influence over the information Americans see. For Facebook, we created a browser extension that let readers see the “interest” categories the company had placed them in. Thousands of readers used it, surfacing 52,000 distinct interest categories.
But despite the granular categories, like “Breastfeeding in Public,” we found that Facebook doesn’t tell users something crucial: To supplement the data it collects itself, it buys sensitive data about users’ offline lives, including their income, the types of restaurants they frequent, and even how many credit cards are in their wallets.
The crowdsourcing effort also led to an even more startling discovery: Facebook allowed advertisers to exclude users by race. This finding led to a series of articles during which Facebook first defended the policy as legitimate and then, a mere six weeks later, reversed course and changed this part of its advertising offerings. The crowdsourced data was made public in our data store and has so far been our most downloaded data set.
In addition to the Facebook episode, we published other stories in this series: one showing how news organizations A/B test headlines on readers, one letting readers look up a ZIP code to see pricing discrepancies for SAT prep courses and, finally, one explaining how machines can learn to be racist.
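The headline A/B testing mentioned above typically works by deterministically assigning each reader to a variant, then comparing click rates. The sketch below is a hypothetical illustration of that assignment step; the function name, headline strings, and two-way split are assumptions for the example, not the method of any particular news organization.

```python
# Minimal sketch of deterministic A/B headline assignment.
# All names here are illustrative, not drawn from any real newsroom's code.
import hashlib

HEADLINES = ["Variant A headline", "Variant B headline"]

def assign_headline(user_id: str) -> str:
    # Hash the reader's ID so the same reader always sees the same variant,
    # while readers overall split roughly evenly across variants.
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(HEADLINES)
    return HEADLINES[bucket]
```

Because assignment is a pure function of the reader's ID, a site can show a consistent headline per reader without storing any per-reader state, then tally clicks per variant to pick a winner.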