How Google’s AI Robot Is Helping The New York Times Moderate Its Comments

Photo: Flickr user Phil Roeder

Inside the New York Times‘ towering building in Midtown Manhattan, just off Times Square, sit 14 journalists whose primary role is to read and to click.

They are the Grey Lady’s comment moderators, and their goal is to maintain online civility in the hallowed digital pages of the 166-year-old newspaper. Civility, that is, for 10% of the Times’ articles–for only that percentage of stories on nytimes.com allow reader comments, given the size of the paper’s comment moderation team. As it is, the Times receives 12,000 comments each day.

The offices of Alphabet’s Jigsaw division [part of Google] lie farther down the island of Manhattan. Jigsaw’s new offices–in a much cooler part of town than the Times–sit atop an artisanal ice cream shop in Chelsea Market. Jigsaw is Alphabet’s tech incubator and focuses on widespread digital problems, like online harassment and terrorist recruiting. During the last year, Jigsaw and the Times have been working together to make online commenting better (because let’s face it–online commenting pretty much sucks). Now we will begin to see the fruits of their labor.

Starting today, the New York Times will begin to increase the number of articles that feature a commenting section–including front-page articles–and the moderators will now use the Jigsaw-built platform, Perspective. As the year goes on, the paper hopes to offer comments on 80% of its articles. That’s no small goal, given that the Times publishes about 200 articles a day.

To do this, the venerated newspaper has been leveraging Jigsaw’s artificial intelligence capabilities. Instead of bulking up the Times’ team of human moderators, Alphabet has been building a machine-learning algorithm that automatically scans for abusive and superfluous comments. Human moderators will use this new software, built specifically for them, to speed up their moderation process–the hope is that it will make it dramatically faster for humans to comb through all those reader-submitted comments.

Years ago, Jigsaw (formerly Google Ideas) realized it could use its mother company’s computing prowess to help build a solution that tackled online discourse. According to Patricia Georgiou, Jigsaw’s head of partnership, the Times was specifically chosen, along with a few other partners including Wikipedia, because it “aligned with our goal.” This goal, she says, is simple: “How can we improve online conversation?”

The Times‘ existing system was just as painstaking as you might imagine a legacy news organization’s commenting CMS to be.

“Basically, what we did was moderate the overwhelming majority of comments by hand,” says Bassey Etim, the paper’s community editor. Whenever an inquisitive (or angry, or happy) reader decided to sound off, Etim’s team would have to look at that comment and deem whether or not it was fit to print. He calls this system time-based because moderators would look at the comments based on the time when they were submitted.

Etim is quick to add that this system worked fine, despite how tedious it sounds. “We tried to create a product that feels like you’re still on the New York Times site,” he says of the relatively polite discourse that gets published by the moderators, compared to other websites around the web. “It doesn’t make you feel like you have to take a shower after.”

According to surveys, Times readers generally didn’t mind the delay between the time they submitted a comment and the time it appeared on the website–and they seemed happy with the conversations they encountered on the site, too. Etim believes that the only two news sites that ever executed commenting systems really well were the Times and Gawker (R.I.P.). He describes the Times‘ commenting community as a group of people who write “full sentences and erudite jokes.” But because only a small proportion of Times articles allowed comments, and because the Times likely couldn’t afford to hire an infinite number of comment moderators, the one-by-one moderation method created a problem of scale.

Up until now, choosing an article to receive commenting love or hate was a process of triangulation. The team would first look at the article’s newsworthiness. Then they would ask themselves whether comments on that article would truly add something novel to the conversation. If the answer was yes, they would determine exactly what the comments section could add to a discussion of the topic. The big challenge Etim faced was figuring out how, under his existing workflow, he could scale up the number of stories with comments without hiring an army of moderators.

ENTER: JIGSAW

Jigsaw has its tentacles in attempts to solve many online problems. One of its most well-known projects aims to figure out how to curb online harassment and trolling. This new commenting project with the Times is very similar.

“The increasing concern that we were hearing from publishers was that a lot of conversation around the hard topics is actually happening in the comments section of news articles,” says Georgiou. At the same time, journalists have been seeing an uptick in online toxicity over the years–a lot of which bleeds into these commenting spaces. Many sites don’t offer comments because they simply lack the resources to support such a platform responsibly, without letting it devolve into nastiness.

The Times, however, had built out a curated system that Jigsaw noticed. “The New York Times is dedicated to having good online conversation on their platform,” Georgiou says, adding that it’s rare for any organization to have such a moderation team of that size. Thus it seemed only natural that the two like-minded organizations would partner to create a scalable solution.

Together, Alphabet and the Times hope to do something greater than shut out trolls–they want the comment section to serve as a 21st-century version of the Victorian salon. For his part, Etim wants to see more full sentences in the Times‘ comments section–and he definitely needs more erudite jokes.

A few months ago, the Times handed over its entire cache of comments to Jigsaw–over 16 million examples. That’s when Jigsaw’s crack team of developers went to work, building an AI that can understand what should be approved and what should not be approved. The challenge, says Georgiou, is teaching the robots to recognize “what is a toxic comment” along with “what is a comment that would cause somebody to leave the conversation” (like a reader whose submitted remarks veer far off from the topic presented in the article).

Based on past decisions by the Times’s comment moderators (thanks to those 16 million previous comments) and other publishers (including Wikipedia) who have contributed their data to the project, Jigsaw created the machine-learning algorithm that is now implemented on the newspaper’s website. The project is driven by machine learning, which is iterative, so as more comments get added, the more effective the comment-scanning-and-understanding robot will theoretically become.
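The training setup described above can be sketched very roughly in code. This is a minimal, illustrative example only–a tiny toy dataset and an off-the-shelf scikit-learn text classifier standing in for Jigsaw’s actual (far more sophisticated) model; every comment, label, and model choice here is an assumption for demonstration.

```python
# Illustrative sketch: learning to predict a moderator's approve/reject
# decision from past labeled comments. NOT Jigsaw's real pipeline; the
# data and model choices below are assumptions for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "Great reporting, thank you for the added context.",
    "This analysis misses the economic angle entirely.",
    "You people are all morons.",
    "Typical garbage from this trash paper.",
]
rejected = [0, 0, 1, 1]  # 1 = a human moderator rejected the comment

# Bag-of-words features plus logistic regression: a simple stand-in
# for a comment classifier trained on historical moderation decisions.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, rejected)

# Probability of rejection, scaled to the 0-100 score moderators see.
score = 100 * model.predict_proba(["What a thoughtful piece."])[0, 1]
print(0 <= score <= 100)
```

As the article notes, the real system is iterative: each new moderation decision becomes another labeled example, so retraining on the growing corpus should gradually sharpen the scores.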

Thanks to this new system, the human moderating team will be able to access a platform that ranks a comment based on the probability that it would be rejected by a human moderator. If it’s deemed a 0, it’s very likely a good comment. If it’s given a 100, it may be the trolliest thing your eyes have ever seen. The New York Times‘ moderators can use this system to gather comments within a certain number range–say, 0-10–and then approve or deny them en masse. If something is questionable–i.e., if the robot assigned it a high-ish number–the human moderators can take a more discerning look. This software also bolds questionable sentences so the human team can easily see exactly what’s controversial about a comment, according to the machine. This, says Etim, will make the moderators’ jobs “eight to 10 times more efficient.”
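The bucket-based triage workflow described above–bulk-handle the clear cases at either end of the 0–100 scale, and route only the questionable middle to a human–can be sketched like this. The thresholds and sample comments are invented for illustration; the Times’ actual cutoffs are not public.

```python
# Minimal sketch of score-based comment triage, as described in the
# article. Thresholds (10 and 70) and the scored comments are
# illustrative assumptions, not the Times' actual values.

def triage(scored_comments, approve_below=10, review_above=70):
    """Split (text, score) pairs into three queues based on the
    0-100 rejection-probability score a model assigned them."""
    approve, review, reject = [], [], []
    for text, score in scored_comments:
        if score < approve_below:
            approve.append(text)   # very likely fine: approve en masse
        elif score > review_above:
            reject.append(text)    # very likely toxic: reject en masse
        else:
            review.append(text)    # questionable: human takes a closer look
    return approve, review, reject

scored = [
    ("Thoughtful point about the policy's tradeoffs.", 3),
    ("You are an idiot and so is everyone here.", 96),
    ("First!!! Also, somewhat off topic...", 45),
]
approve, review, reject = triage(scored)
print(len(approve), len(review), len(reject))  # 1 1 1
```

The efficiency gain comes from the ends of the scale: moderators approve or reject whole score ranges at once, spending their attention only on the middle band the model is unsure about.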

A GRANDER VISION

Scaling the New York Times‘ comments section without reducing its quality is not the only goal here.

Two weeks ago, the paper announced it was doing away with its public editor position, a high-profile role whose job was to read readers’ critiques and determine whether the Times’ editorial staff had failed its readers. The public editor wrote about those findings in a column on the Times’ website. In the wake of the decision, Times publisher Arthur Sulzberger wrote in a memo to employees that the public editor’s purpose was to be the “reader’s representative.” He also mentioned that the Times is “dramatically expanding our commenting platform.” Does the newspaper see its investment in commenting as a way to replace the role the public editor served, a voice for readers?

According to Etim, there’s a bit more nuance to it than that. “It’s not that these comments replace the public editor,” he says, “it’s that me or someone else from my desk is going to go to the news desk and say people really need an answer to this question” that has been brought up in the comments section. Instead of one person arbitrating which reader remarks will or will not be discussed with writers and editors, the entire moderating team will read what people are saying in the comments section and then relay that information to the editing team. Then editors will be able to craft a response–a staff reply to the reader in the comments section, or perhaps another article. This new system, says Etim, “helps journalists turn reader reactions into journalistic actions.”

THE BEGINNING OF A NEW CHAPTER

The Times believes this new commenting system might propel it even further into the digital future. If more people can comment on the website without fear of trolling or abuse, they will likely engage more, and more deeply, with the Times’ stories.

Currently, it can be hard to find a great digital place for discourse–especially for politics. On Facebook, Etim says, “your uncle is going to get into a big fight with you.” Not to mention Twitter, which is rampant with trolls. “We have a sweet spot,” says Etim, pointing to the civility of the Times‘ commenting section. “You can have your news arguments on our website.”

Read more at Fast Company
