Google has made a news application that relies on A.I. What do you think of the new Google News site? Do you like it?
Earlier this month, Google rolled out an upgraded version of Google News that uses artificial intelligence to find the best and most accurate content created by journalists around the world.
Your New Google News
In the “Headlines” section, the user can view world news from trusted sources. Google also offers five stories it thinks are relevant to you in a personalized “For You” section. This section can be a mixture of global and local news, based on topics you have shown interest in before.
Google promises that the more you use the app, the better it gets. It offers easy controls so you can choose to see more or less of a topic, as well as great images and videos from YouTube to improve your experience. Google is also experimenting with a unique visual format called “newscasts”, making it easy to dive right into different perspectives and learn more about a story.
To filter out bias, Google News no longer uses human editors, nor does it partner with specific news organizations, spokesperson Maggie Shiels told CNNMoney. Google’s A.I. will separate content into news, opinion, and analysis. This, Shiels says, will also prevent the problem Google had with YouTube, where automated recommendations tended to push people toward more extreme content.

The changes come at a time when Apple is reportedly prepping a premium news subscription service based on the technology from Texture, the digital newsstand business it bought in March. Notably, they also arrive amid serious concerns that publishers have about Facebook’s role in the media business, not only because of fake news but also because of its methods of ranking content, among other things.
The Google News Initiative (GNI), launched earlier this month, vowed to strengthen quality journalism and to empower news organizations through technological innovation. Google News was created 15 years ago simply to organize news articles so users could see a wide variety of sources on a topic. Trystan Upstill, head of News Product and Engineering at Google, announced the time had come to “…find the best of human intelligence – the great reporting done by journalists around the globe. We know getting accurate and timely information into people’s hands and supporting high quality journalism is more important than it has ever been right now.”
How Does A.I. Choose News?
First, Google’s artificial intelligence captures stories based on what’s popular on the internet right now. Once it picks a topic, it looks at more than a thousand news sources to gather details. Left-leaning sites, right-leaning sites – it chomps through them all.
Then, the A.I. writes its own “impartial” version of the story based on what it finds (sometimes in as little as 60 seconds). This take on the news contains the most basic facts, with the artificial intelligence supposedly remaining objective.
For some of the more political stories, the A.I. produces two additional versions labeled “left” and “right.” Those skew how you’d expect from their headlines:
- Impartial: “U.S. to add citizenship question to 2020 census”
- Left: “California sues Trump administration over census citizenship question”
- Right: “Liberals object to inclusion of citizenship question on 2020 census”
Some controversial but not necessarily political stories receive “positive” and “negative” spins:
- Impartial: “Facebook scans things you send on messenger, Mark Zuckerberg admits”
- Positive: “Facebook reveals that it scans Messenger for inappropriate content”
- Negative: “Facebook admits to spying on Messenger, ‘scanning’ private images and links”
Even the images used with the stories occasionally reflect the content’s bias, and the A.I. analyzes these as well. Impartial stories written by A.I. Pretty neat, isn’t it? But does it work?
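Google hasn’t published how this pipeline is implemented, but the three steps described above (pick a trending topic, gather coverage from many sources, emit impartial, left, and right versions) can be sketched in a few lines. Everything here is illustrative: the `Article` fields, the per-outlet `leaning` tag, and the function names are assumptions, not Google’s actual data model; the sample headlines are the ones quoted above.

```python
from collections import Counter, defaultdict
from dataclasses import dataclass

@dataclass
class Article:
    topic: str
    leaning: str   # hypothetical editorial tag: "left", "right", or "center"
    headline: str

def trending_topic(articles):
    """Step 1: pick the topic with the most coverage across sources."""
    counts = Counter(a.topic for a in articles)
    return counts.most_common(1)[0][0]

def labeled_versions(articles, topic):
    """Steps 2-3: gather all coverage of the topic, then emit one
    representative headline per label: "impartial" from center outlets,
    plus "left" and "right" variants when they exist."""
    by_leaning = defaultdict(list)
    for a in articles:
        if a.topic == topic:
            by_leaning[a.leaning].append(a.headline)
    versions = {}
    if by_leaning["center"]:
        versions["impartial"] = by_leaning["center"][0]
    for side in ("left", "right"):
        if by_leaning[side]:
            versions[side] = by_leaning[side][0]
    return versions

articles = [
    Article("census", "center", "U.S. to add citizenship question to 2020 census"),
    Article("census", "left", "California sues Trump administration over census citizenship question"),
    Article("census", "right", "Liberals object to inclusion of citizenship question on 2020 census"),
    Article("tech", "center", "Facebook scans things you send on messenger, Mark Zuckerberg admits"),
]

topic = trending_topic(articles)          # "census": 3 articles vs. 1
print(labeled_versions(articles, topic))
```

Note that the hard part, tagging each outlet’s leaning in the first place, is exactly where human judgment (and human bias) re-enters, which is the question the next section takes up.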
Is There Such a Thing as Objective News? Can Any News Source Be Objective?
Google’s A.I. uses these same algorithms to surface “objective” selections across all its products, from ads to images to search results.
Here are some cautionary stories:
In 2013, Harvard professor Latanya Sweeney investigated Google AdSense ads that appeared during searches of names associated with white babies (Geoffrey, Jill, Emma) and names associated with black babies (DeShawn, Darnell, Jermaine). She found that ads containing the word “arrest” appeared in at least 80 percent of searches for “black” names but fewer than 30 percent of searches for “white” names. Two years later, two men used Google’s photo software and found themselves labeled “gorillas” – the training data lacked enough examples of people of color.
No, the A.I. is neither racist nor bent on punishing people of color for daring to integrate with whites. Rather, machines are programmed by humans and are subsequently fed their programmers’ biases.
Let’s say programmers are building a computer model to identify terrorists. First, they train the algorithms with photos that are tagged with certain names and descriptors that programmers think typify terrorists. Then, they put the program through its paces with untagged photos of people and let the algorithms single out the “terrorist” based on what they learned from the training data. The programmers see what worked and what didn’t and fine-tune from there.
The program is supposed to work, but bias intrudes when the training data is insufficiently diverse. This prompts the software to guess based on what it “knows” – as happened with the black men tagged “gorillas”. Mistakes also occur when the training data contains too few counterexamples – say, when 85% of the photographs show swarthy bearded males wearing turbans. The terrorist may be a white female, but the model convicts a Muslim male instead.
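To make that skew concrete, here is a deliberately toy model (not anything Google or anyone else actually uses): a lookup classifier trained on invented counts in which 85% of the “threat” examples share the same superficial features, echoing the 85% figure above. The feature names and all the numbers are made up for illustration.

```python
from collections import Counter

# Hypothetical, deliberately skewed training set: each example is a
# (has_beard, wears_turban) feature pair plus a label. Because 85% of
# "threat" examples share the same superficial features, the model
# learns those features as the signal.
train = (
    [((1, 1), "threat")] * 85 +   # the overrepresented stereotype
    [((0, 0), "threat")] * 15 +   # the few counterexamples
    [((1, 1), "benign")] * 10 +
    [((0, 0), "benign")] * 90
)

def predict(features):
    """Pick the label most often seen with exactly these features
    in the training data."""
    counts = Counter(label for f, label in train if f == features)
    return counts.most_common(1)[0][0]

print(predict((1, 1)))   # the spurious correlation wins: "threat"
print(predict((0, 0)))   # the atypical attacker slips through: "benign"
```

The model is doing exactly what it was told; the bias lives entirely in the counts it was fed, which is the article’s point.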
The point is that A.I. takes on the biases of its creators, so Google’s creators and editors will need to be as impartial as humanly possible to ensure the A.I. retains its impartiality. Are we getting that?
The top sources in today’s Headlines are CNN, CBS News, The Washington Post, and the Washington Examiner. Are they objective? That’s for you to decide.
At the bottom of the Google News site, it says: “The selection and placement of stories on this page were determined automatically by a computer program.” To which Rochelle, a so-called “enterprising sleuther,” tweeted: “A COMPUTER PROGRAM DEVELOPED BY HUMANS WITH BIASES.”