Can AI reduce bias in news consumption?

On a mission to help people make sense of the news

Nicholas Borge
Coding it Forward

--

Impartial.AI is a team competing for the $5m IBM Watson AI XPRIZE.

Tweets, snaps, posts, texts, emails… the internet has increased the incentives and means to compete for attention at scale.

With so much out there, we must turn to algorithms for help. Platforms that get it right win attention, loyalty, and dollars.

Social media is particularly effective at choosing content that keeps us engaged, but this can result in narrower perspectives.

Along with cat videos, recipes, and photos of loved ones, news is an increasing part of the mix. In the US, a majority of adults (62%) now get news on social media.

So if news on social media is tailored to our tastes too, then it’s not just our perspectives on cat videos that are being shaped; it’s our perspectives on the world.

For society to function we need creativity and compromise, which are only possible with a shared understanding of reality.

The urgency is increasing

Last year’s US presidential election was a wake-up call for how narrowing perspectives play out. The real problem is that so many Americans were surprised by the outcome, perhaps because they had been fixated on a single narrative.

We, the Impartial.AI team, spoke about this and other concerns at the United Nations “AI for Good” summit in Geneva last month. The UN wants a dialogue between governments, industry, and technologists, and we were invited to exhibit (check out our write-up).

As we listened to the talks, the urgency of the problem became clearer: by 2020 there will be 4 billion people online, and before long it may be most of the people on the planet.

Imagine 9 billion people, all plugged in and benefiting from everything the internet has to offer… but each served a personalized slice of reality that excludes all the others.

If we want to get ahead of this, there is no time to waste; we must act now.

“We see the effects of narrow perspectives every day, so it’s exciting to be working on something that will have a positive impact on the way people interact with the digital world.”

Liz Merritt, User Experience and Interface Design

There is some progress, but we must do better

There is increasing attention on the wider issue of misinformation, but we think many of the resulting initiatives are likely to miss the mark. Here are three reasons why:

1. Fake news isn’t the full problem

It’s a great rallying cry that helps facilitate discussion, but it targets only one relatively small part of the problem.

Consider that even if there were only “real” news (i.e. news with only empirically validated facts and no overt intent to deceive), that still wouldn’t preclude bias.

Remember the FBI’s investigation into Hillary Clinton’s emails last year? News on the left highlighted “no criminal charges” while sources on the right emphasized “extremely careless.”

Both perspectives were correct but lacked the full context of the situation. In isolation, each reinforces a pre-existing bias—the impression is simply “Clinton good” or “Clinton bad.”

Most news isn’t fake but is subtly (and usually unconsciously) biased. The truth is more nuanced, and we need a range of perspectives to truly understand it.

2. Truth is a feature, not a product

We are motivated to read the news for a variety of reasons, and the internet can meet those needs in a variety of ways.

Feeling validated by content that you agree with, bonding over shared narratives, feeling smart for talking ‘authoritatively’ on a topic, or simply thumbing through to pass the time.

Here again, social media is king.

3. Platforms own the audience

If most Americans get at least some news on social media, then all news must now compete with, or within, this finely tuned attention-holding machine.

If within, news organizations are competing with all other forms of content; if with, they are competing with the platform itself.

In either case, to be successful, news organizations must offer a much broader value proposition than just the facts.

Further, their underlying business model is becoming increasingly unsustainable, so they have started experimenting: mainstream outlets, for example, are doubling down on paid subscriptions that promise value in the form of “higher quality” content.

But could this lead to a two-tier information system where a few pay for privileged access while most choose ‘free’ news on platforms that reinforce their own pre-existing worldview?

If news is about building a shared understanding so we can make decisions together, then quality doesn’t matter unless quality news is widespread enough to reach the right decision makers.

In a democracy this means voters, and enough of them to make a difference.

“Cognitive bias has evolved over millions of years. Survival in the Stone Age required quick judgements based on limited information; in the digital age, technology must help us fight this urge and leverage virtually unlimited information.”

Scott Salandy-Defour (Saladz), expert on bias in tech

How can we fix this?

If we care about perspective, we must accept that for most people it’s only one part of the value proposition, i.e. we can’t sell perspective on its merits alone.

So our technology must start from the assumption that bias will always exist, while also adding value for people without telling them what to think.

For news, this means we must get better at how we communicate context.

“AI will be the biggest technology disruption of our generation, but anything built on incomplete or biased data will produce poor results. The future of AI relies on a better understanding of information in context, and for this we need a wider, more inclusive approach.”

Oliver Christie, AI consultant and strategist

We believe that AI must be part of the solution

Our approach (without giving too much away) will analyze news from as wide a range of perspectives as possible and deliver context simply and intuitively.

“It’s a challenging and exciting problem; we’ll need to use existing NLP and machine learning techniques but also develop new approaches and advance technology in a number of different fields to make this a reality.”

Dr. Colin Kelly, PhD in NLP from Cambridge University

It’s impossible to do this at scale without AI. We’ll need to develop some serious technology:

  • a range of natural language processing techniques, from linguistic parsing to semantic understanding, and from machine-learned pragmatics to intention resolution
  • modern machine learning algorithms and other learning approaches, e.g. feedback loops, bootstrapping, and self-learning
  • distributed data processing for plowing through millions of articles at scale, caching for high performance, and crawling the news in a structured, ‘influencer-first’ way
  • data-driven bias metrics and measurements that are backed by cognitive psychology
  • designs for compelling yet simple and intuitive visuals that integrate seamlessly into the online experience.

Many of these techniques start from existing tools but will quickly have to push beyond the limits of what’s currently possible.
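To make the “data-driven bias metrics” idea a little more concrete, here is a minimal sketch in Python. Everything in it (the outlet names, the headlines, the tiny lexicons, and the tone_score function) is invented for illustration; it isn’t our actual method, just a toy showing how two accurate but selective framings of the same story might be scored and compared.

```python
# Toy illustration of a "framing divergence" score between two accurate but
# selective headlines about the same event. The headlines, lexicons, and
# scoring scheme are invented for this sketch; a real system would learn
# these signals from data and ground them in cognitive psychology.

# Hypothetical headlines covering the same FBI announcement.
HEADLINES = {
    "outlet_left":  "FBI recommends no criminal charges for Clinton over emails",
    "outlet_right": "FBI says Clinton was extremely careless with classified emails",
}

# Tiny hand-made lexicons of favorable / unfavorable framing words.
FAVORABLE = {"no", "cleared", "recommends"}
UNFAVORABLE = {"careless", "criminal", "classified", "extremely"}


def tone_score(text: str) -> float:
    """Naive tone score in [-1, 1]: +1 means a fully favorable framing."""
    words = text.lower().split()
    pos = sum(w in FAVORABLE for w in words)
    neg = sum(w in UNFAVORABLE for w in words)
    total = pos + neg
    # Note: simple word counting ignores negation and context ("no criminal
    # charges" is one favorable phrase, not one favorable word plus one
    # unfavorable word); that is exactly the gap a real system must close.
    return 0.0 if total == 0 else (pos - neg) / total


if __name__ == "__main__":
    scores = {outlet: tone_score(h) for outlet, h in HEADLINES.items()}
    for outlet, score in scores.items():
        print(f"{outlet}: tone = {score:+.2f}")

    # "Framing divergence": how far apart two framings of the same facts sit.
    divergence = abs(scores["outlet_left"] - scores["outlet_right"])
    print(f"framing divergence = {divergence:.2f}")
```

Even this toy makes the scale of the challenge clear: a real system would need to handle negation, pragmatics, and context across millions of articles, which is exactly why the list above goes well beyond off-the-shelf tools.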

It won’t be easy, but that’s where you come in…

Do you care about solving this problem…? Then join us! You can get in touch at http://www.impartial.ai/

We’re an experienced team of designers, AI geeks, and ex-strategy consultants, and we’d love to hear from you.

By Nicholas Borge, Team Lead, Impartial.AI

All of the quotes above are from team members at Impartial.AI.

This blog was first written for Coding It Forward, with special thanks to Chris Kuang and Neel Mehta.

Coding It Forward is a rebel alliance of people using technology for social good. If you’re interested in joining us or staying in touch, please email Chris Kuang at ckuang@college.harvard.edu. We’d love to hear from you!
