AI and the media

There was a time when journalists knew their sources, personally. The day at a local newspaper would begin with a visit to the police station to look, with the chief inspector, through the list of crimes. That might be followed by a trip to the nearby fire and ambulance stations to do something similar. Then the magistrates’ court and the municipal council offices, not forgetting chats with pub landlords, religious and community leaders, business owners and general gossips. Real-life members of the public came into the office and, if all else failed, there was the telephone.

If the journalist didn’t know them personally, he or she spoke to someone who did. Sources were everything, reliable sources. And they still are.

The difference now is that with the advent of Artificial Intelligence (AI) and myriad social media platforms, there are many more sources. How they are constructed is more complicated, the journalist doesn’t know them personally, and it is much more difficult to verify any information that comes their way.

However, it is still the journalist’s job to check that what those sources say is true.

But true? What do we mean by true? All the information an algorithm serves up, gleaned from reliable big data, may well be accurate. But filter bubbles, working from the recipient’s previous online activity, leave out whatever the user is not interested in, or might find offensive. The system works on a ‘need to know’ basis. But who decides who needs to know what? And is a partial truth still ‘true’?
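To make the point concrete, here is a minimal sketch in Python, purely illustrative and not drawn from any real platform, of how a personalised feed ends up excluding whatever the reader has shown no previous interest in:

# Toy personalised feed: rank stories by overlap with topics the user
# has clicked on before. Illustrative only, not any real platform's code.
from collections import Counter

def personalised_feed(stories, click_history, top_n=2):
    """Return the top_n stories whose topics best match past clicks."""
    interests = Counter(t for story in click_history for t in story["topics"])
    ranked = sorted(stories,
                    key=lambda s: sum(interests[t] for t in s["topics"]),
                    reverse=True)
    return ranked[:top_n]

clicks = [{"topics": ["football", "celebrity"]}, {"topics": ["football"]}]
stories = [
    {"title": "Cup final preview", "topics": ["football"]},
    {"title": "Council budget cuts", "topics": ["local politics"]},
    {"title": "Pop star interview", "topics": ["celebrity"]},
    {"title": "Climate report", "topics": ["environment"]},
]
print([s["title"] for s in personalised_feed(stories, clicks)])
# The council and climate stories never surface: a bubble by construction.

Nothing in the two excluded stories is false; they are simply never shown, which is the sense in which a filter bubble deals in partial truth.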

But hold on! Haven’t news users always worked with partial information? The journalist, operating under time and space constraints, has always decided what is put in and what is left out, to then see their copy cut and altered by sub-editors and the editor. An algorithm is simply taking that process one step further. All information is selected, partial, incomplete.

The onus, as it always has been, is on the journalist to provide as much balanced, accurate, well-written material as possible. And on the reader or viewer to pick and choose, then question the integrity of what they are seeing or hearing.

But what if a machine is now doing that journalism? Last year, Digital First Media, the owner of the Denver Post, began talks with the unions about using artificial intelligence to cover high school sports events. It hopes eventually to have computers also gather and publish reports from municipal councils and community groups.
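Covering a fixture list this way is essentially template filling from structured data. The following is a hypothetical sketch in Python, not Digital First’s actual system, of how a ‘robot reporter’ might turn a box score into copy:

# Hypothetical template-based 'robot reporter' for a match result.
# Illustrative only; not how any publisher's real system works.
def sports_recap(game):
    home, away = game["home"], game["away"]
    hs, as_ = game["home_score"], game["away_score"]
    if hs == as_:
        return f"{home} and {away} drew {hs}-{as_} on {game['date']} at {game['venue']}."
    winner, loser = (home, away) if hs > as_ else (away, home)
    verb = "edged past" if abs(hs - as_) <= 3 else "beat"
    return (f"{winner} {verb} {loser} {max(hs, as_)}-{min(hs, as_)} "
            f"on {game['date']} at {game['venue']}.")

print(sports_recap({
    "home": "East High", "away": "West High",
    "home_score": 21, "away_score": 20,
    "date": "Friday", "venue": "Memorial Field",
}))
# -> East High edged past West High 21-20 on Friday at Memorial Field.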

Digital First is owned by a New York hedge fund, Alden Global Capital, whose principal aim in introducing the technology is simple cost-cutting, even though the Denver paper, as it now operates, is profitable. Journalists will be replaced by computers.

It is not AI itself that is laying off the journalists. Most welcome the technology as a means of making their jobs more comprehensive and efficient. But in the hands of owners whose prime aim is profit, it can be insidious. Ken Doctor, a media analyst who writes for Nieman Lab, said: “The problem is the tools are being used by those who are primarily looking at cost-cutting. Actual journalism requires judgement.”

That judgement, from both the journalist and the consumer of news, is under greater strain with the threat of doctored photographs and deep fakes: videos altered using sophisticated editing and content-manipulation tools.

Recent examples of deep fakes include videos showing the two main candidates in the 2019 British election, Boris Johnson and Jeremy Corbyn, apparently telling voters to back their main rival; a clip in which Facebook boss Mark Zuckerberg seemingly admitted he was stealing users’ personal data; and visual ‘proof’ that innocent parties have committed atrocities.

Bill Posters from the Spectre Project, which works to highlight such misuses, said: “Democracy just doesn’t work if people don’t believe in it. Danger is likely to increase as long as politicians and tech companies remain unsure of how to deal with it.”

And Aviv Ovadya, from the Thoughtful Technology Project, another organisation working in the field, added: “Politicians escape scrutiny by saying ‘that deep fake video was not me.’” He called the effect ‘reality apathy’: people opt out of politics, saying they no longer believe in it.

They are just some of those battling against the insidious consequences of deep fakes and fake news. Another is Dr Sander van der Linden at Cambridge University, who is working on a fake news game.

It tests players, who must gain as many followers as possible without losing credibility, so the news they publish can’t initially be too ridiculous. Having been duped themselves, users then learn to question the information they are given. Van der Linden says the participants are fed what he calls small doses of mental antibodies to build up resistance to fake news and become their own “bullshit detector.”
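The trade-off the game turns on fits in a few lines. This is a toy model for illustration only, not van der Linden’s implementation: wilder claims attract followers faster but burn credibility, and the round ends once credibility is gone.

# Toy model of the followers-versus-credibility trade-off. Illustrative only;
# this is not the actual game's code or scoring.
def post_headline(state, sensationalism):
    """sensationalism runs from 0.0 (plausible) to 1.0 (absurd)."""
    state["followers"] += int(100 * sensationalism)
    state["credibility"] -= int(40 * sensationalism ** 2)
    return state

player = {"followers": 0, "credibility": 100}
for spin in (0.2, 0.5, 0.9, 0.9, 0.9, 0.9):
    if player["credibility"] <= 0:
        print("Game over: nobody believes you any more.")
        break
    player = post_headline(player, spin)
print(player)
# Ramping up the sensationalism gains followers quickly but drains credibility.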

There are several other organisations working along similar lines. InVid says: “the ease in which fake information spreads in electronic networks requires reputable news outlets to carefully verify third-party content before publishing it.”

It offers what it calls “a knowledge verification platform to detect emerging stories and assess the reliability of newsworthy video files and content spread via social media.” This involves networks of media outlets, academics and others checking and sharing the validity of material, as well as tools that enable processes such as reverse image searches, which allow the user to check the source and reliability of suspect photos and videos.
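To illustrate the reverse-image idea in general, rather than InVid’s own tool, here is a short Python sketch using the third-party Pillow and ImageHash libraries: two files showing the same picture, even after resizing or re-encoding, produce perceptual hashes only a few bits apart, while unrelated images do not.

# Illustrative perceptual-hash comparison, the idea behind reverse image
# checks. Generic technique only; this is not InVid's verification plugin.
# Requires the third-party packages Pillow and ImageHash.
from PIL import Image
import imagehash

def likely_same_image(path_a, path_b, threshold=8):
    """True if the two files look like the same picture despite re-encoding."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return (hash_a - hash_b) <= threshold  # Hamming distance between the hashes

# Example (file names are hypothetical):
# likely_same_image("suspect_social_post.jpg", "agency_archive_original.jpg")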

First Draft was founded in 2015 and helps implement measures to counter fake news. Its webpage lays out its basic plan:

  • Core: newsrooms that have staff dedicated to social monitoring and verification and have publicly listed standards policies and corrections protocols.
  • Academic: journalism schools and researchers in a variety of disciplines that work to understand and explain information disorder.
  • Technology: organizations that help to bring insights to the reporting and understanding of how information travels online.

First Draft’s advisory board includes representatives from human rights organisations, journalism, law, copyright law, cyber-security and politics, trying to fill what Bill Posters calls a “regulatory black hole” created by the rapid development of technology used by the media.

So, while journalists have swapped their manual typewriters for computers, their function remains much the same – to check their sources, to verify the validity of the information they’re using and to present as balanced an account as possible of the story they are covering.

After a period of (justified) panic over the damage fake news could do to democracy, civil society, many in the industry, journalists, politicians and others are beginning to fight back, developing their own tools and taking a more robust, questioning approach to where our information comes from, who is producing it and how.

Journalists must be better if they are to win and keep the trust of an increasingly cynical public. But the public too, faced with the possibility of trusting no-one and believing nothing, must be more rigorous in assessing the validity of what they read and watch.

Sources: 

First Draft: https://firstdraftnews.org/about/

First Draft Field Guide to fake news: https://firstdraftnews.org/project/field-guide-fake-news/

Full Fact: https://fullfact.org/blog/2018/jul/spot-misleading-images-online/

The Intercept (11/10/2019): https://theintercept.com/2019/10/11/digital-first-media-layoffs-outsourcing/

InVid: https://www.invid-project.eu/

InVid tool: https://www.invid-project.eu/tools-and-services/invid-verification-plugin/

Reuters Institute: https://reutersinstitute.politics.ox.ac.uk/risj-review/truth-behind-filter-bubbles-bursting-some-myths

2 thoughts on “AI and the media”

  1. I really enjoyed this practical approach based on your background as a journalist. So, as the public, we individuals need to be better at navigating today’s ocean of information.

  2. I think both the public and the journalists need to be better at discerning the veracity of information. The fast-changing technology presents both opportunities and challenges – and we’re only just beginning to discover what they are.
