Q&A

How AI Is Already Transforming the News Business

An expert explains the promise and peril of artificial intelligence.

A screen displaying the logo of Bard AI, a conversational artificial intelligence software application developed by Google, and ChatGPT.

The news business is falling apart and here comes AI to finish the job — at least that’s what some worry. AI isn’t the first and is surely not the last technology to upset the journalistic status quo. The telegraph and then the telephone allowed reporters to file dispatches from blocks or thousands of miles away. The Linotype machine obliterated the labor-intensive craft of hand-setting type in the 1880s. Radio moved news instantaneously. Computers replaced the Linotype, displacing thousands of skilled production workers.

The early vibrations of AI have already been shaking the newsroom. One downside of the new technology surfaced at CNET and Sports Illustrated, where editors let AI run amok with disastrous results. Elsewhere in news media, AI is already writing headlines, managing paywalls to increase subscriptions, performing transcriptions, turning stories into audio feeds, discovering emerging stories, fact-checking, copy-editing and more.

Felix M. Simon, a doctoral candidate at Oxford, recently published a white paper about AI’s journalistic future that eclipses many early studies. Writing from a stance that is neither doomer nor utopian, Simon weighs both the downsides and the promise of AI’s introduction into the newsroom and the publisher’s suite.

Unlike earlier technological revolutions, AI is poised to change the business at every level. It will become, if it isn’t already, the starting point of most story assignments and, for some, the new assignment editor. Used effectively, it promises to make news more accurate and timely. Used frivolously, it will spawn an ocean of spam. Wherever the production and distribution of news can be automated or made “smarter,” AI will surely step up. But the future has not yet been written, Simon counsels. AI in the newsroom will be only as bad or good as its developers and users make it.

This interview was conducted over Zoom and online and has been edited for length and clarity.

Your report maintains that far from being a technology of the future, AI is already in the newsroom.

It’s not just in the newsroom, it’s in news organizations more broadly, and newsrooms are one part of that. If you consume news on a phone or computer and get any kind of article recommendation, in most cases that is AI and machine learning. End users are less aware of AI’s use in the production of news, such as discovering information in very large datasets, as we saw with the Panama Papers, where it helped investigative reporters find stories in big reams of data. Or something fairly common, like recording me and then having AI transcribe it.
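For readers curious what that transcription step looks like in practice, here is a minimal sketch using OpenAI’s open-source Whisper library. The model size and audio file name are illustrative assumptions, not details from Simon’s own workflow.

```python
# Minimal transcription sketch with the open-source Whisper library
# (pip install openai-whisper; requires ffmpeg on the system).
import whisper

model = whisper.load_model("base")          # small, CPU-friendly checkpoint
result = model.transcribe("interview.mp3")  # hypothetical recording file
print(result["text"])                       # the full transcript as a string
```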

Other specific uses?

The Daily Maverick in South Africa feeds original long-form content into AI systems that spit out summaries in bullet-point form. In the days before AI, you would have had a journalist write that summary and those bullet points. Another example: you can have articles read to you by an AI-generated voice. Instead of reading on your phone, or if you’re visually impaired, having a synthetic voice read to you becomes possible with the help of the technology, and also possibly at scale.
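The Daily Maverick has not published its pipeline, but the general pattern is simple enough to sketch. The following hypothetical example uses the OpenAI Python client; the model name and prompt wording are placeholder assumptions.

```python
# Sketch of the general LLM-summarization pattern (not The Daily Maverick's
# actual pipeline). Assumes the OpenAI Python client; model name is a
# placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_to_bullets(article_text: str, max_bullets: int = 5) -> str:
    """Ask an LLM to compress a long-form article into bullet points."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": f"Summarize the article into at most {max_bullets} "
                        "concise bullet points. Do not add information."},
            {"role": "user", "content": article_text},
        ],
        temperature=0.2,  # keep the output close to the source text
    )
    return response.choices[0].message.content
```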

What sort of road map for the adoption of AI in newsrooms do you see? Do you see expanded use in crunching big data sets? Grinding out formula stories like corporate earnings reports? AI illustrations? AI copy desks?

It will be an ongoing adoption, but at different speeds, owing to the individual needs, capabilities, cultural mindsets and resources of different news organizations and newsrooms. So an organization with a strong footprint in data journalism will likely use AI to drive more or deeper coverage on that front, whereas a newsroom focused mainly on churning out superficial fluff pieces will likely use it to that end.

The uses that will cut across organizations are those all newsrooms share in some way. AI transcription is already standard, and everyone needs to copy edit or reformat content for different products and channels, or illustrate and visualize content, so these uses will increase. The same goes for content recommendation and dynamic paywalls. These were already on the rise, and I would be very surprised if they did not increase in the future. What is going to be really interesting, and most difficult to predict, are the more creative ways newsrooms will find to serve their audiences with the help of AI: not just good journalism that people will actually want to consume (and pay for), but also formats that especially younger or currently underserved audiences will find appealing.
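Publishers’ dynamic-paywall models are proprietary, but the underlying idea can be sketched: a model scores each visit for subscription propensity, and the site decides whether to meter or gate. Everything below, including the features and the threshold, is invented for illustration.

```python
# Toy "dynamic paywall" sketch: a propensity score stands in for a trained
# ML model. Features, weights and threshold are invented for illustration.
from dataclasses import dataclass

@dataclass
class Visit:
    articles_read_this_month: int
    is_returning_reader: bool
    came_from_newsletter: bool

def subscription_propensity(v: Visit) -> float:
    """Toy linear scoring stand-in for a trained model."""
    score = 0.05 * v.articles_read_this_month
    score += 0.3 if v.is_returning_reader else 0.0
    score += 0.2 if v.came_from_newsletter else 0.0
    return min(score, 1.0)

def should_show_paywall(v: Visit, threshold: float = 0.5) -> bool:
    # High-propensity readers see the paywall sooner; casual visitors
    # keep sampling free articles to build the habit first.
    return subscription_propensity(v) >= threshold
```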

What can human journalists do that AI can’t?

Things like gaining someone’s trust and building up a connection to a source, maybe over months, in some cases over years. It might not even lead anywhere in the beginning, and then at one point you call them up and they say, I have a piece of information for you. That’s not something any AI system can do at the moment, because it relies on human interaction and building rapport over a long period of time. It’s not something you can do by typing a prompt into ChatGPT. You have to have boots on the ground, people with eyes and ears going around and seeing what’s happening.

How will AI improve journalism?

That AI will improve journalism is not a foregone conclusion. If managers and editors decide to use it to help reporters do their work in a better way, that would be a quality improvement. But the decision has to be made to use the technology for that end. It’s not something that happens automatically.

The opposite is also possible if you’re a news organization mainly interested in reaching lots of people but not necessarily interested in quality journalism, as was the case in the recent CNET example [in which the site published shoddy AI-written pieces]. It basically comes down to what news organizations decide to do with it.

Publishers might use it to boost quantity, not quality?

Any increase in efficiency tends to generate an increase in resource consumption. AI can allow journalists to spend more time on the really valuable tasks. You can automate transcription with AI and make summaries with AI. But instead of giving you time to do more in-depth work, your editor might have you write 10 stories a day instead of five, because the technology speeds you up. It’s not necessarily all driven by technology, even though technology enables these different scenarios.

How should journalists relate to the current AIs? As a knowledgeable colleague? A sometimes reliable source? Or a lousy intern who fakes most assignments given to him?

The answer to this question depends on the kind of AI system we are talking about, as there is no single AI at work in news organizations. Instead, AI is best understood as an assemblage of different techniques, systems and approaches used in different places and for different tasks. I also try not to anthropomorphize them: They are computer systems. Admittedly very good ones, but they are not human.

But if we whittle this down, for the sake of argument, to journalists and how they make use of chatbots such as Bard, Claude or ChatGPT, caution is advised. The underlying large language models are nondeterministic by nature, so you can get different outputs even with the same prompt, and they are prone to errors. Depending on the task, this is more or less of a problem. I would not blindly trust them if I were searching for information, but they are rather good at tasks such as copy-editing or summarizing. In any case, I’d always double-check.
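That nondeterminism is less a bug than a property of how these models decode text: each output token is sampled from a probability distribution, usually shaped by a “temperature” setting. Here is a toy sketch of the mechanism, with an invented three-word vocabulary standing in for a real model’s tens of thousands of tokens.

```python
# Toy demonstration of why "same prompt, different output" happens:
# decoding samples from a probability distribution over next tokens.
import random

next_token_probs = {"reliable": 0.5, "useful": 0.3, "flawed": 0.2}

def sample_token(probs: dict[str, float], temperature: float = 1.0) -> str:
    # Temperature reshapes the distribution: near 0 it approaches greedy
    # (deterministic) decoding; higher values flatten it toward uniform.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights)[0]

# Run it twice with the same "prompt" and you may get different words:
print(sample_token(next_token_probs))
print(sample_token(next_token_probs))
```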

One pipe dream of the early internet era was a “Daily Me” news site tailored to individual tastes. That never came to pass. Will AI build it, or is a news product that other people also have more desirable than a customized one?

Ha! Well, in some ways, there is a bit of the “Daily Me” already, right? Recommendation systems driving things like Apple News or Google News use machine learning, in other words AI, to tailor content to individual interests — and increasingly, publishers around the world are doing the same in their apps or on their websites. So we are going down that route already.

In terms of how this is received, the picture is somewhat ambiguous. There are academic studies indicating that news recommenders are viewed as being as fair and useful as human editors, while others show that people are generally skeptical of any kind of recommendation and tailoring, and that they fear missing out on things other people get to see. What we do know is that those with higher trust in news and institutions are more likely to be happy with automated recommendations tailored to their interests. Meanwhile, news organizations’ automatic tailoring and recommending of news can sometimes clash with editors’ desire to set agendas through the stories they place. It’s a mixed picture without a clear answer.
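For the curious, the content-based flavor of such a recommender can be sketched in a few lines: represent articles and a reader’s history as term vectors and rank by similarity. This is a toy illustration with invented data, not any publisher’s actual system.

```python
# Toy content-based recommender: TF-IDF vectors plus cosine similarity.
# Articles and reader history are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = [
    "Central bank raises interest rates to fight inflation",
    "Local team wins championship after dramatic final",
    "New AI model writes headlines for regional newspapers",
]
reader_history = "reader clicked on stories about AI and newsroom technology"

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(articles + [reader_history])

# Compare the reader profile (last row) against every article.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for score, title in sorted(zip(scores, articles), reverse=True):
    print(f"{score:.2f}  {title}")
```

Real systems add collaborative signals, freshness and editorial constraints on top, but the ranking-by-similarity core is the same idea.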

One of your findings is that AI will reinforce existing inequalities among news outlets, with well-resourced ones outracing the less resourced.

If you are a larger news organization, you have the time to invest in research and development, attract and retain talent, and build a customized AI. If you’re a small organization, you’re more a technology taker than a technology maker. That’s one way we see winners and losers.

The big news organizations, like the New York Times and Axel Springer [owner of POLITICO], can engage in direct negotiations with Microsoft and Google. This is not the case with the Oxford Mail or the Offenbach Post, which was my local newspaper in Germany. We’ve seen this power imbalance emerging in recent weeks with the squabbles over questions of copyright and the use of news content as training data. If you are larger, you have an advantage because you can afford to go to court [editor’s note: like the New York Times].

Is it likely that Big Tech will be able to use its power to cement control over news information?

In many ways, it already has, if you look at the way news is distributed by Google and Meta. There is already a sort of dependency. AI is already used to sort and curate information on the platforms of those very companies and will be rolled out more deeply. If you are a technology-taker rather than a technology-maker, you are dependent on cloud computing infrastructure from places like Microsoft. They hold all the cards if they decide to raise prices or change the conditions of licensing and access deals. You get the short end of the stick in many ways.

For many users, AI is a black box whose workings they don’t understand. What problems does that present for newsrooms that become AI-dependent?

If you don’t know where the information comes from, that can create problems. It could expose you as a journalist, or your organization, to copyright infringement or plagiarism. If you don’t quite know how these systems work, when they work, where they fail and what exactly they do, that can create problems in the journalistic process, of course, and in the way we consume information.

******

I rely on un-artificial intelligence less and less every day. Send IQ points to [email protected]. No new email alert subscriptions are being honored at this time. My Twitter and Threads accounts welcome their robot masters. My dead RSS feed is a Luddite.