The news, brought to you by bots

With help from Derek Robertson

Several humans touched today’s newsletter before you read it.

A reporter (that’s me) ran down and wrote up the facts. Two editors helped shape and polish the draft. And a third formatted it for your screen and gave it one last scan before hitting publish.

Producing the news is still a human-centric business, despite recent and widespread job losses, but that’s changing with the arrival of generative artificial intelligence.

For several years now, AI has been able to generate entire articles faster than a human journalist ever could. It can summarize government meetings, sports games and crime reports almost instantly. It can even, as one news outlet is attempting, spawn fake TV anchors. There have been some embarrassing mistakes, and most mainstream news organizations are being cautious — but given its sheer speed and scalability, AI is already a shadow hanging over human reporters and editors.

A different kind of worry is gripping media owners and businesspeople, who are starting to see AI as a force that could gut their fragile business model even faster than the Web or social media did — delivering fast news summaries to readers who never have to click on, or pay for, the underlying journalism.

The resulting fear about the future is already pushing news publishers and tech platforms into court battles and committee hearings focused on a question with existential impact for both sides of this argument: Should AI companies pay to use the news?

“The stakes couldn’t be higher,” Jim Albrecht, a former senior director of news ecosystem products at Google, wrote in The Washington Post this week. “On one side of the conflict sits existential risk for the publishing industry; on the other, existential risk for technological innovation.”

This fight has its roots in the Web 2.0 era. As platforms like Facebook, Google and Twitter became dominant news distributors, zapping publishers’ revenue in the process, the news media lobbied for laws that would force those platforms to pay for journalism. Some of those laws have passed; many others have not. And none of them has saved journalism.

Publishers now worry that generative AI will make matters even worse — once again, a technology piggybacking on their work that pulls readers away from it. Unlike social media, AI isn’t a platform to share existing articles and videos. It literally trains on media content, and then regurgitates that information into content of its own.

“These outputs compete in the same market, with the same audience, serving the same purpose as the original articles that feed the algorithms in the first place,” Danielle Coffey, president and CEO of News/Media Alliance, testified in the Senate last month.

So, what’s to be done? Some newsrooms are forming partnerships with AI developers. (POLITICO parent company Axel Springer has one with OpenAI, for instance.) Others are girding for battle, blocking bots from using their material and suing to defend their copyrighted work, most notably the New York Times.

This might actually be short-sighted, argues Marc Lavallee, director of technology product and strategy for the Knight Foundation’s journalism program. Lavallee sees AI as a key part of the foundation’s strategy to revitalize newsrooms, particularly small and local publications, by helping them to produce more journalism while keeping humans in the mix.

Because AI might be helpful, as well as damaging, he contends that simply demanding payment from AI developers puts the news industry in a “dangerous position.”

In an interview on today’s episode of the POLITICO Tech podcast, Lavallee points out that news organizations are also part of an ecosystem that depends on the flow of information, which they restrict at their peril: “This idea that news organizations are owed by everybody else for every single way that things were used, when they in turn build value off of fair use… feels like just a tricky, reductive approach,” he said.

Listen to the full interview and subscribe to POLITICO Tech on Apple, Spotify or Simplecast.

Lavallee previously spent a decade at the New York Times and oversaw the team tasked with applying emerging technologies to journalism. He suggests that news organizations should take a breath, and first learn what consumers actually want from journalism in the AI era. Only then can they fully understand the value that journalism brings to the technology — and the ways the technology will improve journalism.

That could include bots that help readers go deeper on current events, or generative tools that quickly deliver the same reporting in multiple formats. He sees “tremendous upside potential” for news organizations and their audiences alike.

“We don’t have a great model for what that looks like yet,” he added. “I think it’s going to take a handful of specific examples happening over the next year or two for us to even start to see what that looks like.”

In the meantime, Lavallee acknowledges things will be messy. The technology is likely to further hammer journalism jobs, and there will be more legal and legislative wrangling. But in the end, he predicts humans will remain central to journalism.

“We will see organizations do the worst of what our fears are here, of laying off humans and replacing them with little content-generating garbage bots,” he said. “My hope is that the market for that is going to wane, because the same sort of underlying enabling technologies, used in the right hands, ultimately with humans involved, ultimately makes for a better, more relevant product.”

The first U.S. presidential election in the age of AI is looming, with no easy fixes on the horizon.

POLITICO’s Mohar Chatterjee and Madison Fernandez reported this morning on the flood of AI-generated deepfakes and false information that has popped up already, and how elections officials are scrambling to fight it. The major AI firms already playing ball with the Biden White House on regulation have committed to prohibiting such uses of their tech, but watchdogs say the worst AI-generated political content is likely to come from the fringes.

“It’s going to be very difficult to regulate,” Rachel Orey, senior associate director of the Bipartisan Policy Center Elections Project, told Mohar and Madison. “Everyone talks about OpenAI and ChatGPT, but that’s unlikely where the most pernicious use cases are going to come from. They’re going to come from unregulated open-source technology.”

Some states have passed laws prohibiting the use of AI in political messaging, but as lawmakers have learned with past waves of technological innovation, it’s nigh-impossible to put the genie back in the bottle.

“The reality is the technology to create deepfakes is going to keep advancing, just as we’re advancing on the technology to catch it,” Ginny Badanes, senior director of Microsoft’s Democracy Forward initiative, told POLITICO. “It’s going to be an arms race forever.” — Derek Robertson

The European Union is putting its own safeguards in place against unauthorized sexual deepfake images after the circulation last week of some involving Taylor Swift.

POLITICO’s Clothilde Goujard reported yesterday on the deal the EU struck on a bill that would ban such content by 2027. Right now the EU’s General Data Protection Regulation, Digital Services Act and national defamation laws protect residents against such images or harassment, but this bill would explicitly criminalize AI-generated deepfakes.

“Our commitment to safeguarding the dignity and rights of women and girls in Europe has led to the criminalization of various forms of cyberviolence, such as the non-consensual sharing of intimate images, including deepfakes, cyberstalking, cyberharassment, misogynistic hate speech, and cyber-flashing,” European Commissioner for Equality Helena Dalli told Clothilde. — Derek Robertson