Now that the Biden administration has finally issued its long-awaited executive order on AI, it’s worth considering just how tricky it will be for the lumbering bureaucracy to keep up with a technology that can quite literally teach itself how to change and adapt to its environment.
POLITICO’s Daniel Payne tackled that very subject in a report published over the weekend on how AI products are hitting doctors’ offices and hospitals across the country without the rigorous testing the government usually requires for new medical tools. That poses a big problem when the unanswered questions surrounding privacy, bias and accuracy in AI are applied to life-and-death situations.
Suresh Venkatasubramanian, a Brown University computer scientist who helped draft last year’s Blueprint for an AI Bill of Rights, told Daniel, “There’s no good testing going on and then they’re being used in patient-facing situations — and that’s really bad.” Even one CEO of an AI health tech company expressed concern that users of his software “would start just kind of blindly trusting it.”
Troy Tazbaz, the director of the Food and Drug Administration’s Digital Health Center of Excellence, acknowledged that the FDA (which falls under the Department of Health and Human Services) needs to do more to regulate AI products even as they hit the market, saying doing so might require “a vastly different paradigm” from the one in place now. Daniel writes that Tazbaz foresees “a process of ongoing audits and certifications of AI products, hoping to ensure continuing safety as the systems change” — echoing the type of “agile” regulatory framework proposed in a recent book by former Federal Communications Commission Chair Tom Wheeler.
The White House’s new executive order, in the meantime, looks to spur the agencies forward. As POLITICO’s Mohar Chatterjee and Rebecca Kern wrote in their exclusive look at the order published Friday, it directs HHS to develop a “strategic plan” for AI within the year. HHS is also directed to determine whether AI meets the government’s standards when it comes to drug and device safety, research, and promoting public health, and to evaluate and mitigate the risk that AI products could discriminate against patients.
According to the draft of the executive order, HHS’s AI “Task Force” will be responsible for the “development, maintenance, and use of predictive and generative AI-enabled technologies in healthcare delivery,” “taking into account considerations such as appropriate human oversight of the application of AI-generated output.” It’ll also be required to monitor AI performance and outcomes as it would any other product’s, develop a strategy to encourage AI-assisted discovery of new drugs and treatments, and help determine the risks of AI-generated bioweapons.
The EO doesn’t necessarily set new rules around AI safety and privacy concerns in health care, but rather sets a plan in motion to create them. (Eventually. Maybe.) But even with this most sensitive — and highly regulated — of policy subjects, experts and lawmakers are skeptical about when or whether actual legislative action might be taken once that plan is in place.
Brad Thompson, an attorney at Epstein Becker & Green who counsels companies on AI in health care, told Daniel the legislative “avenue just isn’t available now or in the foreseeable future.” Rep. Greg Murphy (R-N.C.), co-chair of the House’s Doctors Caucus, said state government should take the lead.
So in the meantime, it’ll be up to the executive order-mandated “task force” to suggest rules for how to deploy and regulate AI tools in health care most responsibly, and to the medical sector to follow them. As Divyansh Kaushik, associate director for emerging technologies and national security at the Federation of American Scientists, told Mohar ahead of the order’s publication today: “Issuing the EO is the first step. Now comes the budget battle, then comes the implementation.”
Drone warfare is transforming the battlefield in Europe, bringing major ramifications for the rest of the world.
POLITICO’s Veronica Melkozerova reports on how $400 commercial drones are besting the Russian military in Ukraine, similar to how Hamas militants have leveled the technological playing field in Israel and Gaza. Using drones with parts largely manufactured in China — something that’s led that country to restrict exports to both sides of the Ukraine war — Ukrainian pilots can learn to fly first-person-view kamikaze drones effectively in less than a month.
“FPV drones are effective tools for destroying the enemy and protecting our country. The Ministry of Defense is doing everything possible to increase number of drones,” Ukraine’s Defense Minister Rustam Umerov said in a statement Wednesday.
They’re so effective, in fact, that the director of the drone training academy outside Kyiv predicts they could be coming soon to national capitals across the continent.
“It is almost impossible to shoot it down,” Pavlo Tsybenko told Veronica. “Only a net can help. And I predict that soon we will have to put up such nets above our cities, or at least government buildings, all over Europe.”
And now, some AI news from the other side of the pond: POLITICO’s Vincent Manancourt reported today for Pro subscribers on what to expect at this week’s long-awaited United Kingdom AI summit.
Wednesday will feature a discussion among a wider group of countries, and Thursday will gather leaders from select nations to discuss the potential threats of AI to global security. Attendees will include Vice President Kamala Harris, European Commission President Ursula von der Leyen, United Nations chief António Guterres and Italy’s Prime Minister Giorgia Meloni, among others.
The summit will also feature representatives from the private sector, although there’s an element of exclusivity there too: The first day will include a wider group of companies, with Palantir, Faculty, Salesforce, Hugging Face, Cohere, Conjecture and Stability AI among nine others, while day two will narrow to Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI, as well as Elon Musk’s xAI and France’s Mistral.
And what’s actually expected to happen? Vincent writes that U.K. Prime Minister Rishi Sunak is hoping for four main outcomes: A statement on AI risk that the U.S. and China can both get behind; an agreement to create a network of AI researchers similar to the Intergovernmental Panel on Climate Change; plans for an AI Safety Institute; and commitments from AI companies to build on the voluntary safety commitments they made to the Biden White House.
- Gannett is denying claims that it used AI to write product reviews.
- How good is AI at meeting the standard benchmarks for a financial adviser?
- Investors largely believe in AI, but they aren’t going all in yet.
- Could personalized brain implants help with disorders like OCD?
- Meta’s head of AI research talks about the delicate balance of open-source models.
Stay in touch with the whole team: Ben Schreckinger, Derek Robertson, Mohar Chatterjee, Steve Heuser, Nate Robson and Daniella Cheslow.
If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.