5 questions for ITIF’s Daniel Castro

Welcome to this week’s edition of The Future In Five Questions. Social media was big on the Washington calendar this week, but it’s hardly the only tech-policy debate swirling right now. This week I caught up on the full landscape with Daniel Castro, vice president of the Information Technology and Innovation Foundation, a tech-focused Washington think tank that has traditionally been friendly to the tech industry.

Over the past few years ITIF has had its hands full, to say the least, as concerns about antitrust, semiconductor competition and now artificial intelligence have put tech policy at center stage. He discussed the growing concern about U.S. cybersecurity, the urgent need for a comprehensive plan on AI and the extent to which he thinks Washington takes Silicon Valley for granted. An edited and condensed version of the conversation follows:

What’s one underrated big idea?

Cybersecurity and the way that AI can be used to address the major cybersecurity risks and challenges that we’ve faced for the last three decades or so.

The FBI came out this week with its statement about China infiltrating U.S. critical infrastructure, and we know that for a long time we’ve had challenges with our cybersecurity workforce. We don’t have enough people in jobs, and we don’t have enough skilled workers to address all the threats. When we look at what AI is able to do today and where it’s going, it’s replacing some skills, especially the lower-level skills of white-collar workers. Cybersecurity is a great example of where we need more of these workers, and many of these tasks are automated and routine — scanning code, looking for vulnerabilities, looking at network traffic and figuring out if there are anomalies. Those are all things AI is really good at.

There’s still a real disconnect between cyber policy and AI policy in the United States, and there’s not enough recognition that to do cybersecurity well we need to be leveraging AI.

What’s a technology that you think is overhyped?

As someone who has run the AR/VR Policy Conference for the last three years, I’m not going to say it’s overhyped, because I think spatial computing is really powerful, but I think augmented reality and virtual reality are just now opening up so many possibilities. The problem we have today is that the technology is still far enough away from where it’s going that there’s a disconnect for the average person who hears about it.

It’s similar to the early days of the Internet, where people were talking about the World Wide Web but it was still just a lot of text-based websites and communication, and it was hard to see past that curve to what was coming in the future. I don’t want to say it’s overhyped, and I think spatial computing will be important, but I think reasonably a lot of people are shrugging their shoulders a little bit and saying we’ve been here before.

What book most shaped your conception of the future?

Larry Lessig’s work around “code is law.”

This concept demonstrates the ways in which technology is interconnected and fundamentally intertwined with society. You can’t separate those two things, the decisions that are made in designing technology and the impact it has on people’s lives. That’s incredibly important when we think about where we’re going as a society with technology and policy. In all these cases the design of these systems ultimately has a tremendous impact on what is possible, and the types of incentives, interactions, and environments that people will be working and playing and learning in.

What could the government be doing regarding technology that it isn’t?

More than 10 years ago we had a national broadband plan. There was a huge amount of recognition that broadband was necessary for Americans to succeed and thrive and be competitive. We need the same thing for AI.

We need to recognize that the deployment of AI across virtually every sector should have a huge mandate from government, and that it will require government support. It’s not something that will just happen in industry, or be industry-led. Whether we’re widely deploying AI in healthcare, education or transportation services, that’s going to require a significant investment from the government and also policy that works hand in hand with innovators and the private sector to figure it out. That urgency is what I think is missing in the policy debates right now around AI: there’s so much more focus on risk and how we make sure things don’t go wrong, and not enough focus on how we make sure things go right.

What has surprised you the most this year?

I don’t know how much this was shaped by Wednesday’s hearing, but I’m a little surprised by how much both sides of the aisle in Congress showed hostility towards American businesses that are leading innovators producing some of the highest paying jobs and the technologies that will keep America globally competitive for the next 10 years.

They’re producing the technology that will potentially give the United States an edge over China. There have always been different policy fights and different alliances that shift and change over time, depending on the issue. But in the past there was less of an “us vs. them” mentality, and less hostility. When you see hearings where elected officials tell companies “you have blood on your hands,” and make other similar accusations that I think are not backed up by evidence, it’s hard to see how those parties come together and work in a collaborative way.

I’m still hopeful that once these conversations happen behind closed doors there can be more reasonable voices and discussions, because that’s essential for the type of collaboration we need to see on U.S. tech policy.

It finally happened: The European Union has agreed on a definitive text for its AI Act.

POLITICO’s Gian Volpicelli reported this morning on the agreement, which followed months of negotiations and a touch-and-go final phase in which the bloc’s largest countries threatened to sink the bill over investment and innovation concerns. Now the bill will go forward to a vote, with reassurances to Austria, France, and Germany that the bloc will make “formal declarations” about its commitment to encouraging the growth of the European AI industry.

A spokesperson for Germany’s digital minister Volker Wissing, a prominent critic of the bill’s text as it approached passage, told Gian they “asked the EU Commission to clarify that the AI Act does not apply to the use of AI in medical devices.”

The AI Act will create a “risk-based” classification system for AI systems that restricts their applications, imposes reporting requirements and requires transparency over the technical details of the most advanced “foundation models.”

Speaking of Europe: The European Union’s antitrust chief sat down with POLITICO’s Steven Overly yesterday to chat about Europe’s roadmap for AI as her term comes to an end.

On the POLITICO Tech podcast, Vestager homed in on — what else — competition as a major focus for European regulators, saying “The choice should not be American or American” when it comes to procuring AI tools.

Vestager also defended the “risk-based” approach the bloc has taken in developing the strictures of the AI Act, saying it’s essential in balancing safety with the opportunity for companies to serve Europe’s high level of demand for public-sector AI tools.

“If you’re a mayor, or a minister, or a government who would [otherwise] say ‘no, this is too risky, I don’t want to be accused of discriminating against my citizens,’ you would not have that approach” of being fearful of AI deployment given the safeguards in the AI Act, Vestager said, adding that the goal of the act is to save the strictest regulations for areas “where something essential is at stake.”