Is the AI-Doomsday Narrative Overhyped?

Why AI isn't as scary as pro-regulation legislators make it out to be.

The following is an interview with James Czerniawski, a Senior Policy Analyst for Technology and Innovation with Americans for Prosperity. Follow James on X at JamescZ19. This interview was edited for clarity and conciseness.

Ari: Many journalists and politicians do not understand the nuances of legislating and reporting on tech issues. What are the biggest misconceptions regarding technology, AI, and deep fakes?

James: AI has been used in an internal-facing capacity for well over a decade. Go back to Google DeepMind and some other projects and see how companies have been using it. The NFL, for example, has been using big data powered by artificial intelligence to inform the things it's working on.

ChatGPT and these other tools represent a consumer-facing iteration of a technology that's been around for some time; it's only now developed enough for general use.

I'm focused on how Congress and reporters misunderstand the timeline: what AI is capable of and what it's not. A lot of people look at AI and think it's going to create a Terminator, doomsday kind of situation. That's far from a given. You can get all the top experts in a room and ask them what the timeline is for something like that.

Some say 10 years, some say 30, some say 50, some say 100. There's no consensus on what that future looks like for AI, and a lack of consensus is not a good way to walk into the conversation.

Plus, President Biden watched the Mission: Impossible movie, at least part one. That shaped some of his thinking in developing his executive order on artificial intelligence. The whole point of Hollywood and movies is that you're supposed to suspend disbelief, not buy into it and let it shape real-world policy. A lot of the views shaped in Congress around this tech aren't based in reality.

This is the first election in which AI technology is available to everyone. What impact does that have on public trust? Half the time, online videos carry a little disclaimer saying they are AI-generated. How will the advancement of AI play out in a mass democracy, especially as the election draws closer?

When it comes down to it, you need to educate people. As we enter an increasingly digital world, you should always trust but verify. Use multiple sources. Do not assume something is true just because you saw it somewhere.

Right now, the ecosystem is set up so that people are over-indexing on skepticism. For example, the Joe Biden robocall situation in New Hampshire was a good thing.

Not the actual incident in and of itself; the response was good. Within 24 hours of it happening, everybody had come to the consensus that the call was likely generated by an AI voice tool. Within three weeks, the FCC issued a declaratory ruling and identified who was responsible, and it is considering levying a fine against that individual.

The good news is that the system worked as intended. There certainly are some reasons for concern, but we have to be very careful when talking about how to address this from a regulatory perspective, to avoid the same problems we worried about with social media censorship. There are definitely trade-offs here.

To your point that people are over-indexing and expecting some things to be fake, they might be looking at things a little bit more critically. We've posted real videos on our social media, and we've had thousands of comments saying, “This is AI, this is not real.”

Real events that are happening — people don’t believe they are real. Isn’t that a dangerous side effect of the AI revolution?

That's a very valid concern. It's both a blessing and a curse that people are hyper-aware that there might be fake, augmented, or artificial information out there.

More broadly speaking, when something is authentic and people try to hand-wave it away as fake, that's harder to navigate. I've had conversations with folks where I literally show them the authenticity data of the underlying file to prove it's legitimate information, and they do not understand, or care for that matter.

That's certainly something we want to be cognizant of. It's a lot like redirecting: linking out to other sources and increasing literacy amongst the population, more broadly speaking, will make it easier for people to be skeptical in a healthy way.

This is where news organizations have an opportunity to reorient themselves and rebuild the trust people have lost in them over the years. The loss of trust in mainstream media systems and reporting outlets is a very well-documented data point, and this moment actually presents a good opportunity for news organizations to correct that trend.

Organizations have to be as transparent and open as humanly possible, linking out to multiple other sources wherever they can. From a government perspective, and from the policy side of the equation, how do we increase literacy amongst people? And how do we make sure people are educated and aware enough to be healthily skeptical but not jaded?

Most solutions to this issue point to a literacy program run by the federal government, or to media institutions using this opportunity to rebuild trust. But trust in the federal government and the media has never been lower. What happens if we can't rely on these institutions to fill that gap?

We believe in people, so we think it starts there. There has to be a locally oriented solution and more community engagement. One fascinating area is the community notes that have been slapped on numerous tweets from across the political aisle. One study showed that when community notes were attached, people were actually less inclined to engage with some of the more radical, genuinely harmful content.

But it also helps people become familiar with, and appreciate, the process of vetting information. We've had some good community notes get slapped on the White House's account and on some Republican accounts. People understand and leverage those kinds of features.

From a platform perspective, it will be fascinating to see whether other platforms experiment with that kind of process. There's definitely something promising there in terms of people building up trust, because that trust isn't necessarily going to come from any individual media outlet.

We are seeing the rise of independent journalists who fill some of that void, but it's not a silver bullet. We need a multi-pronged approach where the government works on increasing literacy, independent journalists and bloggers do their thing, and platforms experiment with other solutions, so people can trust the information in front of them a little more readily.

You're describing a positive side to this. Being more skeptical about the information that's thrown at you, and having community notes, are good things, given that so much of the information online has not been reliable for the past decade. People were just blindly accepting all of it.

One of the fun parts of the conversation is watching people become willing to question what they're seeing. For the administration, historically, that's been tied to its reporting on inflation and the economy. The economy is doing so well, right? Just last week, Axios had a piece showing that 55% of people think there's a recession even as the economy is supposedly doing fantastic. Meanwhile, people are facing a 20% rise in prices on their core goods.

It's a little disingenuous to call that reporting. That's an opportunity: you see the rise of a Matt Taibbi, you see the rise of The Free Press with Bari Weiss, right? I have more faith in that than in asking the government to step in and address the concerns surrounding deep fakes and AI.

The reality is that a lot of these proposals, like the ones the Senate was considering two weeks ago, would have basically converted the Federal Election Commission into a ministry of political truth. They would have empowered politicians with a regime where they could sue first and ask questions later. And they wouldn't have been tied just to political ads; they would have covered issue ads and any AI-generated content depicting a candidate.

That's very problematic for content creators and online accounts. At what point is someone's speech going to be impacted simply because AI helped generate it? An incumbent can afford that fight; your general everyday person cannot. You're going to get dragged into court to litigate whether the content was authentic, covered under parody or satire, or simply wasn't straight fact. There are a whole host of reasons to be skeptical of that.

The FCC followed up with a regulation last week to mandate those disclosures in both political and issue ads. We're going to have government officials in the administration, or in the administrative state, asking platforms to take down posts that might include AI-generated content related to COVID misinformation, issues surrounding the 2020 election, or similar topics.

We're going to have the exact same problems that plagued the administration and led to Murthy v. Missouri, which is now before the Supreme Court, except with AI. That's not a net positive, because it will undermine trust in the technology and in the government, more broadly speaking, and it will infringe upon our core fundamental rights in the process. It's an untenable situation. We have to be very careful for those reasons.

Many politicians are making this seem like a doomsday situation because it gives them the public approval to pass laws they may have wanted for a long time but haven't had the opportunity to enact. What red flags should people watch for in how the government is about to respond to deep fakes and AI?

Think about how this is going to undermine our free and fair press. How is it going to undermine your ability to express yourself online? If you want to run for office, and we certainly need more good people running for office, how is this going to impact you?

All of a sudden, you're struggling to raise money as someone running for elected office. Now you have to spend money on a lawyer to advise you on whether you need certain disclaimers. That puts you at a disadvantage, because you should be focusing all your resources on building a grassroots movement to support your efforts to get into office.

I understand why people have these concerns, but we should not let fear of this technology drive us to give up our fundamental civil liberties just for a little bit of security. We have to avoid that outcome, because the government only knows one way to operate, and it always ratchets up whenever these instances play out, time and time again. We have to keep in mind how this will impact you, the people you interact with, and the online content you consume.

I would hate to see independent journalists, bloggers, podcasters, and content creators, more broadly speaking, get screwed over because we were worried about deep fakes. I trust people, and I trust our ability to innovate around these issues, enough that I don't want the government wedging itself in here. If there's a reason for hope, it's that the companies are doing what they can to fill those gaps right now. Google, Meta, and Microsoft all have policies governing whether and how AI-generated content is allowed in political ads. The private sector is already coming through with some solutions.

Google recently unveiled a new program that looks at the metadata of underlying content to see whether it was AI-generated and applies flags accordingly. We're going to iterate our way around this, and that's a lot faster and better than the government. If we ask the government to do it, there's a chance the rules will be overbroad. They'll get used like a blunt instrument, and they'll undermine the technology and its ability to deliver on its massive promise and potential.
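To give a sense of what a metadata check like that involves, here is a minimal Python sketch. It is purely illustrative: the interview doesn't describe Google's actual mechanism, and the helper name and marker strings are assumptions for the example. Real provenance systems verify cryptographically signed manifests (such as the C2PA standard) rather than doing the naive string matching shown here.

```python
# Illustrative sketch only: checks whether a file's own metadata
# self-reports AI generation. "trainedAlgorithmicMedia" is IPTC's
# digital-source-type term for generative-AI output; treating it as a
# plain substring is a simplification for this example.
from PIL import Image  # pip install Pillow

AI_MARKERS = ("trainedalgorithmicmedia", "c2pa")

def metadata_suggests_ai(path: str) -> bool:
    """Return True if the file's metadata self-identifies as AI-generated."""
    img = Image.open(path)
    # EXIF tag 0x0131 is "Software"; some generators identify themselves here.
    software = str(img.getexif().get(0x0131, ""))
    # Pillow exposes the raw XMP packet for some formats (bytes for JPEG);
    # IPTC's DigitalSourceType value lives in XMP when present.
    xmp = img.info.get("xmp", b"")
    if isinstance(xmp, bytes):
        xmp = xmp.decode("utf-8", errors="ignore")
    haystack = (software + xmp).lower()
    return any(marker in haystack for marker in AI_MARKERS)

if __name__ == "__main__":
    print(metadata_suggests_ai("example.jpg"))  # placeholder path
```

The obvious limitation is the point: plain metadata can be stripped or edited, which is why industry provenance efforts center on signed manifests rather than labels like these.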

Is the 2024 election going to be defined by artificial intelligence?

I honestly don't think so. It will get used, but I don't think it will be the defining characteristic of the 2024 election. This election, like most elections, is going to be about the candidates, where they stand on different issues, and how people are feeling. Polls have shown, for the entirety of Joe Biden's administration, what the number one and two issues are.

They care about the economy and inflation. Artificial intelligence is not even remotely up there. It's definitely not even a top-25 issue.

Will it impact the election in terms of how people go out to vote? Will AI be utilized in a way that could, if the margin is small, shift the election one way or the other?

I don't think so. Historically, when we've seen an October surprise, those shock-value moments, the data doesn't suggest they move voters all that much.

Will it have an impact? In terms of shock value, perhaps, but in terms of actually moving voters, I don't think so. People have their opinions and will cast their ballots accordingly. I don't think AI will change votes on the margins much as we head into November.
