Most people distrust AI and want regulation, says new survey

Most American adults do not trust artificial intelligence (AI) tools like ChatGPT and worry about their potential misuse, a new survey has found. It suggests that the frequent scandals surrounding AI-created malware and disinformation are taking their toll and that the public might be increasingly receptive to ideas of AI regulation.

The survey from the MITRE Corporation and the Harris Poll found that just 39% of the 2,063 U.S. adults polled believe that today’s AI tech is “safe and secure,” a drop of nine percentage points from when the two firms conducted their last survey in November 2022.

When it came to specific concerns, 82% of people were worried about deepfakes and “other artificially engineered content,” while 80% feared how this technology might be used in malware attacks. A majority of respondents worried about AI’s use in identity theft, harvesting personal data, replacing humans in the workplace, and more.

In fact, the survey indicates that wariness of AI’s impact cuts across demographic groups: 90% of boomers and 72% of Gen Z members say they are worried about the impact of deepfakes.

Although younger people are less suspicious of AI — and are more likely to use it in their everyday lives — concerns remain high in a number of areas, including whether the industry should do more to protect the public and whether AI should be regulated.

Strong support for regulation

The declining trust in AI tools has likely been prompted by months of negative stories in the news concerning generative AI tools and the controversies facing ChatGPT, Bing Chat, and other products. As tales of misinformation, data breaches, and malware mount, it seems that the public is becoming less amenable to the looming AI future.

When asked in the MITRE-Harris poll whether the government should step in to regulate AI, 85% of respondents were in favor of the idea, up three percentage points from the previous survey. The same 85% agreed with the statement that “Making AI safe and secure for public use needs to be a nationwide effort across industry, government, and academia,” while 72% felt that “The federal government should focus more time and funding on AI security research and development.”

The widespread anxiety over AI being used to improve malware attacks is interesting. We recently spoke to a group of cybersecurity experts on this very topic, and the consensus seemed to be that while AI could be used in malware, it is not a particularly strong tool at the moment. Some experts felt that its ability to write effective malware code was poor, while others explained that hackers were likely to find better exploits in public repositories than by asking AI for help.

Still, the increasing skepticism toward all things AI could end up shaping the industry’s efforts and might prompt companies like OpenAI to invest more money in safeguarding the public from the products they release. And with such overwhelming public support for regulation, don’t be surprised if governments start enacting AI rules sooner rather than later.
