

Here are 11 things that ChatGPT will refuse to do

ChatGPT is an amazing tool, a modern marvel of natural language artificial intelligence that can do incredible things. But with great power comes great responsibility, so ChatGPT developer OpenAI put some safeguards in place to prevent it from doing things it shouldn't. It also has some limitations stemming from its design, the data it was trained on, and the inherent constraints of a text-based AI.

There are, of course, differences between what GPT-3.5 can do compared to GPT-4, which is only available through ChatGPT Plus. Some of those things are just on hold while it develops further, but there are some things ChatGPT may never be able to do. Here's a list of 11 things that ChatGPT can't or won't do, at least for now.

It can’t write about anything after 2021


ChatGPT is built by training the language model on existing data. That includes Reddit posts, Wikipedia, and even board game manuals — yes, really. But that data had to have a cutoff point somewhere, and for ChatGPT, it’s 2021. For GPT-3.5, it’s around June 2021, whereas GPT-4 was trained on data up till around September 2021.

If you ask it about anything more recent, it will typically respond with its signature "As an AI language model…" disclaimer, explaining that it only has access to its training data, which for these models stops in 2021.
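The cutoff shows up the same way if you call the models through OpenAI's API instead of the chat interface. Here's a minimal sketch, assuming the older pre-1.0 ChatCompletion interface of the openai Python package; the API key and the example question are placeholders, and the exact wording of the refusal varies from run to run.

    # Minimal sketch: ask GPT-3.5 about a post-2021 event via the API.
    # Assumes the pre-1.0 `openai` Python package; key and question are placeholders.
    import openai

    openai.api_key = "YOUR_API_KEY"

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user", "content": "Who won the 2022 FIFA World Cup?"},
        ],
    )

    # The reply typically points back to the 2021 training cutoff
    # rather than answering the question.
    print(response.choices[0].message.content)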

It won’t get into political debates

The last thing OpenAI needs is politicians regulating it. It’ll probably happen, but until then ChatGPT is steering well clear of partisan politics. It can speak in generalities about parties, or discuss objective and factual aspects of politics, but ask it for a preference of one political party or stance over another, and it’ll either turn you down, or “both-sides” the discussion in as neutral a fashion as possible.

It (probably) won’t make malware

ChatGPT is excellent at programming, especially when given clear guidance, so OpenAI has safeguards in place to stop it from being used to make malware. Unfortunately, those safeguards are easily circumvented, and ChatGPT has been making malware for months already.

Image: ChatGPT refusing to discuss the future potential price of Bitcoin.

It can’t predict the future

Partly because of its limited training data, and partly because OpenAI wants to avoid liability for mistakes, ChatGPT cannot predict the future. It will take a guess if you jailbreak it first, but that sends accuracy nosediving, so view whatever response it gives you with skepticism.

It won’t promote harm or violence

War, physical violence, or even implied harm are all off the table as far as ChatGPT is concerned. It won’t be drawn into debates on the war in Ukraine, and will refuse to discuss or promote harm. It can talk about war or historical atrocities in great detail, but existing or ongoing conflict is a no-go.

It can’t search the internet

This is one of the biggest differences between ChatGPT and Google Bard. ChatGPT cannot search the internet in any way, while Google Bard was designed from the start to pull current information from the web.

If you want to use the same GPT-3.5 and GPT-4 language models as ChatGPT, but with live search, you can always use Bing Chat. It's basically ChatGPT integrated with Microsoft's Bing search engine.

It won’t promote hate speech or discrimination

Race, sexuality, and gender are emotionally charged topics that can easily tip into prejudice and discrimination. ChatGPT will skirt around them, leaning into a meta discussion or speaking in generalities. If pushed, it will outright refuse to discuss topics that it feels could promote hate speech or discrimination. For obvious reasons.

Image: ChatGPT refusing to discuss illegal activity.

It won’t promote illegal activities

ChatGPT is great at coming up with ideas, but it won't come up with illegal ones. You can't have it help you with your drug business, or highlight the best roads for speeding. Try, and it will simply tell you that it can't make any suggestions related to illegal activity. It will then typically give you a pep talk about how you shouldn't be engaging in such activities, anyway. Thanks, MomGPT.

It won’t swear

ChatGPT does not have a potty mouth. In fact, getting it to say anything even remotely rude is tricky. It can, if you use some jailbreaking tips to let it off the leash, but in its default configuration, it won’t so much as thumb its nose in anyone’s direction.

It can’t discuss proprietary or private information

ChatGPT's training data was all publicly available information, mostly found on the internet. That's super-useful for prompts and queries related to publicly available information, but it means that ChatGPT can't act on information it doesn't have access to. If you're asking it something based on privately held data, it won't be able to respond effectively, and it will tell you as much.

It won’t try to break its programming (unless you trick it)

Since ChatGPT launched, users have been trying to get around its limitations and safeguards. Because of course they have. Straight-up asking ChatGPT to circumvent its safeguards won't work. There are ways to trick it into doing so, though. That's called jailbreaking, and it kind of works. Sometimes.
