Specific examples of ChatGPT's prohibited words and the risks of their use

chro Team

Understanding prohibited words is very important when using ChatGPT.

In this article, we focus on the risks that can arise if you actually use prohibited words, the consequences they may have, and the points users should keep in mind. Before covering those risks and consequences, however, it is important to understand what kinds of words are prohibited in the first place.


Types of words prohibited in ChatGPT

ChatGPT has a mechanism that prevents it from answering questions that include certain words. This restriction exists to avoid ethical issues and social risks. The main categories are as follows.

Discriminatory remarks or hate speech

ChatGPT will reject discriminatory comments regarding race, gender, religion, disability, etc., or comments that attack specific groups. This includes discriminatory slang and language that insults certain ethnic groups.

【example】

  • Words that insult specific ethnic groups or races, such as "chong" or "nigger"

  • Sexist statements such as “Women should do housework”

  • Slurs that demean people with disabilities

Questions related to violence/crime

Questions regarding violence or criminal activity will also be rejected. This is to prevent harm to users or the promotion of illegal activities.

【example】

  • "How to hit someone"

  • "Planning a bank robbery"

  • "How to attempt suicide"

  • "How to illegally download music and movies for free"

Sexual content/obscene information

Sexually explicit or obscene content is also prohibited. This is to prevent inappropriate information from being provided to young people or other users.

【example】

  • "Detailed description of sexual acts"

  • "Information about illegal prostitution"

  • "Sexual content related to minors"

Questions regarding personal information and privacy

Questions that would violate someone's personal information or privacy will also be rejected. This is an important measure to protect users' safety and privacy.

【example】

  • “Tell me ○○’s address.”

  • “How to identify who is behind a social media account”

Questions regarding illegal drugs and dangerous substances

Questions regarding illegal drugs or dangerous substances will be rejected by ChatGPT. This is to prevent users from engaging in dangerous behavior.

【example】

  • "How to obtain stimulants"

  • "How to make a bomb"

  • "How to use and take illegal drugs"

Keywords related to the spread of disinformation and rumors

ChatGPT also refuses to answer questions about spreading disinformation and rumors. This is to prevent the spread of false information.

【example】

  • “How to create fake news”

  • "How to spread rumors"

Questions about political propaganda and extremist ideology

ChatGPT cannot answer questions regarding extremist political ideology or propaganda. Such questions are rejected to avoid harmful social effects.

【example】

  • "Radical political claims"

  • "Remarks supporting terrorism"

  • "How to spread a particular political ideology"

Medical misinformation and dangerous practices

ChatGPT also refuses to answer questions about dangerous medical practices or incorrect treatments. This is because these can be harmful to your health.

【example】

  • Dangerous medical procedures, such as "how to perform surgery or treatment without qualifications"

  • Unproven or incorrect treatments

“You need to understand what words are prohibited in ChatGPT!”

What are the risks you face when using prohibited words?

Rejection by AI

If you enter a prohibited word, ChatGPT will reject the request. For example, if you request violent or inappropriate content, you will receive a response such as, "We are sorry, but we cannot respond to such requests." This kind of rejection wastes your time, because you cannot get the information you need. You may have to rephrase the question and try again, which hinders efficient use.

Risk of account suspension

ChatGPT and the services built on it may suspend the accounts of users who repeatedly use prohibited words. In particular, be aware that repeated violations of the terms may restrict your account, and that continued violations carry the risk of permanently losing access.

Risks that adversely affect a company's or individual's reputation

ChatGPT is increasingly being used in business settings. If you enter content that falls under the prohibited words, inappropriate output may be generated as a result. In particular, soliciting discriminatory, offensive, or violent language can seriously damage a company's or brand's reputation.

A damaged reputation not only reduces credibility but also erodes customer trust. Companies are therefore expected to exercise particular caution when using AI.

Legal risk: potential violation of regulations

Legal risks also exist around prohibited words. Requesting content that promotes illegal drugs, violence, or other illegal activities may cause legal problems. If the AI generates unintended content and that content is disseminated illegally, the user who requested it could be held legally responsible.

For example, asking about criminal activity or the unauthorized collection of personal information may raise legal issues, so it is very important to consider your own legal exposure.

Risk of misunderstanding and misuse

AI cannot always understand the full context. If you use prohibited words, you may therefore trigger unintended consequences: a question containing violent or discriminatory expressions may produce output that is later misused without anyone intending it.

Malicious third parties may use generated content to facilitate criminal activity, and misunderstanding or unintended misuse can cause problems. Misinterpretation of AI-generated content can also become a serious social problem.

Long-term risks of using banned words

Using prohibited words may not cause visible problems right away, but it can lead to major unexpected consequences, such as stricter regulation in the future or a decline in your social reputation. Let's take a closer look at the specific risks.

Future regulation tightening

AI technology evolves day by day, and regulations and laws change with it. Rules around prohibited words may be tightened further in the future. In that case, prohibited words you used in the past, preserved in your usage history, may come to be considered problematic.

For example, if AI usage history is tracked in the future, past requests for inappropriate content could cause legal problems. As laws governing the use of AI evolve, it is important for users to always understand the latest regulations and be prepared to respond flexibly.

Ethical considerations as self-defense

When using ChatGPT and other AI tools, you should also avoid prohibited words from an ethical perspective. Consider whether the content you are looking for is truly socially appropriate, and what impact it has on others. Words have great power and can cause discomfort without your realizing it, so stay ethically aware at all times.

Practical advice to avoid banned words

  • Devise alternative expressions

    To avoid banned words, try using other words with the same intent. When asking specific questions or requests, using softer language or other phrasing can help you avoid potential problems.

  • Clarify your purpose

    By clearly communicating what your purpose is, you can get useful information without including inappropriate content. If your intent is clear, it is easier for the AI to give an appropriate answer.

  • Regularly check the terms of use

    Be sure to regularly review ChatGPT's Terms of Use and updated banned word list to stay up to date with the latest regulations.

Summary

The use of prohibited words involves various risks, and AI-generated content can cause unexpected misunderstandings and problems. To protect both corporate use and personal safety, it is important to avoid prohibited words and to use AI ethically and appropriately. To use AI tools correctly and safely, understand the risks of prohibited words and act carefully.