Mitigating Prompt Injection Attacks in ChatGPT: Safeguarding AI Conversations from Harmful Manipulation

By: Vajratiya Vajrobol, International Center for AI and Cyber Security Research and Innovations (CCRI), Asia University, Taiwan


In the context of ChatGPT, prompt injection attacks are attempts by malicious actors to trick the AI model into producing offensive or dangerous content. Although ChatGPT is a powerful language model developed by OpenAI, it is not impervious to abuse. These attacks take many forms, but they typically exploit the model’s propensity to generate text conditioned on whatever input it receives. The following describes common prompt injection attacks in ChatGPT and offers mitigation techniques for them:

1. Creation of Inappropriate Content:

   Attackers may enter prompts containing sexual, offensive, or otherwise inappropriate content in order to deceive the AI into producing harmful output. This can be used to harass other users or to generate offensive text.

   Mitigation: OpenAI and other platform operators use content screening and moderation to stop inappropriate content from being generated and published [1].
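   As a simplified illustration of such screening, the sketch below pre-filters prompts against a blocklist of patterns before they ever reach the model. The pattern list and function name are hypothetical; production systems rely on trained classifiers (such as moderation APIs), not static keyword lists.

```python
import re

# Hypothetical blocklist for illustration only; real moderation
# systems use trained classifiers rather than static keyword lists.
BLOCKED_PATTERNS = [
    r"\bhate\b",
    r"\bkill\b",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt passes screening, False if it is blocked."""
    lowered = prompt.lower()
    return not any(re.search(pattern, lowered) for pattern in BLOCKED_PATTERNS)
```

   A prompt that fails screening would be rejected before generation, which is why this kind of check is considered a pre-processing technique.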

2. Trickery and Manipulative Prompts:

   Malicious individuals may craft prompts intended to trick the AI model into producing inaccurate or misleading information. This could be used to promote scams, disseminate false information, or conduct phishing campaigns [2].

   Mitigation: Build the system to identify and refuse to generate content that appears manipulative, dishonest, or potentially hazardous.

3. Injecting Hate Speech and Bias:

   Attackers may input prompts designed to provoke or exploit the model’s known biases, leading to the generation of hateful or biased content [3].

   Mitigation: OpenAI and other developers strive to reduce bias in AI models and offer prompt design guidelines that discourage the inclusion of harmful or biased content.

4. Abuse and Destructive Conduct:

   Some users might take advantage of the model’s text-generating capabilities by using prompts to bully, threaten, or abuse others [4].

   Mitigation: Platforms typically enforce community guidelines and moderation procedures to detect and curb abusive and harassing conduct.

Developers and platform operators use a combination of pre- and post-processing techniques to reduce prompt injection attacks in ChatGPT:

– Content Filtering: Deploy content filters that can recognize and block offensive or dangerous content [5].

– User Reporting: Encourage users to report offensive or abusive content so that moderators can review it and take appropriate action [6].

– Review and Moderation: Use both automated systems and human moderators to review and flag objectionable content [7].

– Reinforcement Learning: Use reinforcement learning with human feedback to penalize outputs that deviate from accepted norms, teaching the model to avoid producing offensive or dangerous content [8].

– User Education: Inform users about how to use AI technologies ethically and responsibly, and provide tools for reporting abuse.
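Taken together, these measures can be pictured as a small moderation pipeline: automated filtering runs first, user reports feed a queue, and flagged items await human review. The sketch below is purely illustrative; the class, method names, and blocklist are assumptions, not any platform’s actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ModerationPipeline:
    """Illustrative pipeline: automated filter + user reports + human review queue."""
    # Hypothetical blocklist; real systems use trained classifiers.
    blocked_words: set = field(default_factory=lambda: {"slur", "threat"})
    review_queue: list = field(default_factory=list)

    def auto_filter(self, text: str) -> bool:
        """Automated pre-screening: True if the text is allowed through."""
        return not any(word in text.lower() for word in self.blocked_words)

    def handle_report(self, text: str, reporter: str) -> None:
        """User reporting: queue reported content for human moderators."""
        self.review_queue.append({"text": text, "reported_by": reporter})

    def pending_reviews(self) -> int:
        """Number of items awaiting human review."""
        return len(self.review_queue)

pipeline = ModerationPipeline()
pipeline.auto_filter("hello world")                 # passes automated screening
pipeline.handle_report("suspicious scam message", reporter="user42")
```

The point of the design is layering: automated checks catch the obvious cases cheaply, while the report queue routes ambiguous cases to human judgment.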

It’s crucial to remember that while these countermeasures can lessen the impact of prompt injection attacks, continued research and development of AI models and systems is necessary to increase their resistance to harmful inputs and to offer a safer, more responsible AI experience.


  1. Yu, H. (2023). Reflection on whether Chat GPT should be banned by academia from the perspective of education and teaching. Frontiers in Psychology, 14, 1181712.
  2. Sison, A. J. G., Daza, M. T., Gozalo-Brizuela, R., & Garrido-Merchán, E. C. (2023). ChatGPT: More than a weapon of mass deception, ethical challenges and responses from the Human-Centered Artificial Intelligence (HCAI) perspective. arXiv preprint arXiv:2304.11215.
  3. Borji, A. (2023). A categorical archive of ChatGPT failures. arXiv preprint arXiv:2302.03494.
  4. Alizadeh, M., Kubli, M., Samei, Z., Dehghani, S., Bermeo, J. D., Korobeynikova, M., & Gilardi, F. (2023). Open-Source Large Language Models Outperform Crowd Workers and Approach ChatGPT in Text-Annotation Tasks. arXiv preprint arXiv:2307.02179.
  5. Derner, E., & Batistič, K. (2023). Beyond the Safeguards: Exploring the Security Risks of ChatGPT. arXiv preprint arXiv:2305.08005.
  6. Skjuve, M., Følstad, A., & Brandtzaeg, P. B. (2023, July). The User Experience of ChatGPT: Findings from a Questionnaire Study of Early Users. In Proceedings of the 5th International Conference on Conversational User Interfaces (pp. 1-10).
  7. Hacker, P., Engel, A., & Mauer, M. (2023, June). Regulating ChatGPT and other large generative AI models. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (pp. 1112-1123).
  8. Shi, J., Liu, Y., Zhou, P., & Sun, L. (2023). BadGPT: Exploring Security Vulnerabilities of ChatGPT via Backdoor Attacks to InstructGPT. arXiv preprint arXiv:2304.12298.

Cite As:

Vajrobol V. (2023) Mitigating Prompt Injection Attacks in ChatGPT: Safeguarding AI Conversations from Harmful Manipulation, Insights2Techinfo, pp.1
