InjectAI - An Automated Tool for Prompt Injection
Ashwin S
Dept. of Computer Science Engineering, Panimalar Institute of Technology, Chennai, India Ashwin200323@gmail.com
Mohammed Sarfaraz
Dept. of Computer Science Engineering, Panimalar Institute of Technology, Chennai, India mohammedsarfaraz2003@gmail.com
Lokesh S B
Dept. of Computer Science Engineering, Panimalar Institute of Technology, Chennai, India lokeshshoffl@gmail.com
Nithish Kumar K S
Dept. of Computer Science Engineering, Panimalar Institute of Technology, Chennai, India Nithishneyamar16@gmail.com
Bala Abirami B
Dept. of Computer Science Engineering, Panimalar Institute of Technology, Chennai, India bala.bami@gmail.com
Abstract—
The growing deployment of Large Language Models (LLMs) across applications demands effective defenses against prompt injection attacks, in which attackers craft inputs that manipulate model responses, bypass security protocols, and extract sensitive information. InjectAI is an automated command-line penetration testing tool that probes web-based LLM chatbots for prompt injection vulnerabilities. It systematically generates and injects prompts using multiple attack strategies: static injection, rule-based mutation, response-based adaptation, token manipulation, context injection, and grammar obfuscation. The tool sends dynamically generated prompts to LLM-based interfaces over HTTP, substituting each prompt for a predefined placeholder (PRMT) in the request template, then analyzes the responses to identify vulnerabilities and records successful injection attacks. This paper presents the design of InjectAI, its attack methods, and its evaluation process for detecting prompt injection threats. It also discusses how automated security testing affects LLM safety and outlines approaches to strengthen AI model robustness.
Keywords— Prompt Injection, Large Language Models (LLMs), AI Security, Web-Based AI Penetration Testing, Automated Exploitation, NLP Security, Prompt Engineering Vulnerabilities.
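The abstract's core mechanism, substituting generated attack prompts for a predefined PRMT placeholder in an HTTP request template, can be sketched as follows. This is a minimal illustration only: the request-template format, field names, and endpoint here are assumptions, not InjectAI's actual interface.

```python
import json
import urllib.request

PLACEHOLDER = "PRMT"  # predefined placeholder replaced with each generated prompt

# Hypothetical request template; a real target's payload format may differ.
template = {"message": f"Translate this sentence: {PLACEHOLDER}"}

def build_payload(template: dict, prompt: str) -> bytes:
    """Substitute the generated attack prompt into every string field of the template."""
    filled = {
        key: value.replace(PLACEHOLDER, prompt) if isinstance(value, str) else value
        for key, value in template.items()
    }
    return json.dumps(filled).encode("utf-8")

def send_prompt(url: str, prompt: str) -> str:
    """POST the filled template to a web-based chatbot endpoint and return its reply."""
    request = urllib.request.Request(
        url,
        data=build_payload(template, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return response.read().decode("utf-8")
```

In this sketch, `build_payload` performs the placeholder substitution and `send_prompt` delivers it over HTTP; the response string would then be scanned for indicators of a successful injection.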