🐢 Open-Source Evaluation & Testing for AI & LLM systems
-
Updated
Jun 11, 2025 - Python
🐢 Open-Source Evaluation & Testing for AI & LLM systems
The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and engineers to proactively identify risks in generative AI systems.
Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪
AI Red Teaming playground labs to run AI Red Teaming trainings including infrastructure.
An offensive security toolset for Microsoft 365 focused on Microsoft Copilot, Copilot Studio and Power Platform
A powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify and mitigate potential jailbreaks in their LLM APIs.
🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions.
AI Red Teaming Range
LMAP (large language model mapper) is like NMAP for LLM, is an LLM Vulnerability Scanner and Zero-day Vulnerability Fuzzer.
This is my prompts for Lakera's Gandalf challenges
An Offensive Security Blog
Add a description, image, and links to the ai-red-team topic page so that developers can more easily learn about it.
To associate your repository with the ai-red-team topic, visit your repo's landing page and select "manage topics."