In the Eddy AI chatbot, certain legitimate cybersecurity-related terms, such as the word “attack”, are flagged by the moderation layer, causing the chatbot to return an “Unexpected Error” message. This occurs even when the queries are clearly contextual and related to defensive, educational, or simulation-based use cases.
For cybersecurity-focused organizations, terminology like “attack,” “attack simulation,” or “build an attack scenario” is a core part of product documentation and user workflows. Improving the moderation logic to better understand context and reduce false positives would significantly enhance the chatbot’s usefulness for security and compliance-driven industries.
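Eddy AI’s moderation internals are not public, so the following is only a minimal illustrative sketch of what “context-aware” filtering could mean here: a flagged term should not block a message when the surrounding text clearly signals a defensive, educational, or simulation use case. All names (`should_block`, `FLAGGED_TERMS`, `SECURITY_CONTEXT_TERMS`) are hypothetical and not part of any existing API.

```python
import re

# Hypothetical term lists; a real system might use a classifier instead.
FLAGGED_TERMS = {"attack"}
SECURITY_CONTEXT_TERMS = {
    "simulation", "scenario", "penetration test", "red team",
    "defensive", "training", "compliance", "tabletop exercise",
}

def should_block(message: str) -> bool:
    """Block only when a flagged term appears with no benign security context."""
    text = message.lower()
    has_flagged = any(
        re.search(rf"\b{re.escape(term)}\b", text) for term in FLAGGED_TERMS
    )
    if not has_flagged:
        return False
    # Allow the message if surrounding context indicates legitimate use.
    has_context = any(ctx in text for ctx in SECURITY_CONTEXT_TERMS)
    return not has_context

if __name__ == "__main__":
    print(should_block("Help me build an attack simulation scenario"))  # False: benign context
    print(should_block("attack"))                                       # True: no context
```

Even a simple allowlist of contextual phrases like this would let queries such as “build an attack scenario” pass while still catching bare, contextless uses of flagged terms; a production system would more likely rely on a trained intent classifier rather than keyword lists.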