AI Risks Glossary
A Glossary of Emerging Risks in AI-Generated Code
Dependency Hallucination
LLMs frequently suggest plausible but non-existent packages (e.g., python-fast-parser-v2). Attackers can register these phantom package names on PyPI or npm, enabling supply chain attacks.
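One hedged way to catch this class of risk is to cross-check declared dependencies against the public PyPI JSON API: a name that returns 404 was likely hallucinated (or is available for an attacker to register). The sketch below assumes a requirements.txt-style file and the `requests` library; the function name `find_phantom_packages` is illustrative.

```python
import requests

def find_phantom_packages(requirements_path: str = "requirements.txt") -> list[str]:
    """Flag declared dependencies that do not exist on PyPI.

    A 404 from the PyPI JSON API is a strong hint that the name was
    hallucinated (or typosquattable) rather than a real package.
    """
    phantoms = []
    with open(requirements_path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            # Keep only the bare package name (drop markers, extras, version pins).
            name = line.split(";")[0].split("[")[0]
            for sep in ("==", ">=", "<=", "~=", ">", "<"):
                name = name.split(sep)[0]
            name = name.strip()
            resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
            if resp.status_code == 404:
                phantoms.append(name)
    return phantoms

if __name__ == "__main__":
    for pkg in find_phantom_packages():
        print(f"WARNING: '{pkg}' is not on PyPI -- possible hallucinated dependency")
```

Note that a 200 response only proves the name exists, not that it is trustworthy, so a check like this complements rather than replaces dependency review.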
Logic Hallucinations
Code that is syntactically valid but functionally incorrect for edge cases. LLMs suffer from "context blindness," violating project-wide invariants in order to complete local, file-level tasks.
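A hypothetical illustration of the failure mode: the function below parses, runs, and looks plausible, yet it mishandles even-length input and crashes on an empty list. The example is invented for this glossary, not taken from a specific model's output.

```python
# Hypothetical logic hallucination: the code runs and reads as reasonable,
# but silently mishandles an edge case.

def median(values: list[float]) -> float:
    """Intended to return the median, but the even-length case is wrong:
    it returns the upper-middle element instead of averaging the two
    middle elements, and it raises IndexError on an empty list."""
    ordered = sorted(values)
    return ordered[len(ordered) // 2]

print(median([1, 3, 5]))      # 3   -- correct for odd-length input
print(median([1, 3, 5, 7]))   # 5   -- should be 4.0
```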
Insecure Defaults
AI models frequently default to the path of least resistance; the sketch after this list contrasts each default with a safer alternative:
- Wildcard CORS permissions (*)
- Hardcoded secrets for "runnable" snippets
- Deprecated cryptography (MD5/SHA-1)
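A minimal sketch of the safer choices, assuming a Flask app; the allowed origin, environment variable name, and scrypt parameters are illustrative, not prescriptive.

```python
import hashlib
import os
import secrets
from flask import Flask, request

app = Flask(__name__)

# Insecure defaults an assistant might emit to make a snippet "just run":
#   response.headers["Access-Control-Allow-Origin"] = "*"   # wildcard CORS
#   API_KEY = "sk-test-1234"                                 # hardcoded secret
#   digest = hashlib.md5(password.encode()).hexdigest()      # broken hash

ALLOWED_ORIGINS = {"https://app.example.com"}   # explicit allow-list (example domain)
API_KEY = os.environ["API_KEY"]                 # secret from the environment; fail fast if unset

@app.after_request
def restrict_cors(response):
    # Echo the origin back only if it is on the allow-list.
    origin = request.headers.get("Origin", "")
    if origin in ALLOWED_ORIGINS:
        response.headers["Access-Control-Allow-Origin"] = origin
    return response

def hash_password(password: str) -> str:
    # Prefer a memory-hard KDF (argon2/scrypt/bcrypt); scrypt ships with the
    # standard library and is used here for brevity.
    salt = secrets.token_bytes(16)
    key = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt.hex() + ":" + key.hex()
```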
Tautological Tests
AI models readily generate tests that pass but assert nothing (e.g., assert true == true). These provide false confidence in code quality.
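A minimal unittest sketch showing a tautological test next to one that actually pins down behaviour; `apply_discount` and the test names are invented for illustration.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    return round(price * (1 - percent / 100), 2)

class TestDiscount(unittest.TestCase):
    def test_discount_tautology(self):
        # Always passes: it never touches apply_discount, so it proves nothing.
        self.assertTrue(True)

    def test_discount_checks_behaviour(self):
        # Pins down actual behaviour, including the zero-percent boundary.
        self.assertEqual(apply_discount(100.0, 25), 75.0)
        self.assertEqual(apply_discount(100.0, 0), 100.0)

if __name__ == "__main__":
    unittest.main()
```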
Architectural Erosion
AI agents lack "global context" and often violate architectural boundaries to achieve local goals. This leads to layer violations where frontend components directly import persistence models, bypassing the API layer.
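A self-contained Python sketch of the same pattern (the frontend-to-persistence example above would span languages, so a single-file layered service stands in); every module, class, and function name here is hypothetical.

```python
# Persistence layer -----------------------------------------------------------
class UserRecord:
    """Stand-in for an ORM model."""
    _rows = {1: {"id": 1, "email": "ada@example.com", "password_hash": "..."}}

    @classmethod
    def get(cls, user_id: int) -> dict:
        return cls._rows[user_id]

# API / service layer ---------------------------------------------------------
def fetch_user(user_id: int) -> dict:
    """The sanctioned entry point: strips internal fields before they leak upward."""
    row = UserRecord.get(user_id)
    return {"id": row["id"], "email": row["email"]}

# Presentation layer ----------------------------------------------------------
def get_user_handler_eroded(user_id: int) -> dict:
    # Layer violation: the handler queries the persistence model directly,
    # bypassing fetch_user and leaking the password hash.
    return UserRecord.get(user_id)

def get_user_handler_ok(user_id: int) -> dict:
    # Respects the boundary: only talks to the service layer.
    return fetch_user(user_id)

print(get_user_handler_eroded(1))  # includes password_hash -- invariant broken
print(get_user_handler_ok(1))      # {'id': 1, 'email': 'ada@example.com'}
```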
Geopolitical Bias
Research on models like DeepSeek-R1 shows that politically sensitive triggers can severely degrade the security of generated code. Mentions of certain topics produced 50% increases in vulnerability rates for industrial control system code.
Context Window Overflow
When prompts exceed the model's context window, earlier security requirements may be "forgotten," leading to inconsistent security posture across generated code.
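One mitigation is to budget the prompt explicitly and trim the expendable parts, rather than letting the window silently truncate the constraints stated first. The sketch below uses a rough 4-characters-per-token heuristic and an assumed 8,000-token budget; production code should use the provider's tokenizer and the model's documented limit.

```python
# Illustrative budget; real limits depend on the model in use.
CONTEXT_BUDGET_TOKENS = 8_000

def approx_tokens(text: str) -> int:
    return len(text) // 4  # rough heuristic, not a real tokenizer

def build_prompt(security_requirements: str, task: str, code_context: str) -> str:
    prompt = "\n\n".join([security_requirements, task, code_context])
    if approx_tokens(prompt) > CONTEXT_BUDGET_TOKENS:
        # Trim the *code context*, never the security requirements, so the
        # constraints stated first are not the part that silently falls out.
        overflow_chars = (approx_tokens(prompt) - CONTEXT_BUDGET_TOKENS) * 4
        code_context = code_context[: max(0, len(code_context) - overflow_chars)]
        prompt = "\n\n".join([security_requirements, task, code_context])
    return prompt
```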