If you only skim headlines, AI security looks like a churn of new acronyms. The part that matters for builders is simpler: the community is finally naming two different attack surfaces under one roof. The OWASP Gen AI Security Project now tracks both LLM and GenAI application risks (the LLM Top 10 for 2025 line of work) and, separately, Top 10 risks for agentic applications (2026). Same stewardship, two graphs.
I treat that split as a signal, not a shopping list. Chat-only flows and long-running agents share DNA, but the failure modes stop lining up when tools, memory, and delegation enter the room.
What the LLM top ten still screams
The current OWASP GenAI material still opens with prompt injection: crafted input that steers the model off policy. Their LLM01 write-up is the canonical place to read the mechanics. Nothing in 2026 "solves" the core issue at the architecture level: instruction-like text and untrusted content still ride the same channel into the model.
Next items on the same list are the usual suspects with clearer names: sensitive data leaking through answers, poisoned data paths, output passed unchecked into shells and browsers, and "agency" without enforceable brakes. If you squint, it is the same guidance security people have repeated since tool use became normal, just with PDFs that match the product.
Why agentic gets its own ten
The agentic Top 10 landing page describes the scope plainly: autonomous agents that plan, call tools, and hand work off across workflows, reviewed by a large pool of practitioners. The risks stop being "bad text in, bad text out" and start including tool misuse, identity and permission drift, poisoned memory, shaky agent-to-agent channels, and cascading retries that look fine in metrics until finance calls.
That matches what I see in code reviews. A loop that "works" on golden prompts turns into an operational incident when a connector credential is too broad or when retrieved docs carry hidden instructions.
NIST in the same conversation
Standards bodies are not a substitute for unit tests, but they give you shared vocabulary with risk and compliance. NIST's AI Risk Management Framework (voluntary, January 2023) is still the baseline document set. For generative systems specifically, NIST published AI 600-1, the Generative AI Profile in July 2024 to map GenAI risks into the RMF structure. Use it when you need a neutral reference in a cross-functional review, not when you need to block a CVE.
Where I do the actual work
Frameworks sort risks; engineering closes holes. For tool boundaries, logging, and treating untrusted content as hostile, use the same moves as production tool use: narrow credentials, redacted logs, and assume pasted or retrieved text can carry instructions. The OWASP/NIST layer helps you explain why those controls exist to someone who does not live in the trace. Your job is still to enforce them in CI, not to laminate the PDF.
Trend noise will stay loud. The quieter signal is structural: two top tens, one stubborn injection class, and more blast radius whenever an agent can act without a human in the path. Build for that, and the acronyms sort themselves out.