Self-service tools - Snyk: The unbearable weight of full-stack ownership
This is a guest post for the Computer Weekly Developer Network, written by Randall Degges, head of developer relations & community, Snyk.
Degges writes in full as follows…
Let’s be honest: being a developer today feels less like building software and more like being a Swiss army knife with a caffeine dependency.
You’re not just writing code; you’re the architect, the plumber, the janitor and the bouncer for your application.
You own the feature development, the bug fixes, the deployment pipeline, the performance monitoring, the cloud infrastructure… and security.
The modern software development lifecycle (SDLC) has thrust developers into the driver’s seat for everything. Application Security (AppSec) teams, bless their souls, have transitioned from gatekeepers to orchestrators and visibility providers. They set the guardrails, manage the central scanning platforms and then essentially shout “fire!” when a vulnerability is detected, trusting the developer to be the one who puts it out.
Herein lies the rub. You’re juggling a dozen high-priority tasks. Adding ‘be a security guru’ to your to-do list is often the thing that gets pushed to tomorrow. It’s not malice; it’s cognitive overload. When a critical feature release is due, a deployment is failing, or performance is tanking, security falls into the ‘I’ll fix it later’ bucket.
… and we all know where that bucket ends up: the backlog graveyard.
The AI era
We’re now in the AI era, moving from post-mortem to pre-mortem security.
The good news is that we now live in a world where intelligent agents can help shoulder some of this cognitive burden. The AI coding assistant, be it Copilot, Cursor, Windsurf, or a custom LLM in your CLI, is becoming an indispensable pair programmer.
But for security, we need to transition this helper from a suggestion engine to a security enforcer.
This is where we move past simply asking an AI, “Hey, is this code secure?” (a question often met with confident, yet hilariously wrong, hallucinations) and move toward a reliable, agentic loop. We need to take the best feature of generative AI, its powerful human-language instruction and execution loop, and marry it to highly reliable, deterministic security tooling.
Next we come to the agentic security loop: security + LLM.
Into the loop
The core concept is to create an Automated Security Agent that runs in the background of your coding environment, enforcing security by default. The pattern is simple, yet revolutionary:
Step 1: Reliable detection (the source of truth)
Instead of relying on an LLM’s guess, we can use tried-and-true, best-in-class Static Application Security Testing (SAST) tools. For example, a good engine is constantly updated, integrates with AI agents via the Model Context Protocol (MCP) and can accurately flag issues in custom code, open-source dependencies, containers and infrastructure as code.
The trick is to give your AI coding tool direct, authenticated access to the analysis server or its CLI. This makes your security element the single source of security truth for the AI.
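To make that concrete, here is a minimal sketch in Python of how an agent wrapper might treat the scanner as its source of truth. It assumes the Snyk CLI is installed and authenticated locally and keys off the CLI’s exit codes rather than any particular results format; the function name and return shape are illustrative, not part of any Snyk API.

```python
import subprocess


def run_snyk_code_scan(project_dir: str) -> tuple[bool, str]:
    """Run a Snyk Code scan and return (is_clean, raw_findings_json)."""
    # Exit codes follow the documented Snyk CLI convention:
    # 0 = no issues, 1 = issues found, anything else = the scan itself failed.
    result = subprocess.run(
        ["snyk", "code", "test", "--json", project_dir],
        capture_output=True,
        text=True,
    )
    if result.returncode == 0:
        return True, ""  # clean: nothing for the agent to do
    if result.returncode == 1:
        return False, result.stdout  # findings; hand the raw JSON to the agent
    raise RuntimeError(f"Snyk scan did not complete: {result.stderr}")
```

The point of the wrapper is that the answer to “is this code secure?” never comes from the model; it comes from a deterministic scan the model merely triggers.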
Step 2: Custom instructions (the AI’s mandate)
This is the secret sauce. You provide a robust, custom instruction set to your AI coding agent (your system prompt or custom instructions, whatever your tool calls it) that defines the agent’s security mandate. This instruction might look something like this:
> “Security Mandate: After a file is modified and saved, you *must* automatically run a Snyk scan using the provided API access. If a vulnerability is detected (e.g. a Snyk Code or Snyk Open Source finding, or any other issue), you are *mandated* to generate a fix based *only* on the Snyk-provided security context and vulnerability details. Once the fix is applied, you *must* run a final Snyk scan to *verify* the issue is resolved and that no new, unrelated vulnerabilities were introduced. If the fix fails verification, try a different approach and repeat the process until the scan is clean.”
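For editor-based assistants, this text lives in whatever custom-instructions mechanism the tool provides. For the “custom LLM in your CLI” case, the same mandate can simply become the system prompt. Here is a hedged sketch using the OpenAI Python SDK; the model name, the trimmed-down mandate text and the helper name are illustrative only.

```python
from openai import OpenAI

# A trimmed-down version of the mandate above; a real setup would use the full text.
SECURITY_MANDATE = (
    "After a file is modified and saved, you must run a Snyk scan using the "
    "provided tooling. If a vulnerability is detected, generate a fix based only "
    "on the Snyk-provided context, then re-scan to verify before finishing."
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_agent(user_request: str) -> str:
    """Send a request to the LLM with the security mandate as the system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SECURITY_MANDATE},
            {"role": "user", "content": user_request},
        ],
    )
    return response.choices[0].message.content
```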
Step 3: Execution and verification (the agent’s grit)
When you modify a file, the AI agent, guided by your mandate, springs into action:
- Code Change Detected: you finish writing that new ‘getUserData’ function.
- Scan Executed: the agent automatically calls the security scanner via its CLI or MCP server.
- Vulnerability Found: the scanner returns a high-confidence finding, such as SQL injection or a vulnerable dependency.
- Fix Generated: the AI, using tool context and a recommended fix, rewrites the code (e.g., implements parameterised queries or updates the dependency).
- Re-Scan and Verify: the agent runs the scanner again. If the vulnerability is gone and the code is clean, the agent commits the change and notifies you. If the vulnerability persists or a new one pops up (a glorious hallucination gone wrong!), the agent *loops* back to the Fix Generated step and tries a different fix. A minimal sketch of this loop follows.
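Stitched together, the loop itself is only a few lines. This sketch reuses the run_snyk_code_scan() and ask_agent() helpers from the earlier sketches; the attempt cap, the “rewrite the whole file” simplification and the escalation to a human are assumptions for illustration, not the behaviour of any particular product.

```python
MAX_ATTEMPTS = 3


def secure_change_loop(project_dir: str, changed_file: str) -> bool:
    """Scan, fix, re-scan until clean; return False if the agent gives up."""
    for _attempt in range(MAX_ATTEMPTS):
        is_clean, findings = run_snyk_code_scan(project_dir)
        if is_clean:
            return True  # verified: nothing remaining, nothing new introduced

        # Hand the deterministic findings to the LLM and apply its rewrite.
        # (Simplified: a real agent would apply a targeted diff, not a full rewrite.)
        with open(changed_file) as f:
            current_source = f.read()

        fixed_source = ask_agent(
            "Fix the vulnerabilities reported below in this file, changing "
            "nothing else. Return only the full corrected file contents.\n\n"
            f"File: {changed_file}\n\nCurrent contents:\n{current_source}\n\n"
            f"Snyk findings:\n{findings}"
        )
        with open(changed_file, "w") as f:
            f.write(fixed_source)

    return False  # still failing after MAX_ATTEMPTS: escalate to a human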
The best of both worlds
This agentic security loop gives you a massive win:
- You get deterministic reliability: the core detection of a real security issue is handled by an expert, deterministic tool. No guessing games.
- You leverage generative power: the iterative, problem-solving ability of the LLM is used to generate and correct code based on reliable data.
- Developers stay focused: the burden of remembering to run a scan, read a report and then figure out the fix is automated away. Security is woven directly into the fabric of your code writing.
In short, we stop treating security as a post-deployment disaster to be cleaned up and start treating it as a first-class, automatically enforced property of the code, allowing developers to finally focus on what they do best: shipping great features.
The AI era isn’t only changing how we write code; it’s also changing how we guarantee code safety.

