
How AI code generation is pushing DevSecOps to machine speed

Organisations should adopt shared platforms and automated governance to keep pace with the growing use of generative AI tools that are helping developers produce code at unprecedented volumes

The growing use of generative artificial intelligence (GenAI) tools for coding is transforming software engineering practices, with developers now building continuous integration and continuous deployment (CI/CD) pipelines that generate code at scale and rapidly pushing it into production.

As a result of this automation, said Tom Scully, principal architect for government and critical infrastructure at Palo Alto Networks in Asia-Pacific and Japan, engineers are increasingly sitting “on the loop” rather than “in the loop”.

“Agents write code, QA [quality assurance] processes try to catch mistakes, and everything happens at a volume and pace that challenges traditional QA and governance,” said Scully.

According to Palo Alto Networks’ State of cloud security report 2025, 53% of organisations are now deploying code at least weekly, while 17% do so daily.

“That volume stresses QA pipelines and shortens the window to deploy new offerings,” Scully observed. Furthermore, the report found that 85% of respondents feel security hinders the delivery of software releases.

GenAI is a force multiplier for software development, especially when paired with experienced developers who know how to write prompts and check the outputs. But that level of experience is scarce, Scully said, so the question is how to use AI-powered tools while ensuring governance and having tooling across the DevSecOps pipeline that can validate outputs as if a senior developer had reviewed them.


It is not that DevSecOps practices are failing; rather, week-long handoffs are becoming a thing of the past as deployment and infrastructure provisioning become fully automated. Consequently, security teams are under immense pressure to validate and integrate everything into the security operations centre (SOC) quickly.

“The security and operations teams need runtime visibility and the ability to trace issues back to the CI/CD pipeline,” said Scully. “If security, SOC, CI/CD and infrastructure teams operate on a shared platform with governance controls embedded, you can bring inspections, policy and automation together, and move at machine speed.”

Palo Alto Networks offers such a platform, with Prisma and Cortex to secure code-to-cloud flows and AI runtime security, including Model Context Protocol (MCP) protection and automated red teaming, while providing end-to-end visibility.

Scully doesn’t suggest that every security problem can be left to runtime. “You need secure coding policies, QA and posture checks integrated into the pipeline. Tools should validate code, deployment YAMLs and cloud configurations before deployment. Then, runtime controls and red teaming ensure things that slip through are detected and remediated. It’s about defence-in-depth – checks at authoring, pre-deploy, deploy-time and runtime.”
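The layered checks Scully describes can be sketched as a simple staged gate, where each pipeline stage runs its own validations and any finding blocks promotion. This is a minimal illustration, not a real scanner: the stage names follow the article, but the check functions and the secret markers they look for are hypothetical placeholders.

```python
# Minimal sketch of defence-in-depth gating: each stage runs its own
# checks, and any finding blocks promotion to the next stage.
from typing import Callable

Check = Callable[[str], list[str]]  # a check returns a list of findings


def no_hardcoded_secrets(artifact: str) -> list[str]:
    # Illustrative markers only; real scanners use far richer rules.
    findings = []
    for token in ("AKIA", "BEGIN RSA PRIVATE KEY", "password="):
        if token in artifact:
            findings.append(f"possible secret marker: {token!r}")
    return findings


def no_privileged_containers(artifact: str) -> list[str]:
    return ["privileged container requested"] if "privileged: true" in artifact else []


STAGES: dict[str, list[Check]] = {
    "authoring":  [no_hardcoded_secrets],
    "pre-deploy": [no_hardcoded_secrets, no_privileged_containers],
    "deploy":     [no_privileged_containers],
}


def gate(artifact: str) -> tuple[bool, list[str]]:
    """Run every stage's checks; block at the first stage with findings."""
    for stage, checks in STAGES.items():
        findings = [f for check in checks for f in check(artifact)]
        if findings:
            return False, [f"{stage}: {f}" for f in findings]
    return True, []


ok, findings = gate("image: app:1.0\nprivileged: true\n")
print(ok, findings)  # blocked at pre-deploy in this toy setup
```

Runtime controls and red teaming would then sit behind this gate as the final layer, catching whatever the earlier stages miss.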

And while large language models (LLMs) are improving rapidly, they don’t generate perfect code. “You need human oversight – human in/on the loop – and visibility into what the tool is doing. Think of the autonomous driving analogy – automation with a human supervising works until the supervision needs change. The key is tight integration and the ability to observe, orient, decide and act quickly – an OODA loop – using consolidated data and tooling.”

Furthermore, the quality of LLM-generated code is largely down to the model provider, but it can be improved by building “an automated loop where the model outputs are scored for security and quality, and prompts are refined,” said Scully.
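The loop Scully outlines can be sketched as a generate-score-refine cycle. Everything below is a stand-in: `generate` mocks a model call and `score_output` mocks a security/quality scanner, purely to show the control flow of tightening a prompt until outputs clear a threshold.

```python
# Hedged sketch of an automated refinement loop: generate code, score
# it, and tighten the prompt until the score clears a threshold or
# attempts run out. The model and scanner here are toy stand-ins.

def generate(prompt: str) -> str:
    # Stand-in for an LLM call; echoes the prompt's constraints.
    return f"# generated under: {prompt}\ndef handler(req): ..."


def score_output(code: str) -> float:
    # Stand-in for a scanner: reward code produced under stricter prompts.
    return 0.9 if "parameterised queries" in code else 0.4


def refine(prompt: str, threshold: float = 0.8, max_attempts: int = 3) -> tuple[str, float]:
    code, score = "", 0.0
    for _ in range(max_attempts):
        code = generate(prompt)
        score = score_output(code)
        if score >= threshold:
            break
        # Refinement step: fold the failed policy back into the prompt
        # as an explicit constraint.
        prompt += "; use parameterised queries"
    return code, score


code, score = refine("write a DB handler")
print(score)  # the second attempt passes in this toy setup
```

In practice the scoring side would be the same pipeline checks used for human-written code, closing the loop between generation and governance.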

And external guardrails can be imposed – for example, by checking outputs for personally identifiable information (PII), toxic content, permissions and other policy violations. “That supervisory layer can block risky outputs before they get used,” he added.
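Such a supervisory layer amounts to screening every model output against a set of policies before it is allowed through. The sketch below uses a couple of illustrative regex patterns for PII and credential markers; a real deployment would use a purpose-built scanner rather than hand-rolled expressions.

```python
# Minimal sketch of an external guardrail layer: scan a model output
# for policy violations and block it before it enters the pipeline.
# The patterns are illustrative only, not production-grade detection.
import re

POLICIES = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn":        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_key_id":    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}


def screen(output: str) -> list[str]:
    """Return the names of any policies the output violates."""
    return [name for name, pattern in POLICIES.items() if pattern.search(output)]


def supervise(output: str) -> str:
    """Pass clean outputs through; raise on any policy violation."""
    violations = screen(output)
    if violations:
        raise ValueError(f"blocked: {', '.join(violations)}")
    return output


print(screen("contact: alice@example.com, SSN 123-45-6789"))
```

The same pattern extends to toxicity classifiers or permission checks: each is just another entry in the policy table, and any hit blocks the output.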

To navigate these changes safely, Scully advised boards and C-level executives to implement AI governance and standards at the board level. This involves defining clear policies and mapping them to established frameworks such as ISO 27001 and the US National Institute of Standards and Technology’s risk management framework.

He also urged leaders to explicitly decide which models and tooling are permitted, making thorough security assessments mandatory. Finally, organisations must track their runtime posture and maintain up-to-date model inventories to ensure visibility, combining overarching governance with platform-level controls so the business can continue to innovate safely.
