AWS extends hands-on ‘experimental’ agentic development with Strands Labs
The AWS developer team has blogged to announce Strands Labs, the company's new experimental home for agentic development.
Strands Labs is a new Strands GitHub organisation designed to give developers experimental approaches to agentic AI development.
By way of definition, AWS Strands is a model-driven framework (i.e. one that uses high-level designs to automatically generate code, often used to streamline complex software development and maintenance) and software development kit (SDK) used for building, scaling and deploying agentic AI systems.
Model-driven magnificence
For additional background, AWS Strands' model-driven approach means the framework can delegate complex reasoning to powerful cloud-based agents while maintaining millisecond-level control at the edge, so there is a welcome abstraction of complexity here.
The Strands Agents SDK is available for both Python and TypeScript, and the team says it has gained traction in the developer community since its open source release in May 2025.
The SDK has been downloaded more than 14 million times and the AWS team has shipped a number of updates, including experiments like ‘steering’ (see below), to support a very active developer community.
What is Strands steering?
Strands' steering function is an experimental feature that provides a modular prompting mechanism to guide AI agents toward desired outcomes without using rigid, hard-coded workflows. In practice, rather than front-loading all instructions into a single, massive system prompt, steering allows developers to inject context-aware feedback at specific moments in an agent's lifecycle.
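To make the idea concrete, here is a minimal, framework-agnostic sketch of that pattern. Everything below is illustrative: the `Agent` class, the hook names and the `steering_hooks` parameter are hypothetical stand-ins, not the real Strands steering API. The point is the shape of the technique, where guidance is attached to lifecycle moments instead of one monolithic system prompt.

```python
# Hypothetical sketch of lifecycle-moment steering; these names do NOT
# reflect the actual Strands Agents API.

class Agent:
    def __init__(self, system_prompt, steering_hooks=None):
        self.system_prompt = system_prompt
        # Map of lifecycle moment -> callable returning guidance text
        self.steering_hooks = steering_hooks or {}

    def run(self, task):
        messages = [self.system_prompt]
        for moment in ("before_plan", "before_tool_call", "after_tool_call"):
            hook = self.steering_hooks.get(moment)
            if hook:
                # Inject context-aware guidance only when it is relevant,
                # rather than front-loading it into the system prompt.
                messages.append(hook(task))
        return messages

def budget_guard(task):
    # Guidance injected only before tool calls
    return "Steering: prefer cached results; avoid paid API calls."

agent = Agent(
    "You are an invoice-processing agent.",
    steering_hooks={"before_tool_call": budget_guard},
)
print(agent.run("Reconcile March invoices"))
```

Here the budget rule never clutters the base prompt; it only surfaces at the moment a tool call is about to happen, which is the modularity steering is aiming for.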
“We’ve chosen to make Strands Labs a separate GitHub organisation to encourage innovation through experimentation and to push the frontier of agentic development. We’ve also opened Strands Labs to all of the development teams across Amazon – meaning, they can all contribute their innovative open source projects for community use and feedback. This model will encourage faster experimentation, learning and growth for Strands’ community of developers, without coupling experiments to the Strands SDK and its production use release cycle,” blogged Joy Chakraborty, senior technical program manager for AWS Agentic AI Foundation; Andrew Shamiya, product marketing manager for AWS Agentic AI; and Ryan Coleman, product manager at Amazon Web Services.
At launch, the team is making Strands Labs available with three projects:
- Robots – Exploring how AI agents extend to the edge and the physical world, interacting with physical environments through sensors and hardware via a unified Strands Agents interface.
- Robots Sim – Integrates agentic robots with simulated 3D physics-enabled worlds for rapid prototyping and algorithm development without requiring physical hardware.
- AI Functions – Allows developers to define agents using natural language specifications instead of code, with Python-based pre- and post-conditions to validate behaviour and generate working implementations.
The AWS team says that agentic AI systems are expanding beyond the digital world into the physical domain. As AI systems increasingly interact with robotics, autonomous vehicles and smart infrastructure, a key challenge emerges: how to leverage massive cloud compute for complex reasoning while maintaining millisecond-level responsiveness for physical sensing and actuation.
Strands Robots provides orchestration, intelligence and infrastructure to transform edge devices into coordinated physical AI systems.
No paranoid androids here
Strands Robots extends the Strands Agents capability for:

- AI agents to control physical robots through a unified Strands Agents interface that connects AI agents to physical sensors and hardware.
- Rapid prototyping and algorithm development in a safe, simulated environment without requiring physical robotic hardware, which is perfect for iterating on agent strategies, testing VLA policies and validating approaches before real-world deployment.
AI Functions is also new here.
“AI Functions introduces a new way to write code using natural language specifications instead of full implementations. Using the @ai_function decorator, developers define intent and validation conditions. The system generates the implementation, validates the output and retries automatically if validation fails,” added the AWS team.
How does that work then? Well… let’s say a software engineer has to provide functionality to load invoice data from files in unknown formats.
Traditional approaches require determining the file format, writing transformation logic for each format, constructing prompts, parsing responses and orchestrating retries when validation fails. This typically involves dozens of lines of code and may not account for every scenario. With AI Functions, a developer writes a small function describing the desired output and a validator function expressing what success looks like. The LLM determines the file format, writes the transformation code and returns a real Python DataFrame object.
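The generate-validate-retry loop described above can be sketched in plain Python. To be clear, this is a hypothetical illustration of the pattern, not the real AI Functions implementation: `call_llm` is a stand-in for a model call, and the decorator signature is assumed for the sake of the example.

```python
# Hypothetical sketch of the AI Functions pattern: a natural-language
# spec plus a validator, with automatic retry on validation failure.
# `call_llm` is a stand-in, not a real API.

def ai_function(validator, max_retries=3):
    def decorator(spec_fn):
        def wrapper(*args, **kwargs):
            spec = spec_fn.__doc__  # the natural-language specification
            last_error = None
            for _ in range(max_retries):
                result = call_llm(spec, args, kwargs, feedback=last_error)
                try:
                    validator(result)  # post-condition check
                    return result
                except AssertionError as exc:
                    last_error = str(exc)  # feed the failure back in
            raise RuntimeError(f"Validation failed after {max_retries} tries")
        return wrapper
    return decorator

def call_llm(spec, args, kwargs, feedback=None):
    # Stand-in for the model: pretend it parsed the file correctly.
    return {"invoice_id": "INV-001", "total": 120.50}

def valid_invoice(record):
    assert "invoice_id" in record and record["total"] > 0

@ai_function(validator=valid_invoice)
def load_invoice(path):
    """Load invoice data from a file of unknown format into a dict
    with 'invoice_id' and 'total' keys."""

print(load_invoice("invoices/march.csv"))
```

The developer supplies only the docstring spec and the validator; format detection, transformation and retry orchestration all live inside the decorator, which is the shift AI Functions is pitching.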
Developers interested in exploring more and starting to experiment should visit Strands Labs.

