Automat-it LLM selection optimiser cuts trial-and-error tax
With AI adoption proving something of a headache for enterprise software application development teams now tasked with its urgent implementation, those migration migraines are arguably compounded by the need to select which large language model (LLM) a software engineering unit will run once its AI deployment goes into live production.
Automat-it thinks it can make those processes easier.
The managed services provider and AWS partner now offers its LLM Selection Optimizer as a specialised service designed to eliminate the “trial-and-error” tax that currently plagues AI-driven startups.
It works by providing a data-backed roadmap to choose the most efficient LLMs for specific business needs.
As generative AI adoption accelerates, many founders find themselves overspending on overpowered models or struggling with inconsistent performance. Automat-it’s product replaces guesswork with tailored recommendations for foundation models on Amazon Bedrock, based on an organisation’s (typically a startup’s, in this case) unique proprietary data.
The company claims that by right-sizing their AI infrastructure, early adopters of the LLM Selection Optimizer have already cut their LLM spend by up to 60% while simultaneously improving output quality and scalability.
Why LLM selection matters
For startups, choosing the wrong LLM can have a real impact on burn rate and scalability:
- Wasted Budget: Idle and unused resources make up 27% of cloud budgets for startups, and an LLM that is not right-sized can be a significant contributor.
- The Selection Headache: Companies use an average of 3.1 different model providers as they struggle to balance reliability, cost, and latency.
- Inference Impact: Inference is the biggest compute cost for 74% of startups, making LLM model efficiency crucial.
Choosing an underpowered model requires frequent human intervention, while an unnecessarily large model drives up computational costs without improving outcomes. Both raise the risk of failure for startups, 29% of which fail because they run out of money.
An end to LLM guesswork?
According to Nir Shney-Dor, VP of global solutions architecture at Automat-it, the LLM Selection Optimizer uses Automat-it’s AWS AI Services Competency, a status awarded for meeting rigorous technical standards in security and reliability, to provide a three-step optimisation path:
- Audit: Evaluating proprietary datasets against the current LLM landscape.
- Test: Benchmarking cost, latency, accuracy, and task performance through real-world workload simulations.
- Optimise: Delivering a comprehensive Benchmarking Report to deploy models that maximise ROI.
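The Test and Optimise steps above boil down to scoring candidate models against benchmark results. A minimal sketch of that idea in Python follows; the metric names, weights, and figures are illustrative assumptions, not Automat-it's actual methodology or real Bedrock pricing:

```python
from dataclasses import dataclass

@dataclass
class BenchmarkResult:
    """One model's results from a workload simulation (hypothetical figures)."""
    model: str
    cost_per_1k_tokens: float  # USD per 1,000 output tokens
    p95_latency_ms: float      # 95th-percentile response latency
    accuracy: float            # score (0..1) on a task-specific eval set

def rank_models(results, w_cost=0.3, w_latency=0.2, w_accuracy=0.5):
    """Rank models by a weighted score: cost and latency are normalised
    against the worst candidate (lower is better); accuracy is used as-is."""
    max_cost = max(r.cost_per_1k_tokens for r in results)
    max_lat = max(r.p95_latency_ms for r in results)
    def score(r):
        return (w_cost * (1 - r.cost_per_1k_tokens / max_cost)
                + w_latency * (1 - r.p95_latency_ms / max_lat)
                + w_accuracy * r.accuracy)
    return sorted(results, key=score, reverse=True)

# Illustrative candidates: an overpowered, a right-sized, and an underpowered model.
results = [
    BenchmarkResult("large-model", 0.0150, 2200, 0.92),
    BenchmarkResult("mid-model",   0.0030,  900, 0.90),
    BenchmarkResult("small-model", 0.0006,  400, 0.70),
]
print(rank_models(results)[0].model)  # → mid-model
```

With these made-up numbers, the mid-sized model wins: the large model's extra accuracy does not justify its cost, while the small model's savings cannot offset its weak task performance, which is the right-sizing trade-off the service is built around.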
Key benefits include the chance to optimise burn rate by right-sizing models to avoid wasted spend and extend runway; to skip trial-and-error cycles with standardised, reproducible benchmarks; and to avoid vendor lock-in through a flexible architecture aligned with long-term growth.

