Armenia Science Body (FAST): GreenOps requires building efficiently, from the start
This is a guest post for the Computer Weekly Developer Network written by Suzanna Shamakhyan in her capacity as executive education strategist and partnership architect at the Foundation for Armenian Science and Technology (FAST).
Shamakhyan writes in full as follows…
Much of the debate around artificial intelligence still treats “scale” as a sign of seriousness. Countries and companies are judged by how many datacentres they build, how many GPUs they acquire and how quickly they pour capital into permanent infrastructure. GreenOps challenges that instinct. It asks whether progress is really about accumulation, or about building systems that use power, compute and capital intelligently. AI is where that question becomes unavoidable, because nowhere else in modern technology are compute, energy and cost pressures so tightly linked.
How to build an AI industry
The key is that when building an AI industry, infrastructure decisions must serve several purposes at once: technological relevance, energy security, sovereignty and long-term resilience. That is where GreenOps moves from a corporate framework to a national operating principle.
My own country, Armenia, is doing exactly this and the planned AI factory being developed by Firebird AI is an example. What matters is not only the scale of the investment, but how it is integrated. Tying high-performance compute infrastructure to stable and independent energy sources, including nuclear power, reflects a GreenOps mindset at a national scale. AI infrastructure is meaningless without reliable power and power independence is inseparable from national resilience.
This contrasts with a global pattern in which some countries can afford to build hyperscale AI facilities without much regard for geography, efficiency, or sustainability. Cooling massive datacentres in extreme climates or separating compute strategy from energy planning is possible when capital is effectively unlimited.
Reality check: AI is expensive
AI systems are expensive to build and even more expensive to operate. Training models, running inference, securing data and managing storage create compounding demands on compute and power. In many ecosystems, inefficiency is masked by access to capital and hyperscale cloud platforms. Overprovisioning becomes normal, duplication is tolerated and optimisation is postponed.
Firebird’s AI factory, built on Nvidia’s GPU and software stack, therefore fits naturally into GreenOps thinking. Large infrastructure, intelligently placed, shared and governed, is more efficient than fragmented private buildouts. Centralisation enables visibility. Visibility enables accountability. Accountability enables optimisation. When compute is measurable, energy use can be understood and managed.
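When compute is measurable in this way, even a back-of-the-envelope metric makes energy use tangible. A minimal sketch of such a metric (every figure, the PUE value and the function name are illustrative assumptions, not details of Firebird's facility):

```python
# Hypothetical GreenOps metric: facility energy used per 1,000 inferences.
# All numbers below are illustrative assumptions, not measured data.

def energy_per_1k_inferences(avg_gpu_power_w, inferences_per_sec, pue=1.2):
    """Watt-hours of total facility energy per 1,000 inferences.

    avg_gpu_power_w    -- average draw of the serving GPUs, in watts (assumed)
    inferences_per_sec -- sustained serving throughput (assumed)
    pue                -- power usage effectiveness: facility energy
                          divided by IT energy (assumed)
    """
    seconds_per_1k = 1000 / inferences_per_sec
    gpu_wh = avg_gpu_power_w * seconds_per_1k / 3600  # watt-seconds -> Wh
    return gpu_wh * pue  # scale up by cooling and facility overhead

# Example: a 700 W accelerator serving 50 inferences/s in a PUE 1.2 facility
print(round(energy_per_1k_inferences(700, 50, 1.2), 2))  # prints 4.67
```

Tracked over time, a number like this is what turns "accountability" into "optimisation": a regression in the metric points directly at wasted power.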
Policy reinforces this direction. It is critical to develop frameworks around data protection, cybersecurity and digital sovereignty that treat AI infrastructure as strategic national terrain rather than generic IT. It’s important to position AI capacity as part of national security and economic development, not merely private-sector experimentation.
Connectivity follows the same logic. Starlink’s nationwide deployment is not just about faster internet. It provides redundancy and resilience in a mountainous region vulnerable to disruption. It avoids slow, capital-heavy terrestrial buildouts and delivers coverage through optimisation rather than expansion. From a GreenOps perspective, it is another example of achieving capability with fewer structural commitments.
Armenia’s software culture
Armenia’s software culture amplifies the value of this infrastructure.
Its strongest technology companies have traditionally grown through engineering quality rather than hardware dominance. Firms like Krisp, SuperAnnotate, Activeloop, PicsArt and RenderForest have built global products by maximising algorithmic efficiency and product focus. That discipline carries directly into AI. Efficient architectures reduce unnecessary training cycles, smarter data pipelines lower storage and transfer loads and careful deployment avoids redundant inference. Infrastructure matters, but code determines how hard that infrastructure must work.
Developers must be trained to optimise before scaling, designing systems that naturally consume fewer resources. In AI, where costs compound rapidly, that discipline becomes decisive. And every component must serve multiple strategic goals: AI competitiveness, energy independence, resilience and efficiency. Limited capital encourages coherence. Geopolitical exposure encourages sovereignty. A small ecosystem encourages coordination. These pressures produce architectures that are easier to measure, govern and improve.
Optimisation must be built into the foundation rather than applied later. Early design choices matter enormously in AI, where inefficiencies compound faster than in almost any other industry.
Long-term national coherence
In a global AI race often framed as a contest of size alone, our country is pursuing a different form of competitiveness: building large where it matters, integrating energy and compute from the outset and insisting that infrastructure serve long-term national coherence. For a small country operating under real constraints, that combination of ambition and discipline may prove to be its most durable advantage.
Suzanna Shamakhyan is the executive director of the Foundation for Armenian Science and Technology (FAST), a diaspora-created NGO with a vision of Armenia transforming into an innovator nation and a global Artificial Intelligence hub.

