GreenOps - Memgraph: Going green doesn’t mean going cloud-free
Dominik Tomicevic is a graph database expert and the founder of Memgraph.
Memgraph is known for its in-memory graph database for real-time streaming, querying and data analysis.
On the subject of GreenOps, Tomicevic thinks simplistic anti-cloud arguments miss the point and believes graph technology deserves its own green spotlight – he writes as follows…
It’s no secret that with an AWS, Azure, or GCP license, it’s easy to spin up hundreds of servers to solve a problem. But as best practice around GreenOps reminds us, increasingly it’s the engineer’s responsibility to be aware of the sustainability impact of their organisation’s IT footprint. That raises a broader issue: is relying on brute-force compute becoming architecturally lazy, wasting power and company resources?
A related issue concerns AI-generated code. If a vibe coding engine can produce highly efficient solutions, should we be handing over all of our programming to AI?
The reality is more nuanced than some anti-IT environmental critics suggest. At the same time, there are genuine considerations that any cost-conscious and sustainability-aware developer should take on board. Beyond defaulting to outsourced cloud or AI coding, there may be a third way – one that aligns closely with FinOps and GreenOps principles.
This approach would advance technology value management through best practices, smarter architectural choices such as graph databases and a stronger focus on education and standards.
Cloud vs on-premises: which is greener?
Let’s start with first principles. The link between financial discipline and environmental responsibility is obvious: infrastructure optimised to be greener is almost always more cost-efficient. Yet the path forward is rarely straightforward: cloud infrastructure today is frictionless, and if a team wants more compute, it can be provisioned in minutes.
The economic logic, however, strikes a different note. Unless there is a clear and time-sensitive rationale, such as rapid prototyping, overprovisioning infrastructure simply does not make financial sense. Eventually, your CFO will ask hard questions about the return on that spend.
But were in-house environments ever paragons of efficiency? Far from it. When organisations operated their own data centres, they often struggled with underutilised capacity and structural inefficiencies. By contrast, cloud providers work at huge economies of scale and are strongly incentivised to optimise utilisation and energy efficiency — their margins depend on it.
In most cases, consolidating workloads in the cloud is actually more carbon-efficient than maintaining equivalent on-premises infrastructure. That calculus may differ for certain AI workloads, but it’s a distinction developers should be aware of. Developers should also be mindful of the real current state of vibe coding. Research suggests code written by large language models can, in some scenarios, be significantly slower and more memory-intensive than human-written code. That translates into higher energy consumption per task; its efficiency, in other words, cannot be assumed.
That perspective must be balanced against the promise of significant efficiency gains. We are still at the dawn of machine programming. Just as earlier eras of computing gave rise to new disciplines in performance engineering, optimisation and tooling, we can expect vendors to emerge with business models focused on making AI-generated code more efficient.
Tomicevic: Brute force is rarely a prudent long-term strategy.
Scale drives architectural optimisation
An interesting case of inevitable computational efficiency is Facebook. In its early years, the company relied heavily on PHP, which was slow at scale. Rather than continue adding servers, Zuckerberg invested in a compiler to reduce the inefficiency.
The result dramatically lowered costs and enabled further growth. The lesson: brute force is fine in the short term, but sustained scale eventually forces architectural optimisation.
I think it is, therefore, too simplistic to say the cloud is inefficient and automatically a poor choice in terms of energy and water usage. Note that I’m also saying that while AI-written code may eventually surpass humans in efficiency, we’re not there yet.
Open source also has a role to play. More eyes on the code increases the chance that inefficiencies will be spotted and corrected, while contributors with different skills can propose performance improvements over time. Open source also extends the life of hardware beyond official support windows by maintaining patches and updates, reducing electronic waste. Transparency adds another layer of accountability, as visible code invites scrutiny and challenges inefficient practices.
However, another route to greener programming could be a computer science solution, not a hardware one. It’s not simply about sound program design or efficient programming, though there’s always value in principles that encourage clean and non-wasteful design. Like many architects, I am grateful to have cut my teeth on Big O algorithmic notation, formal methods, type theory and other foundational tools that teach rigour and precision.
The problem with all that theory is, to coin a phrase, it doesn’t last long in contact with the real world. Optimising a small, early-stage system for microsecond gains may not make business sense if it delays product development. Efficient algorithms often introduce complexity, which can increase maintenance costs and bug risk. There is always a trade-off between execution efficiency and business agility.
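That trade-off is easy to see in miniature. A hypothetical Python sketch (illustrative names and data, not from any real codebase): the naive duplicate check below is O(n²) but trivially readable, while the set-based version is O(n) at the cost of extra memory and a touch more machinery — at small scale the simpler code may well be the better business choice.

```python
def has_duplicates_naive(items):
    """O(n^2): compares every pair. Obvious and easy to maintain; fine for small n."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False


def has_duplicates_fast(items):
    """O(n) time: trades extra memory (a set of seen items) for speed at scale."""
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```

Both return the same answers; which one is "right" depends on the data volume and how often the code runs — exactly the efficiency-versus-agility tension described above.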
Graph to model processes
At the same time, for a surprisingly wide range of problems, the choice of data representation can deliver huge efficiencies. Graph-based models of the real world can more faithfully reflect the ‘prototype’, enabling better inference and query results, while also being computationally parsimonious to construct and run.
For example, is it better to force reality into complex inter-object relationships via relational tables and repeated JOIN operations, or to leverage entity-node traversal as a more direct way to build structures like digital twins? Recomputing complex joins in a relational database can be far more resource-intensive than traversing a native graph.
For relationship-heavy queries, like fraud detection, recommendation engines, or route optimisation, a graph model, rather than a relational one, offers clear advantages. Modern graph databases precompute or efficiently store relationships for rapid traversal. Arguably, this may require more memory, but it can dramatically accelerate operations. For inherently connected data, shoehorning a graph model into a relational system leads to code bloat, as developers write layers of logic to mimic native graph behaviour.
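The difference can be sketched in a few lines of Python, using a made-up friendship dataset (all names and functions here are hypothetical, for illustration only). The "relational" version rescans an edge table for every hop, as a self-join would; the "graph" version stores relationships as adjacency lists, so each hop is a direct lookup:

```python
from collections import defaultdict

# A tiny made-up edge table: (person, friend) pairs.
edges = [("ann", "bob"), ("bob", "cat"), ("bob", "dan"), ("ann", "eve")]


def friends_of_friends_join(person):
    """Relational style: each hop is a full scan (self-join) over the edge table."""
    first_hop = [b for (a, b) in edges if a == person]
    return {c for (b, c) in edges for f in first_hop if b == f and c != person}


# Graph style: build adjacency lists once; relationships are stored, not recomputed.
adjacency = defaultdict(set)
for a, b in edges:
    adjacency[a].add(b)


def friends_of_friends_graph(person):
    """Graph style: traverse stored relationships directly, no per-query join."""
    return {c for f in adjacency[person] for c in adjacency[f] if c != person}
```

Both return the same answer, but the join version does work proportional to the whole edge table on every hop, while the traversal only touches the neighbours it actually visits. In a native graph database the second pattern would be a single declarative traversal query rather than hand-written loops.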
I am therefore calling for more creative thinking in how we model problems, and for developers not simply to default to the relational approaches they learned at university. Exploring alternative representations can help cloud workloads run far more efficiently and reduce concerns about resource-hungry infrastructure.
Ultimately, programming efficiency isn’t about austerity, but about matching the optimum architecture to the problems at hand. This alignment delivers financial discipline and environmental responsibility — surely the very goals the FinOps and GreenOps movements champion?
