AI In Code Series: Sumo Logic - AI for CI/CD visualisation, normalisation & standardisation

We all use Artificial Intelligence (AI) almost every day, often without even realising it: a large proportion of the apps and online services we connect with have a degree of Machine Learning (ML) and AI in them, providing predictive intelligence, autonomous internal controls and smart data analytics designed to make the User Interface (UI) experience more fluid and intuitive.

That’s great. We’re glad the users are happy and getting some AI-goodness. But what about the developers?

What has AI ever done for the programming toolsets and coding environments that developers use every day? How can we expect developers to develop AI-enriched applications if they don't have the AI advantage at hand at the command line, inside their Integrated Development Environments (IDEs) and across the Software Development Kits (SDKs) that they use on a daily basis?

What can AI do for code logic, function direction, query structure and even for basic read/write functions… and what tools are in development? In this age of components, microservices and API connectivity, how should AI work inside coding tools to direct programmers towards more efficient streams of development, so that they don't have to 'reinvent the wheel' every time?

This Computer Weekly Developer Network series features a set of guest authors who will examine this subject. This post comes from Christian Beedgen in his role as chief technology officer at Sumo Logic, a company known for its cloud-based log and metrics management services, which use machine-generated data for real-time analytics.

Beedgen writes as follows…

For developers today, Continuous Integration / Continuous Deployment (CI/CD) pipelines are how software is taken from initial creation through to production release. Managing this process involves multiple teams and tools, all working together.

However, not all software is created equal. 

Why should that simple application available to a few people internally follow the same process as the mission-critical, customer-facing app that spans multiple platforms and clouds?

Pipeline proliferation

The number of pipelines that developers may be involved with inside the enterprise has therefore gone up. In large enterprises, there may be tens or even hundreds of different pipelines in place, all interacting with different software lifecycle tools and all creating data, all the time.

Today, we tell our businesses to be data-driven, but how data-driven are we ourselves?

Getting this data into one place so it can be used is one objective. Under the blanket term of observability, joining logs, metrics and tracing data together has been discussed and targeted as a way to improve visibility. However, alongside this application data, getting insight into what is taking place across all those pipelines is just as important.
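
To make that concrete, the sketch below shows one common way of joining the three observability signals: grouping log lines, trace spans and metric samples under a shared trace ID. The record shapes and field names are illustrative assumptions, not any particular vendor's schema.

```python
# A minimal sketch of correlating logs, traces and metrics by a shared
# trace ID. All record shapes below are invented for illustration.
from collections import defaultdict

logs = [
    {"trace_id": "t-101", "level": "ERROR", "message": "payment declined"},
    {"trace_id": "t-102", "level": "INFO", "message": "checkout complete"},
]
spans = [
    {"trace_id": "t-101", "service": "payments", "duration_ms": 842},
    {"trace_id": "t-102", "service": "checkout", "duration_ms": 120},
]
metrics = [
    {"trace_id": "t-101", "cpu_pct": 91.0},
    {"trace_id": "t-102", "cpu_pct": 34.5},
]

# Group every signal under its trace so one lookup answers the question
# "what happened around this request?"
joined = defaultdict(dict)
for name, records in (("logs", logs), ("spans", spans), ("metrics", metrics)):
    for record in records:
        joined[record["trace_id"]].setdefault(name, []).append(record)

print(joined["t-101"])
```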

Developers have so many choices around the tools they can use as part of their pipelines that it is hard to standardise. What suits one team, its work and its deployment approach may not suit another; in addition, the use of multiple cloud services can add another layer of complexity. Getting data out of all these tools, platforms, cloud services and processes is therefore tricky, but necessary.

Coalescing all this data from multiple tools should provide insight into what is happening across the software development lifecycle and across multiple pipelines at the same time. With this data, individual developers can get recommendations around how well their implementations are doing in comparison with other developer teams using the same tools and combinations of services, as well as what to concentrate on next.

AI automation for normalisation

This insight is only possible if you are able to link CI/CD data together and use automation to normalise, process and present that data as information that developers can use. By comparing existing team data to industry-standard performance metrics, software developers can gauge their own levels of success and get actionable feedback on what changes are needed.
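
As a minimal sketch of that normalisation step, the code below maps build events from two CI tools into one common schema so their runs can be compared side by side. The raw payload shapes (jobName, workflow_name and so on) are invented for illustration; real Jenkins or GitHub Actions payloads differ.

```python
# A hedged sketch of normalising CI events into one schema.
# Field names in the raw payloads are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class PipelineEvent:
    tool: str
    pipeline: str
    status: str          # "success" | "failure"
    finished_at: datetime

def from_jenkins(raw: dict) -> PipelineEvent:
    return PipelineEvent(
        tool="jenkins",
        pipeline=raw["jobName"],
        status="success" if raw["result"] == "SUCCESS" else "failure",
        finished_at=datetime.fromtimestamp(raw["timestamp"] / 1000, tz=timezone.utc),
    )

def from_github_actions(raw: dict) -> PipelineEvent:
    return PipelineEvent(
        tool="github-actions",
        pipeline=raw["workflow_name"],
        status="success" if raw["conclusion"] == "success" else "failure",
        finished_at=datetime.fromisoformat(raw["completed_at"]),
    )

events = [
    from_jenkins({"jobName": "api-build", "result": "SUCCESS",
                  "timestamp": 1700000000000}),
    from_github_actions({"workflow_name": "web-deploy", "conclusion": "failure",
                         "completed_at": "2023-11-14T22:13:20+00:00"}),
]
failure_rate = sum(e.status == "failure" for e in events) / len(events)
print(f"{failure_rate:.0%} of normalised runs failed")
```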

Alongside your own data, it’s worth looking at how the industry as a whole performs around its DevOps processes and pipelines. For example, the DevOps Research and Assessment (DORA) reports have details on how these automated processes can improve performance and reduce stress on teams. Using this as a guide, you can gauge how well you are doing across your team, across your pipelines and ultimately across the business.
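
As a rough illustration, the snippet below computes three DORA-style measures (deployment frequency, change failure rate and median lead time) from a handful of made-up deployment records, then applies a placeholder tiering check. The thresholds are loosely modelled on the published bands; take the exact figures from the current DORA report rather than from this sketch.

```python
# DORA-style metrics from illustrative deployment records.
# Thresholds below are placeholders, not the official bands.
from datetime import timedelta

deploys = [
    {"lead_time": timedelta(hours=6), "failed": False},
    {"lead_time": timedelta(hours=30), "failed": True},
    {"lead_time": timedelta(hours=4), "failed": False},
    {"lead_time": timedelta(hours=12), "failed": False},
]
window_days = 7

deploy_frequency = len(deploys) / window_days            # deploys per day
change_failure_rate = sum(d["failed"] for d in deploys) / len(deploys)
median_lead = sorted(d["lead_time"] for d in deploys)[len(deploys) // 2]

print(f"Deploy frequency:    {deploy_frequency:.2f}/day")
print(f"Change failure rate: {change_failure_rate:.0%}")
print(f"Median lead time:    {median_lead}")

# Placeholder tiering, loosely modelled on the published DORA bands.
if (deploy_frequency >= 1 and median_lead <= timedelta(days=1)
        and change_failure_rate <= 0.15):
    print("Roughly in the high-performing band")
else:
    print("Room to improve against the DORA benchmarks")
```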

CI/CD pipelines are supposed to run continuously, so we should be getting information from them continuously as well. Using this data will involve getting a continuous process in place to turn that data from multiple logs, files and metrics spread across pipelines into a form that developers can visualise and use themselves.
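
A continuous process like that could start as small as the generator below, which consumes raw pipeline log lines as they arrive and yields structured records a dashboard could plot. The log format here is a made-up example, not any real tool's output.

```python
# A minimal sketch of a continuous normalisation step over a log stream.
# The line format is an invented example.
import re
from typing import Iterable, Iterator

LINE = re.compile(
    r"(?P<pipeline>\S+) stage=(?P<stage>\S+) status=(?P<status>\S+) ms=(?P<ms>\d+)"
)

def normalise(lines: Iterable[str]) -> Iterator[dict]:
    for line in lines:
        match = LINE.search(line)
        if match:  # skip unparseable lines rather than halting the stream
            record = match.groupdict()
            record["ms"] = int(record["ms"])
            yield record

raw = [
    "api-build stage=test status=passed ms=48210",
    "web-deploy stage=release status=failed ms=9020",
]
for record in normalise(raw):
    print(record)
```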

Wider industry view

Automating this process makes it easier to get the results back faster. Using AI, you can then start to make comparisons and recommendations back to developers. This is only possible when you can make recommendations based on what the wider industry is achieving, so you can see the context for whether you are performing well or not.
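
The comparison step might look something like the sketch below: scoring one team's lead time against a wider sample and turning the result into a recommendation. The baseline numbers are invented, and a simple z-score stands in for whatever model an AI-driven service would actually apply.

```python
# Score a team metric against a wider baseline and suggest a next step.
# Baseline figures are invented; a z-score is a stand-in for a real model.
from statistics import mean, stdev

industry_lead_times_hours = [8, 12, 20, 6, 15, 30, 10, 18]  # illustrative sample
team_lead_time_hours = 28

mu, sigma = mean(industry_lead_times_hours), stdev(industry_lead_times_hours)
z = (team_lead_time_hours - mu) / sigma

if z > 1:
    print(f"Lead time is {z:.1f} sd above the sample; prioritise pipeline speed.")
elif z < -1:
    print(f"Lead time is {abs(z):.1f} sd below the sample; look elsewhere for gains.")
else:
    print("Lead time is in line with the sample baseline.")
```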

It is possible to achieve all this without automation. It is, however, time-consuming, and the results are less valuable for the effort involved.

With automation and AI, your developers can concentrate on what will make the most difference within their pipelines, regardless of the tools in place or cloud services used. The aim should be to improve everyone’s decision making across the software development lifecycle, making this just as data-driven as the rest of the business.

Sumo Logic's Beedgen: You 'can' avoid automation, but it's time-consuming & less valuable.
