Edge AI explained: Everything you need to know

In this essential guide, Computer Weekly investigates the rapid proliferation of artificial intelligence deployed at the edge of networks – edge AI

Within knowledge work, it’s a rare individual who hasn’t yet come across the potential benefits that artificial intelligence (AI) technologies could bring to their industry.

According to research by TechTarget’s Enterprise Strategy Group (ESG), as AI continues its meteoric rise into business and IT environments, organisations are rapidly assembling or accelerating strategies to support AI technologies across every applicable area. Unlike niche technologies that impact only certain processes or personnel, AI has wide-ranging potential to transform entire businesses, IT environments and associated teams.

What is driving edge AI?

Everything these days is connected and becoming more complex, making the edge a new frontier. Essential equipment can do more by communicating with more objects and devices, which significantly increases the complexity of data transmission and management.

Generative AI (GenAI) systems such as ChatGPT have captured the imagination unlike anything the technology industry has seen in the past 50 years. AI at the edge will be crucial to ensure desired real-time performance, data security and customisation in key applications such as autonomous driving, infotainment and robotics.

As a result, experts say three principal vectors are driving AI at the edge: real-time processing; data security and privacy; and personalisation/customisation.

Where is edge AI being deployed initially?

Among the innovations being rolled out, edge AI leaders agree there are seven key use cases where AI at the edge is proving transformational: healthcare and life sciences, smart retail, communications, smart city, automotive, digital home, and intelligent factories.

AI advancements in healthcare are transforming convenience and quality of life, redefining how people interact with technology. AI is driving cutting-edge research and helping doctors make more accurate diagnoses.

In automotive, AI is powering driver-assist and advanced safety features in cars. Consider how many connected components modern vehicles now contain: AI assistants are on the verge of being introduced, and these will allow people to personalise their cars, from adjusting settings to making dinner reservations or consulting the user manual. This promises to redefine convenience in the industry. In the industrial space, AI is used in robotics and vision systems.

In industrial applications, the modern design-to-manufacturing process is based on many interconnected disciplines, often operating at the same time. Each member of the design and manufacturing teams needs access to the right product information at the right time, meaning availability of information from multiple sources at once is mission-critical.

With embedded systems, AI is changing how chips are designed, tested and debugged. AI is also allowing industrial robots to learn and act autonomously.

What are the technical challenges to edge AI?

The world of AI is innovating at a rapid pace, with ever more models putting tremendous stress on computing resources. AI at the edge will grow when the technology industry can deliver very high compute performance within a constrained environment.

To make AI work, unlock its full potential and make it pervasive, the underlying infrastructure and its supporting elements must be fully capable of sustaining these strategies. This infrastructure needs to be adaptable, with the ability to be updated or reconfigured over time, and should encompass central processing units (CPUs), graphics processing units (GPUs) and programmable logic devices such as field-programmable gate arrays (FPGAs) with open software, distributing workloads from the cloud to the edge of networks and endpoints.

As AI is pushed right to the edge, it is igniting new requirements for decentralised intelligence, meaning AI cannot depend solely on centralised cloud and compute infrastructures. The likely challenges centre on power, data throughput, latency, accuracy, environmental temperature, safety, security, regulation, diverse workloads and frequency changes.
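
To make one of those constraints concrete, the sketch below profiles per-frame inference latency against a real-time budget – the kind of check an edge deployment might run before trusting on-device processing. It is illustrative only: run_inference is a hypothetical stand-in for a real on-device model call, and the 10ms budget is an assumed figure.

```python
import time
import statistics

# Hypothetical stand-in for an on-device model call; in practice this
# would invoke an NPU- or GPU-backed runtime on the edge device.
def run_inference(frame):
    time.sleep(0.004)  # simulate roughly 4ms of on-device compute
    return {"label": "ok"}

LATENCY_BUDGET_MS = 10.0  # assumed budget, e.g. a 100fps vision loop

def profile(frames, budget_ms=LATENCY_BUDGET_MS):
    """Measure per-frame inference latency against a real-time budget."""
    samples = []
    for frame in frames:
        start = time.perf_counter()
        run_inference(frame)
        samples.append((time.perf_counter() - start) * 1000.0)
    p99 = statistics.quantiles(samples, n=100)[98]  # 99th percentile
    print(f"p99 latency: {p99:.2f} ms (budget {budget_ms} ms)")
    return p99 <= budget_ms

if __name__ == "__main__":
    within_budget = profile(range(200))
    print("real-time budget met" if within_budget else "budget exceeded")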

There are also specific technology challenges that each industry will likely face.

What are the particular challenges for specific use cases?

Harnessing the power of these continuously evolving models poses extreme challenges, especially because constraints at the network’s edge can be quite severe, and those constraints and requirements shift as applications change. Each use case places its own demands and requirements on edge applications.

In industrial applications, for example, firms will place emphasis on coping with diverse workloads, regulation, safety and accuracy. In healthcare, accuracy is paramount, alongside challenges encompassing safety, security and power. For automotive applications, latency, accuracy, safety and regulation are key.

The net result is that designing edge AI applications requires flawless integration of multiple systems across pre-processing, AI inference and post-processing stages, each of which must support system scalability and adaptability.
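
As an illustration of that staging, here is a minimal sketch in Python showing how pre-processing, inference and post-processing can be kept as separate, swappable units. The model is a dummy stand-in rather than a real edge runtime; the point is the structure, which is what gives the pipeline its scalability and adaptability.

```python
import numpy as np

def pre_process(frame: np.ndarray) -> np.ndarray:
    """Normalise raw sensor data into the model's expected input format."""
    return (frame.astype(np.float32) / 255.0).reshape(1, -1)

def infer(inputs: np.ndarray) -> np.ndarray:
    """Stand-in for AI inference on an edge accelerator (dummy weights)."""
    weights = np.random.default_rng(0).random((inputs.shape[1], 3))
    return inputs @ weights  # fake 3-class scores

def post_process(scores: np.ndarray) -> dict:
    """Turn raw model output into an actionable result."""
    probs = np.exp(scores) / np.exp(scores).sum()
    return {"class": int(probs.argmax()), "confidence": float(probs.max())}

# Stages stay independent so each can be scaled, tuned or replaced
# without disturbing the others.
frame = np.random.default_rng(1).integers(0, 256, size=(8, 8))
print(post_process(infer(pre_process(frame))))
```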

Which firms are leading innovation in edge AI?

The processor and platform giants are already leading the charge in edge AI – that is to say, the likes of Qualcomm Technologies, Intel, MediaTek, AMD, ARM, Advantech, Adlink, Cadence Design Systems, Microsoft and Bosch.

Given that the edge AI ecosystem is far-reaching, partnerships between the various stakeholders will be key. In April 2024, taking advantage of Embedded World 2024, Qualcomm unveiled what it called embedded ecosystem products in the form of the QCC730 Wi-Fi service and RB3 Gen 2 Platform, to provide what it calls critical upgrades enabling on-device AI, high-performance, low-power processing, and connectivity for internet of things (IoT) products and applications.

Qualcomm rates the products as being able to “revolutionise” battery-powered industrial, commercial and consumer applications. The RB3 Gen 2 Platform is said to offer a combination of high-performance processing, a 10-fold increase in on-device AI processing, support for quadruple 8MP+ camera sensors, computer vision and integrated Wi-Fi 6E. It is designed for use in a wide range of products, including various types of robots, drones, industrial handheld devices, industrial and connected cameras, AI edge boxes and intelligent displays.

Underlining the need for partnerships, Qualcomm also announced at Embedded World a strategic collaboration with Advantech to establish what the embedded and automation products and solutions provider describes as an open and diverse edge AI ecosystem, paving the way for solutions tailored to artificial intelligence of things (AIoT) applications.

The collaboration’s stated aim is to drive continuous innovation and expansion in edge AI devices for IoT, combining AI expertise, high-performance computing and industry-leading connectivity to advance industrial computing.

As the partnership was announced, Miller Chang, president of EIoT at Advantech, elaborated: “In the vast and fragmented IoT landscape, deploying AI applications efficiently is a challenge. [We] are working together to meet market demands and surpass perceived limits [and we] will look to provide edge AI platforms to navigate the industry’s fragmentation, ensuring interoperability. Together, we aspire to redefine AI possibilities at the edge, shaping the future of edge intelligence.”

As edge AI scales, ARM has unveiled its third-generation neural processing unit (NPU) to support it, the Ethos-U85. The chip design firm believes silicon innovators face having to navigate growing system and software complexity, while software developers need more consistent, streamlined experiences and easy integration with emerging AI frameworks and libraries.

Also noting the importance of partnerships, ARM said the new Ethos-U85 offers the same consistent toolchain as its predecessors, so partners can take advantage of existing investments to deliver a “seamless” developer experience. It supports AI frameworks such as TensorFlow Lite and PyTorch, and handles transformer networks as well as convolutional neural networks (CNNs) for AI inference.
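
To give a flavour of that developer experience, below is a minimal sketch of running a model with the standard TensorFlow Lite interpreter API. The model path is a placeholder, and actually offloading work to an NPU such as the Ethos-U85 would additionally require the vendor’s compilation and delegation toolchain, which is not shown here.

```python
import numpy as np
# tflite_runtime is the slimmed-down interpreter typically shipped on
# edge devices; the full tensorflow package exposes the same API under
# tensorflow.lite.
from tflite_runtime.interpreter import Interpreter

# "model.tflite" is a placeholder path for a converted, typically
# quantised, model file.
interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed one input tensor of the shape and dtype the model declares.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

result = interpreter.get_tensor(output_details[0]["index"])
print("output shape:", result.shape)
```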

The company also believes that transformer networks will drive new applications, particularly in vision and GenAI use cases for tasks like understanding videos, filling in missing parts of images, or analysing data from multiple cameras for image classification and object detection.

ARM said the launch came after it recognised the deployment of microprocessors into more high-performance IoT systems for use cases such as industrial machine vision, wearables and consumer robotics. The new technology is also intended to accelerate machine learning (ML) tasks and bring power-efficient edge inference to a broader range of higher-performing devices.
