Augmenting audio visual with intelligence and automation

This is the second in a series of articles exploring the challenges and opportunities facing the audio visual (AV) industry, which holds its annual European flagship event, ISE2018, in February. This article looks at how artificial intelligence (AI) and sensor-data-driven automation might transform the use and perception of AV.

Increasing complexity

IT-connected AV systems have become pervasive. Low-cost flat-panel displays are replacing projection and can be placed anywhere that workers or customers might congregate. Many already are, or will have to be, connected to the network. Screen sharing, remote participants, unified communications and video conferencing tools are becoming more widely deployed. Organisations expect productive collaboration, and individuals increasingly expect to be on camera and sharing data with colleagues. They will, however, still quickly lose confidence after a bad experience.

AV technology is now as sophisticated as anything else on the IT network, but some fundamental usage challenges remain. Cabling difficulties or getting a screen or video collaboration system working should no longer be an issue, but total system complexity might be. This raises the question: could intelligence embedded in the AV systems themselves make for simpler and more effective usage?

Intelligent audio visual

Keeping meeting room and display technology under control and working effectively is an increasingly complex IT task, with some asset management challenges thrown in. However, few organisations would look to deploy more people just to support or augment this, despite the potential user frustration caused by poorly integrated or unusable, expensive AV displays and equipment. Artificial intelligence and automation need to be applied.

Automation is already playing an increasingly important role in other areas of connectivity. Networks are becoming smarter, with software-defined approaches allowing control to be intelligently centralised while its effect is distributed to the edge. Sensors and the internet of things (IoT) are gathering masses of data that can be used for machine learning and automation.

The combination of smart edge devices and smart networks means that once-manual processes can now be intelligently automated. For AV, this can be applied to lift the already important user experience to new levels.

Joined up meetings

Since a large element of AV is about supporting people conveying information to other people, an obvious area to apply AV intelligence is meetings. Many organisations will find that much of the information shared and discussed during meetings is lost or forgotten. Collaboration tools and repositories help, but only if everyone is sufficiently aware and disciplined to remember to use them.

This challenge can be addressed with a bit of joined-up thinking. For example, Ricoh and IBM have together created the Intelligent Workplace Solution. Rather than just providing an interactive whiteboard, it includes IBM’s Watson as an active participant capturing meeting notes and action items, including side conversations that might otherwise be missed. It logs attendance using smart badges, allowing participants to keep track of the meeting content, and offers voice control so that any participant, whether present in the room or joining remotely, can easily control what is on the screen with simple spoken commands.

The intelligence can also be used to augment other aspects of more complex meetings. For example, the system can translate speakers’ words into several other languages, which can then be displayed live on screen or in a transcript. Applying automation to integrate the visual, audio and network elements in this way improves the experience for participants and makes the overall meeting process much more efficient.
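To make that concrete, the pipeline behind such a feature can be sketched in a few lines. The sketch below is purely illustrative, and the speech-to-text, translation and display services it relies on are placeholders, not Ricoh or IBM APIs:

    # Illustrative pipeline only: transcribe, translate and caption live speech.
    # transcribe, translate, display and transcript are hypothetical injected services.
    TARGET_LANGUAGES = ["de", "fr", "ja"]   # assumed attendee language preferences

    def caption_meeting(audio_chunks, transcribe, translate, display, transcript):
        """audio_chunks: short audio buffers captured from the room microphones."""
        for chunk in audio_chunks:
            text = transcribe(chunk)        # speech-to-text on the live audio
            if not text:
                continue                    # skip silence or unrecognised audio
            transcript.append(text)         # keep the searchable meeting record
            for lang in TARGET_LANGUAGES:
                display(lang, translate(text, lang))   # live on-screen captions

The design point is that capture, translation and display are decoupled, so the same transcript can feed the screen, the stored meeting record and remote participants alike.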

Smarter command and control

In many mission-critical settings, AV is already widely used to view live content, which may come from numerous cameras, industrial sources, social media feeds and computer visualisations. Videowalls or large screens complemented by smaller displays are often used so that large numbers of information feeds can be monitored simultaneously. Increasingly, this centralised command and control approach is becoming a constraint rather than an ideal solution.

Firstly, a single location is a single point of failure. Getting all the right individuals to one place to see and absorb the information is also a challenge: expertise may be widely spread, and many employees will choose to work remotely or need to be mobile, so the control ‘centre’ needs to be distributed.

This more sophisticated model for command and control requires more intelligence in the AV network. Information must be shared with those who need to see it, but too much information could easily overload networks, especially now that video content quality has advanced through high definition (HD) and is increasingly 4K. Information needs to be shared intelligently.

Higher definition makes it possible to apply advanced recognition systems to the higher quality images, and intelligent analytics can take on more of the monitoring automatically. This means the total volume of data does not need to be shared and manually watched. Smart applications of this type of technology will make command and control systems more effective, allow more worker flexibility and ensure that individuals (and networks) are not overwhelmed by unnecessary data.
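As a simple illustration of why analytics at the edge reduces what has to be shared, the minimal sketch below (using the open source OpenCV library, with a hypothetical camera URL and change threshold) forwards only the frames in which a background-subtraction model detects meaningful change, rather than streaming everything to a central operator:

    # Minimal sketch: forward only frames where significant change is detected,
    # so the full video stream never has to cross the network for manual review.
    import cv2

    STREAM_URL = "rtsp://camera.example/feed"   # hypothetical camera source
    MIN_CHANGED_AREA = 5000                     # pixels of change worth reporting

    def monitor(forward_frame):
        capture = cv2.VideoCapture(STREAM_URL)
        subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            mask = subtractor.apply(frame)      # isolate regions that have changed
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            if any(cv2.contourArea(c) > MIN_CHANGED_AREA for c in contours):
                forward_frame(frame)            # share only the frames that matter
        capture.release()

Real recognition systems are far more sophisticated, but the principle is the same: decide locally, share selectively.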

Wearable and IoT data

Technology innovations such as sensors, IoT and wearable devices will increasingly impact the AV world, adding further intelligence and live data feeds to systems already dealing with masses of video and audio content. Coping with the vast array of sources, let alone the volumes of data, will be an increasing challenge.

As the data variety and volumes increase, intelligent systems are required to capture and analyse this ‘big data’ live, so that human operators can react quickly and appropriately. For example, combining conventional surveillance cameras or audio capture systems with machine learning or artificial intelligence will help to automatically detect anomalous data or abnormal situations.

Human operators, no matter how good or well trained, become tired or lose focus over time, especially if the task is monotonous. Augmenting their decision making with automated systems will improve their responses. This applies to issues such as security, but also to any application where change monitoring is required: changes in an industrial process indicating failure or a drop in quality, for example, or patient vital signs in healthcare. New data sources and AV need to be well integrated within the overall system to fully realise the benefits.
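The same principle applies to non-video sources. As a minimal, illustrative sketch (assuming a simple numeric sensor stream such as a process temperature or a vital sign, with made-up numbers), the code below flags readings that drift well outside the recent rolling average, so an operator is only alerted when something actually changes:

    # Illustrative change monitoring on a sensor stream: flag readings more than
    # `threshold` standard deviations away from the rolling window's mean.
    import random
    from collections import deque
    from statistics import mean, stdev

    def detect_changes(readings, window=60, threshold=3.0):
        history = deque(maxlen=window)
        for i, value in enumerate(readings):
            if len(history) == window:
                mu, sigma = mean(history), stdev(history)
                if sigma > 0 and abs(value - mu) > threshold * sigma:
                    yield i, value          # anomalous reading worth a human look
            history.append(value)

    # Example: a stable, slightly noisy signal with one sudden drop in quality.
    random.seed(0)
    signal = ([100 + random.gauss(0, 0.5) for _ in range(80)]
              + [60.0]
              + [100 + random.gauss(0, 0.5) for _ in range(20)])
    for i, v in detect_changes(signal):
        print(f"anomaly at reading {i}: {v:.1f}")

Real deployments would use far richer models, but the point stands: the automation watches everything, while humans only see the exceptions.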

Automatic for the people

In all situations, the ability to rapidly visualise and comprehend the implications of changes or trends across masses of data sources highlights how critical AV can be to the user experience, and why it needs to be part of a broader, integrated IT systems approach. Many organisations are already evaluating how AI and IoT can augment and automate some of their business processes, and many of those processes now rely heavily on AV infrastructure, especially video. It will become increasingly important to think about this holistically as an integrated AV/IT architecture, not one where the visual elements and displays are added as a potentially expensive afterthought. To get more insight into how intelligence is being applied in many different ways to the AV sector, visit ISE2018 in Amsterdam in February.
