AI in UC: Where it shows potential and where it’s just for show

This is a guest post co-authored by Zach Katsof, director of intelligent communications at Arkadin; Holger Reisinger, SVP of large enterprise solutions at Jabra; and Alan Shen, VP of consulting services at Unify Square.

Artificial intelligence (AI) and machine learning (ML) are evergreen buzzwords. Even within the unified communications ecosystem, AI and ML are popping up more and more frequently, from Cortana voice assistants in Teams and information overload reduction technology in Slack, to call quality troubleshooting algorithms in UC monitoring software.

When it comes to the unified communications and collaboration market, the potential for AI applications across enterprise messaging, presence technology, online meetings, team collaboration, smart headsets and room systems, telephony and video conferencing is endless. But this raises the question: within the UC ecosystem, should we think of AI as still very experimental, or as having already crossed the chasm? And, if the latter, which AI applications and solutions are over-hyped and which are the real deal?

AI takeover

The potential of AI in UC extends forward into the realm of the end user and backward into the domain of IT. For the end user, AI can automate a series of actions to improve human-to-human collaboration. AI can sort through data (emails, chats, speech recognition) and identify keywords and patterns, then provide feedback on the best way to communicate based on the audience and topic. The more indexable user data becomes, the greater the ability to compare it with keywords from chats and create automated responses to instant messages based on user communication patterns.
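
As a rough illustration of that last idea, the sketch below mines a user's own chat history for keyword overlaps and proposes a canned reply for a new incoming message. The data shapes, stopword list and sample messages are invented for illustration; a real UC platform would expose its own APIs for message history.

```python
# A minimal sketch (assumed data shapes, not a real UC API): suggest a reply
# for a new message by finding the past message with the most keyword overlap.
import re

STOPWORDS = {"the", "a", "an", "to", "for", "is", "are", "on", "of", "and"}

def keywords(text):
    """Very simple keyword extraction: lowercase word tokens minus stopwords."""
    return {t for t in re.findall(r"[a-z']+", text.lower()) if t not in STOPWORDS}

def suggest_reply(new_message, history):
    """history is a list of (incoming_message, reply_sent) pairs from the user's chat log."""
    new_kw = keywords(new_message)
    best_reply, best_overlap = None, 0
    for past_msg, past_reply in history:
        overlap = len(new_kw & keywords(past_msg))
        if overlap > best_overlap:
            best_reply, best_overlap = past_reply, overlap
    return best_reply

history = [
    ("Can you join the budget review at 3pm?", "Yes, send me the invite."),
    ("Is the server upgrade still on for tonight?", "Yes, the maintenance window starts at 10pm."),
]
print(suggest_reply("Are you free for the budget call today?", history))
# -> "Yes, send me the invite."
```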

AI can also sift through data and assess whether people are using their time efficiently and productively. For example, by logging the meetings that take place in a company, AI can determine how many of those meetings had agendas, who the participants were, what the minutes included and how much time was spent on each topic. In a similar way, setting up a meeting using AI allows for better resource management. It can evaluate who is attending and recommend the best possible meeting space based on the number of people, the name or topic of the meeting and the tools that might be needed during the meeting. Additionally, it can determine whether the participants are in the same office or require a Skype for Business dial-in, and what hardware is needed, such as a speakerphone or whiteboard, based on whether it's a brainstorming session or a catch-up meeting.
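
A simple rule-based version of that room-selection logic might look like the sketch below. The rooms, equipment and keyword-to-tool mapping are made up for illustration; in practice this data would come from the calendar and room-booking system.

```python
# A rule-based sketch of room selection (hypothetical rooms and rules).
from dataclasses import dataclass

@dataclass
class Room:
    name: str
    capacity: int
    equipment: set

ROOMS = [
    Room("Huddle A", 4, {"speakerphone"}),
    Room("Workshop", 8, {"whiteboard", "video"}),
    Room("Boardroom", 12, {"speakerphone", "whiteboard", "video"}),
]

def required_tools(title, has_remote_attendees):
    """Infer needed equipment from the meeting title and attendee locations."""
    tools = set()
    if "brainstorm" in title.lower():
        tools.add("whiteboard")
    if has_remote_attendees:  # e.g. someone needs a Skype for Business dial-in
        tools.update({"speakerphone", "video"})
    return tools

def recommend_room(title, attendees, has_remote_attendees):
    tools = required_tools(title, has_remote_attendees)
    candidates = [r for r in ROOMS
                  if r.capacity >= attendees and tools <= r.equipment]
    # Prefer the smallest room that satisfies the requirements.
    return min(candidates, key=lambda r: r.capacity, default=None)

print(recommend_room("Q3 brainstorm session", 6, has_remote_attendees=True))
# -> Room(name='Boardroom', ...)
```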

On the IT side, AI can analyse the vast amounts of data and UC logs available for troubleshooting and specific problem solving. Instead of IT having to react to individual user or systemic UC issues, the presence of AI allows for extrapolated insights regarding how the individual, team or company is performing. Using this learning, AI can then issue proactive guidance to IT on everything from changes to server configurations to recommendations for a new or different UC headset for a specific end user.
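
A very small sketch of that proactive pattern: scan per-user call-quality records and flag anyone whose recent audio quality has degraded enough to warrant action, such as a headset swap or a network check. The metric (MOS), threshold and sample data below are assumptions made for illustration.

```python
# Flag users whose recent mean MOS (mean opinion score) drops below a threshold.
# Metric names, thresholds and sample values are invented for illustration.
from statistics import mean

def flag_degraded_users(call_logs, recent_n=5, mos_threshold=3.5):
    """call_logs maps a user id to a chronological list of per-call MOS values."""
    guidance = {}
    for user, scores in call_logs.items():
        recent = scores[-recent_n:]
        if len(recent) >= recent_n and mean(recent) < mos_threshold:
            guidance[user] = (
                f"Mean MOS over last {recent_n} calls is {mean(recent):.2f}; "
                "check the network path and consider a different headset."
            )
    return guidance

logs = {
    "alice": [4.2, 4.1, 4.3, 4.0, 4.2, 4.1],
    "bob":   [4.0, 3.9, 3.4, 3.2, 3.1, 3.0],
}
print(flag_degraded_users(logs))
# -> {'bob': 'Mean MOS over last 5 calls is 3.32; ...'}
```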

AI in action

AI is regularly applied to enterprise communications to increase efficiency and reduce the time humans spend on tedious work that a machine could take care of instead. The present and future of AI in UC, along with a rating of hype versus reality, can be seen in the following areas:

  • AI-based gesture recognition on devices: On a video conference, gestures can improve the experience: cameras can read participants' expressions and body language and provide real-time feedback to improve presentation skills and/or responses. Rating: Early stage.
  • Completely automated conference calls: There is ML in transcription, and it has come a long way in the past 10 years; Amazon's Alexa can now understand speech much like a human. But voice transcription has not yet completely eclipsed human comprehension. Rating: Early stage.
  • Meeting Management and Follow-Up: AI-enabled devices learn who is speaking, identify key points and then automatically assist people with tasks and send notifications or meeting summaries to all attendees. Rating: Early stage.
  • Conference rooms and room systems management: AI systems drive the entire process of scheduling and setting up meetings. Rating: Nascent stage.
  • Smart devices: Meetings are made more efficient and productive by augmenting the conversation with information/insights that currently take hours or days of additional work post-meeting to realize. Rating: Early stage.
  • UC systems/platforms (e.g. Cisco, Skype for Business, etc.): IT departments can monitor entire UC systems as well as room systems and identify, for example, when a specific audio/video system in a specific conference room may require a maintenance check by IT to reduce the possibility of downtime before users are impacted. Rating: Early stage (mature via third-party apps).
  • Web-chat systems/platforms (e.g. Slack, Teams, etc.): Based on ML from all conversations on the platform, web-chat systems can think in real-time and adjust the questions/suggestions to cater to a specific situation based on prior history/database of similar discussions. Rating: Nascent stage.
  • End-user productivity enhancing bots: Personal assistants built into UC apps can simplify actions (e.g. search for information in real-time), and interactive bots can improve customer service interactions (e.g. IVRs driven by bots); a simple sketch of such a bot follows this list. Rating: Early stage.
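
To make that last bullet concrete, here is a toy command dispatcher of the kind a UC chat platform could invoke for each incoming message. The commands and canned responses are hypothetical; real platforms such as Teams and Slack each have their own bot frameworks and APIs.

```python
# A toy chat-bot dispatcher (hypothetical commands and canned responses).
def handle_message(text):
    """Route an incoming chat message to a simple bot action and return the reply."""
    command = text.strip()
    lowered = command.lower()
    if lowered.startswith("/status"):
        # A real bot would query the presence or monitoring service here.
        return "All UC services operational."
    if lowered.startswith("/find"):
        query = command[len("/find"):].strip()
        # A real bot would search the document or ticket system here.
        return f"Top results for '{query}': (search integration goes here)"
    if lowered.startswith("/schedule"):
        # A real bot would call the calendar/room-booking system here.
        return "Proposed slot: tomorrow 10:00-10:30, Boardroom."
    return "Unknown command. Try /status, /find <topic> or /schedule."

print(handle_message("/find Q3 budget deck"))
# -> "Top results for 'Q3 budget deck': (search integration goes here)"
```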

AI risks and considerations

When analysing data input and output, there are risks to consider with AI. If we reach a level where software is actually taking action, either self-healing the UC systems or self-scheduling new meetings, we open the door to the software potentially taking the wrong action. Once the algorithm can come to a better conclusion than a human could, it issues a recommendation. The model behind an ML algorithm can be so complex that, if the user or IT wants to ask “why,” there may not always be a why. There are third-, fourth- and fifth-level elements in this massive, complex algorithm with a huge data model and structure. The humans involved must decide whether to simply trust the output (because they believe the outcome will be better) or to remain in constant oversight mode.

This is perhaps the ongoing AI dilemma: can an algorithm spit out a decision that would force IT or the end user to think the machine is doing a better job of managing UC than the human? AI won't replace our need to think and react. No matter how good an ML platform or AI solution is, people will always need to exercise judgment and validate actions before proceeding. The more AI is integrated into UC, the more dependent users will become on it. This could lead to an increased expectation of “perfect” meetings, chats and calls before, during and after the event. If AI systems are not able to keep up or deliver as expected, there will be limited patience and tolerance for poor performance, and users will likely stop using them.

As for hype versus reality, one thing is clear: we have not hit the peak of AI; we've only begun to scratch the surface. At the current stage, we can still make AI work for us by improving efficiencies and, as it becomes more complex and developed, hope for an automated state of perfection. In short, AI is the real deal, but there are still several miles to cover before we cross the chasm to peak performance.
