Developers at the mobile edge

We look at the challenges and opportunities to be found in creating mobile apps for an expanding array of devices

While most technology advancement has been evolutionary, a lot of recent innovation has been more disruptive. Despite – or perhaps because of – a trend to openness and common standards, the technologies and devices at the edge of the network have diverged, both in terms of their capabilities and how they interact with people and the physical world.

Mobile devices might be tiny and worn on the wrist, or could be large high-definition displays sat on a lap. They may be stacked with sensors and audio input, and offer displays mixing real and virtual imagery.

This changes the focus of attention for developers. IT is no longer separated from users by an upright television screen and typewriter keyboard.

The relationship is both closer and more complex, with more attention on the whole user experience, not just the interface, and on a more varied set of devices. This raises a number of challenges and opportunities for developers.

Size matters

One obvious issue is the range of capabilities and screen sizes. The challenges of mobile device diversity have been well understood for some time.

Responses have included profiles and specifications, such as the User Agent Profile, as well as advances in HTML that endeavour to unify and build on earlier cross-platform attempts such as Java and Flash. Despite its age, for many, Java is still proving a vital element of the cross-platform toolset.

The challenge has not diminished. As well as variability in the size and aspect ratio of mobile device screens, resolutions have grown to support high-definition and 4K content. There have been some attempts to create abstracted models, which developers may wish to consider. Android developers need to become familiar with density-independent pixels (DPs) – one DP corresponds to one physical pixel on a 160 dots-per-inch screen, so roughly 160 DPs fit into 1in of screen space – to help them plan layouts on phones and tablets. For those targeting Apple devices, there is the roughly (but not exactly) similar abstract model of points.
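
To make the scaling arithmetic concrete, here is a minimal sketch (in TypeScript, purely for illustration – Android exposes the same calculation through its own platform APIs) of how a dimension specified in DPs maps to physical pixels at different screen densities:

```typescript
// Illustrative sketch of density-independent pixel (dp) arithmetic.
// One dp equals one physical pixel on a 160dpi screen, so a layout
// specified in dp keeps the same physical size as density changes.
const BASELINE_DPI = 160;

// Convert a dp value to physical pixels for a screen of the given density.
function dpToPx(dp: number, screenDpi: number): number {
  return dp * (screenDpi / BASELINE_DPI);
}

// Convert physical pixels back to dp.
function pxToDp(px: number, screenDpi: number): number {
  return px * (BASELINE_DPI / screenDpi);
}

// A 48dp touch target is 48px on a 160dpi screen, but 144px at 480dpi.
console.log(dpToPx(48, 160)); // 48
console.log(dpToPx(48, 480)); // 144
```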

Managing the consistency of development code bases can still be a challenge, but there are tools from major players and new entrants alike. Some platforms build on the model originally popularised by PhoneGap, where a lightweight native app launches an embedded browser. Open source Apache Cordova has a diverse ecosystem of PhoneGap-style tools, such as those from Adobe, Ionic, Telerik, Framework7 and Evothings.
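
The hybrid model is simple to picture in code. The sketch below assumes a standard Cordova project with the cordova-plugin-device plugin installed: the native shell hosts a web view, injects cordova.js, and fires the ‘deviceready’ event once the bridge to native APIs is available. The TypeScript declaration for the device global is hand-written here for illustration.

```typescript
// In a hybrid (PhoneGap/Cordova-style) app, web code must wait for the
// native container to signal readiness before calling any plugin APIs.
// Assumes cordova.js is loaded and cordova-plugin-device is installed.
declare const device: { platform: string; model: string };

document.addEventListener('deviceready', () => {
  // Only after this event can web code safely reach native functionality.
  console.log(`Running on ${device.platform} (${device.model})`);
}, false);
```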

Cross-compiling offers another route: solutions such as Microsoft Xamarin, RubyMotion and Appcelerator convert code to native form for the target mobile devices. Many enterprises will still require applications to function well across desktop as well as mobile device platforms. This may mean looking more closely at fully functional multi-device enterprise development platforms such as Kony’s AppPlatform, the Pega Application Mobility Platform (formerly Antenna) and the SAP Mobile Platform.

Wearing well

Application development gets even more challenging once wearable mobile devices are contemplated. The screens will be tiny and there will be a range of sensors and data feeds to accommodate. Wearable platforms are undergoing rapid innovation, and in some cases disappearing suddenly. For developers, this uncertainty makes it difficult to invest in more significant applications, yet a flourishing and growing ecosystem is exactly what the sector needs. Although sophisticated independent devices are appearing – the Samsung Gear S and the latest Apple Watch, for example – many wearable devices operate as companions, both at an application level and in piggybacking off the cellular connection of a smartphone.

This means development requires specialised skills and a distinct approach. Consequently, wearable development platforms have yet to become broad or generic. For those targeting Samsung devices, the Tizen Studio offers a useful collection of functionality and there are tools for .Net developers to hook into and extend Visual Studio.

Developers using Android Studio can build for all Android devices, including the latest Wear 2.0 for wearables, and test them without physical devices by using an emulator. Wear 2.0 broadened the appeal by allowing apps to be downloaded directly via an on-watch Google Play Store, without relying on an Android phone. This means apps can run on Android Wear devices paired to iOS devices (iPhones).

Apple’s watchOS 4 Software Development Kit (SDK), plus its WatchKit extension and its Xcode integrated development environment, provide support for smartwatch app developers. For enterprise app developers, Apple also has an enterprise development support programme, as well as useful overview guides such as the In-House App Accelerator Guide.

Amazon, too, is bringing its voice to the wearables sector on several fronts, with the Alexa Mobile Accessory Kit, a lightweight alternative to the Device Software Development Kit introduced last year. For more sophisticated devices there is a reference solution, the Alexa Premium Far-Field Voice Development Kit. With these approaches, Amazon is trying to reduce the cost and effort required to add its voice assistant to wearable and “ornamental” devices.

Speak to me

Voice recognition has long been popular in sci-fi – and has been around in one form or another since the early 1980s – but its popularity has recently surged with internet-connected smart speakers. Not only is it important to Amazon with Alexa, but all the major technology companies are gearing up for this evolving vocal user interface opportunity. There is a range of personal assistants, such as Apple’s Siri, Google Assistant (a successor to Google Now with more artificial intelligence built in), Microsoft’s Cortana and Samsung’s Bixby. There is also Watson, IBM’s natural language artificial intelligence (AI) platform.

The model for all of them is based on linking voice commands to the actions, or “skills”, that will be performed as a result, typically by an AI-enabled back end. This means building an ecosystem of skills, as well as the voice recognition technology and AI platform. Siri was one of the first assistants, but Apple has been slow to extend it to other devices and it is still seen as a closed system. Microsoft has been pushing hard, drawing on its developer heritage and a broad set of development offerings, such as Cortana Intelligence Suite support for cloud-native applications on Azure. Google, despite multiple approaches, is still lagging in terms of volume of skills, while Samsung’s Bixby is limited to a subset of its own devices.

Amazon has progressed furthest, not only by catching a marketing wave with Alexa, but also generating a strong skills ecosystem. It has supported this with its Alexa Skills Kit, which is a collection of self-service application programming interfaces (APIs), tools, documentation and code samples.
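
To give a flavour of the model, the sketch below uses the Alexa Skills Kit SDK for Node.js (ask-sdk-core) to map a spoken intent to a back-end action and a spoken reply. The intent name GetStockLevelIntent and the stock figure are invented for illustration; the SDK calls shown are the standard ones.

```typescript
import * as Alexa from 'ask-sdk-core';

// A request handler links one voice intent to the action it triggers.
const GetStockLevelHandler: Alexa.RequestHandler = {
  // Declare which incoming requests this handler is responsible for.
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'GetStockLevelIntent';
  },
  // Perform the back-end action and speak the result.
  handle(handlerInput) {
    const stockLevel = 42; // In practice, fetched from an enterprise system.
    return handlerInput.responseBuilder
      .speak(`There are ${stockLevel} units in stock.`)
      .getResponse();
  },
};

// Wire the handler into the skill's AWS Lambda entry point.
export const handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(GetStockLevelHandler)
  .lambda();
```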

This is important since, as in other platform battles, strong ecosystems and support will be critical. Developers will be making bets on which platforms will succeed, so they will not only require tools, templates and stable APIs, but also the right level of business support to build marketable solutions against credible use cases.

The large suppliers are not alone in addressing voice-enabled command and control. Irish startup Voysis is developing an independent approach that enterprises can tailor to their own products, data and branding. There are others too, such as Convessa, Smartly and Snips, but the eventual outcome for MindMeld – acquisition by Cisco – shows that this sector is maturing and consolidating rapidly.

As devices become wearable or fade into the background on shelves at home or in the workplace, voice commands will grow in relevance. We may never fully reach the levels envisaged in 1970s sci-fi, but it would be very wise for product and application developers to understand how best to incorporate voice-activated skills into their designs.

Augmented reality

Alongside vocal augmentation, there is visual, especially for smartphones, goggles and glasses. Recent advances in immersive screen technology offer a different way of thinking about what to do with digital information – after all, why must data always be presented in a single rectangular image? Google’s return to its Glass technology, along with Microsoft’s HoloLens and Epson’s Moverio, indicates that smart glasses are moving into a new, interesting phase.

There is a large and growing number of augmented reality (AR) tools available. As mobile device capabilities, cameras and sensors have grown in sophistication, the ways to connect and overlay the real and virtual worlds have evolved.

The simpler approach, which has the least computational impact on the end device, is to apply markers to the real world. These might be known, recognisable objects, or markers added to the environment so that a device’s camera can identify them more easily, such as the set of unique circular card tokens used in tools such as Zappar.

As technology such as simultaneous localisation and mapping (SLAM) evolves – where the device builds its own model or map of reality onto which the virtual elements are applied – markers become less necessary. Sensors, GPS, compasses and accelerometers are increasingly built into mobile devices and drones, as are technologies entering autonomous vehicles, such as light detection and ranging (Lidar). This allows a markerless approach, but consumes a greater amount of compute power.
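
The architectural difference between the two approaches can be sketched in outline. The TypeScript below is hypothetical pseudo-structure, not any particular AR SDK: a marker tracker derives the camera pose from a known token in view, while a SLAM tracker incrementally maps the scene and localises against it.

```typescript
// Hypothetical sketch only – the types and stubs below are invented to
// contrast the two tracking approaches, not taken from a real AR SDK.
type Pose = { position: [number, number, number]; heading: number };
type Frame = Uint8Array; // stand-in for a camera frame

interface Tracker {
  // Return the camera pose for this frame, or null if tracking is lost.
  update(frame: Frame): Pose | null;
}

// Marker-based: computationally cheap, but a known token must be in view.
class MarkerTracker implements Tracker {
  update(frame: Frame): Pose | null {
    return this.detectToken(frame); // solve pose directly from the token
  }
  private detectToken(frame: Frame): Pose | null {
    return null; // detection stub – a real SDK does the image processing
  }
}

// Markerless (SLAM): no tokens needed, but the device must map the scene
// itself, which consumes more compute power.
class SlamTracker implements Tracker {
  private mapFeatures: Frame[] = []; // crude stand-in for an incremental map
  update(frame: Frame): Pose | null {
    this.mapFeatures.push(frame); // extend the map with new observations
    return this.localise(frame);  // estimate pose against the map so far
  }
  private localise(frame: Frame): Pose | null {
    return null; // matching stub – a real SDK matches features to the map
  }
}
```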

Linking this to cloud-based resources becomes invaluable; tools such as Wikitude’s all-in-one augmented reality SDK – which intelligently combines object and image recognition with instant tracking and cloud services – have become popular. There are plenty of options available for developers wanting to experiment with or deliver credible enterprise and consumer AR applications.

Time to experiment

The key for all developers – and those who control their purse strings – is identifying which innovations will make a difference.

Many of these technologies are still at the early stage of finding how best to fit into use cases, which means a fair degree of experimentation will be necessary. Whether or not the organisation has adopted an agile or DevOps approach, an experimental, iterative mindset has a great deal of merit when it comes to new types of mobile and edge devices.

Start small, but with a specific purpose in mind, experiment, deploy quickly to users and, most importantly, gather, incorporate and rebuild based on their feedback. With lots of choices out there, it is hard to settle on the right ones in advance, but getting closer to understanding the user or customer will always help.

Rob Bamforth is an Independent Industry Analyst, known for his work with Quocirca.
