NASA developer challenge protects Earth from attack by deep space asteroids


Bah humbug, not another developer competition surely?

Well yes, but if we said NASA and Asteroids would you read on?


NASA has been working with Appirio's [topcoder] community of 630,000 data scientists, developers and designers to kick off the "Asteroid Tracker Challenge".

The challenge begins on July 25, 2014 and competitors are tasked with optimising the use of an array of radar dishes when tracking Near Earth Objects (NEOs) from the time they become visible over the horizon to the point at which they cease to be visible.

NOTE: NASA defines Near Earth Objects as comets and asteroids that have been nudged by the gravitational attraction of nearby planets into orbits that allow them to enter the Earth's neighbourhood.

"Composed mostly of water ice with embedded dust particles, comets originally formed in the cold outer planetary system while most of the rocky asteroids formed in the warmer inner solar system between the orbits of Mars and Jupiter. The scientific interest in comets and asteroids is due largely to their status as the relatively unchanged remnant debris from the solar system formation process some 4.6 billion years ago."
... anyway, back to the challenge


This tracking (the kind developers have to do for the challenge) is meant to allow scientists to gather information about each object, such as its composition and spin rate, among other properties.
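
To make the task concrete, here is a toy sketch (in Python) of the flavour of optimisation involved: given some dishes and the visibility windows of several objects, point each free dish at the object that will drop below the horizon soonest. The greedy rule, the names and the numbers are ours for illustration; the real scoring and constraints live in the topcoder brief.

    # Toy sketch of the scheduling flavour: greedily point each free dish
    # at the visible object whose window closes soonest. Illustrative only.
    def schedule(dishes, asteroids, now):
        # asteroids maps an object id to its (rise, set) visibility window
        visible = {a: w for a, w in asteroids.items() if w[0] <= now < w[1]}
        by_urgency = sorted(visible, key=lambda a: visible[a][1])
        return dict(zip(dishes, by_urgency))

    dishes = ["dish-1", "dish-2", "dish-3"]
    windows = {"2014-AB": (0, 40), "2014-CD": (10, 25), "2014-EF": (5, 90)}
    print(schedule(dishes, windows, now=15))
    # {'dish-1': '2014-CD', 'dish-2': '2014-AB', 'dish-3': '2014-EF'}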

NEO detection and characterisation is a critical need for NASA, it says.

Want to know why?

NASA has been directed to develop capabilities to observe, track and characterise NEOs and other deep space objects that could pose a threat to the Earth.

Developers, it is your duty, please go forth.

Does guaranteed datacentre PUE engender better applications?


While software developers focused on the more cerebral, design-centric and user interface level of the application structure might not spend too much time thinking about the architectural back end, there could be a rationale for a little more introspection.

1. mobile relies on back end power to feed the device efficiently, so the theory goes that a super-efficient datacentre will serve apps better than a sloppy one
2. cloud applications mirror mobile in the same sense as point #1
3. our attention and interest towards the datacentre (as the "network is the computer" after all) today is higher overall, primarily perhaps because of point #1 and point #2

So then, shouldn't programmers care more directly about the Power Usage Effectiveness of the datacentre that underpins their applications?


NOTE: PUE is a metric used to determine the energy efficiency of a datacentre -- PUE is determined by dividing the total amount of power entering a datacentre by the power used to run the computing infrastructure within it.

Ark says it is "pioneering change" (yes, they all say that, but bear with us) in the datacentre industry by GUARANTEEING PUE rather than using it as some business barometer that is endlessly negotiated over as part of some flaky Service Level Agreement.

Ark CEO Huw Owen says that, "One of our biggest challenges in the datacentre industry is educating businesses and governments about our role in underpinning modern technology and our ability to do so in an efficient and socially responsible manner. That requires ownership and intelligent advocacy by all of us. In this regard, we are not yet where we need to be."

To give some perspective and real numbers, Ark finds that most companies tend to run at a building PUE of 2.5 or higher.

If you could attain a building PUE of 1.25 or less, there is potential to achieve savings of around £1.1 million, per megawatt, per year. From an environmental perspective, that's 6,000 tonnes of carbon that you could potentially be taxed on.
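
That arithmetic is easy enough to sanity-check. A minimal sketch, assuming a 1MW IT load and an electricity price of roughly 10p per kWh (the price is our assumption; the 2.5 and 1.25 figures are Ark's):

    # PUE = power entering the building / power running the IT kit
    def pue(total_facility_kw, it_load_kw):
        return total_facility_kw / it_load_kw

    it_load_kw = 1000.0                        # a 1 megawatt IT load
    overhead_kw = it_load_kw * (2.5 - 1.25)    # extra draw at PUE 2.5 vs 1.25
    kwh_per_year = overhead_kw * 8760          # hours in a year
    print(pue(2500, 1000))                     # 2.5
    print(kwh_per_year * 0.10)                 # ~£1.1m per megawatt, per year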

"Data storage is seen as power hungry which makes our industry a potential political football if we don't share the facts available. The harsh reality is that with the majority of UK data stored in warehouses and shoe-horned into broom cupboards, our critics on one level are right. The good news is that modern, sophisticated, highly efficient, purpose built datacentres offer the solution. That does however have to be broadcast both loudly and effectively, accepted and then acted upon," added Owen.

Bridging the Java to .NET interoperability divide


The following piece is a guest post for the Computer Weekly Developer Network by Wayne Citrin, an interoperability specialist at JNBridge.

Citrin is also CTO and the architect of JNBridgePro -- the company is a provider of interoperability tools to connect Java and .NET frameworks.



With an increasing number of today's enterprises using a mixture of both Java and .NET technologies, interoperability between the two platforms has become an imperative.

The various business reasons behind the need for interoperability include:

  • the reuse of existing skills, technologies and systems,
  • the need to reduce project costs,
  • and the requirement for faster time to market.

While the reasons for interoperability have remained the same, the landscape of approaches has endured some change. Indeed, of the class-level interoperability approaches, only one -- bridging -- has truly prevailed.

Approaches to Java-.NET class-level interoperability

Depending on the business reasons for interoperability and the requirements of the application, a company might choose either a service-oriented architecture (SOA) or class-level integration.

For the purposes of this article, we will focus on class-level integration.

Three basic approaches

Historically, there have been three basic approaches to Java-.NET class-level interoperability:

Porting the platform: Port the entire .NET platform to Java or vice versa. In addition, compile the developed code to the alternate platform.

Cross-compilation: Convert Java or .NET source or binaries to .NET or Java source or binaries.

Bridging: Run the .NET code on a .NET Common Language Runtime (CLR), and the Java code on a Java Virtual Machine (JVM) or a Java EE application server. Add a component to manage the communications between them.

Platform porting and cross-compilation certainly have some overlap. But cross-compilation involves only a subset of the code and will usually try to map the API calls of one platform to the equivalents on the other. Platform porting implies porting all APIs of one platform to the other. Additionally, cross-compilation normally happens once, with the result that the Java code is converted to a .NET language (or vice versa). After cross-compilation, the initial code base is no longer used.

When evaluating each approach, the following criteria are commonly used:

Performance: How much overhead is involved in inter-platform communication?

Direction of interoperability: Does the approach support Java calling .NET, .NET calling Java, or both? Are callbacks supported?

Binary compatibility: Can we use the approach to access Java binaries from .NET, or do we need source code?

Type compatibility: Does the approach offer full implementation inheritance? Using the approach, are values converted to native data types on the respective platforms, where possible?

Portability: Does the approach work only on Windows, or is it cross-platform?

Conformance to standards: Using the approach, is the behavior of the Java code guaranteed to conform to Java standards?

Ability to evolve: Will the approach break as either the .NET or Java platform evolves?

Both platform porting and cross-compiling -- while still in existence -- have fallen off the interoperability radar to a significant degree, mostly because they've failed to meet all or some of these evaluation requirements. Here, we take a deeper dive into each.

Porting the platform

One software vendor attempted at one time to reimplement the entire .NET platform in Java as a set of Java packages. This meant that the framework became available to be called by any Java code that imported the relevant package. While porting .NET to Java allowed Java to call the .NET APIs, in itself it didn't allow Java classes and .NET classes to call each other. To allow Java to call .NET, cross-compilation must also be used.


Platform porting offers a number of benefits, including low inter-platform overhead. But its pitfalls far outweigh its benefits, contributing to its near-demise. The .NET framework is quite large, and porting the entire framework is a daunting task. There are a number of namespaces in the framework that are tightly tied to the underlying .NET runtime, and it is not clear that it would be possible to fully implement them in Java. Even if these types of namespaces were available, the Java code loses its portability because it depends on the native Windows code. As .NET is a very large platform with many thousands of classes to port, this opens a Pandora's box of complexity. Platform porting quickly became a very unlikely method to succeed in achieving interoperability because of the sheer size and complexity of the task.

Cross-compilation

Cross-compilation can either compile Java source to MSIL (the Microsoft Intermediate Language that runs on the .NET CLR), thereby truly making Java a .NET language, or compile Java source to a .NET language such as C# or VB.NET.

If compiling Java source to MSIL, full inheritance between Java and other .NET languages is possible, and if done properly, the Java compiler can be fully integrated with other .NET development tools. Full inheritance is supported between Java and other .NET languages, and there is full interoperability in both directions, so that Java methods can call methods written in other .NET languages, and vice versa.

But with cross-compilation, there exist a number of shortcomings -- not least of which is that the resulting code will only run on Windows, unlike Java source code compiled into Java byte codes, which is cross-platform. Also, any Java code that calls .NET framework APIs is no longer portable, since it relies on calls to methods other than Java methods or Java APIs. Additionally, in order to use a Java-to-MSIL compiler, the Java source code is needed, which means that the option is not available if the user only has Java binaries (for example, a compiled Java library that the user purchased from a third party).

The differences between the Java and C# object models lead to some problems when integrating Java and C# classes. For example, a Java interface with constant fields can be legally compiled into MSIL and used by other MSIL-compiled Java classes, but any C# classes attempting to implement the interface would not see the constants.

When compiling Java source to a .NET language, there is no overhead for inter-platform communication. Full inheritance is possible. In the case of binary cross-compilation, the approach works only when Java binaries are available. When the MSIL is cross-compiled to Java byte codes, the result is cross-platform.

There are some disadvantages to this cross-compilation approach as well. It is necessary either to re-implement the APIs for one platform in the other (that is, re-implement the Java API in .NET or the .NET framework in Java), or to translate one platform's API calls to the equivalent on the other platform. Java byte codes translated to MSIL would only run on Windows. Finally, there is no guarantee that the behavior of Java code translated to MSIL will conform to Java standards.

Bridging


Bridging solutions address the conversion issue by avoiding it. .NET classes run on a CLR, Java classes run on a JVM, and the bridging solution transparently manages the communications between them. To expose classes from one platform to classes on the other, proxy classes are automatically created that offer access to the underlying real class. Thus, to allow calls from .NET methods to Java methods, proxies are created on the .NET platform that mimic the interfaces of the corresponding Java classes. A .NET class can inherit from a Java class by inheriting from the Java class's proxy, and vice versa.
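
The proxy mechanics are easy to picture. Below is a minimal sketch of the idea in Python form; the channel, the wire format and every name in it are invented for illustration, and real bridging tools generate strongly typed proxies in Java or C# rather than anything like this.

    # Sketch of the proxy pattern behind bridging: the proxy mimics the
    # remote class and forwards every call over a channel to the runtime
    # that actually hosts the object.
    class FakeChannel:
        """Stands in for the real inter-runtime transport."""
        def call(self, op, *args):
            print(op, args)
            return "remote-ref"

    class JavaProxy:
        def __init__(self, channel, class_name):
            self._channel = channel
            self._ref = channel.call("new", class_name)   # construct remotely

        def __getattr__(self, method):
            def forward(*args):
                # every method call becomes a remote invocation on the JVM side
                return self._channel.call("invoke", self._ref, method, args)
            return forward

    doc = JavaProxy(FakeChannel(), "com.example.Document")
    doc.save("report.pdf")   # forwarded as an 'invoke' message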

Bridging has a number of advantages over other interoperability approaches. For example, bridging can evolve as the platforms evolve. Future versions of Java and .NET will work with a bridging solution as long as they remain backward-compatible. As new versions of Java and .NET are introduced, they can be incorporated without having to update the bridging solution.

Bridging has the additional advantage that, since the Java runs on a JVM or a Java EE application server, it is not necessary to have source code; the solution will work when only Java binary is available. Finally, since the Java classes are still compiled to Java byte codes, they remain cross-platform.

Bridging solutions also often conform to standards. Since the actual runtime environment is a CLR or a JVM, and as long as the runtime environments and compilers conform to standards, the resulting code will exhibit conformant behavior.

In addition to these general advantages, bridging solutions support callbacks (allowing Java code to implicitly call .NET code without having to alter the Java code), both pass-by-reference and pass-by-value semantics, automatic mapping of collection objects between native Java and native .NET, and on-the-fly generation of proxies for dynamically generated Java classes.

Rationale and reasoning

There are a number of reasons why one would wish to interoperate Java and .NET code, most of which centre around preserving an investment in Java code or Java developers, and using existing Java code in a new .NET setting.

Each of the various approaches to interoperability -- platform porting, Java compilation to MSIL, cross-compilation and bridging -- offers advantages and is appropriate in different situations.

However, as the previous discussion shows, bridging solutions provide the best combination of portability, ability to evolve, conformance to standards and smooth interoperability. These advantages have ensured bridging's survival as the interoperability solution of choice now and into the future.

Institutionalised omnichannel commerce analytics


The arrival of that job title we now call CAO (chief analytics officer) comes with a few other new realities for the 'next-generation' IT shop.

This next-gen IT Nirvana sees analytics now driven from a top-down perspective (i.e. the boardroom and the central IT function) and, also, successfully disseminated throughout every lower echelon and tier of the company (i.e. every employee is armed with an analytics-aware device) so that every worker's data streams are captured for the wider data analytics pool and not left redundant in a silo.

This, in real terms, is institutionalised analytics -- in a good way.


Firms will now combine institutionalised analytics with their ecommerce channels to complete the picture (for now) as they bring cloud-based financials and ERP in to direct the normal throughput of corporate information around their business.

NetSuite is aiming to form a logically structured and even wider virtuous circle here and create what it calls omnichannel commerce as the new norm.

The company recently acquired London-based Venda, a leading provider of ecommerce solutions, to build upon its own NetSuite SuiteCommerce footprint.

"By combining Venda's customer insight and years of experience delivering a real-time, scalable commerce platform with NetSuite's cloud leadership, we can bring new capabilities to B2B and B2C companies of all sizes and transform how they run their businesses," said Zach Nelson, NetSuite CEO.

Venda's Convergent Commerce Platform is an ecommerce platform for retailers and brands to use online, mobile, social and in-store.

This is institutionalised omnichannel commerce analytics... a term we are perhaps not quite used to yet.

NetSuite says it is aiming to enable companies to "re-platform core operational business systems" in the cloud -- and work to support the transformation of those core operational business systems to help organisations transform B2B and B2C commerce to support an omnichannel world.

... and you thought institutionalised was a bad word?

Relax, that's only if you're watching the Shawshank Redemption.

Will Apple Swift fly higher than Google Go?


Swift is a popular term, name, noun and thing.

Quite apart from SWIFT as a type of Suzuki car, a bird, an alternative metal band from North Carolina and an Australian netball team -- swift crops up in technology circles several times.


OpenStack Swift (sometimes also known as OpenStack Object Storage) is an object storage system licensed under the Apache 2.0 open source license, designed to run on standard server hardware.

Apple's Swift

Swift is ALSO a new programming language from the Apple developer team.

Designed for Cocoa (the native API for the OS X operating system) and Cocoa Touch (the user interface framework for Apple's own iOS operating system), Swift enjoys syntax that is "concise yet expressive" (says Apple) and Swift code works side-by-side with Objective-C.

Swift has only been around since June, but it's already ranking well on the July Tiobe Index and the PyPL Popularity of Programming Language index.

While these indices are not always regarded as accurate tabulations of real programmer interest, the world developer community has shown particularly close interest in this Apple-originated product.

Tiobe managing director Paul Jansen has pointed out that Google's Go language also ranked highly when first released, but has dropped off considerably since launch.

Apple says that Swift was built to be fast using the high-performance LLVM compiler -- Swift code is transformed into optimised native code, tuned to get the most out of Mac, iPhone, and iPad hardware.

According to Apple, "The syntax and standard library have also been tuned to make the most obvious way to write your code also perform the best. Swift is a successor to the C and Objective-C languages. It includes low-level primitives such as types, flow control, and operators. It also provides object-oriented features such as classes, protocols, and generics, giving Cocoa and Cocoa Touch developers the performance and power they demand."

Your next generation of iPhone and iPad apps will all be written in Swift, eventually.


What is liquid computing?


The Computer Weekly Developer Network spots a new industry term in the process of crystallisation this week.

Liquid computing.

As is so often the way with this kind of terminology and nomenclature, this is a user-driven trend rather than a programmer one... or is it?

But...


If it sticks, it will very arguably have direct implications for the way application developers structure the next iteration and generation of the software they focus on.

Apple's Handoff

The term liquid computing appears to have been used to describe the process driven by Apple's Handoff feature, which will appear in iOS 8 and OS X Yosemite at the end of July 2014.

Apple tells us that now you can start writing an email on your iPhone and pick up where you left off when you sit down at your Mac -- or browse the web on your Mac and continue from the same link on your iPad.

Don't hold your breath

Not to be left out in the cold, Google and Microsoft are also said to be working on features that emulate this kind of functionality -- but we would advise you not to hold your breath waiting for technical details.

Apple explains that it all happens automatically when your devices are signed in to the same iCloud account.

"Use Handoff with favorite apps like Mail, Safari, Pages, Numbers, Keynote, Maps, Messages, Reminders, Calendar, and Contacts. And developers can build Handoff into their apps now, too," said Apple.

Apple showed this feature off at its recent WWDC conference's public keynote.

The term liquid computing was used originally by Galen Gruman and we like it.

DevOops! I did it again


It was just a minor mistyping, but when DevOops cropped up this week it was more than the Computer Weekly Developer Network could handle... it just had to be blogged.

If DevOps = Developer Operations then...

LOBDOPS = Line Of Business Developer Operations and so...

DevOops = That State Of Being When DevOps Is Discussed Too Much

That being said, recent blogs in this channel have covered DevOps perhaps more than ever before and now we're convinced that DevOps is a cultural practice and methodology to deliver products and services, not a thing.

Additional comment came in this week from TK Keanini (Ed - don't be shy, tell us your first name), who is CTO at Lancope.

Keanini (presumably TK to his friends) says that for those who live and breathe DevOps, it is a necessity driven by the needs of scale for Internet applications.

He writes as follows:

"DevOps is not for everyone but for those who require it, it is a necessary part of the business. To understand this better, just go to a DevOps organisation and ask them why they cannot operate in the historical service models. What you will see is that the speed at which the business functions is lighting fast when compared to the non-DevOps methods and this tempo is necessary, not optional. The people, processes, and technologies all need to work in concert for DevOps to really take foot but when it does, a structure much more tolerant to the scale and hostility of the Internet emerges. This is the value of DevOps."

Lancope, Inc. is a provider of network visibility and security intelligence -- by collecting and analysing NetFlow, IPFIX and other types of flow data, Lancope's StealthWatch System claims to be able to detect attacks from APTs and DDoS to zero-day malware and insider threats.

A real Internet of Things smart home experience


Five years on from now, this story will sound ridiculous.

The rise of so-called 'smart home' technology and the plethora of devices emerging into the so-called Internet of Things (IoT) category is, of course, very rapid at the moment.

The Computer Weekly Developer Network blog has (for some months now) been talking about how software application developers are now given the opportunity to program not just for the enterprise, but also for the toaster, fridge and microwave as we start to connect these domestic appliances to the communication protocols and the web so that we can start to manage them and digitise our lives.


But when will the digital home become a reality?

There are some higher profile examples than the one you will read about below, but I have finally started to implement smart home technologies in my own house.

Let's start with the basics, a Fitbit.

Firmly in the so-called 'wearables' category, many users (myself included) can't put their trousers on without dropping their Fitbit into their pocket.

Since I started carrying a Fitbit One I have clocked up 1,000 miles in the last eight months and I know that my climbing peak was 200 flights of stairs in one day.

Does that sound silly?

It's now a part of my life and I already have a spare one for when this one wears out.

Like I said, five years from now, this story will sound ridiculous -- everyone will be used to wearables and have one or two devices of their own.

Moving on... I am ashamed to tell you that my toaster still works by clockwork and my microwave is ever so conventional... but, my home heating and hot water system is much more exciting.

British Gas Hive

As a proud owner of a British Gas Hive system I am able to turn my heating on and off from the app on my iPad and Android smartphone -- the absence of an app on the Windows Phone store is a shame.

I am also able to see what the temperature is at home and adjust my heat controls and the complete schedule of when my heating (and hot water) comes on and off from the app itself.


I have been in meetings over the last couple of months and opened up the app just to show people (usually quite techie people) that I can turn my hot water on when I'm not at home.

Everyone thinks this is so cool today, in 2014 -- and it is of course, but... as I keep saying, it won't be long before we accept these technologies as the norm and therefore start actively demanding them as consumers.

Like I said, five years from now, this story will sound ridiculous.

The fact that I can now be away in a foreign country in winter time and not have to come home to a cold house is... well, it's life changing if I am completely honest with you.

Hive is controlled from a hub that plugs into your broadband router so that your thermostat can connect to the Internet and be controlled remotely.

British Gas says that while we have seen innovation in transport, retail and leisure, the pace of innovation in many aspects of our home has been pretty slow.

Outside of the living room, technology hasn't really changed how we manage our homes. The way we heat, power and light our homes has not changed for decades. The last mainstream innovation in our home could be described as the mass adoption of central heating in the 1970s.

Until now then right?

So will my heating switch off if my broadband goes down?

No, the heating will continue to work, it will just need to be controlled directly from the thermostat itself.

I would argue that one of the best things about British Gas is its engineering staff and our unit was fitted by one Charlie Cole (no relation to Cheryl) who explained the system clearly and slowly.

So what else have we been connecting?


Now for the best bit, I also have a Piper.

What's a Piper?

The Piper is a home surveillance, security, home management, alert system -- or something like that.

Its makers call it a home automation system.

Piper is the first device to combine panoramic video, so-termed 'Z-Wave' home automation, and environmental sensors into a single product.

There are zero service contracts or monthly fees.

"The ability to simply and easily interact with and secure your entire home -- not just one room -- when you're away has been a priority of ours since we first developed Piper," said Russell Ure, creator of Piper and executive VP & GM of Icontrol's Canadian business unit.

Using up to five Pipers, users can create independent security zones within their homes.

Each Piper operates as an "independent sentinel" and joins together to form an integrated security network.

Z-Wave integration allows (say its makers) for complete home awareness and automation control.

Shared environmental and motion sensor data, camera views and recorded videos provide the ability to track changes and movement through each zone.

Piper features include two-way audio -- this means I can talk directly to occupants through Piper and the app on a mobile device. Users also have the ability to customise three security modes (home, away and vacation) and program a motion detector which connects to a piercing 105-decibel siren.

There's also free cloud storage that provides Piper with a place to store event videos, send various types of notifications and perform additional login/connection negotiation.

Piper's HD Panoramic camera has a 180° fisheye lens, electronic pan, tilt and zoom and 1080p camera sensor.

The fact that I can now sit in a meeting and show people a live video stream of my home that I can talk into and interact with as other house occupants pass by is amazing, today, in 2014.

The fact that I can use Piper and Hive to see what the temperature is in my own house (Piper has a thermometer too) and turn my heating on and say hello to my dog as I watch him scamper around and wait for me at my front door is amazing, today, in 2014.

But I'm telling you, five years from now, this story will sound ridiculous.


EMC defines the hypercloud, sound fishy?


EMC held a hi-energy (think electronic floors and rock band) press and analyst get together this week in London's Old Billingsgate market.

The building itself has been sluiced down and washed out conscientiously since its heyday, so none of this gathering should smell fishy, not even if the event itself is called EMC Redefine Possible 2014.

The main news from this meeting centres on announcements across EMC's Flash, enterprise storage and Scale-Out NAS portfolios including EMC XtremIO 3.0, which adds new inline data services -- and new EMC VMAX3, which bids to provide enterprise storage with an open enterprise data service platform that offers 3X previous levels of performance.

The company is, essentially, almost describing a sort of super-charged hyper-cloud without using the term "hypercloud" specifically.

The hypercloud (if it now exists and develops) with its hyper-performance apps will have implications for software application developers who will need to build apps that span potentially hundreds more users at a far higher velocity than they might initially have considered architecting for.

EMC used this event to announce that the EMC ECS Appliance, a hyperscale storage infrastructure designed for the datacentre, is now generally available.

The ECS Appliance, powered by its own ViPR 2.0 data storage technology systems, is EMC's gambit which claims that it is capable of "redefining storage economics" and balances the benefits of the public cloud (cost, simplicity, scalability) with the security and control of the private cloud.

Just a bit of spin or a great customer?

EMC has shipped the first ECS Appliance, a single system with a total capacity of three petabytes, to The Vatican Library -- and yes, they did actually photograph the box coming out of its factory in Cork on its way to Rome.

So we said that EMC defines the hypercloud; in fact it didn't. It didn't actually use that term. But that's the next thing you expect the company to say given the number of messages it is putting out based around:

• Agile computing and EMC's own Hypermax OS
• Cloud applications that now have thousands of users
• Extreme I/O (input output) needs of modern cloud applications

The XtremIO value proposition is simple says EMC: an advanced always-on status, inline data services and consistent (and predictable) high performance regardless of workload.

It's an architectural must-have for all-flash arrays, or so they say.

David Goulden, CEO of EMC's Information Infrastructure division, has said that organisations are now conducting their software application development to harness the four IT megatrends of:

• social,
• cloud,
• mobile
• and big data.

"Although these new applications will be architected differently, they cannot become another IT silo. [Companies now need to] dramatically reduce the TCO of existing application estates, and accelerate new application delivery on their journey to the hybrid cloud," he said.

What is hypercloud?

It's faster cloud, faster OS, faster I/O, faster access to petabytes of data, faster application delivery, faster content, faster multichannel software architectural engineering, faster access into so-called Data Lakes, faster application performance from a more defined (and logically smaller) application estate -- it is "hypercloud" really isn't it?

Not officially, no - Wikipedia reminds us that HyperCloud Memory (HCDIMM) is a DDR3 SDRAM Dual In-Line Memory Module (DIMM) used in server applications requiring a great deal of memory... but if EMC wants to start using the term then it's OK by us.

EMC may not believe in hypercloud as a piece of terminology yet, but it does believe XtremIO is the fastest-growing all-flash array, and the fastest-growing storage array in history.

The key question from here on is just how well the cloud model is capable of working with the new hyper-speed technologies.

Does the open cloud model have the answer with its access to more tangible tools? Possibly not, if you accept that there is always so much complexity in terms of implementation. The proprietary cloud may not be so much easier and may need to be "more open" over the next decade if these technologies are to come forward.

Hypercloud isn't real yet, but the things that make cloud hyper-powered are.

7 dysfunctional signs you need DevOps


Do we still need to stop and define DevOps as a methodology, practice, tactic, approach and/or portmanteau in and of itself?


Whereas we might have justifiably suspected the IT industry of spin doctoring up a new term simply to ply its wares, the number of genuine DevOps postings on IT job boards is proof enough for most that this new Developer + Operations DevOps role does indeed exist.

That being so, we still appear to be looking to pin down and define DevOps today -- so TechTarget's own definition takes some beating:

"DevOps is a philosophy or cultural approach that promotes better communication between the two teams as more elements of operations become programmable. In its most narrow interpretation, DevOp is a job description for an employee who possesses the skills to work as a both a developer and a systems engineer."

DevOps is a cultural approach

A "cultural approach" is good isn't it? If we say that DevOps is a "social attitude" even... is that acceptable?

This might be too touchy-feely for the enterprise IT industry, but then again it might not.

There is now, in 2014, arguably more noise coming out of CA Technologies in this space than from most other companies.

Even firms that previously sold themselves heavily on being specific developer-to-operations orchestration (and/or service management and perhaps some ALM too) specialists (think Serena Software for example, but also BMC and ServiceNow) are falling by the wayside in terms of share of voice... or so it seems.

Dysfunction Junction

CA's latest DevOps spin (okay sorry, it's not spin, it's real) is to talk about the Dysfunction Junction.

According to CA "Getting Dev and Ops on board and heading in the same direction can be difficult. But first, you have to recognise if your teams are on different tracks to begin with."

The firm lists some of the warning signs of a dysfunctional process (and therefore a need for formalised DevOps) as including:

7 warning signs

1. You don't discover software defects until late in the lifecycle--or worse, in production.
2. You use Agile to speed development, but any gains evaporate once the app goes into production.
3. Your developers and testers are constantly waiting to access the resources they need, causing delays.
4. You can't pinpoint problems across development, testing, and production operations.
5. You see simple human errors wreaking havoc during development and deployment.
6. Development views their job as finished once the app is in production.
7. Anytime a problem arises, everyone starts pointing fingers to lay blame on someone else.

Was Heartbleed an expensive free open source lunch?


This is a guest post for the Computer Weekly Developer Network by Luke Potter, operations manager at SureCloud -- a cloud application service provider focusing on governance, risk and compliance process automation.

Heartbleed heartache


Closer analysis of the circumstances that led to the recent media furore surrounding the 'Heartbleed' bug shows the IT industry needs to quickly learn some important lessons if it is to avoid a similar own goal ever happening again.

The reasons why a two-year-old bug managed to stay undetected for so long say much about the way the open source community works, and about the culpability of some organisations who were happy to take freely available open source software and put it at the heart of their mission-critical applications without contributing to the OpenSSL project.

Don't panic!

However, the bug caused far greater panic than was necessary. People's website credentials, bank account details, digital 'life' hadn't necessarily been compromised and they probably never will be. The risk was that they 'may' have been but there was no way of knowing for certain.

The Heartbleed vulnerability was a bug within an OpenSSL extension called Heartbeat. The bug was introduced accidentally over two years ago, on December 31, 2011, by a developer working voluntarily on the OpenSSL project. It only came to light recently when a security researcher from Google's security team [Neel Mehta] identified the bug and reported it to OpenSSL on April 1, 2014.

The cause of the bug was a combination of poor code control and limited funding for the security testing and code review of a free product that is used by major institutions worldwide. It caught most IT teams and vendors off guard. Some quickly responded by patching their vulnerable systems, but recent statistics show there are still over 300,000 vulnerable servers worldwide that are yet to be patched (not including any internal systems!).

After patching the bug, many organisations started revoking and re-issuing SSL certificates and advising users to change their passwords, which in turn sparked widespread panic.

To prevent something like this happening again, companies should take stock and review their strategy for using similar software.

An open source plea from the heart


More time and money needs to be spent on projects such as OpenSSL where talented developers are often spending their personal time, without pay, providing and improving software and applications that benefit the masses. There are a lot of security researchers out there who are skilled at finding and fixing vulnerabilities. But an industry that is so reliant on this software should not expect this important work to be undertaken without contributing financially to this project.

A positive outcome following the Heartbleed incident has been the creation of the Core Infrastructure Initiative (CII). Set up by the Linux Foundation, the CII's mission is to commit funds provided by the world of big business to open source projects that lie at the heart of core computing functions. The aim is to help the developer community become more security aware and hence reduce the chances of bugs being introduced in the first place.

The level of risk to which Internet users were exposed with Heartbleed was in fact quite insignificant.

Yet for anyone unlucky enough to be affected the consequences were serious. One of the major issues with this particular vulnerability was that the 'detection' of an attack (or previous attack) was actually very difficult.

Exploitation of the vulnerability allows an attacker to read a small amount of memory. This memory space could potentially contain sensitive information such as a server's SSL certificate private key and/or user's credentials. This is why, as part of their response, many companies placed the emphasis onto their users/customers by advising them to change their passwords.
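
The shape of the bug is easy to sketch. The real code was C inside OpenSSL, so the Python below is only a conceptual stand-in (ours, heavily simplified) showing why trusting the client's claimed length leaks adjacent memory:

    # Conceptual sketch of Heartbleed: the server echoes back 'claimed_len'
    # bytes, trusting the length the client declared rather than the
    # payload it actually sent.
    def heartbeat_vulnerable(payload, claimed_len):
        heap = payload + b"...SESSION-KEYS...PASSWORDS..."  # neighbouring memory
        return heap[:claimed_len]              # over-reads past the payload

    def heartbeat_fixed(payload, claimed_len):
        if claimed_len > len(payload):         # the bounds check the fix added
            return b""                         # drop the malformed request
        return payload[:claimed_len]

    print(heartbeat_vulnerable(b"hello", 30))  # leaks bytes beyond the payload
    print(heartbeat_fixed(b"hello", 30))       # b''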

Over and above this, it's considered best practice to ensure that each site you use has a unique password (preferably with a random mix of alpha-numeric and non-standard characters). To take this a step further, consider using a unique email address/username for each and every site.
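
For what it's worth, Python's standard library can mint exactly that kind of password in a couple of lines (a sketch; a good password manager does the same job):

    # Generate a unique, random password per site using the stdlib's
    # cryptographically strong 'secrets' module.
    import secrets
    import string

    alphabet = string.ascii_letters + string.digits + "!$%^&*-_+="
    password = "".join(secrets.choice(alphabet) for _ in range(20))
    print(password)   # e.g. 'q3!Vf^8_ZtP=1mA*x2Rb'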

What we need to do next

Perhaps the most important lesson for organisations to learn from Heartbleed is not to wait for the next high profile industry emergency before kicking into action. We will without doubt see many more similar 'high profile' vulnerabilities. It takes the occasional critical bug to shake things up and help the industry appreciate where there is room for improvement.

How can humans navigate 2.5 billion gigabytes of machine data a day?


IBM's software intelligence promotion du jour is focused on transforming how organisations engage with "business content" and make it more usable.

Business content?

Could a more generic and meaningless term be possible?


No, no, wait a moment -- this terminology often refers to machine-generated big data -- that portion of information defined as that created automatically as a result of a computer process, software application process or other machine, without the intervention of a human.

What IBM is doing is offering new big data services on its IBM Cloud marketplace via any browser, desktop and mobile device.

2.5 billion gigabytes of data

The company estimates that 2.5 billion gigabytes of data are created every day, and 80 percent of that data comprises unstructured content such as Tweets and posts to social media and all manner of contracts, claims forms and permit applications.

So what does IBM have cooked up?

IBM Navigator on Cloud is built on Softlayer's Cloud platform and works with the firm's own Enterprise Content Management (ECM) capabilities that have been proven (says IBM) in "regulated environments" for file sharing application security.

Can we have an example please?

Okay, for example: a maintenance worker in the field can use a mobile device to pull up the latest schematics for a piece of equipment, take photos of a damaged part, make updates to a safety document based on a repair, and synchronise this content back to the cloud, making it instantly available to colleagues on desktops or mobile devices.

Or, a human resources manager can work on sensitive policy and procedure materials that need to be reviewed and approved by employees in several locations. Built-in mobile content management capabilities provide authorised users with secure access to manage business-critical documents and provide feedback in real time.

"IBM is fulfilling an unmet need in the marketplace by providing a new service that combines enterprise grade security, governance and integration with mobile and web apps that are easy to interact with and use," claims IBM's ECM manager Doug Hunt.



A more API-driven software-centric cloud


It is often worth reminding ourselves that there is no single actual cloud.

Cheesy TV advertisements have been using the term cloud for some time now.

But despite this arguably somewhat loathsome consumer-level corporate spin doctoring, the IT community has (over the last half decade) come to understand that ‘the cloud’ is any number of hardware-based servers operated by what (for any given customer) could be a number of different cloud hosting providers all running various levels of software intelligence to coalesce the cloud resources that we end up consuming.

But could individual clouds, not meant for sharing, now have more of an impact?

The ice cream makes the meal

Well that being so, the ice cream makes the meal (as they say) and the software makes the cloud (as they really should say more often) today.

This is the Computer Weekly (software) Developer Network blog — we like software, so sue us.


Rackspace this month launched its OnMetal Cloud Servers — and (as solid as OnMetal sounds) this is an extremely software-flavoured portion of cloud.

The product itself can be best described as an API-driven single-tenant cloud Infrastructure-as-a-Service (IaaS) offering.

So what? What does that mean?

Rackspace president Taylor Rhodes explains that the rising complexity of the multi-tenant cloud affects software applications in a variety of ways.

“Virtualisation and sharing a physical machine are fantastic tools for specific workloads at certain scale; however, we’ve learned that the one-size-fits-all approach to multi-tenancy just doesn’t work once you become successful, so we created OnMetal to simplify scaling for customers,” said Rhodes.

Nasty noisy neighbours

The company is aiming to fix the problems caused by “noisy neighbours in multi-tenant environments”, which can degrade network latency, disk I/O and compute processing power, creating unpredictable application performance.

This offering provides a host operating system that is ready to run container-based applications from first boot.

If you still give credence to the ever-changing world of cloud standards then you will want to know that OnMetal Cloud Servers are built with Open Compute Project specification hardware and powered by OpenStack.

The first “real step change” in cloud

Rackspace technology and product VP Nigel Beighton claims that, “This (OnMetal) is the first real step change in cloud since its conception. With OpenStack, IaaS has finally broken its dependency on virtualisation and the inherent limitations that come with virtualisation technology. Virtualisation will still be one of the building blocks of IaaS going forward, just not the critical dependency and limitation it has been to date.”

The servers come in three different sets of specifications, each bespoke designed and built for workloads associated with large web scale applications.

A compute-optimised, memory-optimised and I/O-optimised configuration all exist.

TIBCO transform 2014: big (fast) data & the two-second advantage


You can pretty much forget big data.

Okay maybe let’s not completely forget it. But you can forget our previous (already arguably archaic) definition of what constitutes useful big data to the business.

Today we need to focus on fast data.

A little bit of spin here perhaps yes… but this is the message that TIBCO has pushed out at its ‘transform 2014’ conference in Paris this week.

What is fast data?

The company, known among the data connectivity cognoscenti as The Information Bus Company (hence the acronym), explains that so-called “fast data” is a combination of big data with real-time actionable information intelligence (up and) down cloud-driven conduits and channels.

What is the two-second advantage?

This is an information technology notion (or data behaviour archetype) which suggests that a little bit of the right information, in context, immediately, is better than all the information (often big data) six months later.

The term itself was coined by TIBCO CEO Vivek Ranadivé, an affable and approachable leader with a love for data analysis, basketball and maverick communications strategies.

So take these two notions together i.e. fast data inside the two-second advantage… and you might be able to see where TIBCO is going. The firm is almost suggesting that big data is irrelevant to businesses today — if you don’t use fast data then don’t bother in the first place.

TIBCO CTO Matt Quinn has said that big data (i.e. not fast data) has never really gained any “operational relevance” in the modern workplace.

He goes further and suggests that software application development within the more useful fast data space will mean the IT agenda is dictated by the Line of Business rather than by any strategy set by the CIO or CTO.

A real world example?


So fast data inside the world of the TIBCO two-second advantage might see us analyse a consumer shopper’s behaviour as they walk around a department store using Internet of Things connection points to the individual themselves.

Indeed, Quinn himself has suggested that our next data integration responsibilities will see us work to harness that information emanating from the Internet of Things.

Even if we only have 80% of the information about that shopper (but we have it immediately, or within two seconds) then that’s better than having the remaining 20% as well later on. The shopper has left the building; the business decision opportunity has gone.

The implications for where this approach can be deployed go further than retail; TIBCO has electronically tagged doctor hand washing in hospitals and correlated that activity with patient health.

It’s no good having that information later on. We need it within two seconds, or at least we need some of it. If we wait then the patients don’t get well… or worse.

Product expansions

The TIBCO transform 2014 event itself was staged in Paris. Those that weren’t drinking Laurent-Perrier for lunch will have also noticed that the company used this conference to launch the latest iteration of its log management platform, the LogLogic Log Management Intelligence 5.5 release.

This is intended to boost operational control of insight derived from machine data for enterprises around the world.

LogLogic Log Management Intelligence 5.5 acts as a centralised source for enterprise log and machine data.

What is machine data?


Machine-generated data is widely agreed to be defined as that information created automatically as a result of a computer process, software application process or other machine — without the intervention of a human.

According to sqlstream, “Machine data contains a wealth of information on customer and consumer behaviour and location, consumer quality of experience, financial transactions, security and compliance breaches, as well as the state of industrial processes, transportation networks and vehicle health.”

Back to LogLogic 5.5 then… this product bids to allow firms to use machine data management to act and respond when identifying event and log data issues related to compliance, IT operations and security.

“The ability to scale easily and automatically find, filter and forward the right machine data to the right applications and individuals is vital to businesses and MSSPs, especially as we transition into a mobile-first web experience,” said Frank Brown, business development, cloud solutions, Versatile Technologies.


“Many of our customers process machine data from a vast number of sources and for different purposes - some managing as many as one million events per second,” said Craig Hinkley, vice president and general manager, TIBCO.

“LogLogic 5.5 offers a real-time machine data management platform focused on providing operational advantage with industry-leading ease of deployment and faster time-to-insight than ever before,” added Brown.

So where do we stand?

If we buy TIBCO’s brand of big data theorising… then we can say that we need big data to move onward and embrace the notion of fast data. Further, the analysis of this fast data needs to be done within a two-second (or real-time) window, in a manner that integrates with applications designed and driven by actions as prescribed by the Line of Business rather than (but not necessarily excluding) the core IT function, which of course still remains. This new reality spans all verticals, with much closer connection points to the Internet of Things and machine data at a granular level.

The office is obsolete


In 1817, Robert Owen, founder of the eight-hour movement in the UK, coined the slogan: ‘Eight hours labour, eight hours recreation, eight hours rest’ in order to regulate the hours that factory employees were subjected to.


That was 200 years ago. In the course of two centuries, our attitudes to work and that place we call the “office” haven’t ostensibly changed, until now.

Things have changed… and we know that mobile, wireless, cloud and device innovation are all responsible.

#GenMobile

In a new report released in line with its Airheads@EMEA Atmosphere conference, wireless networking specialist Aruba Networks suggests that the change might be bigger than we have so far realised.

“In the near future the term ‘office’ will be obsolete and the drab surroundings we associate with the executive life will be erased as a new model for work emerges,” states the firm’s Workplace Futures report, produced in association with industry think tank The Future Laboratory.

Future + workplace

Those two words are important i.e. “future + workplace” and there are already conferences, blogs, white papers, newspaper columns and promotional T-shirts devoted to evangelising the movement towards new work practices.

Aruba says that this new world will be an “All Wireless Workplace”.

As already reported on Computer Weekly, Aruba CEO Dominic Orr has spoken about the “third place” as he called it i.e. an area he defined as neither home nor business, but where work is still done.

What started off as clunky 11 kilo bag of luggage at the start of the 1980s (yes, we are referring to the Osborne 1) heralded the start of 30-plus years of innovation, miniaturisation, product-isation and of course advancement in processing performance and related developments in screen capabilities, memory, connectivity etc.

Eight hours is now 20 seconds


Our new mobile device usage patterns have been widely documented. The Aruba CEO is of the opinion that we no longer work in eight-hour chunks, but in 15- to 20-second “segments” instead — and it is entertaining to analyse how the work/life balance is blending and changing as a result. People work at home, on the go, when they are waking up and when they are on aircraft journeys — all places where we would previously have been unable to carry out productive work tasks.

Interestingly, this is a large part of the reason why firms like Google, Adobe, Rackspace and others are designing their offices to resemble home and leisure spaces.

The office really is becoming obsolete.

The buzzphrase here is “Bleisure” i.e. business + leisure. You either think that’s a handy new term or you feel bilious, it’s one of the two.

According to Aruba’s report, architects of the Bleisure revolution now use terms such as ‘serendipity corners’ and ‘chance-encounter corridors’ to describe the subtle social engineering they are employing, as laptops, mobiles and tetherless (Ed- is that a word yet?) working allow people to move more freely within a building.

So if these are the trends, then what are the implications for:

  1. network management engineers,
  2. software application developers,
  3. cloud services management DevOps and,
  4. the rest of the IT function that sits at the back end?

Aruba doesn’t just think the office is obsolete, it thinks that traditional enterprise networks need to be transformed into more intelligent systems in their own right, and this is what its mobile-defined network solution seeks to achieve. A mobility firewall employs advanced deep packet inspection and this opens the door to providing more granular policies, service quality and control.

Essentially then, Aruba Mobility-Defined Networks is a software architecture to network “highly mobile” employees, or indeed consumer customers.

What this means for apps

Software application developers targeting these new environments should be aware of the fact that, at the back end, there is added network intelligence for an automated self-adjusting infrastructure.

When the cloud developer wants to consider how much always-on connectivity he or she can tap into, the suggestion here is that, at the back end, changes in wireless state and usage patterns (among all devices in the network) trigger security actions and performance optimisation measures so that the infrastructure adapts to its environment.

Unlike traditional static Wi-Fi networks, Aruba Mobility-Defined Networks eliminate the need for IT professionals to make manual changes to accommodate new mobile devices and applications, says the firm.

The firm’s own ClearPass Exchange is designed with a set of common language Application Programming Interfaces (APIs) and data feeds that allow it to automate workflows with “almost any third-party IT and business system” from Aruba partners including IBM, AirWatch and MobileIron.

“For example, a network policy violation will prompt ClearPass to trigger a push notification to the device in question through the Mobile Device Management (MDM) system. ClearPass integrates with the helpdesk systems to then automatically generate a ticket notifying security teams,” said the company.
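
The plumbing behind that kind of workflow is simple to sketch. Everything below -- the event fields, the ticket shape, the function names -- is hypothetical, invented for illustration rather than taken from ClearPass's actual API:

    # Hypothetical glue: a policy-violation event arrives from the network
    # side; we notify the device via MDM and raise a helpdesk ticket.
    import json

    def notify_mdm(device, message):
        print(f"MDM push to {device}: {message}")   # stand-in for the real call

    def create_ticket(ticket):
        print("TICKET:", json.dumps(ticket))        # stand-in for the helpdesk API

    def on_policy_violation(event):
        notify_mdm(event["device"], f"Blocked: {event['reason']}")
        create_ticket({
            "summary": f"Policy violation by {event['device']}",
            "detail": event["reason"],
            "team": "security",
        })

    on_policy_violation({"device": "ipad-042", "reason": "blacklisted app"})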

Time to “rightsize”

What the company is doing here is trying to give us tools that will help to “rightsize” the network so that it can perform best for a more dynamic group of users on the (mobile) move.

If the office is indeed obsolete, then we have to regard non-self-optimising networking infrastructure as positively prehistoric.

Aruba Networks: your next breed of user is #GenMobile


Aruba Networks staged its Airheads@EMEA Atmosphere 2014 conference this week to clarify and colour the company’s current standing within the enterprise wireless networking technology marketplace.

aruba651_diagram.jpg

The company’s messages centralise on the creation of what it calls mobility-defined networks and, in direct consequence, the firm also has a direct software competency.

Indeed, the Airheads conference itself started off as a technical certification and education exercise.

Proceedings kicked off in Italy’s Lago Maggiore with chief marketing officer Ben Gibson. After initial civilities and welcomes, Gibson explained that the firm’s “singular focus” is on wireless, mobile and security.

Don’t disconnect me, I’m close to the edge

The company itself is known as a networking vendor that specialises in enterprise wireless LAN products and “edge access” networking equipment. This seemingly generic background statement actually matters: the thing we now call Edge Computing is becoming very important today.

When the company talks about its #GenMobile concept (oh yes, with a hashtag too) - this is the new breed of worker with:

  • four or more mobile devices about their person at any one time,
  • a predisposition for working non-traditional hours, perhaps early in the morning and late at night,
  • a preference for good WiFi network connectivity over and above any opportunity to connect to a fixed line (however strong that fixed line might be!)

This new always-on, always-mobile behaviour has implications for:

  • the network engineers that need to support the applications that these new users will demand
  • the software application developers that will need some degree of appreciation for network behavioural aspects (such as load balancing) that will serve each app once deployed
  • the software security specialists that will need to lock down the apps used in the new ultra-mobile #GenMobile generation

original.jpg

The Mobility-Defined Networks (that’s a term with a ™) theory that Aruba talks about is hardware and software engineering intelligence to (and we quote), “Automate performance optimisation and security actions that used to require IT intervention” — so the network engineer can stop being the IT authority and become the IT ally.

Aruba’s stance across these mobile wireless technologies leads us quite comfortably onward to Enterprise Mobility Management (EMM) theory.

Ovum senior analyst for enterprise mobility and productivity Richard Absalom was in attendance at the Aruba event and gave an informal keynote presentation to riff over what works and what doesn’t for the typical customer.

BYOD, CYOD, COBO device options

Ovum’s Absalom reminds us to:

  • start with the users and find out what applications they are using
  • work with software application developers to find out where requirements are pushing the next stage of application usage
  • think about the BYOD policy up front… and also canvass what the firm’s employees say about CYOD (Choose Your Own Device), where the company provides the device, and COBO (Corporate Owned, Business Only) environments, and where each works best.

Aruba’s stance dovetails with these thoughts, because Mobility-Defined Networks are supposed to add controls with real-time data about users, devices, apps and location.

“Self-healing and self-optimisation functions dramatically reduce helpdesk tickets and protect enterprise data. Software that adds mobility intelligence makes Mobility-Defined Networks easy to deploy, without changing the existing infrastructure,” says the company.

The takeaway from the firm is that businesses can use Aruba technology to “rightsize their fixed network infrastructure” and theoretically now deliver the mobile experience that #GenMobile expects.

Avanade: the finger is the new magic marker

bridgwatera | No Comments
| More

Avanade throws another chilli into the big data strategy wok this month with the arrival of its Touch Analytics for Mobile Devices product.

First there was big data... then analytics, visualisation and touch - in that order

23423r3f3.jpg

Here's the theory then... first there was big data, then came analytics and the ability to break down, categorise and manage that big data.

Next came visualisation and the option to be able to produce "shape"-based representations of data movements, values or trends.

... and finally there came devices with touch-enablement and so the pressing need to convert these data analytics visualisations on to touch devices that C-level managers could use easily in board meetings, presentations and airline lounges etc.

Avanade (pron: Avah-naad, never to rhyme with lemonade) talks today about what it likes to call, in corporate-speak, "empowering insights with information at the point of decision".

What the company means by this is creating data analytics that people can communicate around, collaborate on and make decisions upon, on any mobile device.

The firm's technology works with viewers available from the favourite device app stores - iOS, Android, Windows Phone and associated devices, HTML5 or otherwise - so users can consume this information and make decisions when needed, regardless of location or device.

"The application helps businesses to align with their own 'Bring your own device' (BYOD) policy, as the application can provide online and offline capabilities natively and securely across all devices which is crucial to commercial advantage," said the company.

Adoption of analytics applications and experiences is critical to deriving business value from them and Avanade believes that being "Built for Touch" makes a difference.

The notion of using the fingertip as a stylus - where information is accessed with a tap rather than a click, or a swipe instead of a drag-and-drop, and where screen real estate is inherently small - demands a different design approach, particularly when delivering analytics for a digital business.

According to the Avanade blog, "With ATA you will be able to leverage a publishing interface with pre-configured gauges, maps and gadgets to help facilitate a beautiful experience your users will enjoy. You can tie Avanade Touch Analytics to back-end systems and a variety of data to bring together insights from big data, small data and all data whether contained in a blog or an excel spreadsheet. You can create a dashboard and a set of KPIs so that they are consistent in look, feel and experience across a host of devices. All of this gives you the confidence to provide this capability to your organisation."

MuleSoft: how do we program the Internet of Things?

bridgwatera | No Comments
| More

The Computer Weekly Developer Network hosts a brief Q&A with Ross Mason, founder of integration platform company MuleSoft.

MuleSoft provides the Anypoint Platform of integration products that tie together SaaS apps and on-premises applications... or work connecting any application and/or data source or API, whether in the cloud or on-premises.

1wleadership-ross.jpg

CWDN: With more sensors collecting information about people and activities, how do we move on from here?

Developers in the IoT space need to think very differently about scale, reliability, security and dealing with many more connected consumers. Our traditional architectures need a rethink.

IoT ushers in a new era for edge computing: developers working on IoT projects have to think about more layers to enable hundreds of thousands or millions of sensors to exchange information with back-end systems and with each other. One emerging layer is termed the Fog.

Unlike the cloud, the Fog layer is concerned with connecting the sensors to back-end or cloud systems. It is essentially a collection of hubs that sensors connect to; these hubs can be managed remotely but also have enough smarts to communicate with each other.
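
A minimal sketch of that hub idea, assuming sensors report raw readings to a local Python process which aggregates them and forwards only compact summaries upstream (the cloud ingest URL is a hypothetical placeholder):

```python
# A minimal sketch of a fog-layer hub: raw sensor chatter stays local,
# only periodic summaries cross the WAN. URL is a hypothetical placeholder.
import json
import time
import urllib.request
from collections import defaultdict
from statistics import mean

CLOUD_INGEST_URL = "https://cloud.example.com/api/ingest"  # hypothetical

class FogHub:
    """Collects raw sensor readings and ships periodic summaries upstream."""

    def __init__(self, flush_every: int = 60):
        self.readings = defaultdict(list)  # sensor_id -> raw values
        self.flush_every = flush_every
        self.last_flush = time.monotonic()

    def on_reading(self, sensor_id: str, value: float) -> None:
        """Called whenever a sensor connected to this hub reports a value."""
        self.readings[sensor_id].append(value)
        if time.monotonic() - self.last_flush >= self.flush_every:
            self.flush()

    def flush(self) -> None:
        """Summarise locally, then send one small payload to the cloud."""
        summary = {
            sid: {"count": len(vals), "mean": mean(vals), "max": max(vals)}
            for sid, vals in self.readings.items() if vals
        }
        req = urllib.request.Request(
            CLOUD_INGEST_URL,
            data=json.dumps(summary).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)
        self.readings.clear()
        self.last_flush = time.monotonic()
```

The point of the pattern is that the raw readings never leave the hub; only the aggregate does.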

CWDN: What's involved and how much 'heavy lifting' is needed at the developer architecture end of the API spectrum?

Generally, because IoT-type architectures are new for most people, there is a lot of heavy lifting around device management, data management, architecture and connectivity to other systems.

APIs are typically used to a) provide an interface to hub devices that sensors or smaller devices connect to or b) to provide access to the server side where developers can access data and maybe control some aspect of the devices either directly or through a hub.

Building APIs has become a lot easier with open languages to express APIs, such as RAML, and web-based tooling like MuleSoft's API platform (disclaimer: I founded MuleSoft) to enable developers, architects and product managers to take a design-first approach to APIs.

These tools allow you to create repeatable patterns - traits of an API - and re-use them in all of your API implementations. This saves time, reduces usability problems and solves the major issue of creating consistent APIs across teams.
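
RAML expresses traits declaratively, but the reuse idea itself can be sketched in a few lines of Python: define a behaviour once (pagination, in this invented example) and apply it to every endpoint that needs it, so the endpoints stay consistent.

```python
# The "trait" idea, sketched in Python terms rather than RAML: a reusable
# pagination behaviour applied uniformly across endpoints. Illustration only.
from functools import wraps

def paginated(handler):
    """Reusable pagination trait: clamps page/per_page and slices results."""
    @wraps(handler)
    def wrapper(params: dict):
        page = max(int(params.get("page", 1)), 1)
        per_page = min(int(params.get("per_page", 20)), 100)
        items = handler(params)
        start = (page - 1) * per_page
        return {"page": page, "items": items[start:start + per_page]}
    return wrapper

@paginated
def list_users(params: dict) -> list:
    return [f"user-{n}" for n in range(250)]  # stand-in data

@paginated
def list_orders(params: dict) -> list:
    return [f"order-{n}" for n in range(90)]  # stand-in data

print(list_users({"page": 2, "per_page": 10}))  # items user-10 .. user-19
```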

CWDN: Highlight the advantage of creating APIs on the fly / seeing the outcomes in real-time whilst designing the API - can you explain why this is important for developers?

This point about API-first design is important. It introduces the concept of APX (Application Programming eXperience) to API development, a concept borrowed from user interface design.

It changes the way APIs are built today. It puts the focus squarely on the consumer of the API rather than the more technical aspects of building APIs. Most enterprise APIs are coded directly and then the code is annotated to describe the API interface (e.g. Java's JAX-RS). This is fraught with problems, since the bit that the end consumer sees is slapped on as the code is written and there is no real design process to create an API around the requirements of the consumers.

A real example of this (a company I can't name) involves a firm with a mobile team and an API services team. When the mobile team needed a new search API, they spec'd it on paper and gave it to the API team, who took it away and then spent two months creating the new API.

When the new API was available, it wasn't what the mobile team needed. Partly that was the fault of the mobile team for not specifying everything properly, and partly the fault of the API team for making assumptions and misinterpreting some requirements. Now, wouldn't it have been better if they could have created the API collaboratively, simply by defining it in a language like RAML in a couple of hours?

What if they could then quickly mock out the service so the mobile team can actually get a feel for it?

And then, once they agreed on a design, they could lock it down: the API team could invest the time in building it while the mobile team built their application against the agreed mock API. Introducing the design phase up front and using tools like MuleSoft's Anypoint Platform for APIs allows teams to work together in this way and focus on building APIs that are designed with the user experience first.
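
As an illustration of that mocking step, here is a minimal Python sketch of a throwaway mock for the agreed search API. The path and fields are invented for illustration, not taken from the unnamed company's spec.

```python
# A minimal sketch of the "mock it first" step: serve the agreed /search
# contract with canned data so the mobile team can build against it while
# the API team implements the real thing. Paths and fields are illustrative.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

CANNED_RESULTS = [
    {"id": 1, "title": "First result"},
    {"id": 2, "title": "Second result"},
]

class MockSearchAPI(BaseHTTPRequestHandler):
    """Serves the agreed /search contract with canned data."""

    def do_GET(self):
        url = urlparse(self.path)
        if url.path != "/search":
            self.send_error(404)
            return
        query = parse_qs(url.query).get("q", [""])[0]
        body = json.dumps({"query": query, "results": CANNED_RESULTS}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), MockSearchAPI).serve_forever()
```

Run it locally and the mobile app can point at http://127.0.0.1:8000/search?q=test from day one.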

CWDN: Will we be able to turn websites in their entirety into API channels?

It's possible, but the tools for creating websites are so good and easy, and there are so many developers with those skills, that we'll keep doing a mix of traditional and API-driven websites. New companies are already thinking API-first or mobile-first for everything, so the shift away from traditional three-tier websites is gradually happening. Note that API strategies in the enterprise are being driven by mobile, not websites.

Samsung Tizen HTML5 SDK: "we'll put apps on your TV"

bridgwatera | No Comments
| More

The Seoul-based Korean Bizwire newsfeed (yes, we know, we don't usually follow their channel) this month reports news of Samsung Electronics Co. Ltd. (let's be formal and use the whole name) announcing plans to release a Tizen-based Samsung TV Software Development Kit (SDK).

This marks the industry's first SDK that seeks to allow software application developers to build applications for the Tizen-based TV.

Samsung Tizen is Samsung's implementation of Tizen, an open source mobile operating system developed as a project under the Linux Foundation.

The Tizen software development kit (SDK) and application programming interface (API) allow developers to employ HTML5 and related web technologies to write software applications that run across multiple devices -- including printers, cameras, smart TVs and in-car displays.

What we now see is the Samsung Tizen-based TV SDK beta supporting the HTML5 standard through its framework (called Caph), enabling developers to write apps that run on Tizen OS-based TVs.

YoungKi Byun of Samsung's Visual Display Business says that the firm's "ultimate goal" is expanding the TV ecosystem for apps.

"We will continue our efforts to provide innovative functions and improve the development environment," he said.

Samsung-Smart-TV.jpg

Samsung's new SDK marks the industry's first attempt at significantly improving this development ecosystem by offering new technologies such as an interface for virtual TV app development. Developers can now see all necessary TV functions virtually, without a physical TV. They can also remotely modify code on their PCs with the new debugging feature, whereas in the past they had to connect directly to the TV's software to correct application errors.

The SDK will be available this July at the Tizen Developer Conference.

Microsoft developer tools, on a 'dizzying' roll (in a good way)

bridgwatera | No Comments
| More

Key software application development press invited to the recent Microsoft Build and TechEd North America conferences this year will have spent considerable time listening to the company's execs detail roadmap updates.

More specifically, attendees will have heard news of Microsoft's Visual Studio 2013 Update 3.

NOTE: This is a Community Technology Preview (CTP) of Visual Studio 2013 Update 3 -- a CTP sits within the alpha and beta stages of the software application development cycle, i.e. after pre-alpha and before Release Candidate.

Improvements in this latest release concentrate on Visual Studio features such as IntelliTrace, CodeLens, debugging functions and developer testing tools.

An assortment of other early preview stage functions are also included here...

... which leads us to the question: just how fast is Microsoft pushing its development cycle for its core programming tools suite?

Answer: really very fast.

Program director for software development research at IDC Al Hilwa says that the Microsoft VS team is "really on a roll" and that it is "really dizzying" how fast Microsoft is moving, especially since the two major releases before it were only a year apart.

"The innovation is moving in the industry and tools are enablers that sit at the headwaters of innovation, so it is not unreasonable. You also have to consider that a lot of this is because Microsoft is re-aligning its ecosystem to snap to the web ecosystem (Cordova)... and bringing phone and tablet closer together with Universal apps. Both were parts of Update 2, which only came out a month ago," said Hilwa.

The best place to follow Visual Studio roadmap development is the Visual Studio blog itself.

111VisualStudio1.gif
