June 2011 Archives

In the land of software testing, static analysis is king

bridgwatera | No Comments

This is a guest post by Rutul Dave of Coverity, a company that builds tools and technology to equip developers with resources, techniques and practices to help maximise the integrity of software.

I recently started a discussion on the 'Static Analysis' LinkedIn forum on the topic of "what can static analysis find that other forms of testing cannot" -- which led to a healthy and informative discourse from the other participants. I was so impressed with the debate that I wanted to share some of the most thought-provoking ideas from the forum.

Among the 40 comments I received, here are the major points that stuck with me:

#1 -- It's not simply a question of which metric or metrics static analysis helps with compared to other forms of testing. There are various ways to look at how static analysis provides a cost-effective and usually easy-to-use way to improve the quality of code as it is developed. I always focus on the development cycle to evaluate the value that this method of testing brings.

Developers are already aware that the cost to repair a defect increases the further through the software development life cycle it is allowed to persist. For this reason, there is value in addressing newly detected defects as they are identified, because they cost less time and effort to repair. By lowering defect numbers during development, organisations can lower the cost of development by delivering higher-integrity code to testers -- code that requires less test case generation for effective coverage.

#2 -- Defects in parts of the code not executed during normal operation, and in error-handling routines, are usually close to impossible to reach with most other forms of testing. Static analysis really helps here. A good example is its ability to spot what I call an "invisible defect", such as memory corruption or a leaked system resource.

By the time such a defect manifests as a visible error, execution has usually moved well past the point in the program where the defect actually lives. Hunting for it with traditional testing is like trying to identify the fatal disease by working out the time of death. By looking for the underlying defect -- a null pointer dereference, say -- rather than the visible error it eventually causes, such as a program crash, static analysis is testing code for invisible defects.
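To make this concrete, here is a minimal Java sketch (my own illustration, not from the forum discussion or any particular tool) of an invisible defect hiding in an error-handling path. Normal test runs rarely exercise the missing-file case, but a static analysis tool that examines every path through the code can report the possible null dereference without ever running the program:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ConfigLoader {

    // Returns null when the file cannot be read -- a detail callers can forget.
    static String readFirstLine(String path) {
        try {
            BufferedReader reader = new BufferedReader(new FileReader(path));
            String line = reader.readLine();
            reader.close();
            return line;
        } catch (IOException e) {
            return null; // error path: rarely exercised by normal test runs
        }
    }

    public static void main(String[] args) {
        String line = readFirstLine("settings.cfg");
        // A static analysis tool can warn that 'line' may be null here,
        // even if no test ever runs with the file missing or empty.
        System.out.println("Config length: " + line.length());
    }
}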

#3 -- Static analysis can show the absence of certain bugs, while other testing methods usually focus on showing their presence. For example, when functionally testing a mobile phone application, a QA tester will be looking for the visible bugs: missing functionality and unexpected behaviour in the context of what he or she is testing. But what about the hidden resource leak, or the uninitialised variable, that will only result in an error after a certain number of iterations or in another part of the software?
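Again purely as an illustration of my own (not vendor code), here is the sort of hidden resource leak just described: each call down the early-return path quietly loses a file handle, nothing visibly fails for the first few thousand iterations, and the eventual "too many open files" error surfaces a long way from the defect. Static analysis can flag the unclosed stream on that path without running the loop at all:

import java.io.FileInputStream;
import java.io.IOException;

public class LeakExample {

    // Leaks the stream whenever the file is empty: the early return
    // skips the close() call further down.
    static int firstByte(String path) throws IOException {
        FileInputStream in = new FileInputStream(path);
        int b = in.read();
        if (b == -1) {
            return -1; // defect: 'in' is never closed on this path
        }
        in.close();
        return b;
    }

    public static void main(String[] args) throws IOException {
        // Each call may leak one file descriptor; the visible failure
        // only appears after many thousands of iterations.
        for (int i = 0; i < 100000; i++) {
            firstByte("empty.dat");
        }
    }
}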

#4 -- There are hidden benefits when static analysis finds defects in difficult-to-understand code -- even when the findings turn out to be false positives rather than true defects. The main advantage is that it forces the developer to look at the code again and make improvements that reduce code complexity and help with maintaining the code going forward. (I should stress, however, that static analysis is generally known for its low rate of false alarms and that the warnings it issues often correlate very well with real defects. So if an issue is flagged and it turns out not to be a genuine problem, it is still worth looking into -- otherwise, why would it have been singled out? There is probably room for improvement.)


The forum was a good reminder of how topical the issue of software testing has become for businesses. I'd say it's probably one of the most time-consuming and frustrating processes for a developer -- especially if they don't have access to the right tools. The sheer complexity and size of the software being developed these days makes developer testing a challenge that will not go away any time soon -- nor one that should be ignored.

Consider these statistics:

  • For every thousand lines of code output by commercial software developers there could be as many as 20 or 30 bugs on average.
  • The later in the development cycle a defect is found, the more expensive it becomes to fix -- it's at least 30 times more costly to fix software in the field than during development.


For some fascinating data on software testing, take a look at the Software Integrity Risk Report, a commissioned study conducted by Forrester Consulting on behalf of Coverity.

I learned a lot from this forum and I am glad I walked away with the key points I've mentioned. But most importantly, I have been able to reconfirm what I've known for a while now: static analysis is definitely "King" in the world of software testing. There is simply no substitute.

Software Architecture: is it an art or is it a science?

bridgwatera | No Comments

This weekend I have been looking for a new house, enjoying the British summertime rainy monsoon season and looking at the real nature of software architecture.

So what is software architecture?

The Institute of Electrical and Electronics Engineers (IEEE) defines software architecture as the fundamental organisation of a system embodied in its components, their relationships to each other, and to the environment, and the principles guiding its design and evolution.


In looking into this subject, it appears that some commentators have suggested that software architecture is a combination of both art and science.

When is software architecture an art?

Software architecture may be considered an art (perhaps) when it is focused on the non-functional requirements (i.e. the look and feel) of a system. Surely this could be considered the "artistic" side of architecture?

You might also consider software architecture to be an art in the context of its ability to reflect the business model of the company. The "mission statement" and "business proposition" of a company, as reflected in how its software is constructed, is again in the running for the artistic side of software architecture.

When is software architecture a science?

Well, software architecture is always a science in many senses. The quantitative and qualitative analysis of system availability, maintainability, reliability, fault tolerance, forward extensibility and system backward compatibility is all fairly scientific.

At the time of writing this blog, I tweeted the suggestion that software architecture is a blend of art and science, and celebrated tech journalist Jon Honeyball responded that it was, in fact, "a mixture of science and common sense."

... and you know what? That might just beat my theorising hands down in 7 sharp words ☺


Questions for cloud "experts", what would you ask the panel?

bridgwatera | No Comments

I'm doing a cloud data integration and analysis lunch/panel-discussion next month at Gordon Ramsay's Claridges in front of a small group of industry experts. This means that right now, I'm doing some planning in terms of gathering discussion points.

Note to self: remember not to ask for brown sauce at Claridges.


OK, so the discussion will centre around the real issues companies are experiencing when they move to the cloud. Here's some of what I thought I would ask.

  • What is the main barrier to cloud adoption in your business at the moment?
  • Which sections of your company do you feel are most opposed to and (conversely) most interested in bringing in cloud services?
  • Do you think we understand yet that old school approaches to architecture will not always work in the cloud?
  • What is really driving the creation of big data?
  • SMBs understand the cloud's advantages in terms of cost savings and efficiency, big enterprises don't (to the same degree) - discuss?
  • Paradigm shifts in technology occur roughly every 5 years - so where will cloud be in 2017, or at the end of the decade at least?
  • Cloud data integration analysis project day 1, we're all in the "situation room" ... what are the first 3 items we need on the agenda?
  • Do companies see cloud computing as a means to competing in their specialist markets with greater strength?
  • Do companies understand that they need to build (or use) a formal "framework" for adoption of cloud services if it is to be done efficiently?

So there you have it -- and the reason for my blog? I hope it's obvious. Do you think this covers off some of the most pertinent issues in the cloud industry or have I missed something obvious?


Perforce: make me one (version control system) with everything

bridgwatera | No Comments

Not content with building an (arguably) established name for itself in the software version control market, Perforce now wants to take over the world and see its revision control systems used in every aspect of human life.

I'm kidding obviously -- but the company is expanding its outreach.

Let me explain.

At the close of its 2011 user conference the company's chief said that he saw versioning being on the verge of rippling into wider markets. The vision, as CEO Christopher Seiwald put it, is "version everything".

It's a sensible proposition in many respects and Perforce knows how this works: the company has been extending version control quite naturally outward from its core job of looking after source code files for some time now.

Think about a modern video game development programme -- there is a huge number of data assets in there, from graphics to the game engine, to artwork rendering, text, voice, interconnectivity controls and so on.

This market has traditionally been strong for Perforce: its version control system is suited not just to looking after the code that runs the video game, but also to the other elements of a project that are held electronically.

"The rest of the world doesn't yet know why version management is so important to them, just as software developers didn't know 30 years ago about the importance of configuration management," said Seiwald.


Dalai Lama: Make me one with everything
George Bush: We're fresh out of anchovies and spicy pepperoni Mr Lama

So is Perforce abandoning its core software programmer audience in search of fresher pastures?

"All of our customers will benefit from our development improvements as we continue to focus on our core technology. While versioning source code is the tip of the iceberg and we plan to unleash Perforce for many other uses, current customers can still use Perforce as they always have. They don't have to change the way they work, but they may want to," Seiwald told Computer Weekly.

The Computer Weekly Developer Network blog aims to report further on this story and future stories are likely to feature:

• AppJunction, for open source and other solutions and tools for Perforce.
• Perforce Cloud, for multitenant version control.

Poor patch practice presents professional performance problems

bridgwatera | No Comments

If you've spent more than a decade covering software application development news, then the old "survey says xyz% of software projects are likely to fail, come in over budget and/or be late" line is not a story worth telling any more.

If however, you hear this story told from a patch and software update perspective, then perhaps there's room for a new angle here?

SMB IT solutions provider GFI Software has indeed released such a survey.

The company suggests that its results "reveal" that half of businesses have suffered at least one business critical IT failure as a result of installing a bad software patch.

Having just completed a software-debugging feature for the government's National Skills Academy I have testing very much in mind this week.

As I noted in my feature, the software testing role itself is multifarious in nature. How can all these software engineering roles possibly let through so many software bugs and incompatibilities introduced by badly developed software updates?

• Test Designer/Architect
• Unit tester
• Test Manager/Test Team Leader
• Automation Developer
• Test Administrator/Test Process Manager
• Systems Integration Tester
• Software Quality Assurance Manager
• Software Validation/Verification Engineer

With these formally delineated testing roles in mind, GFI Software's survey suggests that companies are committed to deploying critical updates quickly, with 90% of those surveyed applying patches within the first two weeks after they are released.

So does this represent trust in patches -- and is it therefore fine for the testing team not to be involved?


According to GFI Software, for many firms this process remains a manual one, with 45% not using a dedicated patch management solution to distribute and manage software updates. This lack of automation is a major contributing factor that explains why 72% of surveyed decision makers do not deploy within the all-important first 24 hours after a critical patch is released to the public.

"The stark figures revealed by this research reinforce the importance of testing patches before deploying them in a production environment. Patch management solutions help keep the balance between maintaining productivity - testing patches to make sure they do not interfere with the business environment - and applying security patches in a timely fashion to avoid compromising security," said Cristian Florian, product manager at GFI Software.

"Patch management solutions such as GFI LANguard 2011 can also roll back problematic patches and get the company back to work in a fraction of the time compared with a manual uninstall process or, worse still, a PC rebuild," Florian added.

Additional key findings:

• 51% of those surveyed said their organisations did not have a rigid policy regarding the installation of critical software updates
• 25% of respondents have suffered multiple IT failures as a result of buggy patches or compatibility issues created by a software update
• The personnel sector is the biggest user of dedicated patch management solutions, due to the lack of dedicated on-site IT support in most recruitment offices

Serena's ALM Nirvana: happy programmers & quantifiable business return

bridgwatera | No Comments

Serena Software is known for its 'orchestrated' application lifecycle management (ALM) and process management (PM) products. The company has emerged from a brief hiatus over the last few years during which it reinvented (Serena would probably prefer "refined") its technology proposition.

The company now sits in what is arguably a comparatively well-informed position to comment on software code optimisation and application delivery.

I worked on a Q&A with Serena senior VP of worldwide marketing David Hurwitz earlier this week, which in its entirety is not appropriate for this blog -- so here are a couple of highlights.

How is "devops" developing?

NB: A definition: devops is generally defined as the set of communication, collaboration and integration methods used between the software application development and programming teams and the IT team that looks after administration and infrastructure.

"I think the future for devops will be around getting greater recognition within the application development side for [operations team's] issues such as release management. Currently, this sits outside the application development team at a lot of the customers that I speak to, yet it fundamentally affects the success of application development projects. This is especially important for the business side, as they only care that things are in place and being used successfully," said Hurwitz.


Are we getting better, faster and more efficient with application delivery?

The initial answer to this is yes, with a "but". "Application development is getting quicker, but a lot of this is not really due to improvements in process. For example, employing the use of faster hardware will generally make an application run faster and that includes during the "build phase". It does not improve how that app was actually put together -- and [therefore] it does not translate into a marked difference in performance," said Hurwitz.

Serena's position is that optimisation of code is something that has to be considered as part of a long-term process and methods like Agile do make this approach something that is easier to take up. The barrier here for businesses wanting to take up Agile software development methodologies is that while it benefits the development team, it's hard to quantify what benefits it will deliver to the wider business.

Hurwitz finishes by saying that in reality, "While happier coders and continuous improvement of the quality of software that is delivered should add up to a business return, it's very hard to put a figure on."

Data Dungeons & Unstructured Data Dragons

bridgwatera | No Comments

Technical evangelist Courtney Claussen at Sybase IQ has this week posted a blog titled "Text Analytics - Slaying the Unstructured Data Dragon".

Claussen traces the etymology of the word "dragon" back to an original Greek word meaning "sharp-sighted one", saying that the dragon is purported to have unusually acute vision.

As we all know, dragons are scary beasts, but behind them we usually find temples and hidden treasures.


"Today's dragon in the world of data is the massive amount of unstructured text that originates from an extensive array of sources: web pages, email, news, blogs, social media sites, surveys and every kind of document imaginable. Unstructured data, like a dragon, is a big scary, fire-breathing beast -- overwhelming to face and seemingly impossible to vanquish. Yet like a dragon, it is the guardian of an enticing treasure trove of information," writes Claussen.

As we now aim to tame the beasts of unstructured data using text analytics focused data processing, Claussen suggests that there are multiple steps or phases needed to make sense of the chatter and help acquire business insight.

Phases here will include:

  • Collecting and preparing the data
  • Cleansing and 'tokenising' the data
  • Categorisation of the data
  • Running analytics on the repository of now 'enriched' data
  • Reporting and delivery on the data
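As a rough illustration (my own sketch, not Sybase IQ code), the cleansing, tokenising and counting end of that pipeline might look something like this in Java, with simple term frequencies standing in for the real categorisation, analytics and reporting phases:

import java.util.HashMap;
import java.util.Map;

public class TinyTextAnalytics {

    public static void main(String[] args) {
        // Collect and cleanse the raw text, then tokenise it.
        String raw = "The dragon guards the treasure; the DRAGON breathes fire!";
        String cleansed = raw.toLowerCase().replaceAll("[^a-z\\s]", "");
        String[] tokens = cleansed.split("\\s+");

        // A crude stand-in for categorisation and analytics:
        // term frequencies over the 'enriched' token stream.
        Map<String, Integer> termFrequency = new HashMap<String, Integer>();
        for (String token : tokens) {
            Integer count = termFrequency.get(token);
            termFrequency.put(token, count == null ? 1 : count + 1);
        }

        // Reporting and delivery.
        for (Map.Entry<String, Integer> entry : termFrequency.entrySet()) {
            System.out.println(entry.getKey() + " -> " + entry.getValue());
        }
    }
}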

Claussen rounds out by commenting that text analytics as exercised by machines is not nearly as sophisticated as the functions possible inside our human brains.

"But computers are superior at processing large volumes of data quickly. With strong algorithms, an extensive knowledge base and some human involvement to drive and refine the search, they can be very effective at locating and analyzing the unstructured data that matters to you," says Claussen.

"Sybase IQ 15 incorporates text analytics capabilities with its handling of large objects, specialized indexing for locating and scoring terms and phrases, and an integration layer for plugging in language processing libraries. Sybase IQ is an analytics platform that offers you serious artillery in your battle against the unstructured data dragon," she added.

Microsoft aims Kinect at healthcare & science 'motion sensing' apps

bridgwatera | No Comments

The beta release of Microsoft's Kinect Software Development Kit (SDK) is now available as a free download for noncommercial applications.

Microsoft says it hopes that the Kinect motion sensing input device will now garner interest from academic researchers with an interest in:

  • depth sensing,
  • human motion tracking
  • and voice and object recognition.

Microsoft says that developers working with the new toolkit are expected to build concept applications across a range of scenarios including healthcare, science and education.

"The Kinect for Windows SDK, which works with Windows 7, includes drivers, rich APIs for Raw Sensor Streams, natural user interfaces, installer documents and resource materials. The SDK provides Kinect capabilities to developers building applications with C++, C# or Visual Basic using Microsoft Visual Studio 2010," said said Anoop Gupta, distinguished scientist, Microsoft Research.


Virtualisation: A Manager's Guide

bridgwatera | No Comments

There are a lot of technical books published every year. I was even approached to write one once, but it was not to be. The pay is comparatively low, so does that mean that software programming guides from the likes of O'Reilly Media, APRESS and Wiley are always written by purists who really know their subject?

It's no guarantee is it?

The only way I can help promote any single technical book with any degree of accuracy is if I were to know the author personally -- and luckily, in the case of O'Reilly's Virtualisation: A Manager's Guide, I do!


Virtualization: A Manager's Guide (to use the Americanised Z) is written by Dan Kusnetzky, a true geek, a true technology guru and a true gentleman.

Kusnetzky presents a considered overview of how managers might first approach the use of virtualisation techniques across corporate network operations. Taking an explanatory approach to detail the concepts involved in this process, this book details the steps a company will need to take whether it is managing extremely large stores of rapidly changing data, scaling out an application, or harnessing huge amounts of computational power.

"This guide provides an overview of the five main types of virtualisation technology, along with information on security, management and modern use cases," say the promotional notes.

Topics include (and please excuse the American Z's):

• Access virtualization -- Allows access to any application from any device.
• Application virtualization -- Enables applications to run on many different operating systems and hardware platforms.
• Processing virtualization -- Makes one system seem like many, or many seem like one.
• Network virtualization -- Presents an artificial view of the network that differs from the physical reality.
• Storage virtualization -- Allows many systems to share the same storage devices and conceals the location of storage systems.

Right ingredients, no quantities = a recipe for chaos...

bridgwatera | No Comments

In this guest post for the Computer Weekly Developer Network, Keith Hughes, senior consultant at SQS Software Quality Systems, writes on methods to help developers avoid common coding errors.

Some programming languages will automatically initialise variables for you. If you are unfamiliar with the syntax and rely on this feature, you run the risk of misusing the language. To avoid creating performance issues, maintenance problems and potential bugs, be sure to initialise data correctly -- personally.

Picture this. You're at home in the kitchen, recipe book in hand. You've decided to bake your favourite cake. The list of ingredients is long. It's a small kitchen with limited workspace so you decide to get your ingredients ready in advance: a bowl with some flour, a bowl with a few eggs, a bowl with butter and so on.

As you fumble around, you accidentally tip a cup of coffee over the ingredients page. It's unreadable now! Thankfully you'd prepared, though: you know what the ingredients are.

On the next page, you're instructed to add half of the eggs to the flour. Half of the eggs? How many eggs though? You've got eggs ready, but no specific amount. If only you'd measured how much of each ingredient you needed.

The bowls represent the variables of your program, the ingredients the variable types, and the amount of each the initial values. Without specific or programmatically assumed initial values, we cannot make any logical reference to our variables at runtime.

All programmers are aware that with most languages you must declare a variable before using it in your code. While some languages allow you to skip this, most programmers also know that doing so is poor coding practice.

Declaring a variable involves more typing, but it creates code that runs faster and is easier to maintain. In some of the more dynamic languages, if a compiler encounters a variable that has not been previously declared, it creates it for you. The problem with this occurs when you declare a variable and misspell it later in your code. Now you've got two variables - and a potential bug.

A VBA example:

Dim myString As String
mySting = "Hey"
Dim myPhrase As String
myPhrase = myString & " don't do that"

Here, our compiler would tell us that myString has no initial value, but even if we had initialised myString properly, we would now have two variables (myString and mySting) for the price of one.

A case for staying in control instead of relying on the compiler.

A feature of many compilers is the provision of an initial value (auto-initialisation) for your freshly declared variable. These issues do not appear threatening on a small scale, but on a large scale they could have a serious impact on performance. In languages such as C# or Java, the compiler takes a more preventative approach to our uninitialised variables.

An example in Java:

void simpleFunction() {
    int i;
    i++;
}

In this example we simply declared a variable and its type, then tried to increment it straight away. But what are we incrementing? The variable has no value, so the compiler rejects it. This makes more sense:

void simpleFunction() {
    int i = 1;
    i++;
}

You can also initialise non-primitive objects in the same way. If Depth is a class, you can declare a member variable and initialise it like so:

class Measurement {
    Depth o = new Depth();
}

You can even call a method to provide an initialisation value:

int i = f();

A very simple and frequent occurrence of the uninitialised variable problem is where a conditional statement encloses the value assignment. Because the condition may never be true, we must give the variable a value to fall back on in all cases:

String color;
String fruit = "Apple";
if (fruit.equals("Banana")) {   // note: use equals() rather than == to compare strings
    color = "yellow";
}
System.out.println(color);

We immediately get "The local variable color may not have been initialised". In my Java IDE, this comes with an offer to initialise it on our behalf. In this case our String gets set to null, which is a valid value and a safe, economical assumption that allows the program to compile -- but it does not correct the potential bug.

Of course you may be asking the compiler "if you're so clever, why don't you do it?", but it's highly likely that something has been forgotten here by the programmer, and providing default values automatically could cover up a multitude of bugs. Forcing the programmer to provide a value is a much safer option, as the corrected version below shows.
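Here is the corrected snippet -- a minimal sketch in which the programmer, rather than the compiler, decides what the fallback value should be:

String fruit = "Apple";
String color = "unknown";  // a deliberate initial value chosen by the programmer
if (fruit.equals("Banana")) {
    color = "yellow";
}
System.out.println(color);  // prints "unknown" rather than failing to compile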

So, our friend the compiler is there to remind us each time, but we ought to decide what the initial values should be. We should, because those default values can, and will, lead to further bugs: they are nothing more than a computer-generated workaround. An example of auto-initialisation taking place en masse is a reusable class with a large number of uninitialised fields:

class Measurement {
    boolean t;
    char c;
    byte b;
    short s;
    int i;
    long l;
    float f;
    double d;

    void printValues() {
        System.out.println(t);
        System.out.println(c);
        System.out.println(b);
        System.out.println(s);
        System.out.println(i);
        System.out.println(l);
        System.out.println(f);
        System.out.println(d);
    }
}


This looks like trouble. Let's call it and see what happens:

Measurement d = new Measurement();
d.printValues();

The output from this is:

false

0
0
0
0
0.0
0.0
The blank line after "false" is the char, whose default value is the null character. Because these are member variables of a class we create with 'new', Java initialises them to default values automatically. While it's one less job for the programmer, it's also one more aspect they're not in control of.

In summary, whether it's to satisfy the compiler of a powerful language such as C# or Java, or to avoid careless scripting, the task of declaring and initialising our variables is something we can avoid initially, but it will catch up with us one way or another.

Tips:

  • Plan ahead, name your variables carefully and uniquely.

  • Prefix variable names according to type (e.g. strResult is a string).

  • It's good practice to always declare the variables you will use.

  • Declare the variable type if it is reasonable to do so for the language.

  • It's good practice to always give a variable an initial value.

    Is mobile software development about to get more 'customer-centric'...?

    bridgwatera | No Comments

    A little-known name from Silicon Valley caught my eye this week with a new mobile application framework specifically engineered for developers interested in building public cloud applications.

    Appirio (sounds like a fruit-based drink more than a software company to me) is the company behind this new product.

    By using the Force.com cloud platform, the company says that its new framework allows developers to build custom (we say 'bespoke' in the UK, please) native mobile apps without writing native iOS device code.

    "Unlike web apps that just 'happen' to run on mobile devices, mobile cloud apps give companies a better way to centralise the control and the maintenance of apps while taking advantage of the features, performance and offline capabilities that only native apps offer," said Narinder Singh, chief strategy officer at Appirio.


    Logically focused on Customer Relationship Management applications due to its Salesforce.com underpinnings, the company also asserts that its new offering enables iOS apps to be updated in real-time, in the field without needing to redeploy. 



    Appirio says it has already built templates for field surveys and location-based services, as well as time and activity tracking functions.

    The moral of this story? Working professionals in all fields will increasingly be seen using tablet PCs, toughbooks and other mobile devices to perform customer-centric tasks.

    OK yes, British Gas engineers already turn up with toughbooks and DHL use nice mobile handhelds too. But their systems generally just log back to base. Will the next generation of mobile (often embedded) software have a much more customer centric flavour?

    Are developers (and us users!) ready for the 'Internet of Things'..?

    bridgwatera | 2 Comments

    The BBC is running a lead story this morning that talks about the move from IPv4 to IPv6. Somewhat disconcertingly, the 'Beeb' only gets around to defining the difference between the two protocols in a box out somewhere down the page, but that is not the point of my blog.

    But let's just pay lip service to IPv4 and IPv6 first.

    IPv4 is the 32-bit system designed to identify unique connections to the network (for the most part, the Internet) and in its dotted-decimal form (e.g. 172.16.254.1) it provides just under 4.3 billion addresses.

    IPv6, being 128-bit and conventionally written in hexadecimal, gives a maximum of around 340 undecillion possible addresses.

    Undecillion is 10 to the power of 36, and it comes in just under duodecillion (10 to the power of 39), tredecillion (10 to the power of 42) and quattuordecillion (10 to the power of 45).
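    For anyone who wants to check the arithmetic, a few lines of Java (purely illustrative) compute both address spaces exactly:

    import java.math.BigInteger;

    public class AddressSpace {
        public static void main(String[] args) {
            BigInteger ipv4 = BigInteger.valueOf(2).pow(32);   // 4,294,967,296 addresses
            BigInteger ipv6 = BigInteger.valueOf(2).pow(128);  // roughly 3.4 x 10^38 addresses
            System.out.println("IPv4: " + ipv4);
            System.out.println("IPv6: " + ipv6);
            System.out.println("IPv6 addresses per IPv4 address: " + ipv6.divide(ipv4)); // 2^96
        }
    }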

    So what you say? What will an undecillion network addresses give us?


    It's the so-called 'Internet of Things' right? The point at which your milk carton has an RFID tag on it to let your electronically enabled fridge know that it has gone off. Your fridge (which is Internet-connected of course) then automatically orders your online shopping account to send you new milk and your auto-payment system takes that out of your bank account. All you have to do is pick your milk up at the door, open the carton and pour it into your PG Tips -- nothing more.

    But are we ready for all this? Are developers ready for this? Are web developers ready for this? Are manufacturers ready for this? Are the world's major IT vendors ready for this?

    Answer: umm, probably not.

    Today is actually World IPv6 Day as we test the water.

    What will developers do with 3000 new Mac OS X Lion APIs?

    bridgwatera | No Comments

    As you will know, Computer Weekly reported news of Apple's next operating system Mac OS X Lion last night, in line with events at the company's annual developer conference.

    Readers will have noted that Apple is delivering 250 new features and 3,000 new developer APIs with the new product.

    So what will that mean? -- and what is an API anyway?

    For the record -- an API (application programming interface) is basically a set of software-to-software programming instructions: it prescribes how one piece of software can talk to another through a series of "calls". If a company releases its APIs to the developer community (as Apple has done this week) then this represents a channel for programmers to use the services that sit behind those APIs. On the web an API is often exposed as an XML-based service, but in Lion's case we are talking about the native programming interfaces of the operating system itself.

    Anyway, reminder lesson over -- there's too much talk of APIs without enough comment on the "guts" of the system isn't there?


    As you'll have already guessed by now if you've read the earlier report by Jenny Williams, many of Apple's new APIs will push developers to build gesture-based apps and full-screen apps too.

    Programmers will also no doubt have an eye on incremental services that they might have in mind to work with the new Lion Mail app, which has what is described as an elegant widescreen layout and enhanced message threading.

    Elegant Mail from Apple you say? You wouldn't expect anything else. Would you?

    Business focused developers will be looking at the new options to perform "whole-disk" encryption for both the startup and external disks -- as well as the new wipe capability for all data. Surely these are attractive new "power options" to take advantage of in terms of new third party management software development.

    Finally, Mission Control is likely to be of key interest. This tool is designed to allow users to instantly access everything running on a Mac at once -- kind of like Exposé, Dashboard and Spaces all wrapped up in one unified experience.

    It's important to remember why Apple releases new versions of its operating system; obviously the company wants to be seen to be dynamic, constantly innovating and following a roadmap, but some of it is about throwing new features out there and seeing which ones stick with both users and developers.

    As a Mac addict myself (although I am firmly cross-platform, embracing Linux and Win 7 too), I for one never got on with Spaces.

    So did Apple get it right this time? We'll soon see right?

    Microsoft invites programmers to Windows 8 'BUILD' conference

    bridgwatera | No Comments

    Microsoft has been positively 'chatty' over the last ten days on the topic of Windows 8. So much so that the company's next big developer convention is now being promoted.

    The conference formerly known as PDC (Professional Developers Conference) and at one point also known as WDC ("Windows Developer Conference") has now been named BUILD.

    BUILD is scheduled for September 13-16, 2011 in Anaheim, CA and registration is already open.


    "Attendees can expect to see new capabilities for Windows Azure, Microsoft's tools emphasis for HTML5 support, new development opportunities on Windows Phone and our commitment to interoperable environments," says Microsoft.

    Microsoft senior VP of the developer division Soma Somasegar has used his own blog to comment: "Today, everyone can be a developer; the most tech-savvy generation we've ever seen is fueling demand for new tools and technologies. Many of the developers building web sites and apps that make an impact have no formal education in computer science or engineering. BUILD will be a gateway to new opportunity for all developers."

    About this Archive

    This page is an archive of entries from June 2011 listed from newest to oldest.
