September 2011 Archives

CERN & the big bang "bug" theory

bridgwatera

The world's largest physics lab, home to everyone's favourite particle accelerator, has been in the news more than once this last week.

Firstly, Antonio Ereditato, spokesman for the OPERA experiment, revealed that recent results suggest neutrinos fired from CERN to Italy's Gran Sasso laboratory may have travelled FASTER than the speed of light.

Keenly aware of the impact upon science (and the Universe) if the result stands, the team presented its work for other scientists to scrutinise.

As reported by the BBC, "The speed of light is widely held to be the Universe's ultimate speed limit, and much of modern physics - as laid out in part by Albert Einstein in his theory of special relativity - depends on the idea that nothing can exceed it."

"We tried to find all possible explanations for this," said Ereditato. "We wanted to find a mistake - trivial mistakes, more complicated mistakes, or nasty effects - and we didn't."

Question 1: In light of this news, should we now dig deeper into CERN's approach to data accuracy?

Question 2: How deep into its software application development and code/data analysis should we peer?


News also circulated last week of CERN's deployment of Coverity Static Analysis to improve the integrity of the source code found across a number of projects analysing data from CERN's Large Hadron Collider (LHC).

Since integrating Coverity's solution, CERN has eliminated more than 40,000 software defects that could otherwise impact the accuracy of its pioneering particle physics research.

One of the LHC's core software ingredients is ROOT, a data analysis framework used by CERN's physicists to store, analyse and visualise petabytes of data from the LHC experiments.

"Better quality software translates to better research results," said Axel Naumann, a member of CERN's ROOT Development Team. "Like CERN, Coverity finds the unknown; its development testing solution, Coverity Static Analysis, discovers the rare, unpredictable cases that can't be recreated in a test environment."

According to a Coverity-generated press statement, the integrity of ROOT's software is integral to the research conducted at CERN. Every second, scientists at CERN oversee 600 million particle collisions that will help to redefine the way we view the Universe. The collisions, which involve trillions of protons travelling at almost the speed of light, take place in the LHC, the world's most powerful particle accelerator. The experiments conducted around the LHC generate approximately 15 petabytes of data per year, equivalent to 15,000 standard disk drives. Given the size and scale of these experiments, CERN has implemented a number of processes to ensure data generated by the LHC experiments is accurate and as bug-free as possible.

"ROOT is used by all 10,000 physicists, so software integrity is a major issue," added Naumann. "A bug in ROOT can have a significant negative impact on the results of the LHC experiments and physicists' data analyses."

Within the first week of implementing Coverity Static Analysis, CERN's ROOT development team found thousands of possible software defects that could have impacted software integrity and research accuracy, including buffer overflows and memory leaks, with very few false positives. To improve the integrity of CERN's source code, the ROOT team spent just six weeks resolving the errors and continues to use the solution in production daily to prevent further software defects from occurring.
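
For the curious, here is a minimal C++ sketch of those two defect classes. It is entirely illustrative and invented for this post (it is not CERN or ROOT code); the comments indicate what a static analysis tool would typically flag and how a developer might fix it.

#include <cstring>
#include <cstddef>

// Illustrative only -- not CERN/ROOT code. Two classic defect patterns
// that static analysis tools are designed to flag.

void copy_label(const char* label) {
    char buf[8];
    // Potential buffer overflow: strcpy writes past 'buf' whenever 'label'
    // holds eight or more characters. The usual fix is a bounded copy or,
    // better, std::string.
    std::strcpy(buf, label);
}

double* make_histogram(std::size_t bins) {
    double* h = new double[bins];
    if (bins == 0) {
        // Memory leak: this early return loses the allocation above because
        // delete[] is never called. RAII (std::vector) removes the leak by design.
        return nullptr;
    }
    for (std::size_t i = 0; i < bins; ++i) {
        h[i] = 0.0;
    }
    return h;
}

int main() {
    copy_label("OK");              // safe with this input, but the pattern is unsafe
    double* h = make_histogram(64);
    delete[] h;
    return 0;
}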

So is there a big bang bug botheration going on here or not? Is CERN about to accelerate its next particles around without keeping its data-centric software back end in order?

Looking for an answer, let us turn to Dr Sheldon Cooper to see what he would say...

Leonard: So, tell us about you.
Penny: Um, me? Okay - I'm a Sagittarius, which probably tells you way more than you need to know.
Sheldon: Yes - it tells us that you participate in the mass cultural delusion that the sun's apparent position relative to arbitrarily defined constellations at the time of your birth somehow affects your personality.

...and now over to Ulrika with the weather, API

bridgwatera

Online climatological specialist Weather Underground has launched its Weather API platform at this week's TechCrunch Disrupt conference in San Francisco.

But why would developers want access to "reliable and in-depth" weather data?

Location-based mobile apps of course!

"With the recent surge in the number of location-based applications, we hope that our new API platform will further disseminate all of the weather data and proprietary forecasts that we generate for wunderground.com," said Alan Steremberg, president of Weather Underground.

This new API service offers three so-called feature plans that denote the depth of weather data required by the programmer. Data is free for developers that select the lowest usage plan, which means that wunderground.com's data is accessible to programmers on a budget.


In addition to providing access to what is described as "the world's largest network" of personal weather stations, Weather Underground's API product also includes free analytical tools that provide usage metrics and reports.
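
To picture what a developer on the free plan might actually do with such a feed, here is a brief C++ sketch using libcurl. The endpoint URL and API key below are placeholders of my own invention rather than wunderground.com's documented interface; a real location-based app would go on to parse the JSON response.

#include <curl/curl.h>
#include <iostream>
#include <string>

// Accumulate the HTTP response body into a std::string.
static size_t collect(char* data, size_t size, size_t nmemb, void* out) {
    static_cast<std::string*>(out)->append(data, size * nmemb);
    return size * nmemb;
}

int main() {
    // Placeholder endpoint and key -- not the provider's documented API.
    const std::string url =
        "http://api.example-weather.com/v1/YOUR_API_KEY/conditions/q/London.json";

    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    std::string body;

    if (curl) {
        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
        if (curl_easy_perform(curl) == CURLE_OK) {
            // The response is JSON; a real location-based app would parse out
            // temperature, conditions and forecast fields for display.
            std::cout << body << std::endl;
        }
        curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
}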

Apart from Alice Springs and Antarctica (where we assume weather reports are largely redundant), this looks like it could be a really interesting tool. The wunderground.com site is enjoyable to use from a desktop browser too.

Serena xChange 2011: violins, miniskirts and orchestrated IT

bridgwatera

Serena Software president and CEO John Nugent kicked off the main section of this Las Vegas-located customer/developer show this week with a thank you to 'Bella', the all-female electric violin quartet.

Here's the link if you want to see how the company woke the audience up: http://www.youtube.com/watch?v=yw9dbMOrbzU
In truth, the violins were there to make a point about application process orchestration; the miniskirts were an optional extra.

So, down to IT -- Nugent described the concept of the "IT dilemma" as being prevalent among software development shops today, so what's that?

Disparate unconnected teams, silos of brittle application systems that are badly connected, multi-vendor environments, legacy systems, badly annotated software change management data, requirements management functions that have never been properly aligned to business processes (let alone business goals and financial objectives), ... the list goes on...

No surprise then to hear an ALM (application lifecycle management) focused company talk about all the 'trouble' factors that the development and IT operations teams face. Talking about integrating tools and processes puts Serena logically into competition with a big name -- IBM.

Serena likes to talk about "Bluewashing" -- well, if your biggest competitor is IBM then you would, wouldn't you? You'll be hard pressed to find any links to this term on the web though. Perhaps that will change now.

So what's in Serena's bright and shiny toolbox now? There are Serena Requirements Manager, Development Manager, Business Manager, Request Manager, Release Manager and Service Manager (in no particular order), now comprising the much-enhanced "solution" set.

So to Serena Business Manager (SBM): this is a tool which the company says has been adopted by 1,600 customers to date and used to develop 10,000 applications so far. With SBM, applications are "composed visually", from which point they can then be integrated fully into an appropriate ERP system, such as those provided by Oracle or SAP for instance.

"Serena has made the move from ALM into IT operations," CEO Nugent proudly asserts.

His company's tools now boast the ability to handle "release control" along with an appreciation for "release automation", so that a "release calendar" function for major, minor and emergency releases now exists... Serena also extends onward to process controls which help to accommodate workflow notifications and audit trails as they are needed on an ongoing basis.

Of all the company's web site product info, this paragraph is among the most defining, "From requirements management to issue and defect tracking, incident management, change request management and release management, IT organisations use SBM to deliver new products and services on time and within budget by automating and optimising delivery lifecycle processes. IT service management. By leveraging SBM in lieu of the highly complex, inflexible and expensive packaged applications of old, [Serena can] deliver tailored processes that aid with service desk management, asset management and infrastructure provisioning."

But if Serena is the self-proclaimed "Titan of ALM" -- can the company really cross the chasm into operations (ops) as an ALM company... and so evolve into a full-scale "end to end" top tier IT player as it says it will?

According to Serena's marketing division, "We believe this is what is needed for IT -- i.e. an orchestrated joining of apps and ops as release cycles are now so rapid -- and given the fact that IT operations (of which ITSM is a piece) needs to be reengineered within the new "application evolution" (i.e. a world where the new speed of development driven by mobile devices and web apps is just so darn fast now), which in and of itself is helping to drive the popularity of Agile methodology."

Does Serena have the pedigree for this? Its spokespeople openly state, "What IT needs to do in ops, we have already done in apps."

So why is the term "orchestration" so important?

Isn't that just marketing-speak for "connected workflows and processes" really? No, says Serena, it is (and I quote carefully) "a fundamental architectural approach that has at its core process automation", so that what can be automated should be automated.

During the show, Serena asked its audience who among the crowd still does requirements management by hand (and a lot of hands went up) -- and "by hand" Serena means dumping data into standard "productivity applications" such as Word and Excel. What Serena advocates is a more automated, managed approach using tools that can capture and manage... Automation, then, should also thoroughly embrace release management, now that the reality is that some development shops will need to perform releases more than a hundred times a month in some scenarios.

These are not Serena's words -- but Facebook is rumoured to "release" (in some arm or division or other) every 20 minutes. Release management then is a cash cow, a market ripe for the plundering (sorry, that should be "leveraging for optimal profit") and Serena is out for the win.

So why does Serena think it has the USP here?

The problem of addressing release management is real. The difficulty is that developers think it's the operations team's fault -- but the operations team thinks that it's the developer's fault. According to Serena, the truth is that it's both.

Serena: S in ITSM stands for Service - and "support"

bridgwatera

ALM company Serena Software loves to talk about three things: customers, software system orchestration and acronyms.

Customers should not be a surprise; what IT vendor worth its salt does not chant the "customers, customers" mantra deeply, repeatedly and intensively? Software systems orchestration should not be a surprise either; this indeed is the term that Serena uses to detail its approach to Application Lifecycle Management (ALM) and its now widened set of developer and "DevOps" (IT operations) focused product offerings. That just leaves acronyms...

So what does the company mean by ITSM, a term it uses repeatedly...

ITSM of course means Information Technology Service Management. In itself this phrase seems to cover a multitude of turpitudes (actually that's wrong, they're all positive things), most of which exist in the so-called "back office", an arena in which Serena (arguably) is positively thriving, having successfully reinvented its core technology proposition and started to post healthy profits.


But meeting Serena execs here in Las Vegas this week at the company's xChange annual user and customer event, one is left with the impression that the company places the CAPITAL S "Service" element of ITSM in DevOps at a rather more sophisticated level than mere service.

Serena suggests that it is thinking about the process behind support, where traditional ITSM providers are perhaps missing a trick. Put simply, the company says it is looking at ITSM to encapsulate support because it leads through to fuller service management and business requirements.

According to Serena's publicity function, "Take the typical service desk call - it'll be an application not working, or hardware failing. Fixing that problem is what the traditional service desk product does. Serena's approach is based on looking at the wider process around that problem - so automating stuff getting fixed... and then allowing reporting on the ITSM side of things to get sent back into the business. Linking tools together where it works and providing what else is required as and when."

So the S in ITSM goes deeper than help desk service does it? Yes says the company, this is not just help desk management -- this is business driver-aware management that embraces:

• Asset management
• Software release management
• Knowledge management
• Configuration management
• Database management

So much focus does the company place on "support" that it ran a Help Desk Haiku competition recently. A haiku, if you are as new to this term as I was, is a Japanese poem consisting of 17 syllables broken up into a 5/7/5 pattern -- here was my attempt:

"PC say no way man. Help desk hell is here. Give me tech help heaven."

A better example is perhaps...
"Try turning it off. Now try turning it on. You're very welcome."

So do we need to more clearly define support and service and clarify the full breadth of their functions when we examine software application development lifecycle management?

In a word, yes.

"On-Demand" does not translate, or does it?

bridgwatera

So I was at an IT show recently -- no need to mention who it was or which vendor I was speaking to as it's "symposium season" and invites to conference centres are abundant just now -- but a communications person from a country famous for sausages, beer and oompah bands offered up the following statement...

"Well, of course, ON DEMAND does not really translate."

What she meant was that there is no German term for on demand, which I can well understand. It is, after all, a very Americanised IT term that we might well imagine a German speaker using "intact" in its original English form.

Unsurprisingly, I used this experience to spend a happy 15 minutes on Yahoo! Babel Fish seeing what comes up when you try and translate ON DEMAND into various languages.


German - Bedarfs (I have no idea)
Italian - a richiesta (seems to make sense, sounds like "request")
Spanish - a pedido
Russian - по требованию
French - sur demande (surely this is OK?)
Dutch - op bestelling
Korean - 주문으로
Japanese - 請求あり次第

Enough already, right? I'm sure most of the above don't translate properly. But it's fun to investigate. The subject of IT-related English words pervading other languages as a whole is one (I hope) for another day.

Large Hadron Collider has Intel Inside

bridgwatera

Intel's news this week has been dominated by the company's work with Google to optimise future releases of the Android platform for Intel's family of low power Atom processors.

Coinciding with the Intel Developer Forum which ran this week in San Francisco, the chip maker has clearly been happy to lay its open source credentials out here. As CEO Paul Otellini puts it, he wants to grow Intel's business in adjacent computing market segments... i.e. smartphones on open platforms in this case.

So when Intel makes big announcements (especially software-related ones) and you want the software developer/programmer line, it is worth Googling James Reinders, a multi-core evangelist and Intel's director of software products.

Reinders used his Intel Developer Forum blog post as a chance to highlight the fact that work on CERN's Large Hadron Collider is supported by Intel's MIC (Many Integrated Core) technology, which runs extreme computing programs that help process results from the collider itself.


Not to detract from the jolly nice Google-related news which is already out there in the ether, I think Reinders' opening blog comment is excitingly geeky in all the right ways...

"In Justin Rattner's keynote this morning at IDF, we got to see another example of how we make programs, for multicore processors, run on many-core processors. Andrzej Nowak from CERN openlab demonstrated "Track Fitter," on a Intel MIC software development platform, which looks for tracks of particles in the data from a particle detector. This online processing near the detectors on the Large Hadron Collider uses advanced algorithms to determine what the real data from the detectors means. The code scales well on multicore processors and it scales well on our Intel MIC software development platform (which we call Knights Ferry). The code demonstrated required no source code changes in moving from running on multicore systems to running on a many-core system. These results by the team at CERN openlab using our tools, show very well how our investments in helping software development stay clear of detours."

Reinders' "detour" comment relates to his piece's title, "Extreme computing is a journey, not a detour" -- a comment that in and of itself resonates with Otellini's "computing is a process of constant change" message, and of course Andy Grove's 'Strategic Inflexion Points' concept in his wonderful book Only The Paranoid Survive.
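
For a flavour of what "no source code changes" means in practice, here is a toy data-parallel loop in C++ with an OpenMP pragma. It is my own illustration over synthetic data, not CERN openlab's Track Fitter; the point is simply that the same source picks up however many cores the compiler's target platform offers.

#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const long long n_hits = 1 << 20;          // synthetic 'detector hits'
    std::vector<double> x(n_hits), y(n_hits), radius(n_hits);

    for (long long i = 0; i < n_hits; ++i) {
        x[i] = std::sin(0.001 * i);
        y[i] = std::cos(0.001 * i);
    }

    // The pragma is the only nod to parallel hardware: a multicore Xeon or a
    // many-core part simply gives the runtime more worker threads to spread
    // this loop across, with no change to the source.
    #pragma omp parallel for
    for (long long i = 0; i < n_hits; ++i) {
        radius[i] = std::sqrt(x[i] * x[i] + y[i] * y[i]);
    }

    std::printf("first computed radius: %f\n", radius[0]);
    return 0;
}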

Is SAP HANA a data analytics 'ace in the hole'?

bridgwatera

SAP has predictably saved up news relating to its HANA product for this week's TechEd show here in Las Vegas, but what is it? Described variously as the company's in-memory computing platform with a strong emphasis on ERP, it is perhaps more directly described as a high-performance analytic appliance (hence the name) with supporting software and hardware considerations to match.

SAP CTO Vishal Sikka said some time back now that HANA would be central to the company's future plans; indeed, this week has seen the revered Stanford doctor talk effusively about the product. Notwithstanding the fact that he seems unable to talk about HANA without repeatedly mentioning the words "innovation", "future" and "innovation" (sorry, was that innovation twice?), SAP has clearly polished its baby carefully before this public outing.

This week sees two new solutions built on HANA, namely SAP Smart Meter Analytics software and SAP COPA Accelerator software. Billing these products as applications designed to give users real-time insight into "big data", SAP is vying for a cemented position in the data analysis market with this offering.

As SAP puts it, this is, "Analysis, planning, forecasting and simulations in a fluid, natural way versus traditional approaches that are rigid, sequential and time-consuming."

But HANA may end up being much more important for SAP than it is even showing us now, so could this product really be the company's ace in the hole?

According to Forrester Research, SAP has emerged as the leading advocate of in-memory computing technology as a key pillar of its innovation strategy: "[SAP] HANA allows SAP to develop innovative new applications that can consume and analyse massive volumes of data in near real time while also providing a cost-effective, elastic computing platform."

HANA's future may see it rise to even loftier heights. HANA uses a slender columnar structure, as opposed to the row-based storage and more complex 'star schema' designs employed by traditional relational databases.

Essentially it is an in-memory database with a flat two-dimensional structure built to take advantage of modern processing power and memory capabilities. Amit Sinha, SAP's vice president of in-memory computing and HANA solution marketing, has suggested that relational databases today employ comparably archaic techniques shaped by hardware and software limitations dating back as far as two decades. HANA, arguably, moves us forward...
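
A crude way to picture the row-versus-column difference is the classic array-of-structs versus struct-of-arrays layout, sketched below in C++. This is my own illustration, nothing to do with SAP's actual engine: summing one attribute in the columnar layout touches only that attribute's contiguous memory, which is exactly the property an in-memory analytic engine exploits.

#include <cstdio>
#include <vector>

struct SaleRow {                 // "row store": one record per sale
    int customer_id;
    int product_id;
    double revenue;
};

struct SalesColumns {            // "column store": one array per attribute
    std::vector<int> customer_id;
    std::vector<int> product_id;
    std::vector<double> revenue;
};

double total_revenue_rows(const std::vector<SaleRow>& rows) {
    double total = 0.0;
    for (const SaleRow& r : rows) total += r.revenue;  // drags every column through the cache
    return total;
}

double total_revenue_columns(const SalesColumns& cols) {
    double total = 0.0;
    for (double v : cols.revenue) total += v;          // touches only the revenue column
    return total;
}

int main() {
    SalesColumns cols;
    std::vector<SaleRow> rows;
    for (int i = 0; i < 1000000; ++i) {                // synthetic sales records
        rows.push_back({i, i % 50, 9.99});
        cols.customer_id.push_back(i);
        cols.product_id.push_back(i % 50);
        cols.revenue.push_back(9.99);
    }
    std::printf("row scan: %.2f, column scan: %.2f\n",
                total_revenue_rows(rows), total_revenue_columns(cols));
    return 0;
}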

Where this puts HANA for the future is potentially in a position of power. SAP would clearly love to "positively disrupt" the database market and send shivers up Oracle's spine -- the fact that it now has Sybase under its increasingly albatross-like corporate wing doesn't do it any harm at all here either does it?

"There is a massive simplification happening all around us. Layers are being dissolved at an unbelievable pace; people, businesses, data and machines are becoming more directly connected. This virtuous cycle of connectedness leads to disintermediation of layers, which drives end-users to become more empowered and demand better user-experience -- challenging us to create more connectedness," said SAP CTO Vishal Sikka.

SAP doesn't shy away from suggestions of massive change in the data market. The company is happy to suggest that boundaries between the application layer and the database layer are dissolving.

Sorry - I forgot to mention, when Sikka is not saying "innovation and the future", he can usually be found happily repeating "disintermediation of layers" until fade. I do the gentleman a disservice; he is an entertaining speaker and true intellectual.

So as these layers dissolve, which to some degree they surely must, we will see some of the work (i.e. the calculations) previously performed by the application layer being executed at the database layer. Remember what Forrester said? "SAP has emerged as the leading advocate of in-memory computing technology..."

So how does it work?

In practice, this in-memory computing means data doesn't have to travel between so many layers inside the IT stack; this means calculations are executed faster; this means real-world transactional workloads happen faster; this allows accurate data-intensive analytics to happen faster; this means corporate information dashboards are updated faster; this means executives can take actions faster; this means companies are glad they used SAP technology.

Well, that's the theory - and the company appears to be cooking up a strong enough argument.

Sure, SAP will face competition from Oracle, IBM and Microsoft, who are all working to develop their own in-memory database technologies. But if we have to follow the scent of a leader just now, it just might have to be SAP for the time being.

An ace in the hole? Maybe. Strongest hand at the table? It's not for me to say. Worth a gamble? Well, this is Las Vegas after all...


What to expect from SAP TechEd 2011 + Sybase TechWave

bridgwatera

The age of consolidation in information technology has led us to a time when big brand "acquisitions" mean that, increasingly, post-buyout vendors still exist under their former names beneath their new parent brands.

Sun sits under Oracle, Cognos sits under IBM and Sybase sits under SAP as Sybase, an SAP company.

As such, reporting from SAP TechEd 2011 is tough. This is because Sybase TechWave is going on in the same building (the Venetian Hotel & Casino Las Vegas) at the same time.

So in an impassioned attempt to be even-handed, allow me to ask the question at both levels -- what can we expect from SAP TechEd 2011 and Sybase TechWave 2011?

I like to call these events conventions or symposia; SAP likes to call this annual experience its "premium technical education conference" series. Now in its 15th year, SAP TechEd is held in Las Vegas, Bangalore, Madrid and Beijing between now and the end of November.

Star of the boardroom this year will be SAP's CTO Vishal Sikka who heads the company's technology and innovation practice. Sikka is rumoured to be talking about "advances in mobile technology and in-memory computing" -- so think Sybase for the former product group and SAP HANA for the latter.

SAP's TechEd workshops and lectures will focus on the SAP NetWeaver, Sybase Unwired Platform and SAP BusinessObjects solutions. Hitting a total of what is claimed to be 19,000 attendees across the four world conferences, SAP insists that it is bringing a new focus to SME this year and says that it will also focus on on-premise, on-demand and on-device applications as a whole.

Attempting to build a little tangential topicality this week is a conference theme driven by an initial keynote presented by game research and development guru Dr. Jane McGonigal. Discussing the power and future of gaming and how its "collaborative and motivational aspects are being used to solve some of the most difficult challenges facing humanity", Dr Jane will draw on her famous TED talk. The connection? Attendees will be able to experience gamification in action with Knowledge Quest, a game layer added to the event this year that will combine SAP TechEd content with points and challenges.

... and as for the general look and feel?

Well, Sybase TechWave was run on a comparatively small scale in 2009 and 2010. So if Sybase has been whipped up a notch into an SAP world of (and I'm going to list here) experts, partners, employees, customers, community members, mentors, peers and event speakers... then we have to expect big things. Let the week commence.

Enemies in the cloud: virtualisation is the enemy of visibility

bridgwatera

Cloud computing seems to have created more confusion in its formative years than we might have naturally expected, given that its essential form is no more than Internet-based services delivery.

Transaction performance management company Precise appears to have recognised this fact. The company has thus issued a report detailing the priorities and concerns that companies may have when moving enterprise applications to virtual and cloud-based environments.

The company's recent IT survey has found the following:

• In 2011, 39% of organisations moved email and collaboration systems to virtual infrastructures, followed by IT management (33%), sales & marketing (20%), finance/HR/ERP (21%) and security (13%).
• In 2012, 33% of respondents report that they will move finance/ERP /HR applications to the cloud, followed by e-mail and collaboration software (23%) and IT management applications (21%).
• Over time, 37% of companies say they will migrate 61% or more of their applications to a private cloud environment, while only 6% of companies will do the same on a public cloud service.

Now here's the bad news -- resolving application problems gets more complex in the cloud.

Slow application performance is the biggest problem and also the most costly, according to the survey. After slow performance (41%), top problems reported by IT managers include slow time to identify the root cause of issues (24%), followed by inter-application shared resource contention and multi-tenant storage contention (both 18%).

In some regards, moving applications to the cloud will ease performance issues, giving IT the ability to quickly move a high-priority application to a more optimal resource when performance begins to suffer. Some 26% of the survey respondents report that they expect application performance will improve in the cloud, yet a larger proportion (37%) predicted that it will take longer to pinpoint the causes of problems after applications move to the cloud.


"When a problem occurs, virtualisation is the enemy of visibility," says Zohar Gilad, executive VP with Precise. "Compounded with dynamic provisioning in the cloud and server cluster architecture, it's difficult to determine which server, VM, or application instance is to blame when troubleshooting."

Yes, that's very nice, but then Precise sells transaction monitoring software, doesn't it? So the company is naturally bound to highlight the need to plan for the highly dynamic nature of the cloud.

Allowing quick access to historical performance data during troubleshooting would be a good idea. Does Precise sell that service? Of course it does. You can see why this survey was cooked up in the first place.

In fairness, there are some good points highlighted here. The question of how we "peer in" to the cloud has, arguably, not been fully raised yet, as we still discuss "getting to" the cloud more than anything else.

Cloud monitoring specialists, we'll no doubt hear from you next.

Is "technical debt" breaking the software development bank balance?

bridgwatera

This is a guest blog written by Rutul Dave of Coverity. The company is focused on software developer defect tracking issues relating to software integrity; as such, its products and services tackle source code analysis tasks.

I recently read a LinkedIn discussion from the Agile Alliance group. This is a global non-profit organisation that works to advance Agile development principles and practices to make the software industry more productive and to deliver better products.

The discussion posed a question around the concept of "technical debt" and asked if it was still a valid metaphor in today's global software development world.

The short answer: yes!


Technical debt, software debt, design debt (call it what you will) is a real and growing problem for development organisations. We are hearing this from many of our customers as a growing concern, not only due to the associated maintenance costs, but also due to the risk of delayed time to market and lost customer satisfaction. Its impact is contrary to many of the proposed benefits of Agile.

Blog editor's technical note:

For those unfamiliar with "technical debt" - the term was coined by Ward Cunningham, an American computer programmer credited with developing the first ever wiki. Technical debt is the 'residue' that spills out of 'quick and dirty' software development projects leaving clunky code here and there that often needs to be 'tidied up' later on. Like financial debt, technical debt needs to be repaid -- in this case in the form of extra developer 'clean up' efforts. Also like financial debt, sometimes it is worth taking on technical debt in order to seal a deal and/or make a tight deadline and take advantage of a market opportunity. Follow the financial metaphor through fully (if you will), and you can easily see that companies (or their software development functions) can get lazy and let their technical debt get out of control - and at this point technical debt really does start impacting the financial debt incurred by the business.

Back to Rutul...

Developing software quickly is a growing challenge. Maintaining legacy applications is burdensome for many organisations due to software complexity. Technical debt can lead to a system so brittle and difficult to maintain that we see development teams who are afraid to even make a simple change for fear of breaking something else in the process.

Inheriting technical debt via open source is also a rising concern. As open source proliferates in commercial development projects, what can be a source of innovation, speed and cost advantage in the short term could pose a hefty technical debt burden over the long run.

I believe technical debt will be increasingly used as a term to engage a dialogue between development and the business. Technical and non-technical people alike can easily grasp this concept. It is a way to associate and quantify a direct cost to the business due to decisions made during development -- and provide a common ground for development and the business to make decisions and trade-offs together.

Short-term speed may come at the price of long-term delays and cost. So does your development organisation track technical debt as a metric today? If so, how do you calculate it? These are questions that should be addressed.
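
By way of illustration only -- the categories and cost figures below are invented for this post, not an industry formula -- here is one simple way a development team might turn its backlog of known code-quality issues into a technical debt number the business can read.

#include <cstdio>

struct DebtItem {
    const char* category;     // e.g. a static-analysis defect class
    int open_count;           // outstanding issues of this kind
    double hours_to_fix;      // estimated remediation effort per issue
};

int main() {
    // Invented example backlog and rates, purely for illustration.
    const DebtItem backlog[] = {
        {"memory leaks",        120, 1.5},
        {"buffer overflows",     40, 2.0},
        {"duplicated modules",   15, 8.0},
        {"untested legacy code", 30, 6.0},
    };
    const double hourly_rate = 75.0;   // loaded cost per developer hour

    double total_hours = 0.0;
    for (const DebtItem& item : backlog) {
        total_hours += item.open_count * item.hours_to_fix;
    }

    // The headline figure development can put in front of the business.
    std::printf("estimated technical debt: %.0f hours (~$%.0f)\n",
                total_hours, total_hours * hourly_rate);
    return 0;
}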

Our friends at voke are conducting research for an upcoming report on "Agile Realities", asking businesses whether Agile is hype or helpful. It's a good time to write about the subject, as Agile techniques are a very popular topic these days.

Although Agile has been around for a while now, what is arguably relatively newer is the expectation that development tools should support Agile development -- baking automated code testing tools such as static analysis and unit tests into short, iterative development processes.

The benefits of discovering code problems early are valid regardless of the development method.

Tighter integration of static analysis and code testing within Agile - now that's new!


About this Archive

This page is an archive of entries from September 2011 listed from newest to oldest.
