July 2010 Archives

IBM responds to EU mainframe investigation


IBM has issued a statement following the extension of the EU competition authority's investigation into its mainframe business.

 

Here is IBM's response to the investigation:

 

Not long ago, numerous high tech pundits and IBM competitors declared the mainframe server dead -- a dinosaur, extinct. At that time, many companies were abandoning mainframe servers, and others were developing distributed alternatives to centralized computing. In the face of these predictions and fundamental marketplace changes, IBM made a critical decision: to invest billions of dollars in its mainframe technology to bring unprecedented levels of speed, reliability and security to the enterprise market. These investments reinvigorated the mainframe server as a vital competitor in a highly dynamic marketplace.

Today's computer server market is clearly dominated by Intel-based servers from HP, Dell, Oracle and many others, as well as Unix servers. The numbers speak for themselves: mainframe server sales today are a tiny fraction of worldwide servers -- representing just 0.02% of servers shipped and less than 10% of total server revenues in 2009, according to IT industry analyst firm IDC -- and shrinking from 2008. That is in sharp contrast to Intel-based servers, which represented more than 96% of all server shipments and nearly 55% of total server revenues in 2009, according to IDC. The first quarter 2010 server share reports from IDC and Gartner show this trend continuing, with mainframe server revenue declining and Intel-based servers growing. Today, the mainframe server is a small niche in the overall, highly-competitive server landscape, but it remains a source of great value for those IBM clients who value its high levels of security and reliability. Yet even with all of its substantial innovations, the migration of certain customers and workloads away from mainframe servers to other systems remains common.


Certain IBM competitors which have been unable to win in the marketplace through investments in fundamental innovations now want regulators to create for them a market position that they have not earned. The accusations made against IBM by TurboHercules and T3 are being driven by some of IBM's largest competitors -- led by Microsoft -- who want to further cement the dominance of Wintel servers by attempting to mimic aspects of IBM mainframes without making the substantial investments IBM has made and continues to make. In doing so, they are violating IBM's intellectual property rights.


IBM intends to cooperate fully with any inquiries from the European Union. But let there be no confusion whatsoever: there is no merit to the claims being made by Microsoft and its satellite proxies. IBM is fully entitled to enforce its intellectual property rights and protect the investments we have made in our technologies. Competition and intellectual property laws are complementary and designed to promote competition and innovation, and IBM fully supports these policies. But IBM will not allow the fruits of its innovation and investment to be pirated by its competition through baseless allegations.

Are these really baseless allegations? The new zEnterprise hardware certainly looks impressive. But the value of mainframe computing is not just in the hardware; it is in the whole ecosystem.

Show me the proof


In an age of increasing computerisation there are times when we need to question what the computer is telling us. Take spreadsheets, for example. It is not uncommon to discover that the built-in functions behave differently from their equivalents in the underlying macro and programming languages.

An example of this is the simple rounding problem. The built-in function may round up on a 5 while the equivalent function in code may round down. Use a mix and the errors may balance out, or they may not. You also have to be aware of the number of decimal places you are working with in the underlying data and of how many places you display on screen.
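
To make the half-rounding difference concrete, here is a minimal sketch in Python (used purely as a stand-in for a macro language): the built-in round() rounds halves to the nearest even digit, while the decimal module can be told to round halves up, which is how most worksheet ROUND() functions behave. Excel is the classic case: the worksheet ROUND() function turns 2.5 into 3, while VBA's Round() returns 2.

    from decimal import Decimal, ROUND_HALF_UP

    for v in ("0.5", "1.5", "2.5", "3.5"):
        half_even = round(float(v))   # Python 3 round(): halves go to the nearest even digit
        half_up = int(Decimal(v).quantize(Decimal("1"), rounding=ROUND_HALF_UP))
        print(v, half_even, half_up)  # 0.5 -> 0 vs 1, 2.5 -> 2 vs 3, and so on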

In a complex spreadsheet you can find many examples of rounding, but in a BI environment where you are simply importing the data into a spreadsheet you may have, literally, hundreds of small errors. The rounding error is something you can fix through settings, but strangely enough few people do. Much more serious are the complex functions that few users want to understand beyond the simple entry of numbers and the delivery of a result.

So why isn't this constantly raised as an issue? Actually, it used to be. VisiCalc, SuperCalc, Lotus, Smart, Excel, PlanPerfect, Quattro Pro, OpenOffice - all of these products have had and still have this issue. But, over time, as vendors have refused to acknowledge or fix it, people have given up trying to get a solution.

That solution would require any spreadsheet vendor to send their function library to an independent source to have it properly validated. This has been suggested but every vendor has resisted it because, ironically, the last thing that they want is to be told they are doing things in the wrong way.

The impact on the database world might seem negligible but it isn't. User Defined Fields in some classes of databases use functions to calculate values based on other fields. The more complex the underlying data, the less chance there is of users actually examining that data to check for errors.

In financial and scientific fields, these function errors can be serious. For example, if you are doing computational fluid dynamics (CFD) for a Formula 1 racing car and a built-in function is calculating a key value wrongly, your car could be slower than the competition. It doesn't take much performance degradation to cost you places and waste large amounts of money.

In aircraft, rocket or missile technology, a mistake could take a long time to be spotted given the underlying complexity of the work. For those in financial markets, there are often a lot of other systems that may provide the checks and balances, but those systems are becoming ever more tightly integrated, which means that data may be a calculated field in one system but entered as a value in the next. Now the problem cannot be traced to its source and has assumed more serious proportions.

There are instances where such errors are unlikely to have much impact at all on the decisions made - for example, in a retail environment when looking at the sales and stocking levels of goods. However, if you were looking at gross margin, a rounding error could move a product from marginal to one that gets dropped.
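
A worked example with invented figures shows how little it takes. Suppose a product line is kept only if its gross margin is at least 20%; carrying the cost at three decimal places but rounding it to the two places shown on screen is enough to flip the decision:

    THRESHOLD = 0.20        # keep the product if gross margin >= 20%
    price = 4.99
    unit_cost = 3.994       # true cost carried to three decimal places

    true_margin = (price - unit_cost) / price                  # ~0.1996 -> drop
    displayed_margin = (price - round(unit_cost, 2)) / price   # ~0.2004 -> keep

    print(true_margin >= THRESHOLD, displayed_margin >= THRESHOLD)   # False True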

BI products are aimed at allowing users to extract data from these underlying sources, do a range of manipulations on that data and then use the results to make business decisions. In some cases, that data is returned to the databases and used by other users and teams as source data for their work. As in the financial market example, the error is now compounded.

Recently, I spoke with Aster Data, a vendor of Massively Parallel Processing database solutions and we talked about a recent press release in which they announced that they had added over 1,000 MapReduce-ready functions to their data analytics suite. Such a massive increase in the number of functions without any third-party validation has to be of concern.

Aster Data say that they have not had any requests from customers to validate their functions or prepackaged applications. They do see a lot of requests from customers for more functions and applications and believe that customers are already starting to build their own use cases to validate data.

While Aster Data customers might be doing their own checks, there is no widespread evidence that this is a common practice among users of spreadsheets, BI tools or databases. With the current explosion in end-user BI, it's time for software vendors to up their game and prove that their functions are consistent, accurate and fit for purpose.
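
Short of vendors submitting their function libraries for independent validation, users can at least spot-check the functions they depend on against a reference implementation. A minimal sketch of the idea in Python, where vendor_round is simply a stand-in for whatever built-in is actually under test:

    from decimal import Decimal, ROUND_HALF_UP

    def reference_round(value, places=2):
        """Independent reference: round halves away from zero using the decimal module."""
        quantum = Decimal(1).scaleb(-places)
        return float(Decimal(str(value)).quantize(quantum, rounding=ROUND_HALF_UP))

    def check(vendor_fn, reference_fn, cases, tolerance=0.0):
        """Report every case where the vendor function and the reference disagree."""
        return [(v, vendor_fn(v), reference_fn(v))
                for v in cases
                if abs(vendor_fn(v) - reference_fn(v)) > tolerance]

    vendor_round = lambda v: round(v, 2)   # stand-in for the function being validated
    print(check(vendor_round, reference_round, [2.675, 2.665, 1.005, 0.125]))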

 

Archived comments:

---------Giles Thomas | July 23, 2010 7:15 PM

This is an interesting post; spreadsheet integrity is an increasingly critical issue, especially in the financial markets. Traders, for example, use spreadsheets to create pricing models to help them quickly decide on what positions to take, and when to take them. The spreadsheets are then routinely shared, cut and pasted and adapted across trading desks or even whole trading floors. As a result, they can easily become cumbersome, botched-together "Frankensheets", and contain inherently difficult-to-spot inaccuracies.

While working at a large investment bank, I learned just how widely-used spreadsheets really are, and how frequently they end up being a source of frustration (or financial loss) to their users. It was after several years of this that I went on to build my own spreadsheet, Resolver One. In it, the formulae you put into the grid are compiled down into code in the Python programming language before being executed, while the equivalent of macro code is already in Python -- so the built-in functions are identical in both and (obviously) can't behave differently.

A separate problem with spreadsheet accuracy -- one you don't touch on, perhaps because it doesn't happen so much in databases, but which I've seen in the work of spreadsheet users who've reached the dangerous stage where they have started writing macros but don't have much experience in them yet -- is the difference between the "functional" model on the grid, where you can (in theory) recompute the cells in any order that respects their mutual dependencies, and even skip cells whose dependencies haven't been changed since the last recalculation, and the "imperative" model in the macro and user-defined function (UDF) language, where a function can change global state and have side-effects. Because you can write a UDF that returns a different value each time it's called, based on global state, it's easy for an inexperienced macro developer to write UDFs that make the spreadsheets that use them generate inconsistent results.

---------Marc | October 7, 2010 2:49 PM

I would agree 100% with Giles' comments that users writing their own macros can get themselves into hot water.

I can't see the connection between the author's discussion about producing sound macros in Excel, and his conclusion that Aster Data systems would (may?) suffer from the same issues.

Maybe this article is just a rehash of the old garbage in, garbage out adage.

MapReduce is used by Google to index the World Wide Web; is that not some proof?

TIBCO Silver Spotfire publishes BI to the Cloud


From the smallest home office business to the largest enterprise, the amount of data that businesses accumulate continues to grow. Using that information effectively is often challenging because users do not possess the tools or the knowledge on how to make the most of their data.

Small and even mid-sized enterprises often lack the resources to acquire BI tools, skills and training when compared to larger enterprises. This puts them at a disadvantage when it comes to competing and, more importantly, gaining a better insight into their business and market.

According to a recent press release, TIBCO Silver Spotfire is targeted at the SME as "a fully functional on-demand offering designed to enable anyone to create, publish and share custom dashboards or reports for business analytics and business intelligence (BI) in the Cloud."

For the first year, those companies who want to try this can get free access to TIBCO Silver Spotfire. It comes with an authoring client and expansive web-based sharing and hosting for the user's favourite personal or business Spotfire application. After the year is up, TIBCO says that there will be a range of monthly hosting options for those who want to continue using the product.

TIBCO also describes this as not just BI as Software as a Service (SaaS) but as "social BI". The idea is that individuals can quickly create and share information across the business as part of an ad-hoc corporate analytics knowledge base. Any data created through TIBCO Silver Spotfire can also be integrated into a range of social media, blogs and online articles.

At the heart of all of this is the TIBCO Silver Cloud Computing platform, which was updated in May 2010 as a hosted platform for TIBCO customers and on which the company has now made provision for Spotfire users alongside the Silver Spotfire beta.

A free one year subscription will be attractive to many customers. However, it is important to note that this is not the full Spotfire Enterprise product that TIBCO sells but a reduced functionality product. Customers who want to move to the full Enterprise version later will be able to do so and, at the same time, pull in the work that they have published on Silver Spotfire.

TIBCO is very clear about the target audience here. This is about extending the reach of BI into small companies, small branch offices and departments that need a simple BI tool, and where TIBCO currently has no presence. By using a free, hosted, Cloud-based platform with no initial costs for customers, TIBCO believes that many companies will be tempted to try BI for the first time.

As a software developer tools vendor, TIBCO is also hoping to build a community of developers who want to build dashboards and other applications on top of the TIBCO Silver Cloud Computing platform, and in particular on Silver Spotfire. This would allow TIBCO to attract an ever-increasing set of customers and make the Silver platform more attractive to third-party Cloud hosting vendors looking for a value-add solution in order to attract more business.

Users work with a local copy of the Spotfire client to create their data visualisations and then upload the data to the server. The maximum single file size is 10MB of compressed data. That might not sound like a lot, but TIBCO believes it is more than enough for 300,000-500,000 rows of data, depending on the level of data redundancy and the type of visualisation used.
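
A quick back-of-the-envelope check suggests the claim is plausible for typical, fairly redundant business data, although wide rows will eat into the ceiling quickly:

    LIMIT_BYTES = 10 * 1024 * 1024   # the 10MB upload ceiling per file

    for rows in (300000, 500000):
        print(rows, "rows ->", round(LIMIT_BYTES / rows), "compressed bytes per row")
    # roughly 35 bytes per row at 300,000 rows and 21 bytes per row at 500,000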

This is a little disappointing but signals where TIBCO currently is with this product. While a fully fledged Cloud BI platform would have been nice, this is about hosting your results, not hosting your BI. What will be interesting over the next year is whether TIBCO can not only build the complete BI Cloud platform but also sell it as part of the Silver Cloud Computing platform to third-party vendors. Success in this area would be a significant market changer but would also need to be linked to a host of other components such as virtual machines, a fully fledged SaaS platform and an active developer community.

As the work is all done locally, when the data is published to the Silver Spotfire platform, the files are not linked to the underlying data sources. This is important as it means that the files are not going to auto update as the core data changes and users will need to build their own local processes to recreate and republish.

In this first release the data will be hosted in the US and there is no geo-locking. This means that you need to carefully control any data published through the platform to ensure that you do not inadvertently breach any data protection rules. With one of the goals of Silver Spotfire being to make it easier to use social media for publishing data, there is also a real risk of data leakage.

Stopping this is more challenging than many companies realise, so it is important that companies step up their data management training for users. This does not preclude using Silver Spotfire, but it is something that must be taken into account, especially as there is no guidance on data protection policies on the TIBCO website.

Another missing element here is federated security. This is something that TIBCO has said it will be working on over the next year as it builds momentum with Silver Spotfire. At present, it is talking to the early adopters and will talk to any new customers about what they want in terms of security.

Despite the security and data protection concerns, this looks like a very interesting opportunity and one that is well worth spending some time investigating.

IBM assimilates the competition


Porting a database from one vendor's offering to another has always been difficult. To try and ease the pain, vendors provide porting guides, third-party tools companies have products that will take your schemas and stored procedures and recreate them for the new target database, and software testing companies have products that let you create a series of acceptance tests against the newly ported data.

Despite all of this, the way we embed code inside applications today means that we often don't find the problems until it is too late and the help desk starts taking calls. One of the main reasons that embedded code causes us difficulty has been the divergence of SQL from a single standard into three main variants with a fourth - parallel SQL - starting to make its mark as databases become ever larger and queries more complex.
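
The divergence shows up in even the most basic constructs. The same logical query - say, the ten most recent orders - has to be written differently for each engine, and a port has to find and rewrite every such statement buried inside application code. The dialect variants below are shown as plain strings purely for illustration:

    # The same "ten most recent orders" query in four dialects (circa 2010)
    TOP_TEN_ORDERS = {
        "db2":        "SELECT * FROM orders ORDER BY order_date DESC FETCH FIRST 10 ROWS ONLY",
        "sql_server": "SELECT TOP 10 * FROM orders ORDER BY order_date DESC",
        "mysql":      "SELECT * FROM orders ORDER BY order_date DESC LIMIT 10",
        "oracle":     "SELECT * FROM (SELECT * FROM orders ORDER BY order_date DESC) WHERE ROWNUM <= 10",
    }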

User Defined Fields are another challenge. They are often used to create a complex field type that the developer didn't want to break down into multiple fields for some reason associated with their code. They may also have been used to hold an unsupported data type from another database during a previous port.

Challenges go far beyond field and SQL constructs. Big database vendors are designing in features to support their own developer tools and key data-driven applications. These end up as elements inside the database which often have no equivalent in another vendor's products.

But no matter how many challenges you identify, people still want to port their databases. It may be financial, it may be that the new DBA or IT manager has a preference for a different vendor or it may be that someone outside of IT has decided that we should now be moving all our tools over to a new supplier.

IBM has decided that it is time to change the landscape. Alongside its existing migration documents and professional services engagements, IBM is now allowing developers to run native code from both Oracle and Sybase against DB2 9.7.

All of this comes at a time when Oracle is still digesting Sun and Sybase is being bought by SAP. By allowing native code to be run IBM believes that those customers who are unsure about what the future holds for Oracle and Sybase can quickly move to DB2 without having to cost in the rewrite of thousands of lines of application code.

And come those customers have. IBM claims that it has been able to take banking customers from Sybase who were worried about systems optimisation. At the same time, it has picked up over 100 SAP implementations that were either looking at or already deployed on Oracle.

There is another reason for IBM to make itself the universal database target. While its competitors spend large sums of money buying application solutions and then rewriting them to run on their databases, IBM is able to simply focus on the underlying database technology. This allows it to focus its R&D on database performance and optimisation and then consume any application tier.

These are not the only two databases that IBM is targeting. It has both MySQL and Microsoft SQL Server in its sights although, at present, those are still part of a migration rather than a native code solution and IBM is unable to say when there will be native code solutions. For Microsoft SQL Server, this should be relatively simple as the T-SQL it uses has not diverged much from the Sybase T-SQL from which it was derived.

To help developers understand more about this there are a number of very interesting articles up on the IBM DeveloperWorks web site.

 

http://www.ibm.com/developerworks/data/library/techarticle/dm-0907oracleappsondb2/

http://www.redbooks.ibm.com/abstracts/sg247736.html?Open

http://www.ibm.com/developerworks/wikis/display/DB2/Chat+with+the+Lab

http://www.ibm.com/developerworks/data/downloads/migration/mtk/

http://www.ibm.com/developerworks/data/library/techarticle/dm-0906datamovement/

 


The problem with Apple


The media seems to love the iPhone and iPad. TV gadget shows, celebrities and journalists feed the hype over a new Apple product. An iPhone or iPad launch is a big event, which drives more and more people to buy products on day one, before anyone has even reviewed the product.

 

This means that products are not properly beta tested - witness the left-handed problem on the iPhone 4. It is entirely Apple's fault for not running extensive quality assurance and product tests. If Apple is such a great brand, then it should offer customers the very highest quality products. Unfortunately, this is not how the Apple marketing machine currently works. Let's hope that today its execs have to admit they were wrong, and agree to recall millions of products to fix the ridiculous iPhone 4 problem, which could have been spotted by any beta tester.

Teradata and ESRI combine to map data


At the ESRI User Conference in San Diego, CA this week, Teradata and ESRI announced a new collaboration aimed at storing business data and Geographical Information System (GIS) data in the same database. The goal is to enable businesses to better understand where their customers are in order to do more focused marketing.

This use of GIS and BI data is nothing new, with an increasing number of marketing departments building their own solutions over the last 20 years. What is new is that all the data from both systems is being stored in the same database. The advantage for users is that rather than have to build complex queries against multiple data sources, they can write simpler, faster queries against a single database.

There are other advantages here for users. With both sets of data inside the same database, new data can be automatically matched with GIS information as it is entered. For any retailer doing overnight updates of their store data into a central BI system this provides them with the ability to create "next day" marketing campaigns aimed at individual stores. For large retailers, such as supermarkets, this is going to be highly attractive.
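
Conceptually, the automatic matching is a spatial join: as each new row arrives, its coordinates are tested against stored geometries such as store catchment areas. Here is a minimal sketch of the idea in Python using the shapely library; the store names and shapes are invented, and in the Teradata/ESRI case the matching would happen inside the database rather than in client code:

    from shapely.geometry import Point, Polygon

    # Invented catchment areas for two stores, modelled as simple rectangles
    catchments = {
        "store_north": Polygon([(0, 5), (10, 5), (10, 10), (0, 10)]),
        "store_south": Polygon([(0, 0), (10, 0), (10, 5), (0, 5)]),
    }

    def match_store(x, y):
        """Return the store whose catchment contains the location, or None."""
        point = Point(x, y)
        for store, area in catchments.items():
            if area.contains(point):
                return store
        return None

    print(match_store(3.0, 7.5))   # store_north
    print(match_store(3.0, 2.0))   # store_south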

As well as retailers, Teradata and ESRI are targeting a number of other vertical markets such as telecommunications, utilities, transportation and government departments. In all these cases, being able to map usage and consumption to GIS data will mean the ability to deliver better services as need arises.

One area that will benefit greatly is emergency response, where the mapping element of the GIS data will enable response teams in the field to immediately match population locations with access routes. It will also provide them with the ability to create safe zones based on local geography without the problem of trying to match conditions on the ground with remote operations staff.

All of this marks a switch away from the integration plans of IBM, Oracle, Microsoft and others, who believe that the future is in being able to access multiple data sources at query time, and it will be interesting to see how long it takes for the others to follow Teradata's example. One company that could move quickly down this route is Microsoft, using its Bing Maps data, but it currently has no plans to do so.

Archived comments:

Matt | July 16, 2010 5:32 PM

"What is new is that all the data from both systems is being stored in the same database. The advantage for users is that rather than have to build complex queries against multiple data sources, they can write simpler, faster queries against a single database."

How is this new? It has been possible to do this for years with Oracle/Spatial or PostgreSQL/PostGIS. The main stumbling block for ESRI users has been ESRI's proprietary data structures and expensive middle tier server technology and its historically poor support for truly enterprise class RDBMS.


Microsoft Patch Tuesday: 13th July 2010


With this July Microsoft Patch Tuesday Security Update, we see a moderate number of security updates with four updates to Windows XP, Windows 7 and Office, including three updates rated as 'Critical' and one rated as 'Important'. Unfortunately, all patches released this month will most likely require a reboot of the target system. In addition, all of these Microsoft Security Updates relate to Remote Code Execution vulnerabilities.

The ChangeBase AOK Patch Impact team has updated the sample application database to more than 2,000 unique application packages. All of the applications in this large sample application portfolio are analysed for application-level conflicts with Microsoft Security Updates and potential dependencies.

Based on the results of our AOK Application Compatibility Lab, only one of the July Patch Tuesday updates is likely to require significant application-level testing:

· MS10-044 Vulnerabilities in Windows Kernel-Mode Drivers Could Allow Elevation of Privilege

We have included a brief snapshot of some of the results from our AOK Software that demonstrates some of the potential impacts on the OSP application package with the following image:

[Image: Patch Tuesday1.JPG]

In addition to this high-level summary, we have also included a small sample of one of the AOK Summary reports from a smaller sample database:


[Image: Patch Tuesday2.JPG]

Microsoft Patch Tuesday Update Testing Summary

MS10-042 Cumulative Security Update of ActiveX Kill Bits

MS10-043 Cumulative Security Update for Internet Explorer

MS10-044 Vulnerabilities in Windows Kernel-Mode Drivers Could Allow Elevation of Privilege

MS10-045 Vulnerabilities in Microsoft SharePoint Could Allow Elevation of Privilege 


[Image: Patch Tuesday3.JPG]


Security Update Detailed Summary

[Image: Patch Tuesday4.JPG]

[Image: Patch Tuesday5.JPG]

[Image: Patch Tuesday6.JPG]

Oracle announces BI 11g


Last week, Oracle announced the availability of Oracle Business Intelligence 11g. This is a major release for Oracle and comes at a time when Microsoft and SAP have both made major announcements of their own.

The emphasis at the launch was on integration, with Charles Phillips, President of Oracle, stressing the depth of the Oracle stack from storage to applications and all points in between. Much of the stack has come from the Sun acquisition and it is too soon to be sure that the emphasis on integration that Phillips kept stressing is really there.

Phillips was keen to point out that it was not just the ability to present an integrated stack that set Oracle apart from the competition but its focus on standards. Phillips told attendees at the launch that Oracle "supports standards, helps define standards and is standards driven". One of the challenges for Phillips here is that Oracle has been particularly coy on what is happening with the whole Java standards process.

The key message for Oracle BI 11g, alongside integration, was that as the industry moves forward we will begin to see BI embedded in all our processes. This makes a lot of sense. To many people, BI is still all about sales and competitive edge. Yet some companies are already looking at BI tools to see what else they can provide, such as a better understanding of IT management in complex environments or of the performance of software development. At present, however, Oracle has no stated intention of addressing either of these markets.

There is little doubt that Oracle is keen to counter the messaging from Microsoft about BI everywhere. In the launch, there was a keen focus on the end-user experience and the ability not only to access BI from any device but to ensure consistency of what you were working with.

This is important. One of the criticisms of the Microsoft TechEd announcements was the loss of synchronisation and control of data as users saved into SharePoint and created a lot of disconnected data. Oracle is keen to ensure that it can keep control of the data and there was a significant focus on security and data management.

Yet despite all this talk of access from everywhere, Oracle has decided against embedding BI tooling inside Oracle Open Office and has instead opted for seamless integration with Microsoft Office. This has to be a mistake, and those who believe that Oracle has no interest in Open Office will see it as the smoking gun they have been waiting for.

If Oracle really wants to see Open Office and the enterprise version become a significant competitor to Microsoft's Office then embedding the BI tools is a requirement. This will also make it easier for developers to create applications that are part of the daily toolset used by end users and make BI just another desktop function.

Phillips' presentation was just the warm-up. It was left to Thomas Kurian, Executive Vice President at Oracle, to make the full technical presentation of Oracle Business Intelligence 11g.

Kurian wasted no time in presenting the Common Business Intelligence Foundation upon which the Common Enterprise Information Model is built. This is designed to manage all of the integration between applications/devices and the data sources.

There were several features that stood out for me. The first is reporting and Oracle BI Publisher. Lightweight, able to access multiple data source formats and to output to a wide range of file formats, it is also scalable. What was missing was any announcement that Oracle was going to release it as a BI appliance. That would be a game changer, especially if those appliances could be deployed to remote offices.

The second key feature is the integration with WebCenter Workspaces allowing users to collaborate on BI reports. What isn't fully clear yet is whether this has the same potential for data explosion as Microsoft allowing users to save to SharePoint.

The third feature is the Oracle BI Action Framework which can be manual or automated and will appeal to developers looking to build complex applications. It will tie into the existing Oracle Middleware and uses alerts to detect changes to key data. Users can not only ensure that they are working with the latest data but developers can use those alerts to call web services and trigger workflows.
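
Oracle supplies its own wiring for this through the Action Framework, but the underlying pattern is easy to sketch: an alert fires when a monitored figure crosses a threshold and the handler calls a web service to start a workflow. The endpoint, payload and threshold below are invented for illustration and are not Oracle APIs:

    import json
    import urllib.request

    WORKFLOW_URL = "https://example.internal/workflows/reorder"   # hypothetical endpoint

    def on_stock_alert(sku, stock_level, threshold=100):
        """When stock for a SKU falls below the threshold, call a web service to start a workflow."""
        if stock_level >= threshold:
            return None
        payload = json.dumps({"sku": sku, "stock_level": stock_level}).encode("utf-8")
        request = urllib.request.Request(
            WORKFLOW_URL, data=payload, headers={"Content-Type": "application/json"}
        )
        return urllib.request.urlopen(request)

    on_stock_alert("SKU-1234", 42)   # would POST {"sku": "SKU-1234", "stock_level": 42}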

Security is a major problem with BI. As data is extracted from multiple sources and stored locally, it is possible to lose data and end up with unauthorised access to data. The BI 11g Security model uses watermarking of reports as well as encryption along with role-based security. One potential issue here will be how that encryption will be deployed throughout the business chain when you are distributing data to suppliers and customers.

Finally, not only is Oracle looking to provide a range of packaged BI applications within the BI Enterprise Edition but it is ensuring that the BI capability is embedded inside its existing applications. This two pronged approach should mean a tighter link between the BI tools and the applications.

This was a big announcement from Oracle and one that requires some digestion. There are missing elements, such as the Oracle Open Office integration, and the failure to announce a BI appliance. However, it is clear that Oracle is determined to stamp its control on the BI market and, as it continues to integrate Sun into the product strategy, we can expect to see complete end to end solutions in the coming months.

You can watch the keynotes of both Phillips and Kurian as well as download their presentations at: http://www.oracle.com/oms/businessintelligence11g/webcast-075573.html


SAP and Sybase - who gains?


When SAP announced its intention to acquire Sybase in May 2010, it immediately raised a number of questions. Seven weeks on and neither side seems particularly interested in publicly talking about the rationale behind this acquisition.

At first glance, this appears to be a smart move by SAP and a business saver for Sybase. 

A decade ago, ERP and CRM applications were seen as only relevant for large enterprises. Today, with the explosion of hosted services, even the smallest of companies can buy access to such software. This means that vendors need to be quick to respond and be able to support a much wider spread of customers.

SAP has led this market for a number of years but the acquisition of Siebel by Oracle, the consolidation by Microsoft of its Dynamics division and the success of Salesforce.com have started to make inroads into the business. SAP has not been idle. It has built a strong developer community and has established hosting deals with a number of companies such as T-Systems who host over 1.5m SAP seats.

Despite all of this, SAP has one part of the cycle that it does not own and its competitors do - the underlying database. The ability for customers and developers to tune their applications for maximum performance is critical and the best way to do this is to own all the components.

This presents SAP with a real challenge. It has done very well out of IBM, Microsoft and Oracle, all of whom have invested significant sums of money in building consultancies capable of tuning SAP on their database products. Oracle recently set a new benchmark for SAP performance running on top of its own database products so it might seem that there is little need for SAP to buy its own database product.

Sybase has been the fourth largest database vendor for some time now but the last two decades have not always been kind to it. In the 1990s, not only did it rival Microsoft with its Rapid Application Development tools but at various points was seen as the market leader. When the RAD tools market took a dive, Sybase was hit very hard and has really struggled to reinvent and reposition PowerBuilder as a mainstream development tool.

The well-publicised split between Sybase and Microsoft that left Microsoft with SQL Server did allow Sybase to concentrate on the enterprise market while Microsoft built a product that could compete with it. While no longer a significant player in the general database space, Sybase does have a serious position in the high-end database market, mobile services and low-end portable databases. Sybase also has its own Business Intelligence and Analytics tools.

All of these appeal to SAP. It can use the high-end database product, which includes in-memory and cloud versions, to extend its hosted platform offerings. The mobile services platform means that it can position itself in the operator and payments arena. Finally, the low-end portable database means that its development community can build applications for mobile workforces, where data collected on devices such as smartphones, PDAs and laptops will synchronise easily into the enterprise solutions.

Taken together, this would appear to give SAP a complete set of offerings and enable it to compete with those competitors who have a complete stack from developer tools, through ERP/CRM and database.

However, there are issues that need to be resolved. The first is that a large percentage of SAP sales comes through the professional services teams at IBM, Oracle and even HP. By having its own database and tools, SAP will need to prove that it is not intending to abandon customers using other databases.

While the Sybase tooling looks good, it is still far from perfect. Building tools for a wide range of mobile devices is not easy and the current tools are very Microsoft-focused. Despite talking about Rich Internet Applications for several years, Sybase has failed to deliver any serious RIA tooling.

The BI tools are not widely used and SAP is going to have to make decisions as to how to integrate them with Crystal Reports to create a single powerful end to end reporting and analytics engine.

So, is this a wise move? Provided SAP is prepared to drive Sybase and not allow it to just operate as a fully autonomous business unit, this makes sense. But if it treats Sybase in the same way as EMC did VMware for several years, any benefits will be slow in maturing.


About this Archive

This page is an archive of entries from July 2010 listed from newest to oldest.

June 2010 is the previous archive.

August 2010 is the next archive.
