For most large enterprises, the answer to all of the above will be yes, of course. But suppliers never tire of finding new ways to ask the same old availability and scalability questions. We're all quite familiar with the torrent of announcements and marketspeak that signals the end of summer, and the long, slow descent into winter. But somehow the rhetoric seems particularly highly charged among the high-end server suppliers this year.
The feeds and speeds are re-emerging with a vengeance, inviting the beleaguered data centre manager to take a peek under the covers, when all he really wants is a box that works, 24 hours a day, seven days a week.
There are two main reasons why the market is buzzing so loudly at the moment. First and foremost, of course, there is a palpable need for very powerful servers: to handle consolidation of distributed systems (both corporate applications and web/intranet servers); to host large-scale knowledge management and 'enterprise portal' applications; and to provide a long-term, scalable growth path for the highly unpredictable e-business, ERP, and CRM systems that are growing up on all sides.
The second reason for the buzz is that some key technologies are emerging that are changing the dynamics of the enterprise server business: the first wave of Intel IA-64 processors, in the shape of Itanium and later McKinley; the inexorably upwardly mobile Linux; and the still unproven Windows 2000 Datacenter Server. Add in 64-bit Unix and the NUMA-Q architecture, and we see a large number of suppliers standardising on a relatively small number of technological components.
This convergence is good and bad news. On the whole, it's good for the customers, because it should mean less complexity in the longer term, which is one of the biggest problems facing enterprise data centres. And as the market becomes more commoditised, of course, prices tend to be driven down.
But it's not such good news for the suppliers themselves; for while they espouse open standards, they continue to search for proprietary benefits which will give them an edge over the competition, and allow them to keep their margins up.
And that's why there's such a lot of shouting going on at the moment. Each supplier in the large server space has a different unique selling proposition and a different strategy for software and services, and each one is desperate to be heard. Quite apart from the problem of providing a robust migration strategy for their existing customers, they need to increase their share of a highly sought-after market.
Currently at the head of the pack is Sun, moving into an impressive second generation of 64-bit SPARC in the shape of UltraSPARC III, with an enviable marketing track record, but a somewhat isolationist determination to stick rigidly to Solaris. If Sun is to maintain its strong market share, it needs to manage its prices carefully and promote the manageability benefits of the Unix-only route.
For IA-64 co-designer Hewlett-Packard and its new Superdome, there is a clearly defined range of service offerings for various levels of mission-critical delivery, bold new service partnerships on the horizon, and a tripartite approach to software - with NT, Linux, and HP-UX running flexibly in physical or logical partitions.
Unisys, with the ES/7000, is king of the Cellular MultiProcessing (CMP) architecture, using its mainframe experience to help Microsoft scale its software up instead of out (and, more to the point, making Windows 2000 Datacenter live up to its name). CMP, which also has Compaq and Fujitsu/ICL on board as resellers, is a force to be reckoned with: the idea of 32-way Windows servers with mainframe credentials is still pretty mind-blowing - in fact, I might need a decade or two to get used to it - but it's coming fast, and with a price/performance level that needs careful attention.
And that leaves IBM, as multi-faceted as ever. It has servers of all shapes and sizes: a continuing investment in Power3 and Power4 systems, more than a passing interest in Itanium, a hand in the high-end NT camp, and a fascination with Linux scalability; all of which need to be balanced against its core S/390 and AS/400 architectures.
Talking of S/390 and AS/400, though, many large corporate users will still feel that the best way to provide large-scale server consolidation and virtually limitless scalability for e-business-critical apps is to move them all onto the mainframe. This approach has a great deal to recommend it, particularly for those who can run to a Parallel Sysplex configuration, and who don't yet have a pressing need for 64-bit apps. But for all their strengths, and dramatically improved price/performance, IBM's top-end platforms will have some stiff competition to face in the coming months from suppliers desperate to make their name in tomorrow's data centre.