No one can predict the future, but knowing which technologies are likely to shape the business world can help you select an infrastructure that works today and grows tomorrow.
If you are in charge of your company's IT strategy, choosing the right infrastructure for moving data around the organisation will have a huge effect on its future: how you grow, what your support overhead will be and, ultimately, the core functionality of your network.
Forward-thinking companies are now benefiting from low-cost upgrade strategies that offer substantial improvements in functionality and quality of service, while those who chose unwisely are struggling to justify costly and complex upgrades. Knowing where each type of architecture fits best into your organisational structure, and which market trends are likely to affect the day-to-day running of your business, is therefore essential.
During the 1970s, use of the more versatile and less expensive minicomputers became common, since users demanded computing power close to where the work was being performed. In addition, for some applications, users were sharing files, programs, storage devices and peripherals. Data needed to be exchanged not only across departments and buildings, but also across large geographic distances. An increased need for computer-to-computer communications was apparent, and these data exchanges required a higher transmission rate than the earlier dumb-terminal-to-computer connections. When minicomputers were connected together in a network, they could replace the central computer. By segmenting applications, minicomputer networks handled the processing more economically, and together they provided more processing power than the central computer ever could have.
Network design was geared toward the individual jobs that made up each working environment. Minicomputers were placed where the work was being performed, each processor manipulating a clearly defined job set. Neighbouring processors and applications transferred data back and forth via the network. Networks could be expanded and reconfigured easily to meet changing needs, office configurations and expansion. Since the networks were fairly modular, a failure in one part of the network would affect only a very limited segment of the entire operation.
In the 1980s and 1990s networks were recognised for their advantages in many different environments including offices, laboratories, and factories. It is now common for systems to be located at the site of the application for data processing, process control, word processing, email, database management, and graphic design.
Ethernet - here and now
The office environment for most info-centric businesses today is still physically much the same as it was 20 years ago: people at desks, communicating with others and working on the data held in company files. As industrialisation reduces the number of people required to produce goods, IT has filled the gap, making data a commodity in itself. With structured cabling (CAT5) installed alongside mains sockets in every modern office building, Ethernet has become the pipe for the majority of desktop PCs in offices around the globe.
The migration from 10Base-T up to switched 100Mbit is currently underway for many organisations. With this trend, several large companies that traditionally offered networking solutions across a broad base of technologies are now repositioning themselves as Ethernet-only companies.
The biggest of these, Hewlett-Packard, now sees Gigabit Ethernet to the desktop as a genuinely cost-effective option for business, with voice, video and email-over-IP as the driving factors. Gigabit Ethernet is still only being deployed as a backbone architecture but, as in the move from 10Mbit to 100Mbit, companies will migrate once the necessity arrives. Changing boards in switches is not enough: certification of cabling will almost certainly be necessary for a reliable migration to 1000Mbit.
Business likes IP: it understands how it works, and more and more peripheral manufacturers are producing kit that simply hangs off the corporate intranet via IP. Email and web servers, video conferencing kit, telephones and, of course, printers are all IP compliant.
1000Base-T is a form of Gigabit Ethernet technology designed to operate over up to 100 metres of a customer's existing category 5 unshielded twisted-pair cabling. The technology uses digital signal processing to provide low-cost, high-performance connectivity between switches and to the desktop. By taking advantage of the vast installed base of category 5 UTP, 1000Base-T offers the opportunity to create a cost-effective Gigabit Ethernet solution by substantially reducing the cost of a network connection.
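The arithmetic behind squeezing a gigabit through ordinary category 5 cable is worth spelling out. A back-of-the-envelope sketch, based on the IEEE 802.3ab scheme of PAM-5 signalling at 125 Mbaud carrying two data bits per symbol on each of the four pairs:

```python
# How 1000Base-T reaches 1000 Mbit/s over category 5 cable:
# all four pairs are driven simultaneously, each at 125 Mbaud,
# with PAM-5 line coding carrying 2 data bits per symbol.

PAIRS = 4                  # all four pairs of the Cat5 cable
SYMBOL_RATE = 125_000_000  # symbols per second per pair (125 Mbaud)
BITS_PER_SYMBOL = 2        # data bits encoded per PAM-5 symbol

line_rate = PAIRS * SYMBOL_RATE * BITS_PER_SYMBOL
print(f"{line_rate / 1_000_000:.0f} Mbit/s")  # -> 1000 Mbit/s
```

Keeping the per-pair symbol rate down at 125 Mbaud, the same as 100Base-TX, is what lets the existing cabling cope; the heavy lifting is done by the DSP at each end.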
Unfortunately, the new high-speed Ethernet paradigm is still flawed by quality-of-service issues and the inefficient nature of Ethernet. Recent figures from ODS Networks point out that the 1000Mbit pipe offered by Ethernet delivers a true performance level of only around 400Mbit. These criticisms are being addressed in the new ranges of switches from Cisco, 3Com and Hewlett-Packard. HP's new ProCurve switches (8000M, 4000M, 1600M and 2424M) now ship with policy-based quality-of-service modules.
These modules allow network managers to configure their networks so that business-critical applications and servers are given priority during periods of high bandwidth demand. CoS enables the prioritisation of traffic at the desktop switch level by creating differentiated levels of priority using criteria such as IP ToS bits, IP address, VLAN or protocol information. Packets are then tagged, switched and queued for forwarding according to their priority. These features use the 802.1p/Q standards and are completely interoperable with other existing switches and routers.
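The queueing behaviour this describes can be sketched in a few lines. The following is only an illustration of strict-priority forwarding over the eight 802.1p priority levels (0-7, with 7 highest); real switches add per-queue buffers, weighted scheduling and hardware tagging:

```python
from collections import deque

# Minimal sketch of class-of-service queueing: packets are tagged
# with an 802.1p priority (0-7, 7 highest) and a strict-priority
# scheduler always forwards from the highest non-empty queue.

class PriorityScheduler:
    def __init__(self, levels=8):
        self.queues = [deque() for _ in range(levels)]

    def enqueue(self, packet, priority):
        self.queues[priority].append(packet)

    def dequeue(self):
        for q in reversed(self.queues):  # highest priority first
            if q:
                return q.popleft()
        return None  # all queues empty

sched = PriorityScheduler()
sched.enqueue("bulk-ftp", priority=1)
sched.enqueue("voice-over-ip", priority=6)
sched.enqueue("email", priority=2)
print(sched.dequeue())  # -> voice-over-ip
```

However congested the lower queues become, the voice traffic tagged at priority 6 always leaves the switch first, which is exactly the guarantee business-critical applications need.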
Cisco uses its edge devices to provide quality of service and traffic classification across the network. The network core acts as a transport, acting on the classification assigned at the edge of the network. The Catalyst 5509 also provides IP Type of Service classification, a feature that distinguishes both Cisco's and 3Com's product lines.
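Part of the throughput shortfall quoted earlier by ODS Networks is simply framing overhead: a rough sketch of how much of the wire Ethernet's fixed per-frame costs consume (the 400Mbit figure also reflects protocol-stack and switching losses, which this deliberately ignores):

```python
# Per Ethernet frame, 18 bytes of MAC header plus frame check
# sequence and 20 bytes of preamble/SFD and inter-frame gap are
# spent regardless of payload size, so small frames waste a
# large share of the nominal pipe.

FRAME_OVERHEAD = 18   # MAC header (14) + frame check sequence (4)
WIRE_OVERHEAD = 20    # preamble/SFD (8) + inter-frame gap (12)

def efficiency(payload_bytes):
    total = payload_bytes + FRAME_OVERHEAD + WIRE_OVERHEAD
    return payload_bytes / total

for payload in (46, 512, 1500):
    print(f"{payload:>5}-byte payload: {efficiency(payload):.1%}")
```

Minimum-size frames (46-byte payload) use only about 55% of the wire, while full 1500-byte frames reach roughly 97.5%, which is why traffic mix matters as much as raw link speed.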
Although Gigabit Ethernet is touted as an alternative to ATM, once you get to ISP and carrier level, the trend to outsource is often more cost-effective. Given the telecommunications companies' extensive investment in ATM over the last 10 years, only smaller start-ups with small, localised markets seriously consider Gigabit Ethernet as a challenger to ATM.
ATM - the carriers' favourite
ATM is not the stagnant pool that some claim. Although it is more complex and more difficult to implement, ATM enjoys widespread acceptance among the telecommunications companies, reinforced by the emergence of higher-bandwidth OC-48 switches able to handle both IP and ATM protocols. For most businesses, ATM implementation is "someone else's problem", but where ATM intersects with the network edge, innovative products need to be deployed.
The likes of Nortel, Newbridge and Cisco are evolving to provide more IP-over-ATM services, while some of the smaller players, such as General DataCom, Madge and Xylan, are offering switches and routers that acknowledge the multimedia requirements of larger enterprises. General DataCom's Apex DV2 has plug-in modules for highly compressed MPEG2 video, data and voice, all over IP. Groundbreaking products like the DV2 are becoming the norm as both large enterprises and carriers upgrade their infrastructure to accommodate IP.
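Carrying IP over ATM does come at a price, often called the "cell tax". A sketch of the arithmetic: every 53-byte ATM cell carries only 48 payload bytes, and the AAL5 adaptation layer adds an 8-byte trailer plus padding out to a whole number of cells.

```python
import math

# The ATM "cell tax": 5 of every 53 bytes are cell header, and
# AAL5 pads each packet (plus its 8-byte trailer) to a whole
# number of 48-byte cell payloads.

CELL = 53
CELL_PAYLOAD = 48
AAL5_TRAILER = 8

def cells_for(ip_packet_bytes):
    return math.ceil((ip_packet_bytes + AAL5_TRAILER) / CELL_PAYLOAD)

def overhead(ip_packet_bytes):
    wire_bytes = cells_for(ip_packet_bytes) * CELL
    return (wire_bytes - ip_packet_bytes) / wire_bytes

for size in (40, 576, 1500):
    print(f"{size:>5}-byte packet -> {cells_for(size)} cells, "
          f"{overhead(size):.0%} overhead")
```

A 40-byte TCP acknowledgement fits in one cell but wastes a quarter of it, while a full 1500-byte IP packet still loses over a tenth of the wire, which is the efficiency argument Gigabit Ethernet advocates make against ATM.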
DSL integrates the teleworkers
Within businesses, the decentralised teleworker and mobile worker need to be catered for within the network infrastructure. Although their numbers are currently small, they are likely to grow as businesses become data-centric rather than location-centric. DSL, in all its flavours, looks able to extend the corporate intranet into the SOHO sector.
Asymmetric Digital Subscriber Line (ADSL) is now also known as G.992.1, and it supports up to 8 Mb/s bandwidth downstream and up to 1 Mb/s upstream. The asymmetrical aspect of ADSL technology makes it ideal for Internet/intranet surfing, video-on-demand and remote local area network access. Users of these applications typically download more information than they send.
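A quick illustration of what that asymmetry means in practice, using the G.992.1 maxima quoted above (8 Mb/s down, 1 Mb/s up): moving the same data upstream takes eight times as long as pulling it down, which is fine for surfing but worth remembering when teleworkers need to push large files back to the office.

```python
# Transfer times for the same file over ADSL's asymmetric
# channels, at the G.992.1 maximum rates.

DOWNSTREAM = 8_000_000  # bits per second
UPSTREAM = 1_000_000    # bits per second

file_bits = 10 * 1024 * 1024 * 8  # a 10 MB file

down_secs = file_bits / DOWNSTREAM
up_secs = file_bits / UPSTREAM
print(f"download: {down_secs:.1f} s, upload: {up_secs:.1f} s")
```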
ADSL requires a voice/data splitter, commonly called a Plain Old Telephone Service splitter, to be installed at the consumer's home or business premises. This device separates voice from data transmissions. For simultaneous use of the telephone and data access, additional phone wires may need to be installed within the premises. Full-rate ADSL provides service up to a maximum range of 18,000 feet from the telecommunications provider's central office to the end-user.
One of the pioneers in the field of DSL is 3Com, whose European broadband development manager, Mikko Summala, predicted DSL's future in a recent interview: "DSL will eventually replace ISDN as the last-mile solution, but instead of evolutionary, it will be revolutionary, providing bandwidth up to 100 times greater than conventional modem technology". DSL in the UK, however, is still a long way off for the mass market due to prohibitively high initial pricing.
Whether you're looking at implementing Ethernet, ATM or DSL, some say the move towards a single transportation protocol is inexorable. Once you have a common method for moving audio, video and data around an organisation, city or even country, creating new ways of using that data becomes much simpler. If, as an organisation, you lay the groundwork for this now, then when the revolution comes you will be perfectly placed to take advantage of the "Everything-over-IP" world.