Computer Weekly Buyer's Guide features list 2020

Here is our 2020 schedule of upcoming in-depth articles, covering carefully selected topics to help IT leaders choose the right technology for their organisation

Computer Weekly Buyer’s Guides map the IT buying cycle of our readership onto relevant editorial that will inform and educate readers and help them in making the right buying decision.

On a three-week cycle, the publication runs a series of articles focused on a particular category of software/hardware/IT service. Articles appear in the features section of the Computer Weekly ezine, which can be downloaded as a PDF or viewed as an SEO-optimised Buyer’s Guide page on the Computer Weekly website.

The Buyer’s Guide PDF downloads point readers to the online Buyer’s Guide, where they will be able to access all the articles in one place, along with additional content, such as blog posts and related articles.

The editorial team updates the Buyer’s Guide schedule on a quarterly basis to ensure the chosen technologies are topical and to respond to short-term commercial opportunities.

Buyer’s Guides comprise three separate features, which combine to become a standalone piece of evergreen content that readers can refer back to.

Each part includes a written article, plus relevant background material, as well as exclusive online-only multimedia content and infographics.

Format of Computer Weekly Buyer’s Guides

Market overview
This is an introduction to the topic covered in the Buyer’s Guide. The article will examine the nature of a given software/hardware/IT services product category, look at where it fits in the business, why users need it and which companies sell products in this category.

Analyst perspective
Here, Computer Weekly invites leading IT analysts to submit relevant research that can help readers narrow down product choices with a shortlist of products they may wish to investigate further.

Case study
At this stage in the buying cycle, the reader has a shortlist and may have briefed their technical team to research the products in more detail, for example by following up customer references from the supplier. Computer Weekly supports this research with an in-depth case study, selected for its uniqueness, which illustrates the best practices, technical and business drivers, lessons learnt and future plans of a successful IT project using one of the shortlisted products.

Please email Cliff Saran for further details.

The proposed schedule for H1 2020 is as follows:

Computing at the Edge

Jan 7 - Jan 27

In this series of articles we will explore how IT architectures are being adapted to combine local processing with cloud-based processing. For instance, heavy machinery may require local processing to run AI algorithms that make real-time operational decisions, while the data is fed up to the cloud for advanced analytics. What are the use cases, IT architecture issues, networking concerns and security/privacy implications of combining edge computing with public cloud computing?

Onsite/offsite and cloud-based disaster recovery

Jan 28 - Feb 17

The big question for any organisation in terms of DR is what recovery time objective (RTO) and recovery point objective (RPO) it needs, and what level of performance it needs when it recovers. This governs whether disaster recovery is on-site, off-site, cloud-based or something that makes use of all of these. In this series of articles we will look at options for synchronously mirrored datacentres; failing over to a virtual environment at a second site; failing over to the cloud; and a hybrid approach.

Zero trust

Mar 10 - Mar 30

Eliminating trust on a network simplifies IT security. In essence, it means there are no more trusted interfaces, no more trusted users, no more trusted network packets and no more trusted applications. But to achieve this, the security architecture needs to ensure that all networked resources are accessed securely and that access control is strictly enforced on a need-to-know basis. Tight monitoring of the environment is the third pillar of the zero-trust model. We look at how organisations are moving from traditional network security to a zero-trust architecture.

Network upgrades for cloud native infrastructure

Mar 31 - Apr 27

As networking equipment rapidly evolves to become software-defined and cloud-native, there is growing interest in products and services that automate deployment, scale resources and improve lifecycle management, in order to accelerate service delivery, improve end-user experiences, and reduce infrastructure and operational costs. We look at how cloud-native computing is driving a revolution in networking.

Application modernisation

Apr 28 - May 18

Unlike digital-first organisations, traditional businesses have a wealth of enterprise applications built up over decades, many of which continue to run core business processes. In this series of articles we investigate how organisations are approaching the modernisation, replatforming and migration of legacy applications. We look at the tools and technologies available, change management, and the use of APIs and containerisation to make legacy functionality and data available to cloud-native applications.

Modernising the public sector

May 19 - Jun 8

As it becomes more digitised, the public sector is having to rethink the role of IT, shifting away from an outsourced model and migrating applications from on-premise datacentres to the cloud. In this series we explore how public sector IT-powered services are moving from back-end processing to citizen-facing digital services that make greater use of web transactions and apps, such as the rollout of online patient records and e-prescriptions in the health service. Public sector bodies are also modernising procurement, shifting away from large multi-year deals with the major public sector IT providers to make it easier to work with SME IT providers.

Datacentre cooling technologies

Jun 9 - Jun 29

Every tweet or social media update uses CPU cycles, which consume a tiny amount of electricity and generate a tiny amount of heat. But multiply this a billion times, and datacentres can quickly overheat. The industry has shifted back and forth between air-cooling and water-cooling to keep the processors in datacentres at operating temperatures. Air-cooling is not a good fit for HPC installations, supercomputing or machine learning workloads. There are also questions over the effectiveness of water-cooling and the environmental impact of waste datacentre water, leading to renewed interest in refrigerants. In this series of articles we investigate how the datacentre industry is keeping chips cool.

Data quality

Jun 30 - Jul 20

Organisations are becoming more data-driven, which means data is increasingly being used to gather new insights. Data is also used as an input to drive automation and to improve the accuracy of machine learning and predictive analytics. Poor-quality data results in poor decision-making and biased or inaccurate data models. In this series of articles we explore how machine learning, interactive visualisation and predictive/prescriptive analytics are now being used to improve data quality.
