For all the excitement around AI, one of the most important technological breakthroughs in a generation is also accompanied by real risks when it is developed and deployed quickly.
AI, as critics point out, can be flawed in many ways. It can inject bias when analyzing data or fail to take enough care to protect privacy. At the same time, as systems grow more complex, some AI decisions are becoming harder to explain.
For businesses, failing to address these issues can have far-reaching consequences. Losing customer data, for example, would not only damage a company's reputation but also bring legal repercussions, especially in highly regulated industries such as finance or healthcare.
Understandably, businesses now face a dilemma – they have to invest early in AI yet grapple with many still-evolving challenges.
The answer lies in finding the right balance between the powerful capabilities of modern AI and the need for trustworthiness, accountability, and responsible development and deployment.
As a guide, there are four areas that businesses can address today – algorithmic bias, data privacy, model transparency, and validation and monitoring.
Addressing bias
One of the first challenges is mitigating the algorithmic bias that may be introduced into AI models. Bias often creeps in through training data that lacks sufficient diversity or representation, leading to unfair or inaccurate outputs.
For example, training an AI model on data from a single location could skew it towards that location's demographics.
To overcome this, businesses can turn to synthetic data generation techniques to create datasets that are more diverse and balanced, so results derived from them are fairer and more accurate.
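As an illustration, here is a minimal sketch of one simple synthetic data technique: oversampling an under-represented group in a tabular dataset by jittering its existing rows with small random noise. The column layout, group names and noise scale are hypothetical, and production systems typically use far more sophisticated generators.

```python
import numpy as np

def oversample_group(features, group_labels, target_group, noise_scale=0.05, seed=0):
    """Generate synthetic rows for an under-represented group by jittering its samples."""
    rng = np.random.default_rng(seed)
    group_rows = features[group_labels == target_group]
    majority_count = max(np.sum(group_labels == g) for g in np.unique(group_labels))
    n_needed = majority_count - len(group_rows)
    if n_needed <= 0:
        return features, group_labels  # already balanced
    # Sample existing rows with replacement and add small Gaussian noise
    picks = rng.integers(0, len(group_rows), size=n_needed)
    synthetic = group_rows[picks] + rng.normal(0, noise_scale, size=(n_needed, features.shape[1]))
    new_features = np.vstack([features, synthetic])
    new_labels = np.concatenate([group_labels, np.full(n_needed, target_group)])
    return new_features, new_labels

# Hypothetical dataset where group "B" is under-represented
X = np.random.rand(100, 3)
groups = np.array(["A"] * 80 + ["B"] * 20)
X_balanced, groups_balanced = oversample_group(X, groups, target_group="B")
```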
Another way to reduce bias is to rely on transfer learning. This means re-using a model that has already been trained on one task as the starting point for a related task, so that less new training data is needed and performance improves faster.
For example, a model trained to recognize images of cars can be fine-tuned to pick out trucks as well. This lets businesses target new use cases without collecting large, potentially unrepresentative datasets from scratch.
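A minimal sketch of this idea is shown below, assuming PyTorch and torchvision are available: a model pre-trained on a broad image dataset is re-used, and only its final layer is retrained for a new set of classes. The two-class car/truck setup and training settings are placeholders rather than a specific recipe.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a model pre-trained on a large, general-purpose image dataset
model = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the pre-trained layers so their learned features are re-used as-is
for param in model.parameters():
    param.requires_grad = False

# Replace only the final classification layer for the new task
num_new_classes = 2  # hypothetical: e.g. "car" vs "truck"
model.fc = nn.Linear(model.fc.in_features, num_new_classes)

# Only the new layer's parameters are updated during fine-tuning
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
```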
Federated learning boosts privacy
Data privacy and ownership are important for businesses as they use increasing amounts of data to train their AI models. Federated learning, which involves training AI models on data from various sources without actually “touching” or accessing them directly, is one way forward.
In a nutshell, each client receives a copy of a shared foundation model, trains it locally on its own raw data, and then encrypts and sends only the resulting model updates to a central server, which aggregates them.
This collaborative approach means raw data never leaves its source or gets stored centrally, and the updates are encrypted while in transit, so the risk of data loss is reduced.
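The sketch below illustrates the core idea with a simplified federated averaging round: each client computes a model update on its own data, and the server only ever sees the aggregated updates, never the raw records. Encryption and secure aggregation are omitted for brevity; the plain linear model and function names are illustrative only.

```python
import numpy as np

def local_update(weights, X_local, y_local, lr=0.01):
    """One gradient step on a client's private data for a simple linear model."""
    preds = X_local @ weights
    grad = X_local.T @ (preds - y_local) / len(y_local)
    return weights - lr * grad

def federated_round(global_weights, clients):
    """Server aggregates client updates without ever seeing any raw data."""
    updates = [local_update(global_weights.copy(), X, y) for X, y in clients]
    return np.mean(updates, axis=0)  # federated averaging

# Hypothetical clients, each holding its own private dataset
rng = np.random.default_rng(0)
clients = [(rng.random((50, 3)), rng.random(50)) for _ in range(4)]

weights = np.zeros(3)
for _ in range(100):
    weights = federated_round(weights, clients)
```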
Ensuring transparency
As AI systems become more complex, especially in the case of generative AI, it is becoming harder to explain how an AI arrives at a result or delivers an insight. Businesses that base their decisions on AI need to be able to explain how those decisions are reached.
What is useful here are explainable AI (XAI) methods, such as feature importance analysis, prototype examples, and attention visualization, which improve the interpretability of model decisions.
Feature importance analysis, for example, assigns a score to each input feature of an AI model, showing how much weight each variable carried in producing a result.
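As an example of how such scores might be computed, the sketch below uses scikit-learn's permutation importance on a stand-in tabular model: each feature is shuffled in turn, and the resulting drop in accuracy indicates how much the model relies on it. The dataset and model are placeholders; in practice this would be run against the business's own model and data.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Stand-in dataset and model for illustration purposes
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure how much the model's score drops
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])[:5]:
    print(f"{name}: {score:.3f}")
```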
In the coming years, transparency will be even more important. Businesses cannot simply point to an AI “black box” when asked by stakeholders and customers. They have to be able to explain how AI helps them come to a decision to take an action.
Validating and monitoring
Like many transformative technologies today, AI involves a continuous path of improvement. There needs to be comprehensive validation and ongoing monitoring to ensure that AI models remain safe and ethical, while being aligned with business objectives.
Businesses have to ensure they are using authoritative, high-quality data sources for AI models to keep up accuracy and reliability. Diverse datasets are needed to capture the full spectrum of a target user population.
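One simple form of ongoing monitoring is checking whether the data a model sees in production still resembles the data it was trained on. The sketch below compares feature distributions with a Kolmogorov-Smirnov test from SciPy; the threshold, feature values and alerting logic are illustrative, not a standard.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(training_col, production_col, p_threshold=0.01):
    """Flag a feature whose production distribution differs from the training data."""
    stat, p_value = ks_2samp(training_col, production_col)
    return p_value < p_threshold

# Hypothetical feature values captured at training time vs. in production
rng = np.random.default_rng(1)
train_ages = rng.normal(40, 10, size=5000)
prod_ages = rng.normal(48, 12, size=5000)  # population has shifted

if detect_drift(train_ages, prod_ages):
    print("Drift detected: retrain or revalidate the model")
```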
While much of today’s AI work is on the cloud, on-premises deployments may make sense for organizations in highly regulated sectors.
Healthcare, finance or government organizations can make use of the additional privacy and security that comes with mitigating the risks of external data sources and cloud-based services.
They can also turn to confidential computing technologies, such as those provided by NVIDIA's H100 and H200 graphics processing units (GPUs), to ensure data and model protection during training and inference.
Moreover, with the rapid advancements in AI infrastructure, solutions like the Dell PowerEdge XE9680 are transforming the landscape of AI implementation. This flagship eight-way GPU-accelerated server, showcased at events like the NVIDIA GPU Technology Conference (GTC), epitomizes Dell's commitment to pushing the boundaries of AI capabilities. With support for NVIDIA's latest architectures, including the H200 Tensor Core GPUs and the groundbreaking Blackwell-architecture GPUs, the PowerEdge XE9680 offers unparalleled performance for generative AI training, model customization, and large-scale inferencing.
Organizations leveraging the PowerEdge XE9680 gain not only immense computational power but also enhanced data integrity and security, crucial for industries requiring stringent regulatory compliance and data protection.
Using AI “the right way”
With a technology that is moving so fast, it would be foolhardy to cast in stone one “right way” to use AI. New risks could emerge as AI grows more powerful, even as more effective and precise guardrails are developed to moderate them.
Businesses, however, should not wait until the dust has settled before taking advantage of AI – the gap could become too large to close later.
What they have to do is take clear steps to manage the risks that AI brings. Having a framework to manage the technology's many moving parts today is the best plan for the future.
Dell Technologies: https://www.dell.com/en-sg/dt/solutions/artificial-intelligence/generative-ai.htm
NVIDIA: https://www.nvidia.com/en-sg/ai-data-science/generative-ai/
