
The buzz about artificial intelligence (AI) has become impossible to ignore, and it has been matched by the rising adoption of enterprise AI. According to McKinsey, the global AI adoption rate has risen steadily to 35%, up four points from the year before, and is expected to reach 40% by the end of 2023, meaning four in ten organizations will be using AI in at least one business area by then.

The blindingly fast improvement in AI technologies has even serious stakeholders struggling to keep up, and the daily drumbeat of new breakthroughs has crossed over from industry publications to the mainstream press. Generative AI, in particular, is at the front of everyone’s mind. But while AI is unquestionably having a zeitgeist moment, there are still roadblocks to full acceptance, and one of the greatest is the “black box” problem.

In AI, a black-box model is an algorithm that makes decisions or predictions without providing clear explanations or reasons for its outputs. This inability to explain the workings of an AI system presents significant challenges, notably in sensitive areas like healthcare or finance where the potential for harm or abuse is especially high.

How Explainable AI (XAI) addresses the “black box” problem

Enter Explainable Artificial Intelligence, aka XAI. XAI seeks to address the problem of “black box” models in AI systems by providing interpretable and understandable explanations for AI system outputs—enabling users to see just how and why a particular decision or prediction was made.

The problems caused by the lack of transparency and interpretability in traditional AI models fall into four main categories.

  • Lack of trust: Users, stakeholders, and regulatory bodies may hesitate to adopt or accept AI systems if they can’t understand the reasoning behind the decisions a model makes. Trust is crucial when deploying AI, especially where people see a possibility of biased or incorrect outcomes but have no clear way to interpret and verify the results.
  • Bias and discrimination: Complex AI models can inadvertently learn biases present in their training data, and those biases then inform their decisions and predictions. Without visibility into the decision-making process, it becomes difficult to identify and mitigate bias, potentially leading to unfair or discriminatory outcomes. (A small disparity-check sketch follows this list.)
  • Legal, regulatory, and ethical requirements: Laws and regulations such as the General Data Protection Regulation (GDPR) in the EU require that individuals be informed about automated decision-making processes that affect them in significant ways. The ability to explain AI decisions is essential to fulfilling these legal obligations.
  • Debugging and error analysis: When an AI model makes incorrect predictions or fails in other ways, understanding why and how those errors occurred is crucial to improving the model’s performance and reliability.
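
To make the bias point concrete, here is a minimal sketch of one simple check: comparing a model’s positive-prediction rate across groups. The data and group labels are illustrative assumptions, and this is nothing like a complete fairness audit.

    # Hypothetical bias check (an illustrative sketch, not a full fairness audit).
    # Comparing approval rates across groups is one simple way to surface
    # learned bias even when the model itself is a black box.
    import numpy as np

    def approval_rate_by_group(predictions, groups):
        """Mean positive-prediction rate for each group label."""
        predictions = np.asarray(predictions)
        groups = np.asarray(groups)
        return {str(g): float(predictions[groups == g].mean())
                for g in np.unique(groups)}

    # Toy example: 1 = approved, 0 = rejected, with a group attribute per row.
    preds = [1, 0, 1, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    print(approval_rate_by_group(preds, groups))
    # {'A': 0.75, 'B': 0.25} -- a gap this large warrants investigation

A large gap between groups does not prove discrimination on its own, but it flags exactly the kind of pattern that stays invisible without this sort of inspection.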


The principles behind XAI—and the advantages of using it

Explainable artificial intelligence (XAI) is exactly what it sounds like: a set of processes and methods that allows human users to understand and trust the results and output created by machine learning algorithms. Explainable AI describes an AI model, its expected impact, and its potential biases. This can involve techniques such as generating textual or visual explanations, highlighting the features or data points that most influenced a result, or using rule-based or symbolic representations to capture the reasoning behind the AI’s decisions. A minimal sketch of one such technique follows.
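
One widely used, model-agnostic technique is permutation feature importance: shuffle one feature at a time and measure how much the model’s accuracy drops. Here is a minimal sketch using scikit-learn; the synthetic dataset and feature names are illustrative assumptions, not drawn from any particular product.

    # Minimal permutation-importance sketch (illustrative; synthetic data).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in data; in practice this would be your business dataset.
    X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature 10 times; the mean accuracy drop is its importance.
    result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                    random_state=0)
    for i, (mean, std) in enumerate(zip(result.importances_mean,
                                        result.importances_std)):
        print(f"feature_{i}: importance {mean:.3f} +/- {std:.3f}")

The larger the accuracy drop when a feature is shuffled, the more the model relies on that feature, giving a first human-readable window into an otherwise opaque model.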

XAI researchers have identified four principles that must underlie these efforts:

  • Explanation: XAI systems should deliver accompanying evidence or reasons for outcomes and processes.
  • Understanding: XAI systems should provide explanations that are understandable to individual users.
  • Accuracy: XAI systems should provide explanations that correctly reflect the system’s process for generating the output.
  • Knowledge limits: An XAI system should operate only under the conditions it was designed for, and only when it reaches sufficient confidence in its output. (A minimal sketch of this idea follows the list.)
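
In practice, the knowledge-limits principle often takes the form of a confidence gate: the model abstains rather than guesses. The sketch below is hypothetical, assuming any fitted scikit-learn-style classifier that exposes predict_proba and classes_.

    # Hypothetical "knowledge limits" gate (illustrative sketch only).
    def predict_or_abstain(model, X, threshold=0.9):
        """Return each predicted class, or None where confidence < threshold."""
        proba = model.predict_proba(X)      # class probabilities, shape (n, k)
        labels = model.classes_[proba.argmax(axis=1)]
        confidence = proba.max(axis=1)      # top-class probability per row
        # Low-confidence predictions become None so a human can review them.
        return [lbl if conf >= threshold else None
                for lbl, conf in zip(labels, confidence)]

Predictions returned as None can be routed to human review, keeping the system inside the conditions it was designed for.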

The advantages for businesses of addressing the “black box” problem through XAI are clear. Because XAI systems are auditable and their outputs explainable, they offer alternatives to legacy black-box algorithms across multiple industries: they can improve trust, increase productivity, predict customer behavior more accurately, enhance business outcomes, mitigate risk, and help maintain regulatory compliance. In fact, a 2020 report from McKinsey predicted XAI would be one of the top 10 technology trends in coming years, and the World Economic Forum has identified XAI as one of the key technologies needed to address the challenges of the Fourth Industrial Revolution.

Consider these industry-specific examples:

  • Banking: A robust XAI neural network model offers transparency to customers and employees in making credit decisions. The customer knows the “why” behind a credit rejection, and banking staff can easily understand the reason for a denial. This transparency also gives financial institutions a way to show that their decisions are fair, impartial, and unbiased, and it gives developers a basis for improving the system by identifying the factors that drove the decision. (A credit-decision sketch follows this list.)
  • Healthcare: XAI brings transparency to AI-assisted diagnosis. When an MRI is analyzed, the technology can explain which variables in the image contributed to a suspected disease state. The machine assists in a prognosis by giving physicians decision-lineage data that helps them understand how the conclusion was reached and, potentially, to amend that conclusion through human interpretation of the data.
  • Retail and CPG: Unlike black-box algorithms, XAI reveals to business decision makers the weighted factors driving predictions and other algorithmic outcomes, providing insights that can then inform personalization, customer retention strategies, product recommendations, return processes, and much more. Explainable AI can be a critical contributor to understanding consumer preferences, creating enhanced customer experiences, and moving toward a frictionless retail flow, from supply chain operations to personalized customer engagement.
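
To illustrate the banking example, here is a minimal sketch of how a linear credit model can explain an individual rejection. The features, training data, and applicant are hypothetical; real credit systems involve far more rigor, but the per-feature decomposition idea is the same.

    # Hypothetical credit-decision explanation (illustrative sketch only).
    # For logistic regression, the approval log-odds decompose exactly into
    # intercept + sum(coef_i * x_i); reporting each term shows the "why".
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    features = ["income", "debt_ratio", "missed_payments"]  # assumed features

    # Tiny synthetic training set: label 1 = approved, 0 = rejected.
    X_train = np.array([[80, 0.2, 0], [30, 0.6, 3], [60, 0.3, 1],
                        [25, 0.7, 4], [90, 0.1, 0], [40, 0.5, 2]], dtype=float)
    y_train = np.array([1, 0, 1, 0, 1, 0])
    model = LogisticRegression().fit(X_train, y_train)

    applicant = np.array([35, 0.55, 2], dtype=float)    # one applicant
    contributions = model.coef_[0] * applicant          # per-feature terms
    logit = model.intercept_[0] + contributions.sum()

    print(f"approval log-odds: {logit:.2f} "
          f"({'approve' if logit > 0 else 'reject'})")
    for name, c in sorted(zip(features, contributions), key=lambda t: t[1]):
        print(f"  {name}: {c:+.2f}")  # most negative = strongest push to reject

The sorted per-feature terms give both the banker and the applicant the “why” behind a rejection: the most negative contributions are the factors that pushed the decision toward denial.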

The bottom line for XAI

According to McKinsey, companies that establish digital trust among consumers through practices that make AI explainable are more likely to see their annual revenue and EBIT grow at rates of 10 percent or more. With XAI, companies can offer actionable insights, analyze AI decision-making, manage risk, and better meet regulatory compliance standards in any industry.
