Focus on these three areas when developing AI best practices

By Jason Contant | September 17, 2019 | Last updated on October 30, 2024

Financial institutions looking to adopt and develop best practices on the responsible use of artificial intelligence (AI) should focus on three areas: explainability, bias and diversity, experts say.

A new TD Bank Group survey of 1,200 Canadians found that a majority (72%) are comfortable with companies using AI if it means they’ll receive better, more personalized service, but 68% admit they don’t understand the technology well enough to know the risks. In addition to the survey, TD engaged a cross-section of experts – from financial services, technology, fintech, academia, and public and not-for-profit organizations – in a roundtable discussion to better understand the risks associated with AI in financial services.

The findings were presented in the report Responsible AI in Financial Services, released at an Economic Club of Canada event Sept. 12.

The report identified three areas of focus as financial institutions look to the future evolution of AI:

  • Explainability – how AI experts and business leaders should approach the inherent limitations of the technology when it comes to explaining how AI models arrive at conclusions
  • Bias – how to control for bias and re-examine the concepts of transparency, fairness and accountability in an AI-first world
  • Diversity – the role that diversity and inclusion should play at every level of AI adoption, from executive leadership to the teams building AI models to the data used to inform decisions.

The roundtable analyzed future-state scenarios in which AI produced unintended consequences for customers. It found that communication barriers between executives and engineers, or between companies and customers, can be traced back to ‘explainability’ in AI – that is, how an AI system arrives at a conclusion.

Among the recommendations: when addressing explainability, companies should implement processes and standards that evolve alongside their models and continuously test for inconsistencies. Experts also said that technologists, government and business leaders need to reach a “clear and agreed upon understanding of the technical capabilities and limitations of AI models so that realistic expectations can be set around explainability, transparency and accountability.”

On bias, experts noted that the term can mean different things in different contexts and to different people. Generally, the concern stems from human bias, which can lead to unfair treatment or discrimination; statistical bias, by contrast, can be useful in an AI model. As the report put it: “The roundtable participants noted that when one characteristic – such as gender, age or ethnicity – is removed from data to eliminate biased outcomes, machine learning models will often create proxies for that same characteristic.”
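To illustrate the proxy effect the participants describe, here is a minimal sketch using entirely synthetic data and a hypothetical correlated feature (nothing below comes from the report): even with gender removed from the training data, a feature that happens to correlate with it lets the model reproduce gendered outcomes.

```python
# Minimal sketch of proxy bias, using synthetic data and hypothetical
# feature names. Not from the TD report; assumptions are noted inline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Protected characteristic we intend to exclude from training.
gender = rng.integers(0, 2, size=n)

# Hypothetical feature (e.g., something postal-code-derived) that, in this
# synthetic data, correlates with gender -- the kind of proxy described above.
proxy = gender + rng.normal(0, 0.5, size=n)
other = rng.normal(0, 1, size=n)

# Historical labels influenced by gender, i.e., biased outcomes in the data.
y = (0.8 * gender + 0.3 * other + rng.normal(0, 0.5, size=n)) > 0.7

# Train WITHOUT the protected column: only the proxy and a neutral feature.
X = np.column_stack([proxy, other])
model = LogisticRegression().fit(X, y)

# Predicted approval rates still differ sharply by gender, because the
# proxy feature carries the information that was removed.
pred = model.predict(X)
print("approval rate, group 0:", pred[gender == 0].mean())
print("approval rate, group 1:", pred[gender == 1].mean())
```

Running this shows markedly different approval rates for the two groups even though gender never appears in the training features, which is why simply dropping a characteristic is not, on its own, a control for bias.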

Lastly, roundtable participants reflected on diversity and inclusion as it relates to the adoption and implementation of AI. The following areas were identified as critical for organizations to consider:

  • Building diverse AI teams – From the engineers to the executives, teams should represent the customers they serve
  • The need for representative data sets – Diversity and representation matter when choosing data sets to train AI models, and the people being served by AI systems must be represented in the data
  • Canada can lead – With a multicultural society and workforce, Canada has an opportunity to play a leading role on the global stage when it comes to fostering diversity and inclusion in the AI sector.

When asked which factors matter most in how companies use AI, respondents cited control over how their data is used (70%), transparency about the use of the technology (55%), and AI-driven decisions that are easy to explain and understand (28%).

Jason Contant