Can AI Be Ethical? The Finance Industry Offers a Handbook…of Sorts!

Christina Trampota
7 min read · Aug 15, 2023


Artificial intelligence has evolved at an exponential rate over the past decade. While AI has been part of watercooler conversations for years, things took a dramatic turn on November 30, 2022, when OpenAI launched ChatGPT. Overnight, it was as if Skynet (albeit a not-so-evil and sometimes clumsy version of it) woke up! ChatGPT reached 100 million users within two months, heating up the AI chatbot space to red-hot levels. Rumors surfaced of similar projects in the pipeline at various tech giants: Google’s Bard, Microsoft’s Bing Chat, AppleGPT, and more. Needless to say, ChatGPT has been the most disruptive AI yet and has captured the imagination of hundreds of millions of people worldwide.

As millions of people interacted with ChatGPT, they began to take note of its innumerable limitations, biases, safety concerns, threats to privacy, and other problems. Meanwhile, organizations embedding AI into their operations were discovering that it wasn’t the silver bullet it was thought to be. They were learning the operational, utilitarian, and cost-benefit limitations of AI applications. Soon, it was evident that AI could transform many industries, but not all of them, and to varying degrees across the levels and layers of an organization.

While users obsessed over AI, tech observers were sounding alarm bells over lawmakers’ conspicuous silence on the matter. Faced with the relentless proliferation of AI systems and the lack of AI-specific regulation, regulators were forced to rely on existing laws. Threats to jobs, privacy issues, systematic bias against minorities, misuse, fairness, and numerous other concerns arising from AI solutions took the spotlight. As it did in the web2 era with GDPR, Europe is approaching the technology more cautiously, and so are its consumers: a recent study found online search to be their top AI-related interest.

Even as the world remained mesmerized by what ChatGPT could do, the question at the back of everyone’s mind was: what does a generative AI future look like? From Bicentennial Man to the Terminator franchise, Hollywood has explored a broad spectrum of possibilities. AI has also long been present in many tech and software companies, and today it is taking their data, insights, and intelligence to the next level. The financial services industry has been quick to apply it as well, and it must continue to accelerate to keep up with the broader changes in tech, communication, and consumption that will affect us all.

How is the Financial Industry Using AI?

Forrester estimates that 100% of organizations will be using AI by as early as 2025! That’s a startling claim by any measure. Even if the prediction falls short of that number, AI adoption would still be overwhelming by all accounts. As the race for AI adoption heats up, businesses and governments should proactively understand the potential benefits and applications of AI so that they can respond effectively to the challenges they encounter.

A Harvard Business Review study likens the world of finance to a “laboratory” of sorts for exploring the potential of AI. The study found that AI could disrupt most industries at unprecedented speed. However, its most significant impact would be felt in areas dominated by large structured data sets, where quantitative analysis is possible. The rise of passive fund management over active fund management attests to this. Forecasting, anomaly detection, budget optimization, risk analysis, and similar tasks are areas where AI is thriving.
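To make the “large structured data sets” point concrete, here is a minimal sketch of statistical anomaly detection on a toy list of transaction amounts. The data, the scoring rule, and the threshold are invented for illustration; real fraud and anomaly systems are far more sophisticated.

```python
import statistics

# Hypothetical daily transaction amounts for one account;
# the final value is a deliberately injected outlier.
amounts = [120.0, 98.5, 110.2, 105.7, 99.9, 102.3, 131.4, 2450.0]

def zscore_anomalies(values, threshold=2.0):
    """Flag values whose z-score exceeds the threshold.

    Note: with only a handful of points, the maximum possible z-score
    is capped near (n-1)/sqrt(n), so the classic cutoff of 3 would
    never fire here; 2.0 is used purely for this tiny example.
    """
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / stdev > threshold]

print(zscore_anomalies(amounts))  # flags only the 2450.0 transaction
```

The same structured-data property that makes this toy example trivial is what lets AI thrive on forecasting and risk analysis: the inputs are numeric, plentiful, and directly comparable.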

On the other hand, in areas where information is limited, sentiment becomes relevant, human capabilities must be judged, or macroeconomic forces converge, AI has limited applications. It would still be relevant there, but it offers incremental benefits rather than disruptive transformations. Simply put, AI can take over tasks and jobs that deal exclusively with hard data, and it plays a more supportive role in decision-making where hard data is limited.

As a result, an estimated 300 million jobs across industries, especially clerical and low-skilled jobs, are threatened by the AI revolution.

Biggest Ethical Challenges of AI

The financial services industry is especially well suited for the AI revolution owing to its enormous data collection, analysis, and data-driven innovation. That also makes it fertile ground for the early detection of AI’s challenges. Here are some of the challenges AI presents:

Explainability

AI systems are notoriously opaque in their decision-making. As they crunch large data sets and surface useful information, they do not reveal how they arrived at their conclusions. They operate in a kind of “black box,” which has raised questions about the validity of their outputs. Business leaders and government organizations need a strong understanding of how an AI actually arrived at a decision if they are to trust it.

Biases

AI models are trained on large datasets, and any biases inherent to those datasets are absorbed into the models’ decision-making. Facial recognition and lending risk assessment systems have well-documented histories of bias against minorities and underrepresented, underbanked groups, and the extra review and monitoring they have attracted cannot stop now. Instead, watching over fair and equitable access to financial health and solutions for all should remain a top priority globally.

Data Privacy and Security Concerns

The EU’s laws are more stringent on data compliance and have led the way for many follow-on regulations globally. The US, however, has more lenient data privacy legislation, which has given AI creators nearly unfettered access to consumer data, whether or not consumers consented to sharing it. While EU citizens can grant and revoke access to their personal data and even restrict organizations to using it for specific purposes only, the same is not true in the US. The Equifax hack demonstrated the ills of unfettered access to citizens’ personal data. You don’t need an active imagination to theorize how similar issues could erupt from AI accessing consumer data at scale.

Regulation and Self-regulation are Essential for Making AI Ethical

When ChatGPT threatened Google’s very existence, Google rushed its own chatbot to the public. Although Google had been working on the technology behind Bard since 2015, ChatGPT’s arrival triggered a war in an increasingly competitive space. Employees reportedly begged the tech giant not to launch it, but Google sidelined those ethical concerns to avoid being left behind in the chatbot race. One report claims that Google is prioritizing business over safety with Bard! In all likelihood, Google isn’t the only company brushing aside privacy and safety concerns; you can bet that any profit-motivated corporation with the resources to build a rival chatbot is doing the same. AI requires strong regulation, and quickly, to ensure that it operates in a way that is ethical and useful to society at large.

At the same time, AI’s inherent issues can expose early adopters to litigation and its associated costs if their solutions are non-compliant with existing regulations. So even profit-driven organizations have an incentive to implement ethical AI that complies with current rules. Here’s how to address these challenges:

Adoption of XAI

AI creators are adding a layer of explainable AI (XAI), which demystifies AI decisions and helps users understand how a model arrived at a decision or conclusion. This is crucial in the BFSI (banking, financial services, and insurance) sector, where regulatory compliance demands a high degree of transparency in operations and decision-making. In addition, XAI helps organizations identify and root out biases, unfairness, and unethical results in their AI solutions.

XAI also fosters trust between organizations, their customers, and their employees. It’s a win-win for everyone when applied correctly.
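As a rough illustration of the idea behind XAI, the sketch below scores each input’s influence on a hypothetical credit-scoring model by replacing it with its column mean and measuring how much the output moves. This is a simple ablation-style attribution, not any particular vendor’s XAI product; the model, weights, and applicant data are invented so the example is checkable.

```python
# Hypothetical toy credit-scoring "model": a weighted sum of three
# normalized features. In practice this would be an opaque trained
# model; the explicit weights exist only to make the sketch verifiable.
def credit_score(income, debt_ratio, years_employed):
    return 0.6 * income - 0.3 * debt_ratio + 0.1 * years_employed

# A small sample of (income, debt_ratio, years_employed) applicants,
# with values normalized to the 0-1 range for simplicity.
applicants = [
    (0.8, 0.2, 0.5),
    (0.4, 0.7, 0.1),
    (0.6, 0.4, 0.9),
    (0.3, 0.6, 0.3),
]

def ablation_importance(model, rows):
    """Score each feature by replacing it with its column mean and
    averaging the absolute change in the model's output."""
    n = len(rows[0])
    baseline = [model(*row) for row in rows]
    means = [sum(row[f] for row in rows) / len(rows) for f in range(n)]
    importances = []
    for f in range(n):
        ablated = [
            model(*(means[f] if j == f else row[j] for j in range(n)))
            for row in rows
        ]
        importances.append(
            sum(abs(a - b) for a, b in zip(baseline, ablated)) / len(rows)
        )
    return importances

scores = ablation_importance(credit_score, applicants)
print(scores)  # income dominates, mirroring its 0.6 weight
```

Even this crude attribution turns a black-box score into something a compliance officer can interrogate: it shows that income, not employment history, is driving the decisions.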

Addressing Known Biases

For all practical purposes, AI holds a mirror to our society. Discriminatory practices that were once part of human decision-making are exported to AI models as they gobble up the vast amounts of data those decisions generated. As models train on such data, they are bound to follow in humans’ footsteps. That opens organizations up to accusations of discrimination and, in worse cases, lawsuits. So organizations must actively find ways to help their AI systems overcome the human biases in the data they consume.
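One concrete way to look for such bias is to compare outcomes across demographic groups. The sketch below computes a disparate impact ratio on invented loan decisions; the group labels and outcomes are purely illustrative, and the 0.8 “four-fifths” cutoff is a common rule of thumb from US employment practice, not a universal legal standard.

```python
# Hypothetical batch of loan decisions produced by a model, each tagged
# with a synthetic demographic group. (group, approved?) pairs.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rates(records):
    """Approval rate per demographic group."""
    totals, approved = {}, {}
    for group, ok in records:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(records, protected, reference):
    """Ratio of the protected group's approval rate to the reference
    group's. The 'four-fifths rule' flags ratios below 0.8."""
    rates = approval_rates(records)
    return rates[protected] / rates[reference]

ratio = disparate_impact(decisions, "group_b", "group_a")
print(f"disparate impact: {ratio:.2f}")  # 0.25 / 0.75 = 0.33, well below 0.8
```

A check this simple, run continuously against production decisions, is often the first tripwire that tells an organization its model has inherited a human bias.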

Proactive Self-regulation

The AI space is still the wild west of the tech industry, with little regulatory framework to govern it. However, the SEC has already indicated that it plans to regulate AI, considering its far-reaching potential, including its ability to destabilize markets. Other regulators may follow suit. Organizations may find it in their best interest to implement AI solutions ethically, in ways legislators won’t find questionable.

Build the Guardrails

AI will give organizations extraordinary capabilities. Facial recognition, for instance, can be a game-changer for any business that must verify customer identities. However, it also exposes customers to an increased risk of data theft. Naturally, organizations must take strong measures to protect customer data. Likewise, they must implement other guardrails that won’t expose them to litigation under current law, which courts may interpret in unexpected ways to protect citizens.
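As one small example of such a data-protection guardrail, customer identifiers can be pseudonymized before they ever reach an AI pipeline, so a leaked training set exposes no raw IDs. This sketch uses a keyed hash (HMAC); the key, record fields, and token length are placeholder assumptions, and in production the key would live in a secrets vault, not in source code.

```python
import hmac
import hashlib

# Placeholder key for the example; a real deployment would load this
# from a secrets manager and rotate it on a schedule.
SECRET_KEY = b"rotate-me-regularly"

def pseudonymize(customer_id: str) -> str:
    """Keyed one-way token: stable (so records can still be joined),
    but irreversible without the key."""
    digest = hmac.new(SECRET_KEY, customer_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

record = {"customer_id": "C-10293", "balance": 1520.75}
# The AI pipeline only ever sees the tokenized record.
safe_record = {**record, "customer_id": pseudonymize(record["customer_id"])}
print(safe_record)  # raw ID replaced by a stable 16-hex-character token
```

Because the token is deterministic, analytics and model training still work across records for the same customer, while the raw identifier stays out of the AI system entirely.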

The Nuts and Bolts of an Ethical AI

At the end of the day, organizations should use AI to figure out better ways to serve their customers and users. It’s all about people and their relationships with organizations. Anything that threatens people’s trust in organizations has the potential to jeopardize the latter’s future. So, whether AI can be ethical or not is not the question because there’s no room for any other kind of AI!

— Christina shares candid insights and ideas based on her work, network, and passion for mobile, payments, and commerce. She focuses on the latest innovations in products and growth for people during the day while teaching students and mentoring entrepreneurs at night. Connect with her on LinkedIn or Twitter/X. All views are her own. —
