AI and Compliance: Where and When They Connect

Christina Trampota
6 min read · Sep 24, 2023


In 2015, long before the pandemic galvanized rapid digital transformation and AI adoption everywhere, Elon Musk, Stephen Hawking, Steve Wozniak, and 1,000+ AI researchers co-signed an open letter urging a ban on the development of autonomous AI-based weapons. They reasoned that such weapons effectively shift control over lethal force away from human decision-makers to AIs, which could precipitate global security nightmares of every form, from the mass culling of dissidents by governments to an existential threat to all of humanity.

Fast forward eight years, and netizens are happily chatting away with ChatGPT. While I do not intend to raise any "man vs. AI" alarm bells just yet (primarily because we are far from an AI capable of outsmarting us), I'd like to point out the obvious: regulation, ethical standards, and responsible research have typically lagged innovation by at least three to four years in the West.

For instance, right now, ChatGPT is gobbling up colossal amounts of information about you, me, and everyone else it can find on the internet, without any regard for our privacy. In fact, this very concern prompted Italy to temporarily ban ChatGPT from collecting data on its citizens. The EU is moving quickly with legislation such as the AI Act, likely the first of many, staying ahead of other global regions as it historically has on environmental, privacy, and other matters.

Across the pond, things have taken a more laissez-faire turn in the US. Earlier this year, legislators introduced a bill to constitute a commission to review the current state of AI technologies and their risks. It will be a long time before a comprehensive regulation governing AI's development is passed in the US. Meanwhile, organizations already need AI to meet their existing compliance requirements. In short, organizations are using AI to make sure they are not breaking any serious laws, while, ironically, there is not yet any comprehensive regulation to ensure that AI itself is not breaking any rules.

Today, we’ll explore the dual nature of the “AI and compliance” confluence, which has enormous ramifications for the global economy, citizen rights, and humanity.

The Urgent Need for AI Regulation

A McKinsey report puts AI adoption at an unbelievable 55%. Furthermore, the AI industry is expected to grow 13x over the next seven years! Whether you like it or not, AI is coming to everything. Our smartphones already carry AI assistants such as Google Assistant, Siri, and Alexa. Soon, almost every organization around you will use a wide variety of AI solutions. Naturally, you'd want these AIs to have the same accountability and checks and balances as the people and organizations employing them, if not more. The current reality, however, is a very different story.

  • Data Privacy

OpenAI fed some 300 billion words of data into ChatGPT's training. And, while doing so, it never asked whether we'd like our data fed into its AI. Instead, opting out requires following updates closely and knowing how, when, and where to find the steps to block your content from being scanned, precautions that are not clearly presented to the average user. Tech giants' commercialization of personal data has prompted regulators worldwide to govern data collection, use, security, and management. However, AI developers have so far evaded those regulations, putting the privacy rights of individuals worldwide on the back burner.

In some cases, generative AI has even reproduced IP-protected works without regard for the laws protecting them, potentially opening its developers to a deluge of lawsuits.

  • Data Accuracy

Generative AIs often gather information from various online sources, which may or may not be accurate. It's therefore imperative for the individuals using them to decide how much they should trust the output. Worse, when AIs collect inaccurate information on individuals and store it indefinitely, they directly contradict the "right to be forgotten" enshrined in multiple data protection regulations worldwide.

  • Cybersecurity

As artificial intelligence becomes more competent, so will its ability to mimic humans, be it our voices, faces, behavior, writing style, or other characteristics. Malicious actors with access to advanced AI can use it to execute scams, identity theft, and other fraud at scale, escalating the fight against cyber threats. They can also poison the data fed into AIs to surreptitiously steer them into doing their bidding, unbeknownst to the developers.

  • Bias from All Angles

The racial, gender, and other biases inherent to AIs have been studied extensively. AIs learn to make decisions based on the data they consume, which in turn is created by humans. Therefore, all the overt and hidden biases that were part of human decision-making get transferred to AIs. Once these AIs start making decisions, those biases get systematized across the board unless there are guardrails to prevent precisely that from happening.
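One such guardrail is a simple statistical check on a model's outputs. As a minimal sketch, with entirely hypothetical decisions and group labels, comparing approval rates across demographic groups can surface the kind of systematized bias described above:

```python
# Minimal demographic-parity check (hypothetical decisions and group labels).

def approval_rates(decisions, groups):
    """Approval rate per group for a list of binary decisions."""
    totals, approved = {}, {}
    for d, g in zip(decisions, groups):
        totals[g] = totals.get(g, 0) + 1
        approved[g] = approved.get(g, 0) + int(d)
    return {g: approved[g] / totals[g] for g in totals}

# Toy data: the model approves group "A" far more often than group "B".
decisions = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = approval_rates(decisions, groups)
gap = abs(rates["A"] - rates["B"])  # a large gap warrants human review
```

Real fairness audits go well beyond a single rate comparison, but even this crude check would catch a model that systematically disadvantages one group.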

  • Explainability

Most AIs out there are opaque black boxes when it comes to their decision-making process. Although they are increasingly adept at responding like an actual human and exhibit human-level intelligence to some degree, they cannot effectively explain the reasoning behind their answers. That poses serious compliance risks for organizations.

If organizations intend to employ AI for mission-critical operations, they should be able to demonstrate that the AI performs its duties in compliance with relevant laws. Since most AIs cannot explain their decision processes, or those processes are too complex for users to put into a format humans can understand, there needs to be a way to know whether these solutions are actually compliant.

This has triggered the parallel development of explainability in AI, which seeks to explain the decision-making process behind AIs. In turn, there’s now a burgeoning demand for talent that can understand the logic utilized by AI tools.
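To make the idea concrete, here is a minimal sketch of an inherently explainable model: a linear scoring rule whose per-feature contributions can be reported alongside every decision. The feature names, weights, and threshold are invented for illustration, not drawn from any real compliance system:

```python
# Hypothetical linear risk score whose decision can be explained feature by
# feature. Weights, features, and threshold are invented for illustration.

WEIGHTS = {"amount_zscore": 0.6, "new_beneficiary": 0.3, "night_hours": 0.1}
THRESHOLD = 1.0

def score_with_explanation(features):
    """Return the total risk score plus each feature's contribution to it."""
    contributions = {k: WEIGHTS[k] * features[k] for k in WEIGHTS}
    return sum(contributions.values()), contributions

features = {"amount_zscore": 2.5, "new_beneficiary": 1.0, "night_hours": 0.0}
total, why = score_with_explanation(features)
flagged = total > THRESHOLD

# Rank the contributions so a reviewer can see what drove the decision.
ranked = sorted(why.items(), key=lambda kv: kv[1], reverse=True)
```

A deep model has no such built-in decomposition, which is exactly why post-hoc explainability techniques, and people who understand them, are in such demand.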

The Frenzied Demand for Compliant AIs for Compliance

Considering the potential risks associated with AIs, it may sound sensible for organizations to put off large-scale adoption until a more robust legal foundation is in place. However, the advantages AIs offer, especially in the domain of compliance, are so far-reaching that many consider them well worth the risks.

  • Financial Transactions Monitoring

Strong financial infrastructure is a nation’s first line of defense against malicious actors, including terrorists, smugglers, prohibited substance networks, money launderers, etc. Therefore, regulators require financial institutions to submit detailed suspicious activity reports (SARs) after painstakingly reviewing customer transactions and analyzing their risk typology.

AIs can be trained on a wealth of past SARs to generate such reports at scale. Investigators would then take on a more quality-control role, reviewing hundreds of AI-generated reports for errors and weeding out false positives. For AIs to be used this way, organizations should be able to explain why the AI flagged a particular customer or transaction as suspicious, and how it arrived at that decision based on legally gathered information.
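As a toy illustration of the flagging step, and emphatically not a production AML technique, a simple statistical outlier check over transaction amounts shows the shape of such a system: it surfaces candidates, and human investigators review the reasoning behind each flag. All figures below are hypothetical:

```python
import statistics

# Toy outlier flagging over transaction amounts (all figures hypothetical).
# Real AML systems use far richer features and models, but the flag-then-
# review loop is the same: the system surfaces candidates, humans verify.

def flag_outliers(amounts, threshold=3.0):
    """Return indices of amounts more than `threshold` std devs from the mean."""
    mean = statistics.mean(amounts)
    sd = statistics.stdev(amounts)
    return [i for i, a in enumerate(amounts) if abs(a - mean) / sd > threshold]

amounts = [100.0] * 20 + [10000.0]   # twenty routine payments, one anomaly
suspicious = flag_outliers(amounts)  # the anomalous payment is surfaced
```

The point is the division of labor: the automated pass narrows hundreds of transactions down to a handful, and the investigator's review of each flag is what makes the pipeline defensible to a regulator.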

  • Enhanced Tax Audits

The IRS has been facing flak for years over abysmal audit rates among the super-rich. It is now fielding an AI solution to address these concerns: the AI will automatically monitor, identify, and flag high-end tax issues. As pressure on the current administration to show results increases, so will the use of AI to identify and audit wealthy taxpayers skirting the law.

  • Governance

We’ve all heard stories of college students using ChatGPT to “write” their assignments. While that’s frowned upon in the academic setting, corporations may prefer using such tools to create detailed company documents. For instance, a generative AI can continuously monitor, scan, and combine regulator updates, legal journals, and notifications to develop complex compliance documents for organizational review.

These reports can be treated as early drafts, which reviewers can then fine-tune by hand or via follow-up prompts that help the AI improve its output. Once again, a compliance manager may want to know where the AI got its information or how it interpreted a specific law or statement.

Closing Thoughts

Unlike the US and the EU, China has been at the forefront of AI regulation. Although its rules are far less stringent than expected, the watered-down regulations come as no surprise to observers. After all, laws that are too harsh would stifle competition and leave the country lagging in the race for AI dominance with the West. For instance, the language on personal data utilization has been softened to encourage the development of generative AI rather than emphasize punitive measures.

It's clear that AI is the future, and it isn't going anywhere, regulation or not. The only question is: how long a leash can it be afforded, and at what cost?

Each country and economy will find its own answer to this question, and their collective answers will determine the quality of life humanity enjoys in the near future. We can all see the current wave of hype and investment; what comes next?


Christina Trampota

Product and Growth for the Digital Customer by day, Professor at night. Global Innovation Leader, Startup Advisor, Public Speaker, Board Member