Responsible Use of AI: How to Mold It Into a Force for Good While Avoiding its Dark Side

Christina Trampota
6 min read · Oct 21, 2023

Earlier this year, Roberto Mata sued the Colombian airline Avianca for injuries he allegedly suffered when a serving cart struck him during a flight. When the airline asked the judge to dismiss the case, Mr. Mata's legal team objected and filed a brief citing at least half a dozen precedent cases. The only problem? Nobody, not Avianca's lawyers, not the judge, could locate the decisions or quotations cited in the brief.

It turns out that Mr. Mata's lawyers had used ChatGPT to perform their legal research, and the generative AI had promptly invented the cases, decisions, and citations supporting the brief. In short, ChatGPT had hallucinated it all, and the lawyers considered the AI reliable enough not to cross-verify what it produced, ultimately earning them a $5,000 fine from the judge.

As artificial intelligence takes over everything from scientific research to recruitment to national defense, it becomes increasingly critical that it is not only cost-efficient for its creators but also accurate enough to have a net positive impact on society at large.

However, the current generation of AIs, although remarkably efficient, is not yet accurate enough to replace human decision-making. Zillow learned that the hard way just two years ago, when an AI error forced it to write down $305 million and cut 25% of its workforce.

Nevertheless, AI is like a freight train. Like it or not, it's coming and there's no stopping it. If you're not using it, you can bet that your direct competitors, indirect competitors, and bootstrapped startups intent on replacing you will be. Even governments cannot ignore AI's potential. So, what's the way forward? Bet everything on AI and hope it doesn't all come crashing down, or steer clear of it and recede into oblivion?

Perhaps there's a third option, one that's less risky and holds greater promise. Concerns surrounding the dark side of AI have spurred a global discussion on the development and deployment of responsible AI. This fresh take on AI could give organizations the tools they need to use AI responsibly while avoiding its pitfalls.

What Is Responsible AI, and Why Does It Matter?

Responsible AI is an approach to the development and deployment of AI on a solid foundation of ethical principles and legal frameworks.

Proponents of responsible AI urge that AI projects should be human-centric — transparent, accountable, fair, free of biases, reliable, safe, and secure for all stakeholders.

The cost of irresponsible AI development, as Zillow discovered, can be catastrophic for users, whether individuals or organizations. According to a PwC report, AI will add $15.7 trillion to the global economy by 2030, and we're just getting started. Considering the potential rewards, it would be a grievous mistake for human civilization if AI's future were jeopardized by irresponsible early development.

Key Tenets of Responsible AI

Since there's so much riding on the success of AI right from the early stages, here's how enterprises, NGOs, and governments can approach AI development in ways that ensure precisely that:

Equality

Humans harbor inherent biases. Being unemotional, AI solutions should in theory be unbiased. However, machine learning systems are trained on real-world data generated by humans, and human biases unwittingly make their way into AIs too. That is how AIs have ended up rejecting job seekers based on their age, ignoring Black patients needing high-risk care, labeling members of Congress as criminals based on race, and spewing hate speech.

There's only one way to tackle this problem: at the source. Organizations should make sure that the data they use for AI training is free of bias. They can do so by actively asking what kinds of bias are relevant to each specific use case of their AI, and then verifying that the training data is free of those biases before feeding it to the model, as in the sketch below.
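
For illustration, here is a minimal pre-training audit sketch in Python. The column names, the toy data, and the four-fifths threshold are assumptions made up for this example, not a prescription; the point is simply to measure outcomes per group before the data ever reaches the model.

```python
# A minimal bias-audit sketch, assuming a pandas DataFrame of historical
# hiring data with a hypothetical "age_group" column and a "hired" label.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Share of positive outcomes per group, e.g. hire rate per age group."""
    return df.groupby(group_col)[label_col].mean()

def flag_disparate_impact(rates: pd.Series, threshold: float = 0.8) -> bool:
    """Apply the common 'four-fifths' rule of thumb: flag the dataset if any
    group's selection rate falls below 80% of the best-treated group's rate."""
    return (rates.min() / rates.max()) < threshold

if __name__ == "__main__":
    data = pd.DataFrame({
        "age_group": ["<30", "<30", "30-50", "30-50", "50+", "50+"],
        "hired":     [1,      1,     1,       0,       0,     0],
    })
    rates = selection_rates(data, "age_group", "hired")
    print(rates)
    print("Potential bias in training data:", flag_disparate_impact(rates))
```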

Enforce

Systems are not perfect; they function within certain limits of acceptable inaccuracy. The risks from AI will never be fully eliminated, but they can be minimized to acceptable levels. That raises three questions:

  1. How do we minimize AI errors?
  2. How do we minimize the fallout from AI errors that slip past all checks and balances?
  3. How do we ensure that AI is not abused by vested interests under the guise of inadvertent mistakes?

The first two concerns can be addressed with a strong emphasis on Reliability and Safety. First, organizations must consider an exhaustive range of scenarios the AI will be exposed to and anticipate its responses. Second, they must consider how the AI will behave in unforeseen circumstances and enforce restrictions or tolerance limits on its responses. Third, organizations must put human safety first and find ways to make timely adjustments to the AI algorithm when it is not performing to acceptable standards, as sketched below. These steps are a continuous exercise, as the evolving context requires frequent tuning of the AI algorithms.
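
As one concrete illustration of a tolerance limit, here is a minimal guardrail sketch in Python. The model interface, the allowed label set, and the 0.9 confidence floor are all assumptions for the example; the idea is that out-of-bounds or low-confidence outputs are never acted on automatically.

```python
# A minimal guardrail sketch, assuming a hypothetical model that returns a
# label and a confidence score. The threshold and label set are illustrative.
from dataclasses import dataclass

ALLOWED_LABELS = {"approve", "deny", "review"}
CONFIDENCE_FLOOR = 0.9

@dataclass
class Decision:
    label: str
    confidence: float
    needs_human_review: bool

def guarded_decision(label: str, confidence: float) -> Decision:
    """Enforce tolerance limits: out-of-range or low-confidence outputs are
    never acted on automatically; they are escalated to a human instead."""
    out_of_bounds = label not in ALLOWED_LABELS
    low_confidence = confidence < CONFIDENCE_FLOOR
    if out_of_bounds or low_confidence:
        return Decision(label="review", confidence=confidence, needs_human_review=True)
    return Decision(label=label, confidence=confidence, needs_human_review=False)

if __name__ == "__main__":
    print(guarded_decision("approve", 0.97))    # acted on automatically
    print(guarded_decision("approve", 0.55))    # escalated to a human
    print(guarded_decision("terminate", 0.99))  # unexpected label, escalated
```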

Lastly, the concern about vested interests hijacking AI solutions can be addressed with Accountability. Organizations must clearly identify the roles and responsibilities of those charged with ensuring AI compliance with established standards, best practices, and laws. The higher the AI's autonomy, the greater the responsibility of those accountable for its compliance. Everything from governance models to emergency response mechanisms should be designed to maximize the accountability of the people responsible for AI compliance.

Empower

The adage "information is power" has never been truer than it is today. An informed, aware, and empowered user knows the limits of the AI solutions they are using, as Mr. Mata's lawyers would discover in court. Knowing the limits of an AI's decision-making capabilities helps users keep it within its tolerance limits and avoid giving it undue importance in their lives.

Transparency goes a long way toward ensuring that users are aware of an AI's limits. Interpretability and Explainability (XAI) are essential elements of a transparent AI project. Too many AI systems operate as black boxes that produce answers without explaining how they arrived at a particular conclusion, judgement, or answer, as we saw in Mata v. Avianca.

AI developers should be able to explain their AI's decision-making logic and outputs to users, law enforcement officials, and other stakeholders. For instance, when a self-driving car is forced to sacrifice a pedestrian to protect its passenger, it should be able to explain why. And the reasoning behind such decisions must be communicated to users beforehand, whenever possible, so that their trust in the system is unwavering. Even for simpler models, per-feature contributions can make an individual decision explainable, as in the sketch below.
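
To make this concrete, here is a minimal feature-attribution sketch for a simple logistic-regression model; the loan scenario, feature names, and data are invented for illustration. More complex models typically need dedicated XAI tooling such as SHAP or LIME, but the underlying goal is the same: show which inputs drove a given decision.

```python
# A minimal explainability sketch: for a toy logistic-regression loan model,
# report each feature's contribution (coefficient * feature value) to one
# applicant's score. All features and data here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["income", "debt_ratio", "years_employed"]

# Tiny synthetic training set, purely illustrative.
X = np.array([[60, 0.2, 5], [25, 0.7, 1], [80, 0.1, 10], [30, 0.6, 2]])
y = np.array([1, 0, 1, 0])  # 1 = loan approved

model = LogisticRegression().fit(X, y)

applicant = np.array([40, 0.5, 3])
contributions = model.coef_[0] * applicant  # per-feature contribution to the logit

print("Predicted approval probability:", model.predict_proba([applicant])[0, 1])
for name, value in zip(FEATURES, contributions):
    print(f"{name}: {value:+.3f}")
```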

Additionally, everyone who works with AI systems should be equipped with the tools and resources to approach AI development in a responsible and compliant way. Educating them on privacy and safety priorities helps, too. For instance, when feeding medical data to an AI, preprocessing it to anonymize it is critical for compliance with HIPAA guidelines; a minimal sketch of this follows below. Of course, it goes without saying that users should have complete control over how and when their data is used, including the ability to withdraw their consent to data processing by AI.
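
As a rough illustration, here is a minimal de-identification sketch using pandas. The column names and hashing scheme are assumptions for the example; real HIPAA de-identification (Safe Harbor or Expert Determination) covers many more identifiers and controls.

```python
# A minimal de-identification sketch for patient records before AI training.
# Dropping direct identifiers and replacing the record ID with a salted hash
# is illustrative only, not a complete HIPAA compliance procedure.
import hashlib
import pandas as pd

DIRECT_IDENTIFIERS = ["name", "ssn", "phone", "address"]  # assumed columns
SALT = "rotate-and-store-this-secret-elsewhere"

def pseudonymize_id(patient_id: str) -> str:
    """Replace the raw ID with a one-way salted hash so records stay linkable
    without exposing the original identifier."""
    return hashlib.sha256((SALT + patient_id).encode()).hexdigest()[:16]

def deidentify(df: pd.DataFrame) -> pd.DataFrame:
    out = df.drop(columns=[c for c in DIRECT_IDENTIFIERS if c in df.columns])
    out["patient_id"] = out["patient_id"].astype(str).map(pseudonymize_id)
    return out

if __name__ == "__main__":
    records = pd.DataFrame({
        "patient_id": ["1001", "1002"],
        "name": ["Jane Doe", "John Roe"],
        "ssn": ["123-45-6789", "987-65-4321"],
        "diagnosis": ["hypertension", "asthma"],
    })
    print(deidentify(records))
```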

Responsible AI: It’s the Only Way Forward for Humanity

The sheer magnitude of AI's potential benefits is hard to overstate, and it could usher in a new wave of innovation and economic prosperity for all. That potential cannot be ignored, should not be stifled, and must not be exploited to benefit only a few. The way forward is AI development for the benefit of all, for the greater good. And responsible AI is how we do it.

— Christina shares candid insights and ideas based on her work, network, and passion for mobile, payments, and commerce. She focuses on the latest product innovations and growth for people during the day while teaching students and mentoring entrepreneurs at night. Connect with her on LinkedIn or Twitter/X. All views are her own. —
