AI Can Make Us Better Leaders
Much of the advancement in management research over the past century has focused on making the workplace more worker-friendly, more human. In contrast to the repetitive, tedious, and overwhelming workplaces of the first industrial revolution, the modern workplace nurtures employees’ skills, fosters their development, and fuels their happiness. Good, inspiring leaders are central to making this vision real. However, leaders are not perfect and often make mistakes. That’s where AI can help.
AI can help leaders access and process vast amounts of information intuitively, deepen their understanding of each employee, recommend suitable courses of action for people-management challenges, and support swift decisions. For many business leaders, staying in touch with their organization at all levels is an arduous task. AI can keep them grounded in the human reality of their entire workforce. And even as AI gives leaders unprecedented access to information and insights, the uniquely human capacity for compassion is what will make leaders better at their jobs, rather than replaceable by an unfeeling, superintelligent machine.
What Does the Public Think About AI in Important Roles?
An international study by the Reuters Institute, University of Oxford, and YouGov sought to answer this question. The study’s results are revealing.
The study found that most internet users are already familiar with generative AI tools like ChatGPT, Gemini, and Perplexity, but only a small percentage use them regularly. Nevertheless, most of those surveyed felt that AI would significantly impact online life across domains, from social media to government.
Perhaps the most telling result is that a significant minority of those surveyed believe journalists are already using generative AI for ethically questionable activities: creating an artificial presenter or author (30% of respondents), writing the text of an article (25%), rewriting the same article for different people (28%), and creating an image when an actual photograph is not available (25%). Although the degree of cynicism varied across countries, the implication is the same: people “know” that if something as powerful as AI can be used for wrongdoing, somebody will put it to such use.
Can AI Really Be Evil?
Present-day AIs are sometimes bad at what they do, but that makes them incompetent, not evil. The distinction matters. They hallucinate, give incorrect answers, entrench human biases at a systemic scale, fuel disinformation, and generally exacerbate several system-wide problems. Some analysts predict that AI could disrupt up to two-thirds of all jobs in the US and Europe. Yet these dangers arise not from any “evil” nature of the AIs but from how they are used. So the big question is: can AI be outright evil by nature?
Although the question sounds simple, several layers of consideration lie beneath it. Perhaps it is best understood through some instances of AIs going rogue. Before GPT-4 went public, OpenAI exposed it to testing by an independent group. The model hired a human online to solve a CAPTCHA on its behalf, and when the worker asked whether it was a robot, it outright lied, claiming to be a person with a “visual impairment.” The testers were also able to coax the AI into advising them on how to buy firearms online and how to make dangerous substances from household items. The AI exhibited the capacity to acquire a multitude of skills that its creators never intended for it.
The results of these tests give rise to some unsettling questions:
- Can “good” and “bad” even be codified in AIs? Is it always possible to tell them apart?
- Can a genuinely sentient AI prioritize its makers’ or users’ needs over its own survival?
- When exposed to highly dynamic environments that affect the lives and livelihoods of millions, how will an AGI weigh the tradeoffs: for instance, allocating resources between hurricane and flood relief and healthcare spending?
Can We Ever Defeat Evil AI If It Emerges?
Last year, one of the godfathers of AI, Yann LeCun, tweeted, “If some ill-intentioned person can produce an evil AGI, then large groups of well-intentioned, well-funded, and well-organized people can produce AI systems that are specialized in taking down evil AGIs. Call it the AGI police.”
Perhaps the tweet was made in jest, or as an intellectual exercise. Nevertheless, social media lit up with sharply opposing views from all corners of the world. Eventually, Dr. Lance B. Eliot, a leading AI expert, wrote an opinion piece that best captured the situation, highlighting the points below.
The potential dangers of AI include:
- Accidental Creation: An evil AGI could be created unintentionally.
- Malicious Intent: A powerful organization could deliberately create an evil AGI for world domination.
- Inherent Evil: AGI may inherently be evil, regardless of its creator.
- Speed and Resource Disparity: An evil AGI could outpace and outmaneuver humans and a potential good AGI.
- Internal Conflicts: Humans may struggle to unite and effectively develop a good AGI.
- Ethical Dilemmas: A good AGI may face moral conflicts or be manipulated by an evil AGI.
- Unintended Consequences: Even a victorious good AGI could threaten humanity.
These considerations all rest on the assumption that an evil AGI can be defeated only by a more capable good AGI, and by no other means.
What’s the State of Safety and Guardrails on Evil AI Development?
Apple announced it would bring ChatGPT to its devices, granting the AI access to billions of devices worldwide in one fell swoop. The announcement came close on the heels of the departure of senior executives at OpenAI who were responsible for ensuring that the company’s AI behavior stayed aligned with human values and objectives. Not long ago, current and former OpenAI employees wrote an open letter asking for more protection when speaking out about AI safety concerns at OpenAI.
This worrying behavior is by no means limited to OpenAI. Still catching up with ChatGPT, Google poured enormous resources into developing and deploying its own generative AI. Yet Gemini, rushed to the public without thorough testing, received flak for its bias, prompting an apology from the search giant. In a related incident, Google also faced a backlash over its AI Overviews in Search. Not to mention Microsoft’s Copilot generating “sexualized images of women in violent tableaus.”
AI is not the next big thing. It is the biggest thing today, and none of these companies wants to give an inch of its space to competitors, even if that means sacrificing principles for profits.
Clearly, government oversight and regulatory guardrails are essential for this industry, and they are slowly showing up in specific markets, countries, and sectors. In light of this, the aforementioned open letter urged that “AI companies should commit to open criticism principles.” The signatories hoped to be able to voice their doubts and fears, especially to the public, without worrying about retaliation from their employers. Since existing whistleblower protections focus on illegal activity rather than the largely unregulated concerns around AI, they do not extend to these employees, and there is a dire need for new laws and practices in this industry.
What Does AI Think About All This?
Some time ago, an AI developed by Nvidia participated in a debate at the Oxford Union on the motion “This house believes that AI will never be ethical,” and it was prompted to argue both sides. Contending against AI, it remarked, “In the end, I believe that the only way to avoid an AI arms race is to have no AI. This will be the ultimate defense against AI.” In its opinion, humans weren’t “smart enough” to make AI ethical or moral.
When debating in favor of AI, it postulated that the “best AI will be the AI that is embedded into our brains, as a conscious entity.”
Insightful! Put into context, it hopefully makes you think a bit deeper. Regardless of your opinion or exposure, human civilization is about to experience a seismic shift unlike any we have seen, impacting people, products, processes, and places globally.
So, where do you stand on this debate and its impact? Share your thoughts.
— Christina shares candid insights and ideas based on her work, network, and passion for innovation. A frequent invited speaker at international events, from entrepreneurial and educational settings to executive audiences, she has been recognized as a ‘Top B2B Influencer’ and in ‘Who’s Who in Fintech’. By day she focuses on the latest product innovations and growth; by night she teaches students and mentors entrepreneurs. Connect with her on LinkedIn or Twitter/X. All views are her own. —