The True Face of AI: Is it a Threat, a Fad, or an Opportunity?
Artificial intelligence development reached an inflection point in 2023, the breakout year for generative AI.
In the 2004 sci-fi movie I, Robot, Will Smith’s character famously asks a robot, “Can a robot turn a canvas into a beautiful masterpiece?” with the implied answer being an emphatic “No, it cannot!”
In 2022, an artwork generated with Midjourney won an art competition, angering many artists. A year later, generative art topped $1 million in auction sales at Sotheby's. Artificial intelligence has clearly come a long way since its early days of simple automation.
Professionals and organizations worldwide now use generative AI to create graphics, conjure images, write content, engage users in natural conversations, and much more. In fact, a McKinsey Global Survey found that one-third of surveyed C-suite executives were already using generative AI for work, with many more investing heavily in AI development. As artificial intelligence becomes capable of increasingly sophisticated cognitive tasks, organizations will find it suitable for a growing number of applications, including decisions with significant impact on their future.
So, is there no limit to what AI can do?
Does that mean AI will take over all functions and departments in organizations?
Is there still time to get onto the AI bandwagon?
And what does unchecked AI advancement mean for humanity?
Thinkers, organizations, and governments are increasingly asking these questions, and although they are getting closer to answers, none are concrete yet.
Policing the Road to an AI Future
The threat of AI-driven autonomous weapon systems going rogue or falling into the hands of malicious state or non-state actors has fueled multiple genres of movies and TV shows for decades. We are still far from those threats becoming reality. However, that doesn’t mean the current generation of AI is harmless. With increased adoption and deployment, even minor issues can prove catastrophic for populations at large. Common risks include the following:
- Deepfakes flooding the internet with fake news and propaganda
- Biases absorbed from human-generated data
- Concentration of power by AI pioneers
- Exacerbation of socio-economic inequities
- Misinformation and manipulation
These are only some of the most pressing challenges posed by the AI available today. With time, the nature, complexity, and severity of these threats will only increase. Since AI systems will inevitably be entrusted with increasingly critical roles in “running” our world, will that push our planet closer to the brink of danger? That, at least, is a question many AI skeptics, lawmakers, and technology experts are posing.
Indeed, at the beginning of this month, global leaders, computer scientists, and tech executives from 28 countries converged at Bletchley Park for an AI summit to discuss checks and balances on unbridled AI development. The choice of venue was fitting: Bletchley Park is where World War II codebreakers cracked the German Enigma code. The summit ended with a declaration urging global collaboration and action to tackle potential AI risks.
The EU is Pioneering AI Regulation
The EU’s AI Act is the most comprehensive legislation on AI safety yet. The regulatory framework classifies AI systems by usage and potential risk, and it obliges both AI providers and users to manage those risks effectively. The Act sorts systems into four risk categories:
- Unacceptable Risk: Systems deemed a threat to people; their development and use are banned
- High Risk: AI systems with a strong negative impact on people’s safety or fundamental rights; these are split into two groups and heavily regulated
- Generative AI: Subject to strict transparency requirements
- Limited Risk: Systems posing minimal risk; these must disclose that users are interacting with AI so users can make informed decisions
The Act is expected to be finalized by year-end.
The United States
AI regulation in the US remains nascent compared to the EU. Indeed, US lawmakers disagree sharply on how to regulate AI, and a consensus or compromise is nowhere on the horizon. Amid this stalemate, several state and local governments have taken it upon themselves to ban or regulate certain types of AI within their jurisdictions.
The White House, for its part, had been engaging Big Tech on AI safety practices for several months, but those talks yielded little more than promises and voluntary commitments from the industry. Under pressure from consumer groups, the White House eventually went further, issuing an executive order on AI at the end of last month, the most concrete US step yet toward AI regulation. The ambitious order addresses a range of potential AI risks, including national security, consumer safety, and privacy.
In Congress, an AI law remains a distant prospect. Efforts are underway to educate members of Congress on AI risks and safety before tasking them with developing regulation. Without a comprehensive law, regulators have relied on existing rules to enforce security, privacy, and legal compliance among AI providers.
AI Regulation in China
China has taken the lead over both the US and the EU in regulating AI. On August 15 this year, the PRC’s interim measures on generative AI took effect, the world’s first legislation focused specifically on generative AI. The government attempted the balancing act of writing a law that enforces high safety standards without stifling AI innovation, but the outcome was an extremely strict piece of legislation, putting at stake the country’s ability to compete with other economies and lead in the AI sphere.
Eventually, the government had to defang the law, choosing not to enforce some of its harshest provisions so that AI innovation could thrive. In fact, the government has made it abundantly clear that it will be flexible in enforcing its own AI laws to make room for innovation. What that means in practice, only time will reveal.
Since this interim law regulates only generative AI, and comprehensive legislation is still in the works, the experience it generates should give the world’s governments valuable insights into how AI legislation can balance innovation against safety.
The AI Race
The hesitation the world’s governments have shown in regulating AI should not be construed as a lack of interest in, or awareness of, the threats. On the contrary, the pervasive hesitation stems from the fear of being left behind in the AI race. Technological leadership in this all-important space has become a matter of strategic importance for the world’s leading economies. Beyond organic competition, governments are imposing trade sanctions and export restrictions on AI-related technologies to keep rival economies from gaining preeminence. Businesses can expect more such restrictions between countries in the future.
Innovate, Accelerate, or Hang Up?
Even as the AI legal landscape is in flux, the question before organizations remains: should we embrace AI or wait it out until there’s clarity on AI’s future?
The rapid proliferation of AI solutions worldwide may lead business leaders to conclude, wrongly, that they have been left behind in the AI race and to wonder whether it is even worth getting in. In reality, AI development, despite all the media attention, is still taking baby steps. We have only scratched the surface of what is possible, and there is an ocean of opportunities worth pursuing.
Although the threat of stifling AI regulation looms on the horizon, it also presents tremendous opportunities for farsighted business leaders. For instance, when GDPR was enacted, thousands of US-based websites revoked access for EU citizens, suddenly driving attention and web traffic to GDPR-compliant competitors. AI restrictions can offer the same advantage to better-prepared organizations that anticipate the changing legal landscape in advance.
Interestingly, the President’s Executive Order on AI in the U.S. offers extensive guidelines on how organizations should develop and standardize the tools and tests needed to ascertain the safety, security, and trustworthiness of AI solutions. This leaves plenty of room for AI developers to experiment with their creations and share the results with the government. In other words, the AI race is only beginning, and organizations that jump in now can still claim leadership positions in their respective fields.
Back to the Future
For now, AI regulation is practically non-existent, and AI development is the Wild West of the tech industry. It is the new gold rush, although the window for unbridled opportunity is closing quickly. Organizations still have the chance to establish leadership in this space by joining the AI race now. Moreover, developing an ethical framework and basing their AI development on it will give them a head start when AI regulation takes shape, while their competitors struggle to grapple with the new reality.
How AI will affect humanity will depend on how the symbolic tug-of-war between lawmakers and AI developers plays out. Where do you think AI will go from here?
— Christina shares candid insights and ideas based on her work, network, and passion for mobile, payments, and commerce. She focuses on the latest product innovations and growth for people during the day while teaching students and mentoring entrepreneurs at night. Connect with her on LinkedIn or Twitter/X. All views are her own. —