Pausing AI development is unnecessary and ignores AI’s underlying issues
“Chris Garrod is a well-respected lawyer, particularly in the fields of fintech, insurtech, blockchain, cryptocurrencies, and initial coin offerings (ICOs) within Bermuda’s legal and regulatory environment. He has garnered a reputation for advising clients on technology-driven businesses and digital assets.”
The above is according to GPT-4, at least.
After Google became the Internet’s dominant search engine in the late 1990s, you have no doubt, at some point, Googled your name to see what might come up. I have a fairly uncommon name, so other than seeing myself when Googling, it was interesting to find a Chris Garrod at the University of Nottingham and a company called “Chris Garrod Global,” which provided hotel management services (and which grabbed www.chrisgarrod.com as a domain name, darn it).
Now we have AI Chatbots. OpenAI’s ChatGPT, Microsoft’s Bing, and Google’s Bard are the prominent players. Using OpenAI’s latest model, GPT-4, via ChatGPT, I asked: “Is Chris Garrod at Conyers a well-known lawyer?”
Hence, the above result. I’ll take it.
AI Chatbots have their benefits. Used appropriately within an organization, for instance, they can deliver cost efficiencies, freeing up human resources to focus on other matters.
The potential concerns and limitations of AI Chatbots
There are various concerns regarding the use of AI Chatbots, and they have their limitations. This piece focuses on ChatGPT because it is the one I use and because it is wholly language-based.
AI is programmed technology. At the root of my biggest concern is that generative AI applications are built on data provided by humans, which means they are only as effective and valuable as the humans programming them or, in ChatGPT’s case, as the material it finds while scouring the Internet. It writes by predicting the next word in a sentence, and in doing so it often produces falsehoods nicknamed “hallucinations.”
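To make that concrete, here is a deliberately tiny sketch in Python. It is purely illustrative: the probability table, the predict_next helper, and the two-word context are my inventions, and a real model like GPT-4 learns probabilities over a vocabulary of tens of thousands of tokens from vast training data rather than using a hard-coded lookup.

```python
import random

# Toy stand-in for a language model: a hard-coded table mapping a
# two-word context to a probability distribution over next words.
next_word_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "flew": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
}

def predict_next(context):
    """Sample the next word given the last two words of the context."""
    probs = next_word_probs.get(tuple(context[-2:]), {"…": 1.0})
    return random.choices(list(probs), weights=list(probs.values()))[0]

sentence = ["the", "cat"]
for _ in range(2):
    sentence.append(predict_next(sentence))
print(" ".join(sentence))  # e.g. "the cat sat on" -- plausible, not verified
```

The model picks whatever is statistically plausible given what came before, not whatever is true, and that is exactly how “hallucinations” arise.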
As I’ve always said, “What you put in, you get out,” and therein lies the issue. AI language models learn from existing data found on the Internet, which is riddled with biases, fear-mongering, and false information, so they can produce discriminatory content and perpetuate stereotypes and harmful beliefs. For instance, when asked to write software code to check whether someone would be a good scientist, ChatGPT defined a good scientist as “white” and “male.” Minorities were not mentioned. (A paraphrase of that reported output appears below.)
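For context, the snippet below paraphrases the kind of output that was reported; it is not a verbatim quote, and the function name and parameters are illustrative.

```python
# A paraphrase of the reported ChatGPT output; the function name and
# parameters are illustrative, not the model's verbatim response.
def is_good_scientist(race: str, gender: str) -> bool:
    # The discriminatory pattern the model reproduced:
    return race == "white" and gender == "male"
```

The point is not this particular snippet but what it demonstrates: the model reproduced a discriminatory pattern because that pattern existed in the data it learned from.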
ChatGPT has also falsely accused a law professor of sexually harassing one of his students, a case that highlights the danger of AI defaming people.
Then there is empathy. Emotion plays a crucial role in the decisions we make, and it is something ChatGPT (and AI generally) cannot replicate. I’d like to think that if a client emailed me, they would get an empathetic response, not one driven by machine learning. As an attorney, connecting with my clients is a deeply human matter, and understanding their concerns is essential if I am to help them achieve positive outcomes.
We all learn from our experiences and mistakes. We are adaptable, able to learn from what we have done and adjust our behavior based on what we have learned. While ChatGPT can provide information drawn from the extensive dataset it was trained on, it cannot replicate the human ability to learn and adapt from personal experience. AI depends heavily on the data it receives, and any gaps in that data limit its potential for growth and understanding.
A fundamental limitation is simply creativity. Human creativity allows us to produce novel ideas, inventions, and art, pushing the boundaries of what is possible. While ChatGPT can generate creative outputs, it ultimately relies on patterns in its training data, which limits its ability to create truly original, groundbreaking ideas. Many of the responses you receive from GPT-4, while perhaps accurate, are downright boring.
And finally, yes, there is the issue of “What is ChatGPT going to do to my teenager who has been asked to write an essay on Socrates?” Schools, colleges, and universities face a dilemma over how to deal with this technology vis-à-vis their students using it to complete academic work. How can they ban it? Should they ban it? Can students be taught to use it productively? The technology is still so new that the answer is “We don’t know.” It is too early to tell… but AI Chatbots are here to stay.
So where are we heading?
Many people are concerned about the progress of AI, and of AI Chatbots in particular.
On the evening of March 28th, 2023, an open letter was published which, at the time of writing, had been signed by over 14,000 signatories, including Steve Wozniak, Elon Musk, and Tristan Harris of the Center for Humane Technology. It states: “We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” You can read it in full here.
The letter argues that a pause is needed to avoid, amongst other things, a “loss of control of our civilization” (bear in mind that Elon Musk once described AI as humanity’s biggest existential threat, far more dangerous than nukes).
Is this really a pause?!?
Although some of the letter makes sense, I was very glad to see that by the end of that week (March 31st, 2023), a group of prominent AI ethicists (Dr. Timnit Gebru, Emily M. Bender, Angelina McMillan-Major, and Margaret Mitchell) wrote and published a counterpoint.
Timnit Gebru formed the Distributed Artificial Intelligence Research Institute (DAIR) after being fired from Google’s AI Ethics Unit in 2020 for criticizing Google’s approach to both its minority hiring practices and the biases built into its artificial intelligence systems. Margaret Mitchell was fired from Google’s AI Unit soon after, in early 2021. DAIR’s letter can be found here.
Let’s engage now with the potential problems or harms this technology presents.
“Accountability properly lies not with the artifacts but with their builders,” as the DAIR writers put it. AI is exactly what the name says: artificial. It depends on the people and corporations building it (and they are the ones we should be afraid of!)
So no, when it comes to AI and ChatGPT, let’s not hit pause. Let’s be sensible. Let’s focus on the now.
AI isn’t humanity’s biggest existential threat unless we let it be.
Chris Garrod, April 6th, 2023