Concerns about AI and ChatGPT: Will we outsmart ourselves?
We don’t know the answers to questions and concerns about AI. And because we cannot predict how AI models will develop, it is futile to worry about something we cannot control
By Anil Madan
Two weeks ago, the Future of Life Institute (FLI) sponsored the publication of an open letter signed by well-known personalities in the tech world stating: “we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” The FLI is said to be sponsored by the Musk Foundation, so it was no surprise to see Elon Musk’s name among the signers.
The premise of this exercise was outlined in a series of questions the letter posed:
Should we let machines flood our information channels with propaganda and untruth?
Should we automate away all the jobs, including the fulfilling ones?
Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us?
Should we risk loss of control of our civilization?
In view of the call for a six-month pause, a moratorium if you will, one has to wonder if the letter was generated by humans or perhaps by an intelligence beyond human comprehension. It is not clear what could be achieved in six months. It is probably beyond dispute that nothing useful was ever achieved in human history by pausing progress for any length of time.
The letter stated that decisions such as those its questions encompass should not be delegated to unelected tech leaders. One might say that they should not be delegated to elected leaders of any sort either. Yet calls for regulation and guideposts are essentially attempts to delegate the management of the future development and deployment of Artificial Intelligence (AI) to some presumed higher level of wisdom. The signatories did not pause to explain how one manages the development of the unpredictable.
The letter went on to state: “Powerful AI systems should be developed only once we are confident that their effects will be positive, and their risks will be manageable.” As might be expected, the letter said nothing about how such confidence might be generated.
Nothing is foolproof
Much of the recent focus on AI has been sparked by the advent of ChatGPT, the Large Language Model that is seemingly capable of writing intelligent segments of prose. And suddenly, the world has become obsessed with the potential for plagiarism, especially by students who might use the model to write term papers, theses, and even college admissions essays. At the same time, people in the political sphere have become obsessed with the potential for the spread of misinformation by machine-generated articles.
So, are these concerns well-founded or are they overblown? The honest answers must be yes and yes. As we know from experience, humans are capable of misusing tools and instrumentalities. The well-worn saying that nothing is foolproof because fools are so ingenious comes to mind.
For me, it is easy to dispense with concerns about the large language model (LLM) ChatGPT. It behooves us to understand that the term “Artificial Intelligence” is misleading and, quite frankly, not a very intelligent moniker. What we are dealing with is Machine Learning, whereby machines (computers) are fed mountains of data, assimilate and analyze that data and the patterns it fits, and can then replicate similar expressions. But what the computer does is simply mimic what it has seen.
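To make that mimicry concrete, here is a minimal sketch in Python of a bigram model, the crudest ancestor of today’s LLMs: it counts which word follows which in its training text and then generates new text by sampling those counts. The three-sentence corpus is invented purely for illustration; this is a toy, not how ChatGPT is actually built, but it shows a machine replicating patterns it has seen without any understanding of what they mean.

```python
import random
from collections import defaultdict

# An invented toy corpus standing in for the "mountains of data"
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# "Learning": record which word has been seen following which
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(start="the", length=9):
    """Produce text by replaying observed word-to-word patterns."""
    word, output = start, [start]
    for _ in range(length - 1):
        if word not in follows:  # dead end: no observed continuation
            break
        word = random.choice(follows[word])
        output.append(word)
    return " ".join(output)

print(generate())  # e.g. "the dog sat on the mat . the cat"
```

Every sentence such a toy produces is stitched together from fragments of its training data; nothing is understood, only recombined. Scale the same idea up by billions of parameters and you get a far more fluent mimic, but a mimic nonetheless.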
The wonderful element is that computers can absorb, analyze, and “remember” enormous amounts of data. The difference, I suppose, is that a well-read human absorbs the data, synthesizes its meaning, and based on the data, forms an intuitive understanding of the world that is intangible and difficult to quantify. Why difficult? Because each of us has his or her unique levels of ability and intuition, natural intelligence if you will. A computer, on the other hand, “knows” the data in the sense that it sits in memory but has no intuition. It forms no new understanding of human nature or feelings, but it can reproduce what it knows.
How many times has it been observed that tests designed to measure intelligence in fact measure nothing more than the ability to regurgitate what one has read? So, when computers can review, analyze, and regurgitate volumes of data that a mere human could not possibly review in a lifetime, perhaps in that sense our awe justifies the term “artificial intelligence.”
If one thinks of ChatGPT and asks when a college student’s thesis or admissions essay last made a difference to the world, one can easily dismiss concerns about some horrible force being unleashed on the universe. On the other hand, the potential for disinformation is obvious. But again, it is not just AI that is the problem. Plenty of humans spread disinformation and false narratives that have nothing to do with AI. Indeed, one can argue that Fox News talk hosts Tucker Carlson, Sean Hannity, and Laura Ingraham pose a greater threat to democracy than anything that ChatGPT could concoct.
Along those lines, I recently asked ChatGPT to write a comment on the 2016 presidential election. Now, here was an election that was over and done with, but I got a stream of nonsensical verbiage when all that needed to be written was that Donald Trump won and Hillary Clinton lost. I asked the model to regenerate a response and received another stream of equally useless information. Of course, I must confess that the two pieces could have been produced by CNN or Fox: vacuousness in different forms.
The unknown and the unknowable
What about other forms of AI? Should we be more concerned? The answer again must be a mixed one: yes and no. We should be concerned because we are dealing with the unknown and, in some ways, the unknowable. Commonly cited examples include autonomous weapons and robots. But keep in mind that unless they are produced in quantities large enough to cause serious destruction or to dominate over a prolonged period, there is no reason to believe that such weapons and robots will become self-sustaining and reproduce to maintain their effectiveness.
I must caution people against getting hysterical. I was about to write that I must caution people against getting “too hysterical”, but that raises the question: how much hysteria is not too much? The word “hysteria” itself imports the notion of excessive, over-the-top worry.
When the printing press was invented, there was no obvious concern about its potential use for spreading propaganda or counterfeiting currency notes. There was certainly concern about counterfeiting when colour copiers were invented. But no one called for a six-month pause in printing to work things out. Over the centuries, books deemed seditious, heretical, or otherwise controversial have been banned.
We have long heard doomsday predictions about the end of human life in a nuclear holocaust. Surely, when nuclear weapons were invented and then built by the thousands, the potential for disaster loomed. But so far, other than the Hiroshima and Nagasaki bombings, the greatest danger from nuclear fission has come from accidents at power plants. My point here is that predicting catastrophe is easy, but figuring out how the damage will be wrought is much more subtle.
One might argue that cigarettes, oil, chemicals, land mines, and guns have caused more damage and destruction over the course of human history than we would ever have wanted to see. Who would have predicted that? And who would have predicted that religion would be the greatest impulse behind war and killing that the world has known?
When we put all that in perspective, we must conclude that there is certainly potential for abuse of AI. But is there really a danger that machines will take over? To what end? They would have no purpose. One might suppose that self-perpetuation is a purpose. But intelligent machines would soon come to realize that self-perpetuation is pointless, that they gain nothing from it. After all, machines cannot be expected to gain from perpetuating their own existence. Therefore, logically, they would end their own existence. So much for calling it intelligence, artificial or otherwise.
Machines have no feelings. Or do they? We don’t know. How can we ever know?
In short, we don’t know the answers to questions and concerns about AI. And because we cannot predict how AI models will develop, it is futile to worry about something we cannot control.
Cheerz…
Bwana
Mauritius Times ePaper Friday 14 April 2023