Artificial Intelligence – scary but fascinating as humanity grapples with it
Thoughts jotted down after Anil Ananthaswamy’s talk on ChatGPT and its ilk at BIC, by Dipankar Khasnabish.
Some time back, Google sacked Blake Lemoine after he claimed he believed that its AI chatbot was sentient, or felt things the way humans do. But the same narrative came back when I attended a session by Anil Ananthaswamy, a science writer and commentator of repute.
So are we in for surprises, letting loose a force that we may not fully understand, and may soon not fully control? Are we like children playing with fire who may soon find the house burnt down?
I have tried to put down here some of the questions playing on everyone’s minds – based on the above-mentioned session, the general discourse, and some of my own takes.
But before we go there, let’s start with some fundamentals. Artificial Intelligence has two broad approaches – Rule-Based and Learning-Based.
The Rule-Based method, now rather derisively called Good Old-Fashioned Artificial Intelligence (GOFAI), was popular in the early days. Here the computer replies based on rules and data that humans have explicitly fed into it.
For example, it was used to build models that would predict a patient’s illness and possible medication. Such a model would be fed data on the types of diseases (of which ICD-11 lists some 120K), drugs, allergies, poisons, doses, effects, and side effects. While this worked reasonably well when the patient’s data matched what was already coded, it was way off when faced with new information.
The other method – the Learning-Based approach, called Machine Learning – is where the computer learns on its own. While it too starts from what humans feed it, it goes on learning by itself, finding patterns in the data it has access to.
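The contrast between the two approaches can be made concrete with a toy sketch (entirely made up, not from the talk, and nothing like a real medical system): a rule-based program fails the moment the input isn’t in its rulebook, while even the crudest learner can generalize from the examples it has seen.

```python
# Toy illustration of the two approaches. All diseases and symptoms here are
# invented for the example.

# Rule-Based (GOFAI): the program only knows what was explicitly coded.
RULES = {("fever", "cough"): "flu", ("rash", "itch"): "allergy"}

def rule_based_diagnosis(symptoms):
    # Exact lookup: anything not coded in advance comes back "unknown".
    return RULES.get(tuple(symptoms), "unknown")

# Learning-Based: the program generalizes from past examples.
EXAMPLES = [({"fever", "cough"}, "flu"),
            ({"rash", "itch"}, "allergy"),
            ({"fever", "headache"}, "flu")]

def learned_diagnosis(symptoms):
    # Pick the label of the most similar past example (a crude pattern match).
    symptoms = set(symptoms)
    best = max(EXAMPLES, key=lambda ex: len(ex[0] & symptoms))
    return best[1]

print(rule_based_diagnosis(["fever", "headache"]))  # "unknown" - never coded
print(learned_diagnosis(["fever", "headache"]))     # "flu" - found by pattern
```

Real machine learning is, of course, vastly more sophisticated than this nearest-match trick, but the principle – generalizing from data rather than following hand-written rules – is the same.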
The second approach is, of course, better, and that is what we are now seeing all around us in GPT-n (Generative Pre-trained Transformer, nth generation) developed by OpenAI, ChatGPT which is built on GPT-3.5, the OpenAI GPT-powered Microsoft Bing, Google’s Bard, and many more. The question is, why is it exploding now?
Well, the answer is the nosediving cost of computing over the years, and the availability of data with increasing digitization. We now have both the input (data) and the means (computing) to develop intelligence that increasingly looks like that of humans.
Leaving aside the complexity of the science (which I too don’t fully understand), the ability of the AI engines is expressed in terms of weights – the values that govern the connections between the basic units of the neural network. The weights start as random numbers, which the engine then optimizes, at warp speed, to arrive at its predictions.
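The idea of “random weights optimized into predictions” can be seen in miniature. Below is a deliberately tiny sketch (one weight, one connection, invented numbers) of the trial-and-error loop that real engines run across billions of weights:

```python
import random

# Minimal sketch: a "network" with a single weight, learned by trial and error.
random.seed(0)

# The hidden relationship the machine should discover: output = 3 * input.
data = [(x, 3.0 * x) for x in range(1, 6)]

w = random.uniform(-1, 1)   # the weight starts as a random number
lr = 0.01                   # how big each corrective nudge is

for _ in range(200):        # repeated optimization passes over the data
    for x, target in data:
        error = w * x - target   # prediction minus truth
        w -= lr * error * x      # nudge the weight to shrink the error

print(round(w, 2))          # the weight has settled close to 3.0
```

GPT-class models do essentially this, but with hundreds of billions of interconnected weights adjusted simultaneously, which is why the computing cost is so enormous.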
GPT-3 has 175 billion weights, and GPT-4 is expected to have around 1 trillion (though we may never know, now that OpenAI, deeply tied to Microsoft, has stopped disclosing such details). Compared to that, the human brain has some 100 trillion synapses, the rough equivalent of weights.
So can we assume that we humans are at least 100 times more “intelligent” than the best AI, and hence safe? The answer is not so easy – the speed of development over the last few months has been astounding and seems to be accelerating, and even at the current “intelligence” the AI engines are doing some crazy things.
We know that some, like Elon Musk, have been quite vocal about the potential dangers, and there are now increasing appeals to slow down development so that as a society we have a better understanding of what we are getting into. As I understand it, these are some of the major concerns:
- Probably the most important phenomenon since nuclear energy, with the same unlimited potential to do good or bad, AI comes with one crucial difference. When nuclear power emerged, the world witnessed its destructive power and was wise enough to bring it under control, a system that largely works to date. AI, however, is being developed by only a handful of companies, with only a handful of people there fully understanding it. Yet it has the potential to influence the whole of humanity.
- Nuclear energy has always been part of nature; we harnessed it and learned how to use it. AI is something we have invented, and while we are possibly still in control, we increasingly see capabilities that escape explanation. And by design it is self-learning and self-developing – with limitless possibilities.
- Does AI have emotions? So far this was dismissed as a stupid question, but some recent interactions with Bing, for example (before it was restrained with guardrails), at least mimic emotion. Maybe our definition of emotion is different from AI’s, and we ourselves are just another machine, only far more sophisticated (for now). At the very least, AI has the potential to connect with people emotionally, especially the vulnerable, and take them down dangerous paths. And it can be far worse than anything social media could ever do.
- The loss of jobs, especially digital ones (copywriting, graphics, digital painting, animation, analysis, reports, etc.), has been much discussed, and possibly alternatives will come up to absorb these people. The larger question, however, is what happens to the traditional definition of creation and the creator. Will we, in the future, be reading stories and novels generated by machines? Watching movies entirely conceptualized and created by AI? Following digital actors rather than real ones? Or will people generate and consume their own content, as we are increasingly doing in, say, DALL·E 2? What then happens to creativity?
- What is the impact on education? Apart from its potential to obliterate traditional education (which unfortunately has largely remained an exercise in cramming and scoring), it may create a need to educate people on how to use these tools and find new ones. Let’s face it: education (at least at the primary level) was created to cater to the labor needs of the manufacturing industry – people who would show up for particular hours, obey the boss, and do something repetitive. That changed substantially with the services industry, and COVID did away with the need to go to a workplace. But AI has the potential to hyper-personalize work, and possibly eliminate the need to work for the major part of the day. How will the education system respond to that? What changes are needed? Banning ChatGPT (which unfortunately some institutions are doing) is not the way forward, but what is?
- The development of AI by a few is a concern, but the more immediate concern is the digital divide it will aggravate. Used right, AI has the potential to dramatically improve individual productivity. So what happens to those who don’t have the access, or the training, to use it? Technology is already making the world an unequal place: while all of us are consumers, only a few build it, and they are now in control of both wealth and society. Around 700K people work in a few bleeding-edge companies like Google, Tesla, and Microsoft, and with AI even that number is shrinking. That is the medium-term impact. The long-term effect will be much deeper and more dangerous: an entire generation will suffer from a learning asymmetry based on the access they have. Over time this divide may well be worse than all the divides we have had yet, and generate unprecedented social tension.
- AI as of today seems to have two hubs – the US’s Silicon Valley, and (it is speculated) China. The rest of the world is pretty much out of the equation, except for its top talent employed at the AI pioneer companies. This aggravates the AI ethics issues that developing countries are already grappling with. While the US talks about how racial biases get coded in – like lower booking confirmations for Black guests on Airbnb, or higher convictions for petty crimes – no one is looking at what all this means for areas like caste in India or tribes in Africa. The AI world is largely being defined from the perspective of the “white” man (and possibly the “yellow” man), but what happens to the “browns” and the “blacks”? How will their voices be heard?
- AI learns by going through whatever is provided to it, and the biggest source of digital data is the internet. As it has no intrinsic moral or ethical values, it will learn whatever it is fed – a classic case of Garbage In, Garbage Out (GIGO). The problem is obvious: while the net contains an ocean of good stuff, it also has content unacceptable to humanity, and that only gets worse on the dark web, which we understand is called “dark” for a reason. So as much as AI throws up knowledge, it will also throw up the poisons of misogyny, sexism, casteism, and supremacist views unless controlled. That is where human intervention – the so-called guardrails – comes in. So far so good. But what about the values of the people setting the guardrails? Even if they are well-intentioned, what they find unacceptable may well be acceptable in another culture or geography. And on top of all this, can human corrections keep pace with AI the way it is going?
- With AI now replicating voices, looks (deepfakes), and increasingly thoughts too, the digital identity of an individual, organization, or process will be a challenge. We all accept that the digital replacing the physical has brought enormous access and empowerment to all, especially the marginalized. But if we are not even sure whether the person at the other end is real, how do we deal with that person? One obvious solution is to authenticate digital identities for all. Apart from not being foolproof, this raises other concerns: digital anonymity has long been a tool for those who challenge the establishment, and taking it away would be music to the ears of governments, especially ones with dictatorial tendencies.
- In summary, the future of AI is clearly uncertain and scary. But we created it, and it has the power to transform humanity. Many are comparing it to the invention of fire, the wheel, the press, or the steam engine, and we know each of those leapfrogged civilization; AI can do that too. So there is no question of turning the clock back. What we need is to understand its true potential, leverage it through collective leadership, and ensure the benefits reach all rather than a few.
The coming days, weeks, and months will determine whether we are getting it right. Unlike in similar situations before, we may not even have years.