Artificial Intelligence and the Moral Crisis
Is Frankenstein's monster really rearing its head? Science does not create monsters, nor do scientists; but anything can happen when the power of science falls into the hands of mercenary rulers, as the world witnessed in 1945 at Hiroshima and Nagasaki. When that power falls to unscrupulous, corrupt businessmen, they market science until their pockets swell, regardless of whether it tarnishes reputations or destroys 'insignificant' lives. Such crooks have neither the time nor the inclination to think about those things. All they want is money, fame, comfort and the fulfilment of their own desires.
Recently, a few ominous events in the tech world have concerned scientists and sociologists. The term 'deepfake' has become known around the world after fabricated videos of Bollywood stars sparked discussion, criticism and analysis. Alongside that came news that an ordinary factory inspector lost his life to a malfunctioning robot. The media said little about it, and even the bigger platforms neglected the story. After all, what is the value of an ordinary inspector's life?
The ousting of OpenAI chief Sam Altman is also a matter of discussion, and Sam Altman is no ordinary person. Although these three events may seem unrelated, they are in fact connected: artificial intelligence, or AI as the tech world calls it, runs through all of them.
Sam Altman is the CEO of OpenAI, the company behind ChatGPT. On November 17, he was fired by the company's board of directors, which accused him of failures of leadership. Within two days, however, the tables turned completely. Co-founder and company president Greg Brockman resigned the same day Sam was fired. The next day, three senior scientists and researchers announced that they were leaving OpenAI. Then around 700 employees wrote to the board threatening to resign if Sam did not return, and the company's investors added their own pressure. Finally, the board was forced to bring him back on November 21, and, in turn, all but one of the directors were replaced.
Considering those five days of drama at OpenAI, it may seem that Sam was the victim of ordinary office politics, but the matter may not be so simple. He leads the company behind ChatGPT, an AI app that studies and analyses human input and responds accordingly. If the people building AI lack ethics, how can the AI itself be trusted? Let us grant that there was some dirty politics inside OpenAI. I do not want to judge who was right or wrong, but something clearly happened there; so how safe is AI, really, in the hands of people whose morals are in question?
ChatGPT, Google Bard and the AI of other big companies may be ethically controlled, but numerous small AI companies also exist, and many of them have anything but good intentions. The AI they develop is in no way safe; the Bollywood incident is a glaring example.
It started with South Indian actress Rashmika Mandanna, followed by Katrina Kaif and then Alia Bhatt: AI-generated deepfake pornographic videos of them were created and spread across the internet. Porn traders earn billions of rupees selling such material. In the case of these actresses, the public was told that the videos were AI fakes. But what would happen if the same were done to an ordinary girl in our society?
The other day, a colleague raised this issue with me, recounting what a young victim had told him. In his view, such matters can be explained only to friends or to conscious, sensible people. But who will convince the victim's parents, neighbours or relatives? How many people can be persuaded one by one? The young woman must now live with severe emotional trauma; how many could bear such a shock? Some might even take their own lives. Making fake videos of girls in villages usually has more than a commercial motive: more often it is the blind rage of a rejected love proposal. With nothing more than a single photo or video as input, these scoundrels can generate pornographic images and videos of a young woman. All they need is a smartphone, and numerous apps already exist.
So what is the solution to this problem?
AI is also being used in robots; in fact, apart from their physical form, there is little difference between a robot and an AI app. Ever since the idea of robots arose, artificial intelligence has been on inventors' minds, and authors have imagined such machines in fiction for a long time. In those stories, robots were empowered to make their own decisions; conflict followed, and robots stood up against humans, rebelled and began killing them. When robots truly began to emerge from the confines of science fiction, researchers and science fiction fans alike were amazed.
No one ever objected to mindless robots. But from the 1960s onward, scientists, researchers and sociologists began warning that such machines might one day turn on humanity, just like Frankenstein's monster. They also knew that artificial intelligence was only a matter of time, and they urged inventors to consider its dangers.
Inventors did take that into account, and from the 1990s efforts were made to develop the technology ethically. But once smartphones became ubiquitous and YouTube and Facebook arrived, young people discovered an easy route to popularity. Fame could also bring money, and that is when the moral crisis began. Ordinary people became obsessed with going viral on YouTube or Facebook, and technology companies, with them in mind, released new apps and software tools onto the market.
Video and photo editing, once the preserve of experts, is now open to everyone in the smart age. Today anyone can edit photos and videos with ease; indeed, since the arrival of AI apps, they no longer even have to do it themselves. Photos and videos can be generated on command, exactly as the user wishes. Unless one looks very closely at the details, no one can tell whether they are fake or real. This was not supposed to happen.
Isaac Asimov, the world-renowned science fiction writer, gave us hope by laying down three laws for robots that bear directly on the AI crisis: 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey human commands unless those commands conflict with the first law. 3. A robot must protect its own existence as long as doing so does not conflict with the first or second law.
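In software terms, Asimov's three laws amount to a strict priority ordering checked before any action. The sketch below is purely illustrative, assuming hypothetical names (`Action`, `permitted`); no real robotics system or API works this way:

```python
# A minimal, hypothetical sketch of Asimov's three laws as a priority check.
# The class and function names here are illustrative assumptions, not a real API.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool        # would executing this action harm a human?
    ordered_by_human: bool   # was this action commanded by a human?
    endangers_robot: bool    # would this action put the robot itself at risk?

def permitted(action: Action) -> bool:
    # First law (highest priority): never harm a human.
    if action.harms_human:
        return False
    # Second law: obey human orders that do not violate the first law.
    if action.ordered_by_human:
        return True
    # Third law: otherwise, act only if the robot's own existence is safe.
    return not action.endangers_robot

# A harmless human order is permitted; a harmful one is refused.
print(permitted(Action(harms_human=False, ordered_by_human=True, endangers_robot=False)))  # True
print(permitted(Action(harms_human=True, ordered_by_human=True, endangers_robot=False)))   # False
```

The point of the ordering is that each lower law is evaluated only after the higher ones have been satisfied, which is exactly the hierarchy Asimov described.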
The problem is that humans themselves must instil Asimov's laws into artificially intelligent robots, and even that does not solve the problem completely. No device or system in the world is 100% reliable; physics itself tells us perfection is impossible. To claim that robots will work 100% perfectly would therefore violate the very principles of science.
Even the smallest error in a robot can bring terrible disaster, as happened recently to an inspector in South Korea. The robotics company said the machine mistook the man for a box of vegetables, and he was crushed against the conveyor belt and died. But that explanation rings hollow: if the robot merely confused him with a box, why did it press him against the belt with such force? A box hardly needs to be squeezed that hard. The company later admitted that the robot's sensors had been faulty. Nor is this the first time a robot has killed a human: in 1979, a worker named Robert Williams died in a similar accident at a factory in the United States.
AI runs through all three of these recent incidents. AI systems, like robots bound by Asimov's laws, are usually developed under a set of instructions; but who will stop untrustworthy app developers, who will do anything for money? There is still time to prevent the worst.
No one can yet say how far mankind's future will depend on AI. Scientists, technologists, sociologists and psychologists must sit down together and find a way to govern the AI systems already among us; otherwise, artificial intelligence could become more dangerous than nuclear bombs.
Author: Science Writer
Translation: Sayema Akhtar