The new generation of virtual know-it-alls is on the march

Artificial intelligence robots / Photo: JIRAROJ PRADITCHAROENKUL / Alamy / Profimedia

Less than five days after launch, more than one million users had joined ChatGPT, the new artificial-intelligence interlocutor that has an answer to every question and a solution to every problem. That, at least, is how its authors present it, and most of the users drawn into the OpenAI project see it the same way.

The first reactions are mostly admiration for the skill with which the new "know-it-all" processes questions, races to find answers and serves them up in an understandable way. That is also one of the main reasons its backers are taking part in the development of the project of OpenAI, the company whose very name spells out the idea – open artificial intelligence, built up by people.

The numbers speak to the scale of the craze – ChatGPT reached one million users in five days. Instagram took two and a half months to reach that many users, Facebook ten months, and Netflix three years and five months.

Its popularity is a testament to the universal applicability of this "pocket assistant". Google's search engine will find you articles, photos or videos that contain the term you entered. ChatGPT does not show you a list; it gives you answers in short sentences, explaining content and concepts in a way people can understand. Instead of serving up facts for you to choose from, it "chews" them so you can digest them.

– In front of you is a computer that can answer any question just as another person would. The machine has the skill to gather facts, weigh and analyze them, draw ideas from different concepts and combine them – explains Aaron Levie, CEO of "Box", one of many technology companies in Silicon Valley.

Universal assistant

The first version of the virtual assistant is so advanced that users are left stunned by the system's potential. Even at this early stage of development, ChatGPT helps developers write the code they need, from a blank screen to a finished version. It can be used even by people who know none of the programming languages. It can find, remove and replace errors in code, much like the familiar tools that automatically correct typos and grammatical errors as you type in a text editor.
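The "autocorrect for code" workflow described above boils down to pasting a broken snippet into the assistant and asking it to find and fix the bug. As a purely illustrative sketch – not something described in this article – here is roughly how such a request could look through OpenAI's Python client, assuming the openai package and an API key are available; the model name, prompt and buggy function are invented for the example:

    # Illustrative sketch only: assumes the openai Python package (v1+) is
    # installed and the OPENAI_API_KEY environment variable is set.
    from openai import OpenAI

    client = OpenAI()

    buggy_code = '''
    def average(numbers):
        # hypothetical bug: the stray "- 1" skews the result
        return sum(numbers) / len(numbers) - 1
    '''

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[
            {"role": "system", "content": "You are a careful code reviewer."},
            {"role": "user", "content": "Find and fix the bug in this function:\n" + buggy_code},
        ],
    )

    # The reply comes back as ordinary text: an explanation plus the corrected function.
    print(response.choices[0].message.content)

In practice, most users simply paste the same kind of prompt into the chat window and get the same kind of answer back.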

You can enlist ChatGPT as your personal secretary, to answer formal communications via e-mail and social networks and to take over boring, repetitive and time-consuming tasks.

It can be your personal trainer and instructor, counting every calorie consumed and burned, looking after your physical fitness, organizing your diet and motivating you to maintain a healthy lifestyle. It can be an excellent assistant in the doctor's office and in the operating room.

And that is not all. Although it is still an early version, the new companion can prepare reports and reviews, or generate a paper for you on a given topic. It has already drawn up an effective marketing strategy for some users. It can be used as an accountant. Artists can have it write an essay, compose a song or script a scene for a television show. With ChatGPT's help, an eleven-year-old boy created a computer game that is already played by thousands of users. Smartphone assistants such as "Siri" or "Alexa" will tell you a joke, read out a cake recipe or play a song that matches your mood; ChatGPT will create them itself. All you have to do is describe precisely what you want.

Future Zone at the Seoul Museum of Technology – EPA Photo, Yeon Heon Kyun

Digital hallucinations

At the same time, many criticize AI interlocutors as ineffective, question the usefulness of the whole concept, and some religious leaders declare them "the work of the devil." Sociologists suggest that if search engines have reduced people's capacity to remember, artificial intelligence bots will reduce people's capacity to think and create.

ChatGPT has more advanced "synapses" than predecessors such as "Siri" or "Alexa", and you may not even suspect that you are talking to a machine rather than a human. But this is where the big trap lies: the bot can pad its answers with invented information, a process that in humans would be described as hallucinating.

The research division of "Meta", the company that also owns Facebook, recently withdrew its Galactica model, which started generating incorrect information in the first hours of communicating with users.

A number of other artificial intelligence systems are also being developed. DALL-E renders images based on a description of exactly what the user wants to see on the screen; image-generating systems are particularly sensitive because of the high risk of misuse. The GPT-3 system was conceived as a kind of perfect dictionary, capable of writing essays and articles, generating arguments and producing entire blocks of code.

ChatGPT and Google's competing system LaMDA (Language Model for Dialogue Applications) were asked to converse "Mark Twain-style". After a while LaMDA "ran away with the story" and described a meeting between the famous American writer and the "king of jeans", Levi Strauss. The reason was probably that Twain and Strauss were among the most famous residents of San Francisco in the mid-19th century, but there is no record of them ever meeting or communicating. The machine simply combined the most common and best-matching data to create a whole new story that sounds real and interesting.

Hidden risks

LaMDA is built by mimicking the connections between neurons in the brain. Experts from OpenAI say they took a different approach with ChatGPT: they did not seek to create an "ideal interlocutor", but a system that learns from its own mistakes in order to recognize them, admit them and take care not to repeat them.

– We have reached a point where the model can admit its errors. It can refuse to answer inappropriate and absurd questions – says Mira Murati, chief technology officer at OpenAI.

ChatGPT users are warned that the machine "may sometimes give incorrect answers" or "generate harmful or biased instructions".

The ultimate goal is for the machine to learn to recognize on its own when it has made a mistake. The main problem remains its flesh-and-blood interlocutors: if they persistently provoke the machine with bizarre and provocative questions, after a while they will surely teach it to give incorrect answers. This creates the risk of "training" an army of bots to behave like real users – specifically, like the ones spreading hate and propaganda.

– You can program millions of bots to pose as humans and convince you to adopt a certain attitude or buy a certain product. I have been warning about this for years, and it is now clear that it is a disaster waiting to happen – warns Professor Jeremy Howard, an artificial-intelligence researcher.
