Dangerous interactions using ChatGPT: Meaningless words

ChatGPT, OpenAI’s chatbot, has attracted enormous attention online for how confidently it converses. But that fluency hides the fact that the bot knows nothing about the world and that the texts it writes carry no meaning.

  1. Looking back at the last few years, it’s clear that venture capital investors have spent a lot of time standing in line, and their legs must be tired by now. They have queued up to invest in the new crypto venture, the next star of autonomous driving, the new Metaverse project, augmented reality, the Uber of animals, smart cities, cleantech, fintech, genetically engineered food, or whatever else might be the “next big thing.” Right now, generative artificial intelligence is Silicon Valley’s new buzzword and craze. The big venture capital firm Sequoia compared the models to smartphones, predicting that they, too, would spawn a wave of new apps: “The race is on,” they declared, unabashedly.
    The most popular company is OpenAI, whose text generator, ChatGPT, has already been used by more than a million people. Almost everyone seems astonished by it. Conversations with the chat were shared as screenshots on social networks, and Silicon Valley executives voiced their excitement. Paul Graham, co-founder of Y Combinator, the best-known accelerator, tweeted that “something big must be going on.” Elon Musk wrote on Twitter: “ChatGPT is scary good. We are not far from dangerously smart AI.” And Aaron Levie, co-founder of the cloud company Box, said, “ChatGPT is one of those rare moments in technology where you can see a hint of how everything is going to change in the future.”
    What is generative AI? What can this chatbot do that others can’t, and what can’t it do? What impressions does it create, and what power does it wield that no one pays attention to? The chatbot that broke the internet has confused people; it sounds like it makes sense, and some believe it will change the future. Sure, maybe.
From right: Sundar Pichai, Sam Altman & Elon Musk (Credit: Bloomberg Reuters)
  1. The term “generative artificial intelligence” (Generative AI) refers to algorithms that can create new text, images, or sounds with little human input. Such applications have been around for a while, but the past year marked a turning point: for the first time, high-quality tools were offered to the general public for free (or at an extremely low cost). ChatGPT’s creator, OpenAI, also made the DALL-E 2 image generator; other examples include Google’s Imagen image generator and LaMDA language model, Stable Diffusion, and Midjourney.
    The OpenAI chatbot is designed to interact in a conversational way. OpenAI’s website says: “The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.” Bots are not new, but ChatGPT stands out because its answers give a strong impression of being relevant, reasoned, well-thought-out, and well-phrased. The model even “remembers” the flow of the conversation, making the exchange feel more natural (by the way, only in English). When I asked the bot what it was, it said, “I don’t have a body, and I only exist as a computer programme,” adding, “I’m not a person, and I can’t think or act on my own.” Some users declared “Google is dead” because of how easy searching becomes, and the chat hints at how the job market is about to change. Others noted that the bot, or at least its skills once we can use them, could completely transform a number of professions: law, customer service, programming, research in general and academia in particular, financial services, and everything to do with content writing, from marketing and literature to journalism and even teaching.
    To understand what these models can do in practice, start with the basics: how do chatbots like this work? As the name suggests, the chat runs on a natural language processing (NLP) model from the GPT-3 family. In short, the model is a statistical device for predicting which words come next in a sequence. After being trained on a huge amount of data, it can make an educated guess about which word is likely to follow which word (and which sentence is likely to follow which sentence). GPT-3 is a large language model with roughly ten times more parameters (the values the network tunes during training) than any model before it. It was trained on 45 terabytes of data drawn from sources such as Wikipedia, two large collections of books, and raw scans of web pages. The model then goes through a process called “fine-tuning,” in which it is taught a set of instructions and how it should respond to them. The company’s website offers some information about how the chatbot was trained: written answers from real people are used as training data, and then a “reinforcement learning” (reward-and-punishment) method is used to nudge the model toward better answers. That is about all we know about the model; when I asked ChatGPT for more information, I was told (multiple times) that it cannot answer because this information is “property of OpenAI.”
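    The statistical next-word idea described above can be sketched with a toy bigram model. This is a deliberately tiny stand-in for GPT-3’s billions of parameters, not OpenAI’s actual method; the corpus and function names are illustrative:

    ```python
    from collections import Counter, defaultdict

    # "Train" by counting which word follows which in a tiny corpus.
    corpus = "the cat sat on the mat the cat ate the fish".split()
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict(word):
        """Return the statistically most likely next word."""
        return following[word].most_common(1)[0][0]

    print(predict("the"))  # "cat" — it follows "the" most often in this corpus
    ```

    A real language model conditions on far more than the previous word and works over sub-word tokens, but the principle is the same: frequencies in the training data, not meaning, decide what comes next.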
    Even without all the details, it is clear that the databases on which the model was built contain more information than a single person could be exposed to in a lifetime. Having “learned” so much, the model can use probabilities to predict which words come next in a text, and not only the statistically likeliest word: it also accounts for factors like writing style and tone. Note the vocabulary here: “statistics” and “probability,” not “language” or “meaning.” Although the model can assemble convincing combinations, what it produces does not attempt to say anything about the world, not even in the most literal sense. The combinations the model assembles, however well calculated, mean nothing until someone gives them meaning, that is, until someone on the other side reads them. This is fundamentally different from a person, who says or writes something in order to convey a message. This is a key point for understanding what ChatGPT and similar tools can do. These models imitate natural language remarkably well and are very good at guessing which words or phrases are likely to come next in a sentence or conversation. But that is all they do. They do not know what they are saying, and despite the label “artificial intelligence,” they have no mind and no intelligence.
    Emily Bender, Timnit Gebru, and fellow researchers describe these models and the process behind them as “stochastic parrots”: a “parrot” because the system repeats what it was fed without understanding what it means, and “stochastic” because the process is random. “The text generated is not grounded in communicative intent, any model of the world, or any model of the reader’s state of mind,” they wrote in a 2021 article, explaining that what we really have is a system “for haphazardly stitching together sequences of linguistic forms … according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot.”
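    The “stochastic” half of the metaphor — stitching forms together according to probabilistic information, with no reference to meaning — can be illustrated by sampling from word-transition counts instead of always taking the likeliest word. Again, this is a toy sketch under that description, not how ChatGPT is actually implemented:

    ```python
    import random
    from collections import Counter, defaultdict

    # Count next-word frequencies from a tiny corpus.
    corpus = "the cat sat on the mat the cat ate the fish".split()
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def parrot(word, length=8):
        """Emit a fluent-looking sequence by weighted random sampling.
        Every pair of adjacent words occurred in the corpus, so the output
        looks plausible — but it asserts nothing about the world."""
        out = [word]
        for _ in range(length):
            counts = following[out[-1]]
            if not counts:  # dead end: no word ever followed this one
                break
            words, weights = zip(*counts.items())
            out.append(random.choices(words, weights=weights)[0])
        return " ".join(out)

    print(parrot("the"))
    ```

    Each run produces a different, grammatical-looking string; none of them is “about” anything, which is exactly the researchers’ point.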
  2. Because the models’ outputs fit together so well, it feels as though the machine “understands” what the person on the other side is asking, and as though we can understand its answers in the same way. This makes sense: language is what sets humans apart from every other species, so when we encounter a machine that seems close to that level, we naturally imagine, compare, and project human intelligence as we know it onto it. The tendency may come naturally, but the models’ creators also want us to believe that an algorithm that “manipulates a linguistic form” has also “acquired a linguistic system.” A few days after ChatGPT came out, for example, OpenAI CEO Sam Altman tweeted, “I’m a stochastic parrot, and so are you.” Google CEO Sundar Pichai did the same, if more elegantly, when launching the LaMDA chat: “Let’s say you were interested in Pluto, which is one of my favourite planets. LaMDA already knows a lot about Pluto and about a million other things.” But we are not parrots, and the bots don’t “get it.” Researchers have called such false comparisons “a metaphor that gives the human brain less complexity than it deserves and gives the computer more intelligence than it should.”
    Saying that the models don’t understand human language, and aren’t intelligent, isn’t mere “skepticism.” ChatGPT does not understand meaning or context, what it is asked, or what it is told to answer. For now, that is why most of these generators’ outputs serve as “tools that assist the human creative process” (as opposed to tools that replace it). But what about the future? Can we expect the tools to get so good that they outdo human creativity and can deliver a finished product? Will they serve as a search engine, a journalist’s tool, or a source of travel ideas? It’s hard to say for sure. What we can do, given all the attention the technology is receiving, is mark out its limits.
    Let’s start with the idea that the chat could replace writing articles, news stories, or anything else that needs to rest on facts or well-reasoned argument, such as legal documents, licencing agreements, contracts, or a student’s homework. ChatGPT and sites like it cannot, and should not, be trusted with facts. Although the bot was trained on a vast amount of data, it has no access to facts in the way humans do: it doesn’t “understand” what a fact is, cannot judge whether a claim is true, and cannot cite or refer to sources, good or bad. The only way to vouch for what it says is for the person asking the question to already know the facts.
    All of this is happening at a time when bad actors routinely use bots to spread fake news or fake product reviews, a problem so common that most of us already put in extra work to figure out what is true and who the real “writer” or “recommender” is. OpenAI knows how risky it is to use the bot as a source of facts, and raises it as the first point in the model’s “Limitations”: “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as … supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.” And what about search? Quick answers to simple questions make it easy to get around in the world; virtual assistants routinely field questions like “What’s the weather like?” or “What time is it in New York?” As long as the questions are easy, there is nothing wrong with letting technology make life simpler. But getting direct answers to hard questions from a bot that mimics language patterns is a dangerous way to seek information. Not only can the answers be wrong or unhelpful; because we grow attached to our tools, they can also hide how complicated the world is. No more weighing different answers in the search results, learning to tell truth from lie and fact from fiction, or researching topics where the searcher isn’t sure what the question is or where no answer is historically agreed upon. All we get is an algorithmic ruling, with no reasoning attached.
    And writing for the web? Over the past week it has become popular to share ChatGPT outputs on social networks as creative content: paragraph after paragraph meant to show off algorithmic creativity in poetry, journalism, academic research, or prose. Sometimes the content is meant to tease the readers, as with a post explaining what ChatGPT is that was itself written by ChatGPT. It’s hard to say how many casual readers fell for the trick and took the text for human writing, but even when the generator did what it was supposed to do, the result felt shallow, at times reading more like “copy and paste” than original, current work. That doesn’t rule out uses in reports that need little depth, like the result of a football game or the weather.
  3. These problems will not go away if the model is exposed to bigger databases, gains more parameters, and becomes “stronger.” Size and computing power cannot solve everything, and computing power has a price of its own. Training GPT-3 cost an estimated $12 million, and a 2019 study from the University of Massachusetts found that training one such model emits as much carbon dioxide as five cars over their average lifetime. The huge financial, energy, and environmental costs mean that only big companies and well-funded startups can build these tools. And where money goes in, returns are expected to come out. These apps don’t earn their money directly from the general public; the public is merely the end of a chain controlled by other companies, which may deploy the tools wherever they can. That is why the credibility we grant these chats matters so much, and why it is especially important to scrutinise that credibility and set clear rules for how these tools are used. Without an ethical framework for their use, we will end up with many people wielding a tool that lowers the cost of producing content while making it harder to tell what is true, leaving everyone to work harder and learn new skills just to sort truth from fiction. We wish you luck.

Who controls the bot?

OpenAI seeks “safe” artificial intelligence
In 2015, a group of Silicon Valley investors, including Elon Musk, Sam Altman, and Peter Thiel, founded OpenAI as a non-profit. The company’s founding goal was to “discover and realise the path to safe artificial general intelligence”, that is, to build machine intelligence with broadly human-level capability, and then share those products with the world.
The idea was that it would be better for potentially destructive technology to be built by the “right” people (that is, by them and the team they choose) than by companies like Google and Facebook that are interested only in profit.
But in 2019, the organisation switched from non-profit to for-profit status to raise money, which it did in the form of a billion dollars from Microsoft; Microsoft also gained access to the organisation’s patents and other intellectual property.
Under the new model, OpenAI has promised investors returns of up to 100 times their investment; any profit above that goes back to the public.