Technology never ceases to evolve. One of the latest advances is ChatGPT, an artificial intelligence (AI) programme developed by a company called OpenAI. Since its introduction, its user base is estimated to have surpassed 100 million last month, a feat that took TikTok nine months and Instagram two and a half years to achieve. Despite its many advantages, an expert says care needs to be taken.
SOCIAL engineering, scamming and impersonation have been identified as some of the dangers lurking behind the ChatGPT technology.
ChatGPT is a natural language processing tool driven by artificial intelligence (AI) technology that allows the user to have human-like conversations and much more with a chatbot. The language model can answer questions and assist with tasks such as composing emails, essays and code. Usage is currently open to the public free of charge while ChatGPT is in its research and feedback-collection phase. Launched on November 30, 2022, ChatGPT was created by OpenAI, an AI research company.
In an email report at the weekend, Anna Collard, SVP Content Strategy & Evangelist at KnowBe4 Africa, said it is possible to use a publicly available artificial intelligence chatbot to generate a complete infection chain, possibly beginning with a spear-phishing email written in entirely convincing, human-like language and ending in a complete takeover of a company’s computer systems.
According to Collard, researchers at Check Point recently created such a plausible phishing email as a test, using only ChatGPT, a chatbot that uses deep learning techniques to generate text and conversations that can convince practically anyone they were written by a real person.
In reality, she said there are many potential cybersecurity dangers wrapped up in this impressive technology developed by OpenAI and currently available online for free.
One of the dangers is social engineering.
According to Anna, ChatGPT’s powerful language model can be used to generate realistic and convincing phishing messages, making it easier for attackers to trick victims into providing sensitive information or downloading malware.
Another one is scamming, in which the generation of text through ChatGPT’s language models allows attackers to create fake ads, listings and many other forms of scamming material.
On impersonation, ChatGPT can be used to create a convincing digital copy of an individual’s writing style, allowing attackers to impersonate their target in a text-based setting, such as in an email or text message.
For automation of attacks, ChatGPT can also be used to automate the creation of malicious messages and phishing emails, making it possible for attackers to launch large-scale attacks more efficiently. On spamming, she explained that the language model could be fine-tuned to produce large amounts of low-quality content, which can be used in a variety of contexts, including as spam comments on social media or in spam email campaigns.
“All five points above are legit threats to companies and all internet users that will only become more prevalent as OpenAI continues to train its model. If the list managed to convince you, the technology succeeded in its purpose, although in this instance not with malicious intent.”
“All the text from points one to five was actually written by ChatGPT with minimal tweaks for clarity. The tool is so powerful it can convincingly identify and word its own inherent dangers to cybersecurity,” the report noted.
However, Anna wrote, there are mitigating steps individuals and companies can take, including new-school security awareness training.
“Cybercrime is moving at light speed. A few years ago, cybercriminals used to specialise in identity theft, but now they take over your organisation’s network, hack into your bank accounts, and steal tens or hundreds of thousands of rands.
“An intelligent platform like ChatGPT may have been created with the best intentions, but it only adds to the burden on internet users to always stay vigilant, trust their instincts and always know the risks involved in clicking on any link or opening an attachment,” she said.
About OpenAI’s ChatGPT
According to Entrepreneur, an online platform, Elon Musk, Sam Altman, Greg Brockman, Ilya Sutskever and Wojciech Zaremba founded OpenAI, an artificial intelligence research organization, in 2015. OpenAI has other programs, but the GPT series behind ChatGPT began in 2018; ChatGPT itself was launched in November 2022.
ChatGPT is based on GPT-3, the third model of the natural language processing project. The technology is a pre-trained, large-scale language model that uses GPT-3 architecture to sift through an immense pool of internet data and sources to reference as its knowledge base.
This AI is a well of knowledge, but its ability to communicate is what sets it apart from other technology.
It has been fine-tuned for several language generation tasks, including language translation, summarisation, text completion, question-answering and even human diction.
ChatGPT is a transformer-based neural network that provides answers and data with human writing patterns. The AI has been programmed with endless amounts of text data to understand context, relevancy and how to generate human-like responses to questions.
Other ChatGPT facts
- ChatGPT is large-scale. It has over 175 billion parameters, making it one of the largest language models ever.
- ChatGPT is pre-trained. The program has a “set it and forget it” quality, meaning the legwork to make it function has already happened.
- ChatGPT is capable of multitasking. The program has more than one language function, so it can simultaneously juggle translation, summarization and answering questions.
- ChatGPT responds in real time. Like a chatbot you’d find while online shopping, ChatGPT responds very quickly after you ask it a question or complete a task.
When it comes to high-level AI, there are several terms used to explain how the technology works that need explaining themselves.
Key terms of ChatGPT include:
AI is a sector of computer science that focuses on building systems that can perform tasks as humans do. Typical forms of AI include speech recognition, language translation and visual perception.
Natural Language Processing (NLP) is a subsection of AI dedicated to the interaction between humans and computers using language. Through algorithms and models, NLP can analyze, comprehend and use language with human diction.
A neural network is a machine learning algorithm that functions like a human brain. Just as the brain has pathways where information is stored and functions are carried out, AI uses neural networks to mimic that process to problem-solve, learn patterns and collect data.
A transformer is a structure within the neural network meant for NLP tasks that use mechanisms to analyze input and generate output.
A generative pre-trained transformer (GPT) is a transformer-based language model developed by OpenAI, which gave it its name. The original GPT was the first version of the language processor and generator unique to OpenAI, able to generate text in a human-like way.
GPT-3 stands for Generative Pre-trained Transformer 3, the third generation of the model, based on the Transformer network architecture. It is the most dynamic version of GPT to date, as it has self-attention layers that allow the program to multitask, adjust in real time and generate a more authentic output.
Pre-training is just what it sounds like – the work OpenAI had to do to train the neural network to work how it wanted before the tool was ready for public consumption.
Fine-tuning comes after pre-training. The pre-trained model is trained further on a smaller, more specific task with more particular data. This is why ChatGPT can work so thoroughly.
An application programming interface (API) is how the program remains uniform. It provides a routine and guide for how each application is built, allowing new additions to the system to be integrated successfully.
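The self-attention mechanism at the heart of the transformer described in these terms can be sketched in a few lines of Python. This is a toy illustration using random matrices; a real model learns its query, key and value projections during training:

```python
import numpy as np

def softmax(x, axis=-1):
    # Turn raw scores into probabilities that sum to 1.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Each output row is a weighted average of the value rows,
    # with weights derived from query-key similarity.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))  # 3 tokens, 4-dimensional queries
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one context-aware vector per token
```

Each token's output vector mixes in information from every other token, which is how the network analyses input in context.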
How it Works
ChatGPT uses a vast neural network to produce the human-like language through which it communicates. But how does that process happen? This is a step-by-step breakdown of the process:
- Input processing: The human user types commands or questions into ChatGPT’s text bar.
- Tokenization: The input text is tokenized, meaning the program divides it into individual words or subwords to be analyzed.
- Input embedding: The tokenized text is put into the neural network’s transformer portion.
- Encoder-decoder attention: The transformer encodes the text input and generates a probability distribution over all possible outputs. The output is then drawn from that distribution.
- Text generation and output: ChatGPT generates its output answer, and the human user receives a text response.
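The steps above can be sketched as a toy Python pipeline. The vocabulary, tokenizer and probability table here are invented for illustration; a real model learns subword tokens and next-token probabilities from enormous amounts of training data:

```python
# Toy sketch of the input-to-output pipeline (illustrative only).
vocab = {"the": 0, "rainbow": 1, "is": 2, "beautiful": 3, "colourful": 4}

def tokenize(text):
    # Tokenization: split raw text into token IDs (real systems use subwords).
    return [vocab[word] for word in text.lower().split()]

# Pretend probability distribution over next tokens for a given prompt;
# in a real transformer this comes from the attention layers and a softmax.
next_token_probs = {
    (0, 1, 2): {"beautiful": 0.6, "colourful": 0.4},  # "the rainbow is"
}

def generate(prompt):
    ids = tokenize(prompt)               # input processing + tokenization
    dist = next_token_probs[tuple(ids)]  # probability distribution
    best = max(dist, key=dist.get)       # pick the most likely token
    return prompt + " " + best           # text generation and output

print(generate("the rainbow is"))  # "the rainbow is beautiful"
```

Real systems also sample from the distribution rather than always picking the single most likely token, which is why answers vary between runs.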
ChatGPT has extensive capabilities that will likely change the landscape of many industries.
The artificial intelligence program can complete tasks such as text generation, text completion, question-answering, summarization, text translation, conversational AI, sentiment analysis, named entity recognition and part-of-speech tagging.
ChatGPT is nothing without its text generation, as that is how it communicates with its human users. The program uses its pre-trained database to intake inputs and prompts and generates the appropriate response in a natural, human-like text structure.
If you’ve ever wished to have a friend or a sibling that could finish your sentences, ChatGPT might just be the way to go.
ChatGPT can finish your inputted sentence based on content and meaning if you supply the beginning. It might not always be the ending you wanted, but the capability is there.
For example, if you typed a command asking it to finish the sentence “The rainbow is…”, you might be thinking, “The rainbow is beautiful.” But ChatGPT might respond, “Red, orange, yellow, green, blue, indigo, violet.” This is because it pulls from its pre-trained knowledge to find the answer. It might not be able to read your mind, but it can read its data.
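A minimal way to see how a completion “reads the data” rather than the user’s mind is a bigram model: not ChatGPT’s actual method, but the same principle of predicting the next word from patterns seen in training text. The tiny corpus below is invented for the example:

```python
from collections import Counter, defaultdict

# Tiny made-up training corpus.
corpus = ("the rainbow is red orange yellow green blue indigo violet . "
          "the sky is blue .").split()

# Count which word follows which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def complete(word):
    # Complete with the word most often seen after `word` in the corpus.
    return following[word].most_common(1)[0][0]

print(complete("rainbow"))  # "is" — the only word that follows "rainbow" here
```

Whatever the corpus says most often comes next is what the model predicts, regardless of what the user had in mind.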
ChatGPT can answer questions that fall within its pre-trained knowledge, including world knowledge and general facts.
The program can also answer questions in the format that you like. So based on your preference, you can command ChatGPT to answer in bullet points, a list or short answers.
If you input a long text into ChatGPT and command it to summarize the information, it will do so. You should not expect ChatGPT to summarize full-length novels, but a few pages of text is possible as it can handle only a few thousand tokens (roughly 4,000) at a time.
Just like Google Translate, ChatGPT can translate from one language to another, including English, Spanish, French, German, Italian, Portuguese, Dutch, Russian, Chinese, Japanese, Korean and Arabic.
The program uses its neural networks to form syntax and structure, just as it does when outputting English. And much like Google Translate, it is not a perfect science: while the AI is incredibly advanced, it may miss some grammar, semantics and other details of foreign languages.
One of ChatGPT’s biggest highlights is that it can respond in human-like, conversational language.
This is a helpful way to receive and digest the output. It can also be useful for companies with e-commerce sites that want to integrate conversational interfaces for chatbots, virtual assistants and other applications.