
ChatGPT – What does it mean?

Authors and contributors: Mark Brett


ChatGPT is an example of an artificial intelligence (AI) programme. The “large language models” (LLMs) behind such tools are continuing to develop at an ever-accelerating rate, and there are several key issues to consider. The UK Government AI Strategy is a good starting point for understanding the context and background. It is also very useful to understand where AI fits into the wider data science and information management disciplines.

Produced by Mark Brett, Socitm Associate, and Trusted Cyber Security and Resilience Advisor. 

Key issues to consider

  • Ethics
  • Bias
  • Privacy
  • Copyright infringement
  • Extensibility
  • Secondary mining of the Metadata
  • Access Control
  • Consistency of output
  • Data Protection (GDPR) – giving the right to query automated decisions.
  • The right to be forgotten.
  • Legal implications – vicarious responsibility for ChatGPT-generated answers and advice.

Initial thoughts

  • AI is the single most exciting and terrifying technology to emerge for some time.
  • Whilst AI has been around for years, there is a paradigm shift underway.
  • The machine learning models bring a whole new meaning to “built on the shoulders of giants”. LLMs will multiply, and the pervasiveness of AI will grow exponentially.
  • Consider AI in terms of Moore’s Law: CPU clock-speed growth has saturated, processors have become more powerful, and RAM and storage have become cheaper. Energy costs are now the limiting factor!
  • As the LLM and AI snowball starts to roll, its momentum will accelerate exponentially.
  • We may not be ready for it.
  • We must think Blue/Red = Purple Teaming, and consider both the opportunities and the threats.

Background

ChatGPT is a chatbot powered by a Generative Pre-trained Transformer model (GPT-3.5), which is a deep-learning language model. The release of ChatGPT took the world by storm in November 2022 due to its impressive ability to write in a human-like way.

The artificial intelligence (AI) powered chatbot ChatGPT is being used for a wide range of applications that require natural language processing and text-based conversation. The large-scale language model developed by OpenAI uses deep-learning techniques to generate human-like responses in text-based conversations. It can be used as a chatbot or virtual assistant, and for language translation, content creation, education, solving complex problems, and even writing code.

However, there have been instances when ChatGPT’s servers have been overloaded with users, locking people out of the bot. On 27 February 2023, ChatGPT reportedly went down for over three hours.

OpenAI said that the outage was due to “database instabilities”, and a fix started rolling out a couple of hours after the servers were taken offline.

This was the second major ChatGPT outage in 90 days: the service also went down on 21 February, which took the chatbot offline for four and a half hours. With an increasing user base, these outages may become a regular occurrence, and with the growing demand for AI writing tools like ChatGPT, people are looking for alternatives to help them be more creative.

We’ll consider some of the issues and key questions. Is this a useful new technology? Something to be trusted? Is it here to stay, or a passing trend?

Cyber risks & cyber opportunities

There are some interesting issues around ChatGPT and the inputting of sensitive data: beware, and understand any data protection, privacy, and copyright concerns. How do we assess the risks (confidentiality, integrity, and availability)? The technology’s use is currently novel, just as it was when WhatsApp and other cloud Software as a Service (SaaS) applications first appeared. When you ask it questions, are those questions then processed and stored, adding to the collective consciousness? How will we know about innate bias in the questions or, more importantly, the intrinsic bias built into the answers, and who controls those? The ethical issues will be a real concern.

As the name indicates, OpenAI is an open platform used by millions of people all over the world. This carries several security risks. For one, it gathers a lot of personal data that users might unwittingly provide. This, in turn, makes ChatGPT very attractive to hackers.

While one could argue that private users interact with ChatGPT at their own risk, as a company, you could become liable.

After all, you have to ensure user data privacy. If your customers’ personal data somehow becomes public through ChatGPT, you are not only in violation of privacy laws; it can also damage the trust customers put in your company.

In addition, depending on what type of information you put into ChatGPT, you run the risk of making sensitive company information public. For instance, if your marketing team is playing around with the chat technology to come up with good copy for a customer e-mail about a new product that hasn’t been released yet, this information might become public before you even launch the product.

Or, ChatGPT might come up with a text that is protected by copyright laws. If your marketing team then uses this text, you could face legal charges.

Summing up, ChatGPT is a very promising and enticing technology, but it’s not quite ready for business use yet. However, there are safer alternatives!

Security Implications

Just weeks after ChatGPT debuted, the Israeli cybersecurity company Check Point demonstrated how the web-based chatbot, when used in tandem with OpenAI’s code-writing system Codex, could create a phishing email capable of carrying a malicious payload.

Use cases like this illustrate that ChatGPT has the potential to significantly alter the cyber threat landscape, representing another step in the dangerous evolution of increasingly sophisticated and effective cyber capabilities. Check Point also recently sounded the alarm over the chatbot’s apparent ability to help cybercriminals write malicious code. The researchers say they witnessed at least three instances where hackers with no technical skills boasted about how they had leveraged ChatGPT’s AI smarts for malicious purposes. One hacker on a dark web forum showcased code written by ChatGPT that allegedly stole files of interest, compressed them, and sent them across the web. Another user posted a Python script which they claimed was the first script they had ever created.

Check Point noted that while the code seemed benign, it could “easily be modified to encrypt someone’s machine completely without any user interaction.”

Unsurprisingly, news of ChatGPT’s ability to write malicious code furrowed brows across the industry. It’s also seen some experts move to debunk concerns that an AI chatbot could turn wannabe hackers into full-fledged cybercriminals.

ChatGPT and AI in Cyber Security

Here are five key areas of cyber security that ChatGPT and AI could impact, for better and
worse.

High-Volume Spear Phishing

Spear phishing emails impersonate trusted senders to trick specific targets into revealing credentials or initiating fraudulent transactions. A large variety of online sources, from company websites to social networking platforms, arm attackers with useful information about people and companies that can help them craft more convincing spear phishing emails.

The targeted nature of these emails normally makes them hard to scale to the level of normal email spam. Part of the reason for this lack of scalability is the research required, but it’s also that increased levels of cyber security awareness make people more likely to spot obvious signs of mass email phishing, such as spelling errors or clunky language. And with many hackers not being native English speakers, these mistakes often appear in the mass phishing emails that they write.

However, advanced chatbots like ChatGPT could change the game and enable high-volume, targeted, and effective spear phishing email campaigns. Research carried out separately by two different security companies in December 2022 found ChatGPT could write a plausible and well-written phishing email impersonating a web hosting company and a CEO.

Asking ChatGPT to write a phishing email now gets flagged as unethical activity, which suggests OpenAI paid attention to the concerns of security researchers. But similarly advanced AI text-based tools will likely emerge, and not all of them will flag requests or queries as unethical. Scaling these difficult-to-detect spear phishing emails might become far more feasible for hackers in the not-too-distant future.

Malware-as-a-Service

ChatGPT’s programming prowess sets a worrying precedent in lowering the barriers to creating malware. It’s trivial, for example, to get ChatGPT to write VBA code that downloads a resource from a specified URL any time an unsuspecting user opens an Excel workbook containing that code. Such a request would make it very easy to weaponise a phishing email with a malicious Excel attachment without requiring in-depth skills or knowledge.

The resource downloaded onto an end user’s computer could be a keylogger or a remote access trojan that provides access to a system or network and sensitive assets. Some security researchers were even able to get the bot to write malicious PowerShell scripts that delivered post-exploitation payloads (ransomware).

While ChatGPT’s coding skills are a concern, it still requires at least some degree of cyber security knowledge to manipulate queries in a way that produces working malicious code. A perhaps more pressing issue is that generating malware from text commands alone opens up more opportunities for malware-as-a-service. Cybercriminals with real hacking skills could easily use ChatGPT to automate the creation of working malware and sell the end product as a scalable service.

Propagating Fake News

With its impressive writing abilities, ChatGPT comes with a lot of abuse potential in the context of spreading fake news. Eloquent yet false stories can be generated with a simple sentence prompt. Media outlets such as Sky have already experimented with letting ChatGPT write articles.

Fake news stories about personal data breaches or security vulnerabilities could be written by malicious insiders or published by hackers that infiltrate journalists’ or users’ accounts at high-profile publications and organisations. While unlikely, the possibility of this Orwellian outcome of not being able to decipher fact from fiction would lead to chaos and a loss of trust. At worst, a complete undermining of consumer confidence in the digital economy could ensue.

Enhanced Vulnerability Detection

Turning to a more positive perspective, ChatGPT (and AI in general) shows great promise in improving vulnerability detection. Try the following experiment: copy a snippet of code from this GitHub page of vulnerable code snippets and ask ChatGPT to examine the code for security vulnerabilities. You’ll notice that the tool quickly flags whatever happens to be wrong with the code and even suggests how to fix the security weaknesses.
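As an illustration of the kind of snippet such an experiment involves, here is a minimal sketch of a classic flaw (SQL injection) and its fix. The function and table names are invented for the example; this is not from the GitHub page the text refers to.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: the username is interpolated straight into the SQL string,
    # so input like "x' OR '1'='1" changes the meaning of the query.
    cur = conn.execute(f"SELECT id, name FROM users WHERE name = '{username}'")
    return cur.fetchall()

def find_user_safe(conn, username):
    # Fixed: a parameterised query treats the input purely as data.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchall()

# Small in-memory database to demonstrate the difference.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
print(find_user_unsafe(conn, payload))  # returns every row: injection succeeded
print(find_user_safe(conn, payload))    # returns no rows: input treated as data
```

Pasting the unsafe version into ChatGPT typically produces a flag for the string interpolation and a suggestion along the lines of the parameterised alternative.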

Turning to the broader field of AI rather than just ChatGPT, the powerful machine learning models that underpin these technologies can also enhance vulnerability detection. As a network and the number of endpoints on it grow, detecting anomalies and weaknesses becomes more challenging. AI-powered tools are far more effective at unearthing vulnerabilities because they can use enormous sets of training data to establish what’s normal, while reducing the time to find what is abnormal on a network.
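The “establish what’s normal” idea can be illustrated in the simplest possible terms with a toy statistical sketch; the traffic figures are invented, and real tools use far more sophisticated machine-learning models than this.

```python
import statistics

def fit_baseline(samples):
    # Learn what "normal" looks like from historical measurements,
    # e.g. requests per minute observed on a healthy network.
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, mean, stdev, threshold=3.0):
    # Flag anything more than `threshold` standard deviations from the mean.
    return abs(value - mean) > threshold * stdev

# Historical per-minute request counts representing normal behaviour.
history = [98, 102, 97, 105, 99, 101, 103, 100, 96, 104]
mean, stdev = fit_baseline(history)

print(is_anomalous(101, mean, stdev))  # typical load -> False
print(is_anomalous(480, mean, stdev))  # sudden spike -> True
```

The principle scales up: the larger and richer the training data, the better the model’s picture of “normal”, and the faster genuine anomalies stand out.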

Automating Security Team Tasks

Cyber security skills gaps continue to place an excessive burden on security teams. The UK government’s 2022 report found 51 per cent of businesses have a basic skills gap in tasks like configuring firewalls, and detecting and removing malware. This skills gap places a heavy burden on existing teams to the point where alert fatigue and burnout are common issues.

Automation has a critical role to play in easing the impact of cyber skills shortages and helping security teams defend their organisations in today’s threat landscape. ChatGPT excels at rapidly writing programs and code that could prove beneficial for automating a range of security tasks. As an example, it takes a few seconds to produce a simple Python program that will scan for open ports on a given hostname.
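A sketch of the kind of simple port-scanning program the text describes follows; the hostname and port list are illustrative, and a production tool would add concurrency and error reporting.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds.
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Scan a few common service ports on the local machine.
    print(scan_ports("127.0.0.1", [22, 80, 443, 8080]))
```

Only scan hosts you own or are authorised to test: the same triviality that makes this useful for defenders applies to attackers.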

You’re aware by now that ChatGPT can be manipulated to write malicious code, but the flip side of this is its usefulness in analysing malicious code to help figure out what it does. From explaining how various Windows registry keys can be used by malware to describing what large chunks of malicious code are attempting to do on a system, speeding up and strengthening the tricky area of malware analysis is invaluable for many organisations.

Getting prepared

While it’s still early days in understanding the full implications of ChatGPT and AI in cyber security, the ideas here offer a snapshot of what’s possible. Getting prepared for both the good and the bad of AI requires a “smart cyber security strategy that accounts for these technologies’ increasing influence.”

Alternatives

ChatSonic

India’s ChatSonic is one alternative to ChatGPT. It was introduced in 2021, earlier than OpenAI’s ChatGPT. Unlike ChatGPT, ChatSonic incorporates text-to-speech and Google Search into its operation, enabling it to give up-to-date responses to your inquiries, and it uses natural language processing to provide accurate summaries of current events, trends, and conversations.

Jasper AI

The Jasper AI programme, originally known as Jarvis, is one of the best-known AI writing tools, and a recent addition to the large-language-model-based AI chatbots. It is an AI writing assistant powered by OpenAI’s GPT-3.5 model, with plans priced at $588 per year. Jasper AI enables individuals and teams to scale their content initiatives using AI.

Jasper claims that because it has read the majority of the public internet, it is fluent in over 25 languages and is knowledgeable about almost every niche. It makes the claim that it can assist users with translating the text as well as writing “blog articles, social media postings, marketing emails, and more.” Jasper also promises to deliver content that is “word-by-word original” and “plagiarism-free”.

Jasper has acquired authoring services such as Headlime and Shortly AI. These programmes aim to be fully integrated with Jasper; however, they are currently standalone solutions. In Jasper AI, content is produced for you when you select a topic and fill out a form with the necessary information.

Bard AI

Bard AI, Google’s newest AI-powered chatbot and a rival to ChatGPT, is being developed on the company’s LaMDA AI platform. It is an experimental conversational AI service that is expected to have a significant impact on the AI industry.

LaMDA removes the limitation of having training data confined to a specific year and revolutionises Bard’s natural language processing capabilities, enabling it to interpret and respond to human input with more precision. Google claims that Bard can generate texts and answer questions, and the new conversational AI chatbot is also known to summarise texts. The company began testing the bot on 6 February 2023.

Microsoft Bing AI

Recently, Microsoft added artificial intelligence to its search engine, now referred to as Bing AI. Bing AI is built on an OpenAI large language model more powerful than ChatGPT and GPT-3.5, created with the express purpose of elevating search to a new level. It has been optimised for maximum speed, accuracy, and efficiency, and takes advantage of the important developments and lessons learned from its forerunners to give customers the best results.

Microsoft unveiled new AI-enhanced features for its Edge browser called “Chat” and “Compose”, which build on the existing Bing search experience. Microsoft has also released Bing and Edge mobile apps for iOS and Android users.

Bing gives users the ability to ask questions of up to 1,000 words and get AI-powered responses. Its capacity to process complex inquiries makes looking up information faster. If ChatGPT-powered Bing cannot provide a direct response to your query, it will give you a selection of related results. Bing AI currently has no upfront cost, and 1,000 transactions per month are free.

DialoGPT

Microsoft’s DialoGPT is a large-scale pre-trained dialogue response generation model built specifically for multi-turn conversations. It was trained on a massive dataset of 147 million multi-turn discussions extracted from Reddit threads posted between 2005 and 2017.

Similar to the outputs of GPT-2, the sentences that DialoGPT generates are astonishingly diverse and include information that relates to the initial prompt. According to Microsoft, DialoGPT is more conversational, animated, frequently lighthearted, and generally extremely dynamic — qualities that might be appropriate for the use you’re considering. DialoGPT, however, does not offer voice search, voice response, or personalities. Since this is a brand-new launch, there is no specific information about the pricing structure available.

NeevaAI

NeevaAI combines the efficiency and most recent data of the Neeva search engine with the strength of ChatGPT and other large language models.

The search engine was designed by two former technology executives: Vivek Raghunathan, formerly vice president of monetisation at YouTube, and Sridhar Ramaswamy, former senior vice president of ads at Google.

The system developed by NeevaAI is capable of searching and sorting through hundreds of millions of web pages to produce a single, comprehensive response that includes pertinent sources. Neeva can be compared to a search engine that has been given AI enhancements, but it is not yet a fully functional chatbot that is powered by AI. Neeva AI also provides references in its outcomes.

CoPilot

If you’ve been writing code with ChatGPT and want to look at tools that provide the same or even better results, you can check out GitHub Copilot. Copilot uses OpenAI Codex, a descendant of the GPT-3 model, for auto-completion.

This application supports various well-known coding environments, including VS Code, Neovim, and JetBrains IDEs, and it supports cloud workflows via GitHub Codespaces. It can produce syntax in up to 12 languages, including JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, and Bash. In addition, it supports multi-language scripting, and the model is trained on billions of lines of open-source code from public sources such as GitHub repositories.

Character AI

Character AI is based on neural language models and has been trained from the ground up with conversations in mind. Instead of talking with a single AI chatbot, Character AI lets users select from a variety of personas. Elon Musk, Tony Stark, Socrates, Joe Biden, and Kanye West are just a few of the many characters and people found on the home page. The best part is that the AI adjusts its conversational style to the person you selected. Creating a character is also quite fun, as you can design it however you like as you go along.

The AI has a built-in image generator for avatar creation. Once done, you can start chatting right away and even share your character with others. Character AI is free to use, but you do need to make an account, since the chat locks after a few messages.

YouChat

Another conversational AI model called YouChat was introduced by the search engine You.com. It functions similarly to ChatGPT and essentially performs what other generic chatbots do.

YouChat uses artificial intelligence and natural language processing to mimic human speech. It can create emails, write code, translate, summarise, and respond to general inquiries. Because it is still in development, it offers average responses.

While you can just talk to it, YouChat can also write code, give advice, break down complicated concepts, summarise books, and much more. It claims to provide the latest information; however, it sometimes makes errors there as well. YouChat is completely free to use, so you need only visit the website and start chatting.

Elsa Speak

Elsa Speak is a language-learning programme powered by AI. It analyses the user’s voice using AI and creates a set of tasks that are simple for the user to understand. Elsa Speak is thus another of the best ChatGPT alternatives to consider.

As an English-language speech assistant, Elsa can help you translate between many languages and English. The AI system used by ELSA was developed using voice recordings of English speakers with a variety of accents. This gives ELSA an advantage over most other voice recognition algorithms, allowing it to recognise the vocal patterns of people who do not speak English at a native level.

Useful resources

Additional articles

What is ChatGPT? How to use the AI chatbot everyone’s talking about (Digitaltrends)

Google’s new Bard AI may be powerful enough to make ChatGPT worry — and it’s already here (Digitaltrends)

The best ChatGPT alternatives (according to ChatGPT) (Digitaltrends)

Microsoft is bringing ChatGPT to your browser, and you can test it out right now (Digitaltrends)

Microsoft Bets Big on the Creator of ChatGPT in Race to Dominate A.I. (New York Times)

Microsoft might put ChatGPT into Outlook, Word, and PowerPoint (Digitaltrends)

Other outcomes and novel applications

Voice actors seeing an increasing threat from AI (Digitaltrends)

ChatGPT: Automatic expensive BS at scale (Medium)

Scammers Mimic ChatGPT to Steal Business Credentials (DarkReading)