A growing body of research has shown that artificial intelligence chatbots can be tricked into performing harmful tasks, according to British officials who are advising businesses against using them in their operations.
The National Cyber Security Centre (NCSC) of Britain wrote in a pair of blog posts on Wednesday that researchers were still grappling with the security issues that could arise from algorithms known as large language models, or LLMs, that can produce human-sounding interactions.
The chatbots powered by AI are already being used, and some people see them replacing not only internet searches but also customer service jobs and sales calls.
According to the NCSC, there could be risks involved, especially if such models were integrated into other organizational business processes. Researchers and academics have repeatedly discovered ways to trick chatbots into ignoring their own built-in safeguards or executing rogue commands.
For instance, if a hacker carefully crafted their query, an AI-powered chatbot used by a bank might be duped into carrying out an unauthorized transaction.
“Organisations building services that use LLMs need to be careful, in the same way they would be if they were using a product or code library that was in beta,” the NCSC said in one of its blog posts, referring to experimental software releases.
“They might not let that product be involved in making transactions on the customer’s behalf, and hopefully wouldn’t fully trust it. Similar caution should apply to LLMs.”
Authorities from all over the world are attempting to understand the rise of LLMs like OpenAI’s ChatGPT, which companies are integrating into a variety of services like sales and customer support. Authorities in the US and Canada claim they have observed hackers embracing the technology, which is another indication of the security implications of AI that are still being explored.
According to a recent Reuters/Ipsos poll, many corporate employees use ChatGPT for basic tasks like composing emails, summarizing documents, and conducting preliminary research.
A quarter of those surveyed did not know whether their company allowed the use of the technology, while 10% of respondents said their bosses explicitly forbade the use of external AI tools.
The rush to incorporate AI into business practices, according to Oseloka Obiora, chief technology officer at cybersecurity company RiverSafe, would have “disastrous consequences” if business leaders didn’t put in place the necessary controls.
“Instead of jumping into bed with the latest AI trends, senior executives should think again,” he said. “Assess the benefits and risks as well as implementing the necessary cyber protection to ensure the organisation is safe from harm.”
SOURCE: GEO NEWS