HomeTechnologyBritish officials say AI chatbots could pose cyber risks

British officials are warning organizations about integrating artificial intelligence-powered chatbots into their businesses, saying research has shown they can be tricked into doing harmful things.

In a pair of blog posts published on Wednesday, Britain’s National Cyber ​​Security Center (NCSC) said experts do not yet understand the potential security problems associated with algorithms that can generate human-voice interactions – which are called the Language Model, or LLM.

AI-powered tools are seeing early use in the form of chatbots, which some envision displacing not only Internet searches but also customer service work and sales calls.

The NCSC said there could be risks, especially if such models are plugged into other elements of the organization’s business processes. Academics and researchers have repeatedly found ways to destroy chatbots by giving them rogue commands or fooling them into bypassing their own built-in railings.

For example, if a hacker structures his query correctly, an AI-powered chatbot deployed by a bank can be tricked into carrying out unauthorized transactions.

The NCSC said in a blog post, referring to the experimental software release, “Organizations building services that use LLM need to be careful, the same way they would if they were using a product or code library that Was in beta.”

“They might not allow that product to be involved in transacting on behalf of a customer, and hopefully they won’t rely on it completely. Similar caution should apply to LLMs.”

Executives around the world are grappling with the rise of LLMs like OpenAI’s ChatGPT, which businesses are incorporating into a wide range of services, including sales and customer service. Security implications of AI still coming into focus, US and Canadian officials say they’ve looked into it hackers embrace technology,

- Advertisment -