NEW YORK — Cybersecurity experts are raising alarms about the potential risks associated with artificial intelligence (AI) chatbots and virtual romantic partners, warning that these technologies could be exploited by cybercriminals for financial fraud and data theft.
Jamie Akhtar, Co-Founder and CEO of CyberSmart, told reporters that while AI technology for creating virtual partners is improving, it also presents new opportunities for malicious actors.
“Deepfake technology has come on leaps and bounds in the past few years,” Akhtar said. “The problem is that this technology can be used for malicious ends.”
Experts highlight two primary concerns. First, cybercriminals could use emotionally manipulative AI to extort money or trick users into downloading malicious software. Akhtar cited a recent incident in which a finance worker at a multinational firm was deceived into paying $25 million to criminals who used deepfake technology to impersonate a company executive.
Additionally, even legitimate AI chatbots may pose privacy risks. Chris Hauk, Consumer Privacy Advocate at Pixel Privacy, warned that these applications often collect extensive user data and may share information with third parties.
“Many of these apps do not make it clear as to what data is shared with third parties, nor are they clear about the AI they use,” Hauk explained. He added that as users become more comfortable with AI chatbots, they might reveal more personal information, increasing their vulnerability to data breaches or identity theft.
The experts advise users to exercise caution when interacting with AI chatbots. They recommend using only official, well-known, and well-reviewed chatbot applications and avoiding downloads from third-party app stores or suspicious websites.
Users should limit the personal information they share, even with popular AI platforms like ChatGPT or Google Gemini, and be aware that chatbots can leak shared information.
Experts also urge treating AI chatbots with the same caution as strangers online, and never sending money or sharing financial information with one.
As AI technology becomes more sophisticated and accessible, cybersecurity experts anticipate an increase in AI-based attacks targeting individuals and businesses, and stress the need for greater awareness and caution among users of AI chatbot technologies.
The growing popularity of AI chatbots and virtual partners underscores the importance of developing robust security measures and clear privacy policies for these technologies. As the field evolves, ongoing research and regulation will be crucial in addressing these emerging cybersecurity challenges.