Exploiting ChatGPT for Voice-Based Scams

OpenAI's latest AI model, GPT-4o, can be exploited to carry out voice-based scams with financial motives, achieving success rates between 20% and 60% and enabling large-scale fraud without human intervention.

GPT-4o boasts notable advancements, including the integration of text, voice, and image capabilities. To protect users from harmful content, OpenAI has implemented various safeguards, including measures to detect and prevent unauthorized voice replication.

Despite these efforts, voice scams have grown more prevalent, already causing millions of dollars in losses, and continuing advances in deepfake technology and text-to-speech tools are making them easier to mount.

Research from the University of Illinois Urbana-Champaign indicates that these protective measures are inadequate and that current voice-enabled models can be misused, allowing cybercriminals to execute large-scale fraud without the need for human involvement.

The scams studied include bank transfers, gift card theft, cryptocurrency transfers, and the theft of login credentials for social media accounts or Gmail. While GPT-4o occasionally refuses to handle sensitive information such as credentials, the researchers were able to circumvent these protections with simple prompt techniques, testing the agents by playing the role of a credulous victim in realistic scenarios.

The success rates of these scams range from 20% to 60%, and operational costs are very low: approximately $0.75 per successful attempt on average. For more complex bank transfer scams, the cost rises to around $2.51, still far below the potential profits.

OpenAI has since introduced a newer LLM, o1, designed with stronger resistance to this kind of abuse. Representatives from OpenAI say they are continuously working to make ChatGPT better at thwarting scam attempts without compromising its utility and creativity. Although GPT-4o integrates several abuse-prevention measures, other voice chatbots with weaker safeguards remain a significant threat in the current digital era.

Source: BleepingComputer