Impersonation is at the heart of insurance fraud: fake physicians, for example, submitting fake medical records to support fake claims. Someone, or something, needs to supply the fake content, and generative artificial intelligence (AI) is readily available to fill the role. AI is fast becoming a criminal’s best friend.
Fraudsters once needed extensive technical skills to set up and execute their schemes. Today, the average criminal needs only one of the many easy-to-use, widely accessible AI tools that offer sophisticated capabilities in writing, coding, and image generation and manipulation. Taken together, these tools create a perfect storm of criminal innovation and criminal opportunity.
The intelligence is artificial, but the crimes are real
Because they are built on large language models trained on vast amounts of human writing, tools like ChatGPT are skilled at mimicking human speech and prose. Users feel like they are talking to real people, but AI-based chatbots also make all-too-human mistakes. Attorneys have learned the hard way never to rely on chatbots for legal research. In one recent episode, ChatGPT “hallucinated” cases with realistic-seeming but false citations, resulting in disciplinary action for one less-than-careful attorney. In another instance, a New Zealand grocery store created a bot to generate tasty recipes from submitted lists of ingredients. The same bot made news when it recommended an “aromatic water mix” based on a list of deadly household chemicals.
AI may occasionally produce nonsensical results, but in the hands of a skilled criminal, chatbots can create vivid, immersive simulations of people and even manufacture false medical evidence.
- Voice cloning: One method people can use to verify the source of a suspicious email or document is to call a listed phone number and speak directly to the sender. If the voice on the other end of the line sounds familiar and makes sense, it’s a legitimate inquiry, right? Not if an AI product has used hacked recordings of the person to clone his or her voice.
Scammers can sample a surprisingly rich and accessible audio archive of social media posts, videos, and voice memos to build a library of speech, and it doesn’t take much. One recent study needed just three seconds of audio to produce a clone of a human voice with an 85% match to the original recording. About a minute of audio can push accuracy rates up to 95%, close enough to fool employers, family members, and insurance investigators.