According to a report in The Washington Post, AI voice-generating software is becoming more sophisticated, making it easier for scammers to mimic the voices of loved ones and trick victims into sending them money. Elderly people are especially vulnerable to this type of scam, as it can be difficult to tell that the voice on the other end of the line is inauthentic.
The Federal Trade Commission has stated that these impostor scams are extremely common, accounting for more than $11 million in reported losses in 2022. Unfortunately, they are difficult for authorities to investigate and prosecute because they can be run from anywhere in the world.
While AI voice-modeling tools have many positive applications, such as improving text-to-speech generation and opening new possibilities for speech editing, the potential for misuse is high. Deepfake voice technology has already caused scandals: 4chan members, for example, generated fake audio clips of celebrities making racist or offensive statements.
Companies must consider adding stronger safeguards against misuse of the technology, or risk liability for the substantial damage it can cause. Yet many continue to release AI products without fully understanding the risks involved. The FTC has released guidance urging companies to weigh whether their products do more harm than good and to hold themselves accountable for the risks those products create.