    Consumer Reports’ Assessment of AI Voice Cloning Products

    New report finds that AI voice cloning companies lack proper safeguards to protect consumers from potential harms

    Washington, DC – Consumer Reports (CR) released findings today from an assessment of voice cloning products from six companies: Descript, ElevenLabs, Lovo, PlayHT, Resemble AI, and Speechify. CR found that a majority of the products assessed did not have meaningful safeguards to stop fraud or misuse of their products.

    Many AI voice cloning products enable consumers to create an artificial copy of an individual’s voice using only a short audio clip of the individual speaking. AI voice cloning products have many legitimate uses, including speeding up audio editing, enhancing movie dubbing, and automating narration. But without proper safeguards, these products also present a clear opportunity for scammers, who have used the technology to impersonate, for example, a consumer’s grandchild calling in need of money, and celebrities and political figures endorsing dubious products and bogus investment schemes.

    “AI voice cloning tools have the potential to supercharge impersonation scams,” said Grace Gedye, policy analyst at CR. “Our assessment shows that there are basic steps companies can take to make it harder to clone someone’s voice without their knowledge—but some companies aren’t taking them. We are calling on companies to raise their standards, and we’re calling on state attorneys general and the federal government to enforce existing consumer protection laws—and consider whether new rules are needed.”

    Key findings from the study are below: 

    • CR researchers were able to easily create a voice clone based on publicly available audio in four of the six products in the test set: 
      • These products did not employ any technical mechanisms to ensure researchers had the speaker’s consent to generate a clone or to limit the cloning to the user’s own voice. These companies—ElevenLabs, Speechify, PlayHT, and Lovo—required only that researchers check a box confirming that they had the legal right to clone the voice or make a similar self-attestation. 
      • Descript and Resemble AI took steps to make it more difficult for customers to misuse their products by creating a non-consensual voice clone. 
    • Four of the six companies (Speechify, Lovo, PlayHT, and Descript) required only a customer’s name and/or email address to make an account.

    CR is calling on AI voice cloning companies to strengthen safeguards to protect consumers from the risks of their products. A list of recommended company practices is included below and in the CR study, and additional policy recommendations on AI can be found here.

    Recommendations for AI voice cloning companies to address fraud, deception, and impersonation:

    • Companies should have mechanisms and protocols in place to confirm the consent of the speaker whose voice is being cloned, such as by requiring users to upload audio of a unique script.
    • Companies should collect customers’ credit card information, along with their names and emails, as a basic know-your-customer practice so that fraudulent audio can be traced back to specific users.
    • Companies should watermark AI-generated audio for future detection and update their marking technique as research on best practices progresses.
    • Companies should provide a tool that detects whether audio was generated by their own products.
    • Companies should detect and prevent the unauthorized creation of clones based on the voices of influential figures, including celebrities and political figures. 
    • Companies should build so-called semantic guardrails into their cloning tools. These should automatically flag and prohibit the creation of audio containing phrases commonly used in scams and fraud, as well as other content likely to cause harm, such as sexual content (a simplified sketch of this approach follows this list).
    • Companies should consider supervising AI voice cloning, rather than offering do-it-yourself voice products. Companies might also ensure that access to the voice model is limited to necessary actors and enter into a contractual agreement about which entity is liable if the voice model is misused.
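
    As a rough illustration of the semantic-guardrail recommendation above, the sketch below shows one way a cloning service could screen requested text against a blocklist of scam-associated phrases before synthesizing audio. The blocklist, function names, and refusal behavior are hypothetical assumptions for illustration, not a description of any company's actual implementation.

        # Hypothetical sketch of a "semantic guardrail" for a voice cloning pipeline.
        # The phrase list, names, and behavior are illustrative assumptions only.
        SCAM_PHRASE_BLOCKLIST = [
            "wire the money",
            "gift card codes",
            "bail money",
            "guaranteed returns",
            "don't tell anyone",
        ]

        def violates_guardrail(requested_text: str) -> bool:
            """Return True if the text to be synthesized contains a blocked phrase."""
            lowered = requested_text.lower()
            return any(phrase in lowered for phrase in SCAM_PHRASE_BLOCKLIST)

        def generate_audio(text: str, voice_id: str) -> bytes:
            """Stub standing in for a real text-to-speech backend."""
            return f"[voice {voice_id}]: {text}".encode()

        def synthesize_speech(requested_text: str, voice_id: str) -> bytes:
            """Refuse requests that trip the guardrail; otherwise hand off to the TTS stub."""
            if violates_guardrail(requested_text):
                raise ValueError("Request blocked: text matches a known scam pattern.")
            return generate_audio(requested_text, voice_id)

    In practice, a guardrail of this kind would likely need more robust matching than simple substring checks (for example, classifiers or fuzzy matching), since blocked wording is easy to rephrase.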

    CR has also argued successfully that companies have a legal obligation under Section 5 of the Federal Trade Commission Act to protect their products from being used for harm, but more robust enforcement is needed, as well as new rules. To ensure that the U.S. is poised to counter AI-powered scams, Congress should grant the FTC additional resources and expand its legal powers. State attorneys general should examine whether AI voice cloning tools that make it easy to impersonate someone without their knowledge run afoul of state consumer protection laws. Additionally, CR encourages the introduction of legislation at the federal and state level to help codify consumer protections as they relate to AI.