On Wednesday, the Federal Trade Commission (FTC) voted unanimously to prohibit marketers from using fake reviews—such as those created by AI technology—and other deceptive practices to promote their products and services.
All five FTC commissioners approved the final rule, which will take effect 60 days after it is published in the Federal Register, the official government record of rules and notices.
Rules are generally published shortly after adoption, so the FTC's ban on fake reviews is expected to be in effect by mid-October.
“Fake reviews not only waste people's time and money but also distort the marketplace and drive business away from honest competitors,” FTC Chair Lina Khan said in a statement.
In addition to banning reviews that are not written by real people, such as AI-generated ones, the FTC's rule prohibits companies from paying for positive or negative reviews to artificially boost or damage a product's reputation. It also bars marketers from inflating their influence, such as by paying for bots to pad their follower counts.
The rule allows the FTC to seek civil penalties for each individual violation. For an e-commerce site with hundreds of thousands of reviews, penalties for fake or manipulated reviews could therefore accumulate rapidly.
As e-commerce, influencer marketing, and generative AI have grown more prevalent, advertisers have increasingly turned to chatbots such as ChatGPT to churn out user reviews for online products.
This trend can lead to consumers buying products based on false endorsements or misleading claims.
