In a recent surge of advocacy, lawmakers and celebrities are calling for urgent measures to combat the growing menace of online misinformation. This movement has gained momentum as the digital landscape becomes increasingly fraught with false information, manipulated narratives, and harmful content that can sway public opinion and endanger democracy.
The Federal Communications Commission (FCC) recently took a significant step by unanimously adopting a new rule that bans the use of AI-generated voices in robocalls. This decision empowers state attorneys general to take legal action against telemarketing scams that exploit artificial intelligence to deceive and defraud the public. The ruling followed a bipartisan appeal from 26 state attorneys general who urged the FCC to address the misuse of AI in telemarketing. This action was further catalyzed by an incident in New Hampshire, where robocalls using an AI-generated voice of President Joe Biden were traced to a dubious Texas telecommunications firm.
FCC Chair Jessica Rosenworcel emphasized the urgency of the issue, stating, “Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters.” She added that the new rule would provide state attorneys general with the necessary tools to crack down on these scams and protect the public from fraud and misinformation.
The commissioners agreed that AI-generated voices fall under the category of “artificial” as defined by the Telephone Consumer Protection Act (TCPA). That act aims to curb junk calls by restricting telemarketing calls, the use of automatic telephone dialing systems, and artificial or prerecorded voice messages. Violators can face fines of up to $23,000 per call, and recipients of scam calls can take legal action of their own, potentially recovering up to $1,500 for each unwanted call.
Consumer concerns about imposter scams and robocalls have also been a top issue for the Federal Trade Commission (FTC). In its recent biennial report to Congress on the National Do Not Call Registry, the FTC noted that more than 2.6 million people signed up with the registry in fiscal year 2023, bringing the total to 249 million registrants. Even so, the number of complaints about robocalls fell by more than 900,000 from 2022 to 2023.
The FCC’s decision comes on the heels of Rosenworcel’s proposal to make AI-generated voices illegal under the TCPA. In November, the FCC launched an inquiry to explore how the agency can best combat illegal robocalls and understand the role of AI in these activities. The agency is also investigating how AI can be leveraged for pattern recognition to identify illegal robocalls before they reach consumers. The FCC has signed a memorandum of understanding with at least 48 attorneys general to collaborate on this issue.
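The FCC has not said what that pattern recognition would look like in practice, and nothing below comes from the agency. As a rough illustration of the general idea only, here is a minimal sketch of a text classifier flagging likely scam calls, assuming call audio has already been transcribed; the training examples, labels, and model choice are all invented for this example:

```python
# Illustrative only: a toy transcript classifier, not the FCC's or any
# carrier's actual system. Assumes call audio has already been transcribed
# to text; the labeled examples below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled transcripts: 1 = likely scam robocall, 0 = legitimate.
transcripts = [
    "this is your final notice about your car's extended warranty",
    "press one now to speak with an agent about your social security number",
    "you have been selected to receive a free cruise call back immediately",
    "hi mom just checking in about dinner on sunday",
    "your dentist appointment is confirmed for tuesday at three",
    "the package you ordered has shipped and will arrive friday",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF features plus logistic regression: a common baseline for text
# classification, standing in for whatever models are actually used.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(transcripts, labels)

# Score a new, unseen call before it reaches the consumer.
incoming = "act now to lower your interest rates press one to continue"
score = model.predict_proba([incoming])[0][1]
print(f"estimated scam probability: {score:.2f}")
```

A real screening system would draw on far richer signals than a handful of toy transcripts, such as calling patterns, number spoofing, and audio characteristics, but the shape of the problem is the same: learn from labeled examples, then score calls before delivery.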
The internet, once envisioned as a utopian space for the free and open exchange of ideas, has become a battleground for misinformation and harmful content. The early optimism of internet pioneers like Stewart Brand, Tim Berners-Lee, and John Perry Barlow has been overshadowed by the darker realities of online interaction. The rise of social media and the strategic use of online platforms for economic and political gain have significantly altered the nature of public discourse.
Prominent internet analysts and the public have expressed growing concerns about the evolution of online interactions. Events and discussions over the past year have highlighted the challenges ahead. For instance, respected internet pundit John Naughton questioned whether the internet has become a “failed state,” and the U.S. Senate heard testimony on the use of social media for extremist causes. Scholars have provided evidence of social bots disrupting the 2016 U.S. presidential election, and news organizations have documented foreign trolls bombarding U.S. social media with fake news.
A Pew Research Center study found that 64% of U.S. adults believe fabricated news stories cause significant confusion about current issues and events. Another Pew report showed that 62% of Americans get their news from social media, leading to concerns about the internet’s impact on truth and democracy. The rise of online harassment, fake news, and the weaponization of social media has prompted calls for action from various quarters.
Celebrities have also been vocal about the need to address online misinformation. High-profile cases of social media mobbing, such as the harassment of “Ghostbusters” actor Leslie Jones, have drawn attention to the issue, and industry reports have alleged bias inside the platforms themselves, such as former Facebook workers’ claims that the site suppressed conservative news content.
Governments and state actors have increased their efforts to monitor social media users, raising concerns about privacy and free speech. The Center on the Future of War has even started the Weaponized Narrative Initiative to study the impact of misinformation on public discourse.
Experts agree that the internet’s future will be shaped by how society addresses these challenges. Some predict that online reputation systems and better security and moderation solutions will become ubiquitous, making it harder for bad actors to disrupt discourse. However, there are concerns that such systems could lead to increased surveillance and suppression of free speech.
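These predictions leave the mechanics of such reputation systems unspecified. As one hypothetical illustration, a platform could weight contributors by their feedback history while guarding against thin records, for instance with a Wilson score lower bound so that a brand-new account cannot instantly outrank a long-standing one. Every name and number in the sketch below is invented:

```python
# Illustrative only: one simple way a reputation system might rank
# contributors, using the Wilson score lower bound so that accounts
# with little history are not overrated.
import math

def wilson_lower_bound(positive: int, total: int, z: float = 1.96) -> float:
    """Lower bound of the 95% confidence interval for the true positive rate."""
    if total == 0:
        return 0.0
    p = positive / total
    denom = 1 + z * z / total
    centre = p + z * z / (2 * total)
    margin = z * math.sqrt((p * (1 - p) + z * z / (4 * total)) / total)
    return (centre - margin) / denom

# Hypothetical contributors: (positive interactions, total interactions).
contributors = {"veteran": (480, 500), "newcomer": (5, 5), "troll": (20, 100)}
for name, (pos, tot) in sorted(
    contributors.items(), key=lambda kv: -wilson_lower_bound(*kv[1])
):
    print(f"{name:10s} reputation = {wilson_lower_bound(pos, tot):.3f}")
```

Under this scoring the newcomer with five flawless interactions still ranks below the veteran with a long track record, which is exactly the conservatism critics of reputation systems worry could also be used to mute legitimate but unestablished voices.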
The balance between protecting anonymity and enforcing consequences for abusive behavior remains a significant challenge. As more people connect to the internet, the scale and complexity of online interactions will continue to grow, making it difficult to manage problematic content and contributors.
The call for action against online misinformation is gaining traction among lawmakers and celebrities alike. The FCC’s ruling on AI-generated voices in robocalls is a step in the right direction, but much more will be needed to protect the integrity of online discourse and ensure a safe, truthful digital environment.