Meta faces legal scrutiny as AI advancements raise concerns over child safety



Concerns over child safety come amid rapid advancements in artificial intelligence, including text-based and generative AI.

A group of 34 U.S. states is filing a lawsuit against Facebook and Instagram owner Meta, accusing the company of improperly manipulating minors who use its platforms. The development comes amid rapid advancements in artificial intelligence (AI), including both text-based and generative AI.

Legal representatives from various states, including California, New York, Ohio, South Dakota, Virginia and Louisiana, allege that Meta utilizes its algorithms to foster addictive behavior and negatively impact the mental well-being of children through its in-app features, such as the “Like” button.

The government litigants are proceeding with legal action despite Meta’s chief AI scientist recently speaking out, reportedly saying that worries over the existential risks of the technology are still “premature” and that Meta has already harnessed AI to address trust and safety issues on its platforms.

Screenshot of the filing. Source: CourtListener

Attorneys for the states are seeking various damages, restitution and compensation for each state named in the document, with figures ranging from $5,000 to $25,000 per alleged occurrence. Cointelegraph reached out to Meta for more information but had yet to receive a response.

Meanwhile, the United Kingdom-based Internet Watch Foundation (IWF) has raised concerns about the alarming proliferation of AI-generated child sexual abuse material (CSAM). In a recent report, the IWF revealed the discovery of 20,254 AI-generated CSAM images on a single dark web forum in just one month, warning that this surge in disturbing content has the potential to inundate the internet.

The organization urged global cooperation to combat CSAM, suggesting a multifaceted strategy that includes adjusting existing laws, enhancing law enforcement training and implementing regulatory supervision of AI models.

Related: Researchers in China developed a hallucination correction engine for AI models

For AI developers, the IWF advises prohibiting the use of their AI to generate child abuse content, excluding associated models and focusing on removing such material from their models.

Advances in generative AI image generators have made it significantly easier to create lifelike images of people. Platforms such as Midjourney, Runway, Stable Diffusion and OpenAI’s DALL-E are examples of tools capable of generating realistic images.

Magazine: ‘AI has killed the industry’: EasyTranslate boss on adapting to change


