Amazon's AI services division filed 1.1 million reports of suspected online child exploitation in 2025 to an advocacy group. But because these reports lacked critical information, there were zero cases in which law enforcement was able to take action. A new inquiry opened in the Senate aims to ensure that never happens again.
Sen. Chuck Grassley, an Iowa Republican who chairs the Senate Judiciary Committee, this week opened an inquiry into eight big tech companies over their handling of mandatory reporting of online child exploitation. It's the latest step in a growing movement questioning whether tech companies can be trusted to keep their youngest users safe online.
Digital service providers are required by law to report incidents of child sex exploitation to the CyberTipline run by the National Center for Missing and Exploited Children. In 2025, over 17 million reports of suspected online child sex exploitation were filed. But these reports may not contain the information necessary to prompt action in the real world.
"I am alarmed by what I've read," Grassley said. "Based on information provided to my office, I'm concerned that some companies haven't provided NCMEC and law enforcement with sufficient data needed to protect kids and prosecute suspected predators."
Grassley sent requests for more information to several major tech companies: Meta, TikTok, Roblox, Snap, Amazon AI Services, xAI, Grindr and Discord. These eight companies account for 81% of all child exploitation reports submitted to NCMEC. Notably absent from the inquiry was Google, owner of YouTube.
A Meta spokesperson told CNET the company "works tirelessly" to protect kids from this "horrific crime," stating: "We're committed to constant improvement and appreciate feedback, which has already led us to make some enhancements, as NCMEC has acknowledged. We'll continue making refinements to improve our reporting process."
Grindr, Discord and Roblox made similar comments, saying they plan to work with the Senate and NCMEC on these issues. Grindr added that its dating site is only for adults aged 18 and up. The other tech companies didn't immediately respond to requests for comment.
The Iowa Republican's inquiry follows reports from NCMEC in 2025 that tech companies were failing to provide critical location data in their reports and failing to disclose their use of child sex abuse material in AI data training. That's especially concerning given previous incidents of AI being used to create nonconsensual intimate imagery, including child sex abuse material.
Child exploitation online is a growing issue. In 2025, Meta alone filed nearly 11 million reports, 1.2 million of which dealt with suspected child trafficking. Meta owns the popular platforms Facebook, Instagram and WhatsApp. NCMEC said in 2025 that Meta and xAI had improved their reporting, but it was still lacking.
"Many ESPs regularly tout the number of reports they submit to the CyberTipline, but fail to disclose that millions of reports lack basic information," NCMEC wrote to Grassley in 2025. "This leaves children unprotected online, subjects survivors to revictimization, enables sexual offenders to remain freely online and wastes valuable and limited law enforcement resources."
There has also been movement in other branches of government to hold tech companies accountable for child safety. Meta was recently found liable by a New Mexico jury for misleading users about the safety of its platforms and failing to prevent child exploitation. The company was ordered to pay $375 million in damages. A day later, Meta and Google were found liable by a California jury for creating social media platforms that are addictive to children.
The first person was convicted on Tuesday under the new US anti-AI-deepfake law, the Take It Down Act, for creating AI-generated child sex abuse material.






