Monday, July 15, 2024

When Jennifer Watkins got a message from YouTube saying her channel was being shut down, she wasn’t initially worried. She didn’t use YouTube, after all.
Her 7-year-old twin sons, though, used a Samsung tablet logged into her Google account to watch content for children and to make YouTube videos of themselves doing silly dances. Few of the videos had more than five views. But the video that got Ms. Watkins in trouble, which one son made, was different.
“Apparently it was a video of his bottom,” said Ms. Watkins, who has never seen it. “He’d been dared by a classmate to do a nudie video.”
Google-owned YouTube has A.I.-powered systems that review the hundreds of hours of video that are uploaded to the service every minute. The scanning process can sometimes go awry and tar innocent individuals as child abusers.
The New York Times has documented other episodes in which parents’ digital lives were upended by naked photos and videos of their children that Google’s A.I. systems flagged and that human reviewers determined to be illicit. Some parents have been investigated by the police as a result.
The “nudie video” in Ms. Watkins’s case, uploaded in September, was flagged within minutes as possible sexual exploitation of a child, a violation of Google’s terms of service with very serious consequences.
Ms. Watkins, a medical worker who lives in New South Wales, Australia, soon discovered that she was locked out of not just YouTube but all her accounts with Google. She lost access to her photos, documents and email, she said, meaning she couldn’t get messages about her work schedule, review her bank statements or “order a thickshake” via her McDonald’s app — which she logs into using her Google account.
Her account would eventually be deleted, a Google login page informed her, but she could appeal the decision. She clicked a Start Appeal button and wrote in a text box that her 7-year-old sons thought “butts are funny” and were responsible for uploading the video.
“This is harming me financially,” she added.
Children’s advocates and lawmakers around the world have pushed technology companies to stop the online spread of abusive imagery by monitoring their platforms for such material. Many communications providers now scan the photos and videos saved and shared by their users to look for known images of abuse that have been reported to the authorities.
