Original Source: indianexpress.com
Twitch, the video game livestreaming site popular with teens and kids, announced changes it’s making on the platform to increase safety for its young users in the wake of criticism that it enables child predation.
On Tuesday, Amazon.com Inc.-owned Twitch said it has introduced mandatory phone verification requirements and fortified technology used to catch and terminate accounts belonging to people under 13, among other measures.
“Grooming is particularly insidious because it can be hidden in plain sight, and there are fewer established industry practices for detecting it,” Twitch said in a blog post. “These predators are not welcome and will not be tolerated on Twitch, and today we’re sharing an update regarding the continuous work we’re doing to combat them.”
In September, Bloomberg News published a report describing rampant child predation on Twitch and the platform’s insufficient moderation tools. Bloomberg analyzed 1,976 Twitch accounts whose follower lists were made up of at least 70% kids or young teens. In all, 279,016 apparent children were targeted by predators, according to data collected by a researcher who studies livestreaming websites. The researcher asked to remain anonymous due to concerns over potential career repercussions from being associated with such a disturbing topic. In a subsequent analysis over the past month, Bloomberg has discovered new predatory accounts and more children being targeted.
In the wake of the report, the UK internet regulator Ofcom contacted Twitch to discuss its poor protections for children on the platform. “We are actively reviewing whether Twitch’s measures are sufficiently robust to prevent the most harmful material being uploaded,” an Ofcom spokesperson said in an email.
Critics say the root of child predation on Twitch has been the ease with which kids can lie about their age, sign up for an account and immediately livestream themselves to anonymous and unquantifiable audiences. YouTube and TikTok both require users to possess a certain number of followers before livestreaming on mobile devices; TikTok, owned by ByteDance Ltd., recently announced plans to increase its age requirement for livestreaming to 18 from 16, effective Nov. 23, and Alphabet Inc.’s YouTube doesn’t “list,” or make searchable, mobile livestreams from users under 17 by default.
Twitch still lags behind competitors when it comes to age verification and barriers to livestreaming for kids. Unlike others, Twitch hasn’t required two-factor authentication for users signing up on mobile. With the latest updates, Twitch will now require at least one phone verification before livestreaming, which it says will help block children who had previously been suspended for streaming while underage from creating new accounts. According to a new analysis of the predatory accounts in the data set, these accounts are still finding and following, on average, hundreds of apparent children a day.
“In the face of a tsunami of new legislation around the world, it is good to see some online services taking early steps towards adopting privacy-preserving age assurance methods, but none of the major global platforms most popular with kids has yet adopted sufficiently comprehensive, audited age checks to keep children safe online,” said Iain Corby, executive director at the Age Verification Providers Association, a global trade body. “Many still underestimate the risks their sites pose to young users, particularly through enabling contact with dangerous adults about which parents often have little or no awareness.”
It’s notoriously challenging to moderate live video in real time. Twitch broadcasts more than 2.5 million hours of live content in 35 languages every day. The site’s moderation relies heavily on user reports, which, in part, resulted in the May terrorist attack in Buffalo, New York, being livestreamed for 24 minutes on Twitch. The site was able to stop the shooter’s livestream within two minutes of the violence beginning, according to a 47-page report from New York State Attorney General Letitia James on the role Twitch and other online platforms played in the mass shooting. Twitch now says it’s refining the technology human moderators use to review and take action on reports regarding children under 13.
Twitch’s discoverability features, which have helped expand its ecosystem of creators, have also made it easy for predators to find children. Kids who are streaming from their home bedrooms on their mobile phones can attract dozens or hundreds of viewers within minutes, including child predators who ask for live sexual acts through Twitch’s chat feature. Two predators watching a child’s livestream in November said they discovered the kids through Twitch’s “recently started” feature, which reveals accounts with low follower numbers, often indicative of child accounts. The livestream attracted 165 viewers within 35 minutes, according to a Bloomberg analysis.
In 2020, Twitch removed its “recently started” feature, which can make underage accounts easier for predators to identify, from two content categories, but the activity persists in almost all others. On Tuesday, Twitch said it’s “expanding the signals we use to catch and terminate accounts belonging to users under 13.”
After Bloomberg’s report, several Twitch employees said they were surprised and upset by revelations of the scale of child predation on the platform. “There are a lot of people who are frankly very upset,” said Tom Verrilli, Twitch’s chief product officer, in an October interview at TwitchCon, the company’s convention. “Like on the rest of the internet, we understand that there are people who want to use Twitch for harm. We are always building against that.”
Verrilli said in September that fixes around child safety were “on our roadmap that we will probably accelerate as a result of the article” by Bloomberg. In early October, Twitch announced it was removing a feature, cited in the Bloomberg report, that allowed third parties to track Twitch users’ following lists and that served as a key data source for the report. Verrilli said the change wasn’t related to Bloomberg’s report.
On Tuesday, Twitch said it has updated default privacy settings for its direct messaging feature, Whispers, and blocked the ability to use certain search terms to find content on the platform. It has also deepened collaboration with third-party organizations that report inappropriate behavior on the platform and track grooming trends in the industry. Twitch said it also completed its acquisition of Spirit AI, whose language-processing technology sifts through online chats and will help Twitch build tools to detect harms written in text on the platform.