Ivy Choi, a YouTube spokeswoman, said a video’s context and content would determine whether it was taken down or allowed to remain. She added that YouTube would focus on videos that were “technically manipulated or doctored in a way that misleads users beyond clips taken out of context.”
As an example, she cited a video that went viral last year of Speaker Nancy Pelosi, a Democrat from California. The video was slowed down to make it appear as if Ms. Pelosi were slurring her words. Under YouTube’s policies, that video would be taken down because it was “technically manipulated,” Ms. Choi said.
But a video of former Vice President Joseph R. Biden Jr. responding to a voter in New Hampshire, which was cut to wrongly suggest that he made racist remarks, would be allowed to stay on YouTube, Ms. Choi said.
She said deepfakes — videos that are manipulated by artificial intelligence to make subjects look a different way or say words they did not actually say — would be removed if YouTube determined they had been created with malicious intent. But whether YouTube took down parody videos would again depend on the content and the context in which they were presented, she said.
Renée DiResta, the technical research manager for the Stanford Internet Observatory, which studies disinformation, said YouTube’s new policy was trying to address “what it perceives to be a newer form of harm.”
“The downside here, and where missing context is different than a TV spot with the same video, is that social channels present information to people most likely to believe them,” Ms. DiResta added.
Article source: https://www.nytimes.com/2020/02/03/technology/youtube-misinformation-election.html?emc=rss&partner=rss