Facebook: A Bug Caused Some Content That Did Not Violate Its Rules To Be Deleted By Mistake


According to CNET, Facebook's parent company Meta said on Tuesday that a bug led to the mistaken deletion of some content that didn't violate its rules in the first three months of this year. The social media giant said it had fixed the bug and restored posts wrongly flagged as violating its rules, including those against terrorism and organized hate.

Facebook took action on more organized hate content in the first quarter of 2022, up from 1.6 million pieces in the fourth quarter of 2021. The social network also took action on 16 million pieces of terrorist content in the first quarter, more than double the 7.7 million in the fourth quarter. Meta attributed the spike to a bug in its media-matching technology. A chart in the company's quarterly Community Standards Enforcement Report shows that the social network restored more than 400,000 pieces of content wrongly labeled as terrorism.

Meta's photo and video service Instagram also took action on more terrorist and organized hate content because of the bug, which affected other types of content as well. According to the report, Facebook restored 345,600 pieces of content flagged as suicide and self-injury in the first quarter, up from 95,300 in the fourth quarter. The social network also restored more than 687,800 pieces of content wrongly flagged as sexual exploitation in the first quarter, up from 180,600 in the previous quarter.

The errors raise questions about how Meta's automated technology operates and whether other bugs or mistakes have gone undetected. The company said it has been taking more steps to prevent content-moderation errors. Meta is testing new AI techniques that learn from appeals and restored content. Guy Rosen, Meta's vice president of integrity, said on a press call Tuesday that the company is also experimenting with giving people more warning before the social network penalizes them for breaking its rules.
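Meta hasn't said how these experiments work under the hood. One plausible reading of "learning from appeals" is that posts removed by a classifier and later restored by human review re-enter its training data with corrected labels. The sketch below illustrates that idea with a toy scikit-learn model; the data, labels, and model choice are all hypothetical stand-ins, not Meta's system.

```python
# Hypothetical sketch of "learning from appeals": posts that were removed
# but later restored re-enter training with corrected labels. Meta hasn't
# described how its experiments use this signal; everything here is a toy.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Label 1 = violating, 0 = benign.
texts = [
    "join our armed attack",
    "spread the attack manifesto",
    "photos from the family barbecue",
]
labels = [1, 1, 0]

# A post the classifier wrongly removed and a human reviewer restored on
# appeal is appended to the training data with the corrected label 0.
texts.append("news coverage of the attack")
labels.append(0)

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

# The retrained model should now lean benign on the restored post's text.
print(clf.predict(vec.transform(["news coverage of the attack"])))
```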

Rosen said that when a false positive enters its media-matching technology, the error "fans out" and the system deletes large amounts of content that doesn't violate the platform's rules. "We have to be very careful about the so-called seeds that enter the system before the fan-out occurs. In this case, what we encountered was the introduction of some new technology that brought some false positives into the system," Rosen said, adding that the content was later restored.
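Rosen didn't detail the internals of the system, but fingerprint-bank designs are a common pattern in content moderation. The minimal sketch below, assuming such a design (the class, names, and exact-hash matching are all illustrative, not Meta's implementation), shows why one bad "seed" is so damaging: once a benign fingerprint lands in the bank, every later upload that matches it is removed automatically.

```python
import hashlib

# Minimal sketch of a fingerprint-bank ("media matching") pipeline.
# Meta has not published its implementation; real systems use perceptual
# hashes that survive re-encoding and cropping, not exact SHA-256 matches.

def fingerprint(media: bytes) -> str:
    """Stand-in for a perceptual hash of an image or video."""
    return hashlib.sha256(media).hexdigest()

class MediaMatchBank:
    """Bank of 'seed' fingerprints; matching uploads are removed."""

    def __init__(self) -> None:
        self.seeds: dict[str, str] = {}  # fingerprint -> policy label

    def add_seed(self, media: bytes, label: str) -> None:
        self.seeds[fingerprint(media)] = label

    def matches(self, media: bytes) -> str | None:
        """Return the matched policy label, or None if the upload is clean."""
        return self.seeds.get(fingerprint(media))

bank = MediaMatchBank()
bank.add_seed(b"<known terrorist video>", "terrorism")

# The failure mode Rosen describes: one benign item is seeded by mistake...
bank.add_seed(b"<harmless news clip>", "terrorism")

# ...and the error fans out: every later upload matching that seed is removed.
uploads = [b"<harmless news clip>"] * 3 + [b"<vacation photo>"]
removed = [u for u in uploads if bank.matches(u)]
print(f"removed {len(removed)} of {len(uploads)} uploads")  # removed 3 of 4
```

Under this reading, restoring the affected posts means pulling the bad seed and reversing each removal it triggered, which would be consistent with the hundreds of thousands of restorations shown in the report.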

At the same time, Facebook faces criticism for not deleting terrorist content before it spread. Over the weekend, a white man accused of fatally shooting 10 Black people at a grocery store in Buffalo, New York, livestreamed the attack, and officials said copies of the video circulated on social networks including Facebook and Twitter. The Washington Post reported that a link to a copy of the video appeared on Facebook, where it was shared more than 46,000 times and received more than 500 comments. Facebook didn't delete the link for more than 10 hours.

Rosen said that after the company learned of the shooting, employees quickly designated it a terrorist attack and worked to delete any copies of the video, as well as what officials described as the gunman's roughly 180-page hate-filled screed.

Rosen said one of the challenges is that people create new versions of videos or links in an attempt to evade detection by social media platforms. He said that, as in any such case, the company would work to improve its systems to detect violating content more quickly. Rosen added that he had no further details to share about what specific measures Facebook is considering.
