Facebook has made another wise move in trying to stop harassment, criminal activity, misinformation, and other types of wrongdoing on the platform.
Facebook says its researchers are developing new technology that they hope will aid ongoing efforts to give the platform's AI the ability to snuff out harassment.
How do these bots work?
These bots have three main functions. First, WES (Web-Enabled Simulation) uses machine learning to train bots to behave like real Facebook users. Second, WES automates the bots' interactions at large scale, from thousands to millions.
Finally, WES deploys the bots on Facebook's actual production codebase, which allows them to interact with each other and with real content on Facebook while remaining walled off from real users.
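The three steps above can be pictured with a toy sketch. Everything here is invented for illustration (Facebook has not published WES's code): a sandboxed platform records the actions of many trained bots, showing how interactions scale to thousands of agents without touching real users.

```python
# Hypothetical sketch of a WES-style setup. All class and function names
# are invented for illustration; they are not Facebook's actual API.
import random

class SandboxPlatform:
    """Isolated environment: real code paths, but no real users."""
    def __init__(self):
        self.log = []  # every bot action is recorded for later analysis

    def perform(self, bot_id, action):
        self.log.append((bot_id, action))

class Bot:
    """A bot that imitates ordinary user behavior (step 1: trained to act
    like a real user; here we just sample from a fixed action set)."""
    ACTIONS = ["search", "visit_page", "send_message", "post_listing"]

    def __init__(self, bot_id):
        self.bot_id = bot_id

    def step(self, platform):
        platform.perform(self.bot_id, random.choice(self.ACTIONS))

def run_simulation(num_bots=1000, steps=10):
    """Step 2: automate interactions at scale; step 3: run them inside
    the sandboxed platform, separate from real users."""
    platform = SandboxPlatform()
    bots = [Bot(i) for i in range(num_bots)]
    for _ in range(steps):
        for bot in bots:
            bot.step(platform)
    return platform

sim = run_simulation()
print(len(sim.log))  # 1000 bots x 10 steps = 10000 recorded actions
```

In a real deployment the "actions" would exercise Facebook's production code paths rather than appending to a list, but the isolation idea is the same: the bots' world is fully logged and fully separate.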
According to Engadget, in the testing environment, known as WW, the bots take actions like trying to buy and sell forbidden items, such as guns and drugs. A bot can use Facebook like a normal person would, conducting searches and visiting pages. Engineers can then test whether the bot can bypass safeguards and violate Community Standards, according to the statement.
The plan is for engineers to find patterns in the results of these tests and use that data to test ways to make it harder for users to violate Community Standards.
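The test loop described above can be sketched as follows. This is a deliberately naive, hypothetical example (the keyword filter and listing texts are invented stand-ins, not Facebook's real safeguards): bots attempt prohibited listings, and engineers check which attempts slip past the filter, revealing a pattern worth fixing.

```python
# Hypothetical illustration of the violation-testing loop. The filter and
# the sample listings are invented for illustration only.

PROHIBITED_KEYWORDS = {"gun", "drugs"}  # toy stand-in for Community Standards

def safeguard_blocks(listing_text):
    """A deliberately naive filter: blocks listings containing banned words."""
    words = set(listing_text.lower().split())
    return bool(words & PROHIBITED_KEYWORDS)

# Bot-generated attempts to sell forbidden items, some obfuscated.
prohibited_attempts = [
    "selling a gun cheap",      # blocked: contains "gun"
    "selling a g u n cheap",    # obfuscated spelling defeats the filter
    "drugs for sale",           # blocked: contains "drugs"
]

# Engineers inspect which attempts bypassed the safeguard.
bypassed = [t for t in prohibited_attempts if not safeguard_blocks(t)]
print(bypassed)  # ['selling a g u n cheap']
```

The pattern exposed here (spaced-out letters evade keyword matching) is exactly the kind of finding such tests are meant to surface, so the safeguard can be hardened before real users exploit the gap.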
Facebook has been fighting harassment, misinformation, and similar abuse for a long time, and WES marks real progress toward that goal. If it works as hoped, the platform could one day check activity at a scale far beyond what human reviewers can handle, with much less human supervision.