Facebook’s engineers have developed a new method to help them identify and prevent harmful behavior like users spreading spam, scamming others, or buying and selling weapons and drugs.
Using AI-powered bots, they can now simulate the actions of bad actors by letting the bots loose on a parallel version of Facebook. Researchers can then study the bots’ behavior in simulation and experiment with new ways to stop them.
Simulating behavior you want to study is common practice in machine learning, but the WW project is notable because its simulation is built on the real version of Facebook. Facebook calls this approach “web-based simulation.”
“Unlike in a traditional simulation, where everything is simulated, in web-based simulation, the actions and observations are actually taking place through the real infrastructure, and so they’re much more realistic,” says Mark Harman, a research scientist at Facebook.