Once Facebook's algorithms flag an account as suspected of terrorism, moderators are given full access to it. Because Facebook has declined to define what counts as 'terrorism', this is a serious data-security and privacy problem.
These moderators can then read the account owner's private messages and, if they choose, track the person's location. Worse yet, they are not part of Facebook's in-house team of employees, but third-party contractors.
Several countries have recently stepped up pressure on technology companies to strengthen their measures against terrorism, a response to the terrorist attacks in the UK and elsewhere in Europe. Facebook's approach appears to fall in line with these circumstances and events.
Leaked documents have revealed the arbitrary approach Facebook takes to determining what is and is not allowed on the site. The final call, however, always rests on the individual moderator's shoulders.
According to Facebook, each company applies its own definition of what constitutes terrorist content. Yet Facebook has never provided any official definition of what the term means to the company itself.
Among other things, this raises the problem of internal surveillance. While the US intelligence community is bound by the Fourth Amendment's protections against unwarranted searches of Americans, there is a big elephant in the room: those protections do not apply to private companies like Facebook.