By Monika Bickert, Head of Global Policy Management

Last month, people shared several horrific videos on Facebook of Syrian children in the aftermath of a chemical weapons attack. The videos, which also appeared elsewhere on the internet, showed the children shaking, struggling to breathe and eventually dying.

The images were deeply shocking - so much so that we placed a warning screen in front of them. But the images also prompted international outrage and renewed attention on the plight of Syrians.

Reviewing online material on a global scale is challenging and essential. As the person in charge of doing this work for Facebook, I want to explain how and where we draw the line.

On an average day, more than a billion people use Facebook. They share posts in dozens of languages: everything from photos to live videos. A very small percentage of those will be reported to us for investigation. The range of issues is broad - from bullying and hate speech to terrorism - and complex. Designing policies that both keep people safe and enable them to share freely means understanding emerging social issues and the way they manifest themselves online, and being able to respond quickly to millions of reports a week from people all over the world.

For our reviewers, there is another hurdle: understanding context. It's hard to judge the intent behind one post, or the risk implied in another. Someone posts a graphic video of a terrorist attack. Will it inspire people to emulate the violence, or to speak out against it? Someone posts a joke about suicide. Are they just joking, or is it a cry for help?

In the UK, being critical of the monarchy might be acceptable. In some parts of the world it will get you a jail sentence. Laws can provide guidance, but often what's acceptable is more about norms and expectations. New ways to tell stories and share images can bring these tensions to the surface faster than ever.

We aim to keep our site safe. We don't always share the details of our policies, because we don't want to encourage people to find workarounds - but we do publish our Community Standards, which set out what is and isn't allowed on Facebook, and why.

Our standards change over time. We are in constant dialogue with experts and local organizations, on everything from child safety to terrorism to human rights. Sometimes this means our policies can seem counterintuitive. As the Guardian reported, experts in self-harm advised us that it can be better to leave live videos of self-harm running so that people can be alerted to help, but to take them down afterwards to prevent copycats. When a girl in Georgia, USA, attempted suicide on Facebook Live two weeks ago, her friends were able to notify police, who managed to reach her in time.

We try hard to stay objective. The cases we review aren't the easy ones: they are often in a grey area where people disagree. Art and pornography aren't always easily distinguished, but we've found that digitally generated images of nudity are more likely to be pornographic than handmade ones, so our policy reflects that.

There's a big difference between general expressions of anger and specific calls for a named individual to be harmed, so we allow the former but don't permit the latter.

These tensions - between raising awareness of violence and promoting it, between freedom of expression and freedom from fear, between bearing witness to something and gawking at it - are complicated, and there are rarely universal legal standards to provide clarity. Being as objective as possible is the only way we can be consistent across the world. But we still sometimes end up making the wrong call.

The hypothetical situations we use to train reviewers are intentionally extreme. They're designed to help the people who do this work deal with the most difficult cases. When we first created our content standards nearly a decade ago, much was left to the discretion of individual employees. But because no two people will have identical views of what defines hate speech or bullying - or any number of other issues - we now include clear definitions.

We face criticism from people who want more censorship and people who want less. We see that as a useful signal that we are not leaning too far in any one direction.

I hope that readers will understand that we take our role extremely seriously. For many of us on the team within Facebook, safety is a passion that predates our work at the company: I spent more than a decade as a criminal prosecutor, investigating everything from child sexual exploitation to terrorism. Our team also includes a counter-extremism expert from the UK, the former research director of West Point's Combating Terrorism Center, a rape crisis center worker, and a teacher.

All of us know there is more we can do. Last month, we announced that we are hiring an extra 3,000 reviewers. This is demanding work, and we will continue to do more to ensure we are giving them the right support, both by making it easier to escalate hard decisions quickly and by providing the psychological support they need.

Technology has given more people more power to communicate more widely than ever before. We believe the benefits of sharing far outweigh the risks. But we also recognize that society is still figuring out what is acceptable and what is harmful, and that we, at Facebook, can play an important part in that conversation.


Original document: https://newsroom.fb.com/news/2017/05/facebooks-community-standards-how-and-where-we-draw-the-line/
