Collier: I guess to Musk, hate speech is a form of free speech

Interview with a former member of Twitter's Trust and Safety Council

Photo: Private archive of Anne Collier

Twitter's Trust and Safety Council, which the company dissolved on Monday, was the largest safety advisory body among the major platforms and operated globally. The Council included several groups, among them one on online safety and harassment, in which Anne Collier worked. She and two other members of the Council had resigned in protest at the statements on “safety” made by Twitter's new owner, Elon Musk, as they explained in a letter addressed to the public. Anne Collier has been working with digital platforms on youth digital safety for over twenty years and had been a member of the Council since its inception in 2016.

“It’s very, very important, based on all of the hate Tweets that we’ve received and all of the misconceptions and misinformation about the Council over the past few days, it is very important to understand that we were not employees of Twitter. We were an independent group of advisors that Twitter brought together in 2016 and has worked with for advice on policy and product development and features”, Collier said in an interview with Mediacentar Sarajevo. She assumes that for Musk, hate speech is a form of free speech.

The Trust and Safety Council included a group on human and digital rights, a group on child sexual exploitation, a group on suicide prevention and mental health, a temporary group on content governance and one on dehumanization. They acted as Twitter's advisory body, primarily in the area of safety and hate speech control.

“So, it was really kind of thinking outside the box for Twitter. I think they are very reckless and ill-advised to get rid of us”, Collier said.

What are, in your opinion, the reasons behind Twitter’s dissolution of the Trust and Safety Council? What will be the consequences?

Twitter dissolved the Council because, clearly, we were not what Twitter wanted to work with. They never reached out to us in the six weeks since the new ownership took over. Three of us felt strongly that we should end the radio silence, as they say, and say something, have a voice in the public discussion besides, or in addition to, Elon Musk’s.

Everybody should know that the Council was dissolved summarily, in a three-paragraph email to all of us, just when a meeting between the Council and Twitter had been arranged. I was a little bit concerned when I heard that the meeting was only going to be thirty minutes, which would not allow for any significant discussion with people representing NGOs that support vulnerable people and human rights all over the world.

Have hate speech and insults increased recently? Have you noticed any difference in the past few weeks in terms of hate speech and insults, given that the role of the Council was basically to write about, discuss and monitor these issues?

The evidence that safety is declining on Twitter has been cumulative. Besides the independent data cited in our press release, from the Anti-Defamation League and the Center for Countering Digital Hate, which was published in many news outlets, including The New York Times, there was what we saw happening on the platform both before and after the new ownership took over: the staff was reduced by thousands, and Twitter announced that it would be relying more on automated moderation. So, that was another sign. There was also the treatment of the people who were dismissed, as well as of the employees who remained.

Other evidence was the reinstatement of people who had violated Twitter’s Community Standards in the past, including rules against harassment, bullying, hate speech and threats of violence; the silence towards Twitter’s Trust and Safety Council; and now our own direct experience of having our accounts on Twitter swamped in hate speech, false accusations, misinformation and disinformation, which Mr. Musk himself incited and spread by tagging us, or allowing us to be tagged, in hateful Tweets against us. And then a former Twitter employee told me that dismissing the Council signals to her that this is the end of checks and balances for Twitter safety. She had resigned long ago, in 2018, but she was responsible for the creation of the Trust and Safety Council and she was devastated by this development.

You mentioned that Twitter would rely mainly or even completely on automated content moderation. How do you see this and, in your opinion, what will be the consequences? We know that automated content moderation is not always perfect and has many flaws.

Right. Keeping social media platforms safe and civil is very complicated and requires a large toolbox of tools, both inside the platforms and external to the companies. One such tool is automated content moderation. But it’s driven by machine learning algorithms, and for an algorithm to work well, it has to be fed reams and reams of data. The reason automated detection doesn’t work that well in cases of hate speech, cyberbullying, harassment and the other harmful, behavioral types of speech is that algorithms detect patterns, and human speech and human behavior keep moving on.

If we just zoom in on the protected class that I work with the most, children and minors, people under 18, we know that teenagers are highly innovative when it comes to trends and speech online. Good, bad, neutral, right? It’s really hard for patterns to develop before that speech moves on. And so much of what happens online is highly contextual to what happens within peer groups, at home, at school, offline. The platforms don’t have that context.

And that’s why, number one, it’s really important to have human moderation in the mix and not rely too heavily on automated moderation. And then, it’s really important to have other tools out there, such as NGOs like those who were represented on the Council, who can escalate harmful content to platforms and advocate for their constituencies; helplines like the internet helplines throughout Europe; and all manner of other tools, including suicide prevention lifelines and crisis hotlines for people on the ground.

We often hear that automated content moderation does not work well, especially for small languages such as ours. Is that also the case with Twitter?

Yes, certainly. And that’s part of the problem, and why my former Twitter colleague, whom I worked with a lot over the last decade, had formed the Council and made it so broad culturally, linguistically and nationally. It’s so important to get smaller languages into the mix of content moderation, because for those languages algorithms have much less data to work on, to chew on, in order to do good automated content moderation. The whole frame of speech online in social media has been very US-centric, and then Anglophone-centric. And with the “move fast and break things” attitude in the early days of social media, there was hardly any thought at all, it seems, given to places like Sri Lanka and Myanmar and many other countries.

You also mentioned that Twitter has reinstated some suspended accounts. Why was this done, and what is your opinion of it?

Well, I’m glad you asked what my opinion is, because I can’t possibly know, but it seems that, after multiple statements on Mr. Musk’s part that he was going to double down on freedom of expression, that was his reasoning somehow: that it was okay to reinstate the accounts of those who had violated community standards in the past. I guess to Mr. Musk, freedom of expression includes hate speech, or hate speech is one form of free speech. Unfortunately.

Everything that has been happening with Twitter recently is being done under the pretext of freedom of expression. Can we really put all other issues aside in the name of defending freedom of expression?

No, we can’t. We mustn’t. It’s inhumane. And that’s what we’re seeing, and that’s what, so far, the three of us former members of the Council are experiencing directly in our own lives. We are the subjects of incredible hate speech and threats right now.

You said that the three of you who resigned from the Council before its dissolution are being targeted with hate speech and various discrediting campaigns. Did you report them, and what happened? Were there reactions from social networks and from other civil society organizations?

We are getting some kind, pro bono advice from attorneys, as well as advice on how to protect ourselves with technology. We are very thankful for that help and for the support of our colleagues. You really find out who your friends are in situations like this. We have also received threats off the Twitter platform and continue to be on high alert.

You mentioned the protection of young people on social networks, and on Twitter especially. What are the implications of all these new decisions, especially when it comes to the safety of young people online?

I’m a huge fan of social media. I loved Twitter. For me, Twitter was an entirely civil space, because the people I followed and who followed me started out being tech educators, educators who teach technology in schools, and then more and more researchers in multiple countries and some news media. It was an incredible space for a professional learning network, right? And it can be that for young people but, generally speaking, Twitter hasn’t been a favorite space for young people. I think what’s happening is that we may be seeing the decline of mass social media.

Twitter is really social broadcasting. It’s a whole different kind of tool. It has direct messaging and private communications, certainly, but young people have so many ways to communicate privately with their peers, whether it’s in video games or on Discord or Twitch or, you know, Instagram. Very often they just use texting on their phones. I just don’t know if social media as we’ve known it is going to stay very useful to young people.

And they’re very, very smart about maintaining their privacy for the most part, and we know that all young people are not equally at risk online; it’s the young people who are most vulnerable offline who are most vulnerable online. It’s a huge subject and I apologize if I’m rambling, but advice…

No, I would never tell young people to go off social media or interactive media or technology because they use it for so many good things as well, and their experience is very individual, it’s very situational, and it’s highly contextual. Those close to them need to know that, and know that when they’re vulnerable in real life, they need extra support online as well.

Can we say the Council was similar to Facebook’s Oversight Board, even though I think the way it works is perhaps a bit different?

The difference between these two kinds of bodies is that one provides kind of before-the-fact and real-time advice. That’s what the Trust and Safety Council was, and that’s what safety advisory bodies are on all the platforms. Certainly all the platforms I’ve worked with.

The Oversight Board is a whole different animal. It’s really after-the-fact, far after the fact. Some people have likened it to an appeals court: the company itself makes the content moderation decisions, users sometimes protest those decisions, whether the content was taken down or left up, and a tiny, tiny percentage of those appeals or protests goes to the Oversight Board. So it doesn’t mitigate harm; it really is more of a policy advisory body.

How do you see the implications for freedom of expression generally on the Internet in light of the events surrounding Twitter in recent weeks? Do you see this as an isolated case, or maybe we can expect similar developments in the future for other platforms as well?

No, I really don’t think any other platform would follow the path that Twitter has taken. What we know, what’s emerging from the research, is that good content moderation and keeping users safe is good for the bottom line. It’s good business. And I think all the other platforms are also seeing how bad for business it is to turn one’s platform into a free-for-all and allow hate and harmful content to prevail. I think that’s becoming quite clear, even to people with a strong bias toward the political right, and even as content moderation has been politicized in a very negative way.

I am having an experience right now, I’m actually at a safety summit in Washington, D.C. that Meta has brought together, and I am experiencing the direct opposite of what I have experienced with Twitter in the past few days. So, I’m thankful for that. I am thankful that there is still reason and rational thought and humanity in this space. Very thankful.

So, we hope this will remain just one bad example that will not be replicated elsewhere because, as you said, keeping users safe is good not only for human rights but also for business, right?

Yes!