Ex-Facebook employee Alexander Mäkelä recently launched a web booklet entitled Social Media for Change – Ideas, Tools and Best Practices for Civic Engagement and Elections with the help of Alberto Alemanno’s The Good Lobby.
I sat down with Alexander to discuss his background and experience in social media, as well as exploring just how the booklet came about and who and what it was meant for. This is the first part of two articles covering the interview.
DK: Can you tell me a bit about yourself and your journey up to this point?
AM: I am Finnish-Swedish, or rather Swedish-Finnish, in the sense that my parents were both Finnish, but I was born and raised in Sweden. So, to clarify, I am part of the Finnish minority in Sweden, not the Swedish minority in Finland.
DK: Yes, I was about to ask. That must be confusing for people, when you first meet them.
AM: Ha ha, yes. Generally speaking, I come from a kind of blended background, which left an international mark on me and has caused me to always look at things more globally. I left Sweden as soon as I could, to study in the UK, in Canada and here in Belgium. I also worked in different cities, like Paris and London, moving around a lot between very international, vibrant places.
My first job was in product development with The Body Shop, a company that was owned by L’Oreal. So we are talking about make-up, fragrance, body butters. I was working on business cases with marketing insights, and that is also where I first came into contact with social media. It is a very competitive industry and you attract customers by doing really good marketing campaigns. So I learned to understand social media from the influencer’s point of view. How do you use bonds in a certain way to influence emotions in clients? How do you run a month-long campaign in different stages?
Eventually, I ended up in Brussels, like many people do. I did a traineeship at the European Commission and did my master’s degree in European public policy. I came back to the Commission and evaluated impact assessments for a while, to ensure that they remained evidence-based.
Then I found this job opportunity at Facebook, which was a dream come true, in the sense that from an early age I had been interested in tech and sci-fi. Growing up with Star Trek, I always found Silicon Valley very appealing.
I was there as a maternity leave cover for a year, so I was neither fired nor did I decide to leave; the contract term was simply up. But it was an incredible year in any case. I worked on things like fake news, hate speech and election integrity. I also became one of the lead people working on policy marketing material, within a company that is really good at creating marketing material. That is kind of it, in a nutshell.
DK: Why was Facebook working on policy marketing?
AM: I was part of the public policy marketing team. So, to establish our position on certain issues, we created marketing material: amazing slide decks for VPs, pamphlets and these kinds of things, but with a social media touch to it, so it kind of looks like Instagram or Facebook.
DK: It’s interesting as well to hear that they were investing a lot of time to tackle issues such as hate speech, fake news, etc.
AM: Facebook has been under a lot of scrutiny and I think when you are that big you probably should be scrutinised. There have been a lot of issues. It is the largest social media platform on the planet, what is it, 2.2 billion people that use it? Obviously, not everyone has good intentions, so some people are going to try and propagate their racist agendas, some people are going to try and spread fake news, be it to try and direct people to links and advertising farms, where they are just bombarded by ads. You know, I am talking about those links like “Top 10 something something…” and you click on it and suddenly there are a hundred adverts. To some extent that also can be considered fake news, spammy type of content.
To be fair, a lot of people have also started to use social media for elections and as a company, Facebook has really understood that it has a key role in society and that people use it for good and bad. It is about mitigating those bad situations and making sure that no political actor can use any of the platforms to their advantage, ensuring that there is an even playing field.
The company has either already taken the initiative on all of these things, or is cooperating with authorities, for instance with the European Commission. The hate speech work that we did at Facebook focused a lot on training NGOs to quickly put up notices on hate speech content, showing them how to find and identify it easily, and making sure they had all the tools they needed to keep their platforms as user-friendly as possible, which is very important. You don’t want to go on a platform where you see hateful content.
Facebook also worked on counter-terrorism. With the help of algorithms and filters, 99% of Al Qaeda and ISIS content was removed before anyone even noticed it. That is the level of sophistication now.
DK: It is very interesting that Facebook is now able to do that, because there was this big controversy around the Copyright Directive in the European Parliament. Upload filters were being pushed as the solution to filter out all illegal content, such as the terrorist content you just mentioned, and the argument against them was that it is very hard to actually create and implement these filters. Can you give me a bit more inside information on how Facebook approached this?
AM: When it comes to upload filters, I would first like to say that they are not a silver bullet. In terms of the Copyright Directive, a lot of people in the European Parliament, and in Brussels more generally, seemed to think that upload filters are a magic cure for everything, without realising that you can always find a way around them by just tweaking your image or video a bit. You can rearrange one or two pixels slightly and it will be enough to fool the system. The human eye won’t be able to tell the difference. Maybe if you zoom in a bit you can tell that the lighting is a bit off, or the contrast is slightly different, but this is enough to get around auto-detection systems, and not everyone can afford filter systems more sophisticated than that.
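To make the evasion point concrete, here is a minimal sketch (not Facebook’s actual system, and the “image” is just a stand-in byte buffer) of why a blocklist built on exact hashes is so fragile: changing a single pixel value, invisible to the eye, produces a completely different fingerprint, which is why production systems rely on more robust perceptual matching instead.

```python
import hashlib

# A toy "image": raw pixel bytes standing in for a real image file.
original = bytes([120, 64, 200, 15] * 8)

# Tweak one pixel value by a single unit -- visually imperceptible.
tweaked = bytearray(original)
tweaked[5] = (tweaked[5] + 1) % 256
tweaked = bytes(tweaked)

# An exact (cryptographic) hash changes completely, so a blocklist
# of exact hashes no longer matches the tweaked copy.
h_orig = hashlib.sha256(original).hexdigest()
h_tweak = hashlib.sha256(tweaked).hexdigest()

print(h_orig == h_tweak)  # False: a one-pixel change evades exact matching
```

This is only an illustration of the failure mode; real detection systems use perceptual hashes designed to stay stable under small edits, which is exactly the arms race the interview describes.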
When it comes to the counter-terrorism work that Facebook does, it cooperates with a lot of other companies through GIFCT, the Global Internet Forum to Counter Terrorism. They have an industry-wide hashing database. It kind of puts tags on specific content that has been removed before. They are also working with law enforcement and national governments and a lot of NGOs and even academia to make sure that the filters really do understand what type of images, videos or text should be removed. At the same time it is also very difficult in the sense that you have to ask: How do you distinguish between someone who is criticising a terrorist action and someone who is praising it? So it is an ongoing process…
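The shared hashing database idea described above can be sketched in a few lines. This is a toy model, not GIFCT’s actual implementation: the class name and sample data are illustrative, and real deployments use perceptual rather than exact cryptographic hashes. One platform “tags” removed content by storing its hash; another platform can then check new uploads against the shared set.

```python
import hashlib


class SharedHashDatabase:
    """Toy model of an industry-shared database of hashes ("tags")
    of content that some platform has already removed."""

    def __init__(self):
        self._known_hashes = set()

    def tag_removed_content(self, content: bytes) -> str:
        """Hash removed content and share the hash, not the content itself."""
        digest = hashlib.sha256(content).hexdigest()
        self._known_hashes.add(digest)
        return digest

    def is_known(self, content: bytes) -> bool:
        """Check an upload against the shared database before it spreads."""
        return hashlib.sha256(content).hexdigest() in self._known_hashes


db = SharedHashDatabase()
removed_video = b"bytes of a video that platform A removed"
db.tag_removed_content(removed_video)  # platform A shares the tag

# Platform B checks uploads against the shared database.
print(db.is_known(removed_video))           # True: known content is caught
print(db.is_known(b"an unrelated upload"))  # False: passes through
```

Note the design point: only hashes circulate between companies, so platforms can cooperate on removal without redistributing the offending material itself.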
DK: You use machine-learning for that, I assume?
AM: Yeah and it is going to get better over time, but right now filters are not a silver bullet in any sense of the word. I think, with counter-terrorism it has been successful, because there has been such strong cooperation between the companies with national governments and the European Union. In terms of copyright, I am not so sure we can achieve those same results.
DK: So basically Facebook’s solution for counter-terrorism was to be transparent and cooperate with as many other stakeholders as possible, in order to ensure that the filters don’t end up practicing censorship.
AM: Yes, and at the same time Facebook also employs over 200 people who work specifically with law enforcement on counter-terrorism, and there are thousands of content reviewers as well. It is not just A.I. systems; there is a human element involved as well.
When it comes to more commercial issues, like copyright, well, how do you determine a copyright infringement if you are a content reviewer? In this scenario it becomes slightly more tricky. When it comes to hate speech or terrorist content, it is usually a bit more clear-cut. There are some grey zones, but you can determine it much more easily.