Real Friends and Meaningful Interactions
Poke London were kind enough to invite us to join their panel at Social Media Week last week for a discussion around the subject of meaningful interactions on Social Media. Below is the text and slides of Alex’s presentation. The botivist work mentioned was by our colleague and friend Prof. Saiph Savage.
It was great to get some thoughts onto paper once we were given the subject title and things fell into place quite quickly. Somewhat without realising it we have a lot of hard-earned experience…
Hi, thanks for having me here. Meaningful Social Media Interactions and time well spent… I want to speak to this topic from the basis of what I think we’ve learned in the last couple of years in both researching and trying our hand in developing for communities over social media.
I’ll avoid speaking too much right now about Etic Lab’s history but I’m going to draw from our work with Political/Social activists and organisations with an interest in using social media to encourage and reify communities of interest.
I’d like to share, if not design principles, at least some important lessons about what is going on on social media, meaningful or not and useful or not, that we try to account for in our own work. I’ll illustrate these with some of the exercises we took part in to learn them.
Probably uncontroversially, it is our view that designed SM interactions are successful to the extent that they are useful.
Now, everybody has a different idea of what is useful. For example, people at different stages of a project will want different things – be it confirmation that their contribution has been received, or feedback that it had an effect. Somebody else might expect that their experience is repeatable.
This results in highly variable publics that you, the designer of these interactions, will have to work with in terms of their different expectations.
We also argue that interactions have more power when the social media actor has afforded other people the ability to do what they want or to decide what they want to do. It is important to make the distinction between giving and affording this ability.
Giving is a top-down activity where someone with resources and power provides for another in such a way as to reinforce that power. Examples would be putting instructions on a wall or creating a policy banning hate speech.
Affordance is harder to nail down, but you'll all know it. My favourite example: if you've ever walked up to a door with a handle on it and pulled when you should have pushed, the door erroneously afforded pulling – you'll have "known" what to do, and it misled you.
Bearing witness has turned up time and again when we have observed how social media affords emergent communities the chance to form from the bottom up. It's not about organising for an entity like a company to broadcast its message, but a way in which a public can come into being by speaking to that in which it is interested or has a concern.
A nice example we found was a campaign by Durham teaching assistants a few years ago to combat pay cuts and loss of resources. Their social media pages found themselves filled with other TAs from around the country sharing similar stories as well as techniques and tactics they had used in their fights.
Shared pain, shared strategies: social media turned into the platform for this community – who might never have met before – to self-organise.
The last “guiding principle” I’ll speak to is stuck up on the wall in Etic Lab’s office.
That is that “A conversation has only taken place when both parties have changed as a result.”
This is relevant to most of what I’ve said already but as a definition of what I think we are trying to do when we say we want meaningful interactions I think it is important.
This was one of the first experiments we made into how technology itself can join in a conversation. It was a paper from 2015 by a colleague of Etic's working on an anticorruption campaign in Mexico.
The botivist was a bot that would find people on Twitter who were using hashtags and keywords associated with anti-corruption topics. It would then attempt to recruit them to a formal campaign using several different strategies – explained in the slide – to speak to the potential contributors.
The idea was to test the efficacy of a bot-activist in reaching out to people with a claimed interest in the political activity and asking them to share ideas and actions, making use of a bot’s abilities to do so en-masse.
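As a sketch of the mechanic just described – find people tweeting the campaign's hashtags, then reply with one of several recruitment strategies – consider the following. The hashtags, tweets and message templates here are invented for illustration, not the study's actual materials, and a real bot would call the Twitter API rather than filter a local list.

```python
# Illustrative sketch only: hashtags, tweets and templates are assumptions,
# not the botivist study's real materials.

HASHTAGS = {"#anticorruption", "#corruption"}

# The study compared several recruitment strategies; these two are
# hypothetical stand-ins for that idea.
TEMPLATES = {
    "direct": "@{user} Want to fight corruption? Share one concrete action we could take.",
    "solidarity": "@{user} We are fighting corruption together. How can we help each other?",
}

def find_candidates(tweets):
    """Return authors of tweets that mention any tracked hashtag."""
    return [t["user"] for t in tweets
            if HASHTAGS & {w.lower() for w in t["text"].split()}]

def recruit(tweets, strategy="direct"):
    """Draft one recruitment reply per candidate using the chosen strategy."""
    template = TEMPLATES[strategy]
    return [template.format(user=u) for u in find_candidates(tweets)]

tweets = [
    {"user": "ana", "text": "Sick of #corruption in my city"},
    {"user": "ben", "text": "Lovely weather today"},
]
print(recruit(tweets))
```

The point of the design is the en-masse part: the same template scales to every matching author, which is exactly what a human recruiter cannot do.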
A side note – the most popular non-corruption-related conversation people tried to have with the botivist was whether bots should be involved in activism.
What we found was that, somewhat counter-intuitively, the bot that didn’t try to share a lived experience with people was by far the most successful.
What we think this is speaking to is the difference between authentic and inauthentic voices. It’s just my opinion but I believe it is very easy to work out whether something trying to recruit you to an action is a human or not. I think there is something that feels fundamentally manipulative when a bot tries to invoke its humanity when it speaks to you.
Further, I think it is easy to know when something, be it just a message or a more complex interaction, is authentically felt or not.
This is a bot I built over last winter for an exhibition on “the future” that (I think) just finished at the Victoria and Albert Museum. It was commissioned to be representative of the kind of political bot that was meant to have been observed in the wild during the Brexit referendum and the 2016 US General Election.
However, for various reasons it became a hybrid of what we did observe in our research and what various people on the commissioning side thought a political bot should be. As such, it has the ability to talk about subjects related to the themes of the exhibition, whereas 2016’s political bots were dumber than that and tended to drown out SM conversations or hijack them by spamming their own content.
There’s a lot that I could talk about with respect to this project but I want to highlight how it was used and specifically the preconceptions about its abilities that were brought by people to interacting with a political bot.
It’s so far had hundreds of people talk to it, and the interactions ranged from people giving it a Turing test, to arguing with it about ethics, to asking it tough questions such as “how do I get my ex to love me again?” and “who should I vote for?”
I’m sure some of the interactions were flippant, but I’m equally sure that some of them weren’t, and I think that highlights an important point about our relations to technology – especially technology which is hyped or given saturated mainstream coverage, as the politicised automation of Social Media very much was in 2018.
Over and over, people who stood in front of this and read its tag as a political bot invoked what they knew of political technology from the near planet-wide discussion of how our politics has been affected by Social Media manipulation – and demanded of a few lines of code far more than it was technically capable of.
Outside of social media we have a research interest in the ethical and social implications of technology adoption. One of the major phenomena we are interested in is complacency.
A grim illustration of this is a plane that ended up being shot down by the Soviet Union in the 1980s after wandering off course. The black box recording later revealed that, while the instruments reported nothing unusual, the pilots had conversations about the strange sky and the position of the stars, which, to their trained eyes, seemed to place them much further north than they should have been.
Now, I’ve had conversations throughout this year in which I have doubted many of the claims made for the efficacy of manipulating Social Media for political purposes. I’d be happy to go into that afterwards if you wish.
But my point here, and the reason I have used this bot as illustration is this: We believe that by giving yourself the successively more sophisticated trappings of devices, data manipulation, benchmarking and toolsets you are learning to rely on and make use of something you fundamentally don’t believe in.
This isn’t a point about politics; it’s about all of us. This isn’t about what bots are capable of but what digital technology and digital platforms for understanding and engaging with the world are doing to us.
They convince us that our own insights aren’t good enough and that they aren’t sufficiently evidenced. They also encourage you to work in terms of these tools and that’s not what we should be doing to ensure our efforts are “meaningful”.
Because we are immersed in a huge, rapidly evolving and expensively capitalized truth – that digital information and data are required for understanding the world – we are setting aside our own intuition about what the sky looks like.
For Etic Lab this is a hard learned point.
Some colleagues of mine did some work on the state space of YouTube videos and the recommendation algorithm for discovering new content. The tool they built, called “Steps to Jordan Peterson” – which both of these images are meant to illustrate – was an attempt to discover how easy it is to arrive at one of his videos from a random starting point at some other video, somewhere on YouTube.
What they found is that the recommendation algorithm does privilege some types of video, such as the work of Peterson, over others.
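The measurement behind that tool can be pictured as a random walk over a recommendation graph, counting clicks until a target is reached. The graph below is a toy stand-in invented for illustration; the real tool walked YouTube's live recommendations rather than a hand-written dictionary.

```python
import random

# Toy recommendation graph: each video lists the videos recommended from it.
# Entirely invented for illustration of the "steps to a target" idea.
RECOMMENDATIONS = {
    "cooking": ["diy", "lectures"],
    "diy": ["cooking", "debates"],
    "lectures": ["debates", "target"],
    "debates": ["target", "lectures"],
    "target": ["target"],
}

def steps_to_target(start, target="target", max_steps=50, seed=0):
    """Follow random recommendations from `start`; return the number of
    clicks needed to reach `target`, or None if we give up."""
    rng = random.Random(seed)
    node = start
    for step in range(1, max_steps + 1):
        node = rng.choice(RECOMMENDATIONS[node])
        if node == target:
            return step
    return None

print(steps_to_target("cooking"))
```

Averaging this count over many random starting points gives a rough measure of how strongly the graph funnels viewers toward the target – the intuition the "Steps to Jordan Peterson" images are meant to convey.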
I want to say here that Etic Lab does NOT believe it is an express purpose of Google to favour anybody with the YouTube platform. It only favours behaviour that encourages people to watch more videos. On top of that, people seem to spend so much time on JP and his wider community’s videos because there is ready access to material and content that they already want to hear.
That is to say, YouTube doesn’t act to privilege Peterson, but to privilege the community around Peterson because of the behaviour they exhibit in being consumers of his content.
Another phenomenon from YouTube I want to mention is how the demands of producing material to be consumed regularly and in volume are crucifying people working on the platform. And because those analytics are engraved on their bank accounts, they know that if they fail to keep that production schedule up they will lose their livelihoods. Peterson succeeds because there are a lot of people doing such work on his behalf…
We all look at the world and see different things. What I think we have found on YouTube is that we now have tools at our disposal for radically reconfirming and solidifying those differences by introducing us to others who share them and then matching us with more and more of what we want.
The communities that we see emerging on YouTube (and other social media) provide us with a good picture of what it is they need. A strong hierarchy based around family, empathy with animals, the need to feel good about yourself or be good to yourself, attention from others for your thoughts, hopes and dreams – these are needs that are really complex and can be expressed through those sorts of activities.
Both in the sense that they’re vulnerable to the ways they come across information, and in that they have real needs that aren’t satisfied by or present in other forms of behaviour, these people are doing real cultural work – making sense of the world. It’s work that needs to be done by all people, and what we are finding is that the affordance structure of the internet is such that people can do it for themselves, even amongst non-hegemonic cultures.
When we looked at how the alt-right were using reddit and 4chan to organise in 2016, we watched them ask questions, look for support and develop common practice. Amongst anti-vaxxers on social media, what we find is that they’re commonly looking for confirmation of their world-view without having enough information. A frequent question, especially with an ill child, is “Have I done the right thing?” and the unfortunate situation is that it is very easy to find someone who will reply “of course”.
This is the Tenants Union Etic had a hand in setting up in 2016. Again, I could find loads to speak to on this project.
The point I will make here is that we have never found anything as energizing as the common-sense and widely shared notion that there is such a thing as unfairness and that it cannot hold.
What we tried to do with this specific piece of tech was to place all the requisite knowledge and experience for suing a landlord to get a deposit back inside the technology, and to remove as much as possible of the burden of educating themselves and understanding the situation from the person using it, so that they could act on that instinct.
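To illustrate that design idea – the legal knowledge lives in the tool, so the user supplies only the facts they already know – here is a minimal sketch. The template wording, the names and the 14-day deadline are illustrative assumptions, not the actual text or logic of the Tenants Union tool.

```python
from datetime import date

# Hypothetical letter-before-action template. The wording and the 14-day
# deadline are illustrative assumptions, not the real tool's content.
LETTER = """Dear {landlord},

I am writing regarding my tenancy at {address}, which ended on {end_date}.
My deposit of £{deposit} has not been returned. Please repay it within
14 days, or I will pursue the matter through the courts.

Yours sincerely,
{tenant}"""

def deposit_claim(tenant, landlord, address, end_date, deposit):
    """Fill the template so the tenant never has to draft legal wording."""
    return LETTER.format(tenant=tenant, landlord=landlord, address=address,
                         end_date=end_date, deposit=deposit)

print(deposit_claim("A. Tenant", "B. Landlord", "1 Example St",
                    date(2016, 6, 1), 500))
```

The user contributes only names, an address and a number; everything that would otherwise require reading up on deposit law is baked into the template, which is the burden-removal the talk describes.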
Our analysis showed that there were a lot of reasons one might have for doing such a thing, but for most people fighting back against unfairness was the motivator. In fact, even when they could possibly claim a lot more from their landlord, most people were happy to leave it at just getting their original deposit back.
However, our hope was that we might be able to speak to many concerns by providing an interface that afforded people the chance to make it work for them. So, for example, you could take the fight to an unfair landlord, or speculatively try and get a bit more cash from somewhere or perhaps make it political and strike back against the concept of landlords.
What we found was that there were also so many reasons people could give themselves for not doing it. Lots of people, even though their landlords had been horrible, were reluctant to be horrible back. A real asymmetry. There were plenty of examples where compassion and caring about unfairness did not equip people to be the aggressive participant. We even added the option to leave and come back, in case being drunk on a Friday night made people a bit bolder to do what they hadn’t wanted to do before – which produced no real uptick in takers.
That was a learning point for us – we got it wrong. But I think it means you shouldn’t worry that you’re tapping a whirlwind if you try to do authentic actions with your social media interactions. People aren’t going to invest themselves in ending capitalism because of a conversation they have on Facebook, no matter how easy the conversation made it seem.
So, in quick conclusion, I want to draw together our view of the world: we do not have answers – or rather, we’ve routinely made mistakes and learned a lot in our practice. We believe we have found some general principles that do ‘work’ in a meaningful way, but we must recognise that Social Media is a complex and evolving ecosystem – not a platform or series of platforms.
Having an authentic conversation is an important way to create a meaningful interaction. A meaningful interaction should reduce somebody’s uncertainty about what to do. It is possible to have meaningful interactions, facilitated by social media with a public that spans a diverse body of people. That public may not all choose the same forms of expression, deploy the same concepts or be open to the same forms of evidence and argumentation.
There are, however, methods and technologies – with Social Media at the core – which will facilitate the change toward a coherent and useful public discourse.
If you are working towards that and approaching it in the right way, you’re going to find that these things – homophily, managing affordances, building communities of interest and of practice, and maintaining authenticity – will work in your favour.