The Social Dilemma

I recently watched this docu-drama on Netflix and found it fascinating and even a bit unnerving. I really enjoyed the fact that they had the techies/engineers who worked for the companies and built the systems talk about them. The film doesn’t paint Big Tech as the bad guys, but it does address the issue of morality and regulation of social media platforms. What we are seeing are the unforeseen consequences of systems that have been designed to engage and influence people for the sake of advertising, and thus profit. The problem seems to be that these systems are sometimes too good at engaging us and influencing our thinking. The rising problem of “fake news” that we are facing is a result of this.

I’d be interested in hearing from those who have watched the film too. If you haven’t seen it yet, I can highly recommend it. I’ll post my thoughts shortly.

6 Likes

So here are some personal reflections.

One of the standout quotes from the film was the following:

“It’s the gradual, slight, imperceptible change in your own behaviour and perception that is the product.” (Jaron Lanier)

Let me state again, I don’t think this is some evil conspiracy thing, yet it is quite a sobering thought. This is a modification of the mantra: “If the product is free, then you are the product.” Now this statement could justly be applied to advertising in general; part of the difference today is that advertising is no longer just billboards and clever (or terrible) radio and TV ads, it is something driven by the most advanced AI algorithms. It’s that old meme: “Ads are getting smarter.” There are the obvious algorithms: search for a product once and you find it popping up on every website. It’s the subtler stuff that can be more persuasive.

One of the other issues highlighted in the film was that our views are becoming more polarised because the content in our news feeds is being tailored to our tastes. Once again, this is not inherently some evil kind of mind control; it’s just the result of an algorithm serving up content to keep you engaged - since even the time you spend on a post before scrolling on is measured and captured.
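
To make that concrete, here’s a minimal sketch of what such an engagement-driven feed ranker might look like. This is purely my own toy illustration: the signals, weights, and names are all invented, and real systems use far more signals with learned rather than hand-set weights.

```python
from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    popularity: float  # 0..1, how well the post does globally

# Hypothetical per-user signal: average seconds spent on posts of a
# given topic before scrolling on. Real systems track far more.
dwell_seconds = {"conspiracy": 42.0, "news": 8.0, "sport": 3.0}

def engagement_score(post: Post) -> float:
    # Invented weights; the point is that the objective is "time
    # engaged", not "is this content true or good for you".
    return 3.0 * dwell_seconds.get(post.topic, 0.0) + post.popularity

def build_feed(candidates: list[Post], n: int = 3) -> list[Post]:
    return sorted(candidates, key=engagement_score, reverse=True)[:n]

feed = build_feed([Post("news", 0.9), Post("sport", 0.5),
                   Post("conspiracy", 0.2)])
print([p.topic for p in feed])  # ['conspiracy', 'news', 'sport']
```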

Let me give you a real-world example based on my experience. First of all, a bit of context: I’m a pastor at a church, our congregation consists of a variety of ages, and so I have a bit more interaction with older/elderly people - people who are not as internet savvy as those of us here on the forum. These are people who grew up in an era where news was considered to be true and factual - I say considered because we all know that propaganda and agendas have been around for centuries. These are people who really struggle to identify and discredit fake news. I can’t tell you how many times I’ve received a forwarded WhatsApp message about some urgent but ultimately untrue issue.
During this lockdown many of these people are using social media far more than usual. Then one day I received a link from one of our members to “Plandemic”, the notorious conspiracy theory video, with the tag line “Watch before they take it down!” I watched most of the video and then went on a search to fact-check the claims, naturally finding enough to at least question some of the claims made in the video.

Here is the interesting point: I started seeing more of this kind of content on Facebook/YouTube. More conspiracies, or channels based on that content. More posts from people talking about the conspiracies behind Covid. These posts look interesting in a “what other theories can they be coming up with” kind of way. There is potential to get sucked in, like the person who forwarded me the Plandemic video clearly had been. The algorithm doesn’t have some ulterior motive; it’s just doing its job keeping you engaged.

On a related note, I’ve seen the YouTube recommended-videos algorithm change “tastes” as I’ve researched both sides of an issue. When you get into the “left” side you find a lot of content to support that view; if you get into the “right” side, voila!, tons of content to support that view. The algorithm is giving you the content it thinks you want to see. It’s not evil, it’s a tool, a very persuasive tool. But like any implement, how it is used can become a moral issue.
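
Here’s a toy model of that feedback loop, just to show the mechanism. The one-dimensional “taste” axis and all the numbers are made up; real recommenders work with high-dimensional embeddings, but the reinforcement dynamic is the same idea.

```python
taste = 0.0  # the system's estimate of you: -1.0 "left" ... +1.0 "right"

def watch(video_lean: float, rate: float = 0.3) -> None:
    """Each video you watch nudges the taste estimate toward it."""
    global taste
    taste += rate * (video_lean - taste)

def recommend() -> float:
    # Serve content matching the current estimate of your taste.
    return taste

# Research one side of an issue for a few videos...
for _ in range(5):
    watch(0.9)

print(round(taste, 2))  # 0.75 -- the system now thinks you lean "right"
# From here, recommend() serves content supporting that view; watching
# it pushes the estimate further, and the loop reinforces itself.
```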

Sorry for the wall of content :smile:

4 Likes

I was invited to a UX designer social for an open discussion about this; I think it was just last week. Unfortunately, I declined because I haven’t watched the film - I don’t have access to Netflix. I would have loved to have attended; I guess I could have gone regardless.

As a designer who focuses on empathic practices and tries to put usability, usefulness and value at the forefront for users, this is a goldmine. I love to see content being created around dark patterns and bad-practice techniques.

3 Likes

As I said in another thread, this is a must-watch for anyone, especially parents.

5 Likes

Damn, now you make me want to sign up to Netflix. I think they still have a trial period for new signups :slight_smile:

3 Likes

Try for one of the new R39pm mobile packages - maybe you’ll get lucky?

However, if the testing cell is assigned randomly upon loading of the Netflix signup page, it may be possible to continuously reload the page from different browsers and devices to access these plans.
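
For what it’s worth, here’s roughly how I’d imagine that cell assignment works. This is pure speculation on my part; Netflix’s actual allocation logic isn’t public, and the cell names and weights below are invented.

```python
import random

# Invented test cells and weights; a fresh visitor gets rolled into
# one of them when the signup page loads.
CELLS = [
    ("control", 0.90),          # sees the standard plans only
    ("mobile_r39_test", 0.10),  # sees the cheap mobile plan
]

def assign_cell() -> str:
    r = random.random()
    cumulative = 0.0
    for cell, weight in CELLS:
        cumulative += weight
        if r < cumulative:
            return cell
    return CELLS[-1][0]

# A returning browser would normally be pinned to its cell with a
# cookie, which is why retrying from different browsers and devices
# effectively re-rolls the dice.
print(assign_cell())
```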

As @Shrike (and now others) have said, The Social Dilemma is a must-watch. In a similar, but even more sinister vein, I’d add The Great Hack to your trial-period binge session. And then watch some of the episodes of High Score to get happy again.

4 Likes

I didn’t even know those options existed, thanks for sharing.

3 Likes

That was such a great docu-series. I watched that while we were on our week-long break in Gordon’s Bay and wished for more. Between that and The Toys / Movies That Made Us, I’ve been living in some great media nostalgia. I can’t wait for further episodes in those series.

4 Likes

As someone who spends an awful lot of time debunking misinformation and disinformation, I fully acknowledge how biased my view is of the importance of this discussion. But I think it is extremely important :smile:.

It is far easier for me to summarise the things I didn’t like about The Social Dilemma than the things I did like — what I didn’t like is a much shorter list!

The interviews with former Big Tech staffers were a really interesting perspective that reminds everyone that these systems are built by regular people, and that the negative consequences are largely unintended.

Where Big Tech becomes the Big Bad is when their own staff and the outside world start telling them “Hey, your magical algorithm is doing this terrible thing” and they respond with apathy.

I really dislike how “If the product is free, then you are the product” has become a truism. It’s really a gross oversimplification of the content/distributor→audience→advertiser model that paints everyone who makes their work available for free with the same brush.

I like the quote @DarthMol highlighted (“It’s the gradual, slight, imperceptible change in your own behaviour and perception that is the product.” —Jaron Lanier) because it brings some nuance to the discussion.

In most cases it is not you, the person, who is the product. Your favourite YouTubers monetise your attention when they do a sponsored promo read (“This video is brought to you by Squarespace”) at the beginning and end of their videos. They don’t sell a whole digital model of your identity like Facebook does.

A quote from the series I’m seeing get a lot of traction is “Algorithms are opinions embedded in code.”

That has all the hallmarks of becoming another one of those popular truisms because of how grossly oversimplified it is.

Part of my issue with the statement is that it relies on the re-definition of the word “algorithm” to specifically mean “machine learning”.

As best I can tell, the quote traces back to Cathy O’Neil who did a pretty decent TED talk about the biases developers can introduce in their programs.

In context, the statement works. It’s pretty clear she is talking about machine learning, and she has a limited amount of time so she can’t spend a chunk of it explaining the difference between traditional algorithms and machine learning.

However, ripped from that context the statement is just… inaccurate.

If everyone understands that the quote is short-hand for “sometimes we encode our real world biases in the software we write”, then cool. But I don’t think that meaning is clear when you hear the quote in isolation.
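
To show the distinction I’m getting at, here’s a deliberately simplistic, invented example: a traditional algorithm is explicit rules you can read, while a “learned” model just replays whatever patterns, biases included, are in its training data.

```python
def median(values: list[float]) -> float:
    """Traditional algorithm: every 'opinion' is visible in the code."""
    s = sorted(values)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

# Crude stand-in for "machine learning": fit a threshold to past
# decisions. If the historical decisions were biased, the model
# faithfully repeats the bias -- nobody typed it in explicitly.
past_scores = [40, 45, 50, 70, 75, 80]
past_approved = [False, False, False, True, True, True]

rejected = [s for s, ok in zip(past_scores, past_approved) if not ok]
approved = [s for s, ok in zip(past_scores, past_approved) if ok]
threshold = (max(rejected) + min(approved)) / 2  # 60.0

def learned_approve(score: float) -> bool:
    return score >= threshold

print(median([3.0, 1.0, 2.0]))  # 2.0 -- behaviour is inspectable
print(learned_approve(65))      # True, because history said so
```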

There are several important issues to discuss encapsulated in that one trite quote:

  • Sampling bias — I don’t know about Computer Science grads, but in Computer Engineering the pitfalls of sampling bias were emphasised in our statistics and Artificial Intelligence courses. This is a huge problem in machine learning that needs to be addressed if Big Tech wants to start rolling out such systems (e.g. facial recognition) at scale.
  • Cameras — the basic hardware used as input to many of these machine learning algorithms — have a problem with darker skin tones. The issue of properly exposing pictures with dark-skinned and light-skinned people in the same frame also needs to be fixed.
  • Audio codecs favour lower frequency voices — women tend to have higher frequency voices, so this also introduces bias.
  • North America is struggling with the issue of redlining. No machine learning required… a bunch of discriminatory rules were coded just like that into the business rules of financial and insurance systems (see the sketch below).
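
Here’s a minimal sketch of that last point. The postcodes, surcharge, and rule below are invented for illustration, but the shape is the point: bias written as a plain conditional, no machine learning in sight.

```python
# Invented example of redlining as an ordinary business rule.
REDLINED_POSTCODES = {"1804", "7750"}  # areas once marked "high risk"

def quote_premium(base_rate: float, postcode: str) -> float:
    """Insurance quote where a plain if-statement does the damage."""
    if postcode in REDLINED_POSTCODES:
        # Surcharge by address -- historically a proxy for race.
        return base_rate * 1.5
    return base_rate

print(quote_premium(100.0, "7750"))  # 150.0: bias, encoded directly
print(quote_premium(100.0, "2000"))  # 100.0
```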

This is another deep and interesting topic!

I have older people in my life who have given me a different perspective than the one you are describing, but with the same results:

They are incredibly distrusting of the “mainstream” news because they still feel betrayed by the propaganda factories of old. They’ve told me horrifying stories of how church magazines, which were regarded second only to the Bible in terms of being an authoritative source for information, were used to promote the racial agenda of the time.

As a result they have a deep distrust for any form of authority or institution, so they end up gobbling up stuff from the “alternative” media which then feeds their conspiracy theories.

At least in their case, if you take them to the source of the news — the actual scientific literature — they are open to being convinced.

Somehow we have to figure out a way to teach the skills we’ve learned after a lifetime online to help people spot “fake news” online. And at scale. Like an inoculation against phishing attacks, trolling, misinformation, and disinformation.

6 Likes

Wow… Great read. Thanks for sharing.

1 Like

Yes, great documentary!

I think people should now be more aware that we are easily influenced by seeing/reading/hearing the same thing over and over again… that “gradual, imperceptible change in behaviour” quote again. Just be aware that this is happening and maybe it will happen less.
I have argued for years that the simplest and greatest example of this is the universal dislike of Nickelback.

4 Likes