As someone who spends an awful lot of time debunking misinformation and disinformation, I fully acknowledge that my view of this discussion’s importance is biased. But I still think it is extremely important.
It is far easier for me to summarise the things I didn’t like about The Social Dilemma than the things I did like — what I didn’t like is a much shorter list!
The interviews with former Big Tech staffers were a really interesting perspective, reminding everyone that these systems are built by regular people and that the negative consequences are largely unintended.
Where Big Tech becomes the Big Bad is when their own staff and the outside world start telling them “Hey, your magical algorithm is doing this terrible thing” and they respond with apathy.
I really dislike how “If the product is free, then you are the product” has become a truism. It’s really a gross oversimplification of the content/distributor→audience→advertiser model that paints everyone who makes their work available for free with the same brush.
I like the quote @DarthMol highlighted (“It’s the gradual, slight, imperceptible change in your own behaviour and perception that is the product.” —Jaron Lanier) because it brings some nuance to the discussion.
In most cases it is not you, the person, who is the product. Your favourite YouTubers monetise your attention when they do a sponsored promo read (“This video is brought to you by Squarespace”) at the beginning and end of their videos. They don’t sell a whole digital model of your identity like Facebook does.
A quote from the series I’m seeing get a lot of traction is “Algorithms are opinions embedded in code.”
That has all the hallmarks of becoming another one of those popular truisms because of how grossly oversimplified it is.
Part of my issue with the statement is that it relies on the re-definition of the word “algorithm” to specifically mean “machine learning”.
As best I can tell, the quote traces back to Cathy O’Neil who did a pretty decent TED talk about the biases developers can introduce in their programs.
In context, the statement works. It’s pretty clear she is talking about machine learning, and she has a limited amount of time so she can’t spend a chunk of it explaining the difference between traditional algorithms and machine learning.
However, ripped from that context the statement is just… inaccurate.
If everyone understands that the quote is short-hand for “sometimes we encode our real world biases in the software we write”, then cool. But I don’t think that meaning is clear when you hear the quote in isolation.
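To make that distinction concrete, here is a toy sketch (all names and data are made up for illustration) contrasting a traditional algorithm, whose behaviour is fully specified by its code, with a trivially “learned” classifier, whose behaviour depends on whatever data it was fitted on:

```python
# A traditional algorithm: every behaviour is spelled out in the code.
# It has no opinions beyond what you can read right here.
def median(values):
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    return ordered[mid] if n % 2 else (ordered[mid - 1] + ordered[mid]) / 2

# A toy "machine-learned" classifier: the decision threshold is derived
# from training data, so any skew in the data becomes skew in the output.
def fit_threshold(samples):
    """samples: list of (value, label) pairs; returns a decision threshold."""
    pos = [v for v, label in samples if label]
    neg = [v for v, label in samples if not label]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def classify(value, threshold):
    return value >= threshold

print(median([3, 1, 2]))  # 2
threshold = fit_threshold([(1, False), (3, False), (7, True), (9, True)])
print(threshold, classify(6, threshold))  # 5.0 True
```

The point the quote is gesturing at lives in `fit_threshold`: feed it unrepresentative samples and `classify` will make unrepresentative decisions, even though the code itself is perfectly “neutral”.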
Several important issues worth discussing are encapsulated in that one trite quote:
- Sampling bias — I don’t know about Computer Science grads, but in Computer Engineering the pitfalls of sampling bias were emphasised in our statistics and Artificial Intelligence courses. This is a huge problem in machine learning that needs to be addressed if Big Tech wants to start rolling out such systems (e.g. facial recognition) at scale.
- Cameras — the basic hardware used as input to many of these machine learning algorithms — have a problem with darker skin tones. The issue of properly exposing pictures with dark-skinned and light-skinned people in the same frame also needs to be fixed.
- Audio codecs favour lower-frequency voices — and women tend to have higher-frequency voices, so this also introduces bias.
- North America is still struggling with the legacy of redlining. No machine learning required… discriminatory criteria were coded directly into the business rules of financial and insurance systems.
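The sampling-bias point in particular is easy to demonstrate. A minimal sketch, with entirely made-up groups and numbers: if the collection process under-samples one group, statistics (and any model) derived from that sample will be systematically wrong for the population as a whole.

```python
import random

random.seed(42)

# Hypothetical population: group B makes up 30% but differs from group A.
population = ([("A", random.gauss(170, 7)) for _ in range(7000)]
              + [("B", random.gauss(160, 7)) for _ in range(3000)])

# A biased collection process that silently drops group B — e.g. a face
# dataset scraped from sources where group B barely appears.
collected = [v for g, v in random.sample(population, 2000) if g == "A"]

true_mean = sum(v for _, v in population) / len(population)
sample_mean = sum(collected) / len(collected)

print(f"true population mean: {true_mean:.1f}")
print(f"biased sample mean:   {sample_mean:.1f}")  # skewed toward group A
```

Nothing in that code is malicious; the bias enters entirely through which data made it into `collected`, which is exactly why it is so easy to ship by accident.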
This is another deep and interesting topic!
I have older people in my life who have given me a different perspective than the one you are describing, but with the same results.
They are incredibly distrusting of the “mainstream” news because they still feel betrayed by the propaganda factories of old. They’ve told me horrifying stories of how church magazines, which were regarded as second only to the Bible as an authoritative source of information, were used to promote the racial agenda of the time.
As a result they have a deep distrust for any form of authority or institution, so they end up gobbling up stuff from the “alternative” media which then feeds their conspiracy theories.
At least in their case, if you take them to the source of the news — the actual scientific literature — they are open to being convinced.
Somehow we have to figure out a way to teach the skills we’ve learned over a lifetime online to help people spot “fake news”, and to do it at scale. Like an inoculation against phishing attacks, trolling, misinformation, and disinformation.