
Give me the dials: The Social Dilemma.

I just watched The Social Dilemma, the documentary about social media companies. The film discusses the harms of these platforms across several topics: how content is chosen, how the apps are made to be as addictive as possible, the psychological effects of screen time, and the mechanics of the advertising model. But what I’m most interested in is the process that determines exactly what content reaches the screen, i.e., why a person is fed one article rather than another.


At one point in the movie, a former Facebook employee claims that Mark Zuckerberg had access to “dials” that he could use to get particular macro results. These figurative dials could change the type of content on millions of users’ screens, controlling trade-offs between boosting ad revenue, increasing user screen time, and nudging users to invite new friends. For example, if Mark wanted to gain new users in South Korea, the engineer said, he could simply turn the dial for that outcome. Because they have an enormous quantity of behavioral data, they know which tweaks will lead to approximately X million new members in Y days.
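

To make the “dials” idea concrete, here is a minimal sketch of how such a mechanism plausibly works: the feed ranker scores each candidate post against several predicted business outcomes, and a dial is just the weight attached to one of those outcomes. Everything here---the field names, the Post structure, the weight values---is a hypothetical illustration; the film does not describe Facebook’s actual internals.

```python
from dataclasses import dataclass

@dataclass
class Post:
    # Hypothetical model outputs: each is a predicted value for one
    # business outcome if this post is shown to this user.
    predicted_ad_revenue: float    # expected revenue contribution
    predicted_session_time: float  # expected extra minutes on the app
    predicted_invites: float       # expected new-user invitations

# The "dials": operator-set weights over competing business outcomes.
# Turning up "invites" (say, to grow in South Korea) re-ranks every feed.
DIALS = {
    "ad_revenue": 1.0,
    "session_time": 1.0,
    "invites": 0.2,
}

def score(post: Post, dials: dict) -> float:
    """Weighted sum of predicted outcomes; the feed shows top-scoring posts."""
    return (dials["ad_revenue"] * post.predicted_ad_revenue
            + dials["session_time"] * post.predicted_session_time
            + dials["invites"] * post.predicted_invites)

def rank_feed(candidates: list[Post], dials: dict) -> list[Post]:
    """Order candidate posts by their dial-weighted score, best first."""
    return sorted(candidates, key=lambda p: score(p, dials), reverse=True)
```

Under this picture, the claim that a weight change yields “approximately X million new members in Y days” is just an empirical regression fitted on enormous amounts of behavioral data, which is presumably what makes the dials so predictable.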


While watching, I couldn’t get past one thought: I want these dials to be accessible to me. I should be able to control those variables. On top of that, I should be given much more insight into the landscape of information that I’m not seeing. I don’t see why these two functionalities shouldn’t be part of a legal regulatory framework; we should even have explicit legal rights to them.


Principle 1: Users should have several content control dials.


We should be able to increase the percentage of disagreeable political posts that we see. If someone wants to view only content that they already agree with, good for them. But they should know that it’s their choice. We should also be able to increase the amount of content that is favorable or unfavorable to a specific set of ideas, like immigration restrictions or renewable energy or the new Borat movie. We should be able to choose how many happy/unhappy posts to see, another variable that we know is controllable. The algorithm already determines these outcomes for us, and there’s nothing strange about that---obviously some choice has to be made. But the user should have a big say.
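

Principle 1 amounts to exposing some of those ranking weights to the user. As a hedged sketch of what this could look like---with invented names, and assuming the platform already tags posts with attributes like political stance, topic, and sentiment, which it plainly can:

```python
from dataclasses import dataclass, field

@dataclass
class UserDials:
    # All names and default values are hypothetical illustrations.
    disagreeable_fraction: float = 0.2  # share of posts opposing my politics
    sentiment_balance: float = 0.5      # 0 = only unhappy posts, 1 = only happy
    topic_boosts: dict[str, float] = field(default_factory=dict)

def user_adjusted_score(base_score: float, post_tags: dict,
                        dials: UserDials) -> float:
    """Re-weight the platform's own ranking score using the user's dials.

    post_tags is assumed to hold attributes the platform already computes,
    e.g. {"topics": {"renewable energy"}, "happiness": 0.7}.
    """
    s = base_score
    for topic, boost in dials.topic_boosts.items():
        if topic in post_tags["topics"]:
            s += boost
    # Shift the ranking toward the user's chosen happy/unhappy mix.
    s += (dials.sentiment_balance - 0.5) * post_tags["happiness"]
    return s

def mix_feed(agreeable: list, disagreeable: list,
             dials: UserDials, n: int) -> list:
    """Enforce the user's chosen share of politically disagreeable posts."""
    k = round(dials.disagreeable_fraction * n)
    return disagreeable[:k] + agreeable[:n - k]

# Example: a user who wants 40% disagreeable politics and more climate content.
dials = UserDials(disagreeable_fraction=0.4,
                  topic_boosts={"renewable energy": 1.0})
```

The point of the sketch is only that nothing technical stands in the way: the platform already computes these quantities, and the user’s dials would simply sit on top of the existing ranking.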


Principle 2: Users should have access to a dashboard of the content landscape.


I’m not sure exactly what such a dashboard would look like, but it ought to give the user an idea of what everyone else is reading. As one example, perhaps it would cluster opinions or topics, showing topics you haven’t yet been exposed to. Propaganda doesn’t always take the form of specific opinions; effective propaganda often lies in omitting certain stories or “burying the lede.” No matter how much she reads, a citizen is not well-informed unless she knows about the broad set of topics and opinions that the rest of her community sees.
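

One illustrative way to build such a dashboard, assuming the community’s posts have already been clustered into topic labels (the labels, numbers, and thresholds below are all placeholders): compare the community-wide exposure distribution against the user’s own, and flag the blind spots.

```python
from collections import Counter

def landscape_dashboard(community_topics: list[str],
                        my_topics: list[str],
                        top_n: int = 10) -> None:
    """Show what everyone else is reading versus what I have actually seen."""
    community = Counter(community_topics)
    mine = set(my_topics)
    total = sum(community.values())
    print(f"{'topic':<20}{'community share':>16}  seen by me?")
    for topic, count in community.most_common(top_n):
        share = count / total
        seen = "yes" if topic in mine else "NO  <-- blind spot"
        print(f"{topic:<20}{share:>15.1%}  {seen}")

# Hypothetical usage: topic labels would come from clustering real posts.
landscape_dashboard(
    community_topics=["immigration"] * 40 + ["renewable energy"] * 25
                     + ["local crime"] * 20 + ["celebrity news"] * 15,
    my_topics=["immigration", "celebrity news"],
)
```

Even a crude table like this would expose the omissions: the stories an algorithm never surfaces are exactly the ones a user cannot know to go looking for.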


(I know that some internet companies offer a few indirect ways to control content [principle 1], as well as a small window into how they categorize your politics [principle 2, sort of]. But those tools are absurdly inadequate and probably serve mainly to deflect criticism.)


These two features would provide another, less obvious, benefit: they would make everyone more conscious of the fact that they’re siloed in their own information sphere. The public might end up with a heightened awareness of how their opinions are actually formed, an understanding that can only be healthy. After all, as already stated, there obviously has to be some kind of algorithm that controls this content. A big issue, very noticeable if you pay attention to your own biases and others’, is that people implicitly assume that their own news feed contains the objectively most relevant or most true views.


Again, these should be legal rights. I do not see any contradiction with current First Amendment law as I understand it, nor with the principles of thinkers like John Stuart Mill.


Having stated this proposal, I want to clarify why I think algorithmic black-box article selection is so different from the choices a newspaper editor makes, over which, notably, we also have zero control. When I subscribe to a traditional press outlet like the New York Times, I have a basic understanding of the ways in which I’m being made a victim of persuasion. There are editors and writers who have certain views about the world, and the topics, opinions, and emphasis they convey will reflect that worldview. It’s still quite opaque, but at least it’s being run by humans whose basic motivations are understood---motivations like their political leanings, desire for journalistic prestige, and subconscious tribalism. Conversely, we are given almost no insight into how news articles are chosen by current social media algorithms (and it’s not because “even mathematicians don’t understand why deep learning works”---that’s an entirely different topic that shouldn’t be conflated with my points here).


That Facebook and Google and Twitter control these outcomes means everything. What you see determines your opinions and even your mood. I mean that literally---it determines your opinions. If you think you arrived at your views via Socratic self-reflection after hunting for and gathering all the relevant evidence, you are almost certainly wrong. You probably believe whatever you believe because certain text was arbitrarily placed in front of you at various points in your life, not because you have seriously considered many perspectives.

(I should note that I’m not in the “software companies are evil” camp, which would be obvious to anyone who knows me. And I know engineers working for these companies who are annoyed at this sort of criticism. But that just shows that what’s needed is what George Orwell called “a change of structure” as opposed to “a change of spirit.” It’s not something that these companies can fix via individual ethics. New principles and rules are needed in spite of the good intentions of the software builders.)


Perhaps advocates are already pushing for something similar to this proposal, or have good reasons why these principles won’t work. But I’ve dabbled in the topic: I’ve read the mainstream articles on the Facebook misinformation controversies, and I’ve given money to the EFF and (sometimes) read their newsletters. These sorts of changes do not seem to be discussed. And it seems that even if they’re not silver bullets, they would be both desirable and implementable.


I don’t understand why I can’t have access to the dashboard and the dials. Give me the dials, Mark.

