Markkula Center for Applied Ethics

On Recommendation Algorithms and What Makes Us Boring

metal arches in a tunnel

Constrained Choices and Compliance

Irina Raicu

Irina Raicu is the director of the Internet Ethics program (@IEthics) at the Markkula Center for Applied Ethics. Views are her own.

A recent question to Wired magazine's "spiritual advice columnist," Meghan O'Gieblyn, came from a reader who noted that a streaming music app is "scarily good at predicting songs" that the reader would like, and who asked, "Does that make me boring?"

O'Gieblyn redefined the question: "I'm willing to bet," she wrote, "that your real anxiety is not that you're boring but that you're not truly free. If your taste can be so easily inferred from your listening history and the data streams of 'users like you' (to borrow the patronizing argot of prediction engines), are you actually making a choice?"
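That "users like you" inference is, at its core, collaborative filtering: find listeners whose histories resemble yours, and suggest what they play that you haven't. A minimal sketch (the play counts, song names, and user names here are invented for illustration; real recommenders are far more elaborate):

```python
# Toy user-based collaborative filtering: recommend songs from the
# listener whose play-count history is most similar to yours.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two play-count dicts."""
    shared = set(a) & set(b)
    num = sum(a[s] * b[s] for s in shared)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

# Each user's listening history: song -> play count (hypothetical data).
histories = {
    "you":   {"song_a": 10, "song_b": 5},
    "user2": {"song_a": 8,  "song_b": 6, "song_c": 7},
    "user3": {"song_d": 9,  "song_e": 4},
}

def recommend(user, histories):
    """Suggest songs the most similar other listener plays but `user` hasn't."""
    others = {u: h for u, h in histories.items() if u != user}
    nearest = max(others, key=lambda u: cosine(histories[user], others[u]))
    return [s for s in histories[nearest] if s not in histories[user]]

print(recommend("you", histories))  # → ['song_c']
```

The sketch makes the column's point concrete: the prediction is only as rich as the history it reads, so a listener who never strays from past habits is, by construction, easy to predict.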

Later in the column, however, she noted that users of services that include recommender algorithms, like the questioner, do make choices, but choices that are themselves shaped by the algorithms:

On , we quickly scroll past posts that don't reflect our dominant interests, lest the all-seeing algorithm mistake our curiosity for invested interest. Perhaps you have paused, once or twice, before watching a film that diverges from your usual taste, or hesitated before Googling a religious question, lest it take you for a true believer and skew your future search results.

These are choices born of restrictions. They are efforts to mollify the algorithm's rigid and inherently limited perspective, lest it go from merely oversimplifying our interests to being outright wrong about them.
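The dynamic described above, in which any engagement is read as invested interest, can be seen in a toy model (entirely hypothetical, not any real platform's code): every click bumps a topic's weight in a profile, and ranking follows the weights, so a single curious click immediately skews what gets shown next.

```python
# Toy engagement-driven interest profile: clicks accumulate as topic
# weights, and candidate items are ranked by those weights.
from collections import defaultdict

profile = defaultdict(float)

def record_click(topic, weight=1.0):
    # The system cannot tell curiosity from invested interest:
    # any engagement is treated as a preference signal.
    profile[topic] += weight

def rank(candidates):
    # Rank candidate topics by accumulated weight.
    return sorted(candidates, key=lambda t: profile[t], reverse=True)

for _ in range(5):
    record_click("cooking")
record_click("theology")  # a single curious click...

print(rank(["theology", "gardening", "cooking"]))
# cooking still leads, but theology now outranks every never-clicked topic
```

Under this (oversimplified) model, the hesitation O'Gieblyn describes is rational: there is no way to browse without voting.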

For the subset of users who understand the algorithms' influence well enough to try to appease them, the answer, then, is yes: that does make one boring; but the "that" is not the users' predictability but their compliance with the algorithms. In this scenario, it is the user who has been trained by the algorithm, not the other way around. The user acquiesces to being a shell of his or her "dominant interests," worried about the consequences of trying (or learning about) new things.

While fully acknowledging this reality, O'Gieblyn writes that she doesn't "advise embracing the irrational or acting against your own interests" as a response. "It will not make you happy," she argues, "nor will it prove a point." Disagreeing with this view, in a different take on recommendation algorithms, Clive Thompson (who also often writes for Wired) argues for pushing back; he claims that acting out against the algorithms is not, in fact, against one's own real interests. As he puts it,

our truly quirky dimensions are never really grasped by these recommendation algorithms…. They're not wrong about us; but they're woefully incomplete. This is why I always get a slightly flattened feeling when I behold my feed, robotically unloading boxes of content from the same monotonous conveyor-belt of recommendations, catered to some imaginary marketing version of my identity. It's like checking my reflection in the mirror and seeing stock-photo imagery.

In other words, to offer a different answer to the Wired questioner who asked about the music app: the recommendation algorithms do make you boring, and static, if you allow them to do all the work of finding music or other "content" for you.

To break that false mirror that Thompson mentions is therefore not to "embrace the irrational" but to try to embrace your full self. Thompson offers a variety of suggestions for how one might go about "rewilding" one's imagination in an age of recommendation algorithms, while acknowledging that this requires more effort on our part.

If your music app's recommendations are too accurate, you might not be boring but just stuck in a rut, perhaps in need of a reminder that might come (serendipitously) from elsewhere: "You are under no obligation to remain the same person you were a year ago, a month ago, or even a day ago. You are here to create yourself, continuously."

It's important, also, to note that recommendation algorithms used in the context of, say, music streaming apps have very different societal impacts than those used in social media feeds or in news media outlets. The latter categories of recommenders have been accused of being partly responsible for increased social polarization, filter bubbles that impede understanding, radicalization, and other significant negative consequences that go far beyond making us "boring."

It seems only fitting to conclude this post with a couple of recommendations for further reading:

- a useful analysis and taxonomy published in 2020 by Silvia Milano, Mariarosaria Taddeo, and Luciano Floridi, titled "Recommender Systems and Their Ethical Challenges"

- a blog post by Claire Leibowicz, Connie Moon Sehat, Adriana Stephan, and Jonathan Stray about research conducted by the Partnership on AI

 

Image: , by , is licensed under

Dec 14, 2021