Greg Elmer, Fenwick McKelvey, Ganaele Langlois, and Seungwoo Baek
Introduction
This short research brief and its accompanying literature review serve as part of an ongoing effort to identify not only trends in research on online harms but also areas in need of development, funding, public attention, and debate. The broad umbrella of “systemic online harms” highlights a common set of factors that contribute to issues as broad as internet-based conspiracies, disinformation, troll farms, and fake news sites, as well as the targeting, harassment, and abuse of marginalized and racialized individuals and communities. While politics and some extremist ideologies have clearly contributed to such online harms, this literature review is more narrowly focused on socio-technical factors: how the internet and other forms of networked computing have amplified such harms.
Some might think that our focus on “the internet” limits the scope of mediated social harms, but under this umbrella term we include other essential technologies of networked computing: the web, social media platforms, search engines, user interfaces, and, indeed, personal computers, tablets, and handheld devices such as smartphones. Content that used to be accessed exclusively via cable television, newspapers, or over-the-air broadcasts now converges — almost universally “streamed” or otherwise posted — on these internet properties and technologies. What’s more, we contend that online harms are often cultivated by — or mediated through — internet-based forms of news and information. Given that 79% of Canadians get their news and information from the internet, a rigorous and vibrant research agenda is urgently needed to offer potential solutions to online harms.
The shift to “the internet” further involves changing media habits and an overall convergence of entertainment, educational, professional, interpersonal, and news content on the same device, on the same app. This apparent collapse in generic distinctions among content has led to competition for scarce attention between news and entertainment. These trends have been ongoing since the advent of cable and the early internet, but the ubiquity of smartphones has further altered the context of media habits from passive watching to distracted scrolling. A decade after the introduction of the BlackBerry changed expectations of worker availability and the iPhone made computing portable, we know media habits have changed, but how these changes matter to public life and to the prominence of online harms remains unclear.
This short research brief focuses on one of the main obstacles to tackling online harms: media personalization. Where we once referred to the social power of the mass media, stemming from common images, narratives, and stories told through film and broadcast television, mass personalization refers to the near-universal customization and filtering of news and information for individual users. The term “personalization” might seem positive at first glance in that it evokes adaptive and intelligent technologies able to sift through massive amounts of information to find what a person actually needs, wants, or desires. However, the form of personalization that we are confronted with online today involves the constant surveillance and profiling of users to find ways to cultivate and steer them to react to, or against, something.
Personalization should therefore be understood as another form of manipulation, much as mass propaganda was a dominant form of manipulation in the age of mass media. Yet unlike the previous broadcasting model of communication, the internet has long been characterized as decidedly decentred, or at least distributed among many nodes, offering individuals much greater access to — and participation in — start-up media organizations, online diaries (blogs), and other forms of public communication.
The rapid proliferation of many sources of information provoked an old concern about “information overload” that successive iterations of internet services sought to address. Early solutions to this problem of content categorization mimicked preexisting systems found in libraries, like thematic categories (adopted by Yahoo) or the use of keywords to search archives (e.g., Google’s search engine). Such systems addressed two key problems introduced by decentralized, networked media: finding relevant information and doing so conveniently. A second generation of solutions turned to public participation — crowdsourcing, gatewatching, and collaborative filtering — approaches that fuelled early excitement about a participatory internet and the possibility of social media. With the consolidation of social media by a few firms, these participatory elements became automated and dependent on surveillance and profiling to make content recommendations, a move that aligned closely with revenue models dependent on targeted advertising.
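To make the mechanism concrete, the following is a minimal, hypothetical sketch of user-based collaborative filtering — the participatory technique that platforms later automated — in which a user is recommended items favoured by whichever other users most resemble them (all names and “likes” are invented for illustration):

```python
# Minimal sketch of user-based collaborative filtering: recommend the items
# favoured by whichever other user most resembles you. All users and "likes"
# below are invented for illustration only.

likes = {
    "alice": {"news_a", "blog_b", "video_c"},
    "bob":   {"news_a", "video_c", "video_d"},
    "carol": {"blog_b", "news_e"},
}

def similarity(u, v):
    """Jaccard similarity between two users' sets of liked items."""
    shared = likes[u] & likes[v]
    union = likes[u] | likes[v]
    return len(shared) / len(union) if union else 0.0

def recommend(user):
    """Return items liked by the most similar other user but unseen by `user`."""
    others = [u for u in likes if u != user]
    nearest = max(others, key=lambda other: similarity(user, other))
    return likes[nearest] - likes[user]

print(recommend("alice"))  # {'video_d'}: Bob is most similar, so Alice inherits his pick
```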
User interfaces have become increasingly governed by a back-end software substructure. While many have referred to this computational or “algorithmic” level as opaque, secret, or “black boxed,” the logic of mass personalization has long been exposed by technology journalists and researchers, including by some of the information scientists responsible for designing these systems. Starting with web cookies — small data files that websites (often e-tailers) store on visitors’ personal computers to better customize their future transactions — internet personalization began to take hold as the overriding logic of the algorithms that govern the visibility of content on social media platforms such as Facebook, Twitter, and Instagram.
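As a simple, hypothetical illustration of the mechanism (not any particular site’s implementation), a server can issue a cookie carrying a visitor identifier and, on a later request, read it back to retrieve a stored profile used to tailor the page:

```python
# Minimal sketch of cookie-based personalization using Python's standard
# library. The visitor ID and profile lookup are illustrative only.
from http.cookies import SimpleCookie

# First visit: the server issues a Set-Cookie header with a visitor ID.
outgoing = SimpleCookie()
outgoing["visitor_id"] = "abc123"              # hypothetical identifier
print(outgoing.output())                       # -> Set-Cookie: visitor_id=abc123

# Later visit: the browser returns the cookie; the server parses it and
# uses the ID to look up a stored profile and customize what is shown.
incoming = SimpleCookie("visitor_id=abc123")
profiles = {"abc123": {"interests": ["sneakers", "local news"]}}
visitor_id = incoming["visitor_id"].value
print(profiles.get(visitor_id, {}))            # profile drives the personalized page
```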
Two other factors are commonly cited as having contributed to the massification of internet personalization: the widespread use of user registration systems (accounts with e-tailers and online service/information providers) and internet interfaces that reinforce personalized recommendations and content from online “friends.” The current internet landscape can, in short, be characterized as a back-end, algorithmically driven system that gives preference to personalized content, visualized through interfaces that make such personally “relevant” content convenient to find, follow, and engage with.
This brings us to the social harms produced by such personalized systems and the need for research to address potential solutions and mitigating factors. As the internet underwent a “user-generated content” revolution, greatly expanding the opportunities to contribute content to the internet’s many platforms and properties, interfaces have struggled to represent the accompanying explosion in content. Threaded conversations that gain added prominence from users’ votes have privileged hot takes, outrageous opinions, or worse. Personal attacks have consequently become commonplace alongside the personalization of internet content and social communication. Much of the research on these harms has focused on the internet as a distinct space or economy of attention, where harms are produced by those seeking to outshout or otherwise capture the attention of others online.
Personalization has been actively constructed in marketing as a common-sense solution. Personalization sounds intuitive but is anything but. Moreover, the term glosses over automated decision-making that is itself contingent on user surveillance, big data, and machine learning. Personalization is a layer of thin ice covering a deeper pool of questions about how platforms calculate user engagement, what counts as optimal behaviour, and which aspects of someone’s multifaceted identity are brought out by algorithmic recommendation.
The more prominent argument in the literature, however, questions the social effects of personalization and the resulting system of “homophily”: an information regime that recommends users “more of the same.” In this context, algorithmic personalization is said to produce a polarized population sorted into clusters of hardened worldviews and to keep individuals in ideological “bubbles,” distant from unfamiliar, uncomfortable, or potentially disagreeable content.
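The “more of the same” logic can be sketched as a simple content-based recommender that scores unseen items by their similarity to what a user has already consumed; the toy example below (with invented topic vectors) shows how such homophilic filtering pushes the most similar content to the top:

```python
# Toy content-based recommender illustrating homophily: unseen items most
# similar to a user's reading history rank highest, so recommendations drift
# toward "more of the same". Topic vectors are invented for illustration.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two topic-weight vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Each item is weighted over (politics, sports, culture) topics.
items = {
    "partisan_op_ed":     (0.9, 0.0, 0.1),
    "election_explainer": (0.8, 0.1, 0.1),
    "hockey_recap":       (0.0, 0.9, 0.1),
    "gallery_review":     (0.1, 0.0, 0.9),
}

# The user's profile is the average of the items they have already read.
history = ["partisan_op_ed"]
profile = tuple(sum(items[h][i] for h in history) / len(history) for i in range(3))

ranked = sorted(
    (name for name in items if name not in history),
    key=lambda name: cosine(profile, items[name]),
    reverse=True,
)
print(ranked)  # the politically similar explainer ranks first; sports and culture trail
```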
Much of the current literature focuses on specifying the impact of this homophilic form of algorithmic communication, on qualifying it, or, in a small though growing number of articles and books, on rejecting its central premise, arguing that users are free to choose content as they see fit. While individual agency is undoubtedly an important component of everyday consumption of internet-based information and news, the prevalence of personalized interfaces and content cannot be overcome or adequately addressed by individualized solutions. That users have agency and may be exposed to different sources of information does not diminish the need to question personalization as the de facto solution to content regulation online.
The individual power to find “different” content must still swim against the current of personalization, and choosing to do so makes internet browsing and search decidedly more work and less convenient. A thread of research that recognizes some users’ need and desire for different and unfamiliar content can be found in studies of content discoverability and search, an approach that addresses forms of personalization and commercial optimization that monopolize search engine results and, thus, the visibility of content and services.
Ultimately, though, the core logic of a default and algorithmically governed process of media personalization needs to be addressed head on, with competing models of news and information urgently required to complement educational strategies, calls for privacy, and research on other political and social factors that mitigate harmful homophilic media effects. Such research requires a rethinking of the visibility and discoverability of heterogeneous information sources and opinions, while maintaining a degree of usability, relevance, and convenience. In short, we believe that research on mitigating media personalization will never fully redress or prevent its harms without recognizing and further studying the user practices that prevail beyond users’ own customized media platforms and interfaces.
Personalization Research Project Bibliography
We see an emerging gap between the refutation of popular accounts of algorithmic harms and a growing call to research other, less-studied algorithmic harms. Filter bubbles and polarization are two key debates in contemporary communication studies. These debates invoke high-choice media environments, social media, and algorithmic filtering as factors in a politicization of worldviews and greater animosity toward those with different opinions. In other words, filter bubbles and polarization are framed as algorithmic harms, reflecting concerns that the algorithmization of media environments has detrimental effects on society and politics. We see limitations in the evidence for these specific algorithmic harms and, based on our broader literature review, highlight other harms.
While we acknowledge there is no consensus about the significance of radicalization and polarization, nor about whether these are the right terms to discuss risks to democracy, we find that the European evidence offers little support for claims that algorithms are a significant factor. Algorithms are commonly perceived to encourage radicalization and polarization through filter bubbles — homogeneous bubbles of information organized by algorithms. A major literature review conducted by the Reuters Institute concludes that “echo chambers are much less widespread than is commonly assumed” and “finds no support for the filter bubble hypothesis and offers a very mixed picture on polarisation and the role of news and media use in contributing to polarisation” [p. 5]. If anything, the effect is largely the opposite, with algorithms just as likely to result in serendipitous and incidental exposure. The review continues, “studies in the UK and several other countries, have found that algorithmic selection generally leads to slightly more diverse news use — the opposite of what the filter bubble hypothesis posits — but that self-selection, primarily among a small minority of highly partisan individuals, can lead people to opt in to echo chambers, even as the vast majority do not, and document that limited news use remains far more prevalent than echo chambers are” [p. 20]. This latter point suggests that deeper research is needed to understand the effects of algorithmic recommendations on publics demanding more partisan or homogeneous information.
We caution that this study also indicates a lack of research in Canada. Of the 114 articles cited in the Reuters review, only 8 mention Canada, one of which explains that “it was not possible for us to analyze Canada using our approach because it is essentially home to two media systems, one based on French-language media and one based on English-language media” (Fletcher, Cornia, & Nielsen, 2020). The low representation of Canada in international studies points to a major caveat in the interpretation of results. Comparatively, Canada is better understood in relation to its European counterparts than to the United States. A cross-national survey of resilience to disinformation found that Canada was more like its European counterparts “with respect to welfare expenditure, support for public broadcasting, and regulations of media ownership, advertising, and electoral coverage,” and that “Western European democracies and Canada . . . are likely to demonstrate high resilience to online disinformation: they are marked by low levels of polarization and populist communication, high levels of media trust and shared news consumption, and a strong PSB” (Humprecht, Esser, & Van Aelst, 2020, pp. 13-15).
We further caution that radicalization and polarization are likely to be low-level algorithmic harms in Canada, because Canada is seen as a more moderate media system with known countermeasures (such as public broadcasting) and because there appears to be less pronounced demand for highly partisan or extreme news. The Digital Democracy Project, the largest project of its kind to date, found that: “news sources catering to specific partisan audiences play a very small role in Canada”; “Canadians generally tend to be ideologically moderate”; and “Canadians are substantially less likely to pick partisan news sources and stories than one might expect.” Other studies find that Canada has seen slight increases in affective polarization, but much less than in the United States.
These studies suggest that Canada is both a distinct case — with its linguistically divided media system — and a resilient one, mitigating concern about algorithmic filter bubbles as a specific harm. We wish to stress, however, that these studies tend to focus on political information rather than broader media habits when evaluating the effects of algorithms on access to information. In addition, there is consensus that some users might be more prone to these algorithmic harms: research consistently identifies subsets of Republican and right-wing voters as more inclined to share misinformation and to seek out echo chambers. The United States, however, is an anomaly in comparative media studies — these patterns may not be as prevalent among Canadians.
We observe a strong emphasis on journalists’ theories of media consumption — as information seeking — in prompting concerns about polarization and filter bubbles. The so-called realist approach to democracy, in contrast, considers voters to be less apt to seek out information, more habitual in their media use, and therefore less prone to being influenced by the media. Thus, voter apathy and political disinterest in Canada might reduce the risk of polarization and filter bubbles — citizens simply cannot be bothered. If this is true, we see a different set of potential algorithmic harms that warrant greater discussion.
These harms include:
- a more nuanced version of filter bubbles, one less concerned with homogeneity than with the consequences of personalization for political culture;
- a focused concern on specific use cases or demands (e.g., searching for white nationalist content on YouTube) that algorithmic systems cannot or do not distinguish from other requests;
- homophily and algorithmic attempts to draw inferences between users and content, effectively or not; and
- the transparency and opacity of algorithmic regulation and the consequences of its unknowability, including the encouragement of conspiracy theories as popular explanations (e.g., “my phone is listening to me”).
These algorithmic harms are less well researched, even less so in Canada.
Bibliography
Boxell, L., Gentzkow, M., & Shapiro, J. (2020). Cross-country trends in affective polarization. Cambridge, MA: National Bureau of Economic Research. https://doi.org/10.3386/w26669
Fletcher, R., Cornia, A., & Nielsen, R. K. (2020). How Polarized Are Online and Offline News Audiences? A Comparative Analysis of Twelve Countries. The International Journal of Press/Politics, 25(2), 169–195. https://doi.org/10.1177/1940161219892768
Humprecht, E., Esser, F., & Van Aelst, P. (2020). Resilience to Online Disinformation: A Framework for Cross-National Comparative Research. The International Journal of Press/Politics, 1940161219900126. https://doi.org/10.1177/1940161219900126
Algorithmic Harms
Adomavicius, G., & Tuzhilin, A. (1999). User profiling in personalization applications through rule discovery and validation. Proceedings of the Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining – KDD ’99, 377–381. https://doi.org/10.1145/312129.312287
Beer, D. (2009). Power through the algorithm? Participatory web cultures and the technological unconscious. New Media & Society, 11(6), 985–1002. https://doi.org/10.1177/1461444809336551
Bucher, T. (2018). If…Then (Vol. 1). Oxford University Press. https://doi.org/10.1093/oso/9780190493028.001.0001
Cinelli, M., De Francisci Morales, G., Galeazzi, A., Quattrociocchi, W., & Starnini, M. (2021). The echo chamber effect on social media. Proceedings of the National Academy of Sciences, 118(9), e2023301118. https://doi.org/10.1073/pnas.2023301118
de Vries, K. (2010). Identity, profiling algorithms and a world of ambient intelligence. Ethics and Information Technology, 12(1), 71–85. https://doi.org/10.1007/s10676-009-9215-9
Garrett, R. K. (2013). Selective Exposure: New Methods and New Directions. Communication Methods and Measures, 7(3–4), 247–256. https://doi.org/10.1080/19312458.2013.835796
Gillespie, T. (2014). The Relevance of Algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media Technologies: Essays on Communication, Materiality, and Society (pp. 167–194). The MIT Press. https://doi.org/10.7551/mitpress/9780262525374.003.0009
Just, N., & Latzer, M. (2017). Governance by algorithms: Reality construction by algorithmic selection on the Internet. Media, Culture & Society, 39(2), 238–258. https://doi.org/10.1177/0163443716643157
Munger, K., & Phillips, J. (2022). Right-Wing YouTube: A Supply and Demand Perspective. The International Journal of Press/Politics, 27(1), 186–219. https://doi.org/10.1177/1940161220964767
Nielsen, R. K., & Fletcher, R. (2020). Democratic Creative Destruction? The Effect of a Changing Media Landscape on Democracy. In N. Persily & J. A. Tucker (Eds.), Social Media and Democracy: The State of the Field, Prospects for Reform (1st ed., pp. 139–162). Cambridge University Press. https://doi.org/10.1017/9781108890960
Prior, M. (2013). Media and Political Polarization. Annual Review of Political Science, 16(1), 101–127. https://doi.org/10.1146/annurev-polisci-100711-135242
Thorson, K. (2020). Attracting the news: Algorithms, platforms, and reframing incidental exposure. Journalism, 21(8), 1067–1082. https://doi.org/10.1177/1464884920915352
Tucker, J., Guess, A., Barbera, P., Vaccari, C., Siegel, A., Sanovich, S., Stukal, D., & Nyhan, B. (2018). Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3144139
van Dijck, J. (2013). The Culture of Connectivity: A Critical History of Social Media. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199970773.001.0001
Yeung, K. (2017). ‘Hypernudge’: Big Data as a mode of regulation by design. Information, Communication & Society, 20(1), 118–136. https://doi.org/10.1080/1369118X.2016.1186713
Yeung, K. (2018). Algorithmic regulation: A critical interrogation. Regulation & Governance, 12(4), 505–523. https://doi.org/10.1111/rego.12158
Filter Bubbles (Homophily)
Andersson, M. (2021). The climate of climate change: Impoliteness as a hallmark of homophily in YouTube comment threads on Greta Thunberg’s environmental activism. Journal of Pragmatics, 178, 93–107. https://doi.org/10.1016/j.pragma.2021.03.003
Aral, S., Muchnik, L., & Sundararajan, A. (2013). Engineering social contagions: Optimal network seeding in the presence of homophily. Network Science, 1(2), 125–153. https://doi.org/10.1017/nws.2013.6
Caetano, J. A., Lima, H. S., Santos, M. F., & Marques-Neto, H. T. (2018). Using sentiment analysis to define twitter political users’ classes and their homophily during the 2016 American presidential election. Journal of Internet Services and Applications, 9(1), 18. https://doi.org/10.1186/s13174-018-0089-0
Cai, W., Jia, J., & Han, W. (2018). Inferring Emotions from Image Social Networks Using Group-Based Factor Graph Model. 2018 IEEE International Conference on Multimedia and Expo (ICME), 1–6. https://doi.org/10.1109/ICME.2018.8486450
Chun, W. H. K. (2018). Queerying Homophily. In Pattern Discrimination (p. 124). Meson Press.
Fabbri, F., Bonchi, F., Boratto, L., & Castillo, C. (2020). The Effect of Homophily on Disparate Visibility of Minorities in People Recommender Systems. Proceedings of the International AAAI Conference on Web and Social Media, 14(1), 165–175.
Figeac, J., & Favre, G. (2021). How behavioral homophily on social media influences the perception of tie-strengthening within young adults’ personal networks. New Media & Society, 146144482110206. https://doi.org/10.1177/14614448211020691
Halberstam, Y., & Knight, B. (2016). Homophily, group size, and the diffusion of political information in social networks: Evidence from Twitter. Journal of Public Economics, 143, 73–88. https://doi.org/10.1016/j.jpubeco.2016.08.011
Hekim, H. (2021). Ideological homophily or political interest: Factors affecting Twitter friendship network between politicians. Journal of Information Technology & Politics, 18(4), 371–386. https://doi.org/10.1080/19331681.2021.1881937
Ingram, P., & Choi, Y. (2017). From Affect to Instrumentality: The Dynamics of Values Homophily in Professional Networks. Academy of Management Proceedings, 2017(1), 15151. https://doi.org/10.5465/AMBPP.2017.15151abstract
Kevins, A., & Soroka, S. N. (2018). Growing Apart? Partisan Sorting in Canada, 1992–2015. Canadian Journal of Political Science, 51(1), 103–133. https://doi.org/10.1017/S0008423917000713
Khanam, K. Z., Srivastava, G., & Mago, V. (2020). The Homophily Principle in Social Network Analysis. ArXiv:2008.10383 [Physics]. http://arxiv.org/abs/2008.10383
Ladhari, R., Massa, E., & Skandrani, H. (2020). YouTube vloggers’ popularity and influence: The roles of homophily, emotional attachment, and expertise. Journal of Retailing and Consumer Services, 54, 102027. https://doi.org/10.1016/j.jretconser.2019.102027
Laniado, D., Volkovich, Y., Kappler, K., & Kaltenbrunner, A. (2016). Gender homophily in online dyadic and triadic relationships. EPJ Data Science, 5(1), 19. https://doi.org/10.1140/epjds/s13688-016-0080-6
Nagar, S., Gupta, S., Bahushruth, C. S., Barbhuiya, F. A., & Dey, K. (2021). Empirical Assessment and Characterization of Homophily in Classes of Hate Speeches. Proceedings of the AAAI-21 Workshop on Affective Content Analysis, New York, USA, AAAI.
Nahon, K., & Hemsley, J. (2014). Homophily in the Guise of Cross-Linking: Political Blogs and Content. American Behavioral Scientist, 58(10), 1294–1313. https://doi.org/10.1177/0002764214527090
O’Callaghan, D., Greene, D., Conway, M., Carthy, J., & Cunningham, P. (2015). Down the (White) Rabbit Hole: The Extreme Right and Online Recommender Systems. Social Science Computer Review, 33(4), 459–478. https://doi.org/10.1177/0894439314555329
Seargeant, P., & Tagg, C. (2019). Social media and the future of open debate: A user-oriented approach to Facebook’s filter bubble conundrum. Discourse, Context & Media, 27, 41–48. https://doi.org/10.1016/j.dcm.2018.03.005
Personalization
Atote, B. S., Zahoor, S., Dangra, B., & Bedekar, M. (2016). Personalization in User Profiling: Privacy and Security Issues. 2016 International Conference on Internet of Things and Applications (IOTA), 415–417. https://doi.org/10.1109/IOTA.2016.7562763
Awad, N. F., & Krishnan, M. S. (2006). The Personalization Privacy Paradox: An Empirical Evaluation of Information Transparency and the Willingness to Be Profiled Online for Personalization. MIS Quarterly, 30(1), 13–28. https://doi.org/10.2307/25148715
Bennett, W. L. (2012). The Personalization of Politics: Political Identity, Social Media, and Changing Patterns of Participation. The ANNALS of the American Academy of Political and Social Science, 644(1), 20–39. https://doi.org/10.1177/0002716212451428
Bennett, W. L., & Segerberg, A. (2011). Digital Media and the Personalization of Collective Action: Social technology and the organization of protests against the global economic crisis. Information, Communication & Society, 14(6), 770–799. https://doi.org/10.1080/1369118X.2011.579141
Bennett, W. L., & Segerberg, A. (2012). The Logic of Connective Action: Digital media and the personalization of contentious politics. Information, Communication & Society, 15(5), 739–768. https://doi.org/10.1080/1369118X.2012.670661
Bozdag, E. (2013). Bias in algorithmic filtering and personalization. Ethics and Information Technology, 15(3), 209–227. https://doi.org/10.1007/s10676-013-9321-6
Cohen, J. N. (2018). Exploring Echo-Systems: How Algorithms Shape Immersive Media Environments. Journal of Media Literacy Education, 10(2), 139–151. https://doi.org/10.23860/JMLE-2018-10-2-8
Farid, M., Elgohary, R., Moawad, I., & Roushdy, M. (2018). User Profiling Approaches, Modeling, and Personalization. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3389811
Jain, S., Grover, A., Thakur, P. S., & Choudhary, S. K. (2015). Trends, problems and solutions of recommender system. International Conference on Computing, Communication & Automation, 955–958. https://doi.org/10.1109/CCAA.2015.7148534
Kang, H., & Sundar, S. S. (2016). When Self Is the Source: Effects of Media Customization on Message Processing. Media Psychology, 19(4), 561–588. https://doi.org/10.1080/15213269.2015.1121829
Kant, T. (2020). Making it Personal: Algorithmic Personalization, Identity, and Everyday Life (1st ed.). Oxford University Press. https://doi.org/10.1093/oso/9780190905088.001.0001
Karwatzki, S., Dytynko, O., Trenz, M., & Veit, D. (2017). Beyond the Personalization–Privacy Paradox: Privacy Valuation, Transparency Features, and Service Personalization. Journal of Management Information Systems, 34(2), 369–400. https://doi.org/10.1080/07421222.2017.1334467
Li, T., & Unger, T. (2012). Willing to pay for quality personalization? Trade-off between quality and privacy. European Journal of Information Systems, 21(6), 621–642. https://doi.org/10.1057/ejis.2012.13
Lury, C., & Day, S. (2019). Algorithmic Personalization as a Mode of Individuation. Theory, Culture & Society, 36(2), 17–37. https://doi.org/10.1177/0263276418818888
Matamoros-Fernández, A., Gray, J. E., Bartolo, L., Burgess, J., & Suzor, N. (2021). What’s “Up Next”? Investigating Algorithmic Recommendations on YouTube Across Issues and Over Time. Media and Communication, 9(4), 234–249. https://doi.org/10.17645/mac.v9i4.4184
Oeldorf-Hirsch, A., & Sundar, S. S. (2015). Posting, commenting, and tagging: Effects of sharing news stories on Facebook. Computers in Human Behavior, 44, 240–249. https://doi.org/10.1016/j.chb.2014.11.024
Rosenthal, S., Wasenden, O.-C., Gronnevet, G.-A., & Ling, R. (2020). A tripartite model of trust in Facebook: Acceptance of information personalization, privacy concern, and privacy literacy. Media Psychology, 23(6), 840–864. https://doi.org/10.1080/15213269.2019.1648218
Sundar, S. S., & Limperos, A. M. (2013). Uses and Grats 2.0: New Gratifications for New Media. Journal of Broadcasting & Electronic Media, 57(4), 504–525. https://doi.org/10.1080/08838151.2013.845827
Sundar, S. S., & Marathe, S. S. (2010). Personalization versus Customization: The Importance of Agency, Privacy, and Power Usage. Human Communication Research, 36(3), 298–322. https://doi.org/10.1111/j.1468-2958.2010.01377.x
Sunikka, A., & Bragge, J. (2008). What, Who and Where: Insights into Personalization. Proceedings of the 41st Annual Hawaii International Conference on System Sciences (HICSS 2008), 283–283. https://doi.org/10.1109/HICSS.2008.500
Zeng, F., Ye, Q., Li, J., & Yang, Z. (2021). Does self-disclosure matter? A dynamic two-stage perspective for the personalization-privacy paradox. Journal of Business Research, 124, 667–675. https://doi.org/10.1016/j.jbusres.2020.02.006
Echo Chambers
Acemoglu, D., Ozdaglar, A., & Siderius, J. (2021). Misinformation: Strategic Sharing, Homophily, and Endogenous Echo Chambers (No. w28884; p. w28884). National Bureau of Economic Research. https://doi.org/10.3386/w28884
Baccara, M., & Yariv, L. (2008). Similarity and Polarization in Groups. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.1244442
Mukhudwana, R. F. (2020). #Zuma Must Fall This February: Homophily on the Echo-Chambers of Political Leaders’ Twitter Accounts. In M. N. Ndlela & W. Mano (Eds.), Social Media and Elections in Africa, Volume 2: Challenges and Opportunities (pp. 175–202). Springer International Publishing. https://doi.org/10.1007/978-3-030-32682-1_10
Sasahara, K., Chen, W., Peng, H., Ciampaglia, G. L., Flammini, A., & Menczer, F. (2021). Social Influence and Unfollowing Accelerate the Emergence of Echo Chambers. Journal of Computational Social Science, 4(1), 381–402. https://doi.org/10.1007/s42001-020-00084-7
Mitigating (Algorithmic Harms)
Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645
Flew, T. (2019). Digital communication, the crisis of trust, and the post-global. Communication Research and Practice, 5(1), 4–22. https://doi.org/10.1080/22041451.2019.1561394
Helberger, N., Karppinen, K., & D’Acunto, L. (2018). Exposure diversity as a design principle for recommender systems. Information, Communication & Society, 21(2), 191–207. https://doi.org/10.1080/1369118X.2016.1271900
Helberger, N., Pierson, J., & Poell, T. (2018). Governing online platforms: From contested to cooperative responsibility. The Information Society, 34(1), 1–14. https://doi.org/10.1080/01972243.2017.1391913
Nagulendra, S., & Vassileva, J. (2016). Providing awareness, explanation and control of personalized filtering in a social networking site. Information Systems Frontiers, 18(1), 145–158. https://doi.org/10.1007/s10796-015-9577-y
Helberger, N. (2011). Diversity by Design. Journal of Information Policy, 1, 441–469. https://doi.org/10.5325/jinfopoli.1.2011.0441
Nissenbaum, H. (2011). A Contextual Approach to Privacy Online. Daedalus, 140(4), 32–48. https://doi.org/10.1162/DAED_a_00113
Reviglio, U., & Agosti, C. (2020). Thinking Outside the Black-Box: The Case for “Algorithmic Sovereignty” in Social Media. Social Media + Society, 6(2), 205630512091561. https://doi.org/10.1177/2056305120915613
Ziewitz, M. (2016). Governing Algorithms: Myth, Mess, and Methods. Science, Technology, & Human Values, 41(1), 3–16. https://doi.org/10.1177/0162243915608948
Homophily & Filter Bubbles’ Impacts Are Overstated or Overemphasized
Bruns, A. (2019). Are Filter Bubbles Real? Polity Press.
Dandekar, P., Goel, A., & Lee, D. T. (2013). Biased assimilation, homophily, and the dynamics of polarization. Proceedings of the National Academy of Sciences, 110(15), 5791–5796. https://doi.org/10.1073/pnas.1217220110
Dubois, E., & Blank, G. (2018). The echo chamber is overstated: The moderating effect of political interest and diverse media. Information, Communication & Society, 21(5), 729–745. https://doi.org/10.1080/1369118X.2018.1428656
Gargiulo, F., & Gandica, Y. (2017). The role of homophily in the emergence of opinion controversies. ArXiv:1612.05483 [Physics]. http://arxiv.org/abs/1612.05483
Kwon, H. E., Oh, W., & Kim, T. (2017). Platform Structures, Homing Preferences, and Homophilous Propensities in Online Social Networks. Journal of Management Information Systems, 34(3), 768–802. https://doi.org/10.1080/07421222.2017.1373008
Möller, J., Trilling, D., Helberger, N., & van Es, B. (2018). Do not blame it on the algorithm: An empirical assessment of multiple recommender systems and their impact on content diversity. Information, Communication & Society, 21(7), 959–977. https://doi.org/10.1080/1369118X.2018.1444076
Zuiderveen Borgesius, F. J., Trilling, D., Möller, J., Bodó, B., de Vreese, C. H., & Helberger, N. (2016). Should we worry about filter bubbles? Internet Policy Review, 5(1). https://doi.org/10.14763/2016.1.401