By Scott DeJong
In this post, we reflect on the challenges, struggles and questions that we encountered during our ethnographic meme collection.
Since our project experiments with methods to study memes and meme pages, we felt it was important to outline the challenges we came across as a collection team. We think it is critical that researchers reflect on their practice and evaluate how they might adjust it to meet the needs of their study. Studying oppositional or conflicting perspectives online is not uncommon, yet we struggled to find written accounts of the emotional challenges of data collection. We suspect our experience is shared by many researchers and is therefore worth documenting.
The emotional labour of studying memes
Near the end of Pride month, one of our right-leaning meme pages shared posts critical of the LGBTQ+ movement. It started with posts calling corporate pride fake, misplaced, and a form of “selling out”. While these critiques do not necessarily invoke anti-LGBTQ+ sentiment (indeed, they are often also voiced from the left-leaning spectrum of the LGBTQ+ community), in the comment sections of these posts individuals made inappropriate, problematic, and unfounded claims that not only breached Facebook’s policies on discriminatory conduct but also completely changed the conversation. The comments, filled with homophobic and transphobic messages, moved far beyond the argument of the posted meme, eventually suggesting that all LGBTQ+ individuals were pedophiles, an untrue and hate-enabling claim.
Following along was unsettling. A member of the research team struggled with collection and recorded the following in their research journal:
This commenter is making deeply problematic claims that directly counter my own value system. Seeing the page support rather than harshly critique such a claim is challenging to read. For myself, I know it is wrong, I want to write back to the commenter and attempt to fix this narrative. However, I am not sure that this is the right space, as a researcher I am observing, not changing the narrative, but allowing comments like this to stay up deeply bothers me as it paints a blatantly false narrative.
As this entry demonstrates, ethnographic researchers embedded in these communities struggled with the choice between sitting back and recording content, or actively engaging with the page by reporting content so that the platform’s moderators might remove it. Since we are studying these pages, documenting the problematic is part of the task and can be quite fruitful for research findings. As researchers, however, we can find the process emotionally draining. In this case, the team member took a break from recording content and chose not to intervene.
To report, or not to report?
Incidents like the one above, along with other occurrences such as disinformation, led our team to discuss Facebook’s reporting and moderation tactics. We considered reporting content through the built-in option beside each post, which allows users to flag posts for specific issues or submit a report for review by Facebook.
However, the reporting options are not very rewarding for users and can be difficult to use properly. The process itself is somewhat convoluted, especially when reporting comments. Facebook offers little or no space to explain the reason behind a report; instead, users are encouraged to select one of a handful of keywords. Moreover, even when a user reports a post, by the time Facebook removes it or marks it as fake, a significant number of users will already have seen or shared the content.
Our indecision about reporting echoes broader concerns about the transparency of the reporting process. This opacity in moderation practices is characteristic of Facebook’s approach to managing harmful content, maintaining a veil of supposed neutrality (Caplan, 2018), which has been critiqued as embedded in the platform’s business model (Gillespie, 2018). Sarah Myers West (2018) argues that content moderation on Facebook remains largely invisible both to those who report and to those who are reported. In many cases, submitted reports are handled by an AI, which leads to frustration and confusion for the human users involved. Within our own sample, content gets flagged as misinformation days after being posted and is not removed; such posts stay up on the page but receive a “cover” informing users that the information is misleading (see Figure 1). On the reported side, many feel that bans come as a surprise, while others describe a lack of human interaction and engagement in the banning process. For many, this makes dialogue around inappropriate content appear non-existent, and the reporting process can feel unrewarding for all involved parties.
At the start of the pandemic, Facebook attempted to take a strong stance against fake news and misinformation (Statt, 2020). However, the company has not opted to take misinformation down; rather, it has moved to labelling the content (as seen in Figure 1) while leaving it up. Furthermore, when it comes to memes, Facebook admits that its systems struggle to properly flag them as hate speech, though it maintains that it is working on the problem (Statt, 2020).
The dilemmas we encountered around reporting content raise many questions of research ethics. Our experiences above highlight two questions we have yet to resolve. A standard research ethics expectation is that participants are not harmed during data collection. Is getting users banned for posting hateful messages a harm? What are the researchers’ obligations to the broader community? Reporting crosses a line: it intervenes in the system and pushes the researcher from witness to active participant. Whether crossing this line is justified raises a second concern about efficacy. Is reporting the right response to harmful speech? By reporting, researchers abide by Facebook’s emergent and arguably constrained approach to its obligations as a platform. At a time of extensive debate about platform regulation, does reporting legitimate corporate regulation at the expense of research that publicly advocates for broader media reforms? Neither question has a clear answer, and we hope to explore the issue further in the future.
Reading the comments
Reporting becomes even more convoluted when the problematic content is not in the post itself but in the comments on it. Looking at the comments and discussion on the posts above, the comments are generally in support of the information in the post. They create discussion, generate dialogue, and typically lead to more problematic or completely incorrect claims. Comments may be posted both before and after Facebook’s independent fact-check, which means the post can reach a large audience before it is flagged, and that flagging does not deter some from reading and agreeing with it. The Facebook Oversight Board does review comments alongside posts, but this work is largely invisible to the public, making it hard for scholars to see whether Facebook has responded to or altered the comments.
As the earlier example demonstrates, the post itself only offered critiques of the LGBTQ+ movement, which sits in a grey area with respect to Facebook’s hate speech and disinformation rules. Within the comments, however, users took the ideas to the extreme, making claims that were bigoted and misinformed. This builds on Joseph Reagle’s (2015) argument that comments are entrenched in ulterior motives that play with legitimacy and can be used to intentionally alter audience perception. In many ways, the comments are worse than the memes or posts they respond to.
Sifting through a series of these comments across a variety of issues can quickly become challenging and overwhelming for a researcher, which makes it critical that we develop tools to help researchers talk through and work out these challenges. This conversation appears to be gaining prominence (Ashe et al., 2020; Kelley & Weaver, 2020). However, little work to date reflects on the role of the researcher when studying groups that directly counter one’s belief system, or provides tangible options to use in practice. In June, Brit Kelley and Stephanie Weaver (2020) argued that we need to revisit our ethics and focus on doing no harm to either subject or researcher. But how do we manage being witnesses to harm done to others on the platform?
In our own work, we considered how we could make this process easier for us as researchers. We used our weekly meetings as a time to debrief on the challenges and concerns we were having. We also spent time discussing our options around reporting and evaluating, and each researcher set up collection practices that allowed them to engage with the material in more manageable ways (such as taking breaks, or viewing certain pages alongside others to balance the content being examined). Especially for work that is highly subjective and focused on a specific collective, conducting ethnographic research means that we as researchers will face challenges in understanding and connecting with the group’s ideas, challenges compounded by the logistical constraints and questions that collection provokes.
Conclusion
Overall, qualitative meme collection, while an interesting and sometimes humorous task, creates challenges for researchers that are important to consider. The emotional labour of reading memes that propagate or imply mistreatment, discrimination, and hate requires personal support systems to maintain emotional stability throughout the research process. Meme collection also comes with an array of logistical challenges in determining which memes to document and where to search for them. Adjusting methods mid-project is critical to a project’s overall success. In our case, iteratively assessing our collection techniques and establishing procedures to adjust to these challenges proved an effective approach to qualitative meme collection.
References
Ashe, S. D., Busher, J., Macklin, G., & Winter, A. (Eds.). (2020). Researching the Far Right: Theory, Method and Practice. Routledge.
Caplan, R. (2018). Content or Context Moderation? Data & Society Research Institute.
Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.
Kelley, B., & Weaver, S. (2020). Researching People who (Probably) Hate You: When Practicing “Good” Ethics Means Protecting Yourself. Computers and Composition, 56. https://doi.org/10.1016/j.compcom.2020.102567
Myers West, S. (2018). Censored, suspended, shadowbanned: User interpretations of content moderation on social media platforms. New Media & Society, 20(11), 4366–4383. https://doi.org/10.1177/1461444818773059
Reagle, J. M. (2015). Reading the Comments: Likers, Haters, and Manipulators at the Bottom of the Web. The MIT Press. https://doi.org/10.2307/j.ctt17kkb2f
Statt, N. (2020, May 12). How Facebook is using AI to combat COVID misinformation and detect “hateful memes.” The Verge.