25 September 2021
To Whom It May Concern:
My comments address the technical paper that will inform the government’s approach to online harms. The technical paper proposes to:
- establish new information sharing between police, security agencies, and online communication services (OCSs), as well as create new obligations for telecommunications service providers (TSPs);
- mitigate five distinct categories of criminal activity or online harms; and,
- create a new regulatory framework for the oversight of commercial content moderation online.
These three objectives, debatable in their own right, invoke distinct policy traditions and fields of expertise. I am concerned that the technical paper conflates these three separate issues into one legislative agenda.
My submission focuses on the third objective, the new regulatory framework. Before turning to it, I note the following:
- Changes to the administration of the Criminal Code for OCSs must be treated separately from the regulatory framework for content moderation
At present, the technical paper too often frames online harms as a policing problem at a time when the biases of Canada's policing services are evident and calls for reform and improved oversight are clear and needed. The distinction between online harms and criminal activities remains ambiguous in the technical paper. Conflating harms and criminal acts risks deputizing online communication service providers (OCSPs) with both enforcement and police reporting for criminal activities (1). Proposal 20 specifically requires a separate consultation phase, and it should not be assumed that, because automated content takedowns are already happening, they are an effective or central instrument to address online harms. Furthermore, the technical paper's overall focus on OCS regulation is diluted by its discussion of new measures for Internet services and new blocking/filtering obligations for TSPs. These powers are out of scope and arguably already within the CRTC's authority to implement if needed.
- The comparability of these five online harms is debatable, and more targeted legislation may prove more effective
The five online harms need further definition. Furthermore, the nuances of each harm, such as the national and international dimensions of terrorist activities, may not be well suited to an omnibus framework (2). Protecting Canada's democracy, ostensibly another online harm, has already been addressed through reforms to the Canada Elections Act.
- Online harms require a whole-of-society approach that is beyond the scope of a bill focused on OCSs
Greater accountability for commercial content moderation has not resolved and will not resolve the root causes of online harms (3). Rather, better regulation of already-existing content moderation is enough of a regulatory accomplishment without the added challenge of positioning content moderation as a first response to systemic injustice.
With these primary concerns in mind, I move to the administrative aspects. I acknowledge that content moderation is a needed part of inclusive communication systems, but it is certainly not more important than matters of access, affordability, and inclusion. As part of these broader reforms to Canada's communication system, the technical paper proposes a framework that:
- Defines the regulatory category of OCS in line with the CRTC's suggested reforms in CRTC 2017-359 for new domain/industry-specific media regulation categories distinct from a TSP;
- Establishes a Digital Safety Commission (DSC) that includes a Digital Recourse Council of Canada and an Advisory Board to monitor compliance and administer OCSPs' enforcement against online harms;
- Sets new obligations for OCSPs to report and filter five types of illegal content; and,
- Grants the DSC new powers, including administrative monetary penalties (AMPs) for OCSPs as well as inspection powers.
The regulatory framework seems a viable opportunity if primarily seen as a mechanism to enhance oversight and transparency of commercial content moderation (4) and algorithmic filtering (5). The DSC's powers to investigate are welcome additions to Canada's media institutions, especially since the DSC is subject to the Access to Information Act. As access to data is a primary barrier to research into OCSs (6), the DSC may enhance public knowledge of largely opaque moderation practices.
The DSC's mandate must be defined with clear policy objectives. I recommend its policy objectives follow Dr. Suzie Dunn, who suggests that "Canada's approach to regulating platforms should centre human rights, substantive equality, and intersectionality, and employ a trauma-informed approach" (7).
I am expressly supportive of the DSC's power to investigate how an OCSP monetizes harmful content (14f) and encourage more attention to this function in the subsequent act.
The technical paper, at present, does not provide a timeline for the constitution of the DSC. I recommend that:
- Reporting and inspection powers be prioritized as a first step before automated takedown obligations come into effect;
- The DSC establish a Technical Standards Committee with Measurement Canada, the CRTC, OCSPs, civil society, and academics to develop information-gathering practices and, potentially, an equivalent of the CRTC's Communications Monitoring Report to establish a public record about the threat of online harms and the state of commercial content moderation and automated content recommendation;
- All matters of composition (proposals 46, 64, and 71) be changed from considering inclusive membership to requiring it, and that better democratic oversight of candidate selection and vetting be established, as proposed in the BTLR report;
- Sufficient budget and effective reporting mechanisms be ensured before implementing blocking and filtering regimes.
My timeline emphasizes focusing the DSC first on enhancing the transparency of commercial content moderation and only then on considering its effectiveness in combating online harms. Ideally, other measures or more dedicated initiatives could develop simultaneously, taking a whole-of-society approach to the five identified online harms.
The present risk is that the 24-hour takedown requirement, along with a lack of penalties for false positives, may encourage OCSPs to invest further in automated content moderation, especially artificial intelligence, as a form of media governance.
Automated content regulation receives insufficient consideration in the current technical paper and needs substantive attention. The technical paper does not address its responsibility for, nor its legitimization of, artificial intelligence as used by OCSPs to classify, filter, and demote harmful content. It proposes a regime legitimating automated content regulation at scale without sufficient evidence of these systems' efficacy in both of Canada's official languages and in Canada's multicultural society. The technical paper needs a substantively expanded discussion of AI accountability, including cases where the potential risks require the prohibition of automated solutions (8).
The DSC may need powers to designate standards for content moderation work, including prohibiting AI in high-risk applications, alongside better accountability mechanisms. Conversely, outsourcing and ghost labour in commercial content moderation require better labour standards and safer working environments. At present, the labour of moderation is assumed to be automatable and without long-term harm to workers.
The DSC must be seen as a promising beginning that should proceed cautiously, building knowledge, expertise, and autonomy before implementing content takedowns (11a). Better accountability for already-existing content moderation, reporting, and oversight is a needed first step before assuming that content takedowns will be an adequate response to online harms, which require a whole-of-society approach.
The technical paper and discussion papers mark an early first step that I hope leads to a more thorough consultation, a stronger public record, and a clearer legislative agenda. I continue to support these efforts through my research and through the opinions offered here.
Sincerely,
Fenwick McKelvey
References:
(1) For a distinction, see: Tenove, Chris, Heidi Tworek, and Fenwick McKelvey. “Poisoning Democracy: How Canada Can Address Harmful Speech Online.” Ottawa: Public Policy Forum, November 8, 2018. https://www.ppforum.ca/wp-content/uploads/2018/11/PoisoningDemocracy-PPF-1.pdf.
(2) For a more detailed discussion of focused reforms, see: Khoo, Cynthia. “Deplatforming Misogyny.” Technology-Facilitated Violence. Toronto: Women’s Legal Education and Action Fund, 2021. https://www.leaf.ca/publication/deplatforming-misogyny/.
(3) McKelvey, Fenwick. “Toward Contextualizing Not Just Containing Right-Wing Extremisms on Social Media: The Limits of Walled Strategies.” SSRC Items (blog), July 13, 2021. https://items.ssrc.org/extremism-online/toward-contextualizing-not-just-containing-right-wing-extremisms-on-social-media-the-limits-of-walled-strategies/.
(4) Roberts, Sarah T. Behind the Screen: Content Moderation in the Shadows of Social Media. New Haven: Yale University Press, 2019.
(5) Hunt, Robert, and Fenwick McKelvey. “Algorithmic Regulation in Media and Cultural Policy: A Framework to Evaluate Barriers to Accountability.” Journal of Information Policy 9 (2019): 307–335. https://doi.org/10.5325/jinfopoli.9.2019.0307.
(6) Tromble, Rebekah. “Where Have All the Data Gone? A Critical Reflection on Academic Digital Research in the Post-API Age.” Social Media + Society 7, no. 1 (January 1, 2021): 2056305121988929. https://doi.org/10.1177/2056305121988929; Tworek, Heidi. “Open Access to Data Is Critical in a Democracy.” Centre for International Governance Innovation, August 25, 2021. https://www.cigionline.org/articles/open-access-to-data-is-critical-in-a-democracy/.
(7) Dunn, Suzie, William Perrin, and Heidi Tworek. “What Can Canadian Law Makers Draw from the New UK Online Safety Bill?” Centre for International Governance Innovation, May 20, 2021. https://www.cigionline.org/articles/what-can-canadian-law-makers-draw-new-uk-online-safety-bill/.
(8) Balayn, Agathe, and Seda Gürses. “Beyond Debiasing: Regulating AI and Its Inequalities.” Brussels: European Digital Rights, 2021. https://edri.org/our-work/if-ai-is-the-problem-is-debiasing-the-solution/.