The Algorithmic Media Observatory is pleased to announce its successful application to the SSHRC Insight Program, securing funding to support a four-year investigation of the past, present and future of artificial intelligence as an instrument of media policy. Led by Dr. Jonathan Roberge and Dr. Fenwick McKelvey, the project will study how AI is changing regulation, from how we post on social media to how information travels through our networks. Dr. McKelvey explains, “In a short time, we have seen AI being proposed as an immediate solution to almost all contemporary media problems, from spam to hate speech. Our team brings together the expertise necessary for accountability of these proposals as well as for continuing the study of media policy to address these new technologies.” An abstract and biographies of the research team can be found below.
Abstract
Blanket enthusiasm for AI obscures its social risks. AI works because it learns from training data often gathered by media systems repurposed for data surveillance. From facial recognition to voice assistants and behavioural advertising, the framing of AI as revolutionary legitimates privacy encroachments and the expansion of surveillance sensors for training algorithms. AI simultaneously introduces new hazards due to the lack of transparency and explainability when these algorithms are deployed as forms of automated management.
This project critically interrogates how AI is proposed as a solution to media regulation. Working at the intersection of communication studies and science and technology studies, the project has the following research questions and associated objectives:
1. What historical forms of automated regulation anticipate AI? The first objective seeks to distinguish AI from other forms of technological governance within the history of communication and control technologies in order to address the specific benefits and challenges of today’s applications of AI.
2. How has AI been framed in Quebec and Canada as a new and enhanced regulatory solution? The second objective analyzes news and social media coverage of AI, comparing the framings and public imaginaries of AI in Quebec and the rest of Canada to determine how definitions of AI shape its possible applications and downplay its surveillance risks.
3. How do AI and autonomous technologies trouble policy traditions of fairness, accountability, trust, and explainability in Canada and Quebec? The third objective participates in policy processes around key issues – such as advertising, content regulation, disinformation, and telecommunications – to build comparative theories about how institutions can govern AI and what capacities are needed to ensure proper democratic oversight of AI as a form of regulation.
4. How can more inclusive and democratic futures be imagined for the integration of AI and autonomous technologies in media and democracy? Through a participatory research agenda, the fourth objective will prototype speculative designs, mediators, interruptions and foresight research exercises to develop collaborative formats to co-create alternative futures for AI and consider policy solutions with experts and publics.
In addition to its scholarly contributions, the project will create new knowledge to aid governments, institutions and publics in addressing these new regulatory technologies. It anticipates and responds to legislative challenges to ensure that future media governance serves the public interest.
Research Team
Fenwick McKelvey
Fenwick McKelvey is an Associate Professor in Information and Communication Technology Policy in the Department of Communication Studies at Concordia University. He studies digital politics and policy, appearing frequently as an expert commentator in the media and intervening in media regulatory hearings. He is the author of Internet Daemons: Digital Communications Possessed (University of Minnesota Press, 2018), winner of the 2019 Gertrude J. Robinson Book Award. He is co-author of The Permanent Campaign: New Media, New Politics (Peter Lang, 2012) with Greg Elmer and Ganaele Langlois. His research has been published in journals including New Media and Society and the International Journal of Communication as well as in public outlets such as The Conversation and Policy Options, and has been reported on by The Globe and Mail, CBC The Weekly and CBC The National. He is also a member of the Educational Review Committee of the Walrus Magazine.
Jonathan Roberge
Dr. Jonathan Roberge is an Associate Professor at INRS, cross-appointed to Urban Studies and Knowledge Mobilization (PRAPP). He founded the Nenic Lab as part of his Canada Research Chair in 2012. He is a member of the Chaire Fernand-Dumont sur la culture at INRS, the Centre interuniversitaire sur la science et la technologie, and the Laboratoire de Communication Médiatisée par les ordinateurs at UQAM. He is among the very first scholars in Canada to have critically focused on algorithmic cultures. In 2014, he organized the first sociological conference on this topic, which culminated in a foundational text in the domain, Algorithmic Cultures (Routledge, 2016; translated into German by Transcript Verlag, 2017). He is a renowned specialist in computer vision algorithms, an expertise stemming from a collaborative Insight Development Grant in 2016.
Bart Simon
Bart Simon is co-founder and current director of the Milieux Institute for Arts, Culture and Technology and an Associate Professor in the Department of Sociology and Anthropology at Concordia University. He is also co-founder and director of the Machine Agencies research group, which develops research and research-creation projects in cultural AI, the socio-materialities of machine learning agents, and play and machines.
Luke Stark
Luke Stark is a Postdoctoral Researcher in the Fairness, Accountability, Transparency and Ethics (FATE) Group at Microsoft Research Montreal; starting this summer, he will be an Assistant Professor in the Faculty of Information and Media Studies at the University of Western Ontario. His work interrogates the historical, social, and ethical impacts of computing and artificial intelligence technologies, particularly those mediating social and emotional expression. He was previously a Postdoctoral Fellow in Sociology at Dartmouth College. Luke holds a PhD from the Department of Media, Culture, and Communication at New York University.
Brenda McPhail
Brenda McPhail is Director of the Privacy, Technology & Surveillance Project at the Canadian Civil Liberties Association (CCLA). She received her PhD from the University of Toronto Faculty of Information, and holds Master’s degrees in Information Studies and English. Her work focuses on litigation, advocacy and public education relating to the ways in which privacy rights are at risk in contemporary society. Current areas of focus include national security, intelligence, and law enforcement surveillance technologies, information sharing in the public and private sectors, and the social impacts of existing and emerging technologies such as smart city tech, the internet of things, big data and artificial intelligence.
Reza Rajabiun
Reza Rajabiun (MA, LLM, PhD) is a competition policy and telecom strategy expert with research interests in Internet infrastructure development and network governance. Dr. Rajabiun’s work on the design of competition regulation and the development of broadband Internet infrastructure has appeared in various peer-reviewed scholarly journals, including Competition Law and Economics, Indiana Law Journal, Telematics and Informatics, Government Information Quarterly, and Telecommunications Policy. He is a Research Fellow at the Ted Rogers School of Information Technology Management at Ryerson University in Toronto and at the Algorithmic Media Observatory at Concordia University in Montreal.