This Advanced Consultation Guide has been created for participants who already have some level of expertise in artificial intelligence (“AI”) law, policy, regulation, and/or governance, or in an issue area that intersects with AI, such as environmental impacts, labour, gender-based violence, or education. This is not so much a guide as an expansive list of questions meant to prompt further thought and provide numerous entry points into the issues. The questions have been copied from, adapted from, or inspired by similarly expansive consultative proceedings in other jurisdictions, such as those listed at the bottom of this page.*
There is no obligation or expectation that any one person or group will answer all questions, and we recognize there may be overlap between questions or categories. Feel free to address as many or as few questions as you see fit. You may opt to focus your comments exclusively on questions under one issue area, or set the questions aside entirely if you already know what you want to say. If applicable, please indicate which specific topics or questions each part of your submission responds to, though this is not mandatory.
You are encouraged to draw upon your own experiences with AI-based technologies or how you or your community have been impacted by them, wherever relevant.
NOTE: For details on how your comments in this submission will be used and what will be done with them, please see “What Will Happen to Submitted Comments?” here. Submissions will be posted publicly, so please only share details you are comfortable being posted. Keep this in mind if sharing stories or experiences that are not your own, and do not share identifying details about someone else without their consent.
If you are not comfortable diving in with the questions below, you may wish instead to use the Basic Consultation Guide + Submission Template, or host a micro-consultation with friends, family, or colleagues using the Local Facilitation Guide.
[A downloadable version of this guide is available at the bottom of this page.]
For the purposes of this consultation, “artificial intelligence” (AI) is used to refer collectively to any technology or system currently considered to be “AI” or an AI-based technology or system, as well as any combination of such technologies or systems. Participants are strongly encouraged to avoid using the term “AI” wherever possible, and to name the specific type of technology or actual software company, model, or tool being discussed in each answer.
1. DEFINITIONS
a. How should “AI” be defined?
b. How should key terms relating to AI be defined, such as “algorithmic decision-making”, “high-risk system”, “automated management”, or “algorithmic discrimination”?
c. What technologies count as “AI” and which don’t, and why?
d. How can this definition be precise enough to be useful, yet flexible enough to apply to future technologies that may not exist today?
2. CURRENT AND POTENTIAL USES OF AI
a. What are the most common applications of AI in Canada today?
b. In what ways do you, your organization, or your community use AI?
c. In what ways and environments has AI use — by you or others — affected you, your family and friends, your colleagues, classmates, students, social circles, or your community?
d. Are there ways in which an AI tool or application has benefited you or your community? If so, what were they, and how?
e. Are there any types or use cases of AI which have costs, but where the benefits clearly and significantly outweigh the costs? What are they, and why?
f. Are there any types or use cases of AI that do not have any costs attached? What are they?
g. Do you see a beneficial use of a particular AI-based technology that is entirely possible today (i.e., does not need further or new technological advancements to be realized), but to your knowledge has not yet been implemented? What is it, what are the benefits, and what are the barriers to implementation?
h. What kinds of safeguards are currently in place to protect people from harmful consequences of AI being used or deployed? Are these safeguards effective, and why or why not?
3. LABOUR
a. How has AI affected your working life and/or how do you expect it to in the future?
b. How does AI impact labour rights or other labour issues?
c. How are specific types of labour (e.g., precarious work, independent contractors) or specific labour sectors (e.g., manufacturing, agriculture, health care, creative arts) uniquely impacted by AI?
d. How does AI impact the relationship between employers and employees?
4. ENVIRONMENTAL IMPACTS & DATA CENTRES
a. What issues and implications does AI have for the environment and climate change?
b. How does going “all in” on AI impact Canada’s approach to natural resources, energy, carbon emissions, and other pollution (air, water, soil), both at the level of local policy and in individuals’ everyday lives?
c. What issues and concerns do you have related to the building of AI data centres?
d. Are there different issues or implications depending on the location where a data centre is built (e.g., in a rural community, on an island, on Indigenous territories)? If so, what are they?
e. What concerns and benefits are there with data centres being developed in Canada?
f. What dangers or benefits might the federal, provincial, territorial, and municipal governments expect from investing in data centres?
5. INDIGENOUS RIGHTS
a. How do AI issues impact and intersect with Indigenous legal systems, traditional knowledge, land rights and land titles, right to self-governance, and Indigenous digital and data sovereignty (including the principles of Ownership, Control, Access, and Possession [OCAP])?
b. What is required to ensure that any “national AI strategy” by Canada satisfies Canada’s obligations under pre-existing treaties, adheres to the United Nations Declaration on the Rights of Indigenous Peoples (UNDRIP), and advances the 94 Calls to Action arising from the Truth and Reconciliation Commission?
6. SUBSTANTIVE EQUALITY, DISCRIMINATION, AND HATE-BASED OPPRESSION
Substantive equality is the idea that different groups must be treated differently to account for the fact that they did not start on a level playing field. The concept was established in Canadian constitutional law by the Supreme Court of Canada, in contrast to formal equality, which treats everyone the same even when they start from different places, thereby perpetuating the original inequality. Whenever this consultation uses the word “equality”, the term is used to mean substantive equality, also referred to as equity, and responses should similarly address equality as substantive equality.
a. How prevalent is AI-facilitated or algorithmic discrimination based on protected categories such as race, sex, and age? Is such discrimination more pronounced in some sectors than others? If so, which ones?
b. More specifically, how does the use or deployment of AI advance or undermine gender equality, racial justice, socioeconomic equality, equality with respect to sexual orientation and gender identity, disability rights, or migrant and refugee justice?
c. Are there cases of AI-facilitated or algorithmic discrimination that are difficult to identify or measure when they occur? What are they, and what can be done about them and their invisibility?
d. Are there ways that AI can truly be used to alleviate inequality or to mitigate or prevent wrongful discrimination, without presenting technosolutionist pitfalls or running into any of the issues discussed in the previous questions?
e. What are examples of uses or deployments of AI that might seem to advance or promote equality, but in fact apply a formal equality approach, undermining substantive equality?
f. How is AI being used to victimize, persecute, harass, abuse, enact violence against, and/or direct and engage in hate against people, in particular members of historically marginalized communities? Examples include gender-based violence, intimate partner violence, and other forms of identity-based violence and abuse. What should be done to mitigate or prevent this?
g. What kind of legal recourse and remedies should be provided to those who are the targets of AI-facilitated gender-based or other oppressive hate-based violence, abuse, and harassment?
7. MENTAL, SOCIAL & COGNITIVE WELL-BEING
a. What cognitive, mental health, or psychological issues arise with certain use cases of AI, or uses by certain vulnerable groups of people (e.g., children, high school students, the elderly) under certain circumstances (e.g., extended daily conversations with a sycophantic chatbot such as ChatGPT, outsourcing of basic cognitive tasks and critical thinking)?
b. How are people or companies deploying AI in ways that manipulate people or otherwise undermine or distort human agency? What should be done about this?
c. In what ways has the use of AI been beneficial or detrimental to interpersonal communications and relationships?
8. MEDICINE & HEALTH CARE
a. What benefits and concerns are related to the use of specific types of AI in medical care, medical research, medical devices, and caregiving?
b. How can risks, issues, and negative implications that arise from using AI in health-related applications be mitigated or prevented?
9. CHILDREN & YOUTH
a. Are there uses or deployments of AI to which children or teenagers are particularly vulnerable or susceptible? What are they, and why?
b. What types of AI use cases concerning children and teens, and the data collection, use, and disclosure involved in them, are most concerning? What should be done about them?
c. To what extent should laws or regulations distinguish between different age groups among children (e.g., toddlers, pre-teens, teenagers)?
d. What protections for children and youth against AI-facilitated harms would you recommend, or recommend against (e.g., parental consent or controls, age assurance (and what kind), outright bans or prohibitions, product design)?
e. Is the development or adoption of AI contributing to the proliferation of child sexual abuse material (“CSAM”)? If so, how, and what should be done to address that issue?
10. JUSTICE, LAW ENFORCEMENT AND NATIONAL SECURITY
a. What are key issues and implications associated with use of AI in law enforcement, criminal justice, intelligence, and national security contexts?
b. What are the key issues and implications associated with use of AI in other parts of the justice system, such as in the civil justice system (e.g., use by self-represented people involved in court cases, use by lawyers for advising clients, document review, court records, judicial decision making)?
11. PUBLIC SERVICES, GOVERNMENT AGENCIES, AND ADMINISTRATIVE BODIES
a. How is AI currently being used by government agencies at the provincial, territorial, or federal level? What are the impacts of these uses on constituents?
b. What are key issues and implications associated with uses of AI by government agencies other than law enforcement and national security, such as in the context of social welfare, public housing, income tax, education (primary, secondary, post-secondary), and/or other public services or administrative agencies?
12. PRIVACY & DATA PROTECTION
a. What are the privacy, data protection, and surveillance issues and implications associated with the use of AI by individuals, governments, and/or businesses?
b. How can these risks be mitigated, or can they be at all?
c. What are the most concerning constitutional privacy issues, and/or the most concerning consumer privacy issues?
d. Does the use or deployment of AI introduce cybersecurity risks or system vulnerabilities, and how/why?
13. CONSENT
a. Under what circumstances, use cases, or deployments is consent an effective measure to regulate and prevent harmful impacts of AI? Under what circumstances, use cases, or deployments is consent less or not effective as a measure? What factors go into determining whether or not consent is effective as a mechanism?
b. Are there uses of AI that should be prohibited regardless of whether the person or people affected gave consent or not? If so, what are they, and why?
c. Given the nature of AI systems, are users, consumers, or impacted individuals able to withdraw consent once it has been given? Why or why not, and how should this inform AI law and policymaking?
d. In the context of a given AI system, what does it mean for someone to have provided meaningful, valid, and informed consent? Should requirements to meet this standard of consent differ depending on the group (e.g., children, teenagers, parents, the elderly)?
e. Should an individual be able to opt out of AI systems entirely if they wish, or under what circumstances or use cases? What are the benefits or drawbacks of an opt-in versus an opt-out system (such as that which has been applied in the privacy and data protection context)? What would it take to give effect to an opt-in system, or one-time universal opt-out, to consumers or impacted individuals for any given AI system? Should the law require this option be provided to people (e.g., in electronic devices, in decisions concerning legal or similarly significant interests, in classrooms)?
14. CONSTITUTIONAL RIGHTS, HUMAN RIGHTS & DEMOCRACY
a. What human rights and civil liberties implications arise from widespread adoption, implementation, and deployment of specific types of AI? Consider constitutional and human rights such as equality, privacy, freedom of expression, freedom of association and peaceful assembly, and life, liberty, and security of the person.
b. Are there use cases or deployments of AI that engage constitutional issues such as division of powers, principles of fundamental justice, or Charter values such as dignity and autonomy? What are they, and why/how?
c. What risks do widespread adoption, implementation, and deployment of AI pose to democracy and democratic institutions? Can those risks be mitigated or prevented; and, if so, how?
15. MEDIA & INFORMATION ECOSYSTEM
a. What has the impact of AI been on the media ecosystem and overall information environment online and in the world generally?
b. How has the use of AI affected legacy media, online media, and the journalism industry?
c. How has the use, adoption, or deployment of AI uniquely impacted independent media, freelance journalists, or smaller and more local news outlets?
d. How can the health, resilience, integrity, and reliability of the information and media environment be protected and strengthened in the face of generative AI and other AI systems?
16. SCIENCE, RESEARCH & KNOWLEDGE SYSTEMS
a. What benefits and concerns are related to the use of AI in scientific research and application?
b. LLMs and other machine learning techniques are known to produce errors inherent to statistically generated outputs. What does this mean for the reliability of AI systems? Under what conditions can they be trusted and in what areas of potential application are they unsuitable?
17. ECONOMIC IMPLICATIONS
a. Are there concerns that the current level of national and global investments in AI represents an “AI bubble”? If so, why, and how significant is this concern?
b. What should the government be concerned about with respect to this AI bubble and its potential impact on the economy, society, and vulnerable groups if or when the bubble pops? What does a worst-case scenario look like?
c. What can or should be done now to prevent or mitigate negative impacts of an AI bubble bursting?
d. How does or will AI affect the financial sector (e.g., banking, traditional and digital payment systems, securities, insurance, pensions)?
e. What are the different business models around AI? How do they work, and what do they rely on to succeed? Are they sustainable, and why or why not? Do they come with negative externalities? If so, what are they?
f. What are the infrastructure issues raised or impacted by Canada’s focus on and investments into AI? How would a “national AI strategy” intersect with existing infrastructure and related issues?
18. DIGITAL & AI “SOVEREIGNTY”
a. Much of the current wave of extraordinarily large investments in AI is driven by the businesses and governments of major countries other than Canada, consolidating their own power and providing them with greater control over the future of AI within their jurisdictions. What implications does this have for Canada’s national interests and Canadian sovereignty? What should the Canadian government do about this?
b. How would you define “AI sovereignty” or “digital sovereignty”? Is that a useful concept to guide Canadian AI law, policy, regulation, and/or governance, and why or why not?
c. How does the concept of “Canadian digital sovereignty” operate as a framework—or can it—when put next to Reconciliation and Indigenous sovereignty, including Indigenous digital sovereignty?
19. AUTOMATED DECISION-MAKING SYSTEMS
a. How prevalent is algorithmic error? To what extent is algorithmic error inevitable? If it is inevitable, what are the benefits and costs of allowing companies to employ automated decision-making systems in critical areas, such as housing, credit, and employment? To what extent can companies mitigate algorithmic error in the absence of new laws or regulations?
b. What are the best ways to measure algorithmic error? Is it more pronounced or happening with more frequency in some sectors than others?
c. Does the weight that companies give to the outputs of automated decision-making systems overstate their reliability? If so, does that have the potential to lead to greater consumer harm when there are algorithmic errors?
d. To what extent, if at all, should new laws require companies to take specific steps to prevent algorithmic errors? If so, which steps? To what extent, if at all, should the government or an independent regulator require firms to evaluate and certify that their reliance on automated decision-making meets clear standards concerning accuracy, validity, reliability, or error? If so, how? Who should set those standards—the government, an independent body, the businesses themselves, or someone else?
e. To what extent, if at all, do consumers benefit from automated decision-making systems? Who is most likely to benefit? Who is most likely to be harmed or disadvantaged? To what extent do such practices violate human rights legislation, privacy laws, or consumer protection standards?
f. Could new laws or regulations help ensure that firms’ automated decision-making practices better protect non-English speaking communities from fraud and abusive data practices? If so, how?
g. If new laws or regulations restrict certain automated decision-making practices, which alternatives, if any, would take their place? Would these alternative techniques be less prone to error than the automated decision-making they replace?
h. To what extent, if at all, should new laws forbid or limit the development, design, and use of automated decision-making systems that generate or otherwise facilitate outcomes that violate human rights or other Canadian laws? Should such laws apply economy-wide or only in some sectors? If the latter, which ones? Should these rules be structured differently depending on the sector? If so, how?
i. What would be the effect of restrictions on automated decision-making in product access, product features, product quality, or pricing? To what alternative forms of pricing would companies turn, if any?
20. ADDRESSING ALGORITHMIC DISCRIMINATION
a. How should the government, regulators, technology developers, or deployers of AI tools (e.g., employers, landlords, banks) address such algorithmic discrimination? How can or should discrimination based on proxies for protected categories be identified and managed? How should analyses of discrimination account for intersectionality, where more than one protected category is implicated for the same individual or community?
b. Are there particular considerations, legal or otherwise, that need to be taken into account when analyzing harms to historically marginalized groups that are not necessarily protected under human rights law (such as low-income tenants, or unhoused people)?
c. Should new laws be created to address algorithmic discrimination, and how would these laws interact with existing laws addressing discrimination in areas such as housing, employment, labour, and education? Should different laws addressing algorithmic discrimination be enacted within and tailored to specific sectors, or should it be addressed through comprehensive legislation that applies across the board to all sectors?
d. Should there be an exemption in algorithmic discrimination laws for systems that differentiate and target different groups in order to ameliorate historical and systemic discrimination (e.g., affirmative action programs)? How would such an exemption work, and what would prevent it from allowing prohibited discrimination?
21. AI REGULATION & LAWMAKING
a. What insights should the federal government consider from previous approaches, failures, and lessons learned from earlier major societal shifts and hype cycles in technology, whether wireless Internet, Y2K, smartphones, ‘big data’, social media, new media, or digital platforms?
b. Are the right legislative frameworks in place to support a just and meaningful “national AI strategy”? What Canadian laws already apply to AI (and how), and what new laws are needed, if any?
c. What can be learned about comprehensive AI regulation from what has already been done in other countries or jurisdictions (e.g., EU)? Are there specific laws, regulations, or initiatives (at any level of government, from local to federal) that you would recommend Canada should emulate, or avoid?
d. Are there jurisdictions other than the US, UK, or EU that Canada should look to for good examples of AI law, policy, regulation or governance, in particular, jurisdictions in the Global South?
e. Are there any uses of AI that should be banned outright, not just regulated (similar to the “no go zones” established by the Office of the Privacy Commissioner of Canada, or the applications prohibited by the EU’s AI Act)? If so, what are they, and why?
f. Are there AI-related considerations that apply to rural and remote communities and environments in particular?
g. Are there approaches to AI regulation that might face constitutional or other legal challenges? How might those challenges be dealt with or accounted for?
h. In which contexts are transparency or disclosure requirements effective? In which contexts are they less or not effective?
i. Which parts of government should be responsible for overseeing and enforcing laws and regulations regarding AI, or different uses and deployments of AI? How can or should their respective institutional roles, expertise, and authority complement each other? Examples include Innovation, Science and Economic Development Canada (formerly Industry Canada), Canadian Heritage, the Department of Justice, the Office of the Privacy Commissioner of Canada, provincial and territorial privacy commissioners, the Canadian Human Rights Commission and Tribunal, provincial human rights commissions and tribunals, and the Canadian Radio-television and Telecommunications Commission. Should different functions be assigned to different departments or regulators, or should a joint cross-department task force or a new stand-alone agency or regulator be created?
j. How should lawmakers and regulators account for further technological developments or the emergence of new business models in AI, in order to “future-proof” AI law, policy, regulation, and governance?
22. LIABILITY, REMEDIES, AND RECOURSE
a. What liability regime should apply to harms caused by uses and deployments of AI? Should penalties differ based on the victim’s age or their economic or social standing?
b. What legal remedies should be available to those harmed by uses of AI?
c. How should perpetrators of AI-facilitated violence, abuse, or criminal activity be held accountable for the harm and damage their actions cause? What is an appropriate consequence for someone engaging in these acts? (For instance, monetary fines, jail sentences, or something else entirely?)
d. Should the companies — or their executives — that develop (e.g., OpenAI), sell or make available (e.g., Google Play or the Apple App Store), or host specific AI technologies be held accountable for damage, harms, or other negative consequences caused by use or deployment of those technologies? Which players from which parts of the “AI supply chain” should or should not be held liable, and why or why not? What should accountability look like? (For instance, monetary fines, jail sentences, disgorgement of profits, deletion of algorithmic models?)
*Sources drawn upon for questions:
- Blueprint for an AI Bill of Rights (US Office of Science and Technology Policy)
- Commercial Surveillance and Data Security Rulemaking (US Federal Trade Commission)
- Public consultation on transparency requirements for certain AI systems (European Commission)
- Targeted consultation on artificial intelligence in the financial sector (European Commission)