Canada must control AI technology that gathers and analyzes workplace data

This opinion editorial was originally published on August 2, 2023 in the Ottawa Citizen.

Unscrambling the risks and benefits of artificial intelligence can be daunting; figuring out how to deal with them, even more so.

In May, dozens of the world’s top AI experts and executives — including the so-called godfather of AI, Canadian Geoffrey Hinton — issued a blunt 22-word statement warning that AI posed an existential risk to humanity.

Several governments around the world, including our own, seem to have taken note of the warning. But ramping up work on a voluntary code of AI conduct, as the European Union and United States are doing, may be too little, too late.

Compounding the risk that government action could fall short, it is unclear whether public interest, as it currently stands, will provide enough of a political spark. With something like Armageddon fatigue settling in, recent expert warnings on AI may be dismissed as more clucking from a flock of modern-day Chicken Littles.

But if the sky is not quite falling yet, the on-the-ground, everyday implications of growing reliance on AI-generated information are already apparent. That’s what we should focus on to fire up interest and spur action.

Take the labour market. We’ve all heard fears that AI will destroy millions of good jobs around the world by replacing humans with algorithms, but perhaps even more insidious — because it risks happening under the radar — could be the impact of unchecked, unregulated AI on the information we count on to make career-related choices.

Already today, job seekers, employers and policymakers are making life-altering decisions based on information generated with the help of AI. This use will only grow, and yet relatively little thought is given to who or what is assembling and disseminating this AI-generated information, or how accurate it is.

Generative AI and machine-learning tools are already widely used to make sense of the clutter of labour market information: job openings, wages, unemployment trends and the like. Canada risks ceding an important piece of its sovereignty if it doesn’t control the technology used to gather and analyze essential workplace data.

The risk posed by bad data and biased predictive models is significant. Private suppliers of labour market analytics, often foreign-controlled, could be providing a fuzzy or incorrect picture of what’s really happening in the Canadian workplace, and we would have no way of knowing it.

For example, a U.S.-based global leader in labour market analytics, whose services are widely used in Canada, doesn’t mine data from French-language job posting sites. As a result, the information it generates may overlook vast swaths of the job market, resulting in flawed and biased analysis, and ultimately poor decisions.

With most of the developments in this field now the proprietary preserves of private-sector organizations, users of AI-generated information or analytics rarely have a line of sight on the machine-learning models, algorithms and data underpinning the information they’re buying.

AI tools also risk perpetuating longstanding problems in capturing the economic reality of diverse and marginalized communities, including new immigrants and Indigenous people.

A 2018 federal auditor general report found that the federal government had been reporting incomplete and sometimes erroneous information about the well-being of First Nations people living on reserves, including job status, incomes, education and health. Other studies have pointed to chronic biases and sampling errors related to Indigenous Canadians in the census and other government surveys. Data analytics companies may then scoop up and resell this flawed information, exacerbating systemic workplace biases.

Curbing AI’s risks and getting the most out of its workplace potential is a major undertaking that requires all governments — provincial, territorial, and federal — at the table.

The status quo of unreliable, opaque, and fragmented labour market information is a dead end for workers and employers, and for Canada’s competitiveness and standard of living. Good data, on the other hand, recognizes Canada’s vastness, its regional differences, its growing diversity and its commitment to reconciliation with Indigenous people. And it must be standardized, accessible and transparent to users.

The immediacy of AI’s workplace impact and the urgency of action should be the spark needed for Canada to take a leadership role and reclaim sovereignty over its data.

Massimo Bergamini

Executive Director

Massimo Bergamini is an accomplished, results-oriented association executive with more than 30 years of government and intergovernmental relations, research, policy and public affairs experience.
