Overview

Learn more about the Canadian Job Trends Dashboard

The Canadian Job Trends Dashboard allows users to explore labour market information based on trends found in online job postings from across Canada.

The information found in online job postings can help to track trends in work requirements such as skills, knowledge, and tools and technologies, as well as occupational demand.

This interactive dashboard is updated weekly with new data. Data from online job postings are collected from thousands of Canadian websites and job boards by Vicinity Jobs; LMIC ingests this data into the LMIC Data Hub so that the dashboard can share timely, granular LMI that can help all people in Canada explore employment trends.

To learn how to use the dashboard, consult this overview and our FAQ. If you still can’t find the answer you’re looking for, please email us at info@lmic-cimt.ca.

Background

LMIC strives to empower Canadians to make informed decisions by enabling access to quality, relevant and comprehensive data and insights across the pan-Canadian labour market information (LMI) ecosystem.

In 2018, LMIC conducted a series of surveys asking Canadians about their labour market information needs. We found that work and skill requirements were the most sought-after type of LMI by almost every group surveyed.

To provide Canadians with the LMI they needed to succeed in the labour market, we created the Canadian Job Trends Dashboard.

This interactive dashboard is updated weekly and provides information about the links between work requirements and online job postings across Canada.

Online job postings identify the work requirements detailed by employers. Included in work requirements are skills, knowledge domains, and tools and technologies.

Online job postings data are collected across thousands of Canadian websites and job boards by Vicinity Jobs.

LMIC ingests this data from Vicinity Jobs into the LMIC Data Hub.

The data in our dashboard is updated weekly and dates to January 2018.

How to use the dashboard

The dashboard offers a view of what Canadian employers are looking for in potential new hires. It allows users to explore timely, granular labour market information related to online job postings.

Users can explore the type and frequency of occupations and work requirements within any period (starting in January 2018), geographic region and NOC-code level.

The dashboard offers two ways to view the online job postings data: by occupation or work requirement. The two views provide an overview of the occupations or work requirements relating to the search query.

The dashboard also offers two ways to search for data: a general search or a specific search.

Occupations

The occupations tab provides the ranking, NOC code, occupation and postings (number of online job postings your search appears in).

Clicking on an occupation in this list will then provide the work requirements associated with the selected occupation.

Work requirements

The work requirements tab provides the ranking, category, work requirements and occurrence (how often the work requirements appear in online job postings).

Clicking on a work requirement from the list will then provide the occupations associated with the selected work requirement.

General search

For a general search, leave the search field blank then choose your location and date range. This will populate a list of all occupations relating to your query.

Specific search

For a specific search, enter a term into the search field then choose your location and date range. This will populate a list of job postings for your specific search term. This will only list occupations relating to your search term.

What information is available?

The dashboard allows users to explore timely, granular labour market information related to online job postings.

Users can explore the type and frequency of work requirements within any time period (starting in January 2018) and geographic region.

LMIC classifies work requirements into four categories based on the Skills and Competencies Taxonomy from Employment and Social Development Canada (ESDC): skills, knowledge, tools and technology, and other.

ESDC’s taxonomy encompasses seven categories of skills and competencies. We have collapsed interests, personal abilities and attributes, work activities and work context into the group “other”.

How is the information collected?

Online job posting data are provided by Vicinity Jobs, a Canadian company that deals with big data analytics and Internet search technologies. They collect and analyze job postings found on websites and job boards and link each posting to a unique occupation and set of work requirements.

These work requirements are defined by Vicinity Jobs’ proprietary taxonomy for categorizing free text descriptions in online job ads.

This data is then ingested into the LMIC Data Hub for use in the dashboard.

The dashboard data begin in January 2018 and run to present.

This near-real-time information on jobs and their work requirements is available by detailed occupation (Career Handbook Occupational Profiles) as well as aggregated occupational groups; month, quarter or year; and job location.

Currently, there are 78 sub-provincial or sub-territorial locations available, which are based on Statistics Canada’s Economic Regions with some exceptions.

Data Interpretation: Caveats and limitations

There are some important caveats and limitations to be mindful of when using and interpreting data.

Not all job postings are posted online

Not all job postings are advertised online. Some employers may search for staff before there is a formal job opening. This search will help create a pool of applicants from which to hire.

Additionally, many open jobs are never posted online in the first place. These jobs are likely filled internally or by word-of-mouth recruitment.

Not all work requirements are the same

The list of work requirements shown does not indicate how important the requirement is for that occupation. Some work requirements are not mandatory, but this is not specified in the job posting.

Alternatively, a work requirement might be expected for that job posting but not listed.

While we cannot determine the relative importance of work requirements for a job posting, we can find out how often work requirements appear in similar job postings.

Data interpretation: job postings vs job vacancies

The dashboard presents a sample of online job postings. While this sample is large, it is only a subset of all job postings. Though many employers actively recruit online, job postings do not precisely represent job vacancies.

Job vacancies refer to the number of available job openings that an employer wants to fill. The primary data source for measuring vacancies in Canada is Statistics Canada’s Job Vacancy and Wage Survey (JVWS). It defines a vacancy as a job that is or will become vacant during the upcoming month, for which the employer is actively recruiting outside the organization.

There are three important caveats when considering job postings in the context of job vacancies:

  1. Not all vacancies are posted online.
  2. A count of job postings (online and offline) may underestimate the number of actual vacancies because employers may seek to fill multiple vacancies via a single job posting.
  3. Counting job postings may overestimate vacancies if, for example, the employer does not take down a job posting that they are not currently seeking to fill (in which case the posting does not technically represent a vacancy as defined in the JVWS).

In general, the number of job postings in our dashboard will differ from a complete count of job vacancies across Canada.

Data interpretation: gross vs net changes in employment demand

When evaluating a growing sector or occupation, we think about the net change in employment demand.

For example, “How many new web design jobs will there be this year?” is a question about net employment changes.

Conversely, gross changes in employment demand include these new jobs plus turnover (where the previous person left the position).

This distinction matters because online job postings can only be used, with some caution, as proxies for gross changes in employment demand — not net changes.

With online job postings, there is no way to know if the position results from the organization’s growth or a need to fill an existing position that is vacant.

As such, the economic health of different occupations should not be estimated simply from the growth in the number of online job postings — growth here might reflect either economic dynamism or particularly high turnover.

Data interpretation: work requirement frequencies

Work requirement frequencies do not necessarily indicate their importance.

Every week, job postings across Canada are collected, cleaned and structured, and key details such as occupation, location and work requirements are extracted.

Since these data are drawn from individual online job postings, it is worth noting that the language used by employers does not follow any commonly agreed-upon vocabulary.

This real-world use of language should be interpreted with caution:

  1. There is no guarantee that employers explicitly state all work requirements in job postings. In many cases, they may assume that certain requirements are obvious to prospective job candidates and leave the requirements out of the posting.
  2. There is no way to tell which work requirements are critical for the position, or the proficiency that is needed to successfully perform in the job. The data only allows us to observe which requirements are more frequently posted by employers.

For example, Microsoft Excel was associated with 22% of online job postings for economists and economic policy researchers and analysts (NOC 4162) in 2019.

Does this mean that the other 78% do not need Microsoft Excel experience to succeed in their job? Probably not, but we are unable to confirm that through just job postings data.

Data interpretation: work requirements and skills

The data in our dashboard are organized into four work requirement categories based on ESDC’s Skills and Competencies Taxonomy:

  • Skills: the developed capacities that an individual must have to be effective in a job, role, function, task or duty. For example: critical thinking and problem solving.
  • Knowledge: the organized sets of information used to execute tasks and activities within a particular domain. For example: budgeting and electrical systems.
  • Tools and technology: the categories of tools and technology used to perform tasks. For example: Microsoft Excel and diodes.
  • Other: the work requirements not captured in the other three categories, inclusive of work activities, work context, interests, and personal abilities and attributes.

The work requirements of any particular position are a combination of skills, knowledge of a subject area and use of a particular tool or technology. (Also see: What’s a skill?)

Because these categories of work requirements are interconnected yet distinct, there are limits to the conclusions that can be drawn about which skills employers require or what is lacking in the job market.

Data Collection

Representativeness/bias

In processing the content of online job postings, data may be skewed towards certain industries, occupations, regions, firm sizes and education level requirements. While Vicinity Jobs strives to capture all verifiable online job postings, this cannot be guaranteed.

Hidden job market

Many employers hire internally or through informal means such as word of mouth. These sources of information about employment demand cannot be captured in online job posting data nor vacancy survey data.

Data quality

Data are collected from online job postings via scanning algorithms that seek to deliver a comprehensive set of job postings information.

Organizations that collect data in this manner typically develop proprietary algorithms to clean and structure raw data. While the data are acquired from the same set of online job postings, the information is structured differently across different providers.

Despite these differences, quality assurance remains an essential part of the process. To this end, Vicinity Jobs frequently tests and revises its algorithms for collecting, cleaning and structuring the raw data, based both on internal quality assurance checks and ongoing feedback from partners, including LMIC.

Duplicate job postings

One major data-quality issue associated with online job postings is that many employers post the same job on many different websites — in fact, an estimated 80% of job postings are duplicates.

To avoid multiple counting of job postings, de-duplication — removing job postings that appear on multiple websites — is essential.

The process, however, is not perfect. Vicinity Jobs removes an estimated 95% of duplicate job postings each month. But small differences in how the same posting appears, including website layout, mean that a few are still missed.

Methodology

The most sought-after data from online job postings are work requirements and job titles.

To provide users with their primary data of interest, the analysis should link job postings to an occupation (or job title) and to a set of work requirements. The process for doing so can be divided into four broad steps:

  1. Data collection
  2. Data cleaning
  3. Data structuring
  4. Data extraction

Although the four steps proceed in sequence, refining and optimizing each step often requires reviewing the results of one part while iteratively testing another part.

For example, steps 3 and 4 (structuring and extracting) may produce valuable information that can be used to increase the accuracy of de-duplication algorithms used in step 2 (cleaning).

The four-step method for transforming online job postings into useable labour market information and insights is described below.

Step 1: Data collection

Data collection refers to the process of acquiring the text from online job postings. These postings can be extracted from job boards, job aggregator websites or directly from corporate websites.

Data collection typically involves a variety of approaches, including web scraping (downloading raw text from websites) and accessing website content through the host’s dedicated API connection.

In all cases, the data collected are publicly and freely available.

The technique often employed for this purpose is called Document Object Model (DOM) parsing. This allows a user to extract portions of a web page based on its underlying HTML structure.

Specifically, data collection programs can retrieve the dynamic content of a web page by referencing, for example, the Cascading Style Sheet (CSS) selectors associated with specific parts of the page.
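DOM-style extraction can be sketched as follows. This is an illustrative example using only Python's standard library; the element class names (`job-title`, `job-location`) are hypothetical, and real collection pipelines use full DOM parsers with CSS selector support, tailored to each website's structure.

```python
from html.parser import HTMLParser

class PostingParser(HTMLParser):
    """Collects the text inside elements whose class attribute
    matches one of the hypothetical field names below."""
    FIELDS = {"job-title": "title", "job-location": "location"}

    def __init__(self):
        super().__init__()
        self.data = {}
        self._current = None

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class", "")
        if cls in self.FIELDS:
            self._current = self.FIELDS[cls]

    def handle_data(self, data):
        if self._current:
            self.data[self._current] = data.strip()
            self._current = None

html = ('<h1 class="job-title">Data Analyst</h1>'
        '<span class="job-location">Ottawa, ON</span>')

parser = PostingParser()
parser.feed(html)
print(parser.data)  # {'title': 'Data Analyst', 'location': 'Ottawa, ON'}
```

In practice, the selectors differ for every monitored website, which is one reason source vetting and ongoing maintenance matter in this step.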

Importantly, quality assurance protocols are implemented even in this initial step through the selective identification of which websites should be monitored.

Since websites vary widely in terms of the reliability of jobs posted, Vicinity Jobs selects their job posting sources through careful review and vetting prior to starting the data collection process.

Step 2: Data cleaning

After the raw text is downloaded, it must be processed into a form conducive to further statistical analysis. This process is known as feature extraction.

With respect to text data, this process includes tasks such as simplifying punctuation, converting words from plural to singular, replacing abbreviations and collapsing to lower case letters.
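A minimal sketch of such a cleaning pass is shown below. The abbreviation list is a made-up stand-in, and the singularization rule is deliberately naive; production systems use far richer rules or lemmatizers.

```python
import re

def clean_text(text: str) -> str:
    """Lowercase, drop punctuation, expand a few abbreviations,
    and naively singularize plurals. Illustrative only."""
    abbreviations = {"yrs": "years", "exp": "experience"}  # hypothetical
    words = re.findall(r"[a-z]+", text.lower())            # strips punctuation
    words = [abbreviations.get(w, w) for w in words]
    # naive singularization: trim a trailing "s" from longer words
    words = [w[:-1] if w.endswith("s") and len(w) > 3 else w for w in words]
    return " ".join(words)

print(clean_text("5+ Yrs Exp. with spreadsheets!"))
# year experience with spreadsheet
```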

Another phase of the cleaning process focuses on removing duplicate or fake job postings.

To address duplication, Vicinity Jobs uses a de-duplication process built on a natural language processing algorithm that trains statistical models to predict whether two postings refer to the same position. Approximately half of all raw job postings are identified as duplicates and removed.
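Vicinity Jobs' actual models are proprietary, but the core idea of comparing postings for near-identical content can be sketched with a simple text-similarity measure. Here, postings whose word shingles overlap heavily (Jaccard similarity above a hypothetical threshold) are flagged as likely duplicates.

```python
def shingles(text: str, n: int = 3) -> set:
    """Word n-grams ('shingles') of a posting's text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Share of shingles the two postings have in common."""
    return len(a & b) / len(a | b) if a | b else 0.0

def is_duplicate(post_a: str, post_b: str, threshold: float = 0.7) -> bool:
    """Flag two postings as likely duplicates of the same position."""
    return jaccard(shingles(post_a), shingles(post_b)) >= threshold

a = "Seeking a data analyst with strong Excel skills in Ottawa"
b = "Seeking a data analyst with strong Excel skills in Ottawa region"
print(is_duplicate(a, b))  # True
```

A trained statistical model replaces the fixed threshold with learned decision rules, which is what allows it to catch duplicates that differ in layout or boilerplate.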

Step 3: Data structuring

Once data from the job postings have been collected and cleaned, they are ready to be structured and organized into a specific set of classifications or categories.

One particularly important exercise is the classification of job postings into a standard taxonomy of occupations. Examples of taxonomies include the Canadian National Occupational Classification (NOC) and the American Standard Occupational Classification (SOC) systems.

Most algorithms that create this mapping between raw text and occupational classifications do so by examining the job title in the posting. This approach is limited by the high variability of job titles used by employers.

For instance, a programmer might refer to NOC category 2173: software engineers and designers, or to 2174: computer programmers and interactive media developers — depending on the actual requirements of the position.

Similarly, NOC categories might be assigned differently based on industry. For example, a sales representative job would be allocated to different 4-digit NOC codes depending on whether the job is in the financial services, retail or wholesale industry.

More advanced approaches use both the job title and job description to inform the classification process.

One method for mapping job titles to occupations requires a reference list of known job titles and a corpus of terms used to describe occupations.

One such resource is the UK Office for National Statistics’ (ONS) 2010 index, which identifies nearly 30,000 alternative job titles across all occupational unit groups. Similarly, the NOC profiles could be used as a list of titles associated with occupations.

These reference lists are used to further clean the job title text by removing words in the posting title that do not exist within the list of known titles. Afterwards, these job titles are matched to a standard taxonomy of occupations.
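This filtering step can be sketched as follows. The tiny vocabulary here is a stand-in for a real resource such as the ONS index or the NOC profiles.

```python
# Hypothetical reference vocabulary of words that appear in known job titles.
KNOWN_TITLE_WORDS = {"software", "developer", "engineer", "analyst", "senior"}

def clean_title(raw_title: str) -> str:
    """Keep only the words that appear in the known-title vocabulary,
    discarding embellishments employers add to posting titles."""
    words = raw_title.lower().replace("/", " ").split()
    return " ".join(w for w in words if w in KNOWN_TITLE_WORDS)

print(clean_title("Rockstar Software Developer (remote, $$$)"))
# software developer
```

The cleaned title is then matched against the occupation taxonomy, which is far more reliable once the noise words are gone.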

A more recent approach is to use machine learning techniques to build a text classifier for associating job postings with occupational codes (for example, NOC).

After cleaning the titles as described above, the text is converted into vectors of numbers — a process called word embedding. One of the simplest and most flexible models for word embedding is the Bag-of-Words (BoW) model. The BoW model relies on a method called one-hot encoding to transform the text into mathematical vectors. Once the text has been converted into vector representations, a supervised learning model is implemented.

This requires a set of data for which the occupational classification is already known.

A linear support vector machine (SVM) classifier is then trained on the labelled job postings. Once the classifier is built and its effectiveness measured, it can be used to assign occupational codes to newly collected data.
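The Bag-of-Words-plus-linear-SVM pipeline described above can be sketched with scikit-learn. The handful of labelled titles and the NOC codes below are illustrative only; a real training set would contain many thousands of labelled postings.

```python
# Sketch of the supervised step: a Bag-of-Words vectorizer (one-hot style
# word counts) feeding a linear SVM, per the approach described above.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

titles = [
    "software engineer", "software designer",
    "computer programmer", "interactive media developer",
]
noc_codes = ["2173", "2173", "2174", "2174"]  # labels already known

classifier = make_pipeline(CountVectorizer(), LinearSVC())
classifier.fit(titles, noc_codes)

# Assign a code to an unseen title.
print(classifier.predict(["senior software engineer"])[0])  # 2173
```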

Another objective of using data from online job postings is to draw insights on the work requirements (for example, skills) demanded by employers.

This often requires structuring the raw job description text into a more meaningful format. As with the classification of postings into occupational units, the raw skills text can be classified into a predefined taxonomy.

Step 4: Data extraction

The final step in the process is to extract insights from the resulting structured data.

For example, this can take the form of calculating the most frequently requested work requirements by occupation and geography.
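With the postings structured, this calculation is a straightforward aggregation. The sketch below uses pandas with made-up rows to show the shape of the computation: count how often each requirement appears per occupation and region.

```python
import pandas as pd

# Illustrative structured postings: one row per extracted work requirement.
postings = pd.DataFrame({
    "noc":         ["4162", "4162", "4162", "4162", "2174"],
    "region":      ["Ottawa", "Ottawa", "Ottawa", "Toronto", "Ottawa"],
    "requirement": ["Microsoft Excel", "Microsoft Excel", "critical thinking",
                    "Microsoft Excel", "Python"],
})

# Most frequently requested requirements by occupation and geography.
counts = (postings
          .groupby(["noc", "region", "requirement"])
          .size()
          .sort_values(ascending=False))

print(counts.head())
```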

Since work requirements are a broad category, it is informative to further organize these into a hierarchical classification.

To that end, we categorize work requirements into four categories based on ESDC’s Skills and Competencies Taxonomy: skills, knowledge, tools and technology, and other.

We collapsed the remaining categories of interests, personal abilities and attributes, work activities, and work context into other.

Classification

Vicinity Jobs’ approach for linking job postings to NOC

Vicinity Jobs uses an iterative process that first attempts to match job postings to NOC based on their job titles. The allocation is further refined by using additional anchors from the content of each job posting and known information about the employer.

This process relies on Vicinity Jobs’ proprietary knowledgebase that encapsulates tens of thousands of job titles.

This knowledgebase was originally compiled from available NOC specification sources (for example, ESDC’s occupational profiles), then refined using historic job postings. Further refinements in linking job postings to a NOC are made by incorporating contextual knowledge, since certain job titles can only be allocated to certain occupations in specific contexts.

To identify the appropriate contexts, Vicinity Jobs’ algorithm attempts to allocate postings to industries. It does so, in part, by matching employer data (such as employer name and URL) to known employer profiles stored in a vast, up-to-date database of known Canadian employers.

After postings have been organized into Career Handbook occupations, a set of unassigned job postings remains. The content and job titles of these postings are not sufficiently specific, or the nature of the jobs does not allow for an allocation to a single Career Handbook Occupation.

Instead of forcing or guessing, the algorithm attempts to assign as many of them as possible to the less specific broad occupational category (1-digit NOC).

Vicinity Jobs’ work requirements taxonomy

Vicinity Jobs works to ensure their work requirements and certifications reporting meet their quality standards while remaining representative of the unique aspects of Canadian labour markets.

They have established a taxonomy for work requirements that combines extensive input from their clients and partners with information from publicly available sources such as the O*NET taxonomy.

Data from publicly available sources contains tools and technologies, structured skills keywords published by certain websites, and other work requirements data assembled from Canadian online job postings and research.

After compiling an initial version of their work requirements taxonomy, Vicinity Jobs uses a proprietary algorithm to enrich and refine their knowledge about work requirements and the linguistic constructs that represent them.

This process involves a sequence of iterative steps. In each iteration, the knowledgebase is tested against a broad set of unstructured job posting content, which is then used to identify work requirements.

The resulting data are manually reviewed, and this feedback is used to fine-tune the algorithms.

The resulting knowledgebase encompasses tens of thousands of work requirements and certifications. The knowledgebase includes labels and identifying keyword combinations, and a complex framework of underlying interdependencies and context definitions needed to ensure accurate identification of work requirements.

For example, the system knows that certain work requirements and certifications could apply to any job, while others are only relevant to specific occupations.

Contact Us

410 Laurier Avenue West
Suite 410
Ottawa, Ontario K1R 1B7
