Dare You Use Artificial Intelligence To Write Better Job Postings?
Gordon Pelosse is senior vice president, employer engagement at CompTIA. Unlocking the potential in millions of under- and unemployed.
For years we have been talking and writing about how most of the jobs in tech don’t require a degree. At the end of 2022, the governor of Utah announced the state will no longer require a degree for most state jobs and will instead focus on experience. We have seen many companies, such as IBM, Google, Amazon, LinkedIn and thousands of others, come out with policies that explicitly say they no longer require applicants to have a four-year degree. Many have spoken about how skills matter most and how we need a skills-based requirement. A recent study by Cengage Group showed 47% of tech employers say skills training credentials are most important when considering a candidate for an entry-level position. Only 26% say college degrees are impactful, and yet we continue to see 81% of tech employers’ job postings requiring a degree.
Why does such a fundamental contradiction exist? Why are so many talented, skilled, experienced and diverse individuals being screened out? The answer is the lack of a college degree. “Many employers still require a four-year degree because they have not updated or innovated how they screen talent and evaluate their potential,” according to the study published by Cengage Group.
We are hearing more and more about a number of artificial intelligence (AI) tools that could help you with time-consuming tasks or even write the perfect essay or perhaps the perfect research paper—and in the case of HR specialists, write the perfect job posting. It seems like a great idea, leveraging the power of big data and artificial intelligence technology to help with such a task, pulling from massive sources of data to build a “perfectly” written output. The challenge, of course, is that the AI is using all the data available to it, good and bad, and if that data is biased or flawed, it (and you) could perpetuate that flaw or bias.
I ran a simple query on the AI engine ChatGPT: “Generate an ideal job posting for a cybersecurity specialist.” In 10 seconds I had an answer; it included a minimum of a bachelor’s or master’s degree and five years of experience. Hmm, I must have done something wrong with my ask. So I tried a variant, adding the words “entry-level”: “Generate an ideal job posting for an entry-level cybersecurity specialist.” Again, in five to 10 seconds, I had an answer; it included a minimum of a bachelor’s or master’s degree, with no experience defined. Not much better, as a degree is certainly not required for this entry-level role.
Okay, perhaps “cyber” is too new or too challenging, so I tried a much easier task: “Generate a job posting for an entry-level IT help desk technician.” Just 10 seconds later, it produced a beautifully written, perfectly formatted one-page job posting with a summary, responsibilities and background, noting that a bachelor’s degree was preferred but not required. However, it still listed zero to two years of required experience.
While this output was better, it is still concerning. We all know an entry-level help desk technician doesn’t require a degree, and in most cases even saying a degree is recommended or preferred can be problematic.
What is this telling us? Some AI engines search millions of existing job postings and look for the consensus results; if most historical postings preferred a degree (rightly or wrongly), the AI decided a degree is a preference. Other AI engines arrive at answers by making a series of guesses, which is why they can get the answer completely wrong.
While AI and tools like ChatGPT can be helpful in some circumstances, AI has its limitations: it either guesses at probable answers or draws on flawed, biased historical data, repeating that bias into the future.
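The consensus mechanism described above can be sketched in a few lines of Python. This is a deliberately simplified illustration, not how any real AI engine is implemented; the corpus and percentages are made up to echo the 81% statistic cited earlier:

```python
from collections import Counter

# Illustrative "historical corpus": whether past postings for a role
# required a degree. The 81/19 split is hypothetical, chosen to mirror
# the 81% figure from the article, not real data.
historical_postings = ["degree required"] * 81 + ["no degree required"] * 19

def consensus_requirement(postings):
    """Return whichever requirement appears most often in past postings."""
    counts = Counter(postings)
    return counts.most_common(1)[0][0]

# A "generated" posting simply inherits the majority position: if most
# historical postings required a degree, so will the new one.
print(consensus_requirement(historical_postings))  # -> degree required
```

The point of the sketch is that nothing in the logic asks whether the requirement is justified; a tool that optimizes for consensus with past postings can only reproduce whatever bias those postings contain.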
This is not the first or last limitation of AI; we have case after case where the tools have gotten it very wrong. You may recall an AI project that was scrapped at Amazon. The tool was built to screen résumés and select the top five candidates for interviews from hundreds, streamlining the hiring process. It was found that the tool was not rating candidates in a gender-neutral way and had taught itself that male candidates were preferable to female candidates.
If we want to improve our hiring, we must make that investment in innovation to screen talent for skills and not rely on shortcut tools. While AI may write a nice-looking job posting, it often simply perpetuates historical practices that emphasize the mistakes of the past.
Be informed and beware of the possibility of real harm. Imagine the risks if the answers provided by AI tools are wrong and we treat them as absolute in critical situations where we need to trust our tools.
Forbes Human Resources Council is an invitation-only organization for HR executives across all industries.