
Is finding a job increasingly complicated because of AI? Recruitment software is now widespread, but companies often underestimate its limitations.
More and more companies are adopting artificial intelligence systems to manage hiring, from posting job ads to conducting interviews. Human resources is, in fact, one of the main areas, along with IT, where AI has found wide application. At the same time, a broad debate has emerged over the risks and limits of these procedures, often deemed excessively impersonal. This drift caused by the overuse of AI in recruiting, especially among large corporations, was recently highlighted by The Atlantic in a much-discussed article: "The process of getting a job has become a late-capitalism nightmare," the U.S. magazine writes. "Online hiring platforms have made it easier to find job opportunities but harder to actually get them: candidates are submitting thousands of AI-generated résumés, and companies are using AI to filter them out."
In parallel, according to the Economic Times, OpenAI has announced an AI-powered hiring platform launching in 2026, directly challenging LinkedIn. The project aims to connect businesses with AI-skilled workers, using algorithms to find the "perfect match" between companies' needs and candidates' abilities. At the same time, the company introduced a certification program through the OpenAI Academy, in partnership with Walmart, with the declared goal of upskilling 10 million Americans by 2030. "Anyone looking to hire, whether through the Jobs Platform or elsewhere, needs to be sure candidates are truly fluent in AI," explained Fidji Simo, OpenAI's CEO of Applications.
In essence, as The Atlantic explains, "the difficulty of reaching the individual interview stage pushes job seekers to submit more applications, relying on ChatGPT to tweak their résumés and answer pre-screening questions. And so the cycle continues: the surge of nearly identical, AI-written applications leads employers to rely on automatic filters to manage the hiring process." In this context, the magazine notes, the U.S. "hiring rate has fallen to its lowest point since the recovery following the Great Recession," a recovery that began in 2009.
How to use AI for a job interview
"reviewed 17 job applications and they all looked, read and sounded the same because of AI. i was so shocked"
— kemi marie (@kemimarie) July 15, 2025
Since the pandemic, more attention has been paid to so-called "soft skills": the relational and behavioral abilities that go beyond technical preparation and often make the difference within a team. Assessing these aspects, however, is not simple and often requires longer, more structured processes, which is one reason many organizations turn to AI-based systems, not always introduced with adequate preliminary testing or error-correction mechanisms. It is also worth noting that applying for a job today requires greater effort than in the past: especially at large companies, hiring almost always involves multiple interviews and may include logic tests, video résumés, specific exercises, or group interviews.
The widespread perception is that these procedures serve mainly to reduce the number of applicants and pre-filter those who are not sufficiently motivated, lightening the workload of résumé screening. It is also true that companies increasingly post generic job ads mainly to gather profiles for future needs: some large corporations, in fact, keep frequently recurring positions permanently open even when they have no immediate hiring plans. All these dynamics help make today's job market a "hell," as The Atlantic calls it.
An interview with a chatbot?
AI systems used in hiring today face two main limitations. On the one hand, they share a problem common to many audio and video analysis tools increasingly used in the early stages of recruiting: voice assistants, even the most advanced ones such as those developed by Google, Amazon, or Apple, do not always correctly interpret every nuance of a candidate's speech. On the other hand, certain groups of people risk being penalized because they are underrepresented in the datasets these systems are "trained" on.
A well-known case dates back to 2018, when a Reuters investigation revealed the flaws of an experimental algorithm Amazon had developed to evaluate candidates with machine learning techniques: the software assigned scores from one to five stars but was found to introduce evident gender bias. This happened because the model was trained on résumés submitted to Amazon over the previous ten years: most came from men, a reflection of male dominance in the tech sector, and so the recruiting algorithm ended up penalizing female applicants, downgrading, among other things, résumés containing the word "women's."
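To make the mechanism concrete, here is a deliberately simplified toy sketch, not Amazon's actual system (whose details were never published), using made-up résumé snippets. A crude word-frequency scorer "trained" on a gender-skewed set of past hiring decisions learns to penalize a token like "women's" simply because it is rare among past hires:

```python
from collections import Counter

# Toy historical data: past hiring outcomes from a male-dominated pool.
# All names and résumé snippets are invented for illustration.
hired = [
    "software engineer chess club captain",
    "software engineer robotics team",
    "backend developer chess club",
    "software engineer systems programming",
]
rejected = [
    "software engineer women's chess club captain",
    "frontend developer women's coding society",
]

def token_scores(pos, neg):
    """Score each token by how much more often it appears in hired
    résumés than in rejected ones (a crude frequency-difference signal)."""
    p, n = Counter(), Counter()
    for doc in pos:
        p.update(doc.split())
    for doc in neg:
        n.update(doc.split())
    vocab = set(p) | set(n)
    return {t: p[t] / len(pos) - n[t] / len(neg) for t in vocab}

def rank(resume, scores):
    """Sum the learned token scores for one résumé."""
    return sum(scores.get(t, 0.0) for t in resume.split())

scores = token_scores(hired, rejected)

# Two résumés identical except for a single token:
a = rank("software engineer chess club captain", scores)
b = rank("software engineer women's chess club captain", scores)
print(a > b)  # prints True: "women's" alone lowers the score
```

The point of the sketch is that the scorer never sees gender explicitly; it penalizes a proxy token purely because the historical data was skewed, which is essentially the failure mode Reuters described.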