- Anthropic has an “AI policy” for job candidates that discourages use of the technology during the application process. The company says it wants to assess candidates’ own, human communication skills.
Anthropic is known for its AI innovations—but the company doesn’t want job candidates using the technology.
Anthropic’s job postings include an “AI policy” for candidates hoping to land a gig at the $60 billion AI juggernaut. The application process asks people to answer a “Why do you want to work here?” question in addition to submitting a cover letter, but it adds a stipulation.
“While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process,” the policy reads. “We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills.”
The rule applies to all job openings. Anyone hoping to snag a position, whether as a legislative analyst, an account executive, or an external affairs officer, is beholden to the policy. Coming from one of the leading AI developers, though, the policy is ironic.
The business touts the extensive promise and applications of AI, including Claude for Enterprise, a version of its model built for large organizations. Google has even thrown its hat into the ring, making a $2 billion investment in the company in 2023. But Anthropic draws a hard line when it comes to job seekers using the tools.
Anthropic’s disdain for job seekers using AI to help build their applications isn’t uncommon. Companies and their managers have long been caught in an AI arms race that is upending the hiring process, and both applicants and recruiters are now wrestling with how to use the tech.
Companies discouraging job applicants from using AI
Anthropic is just one of many employers discouraging prospective hires from using AI, though the stance feels stranger coming from a company built entirely on the technology. Still, the reasoning behind its policy echoes that of other businesses.
About 80% of hiring managers dislike seeing AI-generated CVs and cover letters, according to 2024 data from CV Genius. And they’re confident in being able to pick up on the automated content; around 74% say they can spot when AI has been used on a job application. That can have a damaging effect on an applicant's prospects, as over half of those hiring managers say they are significantly less likely to hire an applicant who has used AI.
Yet AI has become deeply ingrained in people’s personal and work lives. About 57% of job candidates used OpenAI’s ChatGPT in their applications, according to 2024 data from Neurosight. Companies are promoting the technology, too: around 70% of workers say their organizations have provided training on how to use generative AI correctly, and 90% of their bosses agree, according to a recent study from Accenture. Businesses continue to encourage their workers to become digitally savvy and to use the tools on the job to boost productivity.
This rush toward digital optimization has left employers scrambling for human skills. Anthropic’s policy reflects the growing demand for soft skills, as AI isn’t capable of authentic communication, storytelling, or emotional intelligence. These qualities are top of mind for companies, even when hiring their next top executive.
Many disgruntled hiring managers are hoping to find these qualities in candidates, and AI-optimized cover letter responses may not be the best litmus test. But as long as recruiters have to plow through thousands of applications for a single role, and job seekers have to apply to hundreds of jobs to land one, AI will stick around in the process.