
- As more companies and leaders embrace AI, a new Microsoft study finds troubling implications for the human workers who use it.
Business leaders have been urging workers to get AI training to stay relevant in their roles, but it might just keep employees stagnant. New research from authors at Microsoft and Carnegie Mellon University finds that leaning too much on tools such as ChatGPT is associated with weaker critical thinking.
Surveying 319 knowledge workers who shared 936 first-hand examples of using generative AI at work, the authors sought to gauge how workers perceive their own critical thinking and how GenAI affects that process. Respondents were asked how they used AI tools, how confident they were in the AI’s abilities, how able they felt to judge the AI’s work, and how confident they were that they could complete the same task without AI.
Researchers found that higher confidence in GenAI was associated with less critical thinking, and that higher self-confidence was associated with greater critical thinking.
Depending on AI for low-stakes tasks like proofreading might appear benign but “can lead to significant negative outcomes in high-stakes contexts,” the authors write, pointing out that it’s “risky for users to only apply critical thinking in high-stakes situations.” And without routinely exercising that thought process, the researchers find, “cognitive abilities can deteriorate over time.”
“While AI can improve efficiency, it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI, raising concerns about long-term reliance and diminished independent problem-solving,” the researchers write, citing past findings that dependence on technology deprives humans of the ability to continually hone their judgment skills and “leav[es] them atrophied and unprepared when the exceptions do arise.”
“Across all of our research, there is a common thread: AI works best as a thought partner, complementing the work people do. When AI challenges us, it doesn’t just boost productivity; it drives better decisions and stronger outcomes,” Lev Tankelevitch, a senior researcher at Microsoft Research and one of the authors, said in an emailed statement to Fortune. Noting that there is some evidence AI can enhance critical thinking when its use is human-led and guided by educators, Tankelevitch adds that “on the flip side, our survey-based study suggests that when people view a task as low-stakes, they may not review outputs as critically.”
The researchers also find that those who use AI in place of critical thinking are more likely to end up with a “less diverse set of outcomes for the same task, compared to those without.” While acknowledging that GenAI can improve workers’ efficiency, the authors warn of its long-term impacts.
“Used improperly, technologies can and do result in the deterioration of cognitive faculties that ought to be preserved,” they note.
Big Tech and the government’s embrace of AI
A whopping 68% of executives plan to invest between $50 million and $250 million in AI over the coming year, according to KPMG’s latest AI Quarterly Pulse Survey. The current administration has also thrown its weight behind the technology: in late January, President Trump announced Stargate, a newly formed venture that plans to pour as much as $500 billion into AI infrastructure. Microsoft, meanwhile, is running its own AI race with its tool Microsoft Copilot.
Silicon Valley’s major arguments for AI often rest on its supposed boost to productivity and its ability to offload dull tasks so that humans can take on higher-level responsibilities. But AI may have a more sinister impact than optimists suggest, potentially dulling our ability to tackle the more complex problems when they come our way.