Generative AI (GenAI) has been hugely popular with the public for over a year now, but adopting a technology with such disruptive potential takes longer in the business world. In fact, many organizations remain resistant to introducing GenAI into their business models in any capacity, with security concerns a primary reason for their apprehension.
In the minds of senior executives, the potential productivity benefits of GenAI are outweighed by the danger of data finding its way into the wrong hands. But what if this refusal to adopt AI in a formal capacity, or to set clear policy on AI use, is already creating a culture of covert AI usage and putting sensitive data at risk?
We saw this same dynamic play out only a few years ago with shadow IT.
What is shadow IT?
Shadow IT describes technology, whether software or hardware, that is used for workplace activities without the IT team’s knowledge or approval. This could be as simple as accessing secure workplace networks through an unprotected phone, opening confidential files on a personal laptop, or using a cloud service without central IT oversight.
For IT professionals, the term “shadow IT” should ring alarm bells. Unknown devices or software accessing an organization's data can lead to security vulnerabilities, data breaches and infringements of data regulations: all serious events that can compromise the integrity and reputation of any business.
In today's technological landscape, AI tools are the latest innovations offering potential value across all levels of a business, while also representing the newest concern for IT teams trying to keep their organizations free of risk. Without adequate measures in place, businesses face a considerable challenge: ensuring that their employees use AI responsibly, without resorting to the sorts of practices we used to call “shadow IT”.
However, history has shown that whenever a major new technology promises fresh capabilities and substantial benefits, this sort of unofficial use is likely to rear its head…
iPhones and GenAI - not so different, you and I
In many ways, the current atmosphere around GenAI is comparable to the iPhone boom of the late 2000s: the monumental launch that first defined shadow IT and its potential impacts.
During this time, businesses found that a growing number of employees were adopting this new technology before IT departments were adequately prepared for it. Previously, only a small minority of workers would bring personal laptops or early smartphones into the workplace, but the launch of the iPhone created significant demand for new mobile management practices.
Where such policies were not implemented, employees would take it upon themselves to change the way they worked with their new devices, often rerouting mail and documents to their personal accounts in order to use their iPhones. Many organizations recognized the risks of a wave of new technology accessing sensitive data, but also saw the benefits of a more accessible workforce on the go, with the iPhone offering many of the capabilities of a portable computer without the size and weight of a laptop. The biggest losers were the IT leaders who risked their organization's security by not addressing the problem at all.
GenAI in the workplace
Like the iPhone before it, GenAI has entered the mass-market consumer space. Even where businesses refuse to acknowledge the new technology in their midst, many people are already using it to enhance their work. Services like Grammarly and ChatGPT have become commonplace tools that have justified their status with demonstrated business value, and the workforce knows it.
The fact of the matter is that businesses that do not recognize and monitor the wave of AI tools and services already being adopted may find themselves unprepared for the changes this brings to the threat landscape. Employees see the productivity benefits of AI and aren’t shying away from using it in the workplace, whether sanctioned by the IT department or not.
In fact, SnapLogic's recent study into Generative AI found that 40% of office workers surveyed had used GenAI for their work without disclosing it to their employer or colleagues. Pair this with a more concerning figure, that over two-thirds say they don’t understand AI well enough for their role, and it becomes evident that a lack of instruction and education from organizations isn’t deterring employees from using GenAI in ways that may put sensitive data at risk.
The survey results also point to an opportunity for businesses to educate their workforce and champion AI best practices. Workers are using GenAI because they can see the benefits it provides, with 47% saying they think GenAI could save them 6-10 hours of work per week in the future. This isn’t just a risk to be mitigated; it’s an opportunity to take advantage of technology that provides measurable business value when the correct safety measures are taken.
The smart business leader will see these parallels and learn from the mistakes made during earlier waves of shadow IT, fostering an AI strategy that allows employees to harness the transformative productivity benefits of generative AI while avoiding the risks that unidentified AI tools or unapproved practices could bring.
Moving forward
The key takeaway here is not to take a completely hands-off approach to the wave of GenAI tools now available. Instead, IT departments need to enter a consultative partnership with the rest of their business, providing much-needed guidance on how to engage with AI in a way that encourages safe use while maximizing productivity wherever possible. This should be combined with defined guardrails that allow employees to experiment without exposing the business to risk. The ultimate goal is to take inspiration from mass-market consumer AI and replicate its many productivity benefits in a business environment, taking advantage of the organization's own data to simplify processes, improve accuracy, and make activities more user-friendly - for example, by letting employees use natural language prompts to access internal systems while staying compliant with relevant security and regulatory policies.
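What a guardrail looks like in practice will vary by organization, but one common starting point is screening prompts for obviously sensitive data before they reach any external GenAI service. The following is a minimal, hypothetical sketch of that idea in Python; the regex patterns and the send_to_approved_genai_service() placeholder are illustrative assumptions, not a complete data loss prevention tool.

```python
import re

# Hypothetical redaction patterns; a real deployment would use the
# organization's own DLP rules rather than these illustrative regexes.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def redact_sensitive(prompt: str) -> str:
    """Replace anything matching a known sensitive pattern with a labelled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

def send_to_approved_genai_service(prompt: str) -> str:
    # Placeholder for whichever GenAI endpoint the IT team has actually sanctioned.
    raise NotImplementedError("Wire this up to the organization's approved service.")

if __name__ == "__main__":
    raw = "Summarize this note from jane.doe@example.com about card 4111 1111 1111 1111."
    print(redact_sensitive(raw))
    # Prints: Summarize this note from [REDACTED EMAIL] about card [REDACTED CARD_NUMBER].
```

In a real deployment, a filter of this kind would typically sit behind a centrally managed gateway or proxy, so that policy is enforced consistently rather than relying on individual discipline.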
Creating an atmosphere of openness and curiosity gives IT teams visibility into AI adoption, enabling proper evaluation of its risks and benefits. In doing so, IT teams can build trust within their organization while de-risking what could be a game changer for productivity and efficiency.