Shadow AI: Why enterprise overblocking can backfire

This article looks at how enterprises are responding to the risks of employees using unauthorized AI tools, known as shadow AI. Research shows that 53% of knowledge workers use unapproved AI tools, and that heavy-handed blocking can actually increase the risk of data leakage. The article proposes a balanced approach that combines policy controls with user education.

With the rise of shadow AI, sometimes the governance is worse than the problem.

Organizations need to put policies and restrictions around AI productivity tools, but they also need to make sure those policies don't do more harm than good.

Long ago, the Informa TechTarget network had a website called ConsumerizeIT.com that dealt with the consumerization of IT: essentially, how IT was being affected by users who were far more savvy than ever before. Users knew how they wanted to work, which devices, tools and applications they wanted to use and, more importantly, how to get what they needed without IT's help. We called this FUIT, pronounced "foo-it," which is Latin for "has been," as in "IT has been in charge." In practice, it just means "F U, IT."

It all started with the introduction of smartphones and a younger workforce of millennials and Gen Z who had never known a world without the internet. In the years since, we have managed to adapt. More or less, users and IT have worked in harmony as subsequent generations entered the workforce. But the roots of FUIT go back to the earliest days of business computing, when IT took a heavy-handed approach to policies and tools. It was IT's way or the highway, so to speak.

Back then, that was simply because end users didn't know how to use these tools. Today things are far more end-user- and use-case-centric, yet when something new comes along, we tend to fall back on those more draconian ways. As you might have guessed, one of those new things is AI.

Shadow AI is the new FUIT

The spread of AI productivity tools represents a new wave of this trend, and end-user AI usage is a key area of focus for me. In fact, research I recently conducted shows that 79% of organizations formally support and deploy AI services such as ChatGPT, Copilot or Gemini to their end users. Perhaps most interesting, though, is how rampant shadow AI use is within organizations.

My research, titled "AI at the Endpoint: Tracking the Impact of AI on End Users and Endpoints," shows that 53% of corporate knowledge workers admit to using unauthorized AI tools, also known as shadow AI. And despite organizations' efforts to monitor, manage and block shadow AI, 44% of users said that they or their coworkers not only use shadow AI, but also put privileged, private or confidential data into those unauthorized tools.

Tying this back to FUIT, what I find interesting is that IT still believes it can outright block things and expect users to comply. Users will simply find another way around, much like ants rerouting when you drop a leaf in front of their trail.

This takes many forms. Some organizations roll out their own custom AI tools for internal use. That can work, especially if the tool is tied into corporate data, policies, HR systems and more. The main challenge is keeping that customized tool up to date with the rapid progression of large language model technology and features.
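One common way to soften that "keeping up" problem is to hide the model choice behind a thin internal gateway, so swapping LLM backends is a configuration change rather than a rebuild. Here is a minimal sketch of that design, not anything from the research above; the names (`Backend`, `MODEL_BACKENDS`, the `ai.example.internal` endpoints) are all hypothetical.

```python
# Hypothetical sketch: an internal AI gateway that keeps the choice of LLM
# behind one interface, so IT can swap backends as the technology moves on.
# Backend, MODEL_BACKENDS and the endpoints are illustrative, not a real product.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Backend:
    name: str            # label for the underlying model
    endpoint: str        # e.g., a chat endpoint hosted inside the firewall
    redact: bool = True  # run corporate DLP redaction before forwarding

# Upgrading the model becomes a one-line config change, not a rebuild of the tool.
MODEL_BACKENDS = {
    "default": Backend("current-model", "https://ai.example.internal/v1/chat"),
    "pilot": Backend("next-model", "https://ai.example.internal/v1/pilot"),
}

def handle_prompt(prompt: str, redact_fn: Callable[[str], str],
                  tier: str = "default") -> str:
    """Apply DLP redaction, then forward the prompt to the configured backend."""
    backend = MODEL_BACKENDS[tier]
    if backend.redact:
        prompt = redact_fn(prompt)  # strip classified data before it leaves the app
    # A real gateway would POST to backend.endpoint here; stubbed for this sketch.
    return f"[{backend.name} would receive {len(prompt)} characters]"

if __name__ == "__main__":
    print(handle_prompt("Summarize our Q3 sales figures.", lambda p: p))
```

The point of the indirection is that the maintenance burden shifts from rebuilding the custom tool to validating a new backend entry.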

Others might standardize on a specific public model so they can always be assured of having the latest capabilities.

All of this is done in the name of security, data loss prevention (DLP), corporate intellectual property (IP) protection and many other things, and many of those reasons make sense. The problem is that if the sanctioned tool falls behind technically, or lacks features compared with the tools end users are accustomed to, users will inevitably find ways around it. And those ways are often less secure and more problematic.

Take blocking a public model such as Google Gemini, for example, though this could apply to any model. If users are accustomed to that tool and don't want to adapt to a new edict, blocking it has little effect on them. Think of all the ways around it. Some of these are ridiculous, but they also illustrate the overreaction of IT straight-up blocking things:

  • Users can take photos of content and upload them to the Gemini app on their phones. Blocked at the network level? Fine, they can just turn off Wi-Fi.
  • They can disable security controls on their devices.
  • They can manually type in whatever is on their screens.
  • They can start doing all their work on personal devices, resulting in a potentially devastating DLP scenario that is likely far worse than the one the company is avoiding by blocking Gemini.
  • They can email documents to personal email addresses.
  • They can do even crazier things, like uploading a document to their personal Google Drive, where it could get pulled into Gemini anyway, then downloading it elsewhere. So now you have to block Google Drive, too, except lots of people use that for non-AI but still work-related purposes, so that brings added complications.

The list goes on, and it ultimately reveals a universal truth about scorched-earth IT policies: You can't block everything. It also reminds me of something I was told years ago: Sometimes the "solution" is worse than the problem. Ask yourself which is worse: a model somehow training on my data and perhaps incorporating some random piece of corporate IP into a response in the distant future, or my users pasting that IP, intact, into easily accessible, insecure and unmonitored locations? Both are bad, but be clear about which is worse.

Proactive education will help with AI policy rollouts

There has to be a middle ground. We can't just have a free-for-all, anything-goes scenario, right? Especially when there is such rapid change and an explosion of both good and bad tools that are nearly impossible to tell apart. Seriously, just search for ChatGPT in your phone's app store and see how many things look exactly the same.

Policies must be created with the needs of all involved parties in mind, not handed down as blanket, heavy-handed edicts.

The approach has to be flexible, addressing the needs of the business, IT and security teams, and end users. It almost certainly includes a combination of the following:

  • Strategically blocking the sites and services that should be blocked, such as knockoff ChatGPT middlemen, or perhaps maintaining an allowlist of reputable platforms so you don't have to keep up with maintaining a blocklist (a minimal sketch of the matching logic follows this list).
  • Developing a clear understanding of how end users incorporate AI into their workflows. My research data shows that, by and large, end users turn to shadow AI for the same reasons the business wants to use AI: productivity, automation and content quality. IT can build on that. Depending on the stance IT has taken thus far, that might require some amnesty from the powers that be. Better to offer that now than later, when bad habits could be even more entrenched.
  • Creating policies with the needs of all involved parties in mind, not just blanket, heavy-handed edicts. They should state which platforms can be used for which purposes, what classifications of data can be used on those platforms, and other requirements, such as only allowing paid subscriptions with training turned off.
  • Educating end users on the existence of and reasoning behind those policies so that they think twice about the data they post and how they interact with it.
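To make the allowlist idea from the first bullet concrete, here is a minimal sketch of the host-matching check a web proxy or DNS filter might apply. The domain list and function name are hypothetical examples for illustration, not an endorsement of specific platforms or of any particular filtering product.

```python
# Hypothetical sketch: admit only vetted AI platforms instead of chasing every
# knockoff with a blocklist. ALLOWED_AI_DOMAINS is an illustrative example.

ALLOWED_AI_DOMAINS = {
    "chatgpt.com",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def is_allowed(host: str) -> bool:
    """Return True if the host is a vetted platform or a subdomain of one."""
    host = host.lower().rstrip(".")
    return any(host == d or host.endswith("." + d) for d in ALLOWED_AI_DOMAINS)

assert is_allowed("gemini.google.com")
assert is_allowed("api.gemini.google.com")
assert not is_allowed("totally-real-chatgpt.example")  # knockoff middleman stays out
```

Note that a filter like this only works alongside the understanding, policy and education steps above; on its own, it just restarts the cat-and-mouse game described earlier.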

AI policies are just one part of the path forward

Sometimes organizations think policies are enough, and that once everyone has read them, they're understood. In reality, policies only go so far. They're great at giving organizations reasons to fire people "for cause," but it's not always clear why a policy exists or what it means. Of course, that's an incentive to create higher-level policies like "no AI tools but the one we make you use," but that's how we got into this situation in the first place.

The real key to success here is education, and once again, this is something we have research on. Just 19% of corporate knowledge workers said they were completely confident in their ability to assess the security, compliance and privacy risks of using unauthorized AI tools, which indicates that we can do more to train end users.

Similarly, 74% of knowledge workers said their organization had not done a thorough job of communicating the risks associated with AI, which again points to an opportunity for education.

Trying to block everything can feel like the most secure approach, but it might actually make things worse. That's the reality in 2025, when end users are savvy enough to find ways around the blockades put in place. Heck, they can just ask AI how to circumvent the block. You can't keep up with that.

The path forward requires an understanding that, in most situations, you can't block everything. Policies are important. Blocking certain tools is important. Delivering company-specific, integrated tools is useful. But you have to do all of those things, and in a way that meets the needs of end users in addition to those of the business and security teams.

It's understandable that IT's knee-jerk reaction is to block everything. Just be aware that doing so carries more unexpected consequences than expected ones.

Gabe Knuth is a principal analyst at Enterprise Strategy Group, now part of Omdia, covering end-user computing. Enterprise Strategy Group analysts have business relationships with technology vendors.
