One in 10 use ChatGPT for work

Plus, why you should use AI and what Microsoft will charge for Copilot

Hello, and welcome to Issue 8 of The AI Writer, your weekly update on workplace AI.

This week, we reveal just how many people are sharing confidential company information with ChatGPT. (Short answer: a lot.)

And we report on how Microsoft is launching a corporate version of its search chatbot that might stop leaders lying awake at night fretting about such security issues.

Plus, we’ve got news of what the tech giant plans to charge users for AI functions in Word, Excel and PowerPoint (when it eventually releases them).

And if you’re still hesitating about using AI, check out our new series on 101 reasons to try ChatGPT (safely).

This issue will take you 5 minutes 7 seconds to read in full. Too long? Just read the summaries in italics in a mere 23 seconds instead.

Let’s dive in.

– Rob Ashton

In this issue

  • Thousands share sensitive data with ChatGPT

  • Microsoft reveals fee for AI functions in Word …

  • … but adds them free to corporate web browsers

  • New series: 101 ways AI can make work easier

  • Why we’ve created The AI Writer

SECURITY

75,000 employees revealed to have shared sensitive data with ChatGPT

Bar chart showing cumulative percentages of employees who have used ChatGPT at work, and how. By June 2023, nearly 11% had used it, more than 8% had pasted in company data and almost 5% had fed it confidential information.

Cumulative figures for employees using ChatGPT at work (Cyberhaven)

More than one in ten employees have tried ChatGPT at work. And almost half of those have uploaded sensitive corporate information, including internal data, computer code and customer details.

The unchecked use of ChatGPT in the workplace poses a growing risk to data security, according to research.

Figures released by the makers of Cyberhaven – an app that tracks the flow of corporate information – reveal that an increasing number of employees are prompting the chatbot with confidential company data as more start to use it for routine tasks.

Tracking data

When the software provider examined tracking data from its 1.6 million users, it found that more than one in ten (10.8%) have now tried using the generative AI app at least once in the workplace.

Most of those (8.6%) have fed it company data. And almost one in 20 (4.7%) have pasted in confidential information – a total of around 75,200 Cyberhaven users alone.

Sensitive internal data was the information most commonly leaked to the chatbot, followed by computer code and client details.

Such security slip-ups are easy to make but can have serious consequences. An executive might paste bullet points from their company’s 2023 strategy document into ChatGPT and ask it to turn them into a PowerPoint slide deck.

But if a competitor later asks ‘what are [company name]’s strategic priorities this year?’ the fear is that ChatGPT could answer based on the information the executive had provided.

Company bans

This issue has already led a growing number of high-profile companies to restrict (or ban) ChatGPT.

Apple recently followed such moves by Amazon and JP Morgan. And earlier this year, so did Samsung, after it reportedly discovered employees using ChatGPT to debug source code and to summarise transcripts of internal meetings.

But once people discover the power of generative AI to ease their workload, the temptation to use it with or without their employers’ blessing is often irresistible. With the genie well and truly out of the bottle, education may be a more effective strategy than outright bans.

AI ROLLOUT

Microsoft to charge $30 a month for AI in Word, Excel, PowerPoint, Teams

Close-up image of Satya Nadella

Smiling again: The fee could ease CEO Satya Nadella’s AI money worries (Microsoft)

Microsoft has announced it will charge only users who opt in to its Copilot AI functions rather than spreading the cost among everyone.

AI technology is expensive for the companies that provide it. ChatGPT, for example, is said to be costing OpenAI $700,000 a day. Ultimately, someone has to pay for it.

We reported rumours from a reliable source back in early June that Microsoft was weighing up two options to cover its own hefty costs for Copilot, the AI sidekick that it’s building into Microsoft 365 (formerly MS Office).

The first was to jack up the price for everyone. The second? Charge a premium only for those who want access to AI functions.

Now, it looks like it’s gone for the latter. The company has announced that, when it goes live, Microsoft 365 Copilot will be available to commercial customers for an extra $30 per user per month.

Microsoft could face an uphill battle, though. According to a new survey, only 18% of companies are investing in generative AI at the moment, even though far more are experimenting with it.

WHY AI?

NEW: 101 reasons to use AI (safely)

Screenshot extract of ChatGPT’s response to the prompt: ‘Explain blockchain to me as if I were five years old.’ Extract reads: ‘Alright, imagine you have a special notebook that you share with all your friends at school. Instead of just one person keeping the notebook, everyone has a copy, and they all keep track of the same things together.’

ChatGPT explaining blockchain to a five-year-old (see below).

Not sure why you should use ChatGPT? Try asking it to help you understand a complex, unfamiliar topic.

After reading our lead story on how many professionals are leaking data to ChatGPT, you might reasonably conclude two things: everyone is using it; and those who aren’t, shouldn’t.

Yet while it’s true that the majority of workers have now heard of ChatGPT, most of them still haven’t actually tried it.

And there are many ways it can ease your workload and speed your progress that don’t involve uploading the corporate crown jewels to the internet.

I discussed why most people still aren’t using it in a recent LinkedIn post. But there was one big thing I missed: they don’t know how it can help them.

It’s time to fix that glaring omission. So this week, we’re launching a new series focusing on use cases for AI that might not be so obvious.

Important: Make sure you check out our article on how to use AI safely and (obviously) never paste confidential or sensitive information into ChatGPT or similar bots.

OK. Public service announcement over.

Here’s the first use case that you might not have thought of.

Use 1: Explain a complex topic

ChatGPT and other generative AI chatbots can be a great way to ease into subjects that would otherwise be difficult to understand. And the best bit: you can set the level of explanation according to how much you know already.

Just adapt this prompt by filling in your topic and current level of knowledge.

Explain [blockchain] to me as if I were [five years old/a graduate with a Bachelor of Arts degree/the finance director of a Forbes 500 company].

Take the bot’s answer with a pinch of salt. It’s probably not a good idea to copy and paste it into a report on the topic (at least not without a little fact-checking and editing first). But it’s a great way to get started.

If you use Bing Chat, which now comes with Microsoft’s Edge web browser, it will automatically include links to its sources, making it easier to verify its responses.

Asking the chatbot what it needs first is another good way to improve results (generally – not just for getting better explanations).

I would like you to explain [electric vehicle batteries] to me. Before you do, please ask me five questions to help you tailor your response. 
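A side note for the more technically minded: if you’d like to experiment with prompts like these outside the chat window (say, to build them into a small internal tool), here’s a minimal sketch using the OpenAI Python client. The package version, model name and environment-variable setup are assumptions for illustration rather than anything covered in this issue, and the same rule applies: never send confidential data, whether through the chat window or the API.

# A minimal sketch (assumes the official openai Python package, v1 or later,
# and an API key stored in the OPENAI_API_KEY environment variable)
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

topic = "blockchain"
audience = "five years old"  # or "a graduate with a Bachelor of Arts degree", etc.

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "user", "content": f"Explain {topic} to me as if I were {audience}."},
    ],
)

# Print the explanation; fact-check it before reusing it anywhere that matters.
print(response.choices[0].message.content)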

WORKPLACE AI

AI-powered secure search coming to corporate web browsers

Screenshot of Bing Chat Enterprise

Microsoft is adding a secure chatbot, Bing Chat Enterprise, to its Edge web browser for corporate users, so employees can use Bing’s AI chat without exposing company data.

Microsoft has announced it will be integrating AI chat into the web-browsing software it provides for corporate users, in a move that it claims will boost productivity without compromising data security.

It made the announcement in an official blog post last week, saying the aim is to give organisations AI-powered chat with commercial data protection.

With Bing Chat Enterprise, chat data isn’t saved, Microsoft can’t view a customer’s employee or business data, and customer data isn’t used to train the underlying AI models.

The move appears to be an attempt to allay the fears of companies and other organisations that have so far been reluctant to grant employees access to the tech giant’s AI tools.

‘We’ve heard from many corporate customers who are excited to empower their organisations with powerful new AI tools but are concerned that their companies’ data will not be protected,’ explained Frank Shaw, Microsoft’s chief spokesperson.

Data protection

But he says that Bing Chat Enterprise is a different beast from a consumer-grade AI chatbot, claiming it will offer much more robust data protection than ChatGPT.

The move will affect millions of corporate users. Bing Chat Enterprise will be accessible on Bing.com Chat and the Microsoft Edge sidebar, and will eventually be embedded in Windows 11.

The issue now is whether the company can reassure corporate leaders that putting an AI chatbot in the browser won’t become a back door for leaking company data.

There are ways in which companies can protect themselves and their data. But it might take a high-profile breach to convince some users of the risks.

Why The AI Writer?

When I founded Emphasis 25 years ago, it was just me, a cat and a kettle. It’s since grown to become the most trusted provider of business writing training in the world and has helped more than 80,000 people from 32 countries. We’ve worked with tech giants, top 10 law firms and major financial institutions, as well as at the highest levels of government. (We’ve even sent trainers to work with clients in the Himalayas.)

I would not claim that we’re experts in everything, as we’re certainly not. But, after working with the authors of around 100,000 documents, it’s safe to say that business writing is something we do know a thing or two about. And that includes witnessing all the various ways in which organisations get it wrong.

Now, we’re at the forefront of using AI to help them get it right.

Of course, I’ve seen nothing like the AI revolution that’s hitting us. Nobody has. But I do feel a responsibility to use our experience to help people navigate this brave new world of communication bots.

So here’s the deal. You get on with your job, while we immerse ourselves in the latest developments in workplace AI. We’ll worry about keeping up so you don’t have to. Then we’ll tell you exactly what you need to know to stay ahead of the curve.

Quick poll: Which ONE topic do you most want us to cover?


Please forward this email

Feel free to forward this email to anyone who might find it useful. Each issue of The AI Writer takes days to research and write, so the more people it helps, the better.

And if someone forwarded this to you, you can grab your own free subscription here.

You can also catch up on all our previous issues.

Until next time, have a great week.

– Rob

PS. I’m also now posting regularly on LinkedIn about the topics we cover here.

This week’s writers were Rob Ashton and Christian Doherty.