Microsoft's Copilot spills the beans, summarising emails it's not supposed to read
“The bot couldn't keep its prying eyes away. Microsoft 365 Copilot Chat has been summarising emails labelled “confidential” even when data loss prevention policies were configured to prevent it.”
Given that this is Microsoft, it is difficult to tell whether this is deviousness or plain sloppiness. One would have thought that with a company the size of Microsoft, neither should be the case. But it was only a year ago that we had the Windows Recall screen-recording debacle, which had to be withdrawn and re-released.
It is probably never a good idea to let any AI have raw access to your documents or e-mails. It is better to create a separate area containing documents you have vetted, or to manually upload only what you want processed.
AI is proving to be a remarkably effective way of worming into private document and information repositories, where it can just vacuum everything up. As an end user, you have zero control once you've opened that door.
I can see governments funding AI one day, as it is a tremendous way to spy on citizens or opponents. As much as the USA warns us about China and its AI, I wonder how much of the same behaviour is being perpetrated by the USA itself. Users basically invite AI into their private areas and then let it scoop everything up. If the country where the AI company is based has warrantless access to its data, just think of the possibilities.
We've also heard about AI reorganising user data and then apologising for deleting it. AI is not infallible, and with zero contextual knowledge, it is not actually intelligent either.
You want to keep AI at arm's length, and use it with caution, just like any other tool.
See
Copilot Chat bug bypasses DLP on 'Confidential' email: Data Loss Prevention? Yeah, about that...
#technology #AI #Microsoft #privacy