Let’s backtrack on Slack

Over the past few weeks, I have watched tech professionals circulate a ‘public service announcement’ about Slack’s opt-out process for using customer data to train AI models. The campaign has been successful, and I’ve seen plenty of people proudly displaying their opt-out confirmations.

Many of us harbour uncertainty and fear around AI. We have witnessed it fundamentally disrupt creative industries, automate manual tasks, and produce reasonably accurate expert knowledge.

While I am uncomfortable with conversation data being used to train AI, I also dislike the wider community’s emotive, knee-jerk response. It comes down to this: humans feel deeply uncomfortable with uncertainty and will do what they can to minimise it.

Security is contextual. Companies want to reach a level of protection adequate to operate safely. Excessive security carries an opportunity cost: it introduces unnecessary friction into business processes and can create retention challenges and reputational damage.

Most businesses use Slack to structure corporate communication and to share information easily and safely between staff and external collaborators. The fear is that this data could be exposed, putting private conversations and internal business communications into the public domain. But Slack has always been a security-conscious company. I’ve long considered them an industry role model for their use of goSDL, sensible policy documents, and pragmatism in product security. Where they messed up was communication.

In response to the backlash, Slack has clarified that:

  • Customer data never leaves Slack.
  • Large Language Models do not use customer data for training.
  • Slack AI only operates on data that the current user can see (the principle is sketched after this list).
  • Slack AI meets all existing enterprise security and compliance requirements.
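
The third point is essentially an access-control guarantee: anything retrieved as context for the model is first filtered down to what the requesting user is already permitted to read. Below is a minimal, hypothetical Python sketch of that principle. It is not Slack’s implementation; the function names, Message type, and the channel-membership table are all invented for illustration.

    # Hypothetical sketch of permission-scoped retrieval. Not Slack's code;
    # the membership table and names below are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Message:
        channel_id: str
        text: str

    def visible_channels(user_id: str) -> set[str]:
        """Stand-in for a real ACL lookup, e.g. channel membership."""
        memberships = {"alice": {"general", "security"}, "bob": {"general"}}
        return memberships.get(user_id, set())

    def retrieve_for_user(user_id: str, candidates: list[Message]) -> list[Message]:
        # Drop anything the user could not see in the normal UI; only the
        # remainder is ever passed to the model as context.
        allowed = visible_channels(user_id)
        return [m for m in candidates if m.channel_id in allowed]

    msgs = [Message("general", "Standup at 10"),
            Message("security", "Rotate the signing key")]
    print([m.text for m in retrieve_for_user("bob", msgs)])  # ['Standup at 10']

The design point is that visibility is enforced at retrieval time rather than trusting the model to withhold what it shouldn’t repeat.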

If you want to learn more, I encourage you to read the following three articles from Slack to understand how they manage privacy and security concerns:

  1. How Slack protects your data when using machine learning and AI
  2. Privacy principles: search, learning and artificial intelligence
  3. How we built Slack AI to be secure and private

I wrote this because I think people react to social media commentary rather than pausing to reflect, whether that is rushing to patch a newly released vulnerability, trying to identify whether you’re impacted by a third-party supplier breach, or reading about potential solutions to an issue you face.

Rushed patching can cause system outages. Scrutinising a supplier only after a breach means you haven’t prepared. Choosing band-aid solutions from LinkedIn means you aren’t doing your due diligence. And by opting out, you now get generic search results, worse channel recommendations, and inferior auto-complete suggestions; for a productivity platform, these matter. So be careful what you read online, and pause to reflect on whether that call to action is one you really need to follow.