AI Security: Just because "McKinsey said so?"

  • Writer: Scott Galliart
  • Jun 3
  • 1 min read

We all know that AI is accelerating business, streamlining processes, and driving efficiency. One aspect we've been giving thought to is how growing businesses and enterprises can engage with AI safely. We read an article today in Entrepreneur Magazine (https://entm.ag/rNQALJ) about McKinsey using an internally built AI tool called Lilli. They claim it's the only safe way they have of entering client information.


How would a client know this to be true? The human factor at play here is that a client either takes McKinsey's word for it, or succumbs to confirmation bias because they (the client) had the brilliant idea to hire McKinsey in the first place.


The good news is that there are ways to at least partially verify whether McKinsey's claims of security are believable.


Lilli runs on Microsoft Azure, which provides enterprise-level security; this is a common choice, as many companies use Azure. McKinsey also states that Lilli is trained on internal proprietary data, meaning it doesn't depend on external sources such as public AI models. Whether to trust that is up to you and your business. The real measure of confidentiality would be whether McKinsey subjects Lilli to third-party security audits, complies with standards such as ISO 27001 or SOC 2, and employs a zero-trust architecture to prevent unauthorized access.


When considering an AI advisor, make sure to ask the right questions! Trust Krainium to have the answers.

© 2025 KRAINIUM, LLC.  All Rights Reserved.