JP Morgan Chase’s CISO Patrick Opet recently wrote an open letter (Apr. 29, 2025) to third-party software and SaaS suppliers, calling for a dramatic increase in focus on security. The call echoes other alarm bells ringing industry-wide. According to data from Statista.com, the number of breached or leaked online accounts belonging to individuals quadrupled in 2024 compared with 2023.
For all the steps and progress we’ve made, we are still two steps behind, and the gap is widening rapidly. The professional and public services we use online commit to upholding our privacy, security and safety. Yet, time and time again, these services fail to keep our data safe. Is it simply impossible to keep up with change? Perhaps we need a rethink.
Organisations certify their digital products and services against numerous third-party certification bodies, frameworks and standards (ISO, SOC, GDPR, HIPAA, PSD2 and so on). These are meant to enhance and ‘layer in’ the necessary governance for online use. By certifying, we are supposed to be building secure and safe digital products for other businesses, governments and people to use. Yet breaches and data leaks keep climbing. Many online services are not holding to their own standards; some are severely failing their own consumers.
Another problem: we only find out how bad a leak or breach really was after the fact. Massive fines get doled out (are they ‘massive’ enough?), a bit of reputation gets burned, and for the business or organisation, life often goes on as usual. In the meantime, however, key digital services are shut down. Accounts are closed, and customers, citizens and consumers are locked out of the services they now need for their everyday lives. Everyone loses, we suffer some pain, and then we rinse and repeat.
Our collective trade-off
It’s 2025 and AI has swept the world off its feet. We are starting to deploy ‘AI services’ across major business and government, in an effort to reduce manual work and increase efficiency. The opportunity is enormous: tedious administration to be done? AI. Meeting notes needed? AI. Information harmonization across the wider organisation? AI.
The promise is huge, but so is the problem.
The algorithms driving LLMs scrape and pull data from wherever they can find it. There are attempts to screen and purge the results of any sensitive data, and to honour the privacy and security policies that would normally govern the context a user operates in when prompting an LLM to search, categorise and return information. Even so, we are seeing private, sensitive and even copyrighted material emerge in AI prompt results.
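What that screening can look like in practice is easiest to see in a minimal sketch. The Python below redacts obviously sensitive tokens before a prompt or document leaves the trust boundary; the patterns and names are illustrative assumptions only, and a real deployment would lean on vetted PII/DLP tooling rather than hand-rolled regexes.

```python
import re

# Illustrative patterns only; production systems should use a vetted
# PII/DLP classifier, not hand-rolled regexes.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace obviously sensitive tokens with placeholders before the
    text is logged, indexed, or sent to an external LLM."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Email jane.doe@example.com (SSN 123-45-6789) about her salary review."
print(redact(prompt))
# -> "Email [EMAIL REDACTED] (SSN [SSN REDACTED]) about her salary review."
```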
Just as in the ‘race to cloud computing’ from around 2010 onwards, privacy, security and safety in the age of AI are an afterthought.
We are collectively charging at the opportunities presented by automation and AI, and forgetting how hard, and how important, it is to control our data online. If a chatbot returns sensitive information about one colleague to another, drawn, for instance, from HR repositories hosted elsewhere in the organisation, the employer suddenly has a serious internal breach. Depending on the circumstances, information of this kind can be highly valuable on the dark web, where it is bought and sold every day for nefarious purposes. The world has rushed to embrace AI, at the high cost of privacy, security and safety online.
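One way to reduce that risk is to enforce the requesting user’s entitlements at retrieval time, before any document reaches the model as context. Here is a minimal sketch of that idea; the class names, group labels and example records are hypothetical, not any specific product’s API.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    text: str
    allowed_groups: set[str] = field(default_factory=set)  # access labels set at ingestion

def authorized_context(hits: list[Document], user_groups: set[str]) -> list[Document]:
    """Filter retrieved documents against the requesting user's entitlements
    before they are handed to the model as context."""
    return [doc for doc in hits if doc.allowed_groups & user_groups]

hits = [
    Document("Company holiday calendar", {"everyone"}),
    Document("Salary review notes for J. Doe", {"hr-restricted"}),
]
# An engineer asking the chatbot only ever sees what they are entitled to.
context = authorized_context(hits, user_groups={"everyone", "engineering"})
print([doc.text for doc in context])  # -> ['Company holiday calendar']
```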
New knowledge, new responsibility
If you were at RSAC last week, alongside the world’s front-running cybersecurity thought leaders, providers and industry experts, you would have been blitzed by a rampant AI message from every angle. The message quickly gets watered down; being human, we quickly get exhausted. Security in the age of AI is harder, more complex and, importantly, more expensive than ever before. Yet it is more imperative than ever.
AI tools and services create a new threat surface for an organisation; it is imperative that we understand that surface, and create and deliver digital products and services that leverage AI’s benefits with security by design.
Patrick’s letter ends with some strong guidance for software vendors, and particularly SaaS vendors, building and shipping digital products:
Traditional measures like network segmentation, tiering, and protocol termination were durable in legacy principles but may no longer be viable today in a SaaS integration model. Instead, we need sophisticated authorization methods, advanced detection capabilities, and proactive measures to prevent the abuse of interconnected systems.
The most effective way to begin change is to reject these [traditional measures and] integration models without better solutions.
It’s a call to action for providers. It’s also a message for the wider market. We need to do better, and expect better, collectively. We need to go beyond the standards set by age-old governance bodies and meet the new responsibilities that come with the opportunities we’re chasing. We need to deliver security built from the ground up inside our shiny new AI products and services, instead of ‘bolting on’ the necessary functions as an afterthought. And it needs to prevent more data leaks and breaches than it creates; that is the game.
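To make “sophisticated authorization methods” slightly more concrete, here is a minimal sketch of scope-checked, least-privilege authorization for a service-to-service call, where the grant carried by the token, not the caller’s network position, decides what is allowed. The routes and scope names are illustrative assumptions, not any vendor’s API.

```python
# Required scopes per operation; hypothetical routes and scope names.
ALLOWED_SCOPES = {
    ("GET", "/reports"): {"reports:read"},
    ("POST", "/reports"): {"reports:write"},
}

def authorize(method: str, path: str, token_scopes: set[str]) -> None:
    """Deny by default: the call proceeds only if the token carries every
    scope the operation requires, regardless of where the request came from."""
    required = ALLOWED_SCOPES.get((method, path))
    if required is None or not required <= token_scopes:
        raise PermissionError(f"token lacks scope for {method} {path}")

authorize("GET", "/reports", {"reports:read"})        # allowed
# authorize("POST", "/reports", {"reports:read"})     # raises PermissionError
```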
Meeting the security challenges of AI requires more than bold ambition—it demands the right approach. IndyKite is focused on helping organizations embed trust, control, and visibility directly into the data layer where it matters most.
For a deeper dive into how to secure AI systems, download the AI Security Playbook and get security right from the start.