In this episode, Bastien Cabirou sits down with Abbas Kudrati, Chief Identity Security Advisor APAC at Silverfort and former Microsoft Chief Cybersecurity Advisor for Asia, to discuss how security leaders should think about shadow AI, AI governance, and the emerging risks of agentic systems in the enterprise.
The conversation covers why employees will keep using AI even when it is blocked, why governance beats blanket bans, how agentic AI creates a new non-human identity problem, and what CISOs need to do now to stay ahead of the next wave of enterprise risk.
Episode highlights
- Why blocking AI tools often drives usage underground instead of reducing risk
- How shadow AI evolved from the same pattern as shadow IT and shadow SaaS
- Why governance, policy, and employee education matter more than blanket bans
- How agents inherit privileges and create new non-human identity attack surfaces
- Why prompt injection, uncontrolled tool access, and email exposure are major enterprise risks
- Why the future SOC likely includes agents at level zero and humans managing, tuning, and supervising them
About Abbas Kudrati
Abbas Kudrati has spent more than 27 years across IT, cybersecurity, ethical hacking, GRC, and executive security leadership. He previously served as Microsoft's Chief Cybersecurity Advisor for Asia and now works at Silverfort, where he focuses on identity security across human, non-human, and agentic identities.
Key takeaways from the conversation
1. Employees will use AI whether you approve it or not
One of Abbas's clearest points is that AI adoption behaves like earlier technology waves such as the internet, mobile phones, and BYOD. If organisations simply block AI, employees often route around controls by using personal devices or consumer accounts. That removes visibility right when visibility is most needed.
2. Good governance beats turning off the tap
Instead of defaulting to bans, Abbas argues that organisations should define clear guardrails: which tools are approved, what kinds of prompts are acceptable, what data can be used, and where paid enterprise versions should replace consumer-grade tools. The goal is secure enablement, not performative prohibition.
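The guardrails Abbas describes can be expressed as a simple machine-readable policy: a list of approved tools, each capped at the most sensitive class of data it may receive. The tool names and data tiers below are illustrative assumptions for the sketch, not anything endorsed in the episode:

```python
# Illustrative AI usage policy: approved tools mapped to the most
# sensitive data classification each may receive. All names and
# tiers here are hypothetical examples, not recommendations.
DATA_TIERS = ["public", "internal", "confidential"]  # low -> high sensitivity

APPROVED_TOOLS = {
    "chatgpt-enterprise": "internal",  # paid tier with enterprise controls
    "copilot-business": "internal",
    "chatgpt-free": "public",          # consumer tier: public data only
}

def is_allowed(tool: str, data_tier: str) -> bool:
    """Allow usage only if the tool is approved and the data involved
    is at or below the tool's permitted sensitivity."""
    max_tier = APPROVED_TOOLS.get(tool)
    if max_tier is None:
        return False  # unapproved tool: block and flag for review
    return DATA_TIERS.index(data_tier) <= DATA_TIERS.index(max_tier)

print(is_allowed("chatgpt-enterprise", "internal"))  # True
print(is_allowed("chatgpt-free", "confidential"))    # False
print(is_allowed("unknown-ai-app", "public"))        # False
```

Even a toy policy like this makes the point: "secure enablement" means employees get a sanctioned path for each tool and data class, rather than a blanket yes or no.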
3. Shadow AI is becoming a visibility problem at enterprise scale
Security teams may know about a few mainstream tools, but Abbas notes that real environments can contain hundreds of AI applications with little or no oversight. Discovery matters because if you cannot see which tools are in use, what data is flowing into them, and which identities are tied to them, you cannot govern the risk.
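As a rough illustration of that discovery step, egress or proxy logs can be matched against a catalogue of known AI tool domains to surface which tools are in use and which identities touch them. The domain list and log format below are invented for the sketch; a real discovery product maintains a far larger, curated catalogue:

```python
from collections import Counter

# Hypothetical catalogue of domains associated with AI tools.
AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

# Invented proxy-log lines: "<user> <domain> <bytes_sent>"
LOG_LINES = [
    "alice chat.openai.com 5120",
    "bob intranet.example.com 200",
    "alice claude.ai 900",
    "carol chat.openai.com 40960",
]

def discover_ai_usage(lines):
    """Count requests per AI domain and record which identities use each."""
    usage = Counter()
    users = {}
    for line in lines:
        user, domain, _ = line.split()
        if domain in AI_DOMAINS:
            usage[domain] += 1
            users.setdefault(domain, set()).add(user)
    return usage, users

usage, users = discover_ai_usage(LOG_LINES)
print(usage)  # which AI tools are actually in use
print(users)  # which identities are tied to each tool
```

This only answers "which tools and which identities"; as the transcript notes, seeing what data flows into those tools is a harder problem that tooling is still catching up on.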
4. Agentic AI creates a new identity security challenge
A major theme in the episode is that agents should be treated as part of the workforce. They inherit access, create sub-identities, connect to email and SaaS tools, and may trigger actions across multiple systems. That makes non-human identity security, privilege management, and auditability central to AI governance.
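One way to picture "treating agents as part of the workforce" is to give each agent its own named identity, an explicit allow-list of tools, and an append-only audit trail. The pattern below is a minimal hypothetical sketch, not a Silverfort or any vendor API:

```python
import datetime

class AgentIdentity:
    """Minimal non-human identity: a named agent, an explicit
    tool allow-list, and an append-only audit log."""

    def __init__(self, name, allowed_tools):
        self.name = name
        self.allowed_tools = set(allowed_tools)
        self.audit_log = []

    def invoke(self, tool, action):
        ts = datetime.datetime.now(datetime.timezone.utc).isoformat()
        permitted = tool in self.allowed_tools
        # Every attempt is recorded, including denied ones.
        self.audit_log.append((ts, self.name, tool, action, permitted))
        if not permitted:
            raise PermissionError(f"{self.name} is not allowed to use {tool}")
        return f"{tool}:{action} executed"

agent = AgentIdentity("invoice-triage-agent", allowed_tools={"email.read"})
agent.invoke("email.read", "fetch unread invoices")  # within allow-list
try:
    agent.invoke("email.send", "forward to vendor")  # outside allow-list
except PermissionError as e:
    print(e)
print(len(agent.audit_log))  # both attempts were audited
```

The design choice is the point: the agent never silently inherits a human's full access, every privileged action is attributable to a specific non-human identity, and denials are as visible as successes.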
5. AI security is becoming agent-versus-agent
The discussion also explores a future where red-team agents test blue-team agents, and defensive systems continuously challenge, monitor, and retrain other AI systems. In practice, that means governance will extend beyond employee training into ongoing agent tuning, evaluation, and control.
Notable quotes
If you don't see it, you can't manage it.
Don't close it, because if you close them, they'll go. But educate them.
Hackers are not targeting human identity. They are targeting the non-human identity.
You need to learn how to manage the agent.
Why this matters for CISOs and security teams
The episode reinforces a practical point for enterprise leaders: AI risk is no longer just about whether employees paste sensitive data into public chatbots. It now includes shadow AI discovery, non-human identity exposure, prompt injection, agent sprawl, budget sprawl, and the operational challenge of supervising semi-autonomous systems connected to business tools.
For security leaders, the near-term priority is not to stop AI adoption entirely. It is to make AI usage visible, governed, approved where appropriate, and bounded by enforceable controls. That is where AI governance platforms, identity controls, and structured red-teaming start to matter.
Transcript
Below is a lightly cleaned transcript excerpt from the conversation for readers who prefer to scan the discussion in text form.
So yes, technologies are growing fast and getting smarter. We have the... CNAPP applications and cloud application proxy and all. They are enabling them to identify what AI is being used, but they still can't look at what data is going inside or which identities are being enabled among them. And new technologies and new vendors are coming up with those things, but it is still at an early stage, I would say. And not everyone has a budget to go and enable or do a landscape review and start doing it. So what they simply do is turn off the tap. Even if you turn off the tap, if they do a proper scan and look at all the human and non-human identities, they would still find hundreds of AI applications being used in the environment, into which they have zero visibility. Again, going back to the governance, if you don't see it, you can't manage it.
If you don't... if it's not visible, you are not able to manage them. Thank you, Abbas, for joining us today. Uh, it's great to have you here. Um, today, I would like to have your opinion on AI cybersecurity and, you know, everything that's happening in the space. Um, so thank you for being here. Thank you. My pleasure. Thanks for inviting me, and it's a beautiful day in Sydney today. Exactly. Exactly. So first, can you tell me a little bit about yourself? Yeah. So I've been in the industry for more than twenty-seven years. Uh, I'm Abbas Kudrati. Uh, worked in a variety of companies, started as a system admin, uh, network engineer, then became an ethical hacker, GRC consultant, and became a CISO. Uh, my last job was with Microsoft as a chief cybersecurity advisor for Asia, and now I work for a company called Silverfort as their chief identity security advisor for the Asia Pacific region.
Wow. So like, uh, you did so many types of jobs, always in cybersecurity, always in IT. Always started in IT because there was no cyber in those days, uh, when I started back in the nineteen ninety-six, ninety-seven timeframe. So yeah, it's been a crazy journey all the way. And now we are living in the age of AI, as we know. Yeah, and that was a big wave. Yes, absolutely. Mm. Cool. So in terms of your experience, what was the most exciting time and what was the most exciting, uh, wave in technology? Oh, my most exciting time was Y2K. I'm sure those, uh, those who have twenty-five, thirty years of experience would still remember those, uh, long hours and nights... when they were busy preparing for Y2K, for the clock to hit the year 2000 and getting into the new millennium, as we said. That was a very crazy time for me, and I was very fresh in my, uh, in my professional journey.
I think it was my second job or the third job, uh, back in India, uh, in Pune, uh, where, uh, it was again an American company... and all the requirements, uh, came from the US saying that these are the things you need to do in terms of preparing ourselves and our-- protecting our data... if the clock hits in the middle of the night. So that was the one night where I was working till late, till one AM, because I had to sit and watch all my servers, what was happening at the stroke of midnight. Mm. So myself and one more colleague, the two of us, were in the room while everyone else was partying and enjoying the millennium, while I dedicated myself as a cyber hero or IT hero, watching how things happened. And everything went off. And then I left the... after sending the last email. Looks like everything is working, all data is secure and protected, and everything is good.
I'm leaving for home or going for the party. My party started at one thirty in the morning. I had it early. That was the craziest time I had, I think, uh, from the professional point of view. Y2K was, uh, I would cherish that memory forever. That's great. Yes. And so after that, you did different types of jobs, and- Yeah. -I think when we met, you'd been working for Microsoft. Yes. Yes. Um, so big journey. How did you end up at Microsoft? So I'd been a CISO for a while. I worked as a CISO for multiple companies, uh, starting from, uh, the Middle East. In the Kingdom of Bahrain, I was the CISO for the whole country, for the Kingdom of Bahrain's, uh, electronic government. And then I moved to, uh, Kuwait, uh, to the National Bank of Kuwait, where I was a deputy, uh, in the CISO office. And from there I moved to Australia, uh, Melbourne, back in twenty thirteen.
I joined Public Transport Victoria, uh, the train, tram, metro network. I was the CISO for them and then joined KPMG, a consulting company, as a CISO for the internal work, not for the customer work. So I was the CISO for KPMG's Australia, New Zealand, Fiji, and PNG offices. And while I was doing the CISO role, I was thinking, what's next for me? And I saw the role coming up from Microsoft. They were looking for somebody who is an experienced CISO, who is a public, uh, speaker, and is comfortable talking to a large audience and executives, and, and stakeholder management, and, uh, who can believe in Microsoft as a secure company, not a security company. That was the persona at the time. So I applied for the role and went through a lot of interviews, and finally we both got convinced, Microsoft and myself, that, yep, looks like this is something I can do, become a CISO for customers there.
A customer CISO or a field CISO as we call it. That was an interesting time. Almost seven years I spent there, uh, in Microsoft, seeing how... they matured from what they called themselves, a... secure company, to a security company. So that was a transition I have been through, and now they're moving into a phase of AI. And then I thought, okay, now enough of Microsoft. Let me try to do something... new. And I'm a firm believer in identity as a control plane, because all the attacks we see are targeted towards, uh, human and non-human identity. And I came across Silverfort, a company that is a Microsoft partner. I'd been working with them for the last three years from the Microsoft side, supporting them... on, uh, implementing the zero trust architecture from the on-prem environment. And, uh, finally, the CEO approached me, saying, "Why don't you come on this side?" And we agreed, okay, let's take a move and try something different.
So here I am, for the last one year and two months, uh, working for Silverfort as their identity security advisor. Exciting time, uh, from identity to, uh, human identity to non-human identity to now agentic identity. That's where I'm focusing these days. That's great. So in terms of... And, and that's great. We're gonna talk a little bit about AI and about all the security around AI. Mm-hmm. Um, since you've been working with Microsoft, um, you've seen the AI adoption in the workspace. What do you-- What kind of risks have you seen growing in terms of cybersecurity, um, around employee AI use, around agentic work? See, employees are curious. Uh, any new technology comes in, people always try to experiment and explore how it is doing and what it is doing. At the same time, the CISO and executives are, are afraid that this is something completely new.
I'm not gonna risk enabling it. But if they try to disable those kinds of technologies in the environment, as a human, you will go home and try it on your phone, on your personal laptop, because everyone talks about it. As you must have seen the statistics, how long it took for the internet to become famous and the mobile phone to become famous, and how quickly ChatGPT did, um, as we say, that was the first thing when everybody started. Within a few weeks, it reached a million users... because of the curiosity and the amount of, uh, the cool things it does for everyone. So yes, as a human, you want to go and explore it, but of course, uh, we need to also look at the risk and the threat which comes along with it. Fantastic opportunity. That's where CISOs were worried: I want to allow my users to do it, but I need to put a guardrail in, and that's where good governance comes into play.
If you don't set a good guardrail and governance, people would go haywire and use it. The same thing happened with the internet, and the same thing happened with the mobile phone. You-- Are you allowing your, um, people to use their own phone in the company? The BYOD policy came into place. Same thing: the very first step the CISOs are now thinking of, or which I always recommend them to do, is set the guardrails, set the policy... which technologies you are allowing them to open. Don't close it, because if you close them, they'll go. But educate them... that these are the risks which come along with them. Yes, they can play with it, and if there is a corporate version of it, you enable the corporate version of it. Mm. There are many of them, like ChatGPT has an enterprise version. Microsoft has its own Copilot, which has a professional version and a free version.
Because if you pay for something, it comes with guardrails. If you're getting something for free, you are the product, as they say, right? So have a guardrail in place, have a policy in place, have education or awareness in place. This is what they are supposed to use. This is the kind of prompt they can ask, this is the kind of data they can use to... play along with the technology, and, uh, work with your employees... to enable them more securely rather than just blocking it, you know. That's what I think. This is exactly what we recommend, and I feel like, um, we are really aligned on this. And, um, what do you think about all this new shadow AI, and the risk around what you don't know? Shadow AI, we started from shadow IT, and then we had shadow security and shadow applications, right? That was the biggest challenge. Uh, when you ask any CIO or a CXO or a CISO in their own organization, the first thing they will say is, "I'm not worried about what I know.
I'm worried about what I don't know,"... which is the shadow thing. People enabling their own servers in the environment, that's where shadow IT came into place. Then people were enabling their own applications on the cloud. Shadow applications came into place, and now shadow AI. Yes. People are trying to use all those kinds of new products which are growing at a rapid speed. It was only ChatGPT, and today if you look at it, it's more than a thousand AI applications in the environment. And, uh, organizations have zero clue... how many of them are used by their employees, and in those applications, what corporate data has been ingested to play with. So yes, technologies are growing fast and getting smarter. We have the CNAPP applications and cloud application proxy and all. They are enabling them to identify what AI is being used, but they still can't look at what data is going inside or which identities are being enabled among them.
And new technologies and new vendors are coming up with those things, but it is still at an early stage, I would say. And not everyone has the budget to go and enable or do a landscape review and start doing it. So what they simply do is turn off the tap. Let's close it. Thinking that it's not there, but it's... Even if you turn off the tap, if they do a proper scan and... look at all the human and non-human identities, they would still find hundreds of AI applications being used in the environment, into which they have zero visibility. Again, going back to the governance, if you don't see it, you can't manage it. If you don't-- if it's not visible, uh, you are not able to manage them. I agree. I agree. What's actually very interesting is what you said about blocking it being even worse, because people are gonna use it anyway.
Yeah. Um, and you're gonna have zero visibility because they're gonna use their own personal laptop. They're gonna, like, send documents to their personal laptop. So things are gonna go around. Um, it's too good for people to not use it. Yeah. And so... they will use it. The question is, do you want to control it? Do you want to govern it? Or, yeah, if you decide to block it- Put guardrails- Yeah. If you block it, people are still gonna use it, and you won't have any visibility. Absolutely. They talk to each other, right? Oh, I can create code in five minutes rather than, uh, working for hours. Yes. Why don't I do it? Uh, I'll do it from home if the company's not allowing me to do it from the office. Yeah. Exactly. When you have to write big documents like reports, when people write emails... like, yeah.
Explore next
If this conversation resonates with the challenges you are seeing internally, Aona helps security and compliance teams discover shadow AI, understand how employees and agents are using AI tools, and apply governance guardrails without killing productivity.
Want to see how it works? Book a demo with Aona to see shadow AI discovery and AI governance in action.
