The enterprise software industry is undergoing a sea change as artificial intelligence enters the picture. What was once dominated by static systems and manual workflows is rapidly transforming into an intelligent ecosystem led by chatbots and AI agents, giving new meaning to business productivity. With sophisticated “agentic” systems capable of autonomous work, this shift may impact millions of knowledge workers. For organisations looking to implement AI agents, it also raises important questions about how well their infrastructure is prepared to handle this change and how secure these systems will be for enterprises to trust. At Splunk 2025, Hao Yang, Vice President and Head of AI at Splunk, who has led the development and integration of AI and machine learning technologies into Splunk’s products and services, discusses how AI agents are reshaping enterprises and where the market is heading.
The AI market is evolving rapidly from simple language models that answer questions to autonomous agents capable of pursuing goals using multiple tools and iterative reasoning. Now, domain-specific AI agents are being touted as the next big thing, offering enterprises new opportunities to deploy AI directly within their business processes.
That’s actually the fun part about AI as a technology: it provides a general-purpose way of interacting with systems and getting intelligent answers. But then product managers come in and ask: how do I take this technology and turn it into a capability my users can benefit from? For example, at Splunk 2025, we announced the Malware Reversal Agent. What it does is this: when you are using Splunk Attack Analyser and you receive, say, a suspicious email – maybe a phishing or spam email – you, as a user, report it. The Attack Analyser then inspects the email’s attachments and links to see if there is any embedded malicious code.
AI agents are autonomous software systems with the reasoning capability to process large volumes of data, compress context, design complete workflows by interfacing with other systems, and take actions on their own. (Image credit: Anuj Bhatia/Indian Express)
Today, when malware is detected – maybe it’s a bash script or another kind of malicious script – an analyst typically has to look at the code and figure out what it’s doing. That’s not easy. It requires deep expertise and a lot of focused time and effort. So this particular agent asks the question: why not use generative AI to analyse the code? Some of these coding agents are now capable of doing just that. You can feed the malware sample into a GenAI system and ask: what is this script doing?
In some cases, the AI provides a full, detailed analysis. In other cases, it gives a high-level summary. Either way, it’s a huge help to analysts, giving them a quick understanding of what the script is doing, whether it’s malicious, and what its potential impact might be.
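To make the idea concrete, here is a minimal sketch of what such an analysis step could look like. It is illustrative, not Splunk’s implementation: it assumes an OpenAI-compatible API, and the model name, prompt, and file path are placeholders.

```python
# A minimal sketch of the idea described above -- not Splunk's actual
# implementation. Assumes an OpenAI-compatible API and a local file
# containing the suspicious script; model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarise_script(path: str) -> str:
    """Ask a general-purpose model what a suspicious script appears to do."""
    with open(path, "r", errors="replace") as f:
        code = f.read()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable code-analysis model
        temperature=0,        # favour deterministic, conservative answers
        messages=[
            {"role": "system",
             "content": "You are a malware analyst. Explain what the "
                        "following script does and flag malicious behaviour."},
            {"role": "user", "content": code},
        ],
    )
    return response.choices[0].message.content

print(summarise_script("suspicious_sample.sh"))
```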
Many organisations are hesitant to adopt AI agents because of a pressing concern: data security. What is Splunk doing to address the issue?
When I think about trust, it goes far beyond just cybersecurity. For example, if my employees are using AI to do their jobs – and they often handle sensitive customer data – I have to ask: what does the model provider do with that data? Am I accidentally exposing my customers’ information to someone else?
Another concern is the use of AI in mission-critical decision-making. If a bad actor, in some way or form, can influence or manipulate the AI’s output, then you are potentially walking into a trap without realising it. So, trust comes from multiple dimensions – not just from a cybersecurity perspective, but also from a risk, quality, and governance perspective. And that’s where Splunk can play a central role: by helping organisations build the trust they need to adopt AI safely and confidently.
The key to trusting any technology is observability – having complete visibility into what’s going on. You can’t trust a black box. But once you have a dashboard that unpacks everything – the number of tokens processed, the AI’s responses, quality assessments, and more – then you start building real trust. You understand the system, monitor it, and ensure it behaves as expected.
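As a rough sketch of that observability idea – illustrative, not Splunk’s product – a wrapper around each model call could record the metrics such a dashboard would chart: token counts, latency, and the response itself. The client, model name, and log path below are assumptions.

```python
# Illustrative observability wrapper: logs one JSON line per model call
# with the metrics a trust dashboard would chart. Not Splunk's product.
import json
import time
from openai import OpenAI

client = OpenAI()

def observed_completion(prompt: str, log_path: str = "llm_audit.jsonl") -> str:
    start = time.time()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content
    record = {
        "timestamp": start,
        "latency_s": round(time.time() - start, 3),
        "prompt_tokens": response.usage.prompt_tokens,
        "completion_tokens": response.usage.completion_tokens,
        "response": answer,
    }
    with open(log_path, "a") as f:          # append-only audit log,
        f.write(json.dumps(record) + "\n")  # easy to index and dashboard
    return answer
```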
Artificial intelligence is often criticised for generating information that appears factual but is actually false, a phenomenon known as hallucination. How does Splunk address the challenge of AI hallucinations?
Hallucination is not something we can completely eliminate: the technology simply isn’t there yet. However, there are ways to tune the system to reduce the likelihood of hallucinations. For example, adjusting temperature settings, providing more context, and implementing validation steps after receiving AI-generated results.
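As a rough illustration of those tuning levers – low temperature, extra grounding context, and a post-hoc validation step – a sketch might look like the following. It uses an OpenAI-style client; the model name, prompts, and verdict format are assumptions, not Splunk’s implementation.

```python
# A rough sketch of the mitigations mentioned above: low temperature,
# added context, and a validation pass. All names are illustrative.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # illustrative

def grounded_answer(question: str, context: str) -> str:
    draft = client.chat.completions.create(
        model=MODEL,
        temperature=0,  # low temperature reduces speculative output
        messages=[
            {"role": "system",
             "content": "Answer only from the provided context. "
                        "Say 'unknown' if the context is insufficient."},
            {"role": "user", "content": f"Context:\n{context}\n\nQ: {question}"},
        ],
    ).choices[0].message.content

    # Validation step: a second call checks the draft against the context.
    verdict = client.chat.completions.create(
        model=MODEL,
        temperature=0,
        messages=[
            {"role": "user",
             "content": f"Context:\n{context}\n\nAnswer:\n{draft}\n\n"
                        "Is every claim in the answer supported by the "
                        "context? Reply SUPPORTED or UNSUPPORTED."},
        ],
    ).choices[0].message.content
    return draft if "UNSUPPORTED" not in verdict else "unknown"
```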
The evolution of LLMs into advanced reasoning models has opened new opportunities for agentic AI, allowing AI agents to play an important role in organisations. (Image credit: Anuj Bhatia/Indian Express)
We are building multiple guardrails into the system to check and validate AI-generated content and reduce hallucinations. One of our core principles is maintaining a human-in-the-loop approach: in many cases, AI-generated results are reviewed by a human, either before any action is taken or after the AI has performed a task, to verify the outcome. We also follow a design principle of traceability. Think of it like writing a research paper: when you make a claim, you need to cite your sources. Similarly, when the AI says something like, “This user logged into the system five times,” we log the SPL queries that were used to generate that response. This creates auditable traces that users can inspect to understand how a conclusion was reached. And when mistakes do happen – because they inevitably will – we capture that feedback and feed it back into the system to help the AI learn and improve over time. This continuous feedback loop helps us deliver more accurate and trustworthy AI outcomes.
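The traceability principle could be sketched as simply as pairing every AI claim with the query that produced its evidence. The data structure, log format, and SPL string below are illustrative assumptions, not Splunk’s internal format.

```python
# Minimal sketch of traceability: every AI claim is stored alongside the
# query that produced its evidence, creating an auditable trace a human
# can inspect. Field names and the SPL string are illustrative.
import json
from dataclasses import dataclass, asdict

@dataclass
class TracedClaim:
    claim: str          # what the AI asserted
    spl_query: str      # the SPL that was run to support it
    result_count: int   # what the query actually returned

def log_claim(claim: TracedClaim, path: str = "claim_audit.jsonl") -> None:
    """Append an auditable trace that a reviewer can later inspect."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(claim)) + "\n")

log_claim(TracedClaim(
    claim="This user logged into the system five times.",
    spl_query='search index=auth user="jdoe" action=login | stats count',
    result_count=5,
))
```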
How can smaller organisations deploy AI agents? Do they have to pour in more resources?
The best AI is the kind that’s embedded directly into your workflows. Nobody wants to spend a huge amount of time or effort setting something up just to make it usable. It should just work. That’s our philosophy with all the AI assistants we are shipping – they work out of the box. For example, our AI is embedded into enterprise security. Once you have enterprise security, the AI system just works: no extra setup required. It’s the same idea behind our first AI system for SPL: you don’t need to do anything special. You just download, install, and go. We believe this is the best way for people to onboard onto AI.
Right now, many AI agent use cases are still experimental. So when can we expect enterprises to have AI agents that are ready for production use, or at least mature enough to be readily adopted and embedded into their core business processes?
It really depends on the type of business a company is in. Some industries move faster than others due to factors like regulation, risk tolerance, or complexity. It also depends on the scope of the agent. Is it something that will change your financial operations? Or is it focused on improving how you manage internal workflows? I think the answer depends on a combination of factors. At the end of the day, it comes down to return on investment: how much it takes, including the risk you are taking, versus the impact you are seeing. That being said, the concept of AI agents is still relatively new. I remember when people first started talking about “agentic AI” – that was less than 12 months ago. There’s certainly a lot of excitement around AI agents. But yes, it will take some time for people to get comfortable using them and to fully understand how to leverage them. It also takes time for companies like Splunk to learn from customer feedback and continue improving – making our agents more useful, more robust, and more reliable. This is just the beginning of the journey. But based on what I am seeing and hearing from customers, many are ready to start. I am already seeing a lot of pilots and early deployments happening.
As enterprises and organisations begin deploying AI agents at scale, what is the impact on non-technical workers and their roles? Broadly speaking, is the world ready for this shift?
Think back to when computers first became a thing: initially, they were only used by the military and national labs. Then, with the advent of the personal computer in the 90s, they became accessible to everyone. At first, only a small number of people knew how to use them, but businesses quickly started adopting computers because they saw the tremendous value digitalisation could bring.
That’s how I think about AI – it’s a technology that will fundamentally change how we work, live, and connect with each other. The trend is clear. Ten years from now, I believe everyone will be using some form of AI, but different people and industries will take varying amounts of time to get there.
From a business perspective, many organisations now have AI mandates, but how effective those are remains to be seen. In the software engineering world, for example, I have already seen companies mandating the use of vibe coding. To me, it’s important to be thoughtful about how we approach this. It’s less about forcing people to use AI and more about guiding them on how and when to use it effectively. If something doesn’t work for someone, you can’t just force them to adopt it. Just like with computers, people need to be trained, given the right tools, and the technology must be made easy to use.
The same is true for AI – it will take time. But with the speed of AI evolution I am seeing, one year is a long time. And the barrier to entry is actually lower now. What it really takes is curiosity and determination – nothing else stands in the way. That’s why I believe AI adoption will happen much faster than the adoption of personal computers.