
Latest from ITMedia: Interview with CYFIRMA CEO Kumar Ritesh
Published on May 15, 2026 at 8:00 AM
Full Article (in Japanese):
https://www.itmedia.co.jp/business/articles/2605/15/news010.html
Several years have passed since the onset of the generative AI boom, and chatbots and text generation tools have become commonplace for individuals and companies alike. Some individuals have even grown fond enough of ChatGPT to give it nicknames like “Chappie.” However, the focus is now shifting toward a new way of utilizing AI that lies beyond that: the “AI Agent.”
AI Agents are not merely high-performance AIs; they are “autonomous digital talent” capable of restructuring corporate operations themselves. In this article, we will explore what an AI Agent is, how it differs from traditional AI models, and how it is poised to transform business operations and cybersecurity.
Traditional AI models, such as ChatGPT and Claude, excel at predicting and generating the “most likely output” for a given text or image input. While they demonstrate high precision in tasks like answering questions, summarizing documents, and translation, they are inherently “question-and-answer” oriented, predicated on a human providing instructions every time.
In contrast, AI Agents behave autonomously or semi-autonomously. They retrieve information from their environment (such as via web browsing or API data acquisition) and perform reasoning based on that information to formulate action plans. Furthermore, they actually operate tools and systems, adjusting their actions based on the feedback they receive. The fundamental difference is that they do not simply “wait for instructions”; they grasp the situation themselves and continue to work toward a goal through trial and error.
In short, traditional models are “reactive and static.” While they perform well on specific tasks, human oversight and detailed design are essential for linking tasks together or adapting to environmental changes. AI Agents, however, are “proactive and dynamic” entities. They maintain memory, utilize external tools at will, and collaborate with other agents to pursue complex, multi-step goals with minimal supervision.
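The perceive–reason–act loop described above can be sketched in a few lines of Python. This is an illustrative toy, not any specific framework: the tool, the rule-based planner, and all names here are made up, and in a real agent an LLM would sit in the `reason` step and real APIs behind the tools.

```python
# Minimal illustrative agent loop: perceive -> reason -> act -> observe.
# All names are hypothetical; an LLM and real tool APIs would replace
# the stand-ins below in an actual agent framework.

def search_tool(query):
    # Stand-in for web browsing / API data acquisition.
    knowledge = {"server status": "one host unpatched", "patch": "applied"}
    return knowledge.get(query, "no data")

class MiniAgent:
    def __init__(self, goal, tools):
        self.goal = goal
        self.tools = tools
        self.memory = []          # agents maintain state across steps

    def reason(self, observation):
        # Trivial rule-based "planner"; an LLM would go here.
        if "unpatched" in observation:
            return ("search", "patch")
        return None               # goal reached, stop

    def run(self, max_steps=5):
        observation = self.tools["search"]("server status")
        for _ in range(max_steps):
            self.memory.append(observation)
            plan = self.reason(observation)
            if plan is None:      # adjust behavior based on feedback
                break
            tool_name, arg = plan
            observation = self.tools[tool_name](arg)
        return self.memory

agent = MiniAgent(goal="ensure hosts are patched", tools={"search": search_tool})
history = agent.run()
```

The point of the sketch is the control flow: the agent is not waiting for a fresh instruction each turn; it observes, decides, acts, and re-observes until its own stopping condition is met.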
There are three main points where AI Agents surpass traditional models. The first is the aforementioned high degree of autonomy and adaptability. Once given a set of rules or goals, they update their own actions even in unpredictable environments. Even when encountering new information or exceptional cases, they can flexibly shift their strategy by combining pre-learned knowledge with observed results.
Second is the capacity for end-to-end workflow execution. While traditional AI often stops at “proposals” or “recommendations,” AI Agents can actually access business systems, update records, and even handle the sending of emails. In other words, they don’t just advise that “it would be better to do this”—they execute the process themselves.
Third is scalability as a multi-agent system. By forming teams of experts—such as agents specialized in sales support, system monitoring, or financial analysis—and having them collaborate, it becomes possible to cover large-scale, complex operations that a single model could not handle alone.
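The "team of specialist agents" idea reduces, at its simplest, to an orchestrator that routes sub-tasks to the right specialist and merges the results. The roles and functions below are hypothetical stand-ins; a production system would run the specialists concurrently and let them exchange messages.

```python
# Illustrative multi-agent orchestration: an orchestrator dispatches
# sub-tasks to specialist agents and collects their results.
# All roles and task strings are made up for illustration.

def sales_agent(task):
    return f"sales: drafted follow-up for {task}"

def monitoring_agent(task):
    return f"monitoring: checked {task}"

def finance_agent(task):
    return f"finance: analyzed {task}"

SPECIALISTS = {
    "sales": sales_agent,
    "monitoring": monitoring_agent,
    "finance": finance_agent,
}

def orchestrate(tasks):
    # Route each (role, task) pair to the matching specialist agent.
    return [SPECIALISTS[role](task) for role, task in tasks]

results = orchestrate([
    ("sales", "customer A"),
    ("monitoring", "web servers"),
    ("finance", "Q3 spend"),
])
```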
The emergence of AI Agents with these characteristics is significantly changing corporate operating models. We are seeing a transition from the previous framework of “human-centric with AI providing partial support” to one where “AI Agents orchestrate operations, while humans focus on supervision and handling exceptions.”
As “AI employees,” agents handle repetitive tasks, analytical work, and even tasks requiring a degree of strategic judgment, around the clock, every day of the year. In the near future, it is becoming realistic for agents to provide end-to-end processing of tasks that previously spanned multiple departments, such as customer service, credit screening, fraud detection, and compliance checks.
Kumar Ritesh, CEO of CYFIRMA—a global cybersecurity firm based in Singapore and Tokyo that already utilizes AI Agents in its operations—says of AI Agents: “AI Agents will take over routine tasks. This allows humans to shift to more creative and high-value-added work, leading to an overall improvement in business operations.”
CEO Ritesh points out that a new operating model, which could be called the “Agentic Enterprise,” will likely emerge. He explains that autonomous agents will optimize decision-making, monitor processes, and act as growth drivers, enabling sophisticated operations even with limited personnel. Essentially, agents are what elevate AI from a “useful tool” to a “collaborative workforce,” and along with this, organizational structures, workflows, and even the sources of competitive advantage are being redefined.
However, in today’s digitalized world, we are entering an era where attackers also weaponize AI. In other words, it is not necessarily only well-intentioned users who have begun utilizing AI Agent capabilities in the most cutting-edge ways. Cyber attackers are actively leveraging AI to dramatically increase the speed and scale of their attacks.
Attack cycles that once took days or weeks are being compressed into hours through the use of AI. AI is automating much of the work that previously required high levels of expertise, granting advanced attack capabilities not only to nation-state actors but also to low-skilled criminals.
According to CEO Ritesh, typical ways cyber attackers utilize AI include the following: “AI makes it possible to scan corporate assets exposed on the internet, automating reconnaissance and vulnerability discovery: detecting unpatched vulnerabilities (gaps in security) and misconfigurations, analyzing source code, and automatically generating attack scenarios.” Furthermore, he says, “They can generate large volumes of personalized phishing emails and use voice or deepfakes for scams impersonating CEOs. By combining multiple agents, they can automatically progress from reconnaissance to system intrusion, lateral movement, data theft, and covering their tracks. A technique called ‘vibe hacking,’ which subtly mimics the behavior of legitimate users to bypass defense systems, has also already emerged.”
Experts estimate that by 2026, in many cases, “the time it takes for a vulnerability to be exploited will be cut in half.” Because the attacker side has fewer ethical constraints, there is a risk they will utilize advanced AI for offensive purposes before the defenders can act.
A symbolic case recently drawing attention in this context is “Claude Mythos,” an advanced model by Anthropic. The company postponed the initially planned public release and instead provided it only to major technology companies and infrastructure providers (such as Microsoft, Apple, AWS, NVIDIA, and Cisco) for strictly defensive purposes.
The reason for this lies in the capabilities the model demonstrated. Mythos reportedly discovered a vast number of previously unknown, high-risk vulnerabilities in nearly all major operating systems, browsers, and common applications. Some of these flaws were said to have existed for over 20 years.
If a model with this level of capability were released without restriction, there is an extremely high risk that malicious attackers would exploit it before defenders could apply patches. Anthropic adopted a “defense-first” strategy, limiting its use to helping critical software and infrastructure vendors fix vulnerabilities.
If AI of the Mythos class falls into the hands of attackers or nation-state adversaries, companies and society will face a terrifying threat. For example, CEO Ritesh warns, “Since it can autonomously discover large numbers of subtle system flaws and instantly convert them into attacks, the speed and scale of attacks will explode. Catastrophic attacks on critical infrastructure, large-scale ransomware attacks, and espionage could be carried out before defenders can respond. This would shake the trust in the entire digital infrastructure.”
The stir surrounding Mythos has once again demonstrated that state-of-the-art “frontier AI” is a powerful double-edged sword that can be used for both defense and attack. Designing the scope of disclosure and the intended use of advanced models is becoming an unavoidable theme not only for tech companies but also for the businesses that use them and for regulatory authorities.
How should private companies approach AI from here on? Companies need to position AI not merely as a cost-cutting tool or experimental technology, but as a “strategic partner.” Rather than rushing to implement it solely in pursuit of efficiency, establishing proper rules for ethics, risk, security, and AI monitoring will enable sustainable and secure integration. It will also be essential to continuously upskill employees, update internal rules, and stay aware of the threat landscape and risk information surrounding the company.
Going forward, companies will also utilize AI to defend against risks such as cyberattacks. As attackers weaponize AI, whether a company can utilize equivalent or superior technology for defense will be a factor that determines its future competitiveness.
CEO Ritesh says, “By limiting permissions and combining AI with existing security principles like log monitoring, and integrating it into continuous monitoring and incident response plans, companies can reduce potential breach costs worth millions of dollars. The key is to combine the speed and scale of AI with human judgment, oversight, and robust data protection.”
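The “limit permissions and keep log monitoring” principle Ritesh describes can be illustrated with a simple allowlist gate around an agent's tool calls, combined with an audit log. This is a minimal sketch under assumed names; the tool names and policy below are invented for illustration, not taken from any real product.

```python
# Sketch of least-privilege plus audit logging for AI agent tool calls:
# every call passes through an allowlist gate and is written to a log.
# Tool names and the policy are hypothetical.

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")

ALLOWED_TOOLS = {"read_ticket", "draft_reply"}   # no send/delete rights

def guarded_call(tool_name, func, *args):
    if tool_name not in ALLOWED_TOOLS:
        log.warning("blocked tool call: %s", tool_name)
        raise PermissionError(f"{tool_name} not permitted for this agent")
    log.info("tool call: %s args=%r", tool_name, args)
    return func(*args)

# Permitted call succeeds and is logged; anything outside the
# allowlist is blocked before the tool ever runs.
reply = guarded_call("read_ticket", lambda ticket_id: f"ticket {ticket_id} loaded", 42)
```

The design choice mirrors the quote: the agent keeps its speed and scale, but every action is both constrained in advance (permissions) and observable after the fact (logs) so that human oversight and incident response remain possible.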
AI Agents can be an extremely powerful lever for companies in both “offensive productivity enhancement” and “defensive cyber protection.” On the other hand, if that same technology falls into attackers’ hands, it creates risks of unprecedented scale and speed. Successfully making AI Agents an ally while preparing for their “shadow” side will therefore be the dividing line for corporate competitiveness in the years to come.