New AI deployments should be done in controlled environments

While generative AI provides a strong base, human intervention is still needed to contextualize the outputs.

In a conversation with CIO&Leader, Vamsi Krishna Ithamraj, CTO of Access Mutual Fund, shares his expertise on the current and future roles of AI and ML in cybersecurity. With over two decades of experience in the technology sector, Vamsi provides practical insights, successful use cases, and real-world applications.

Vamsi Krishna Ithamraj,
CTO, Access Mutual Fund

He emphasizes addressing AI challenges to ensure accurate and beneficial outcomes. He highlights how overcoming these challenges can help organizations fully harness AI’s potential in safeguarding data assets and combating emerging cyber threats.

CIO&Leader: Could you shed some light on how AI and large-scale models are transforming traditional cybersecurity practices, and what are the main advantages of leveraging AI in cybersecurity?

Vamsi Krishna Ithamraj: I think generative AI, more than AI itself, has been the flavor of the season; it has held that spot for more than a year now, and that doesn't seem to be ending soon. There are immediate use cases in augmenting content preparation, benefiting marketing and sales teams, which rely on readily shareable content to improve customers' knowledge of products. Another key use is in collaboration, where generative capabilities can quickly condense large volumes of data into summaries and actionable insights.

Cybersecurity leaders are also benefiting from generative AI, as it helps analyze vast amounts of data and event logs. This represents a significant change from traditional practices, allowing cybersecurity leaders to find new efficiencies. In BFSI, for example, cybersecurity was largely CISO-led with a focus on regulation, leaving little time for innovation. Now, there is a need for cybersecurity to adopt agile and scrum methods, allowing closer collaboration with business functions.

CIO&Leader: Can you share some insights on practical use cases of AI and ML in cybersecurity initiatives? What have been the key learnings?

Vamsi Krishna Ithamraj: For one, we have set up an isolated environment for generative AI use cases, ensuring there are no data leaks before wider deployment. In the asset management industry, research analysts benefit from generative AI by synthesizing vast amounts of data, such as industry reports and public filings, into actionable insights. This saves time and enhances productivity.
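The interview does not describe how such an isolated environment is built, but one common control is to sanitize prompts before they ever reach a model. The Python sketch below illustrates that idea only; the sensitive-data patterns, the submit_prompt gatekeeper, and the call_internal_model placeholder are illustrative assumptions, not Access Mutual Fund's actual setup.

```python
import re

# Hypothetical patterns an organization might treat as sensitive; a real
# list would come from the firm's own data-classification policy.
SENSITIVE_PATTERNS = {
    "pan": re.compile(r"\b[A-Z]{5}[0-9]{4}[A-Z]\b"),      # Indian PAN-style IDs
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
    "account": re.compile(r"\b\d{10,16}\b"),              # long numeric account IDs
}

def redact(text: str) -> str:
    """Replace sensitive substrings with placeholders before the prompt
    leaves the isolated environment."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

def call_internal_model(prompt: str) -> str:
    # Placeholder: in a real sandbox this would invoke a locally hosted or
    # network-isolated model. Here it simply echoes the sanitized prompt.
    return f"(sandboxed model response to: {prompt!r})"

def submit_prompt(prompt: str) -> str:
    """Gatekeeper for the sandboxed generative AI service: redact first,
    then forward to the isolated model endpoint."""
    return call_internal_model(redact(prompt))

if __name__ == "__main__":
    print(submit_prompt("Summarize filings for client ABCDE1234F, contact a.b@example.com"))
```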

Another use case is in HR, where job descriptions can be quickly crafted using generative AI, streamlining the hiring process. Marketing functions also benefit by using generative AI to create collateral and support new business incubations. While generative AI provides a strong base, human intervention is still needed to contextualize the outputs.

CIO&Leader: What are the risks associated with leveraging AI for cybersecurity and replacing traditional techniques? How can these risks be prevented?

Vamsi Krishna Ithamraj: The biggest risk is the human element: who uses the technology and for what purpose. It is crucial to ensure the technology is in the right hands. Another risk is cross-leveraging IP, where the AI treats all uploaded data as input without distinguishing between different organizations' intellectual property. Organizations must be careful about what data they upload to avoid IP breaches.
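One practical way to reduce that upload risk is to gate submissions on a document's classification label. The short sketch below illustrates the idea; the labels, the Document type, and the can_upload check are hypothetical, not a control prescribed in the interview.

```python
from dataclasses import dataclass

# Illustrative classification labels; a real policy would come from the
# organization's information-classification standard.
ALLOWED_FOR_UPLOAD = {"public", "marketing-approved"}

@dataclass
class Document:
    name: str
    classification: str  # e.g. "public", "internal", "confidential", "client-ip"

def can_upload(doc: Document) -> bool:
    """Allow a document to be sent to a shared AI service only if its
    classification label permits it."""
    return doc.classification.lower() in ALLOWED_FOR_UPLOAD

if __name__ == "__main__":
    for doc in [
        Document("product-brochure.pdf", "public"),
        Document("client-portfolio.xlsx", "confidential"),
    ]:
        print(f"{doc.name}: {'allowed' if can_upload(doc) else 'blocked'}")
```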

Cost is another factor. While the technology itself may not be expensive, the infrastructure required to support it can be. It’s important to make each function accountable for their use of the technology to manage costs effectively.

CIO&Leader: For IT leaders looking to deploy AI-based solutions, whether for cybersecurity or other processes, what tips can you share to help them deploy solutions wisely and ensure best practices?

Vamsi Krishna Ithamraj: One of the first things is to look at enterprise-grade capabilities. It’s important to use enterprise editions of technologies to ensure support and protection through contracts. When deploying new tech, always roll it out in a controlled, isolated environment to minimize risks. Invest in zero-trust platforms with browser-isolated test capabilities for safe experimentation.

If budgets are tight, start with a small, diverse group of users and tech talent to develop business cases and demonstrate success while containing risks.
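One lightweight way to contain such a pilot is to put the new tooling behind an allowlist of pilot users. The sketch below illustrates this; the PILOT_USERS roster and the request handler are purely illustrative and would in practice live in an identity provider group or feature-flag service rather than in code.

```python
# Hypothetical pilot roster for the controlled rollout.
PILOT_USERS = {
    "analyst.one@example.com",
    "hr.lead@example.com",
    "marketer@example.com",
}

def has_genai_access(user_email: str) -> bool:
    """Restrict the new generative AI tooling to the small pilot group."""
    return user_email.lower() in PILOT_USERS

def handle_request(user_email: str, prompt: str) -> str:
    if not has_genai_access(user_email):
        return "Access is limited to the pilot group during the controlled rollout."
    # Forward to the sandboxed service once the user is confirmed in the pilot.
    return f"(pilot response for {user_email}: {prompt[:40]}...)"

if __name__ == "__main__":
    print(handle_request("analyst.one@example.com", "Summarize Q1 industry reports"))
    print(handle_request("random.user@example.com", "Summarize Q1 industry reports"))
```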

