“India Stands Out Globally For Its Emphasis On Secure And Responsible GenAI Adoption”

In this exclusive interaction, Rajeev Ranjan, Editor of Digital Terminal, speaks with Sheetal Mehta, SVP and Global Head of Cybersecurity, NTT DATA, Inc., on how businesses can bridge this gap.
“India Stands Out Globally For Its Emphasis On Secure And Responsible GenAI Adoption”
Published on
5 min read

As GenAI adoption accelerates, Indian enterprises face a critical divide—CEOs see opportunity, while CISOs see risk. In this exclusive interaction, Rajeev Ranjan, Editor of Digital Terminal, speaks with Sheetal Mehta, SVP and Global Head of Cybersecurity, NTT DATA, Inc., on how businesses can bridge this gap. From regulatory readiness and governance gaps to secure innovation and upskilling, Sheetal offers sharp insights into how Indian security leaders are reshaping cybersecurity strategies for the GenAI era.

Rajeev: Why is there a disconnect between CEOs’ optimism and CISOs’ concerns about GenAI adoption, and how can Indian enterprises bridge this gap?

Sheetal: Indian enterprise CEOs are under immense pressure to demonstrate digital innovation, especially with the buzz around GenAI. However, CISOs view GenAI adoption differently. They are concerned about issues such as model hallucinations, data leakage, and compliance with the upcoming Digital Personal Data Protection (DPDP) Act. So, while CEOs see GenAI as a revenue enabler, CISOs perceive it as a source of risk exposure.

CISOs are often brought into business conversations late in the process, which creates added risks for enterprises. With shadow AI, these risks multiply even further. To add to this, many firms lack robust AI governance frameworks, which deepens the leadership disconnect.

To bridge this gap, enterprises can establish AI governance councils involving both business and cybersecurity leadership, ensuring joint ownership of the AI adoption strategy and risk oversight.

Rajeev: How are Indian security leaders approaching GenAI innovation vs. security risk? Any India-specific trends or best practices?

Sheetal: Indian security leaders are taking a cautious yet proactive approach to GenAI innovation, balancing opportunity with risk. Most enterprises are focusing on internal productivity use cases – such as chatbots and coding assistants – while delaying external-facing deployments due to regulatory uncertainty, particularly around the DPDP Act. Indian security leaders are still learning how to discover use cases for GenAI that will drive value, leading to POCs, pilots and experimentation. Even CISOs are trying to discover the asset class and the data it uses.

Sectors like BFSI and IT are leading in adopting Zero Trust frameworks integrated with AI, driven by tightening compliance requirements. There's also growing interest in privacy-preserving GenAI, including the use of local LLMs for sensitive workloads. It is imperative that leaders plan the handling of non-human identities with agentic AI solutions.

To manage risks, some organizations have established internal “Responsible AI” boards to evaluate innovation use cases for compliance and ethical alignment. Notably, Indian security leaders are shifting from a traditional ‘wait-and-watch’ stance to a ‘build securely while iterating’ approach, aligning innovation with evolving policy guardrails. Despite this progress, 54% of Indian security leaders report a significant gap between innovation and responsibility, citing security risks and a preference for safe approaches as the main reasons for this gap.

Strong leadership involvement is emerging as a best practice. The majority of organizations in India involve their CISOs in GenAI decisions and vendor selection, and emphasize the importance of having a named C-suite appointee responsible for GenAI strategy. GenAI safety capabilities are the top criteria for technology partnerships, and employee education on ethical GenAI use is increasingly seen as a key leadership responsibility. India stands out globally for its emphasis on secure and responsible GenAI adoption, with 90% of organizations increasing investment in security as a direct result of GenAI initiatives.

Rajeev: With 88% citing legacy infrastructure as a roadblock, what steps should Indian firms take to modernize and improve AI-era cybersecurity?

Sheetal: Indian firms should prioritize hybrid cloud adoption built on security-first architectures such as Zero Trust and Secure Access Service Edge (SASE), while leveraging cloud-native AI security tools to enhance threat detection and response.

As a starting point, firms should conduct comprehensive legacy risk assessments to identify systems that are incompatible with AI integration and pose security vulnerabilities. This should be followed by a comprehensive knowledge management and AI training program, using robust Proof of Concepts (POCs) and strong business case for infrastructure modernization. Next, enterprises should conduct digital and AI transformation programs that accelerate innovation, ideally in partnership with cloud providers that meet India’s data residency requirements.

Given that 87% of organizations report legacy infrastructure is affecting both business agility and GenAI adoption, it is essential to upskill cybersecurity teams. Focus areas should include AI observability, AI-powered security automation, and AI risk and governance management, ensuring teams are equipped to manage emerging threats and compliance challenges in the AI-driven landscape.

Rajeev: 72% of organizations lack formal GenAI usage policies. Why this lag in governance, and what frameworks should Indian enterprises adopt?

Sheetal: The lag in GenAI governance among Indian enterprises is largely due to the fact that they are in an experimentation phase, and CISOs are often brought into conversations around GenAI adoption too late. Many firms have yet to establish structured policies around responsible use, model lifecycle, and data governance.

To address this gap, enterprises should adopt Responsible AI principles and implement frameworks like AI TRiSM and the Microsoft Responsible AI framework, tailored to local compliance needs. Establishing internal AI governance boards with cross-functional representation – legal, risk, tech, and data – and creating use-case-specific risk policies will help ensure secure and ethical GenAI deployment.

Rajeev: 69% of CISOs say their teams lack GenAI skills. What are the top skills Indian cybersecurity teams need, and how can upskilling be accelerated?

Sheetal: Indian cybersecurity teams urgently need upskilling in key AI-era capabilities, including AI threat modelling and red teaming, AI model security (e.g., prompt injection, data poisoning), AI governance, compliance, and audit, and the use of Agentic AI and GenAI in security operations.

To accelerate upskilling, organizations should encourage IT teams to learn AI governance, not just coding – understanding bias, model explainability, and audit readiness is essential, especially under the DPDP Act. Practical learning can be driven through hands-on workshops with platforms like Microsoft, Google Cloud, and AWS, and by building internal Centers of Excellence (CoEs) focused on real-world use cases like secure code generation and SOC automation. Additionally, firms should leverage government-backed initiatives such as NASSCOM FutureSkills Prime and MeitY’s AI skilling programs to scale training across teams.

Rajeev: How is digital trust being redefined in India’s GenAI era? What steps should businesses take to build trust among stakeholders and regulators?

Sheetal: In India’s GenAI era, digital trust is being redefined by how explainable, secure, and compliant AI systems are. With rising privacy awareness among consumers and regulators, trust now hinges on transparency, accountability, and ethical AI use.

To build trust, businesses must implement AI observability, auditability, and watermarking to ensure traceability and combat misinformation – especially in sensitive sectors like BFSI, healthcare, and public services. Preparing for the DPDP Act and IndiaAI Mission is essential, with a focus on data minimization, consent, and ethical AI practices.

Indian firms should start by building an AI asset inventory, conducting continuous risk assessments and security testing, and communicating AI system limitations clearly to users. This will help foster stakeholder confidence and regulatory alignment.

𝐒𝐭𝐚𝐲 𝐢𝐧𝐟𝐨𝐫𝐦𝐞𝐝 𝐰𝐢𝐭𝐡 𝐨𝐮𝐫 𝐥𝐚𝐭𝐞𝐬𝐭 𝐮𝐩𝐝𝐚𝐭𝐞𝐬 𝐛𝐲 𝐣𝐨𝐢𝐧𝐢𝐧𝐠 𝐭𝐡𝐞 WhatsApp Channel now! 👈📲

𝑭𝒐𝒍𝒍𝒐𝒘 𝑶𝒖𝒓 𝑺𝒐𝒄𝒊𝒂𝒍 𝑴𝒆𝒅𝒊𝒂 𝑷𝒂𝒈𝒆𝐬 👉 FacebookLinkedInTwitterInstagram

Related Stories

No stories found.
logo
DIGITAL TERMINAL
digitalterminal.in