By Dr. Tejasvi Addagada
Generative AI is no longer a siloed experiment. It is shaping customer interactions, redefining knowledge work, and altering the nature of activities such as autonomous decision-making, summarization, and search. Yet governance has yet to keep pace. Across industries, governance gaps expose organizations to reputational risk and strategic blind spots. If we are serious about responsible adoption, governance must evolve beyond principles into practice.
Fragmented Frameworks
AI governance today is fragmented and inconsistent. While the language of “fairness, accountability, and transparency” is widely adopted, few organizations have turned these principles into operating procedures. Regional or siloed approaches differ significantly, often leaving boards and leadership teams with a patchwork of policies that cannot be enforced. Without clarity, accountability remains diluted and governance stays a bottom-up, ad hoc implementation.
Organizational Readiness Gaps
Inside organizations, readiness to govern GenAI implementations is uneven. Many deployments begin as bottom-up experiments, with employees using public GenAI tools and entering personal data into them in the absence of clear oversight. This creates exposure at scale, including IP leakage, data misuse, and uncontrolled model behaviour. Boards and executives must recognize that frameworks for GenAI governance are not optional; they are strategic enablers that protect value in the long run. Yet foundational elements such as AI registries, risk assessment tools, and continuous monitoring are still missing in most enterprises.
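To make the registry idea concrete, here is a minimal sketch of what an AI registry entry could look like in code. The field names, risk tiers, and escalation rule are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# Illustrative registry entry; field names are assumptions, not a standard schema.
@dataclass
class AIRegistryEntry:
    system_name: str                 # e.g. "customer-support-copilot"
    owner: str                       # accountable business owner, not just the developer
    business_purpose: str            # why the system exists
    risk_tier: str                   # e.g. "low" | "medium" | "high"
    data_categories: List[str] = field(default_factory=list)  # e.g. ["PII", "financial"]
    last_review: date = date.today()
    monitoring_enabled: bool = False

def needs_escalation(entry: AIRegistryEntry) -> bool:
    """Flag high-risk systems that lack continuous monitoring."""
    return entry.risk_tier == "high" and not entry.monitoring_enabled

entry = AIRegistryEntry(
    system_name="customer-support-copilot",
    owner="Head of Customer Operations",
    business_purpose="Draft responses to customer queries",
    risk_tier="high",
    data_categories=["PII"],
)
print(needs_escalation(entry))  # True: high risk, no monitoring yet
```

Even a record this simple forces the accountability question that most shadow deployments skip: who owns the system, and who reviews it.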
Technical and Lifecycle Blind Spots
Governance continues to focus on pre-deployment safeguards, such as testing and model alignment, while the greatest risks emerge after deployment in high-impact domains like healthcare, finance, and information integrity. Bias, factuality, and fairness remain persistent weaknesses in large language models. Moreover, governance tools tend to prioritize developers while neglecting other critical actors: deployers, business users, the governance community, data stewards, and the stakeholders impacted by AI decisions.
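Shifting governance past deployment can start small. The sketch below shows the shape of a post-deployment check that logs every interaction and flags outputs for human review; the specific checks and flag names are simplified assumptions, not a validated bias or factuality test suite:

```python
import json
from datetime import datetime, timezone

# Simplified post-deployment guard; checks and flag names are illustrative
# assumptions, not a validated bias/factuality test suite.
AUDIT_LOG = []

def review_output(model_id: str, prompt: str, response: str) -> bool:
    """Log the interaction and return True if it needs human review."""
    flags = []
    if not response.strip():
        flags.append("empty_response")
    if "http" not in response and "source" not in response.lower():
        flags.append("no_attribution")  # crude proxy for unsourced claims
    AUDIT_LOG.append({                  # in production: durable, access-controlled store
        "ts": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "prompt": prompt,
        "response": response,
        "flags": flags,
    })
    return bool(flags)

if review_output("genai-v1", "Summarize Q3 results", "Revenue grew 12%."):
    print("Flagged for review:", json.dumps(AUDIT_LOG[-1]["flags"]))
```

The point is not the heuristics themselves but the lifecycle coverage: every output leaves an auditable trace, and flagged cases reach the deployers, stewards, and business users that pre-deployment testing never involves.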
Regulatory and Policy Challenges
Regulators worldwide are introducing AI principles and sectoral laws, yet most fail to address the distinct risks posed by GenAI, including hallucinations, intellectual property disputes, and misinformation. Globally, regulatory capacity remains limited, enforcement is weak, and accountability is concentrated among a few technology providers. This imbalance underscores the urgent need for regulators, policymakers, and industry to collaborate rather than operate in silos.
Bridging Policy and Technology
Perhaps the sharpest gap is between aspiration, experimentation, and execution. Safety, attribution, and transparency are prescribed in policy, but rarely defined in measurable, enforceable terms. This disconnect leaves both regulators and enterprises exposed. To close it, governance must integrate technical expertise into policymaking and embed governance mechanisms directly into AI systems. Only then will policies move from principle to practice.
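What "embedding governance mechanisms directly into AI systems" can mean in practice is policy-as-code. The sketch below is a hypothetical example, with assumed metadata keys and rule wording, of turning a written transparency requirement into an enforceable pre-release gate:

```python
# Hypothetical policy-as-code gate: translates a written transparency
# requirement ("AI outputs must be labeled and attributable") into an
# enforceable check. Metadata keys and rules are illustrative assumptions.
REQUIRED_METADATA = {"model_version", "generated_by_ai", "source_documents"}

def release_gate(output_metadata: dict) -> None:
    """Raise before release if the output violates the transparency policy."""
    missing = REQUIRED_METADATA - output_metadata.keys()
    if missing:
        raise ValueError(f"Policy violation, missing metadata: {sorted(missing)}")
    if not output_metadata["generated_by_ai"]:
        raise ValueError("Policy violation: AI-generated content must be labeled")

release_gate({
    "model_version": "v2.3",
    "generated_by_ai": True,
    "source_documents": ["q3_report.pdf"],
})  # passes; omit a key to see the gate reject the release
```

A check like this makes the policy measurable: a release either satisfies the transparency rule or it does not, which is precisely the enforceability that principles alone lack.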
Toward Responsible and Inclusive Governance
The direction of travel is clear. Governance must:
- Adopt multi-stakeholder approaches that involve regulators, business leaders, technologists, and impacted stakeholders.
- Be adaptive and risk-based, evolving with technology and use cases.
- Operate at multiple levels that balance operational risk controls, enterprise strategy, and cross-sector collaboration.
Responsible governance is not simply about minimizing risk; it is about creating the conditions for sustainable value. Organizations that institutionalize GenAI governance early will not only reduce compliance and reputational risks but also build trust and resilience into their business models.
The Bottom Line
GenAI governance is underdeveloped, fragmented, and uneven. The gap between capability and safeguard is widening. Boards and leadership teams must treat GenAI governance as a strategic priority, integrating clear accountabilities, adaptive controls, and inclusive practices. Those who act now will not only protect their organizations but also lead in shaping the standards of trust for the AI-driven future.

Transforming Data Risk into Opportunity: Insights from Dr. Tejasvi Addagada
Dr. Tejasvi Addagada is a technology leader, bestselling author, and prominent expert in data and AI governance. He has led multiple data and AI management and governance functions for global banks, advises Fortune 500 firms, and has over 15 years of experience in data strategy and risk management. He has authored several bestselling books, including Data Risk Management and Data Management and Governance Services, and has contributed pioneering research on Generative AI in corporate data environments. His work spans sectors including banking and healthcare, where he focuses on transforming complex data challenges into strategic business opportunities.
Join Our Data Community
At Data Principles, we believe in making data powerful and accessible. Get monthly insights, practical advice, and company updates delivered straight to your inbox. Subscribe and be part of the journey!
