Bridging AI Governance & South Asia
Rapid advances in artificial intelligence are reshaping societies, economies, and institutions — yet South Asia remains underrepresented in global AI policy conversations. AI Governance LK bridges that gap, translating complex AI risks into accessible knowledge for policymakers, technologists, and citizens across the region.
As AI systems grow more capable, the risks they pose become more consequential. Understanding these risks — especially in the South Asian context — is the first step toward meaningful governance and policy. These are not distant science-fiction concerns: many are unfolding today.
AI systems may pursue goals that diverge from human values — not through malice, but through poorly specified objectives. As systems become more autonomous and capable, even minor misalignment could have catastrophic consequences at scale.
AI-powered surveillance — facial recognition, biometric tracking, social credit systems — poses profound risks to civil liberties. Without robust legal frameworks, governments and private actors can deploy these at mass scale with minimal accountability.
Advanced AI could allow a small number of states, corporations, or individuals to amass unprecedented economic, military, and informational power — fundamentally disrupting democratic checks and global stability.
Lethal autonomous weapons systems (LAWS) capable of selecting and engaging targets without meaningful human control represent a new class of weapons of mass destruction. The absence of international AI arms-control treaties is alarming.
AI-generated deepfakes, synthetic audio, and manipulated content can undermine democratic processes, inflame ethnic and religious tensions, and destabilise communities — with limited tools available to detect or counter them at speed.
AI systems trained on historical data can perpetuate and amplify existing inequalities across gender, caste, ethnicity, and religion. Biased AI in lending, policing, or judicial systems can have severe real-world consequences.
Rapid AI-driven automation threatens to displace large segments of the workforce, especially in industries that South Asian economies depend upon — including manufacturing, textiles, call centres, and knowledge services.
These concepts form the intellectual scaffolding of responsible AI policy. Understanding them is essential for anyone engaging with AI governance discussions.
The technical and philosophical challenge of ensuring AI systems reliably pursue outcomes that humans genuinely intend. This includes corrigibility (AI being correctable), value learning, and robustness under distributional shift. Misaligned AI at scale is considered one of the most serious existential-level risks.
A research discipline focused on ensuring AI systems operate predictably, robustly, and without causing unintended harm. Encompasses technical safety (adversarial robustness, interpretability) and sociotechnical safety (human oversight, deployment safeguards, red-teaming).
The constellation of laws, policies, standards, institutions, and norms that regulate how AI is developed, deployed, and used. Effective governance is multi-layered — spanning international bodies, national regulators, corporate compliance, and civil society accountability mechanisms.
A practice-oriented framework for AI development that embeds ethical principles — fairness, transparency, accountability, privacy, and inclusiveness — into the design lifecycle. Distinct from AI safety but complementary; RAI is focused on values-by-design rather than purely on preventing catastrophic failures.
Interpretability is the degree to which a human can understand the internal mechanics of an AI model. Explainability is the capacity to describe its outputs in human-understandable terms. Both are increasingly required under regulatory frameworks (EU AI Act, GDPR) and are fundamental to accountability; a brief code sketch of one post-hoc explainability technique appears at the end of this glossary.
Scenarios where advanced AI could contribute to outcomes that permanently curtail humanity's long-term potential — ranging from AI-enabled totalitarianism to scenarios where misaligned superintelligent AI systems act contrary to human survival. Once considered fringe, x-risk concerns are now endorsed by leading AI researchers and policymakers.
The principle that individuals and nations have rights over data generated within their jurisdictions. For South Asia, localisation mandates and enforceable data protection laws (Sri Lanka PDPA, India DPDP) are central governance tools — though implementation gaps remain substantial.
The use of AI tools within organisations without formal approval, oversight, or governance structures. Shadow AI mirrors Shadow IT but with amplified risk — including data leakage, unreviewed model outputs informing decisions, and regulatory non-compliance — particularly in sectors like finance, law, and healthcare.
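To make the interpretability and explainability entries above more concrete, the sketch below applies scikit-learn's permutation importance to a hypothetical classifier. Permutation importance is just one post-hoc explainability technique: it measures how much predictive accuracy falls when each input feature is shuffled, yielding a human-readable ranking of which inputs drive a model's outputs. The dataset and model here are placeholder assumptions, not a recommended compliance method.

```python
# A minimal post-hoc explainability sketch using permutation importance.
# The synthetic dataset and the random-forest model are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Hypothetical tabular data: 4 features, binary outcome.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the resulting drop in accuracy.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: mean importance = {score:.3f}")
```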
A curated library of books, papers, organisations, and tools for anyone wanting to go deeper into AI governance and safety.
A deeply reported investigation into the challenges of making AI systems do what we actually want. Accessible and essential for non-technical policymakers.
The foundational text on existential risk from advanced AI. Introduces the concept of an intelligence explosion and analyses paths to catastrophic misalignment.
Proposes a new paradigm for AI — machines that are uncertain about human preferences and defer to humans. A must-read for AI safety researchers.
Examines the material, political, and social costs of artificial intelligence — from mines to data centres to labour exploitation. Critical for a Global South perspective.
An accessible analysis of how opaque, unaccountable algorithms affect real lives — in lending, criminal justice, and employment. Particularly relevant for emerging economies.
DeepMind co-founder's analysis of the transformative and dangerous wave of AI and synthetic biology — and an argument for containment. Highly relevant for policymakers.
Amodei et al. (2016) — landmark paper defining practical near-term AI safety challenges including reward hacking, safe exploration, and distributional shift.
Annual comprehensive report tracking AI capabilities, investment, policy, and governance worldwide. Includes data on AI adoption across Asia and the Global South.
Outlines challenges of highly capable AI for national security: strategic advantage, WMD escalation, AI-enabled deception, autonomous systems, and internal misuse.
The first global normative framework on AI ethics, adopted by all 193 UNESCO member states. Covers AI lifecycle ethics, data governance, and cultural rights — critical reading for South Asian policymakers.
The world's first comprehensive AI law. Classifies AI by risk level and imposes obligations on providers and deployers. Sets a global benchmark that influences South Asian legislation.
Analysis of Sri Lanka's Personal Data Protection Act (No. 9 of 2022) and its implications for AI systems processing personal data — gaps, strengths, and reform priorities.
Leading research organisation working on reducing societal-scale AI risks, including the landmark statement on AI catastrophic risk signed by hundreds of researchers.
World's first graduate-level AI university, based in Abu Dhabi. Growing contributor to responsible AI research relevant to the Global South and Arab world.
AI safety company behind Claude. Publicly committed to developing AI safely and publishes research on Constitutional AI, scalable oversight, and interpretability.
UK's national institute for data science and AI. Produces policy-relevant research on AI ethics, governance, and fairness, with strong international outreach programmes.
Interdisciplinary research centre focusing on AI's social implications — power, labour, bias, and accountability. Strong emphasis on policy-relevant research and advocacy.
Existential Risk Alliance supports 10-week research fellowships focused on catastrophic risks from advanced AI, covering technical safety, governance, and policy.
Comprehensive database of AI risks catalogued from academic and industry sources. Useful for systematic risk assessment and governance gap analysis.
Voluntary framework for managing the risks that AI poses to individuals, organisations, and society. Widely adopted as a baseline for AI governance maturity assessment.
Standardised documentation templates for AI models and training datasets that promote transparency about intended use, performance characteristics, and limitations.
Open-source toolkit from IBM to examine, report, and mitigate discrimination and bias in ML models throughout the AI application lifecycle.
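As a taste of what the toolkit automates, here is a minimal sketch computing two standard group-fairness metrics with AIF360. The tiny in-memory dataset, the column names, and the choice of privileged group are illustrative assumptions; a real assessment would use the deployed model's actual inputs and outcomes.

```python
# A minimal bias check with IBM's AIF360, assuming a pandas DataFrame with a
# binary outcome column and a binary protected attribute. Column names and
# group encodings below are illustrative assumptions.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.DataFrame({
    "gender":   [1, 1, 0, 0, 1, 0, 0, 1],   # 1 = privileged group (assumption)
    "approved": [1, 1, 0, 1, 1, 0, 0, 1],   # 1 = favourable outcome
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["gender"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"gender": 1}],
    unprivileged_groups=[{"gender": 0}],
)

# Disparate impact: ratio of favourable-outcome rates (unprivileged / privileged);
# values far below 1.0 suggest the unprivileged group is approved less often.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```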
In-depth analysis at the intersection of AI governance, South Asian policy, and responsible technology deployment.
A critical review of Sri Lanka's Personal Data Protection Act (No. 9 of 2022) and how it applies to AI systems that process personal data — gaps, obligations, and reform priorities.
An exploration of AI-generated video and audio misinformation targeting Tamil-speaking populations — and how community education can serve as a first line of defence.
How organisations in Sri Lanka, India, and Pakistan can leverage AWS's generative AI services while building robust governance guardrails from day one.
Sri Lankan tech companies serving EU clients must understand the EU AI Act's extraterritorial reach. This article unpacks the obligations, timelines, and compliance strategies.
Employees across the region are using unauthorised AI tools at scale. Without detection and governance frameworks, organisations face serious data and compliance exposure.
Introducing TamilMMLU — a benchmark of 118 questions across 5 domains for evaluating large language models in Tamil. Why representation in benchmarking matters for South Asia.
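For readers unfamiliar with MMLU-style evaluation, the sketch below shows the basic scoring loop such a benchmark implies: the model picks an option letter for each multiple-choice question, and accuracy is reported per domain. The sample items and the ask_model stub are hypothetical placeholders, not part of the TamilMMLU release.

```python
# A minimal sketch of MMLU-style multiple-choice scoring: compare the model's
# chosen option letter with the gold answer and report per-domain accuracy.
from collections import defaultdict

SAMPLE_ITEMS = [  # hypothetical items; the real benchmark questions are in Tamil
    {"domain": "history", "question": "...", "choices": ["...", "...", "...", "..."], "answer": "B"},
    {"domain": "science", "question": "...", "choices": ["...", "...", "...", "..."], "answer": "A"},
]

def ask_model(question: str, choices: list[str]) -> str:
    """Placeholder for a call to the model under evaluation; returns 'A'-'D'."""
    return "A"

def evaluate(items: list[dict]) -> dict[str, float]:
    correct, total = defaultdict(int), defaultdict(int)
    for item in items:
        total[item["domain"]] += 1
        if ask_model(item["question"], item["choices"]) == item["answer"]:
            correct[item["domain"]] += 1
    return {domain: correct[domain] / total[domain] for domain in total}

for domain, accuracy in evaluate(SAMPLE_ITEMS).items():
    print(f"{domain}: {accuracy:.1%}")
```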
"The AI Governance Translator" — Bridging Legal, Technical & Business Domains
I am an AI & Data Governance Specialist and Independent Consultant with a unique cross-disciplinary profile that bridges law, technology, and business strategy. My work is built on a simple conviction: AI governance is not a compliance checkbox — it is fundamentally about organisational culture and societal values.
As an Attorney-at-Law of the Supreme Court of Sri Lanka with an LL.B, MBA, and MSc in Information Technology, I bring a perspective to AI governance that few practitioners possess: the ability to translate between technical realities, legal obligations, and business imperatives simultaneously — hence my positioning as "The AI Governance Translator."
I hold five AWS certifications including the AWS Certified Generative AI Developer – Professional (Early Adopter, among the first globally to earn this certification), AWS Solutions Architect Professional, AWS Machine Learning Specialty, and others. I serve as a Research Supervisor for MSc Data Science (Responsible AI) at Coventry University, and am an AWS Community Builder in the Machine Learning category — the only one from Sri Lanka at the time of acceptance.
I co-founded iExam (Exam.lk), an EdTech platform that scaled to over 4 million users across South Asia, serving as CTO. I have completed over 400 consulting projects across 30+ countries with a 4.9/5 rating on Freelancer.com. My research contributions include the TamilMMLU benchmark for evaluating large language models in Tamil, and reviewing 138 papers for ArabicNLP 2025.
I attended the MBZUAI Machine Learning Winter School 2026 as one of 60 participants selected from over 2,400 applicants, and participated as an All Builders Welcome Grant recipient at AWS re:Invent 2024 in Las Vegas. I am multilingual in English, Arabic, Tamil, and Sinhala — enabling me to communicate AI governance concepts across South Asian and Middle Eastern communities.
AI Governance LK is my contribution to building South Asia's capacity to engage meaningfully with global AI governance debates — ensuring that the region's voices, values, and contexts are represented in the frameworks that will shape our collective AI future.
Educational Purpose: All content on AI Governance LK is provided strictly for educational and informational purposes. It does not constitute legal advice, regulatory guidance, or professional consultation of any kind.
Accuracy: While every effort is made to ensure accuracy, the field of AI governance, safety, and policy evolves rapidly. Information may become outdated, and readers should verify all material against primary and authoritative sources before relying on it for any purpose.
No Liability: The founder and contributors of AI Governance LK disclaim any liability for decisions made on the basis of content published on this site. References to legislation, regulations, or standards are provided for informational context only and should not be interpreted as definitive legal interpretations.
Third-Party Sources: This site references third-party reports, organisations, and publications. Such references do not constitute endorsement. Readers are encouraged to evaluate all sources critically and independently.
AI-Assisted Content: Some content on this site may be drafted with the assistance of AI writing tools. All such content is reviewed and edited by the founder for accuracy, balance, and appropriateness prior to publication.
Whether you are a policymaker, researcher, technologist, or student interested in AI governance, responsible AI, or the South Asian policy landscape — I welcome meaningful conversations and collaboration.
Research Supervisor · Coventry University
MSc Data Science (Responsible AI) · Supervising graduate research in AI ethics, data governance, and responsible ML deployment.
I typically respond within 2–3 business days.