South Asia's AI Governance Hub

Understanding AI Risks
Governing AI Responsibly

Bridging AI Governance & South Asia

Rapid advances in artificial intelligence are reshaping societies, economies, and institutions — yet South Asia remains underrepresented in global AI policy conversations. AI Governance LK bridges that gap, translating complex AI risks into accessible knowledge for policymakers, technologists, and citizens across the region.

⚖️AI Governance
🛡️AI Safety
🔍AI Alignment
📜AI Policy
AI Alignment Responsible AI AI Safety Research PDPA Compliance AI Governance Frameworks Machine Learning Risks South Asia Policy Data Protection Algorithmic Accountability AI Ethics Existential Risk AI Regulation

Core AI Risks Facing Our World

As AI systems grow more capable, the risks they pose become more consequential. Understanding these risks — especially in the South Asian context — is the first step toward meaningful governance and policy. These are not distant science-fiction concerns: many are unfolding today.

🎯

AI Misalignment

AI systems may pursue goals that diverge from human values — not through malice, but through poorly specified objectives. As systems become more autonomous and capable, even minor misalignment could have catastrophic consequences at scale.

A governance gap in South Asia's AI deployment frameworks leaves alignment untested in high-stakes domains such as credit scoring and hiring.
🕵️

Surveillance & Erosion of Privacy

AI-powered surveillance — facial recognition, biometric tracking, social credit systems — poses profound risks to civil liberties. Without robust legal frameworks, governments and private actors can deploy these at mass scale with minimal accountability.

Sri Lanka's PDPA (2022) and India's DPDP Act (2023) mark progress, but enforcement capacity remains limited.

Concentration of Power

Advanced AI could allow a small number of states, corporations, or individuals to amass unprecedented economic, military, and informational power — fundamentally disrupting democratic checks and global stability.

South Asian nations risk deepening dependency on foreign AI platforms, reducing data sovereignty and strategic autonomy.
🤖

Autonomous Weapons & Military AI

Lethal autonomous weapons systems (LAWS) capable of selecting and engaging targets without meaningful human control represent a new class of weapons of mass destruction. The absence of international AI arms-control treaties is alarming.

South Asia's unresolved territorial disputes make the region particularly vulnerable to escalation via autonomous military systems.
📰

Disinformation & Synthetic Media

AI-generated deepfakes, synthetic audio, and manipulated content can undermine democratic processes, inflame ethnic and religious tensions, and destabilise communities — with limited tools available to detect or counter them at speed.

Tamil, Sinhala, and other South Asian language communities face elevated risk from AI misinformation given limited local-language detection tools.
⚖️

Algorithmic Bias & Discrimination

AI systems trained on historical data can perpetuate and amplify existing inequalities across gender, caste, ethnicity, and religion. Biased AI in lending, policing, or judicial systems can have severe real-world consequences.

Caste and ethnicity data embedded in historical records pose acute risks for AI deployed in government services across South Asia.
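A common first screen for the disparities described above is the "four-fifths rule" from US employment-discrimination practice: flag any group whose selection rate falls below 80% of the most favoured group's. A minimal sketch in plain Python; the loan decisions and group labels are invented for illustration:

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: (group, approved) pairs; returns per-group approval rate."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates, privileged):
    """Lowest group rate divided by the privileged group's rate.
    Values below 0.8 fail the common four-fifths screen."""
    return min(rates.values()) / rates[privileged]

# Invented loan decisions: (applicant group, 1 = approved).
loans = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
         ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
rates = selection_rates(loans)
print(rates)                         # {'A': 0.75, 'B': 0.25}
print(disparate_impact(rates, "A"))  # 1/3, fails the screen
```

A single ratio like this is only a screen, not a verdict; toolkits such as AIF360 (listed below) offer dozens of complementary metrics.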
💼

Economic Disruption & Labour Displacement

Rapid AI-driven automation threatens to displace large segments of the workforce, especially in industries that South Asian economies depend upon — including manufacturing, textiles, call centres, and knowledge services.

Sri Lanka's export-dependent and services-heavy economy requires urgent policy planning for AI-led disruption of its formal labour market.

Foundational Concepts in AI Governance

These concepts form the intellectual scaffolding of responsible AI policy. Understanding them is essential for anyone engaging with AI governance discussions.

01

AI Alignment

The technical and philosophical challenge of ensuring AI systems reliably pursue outcomes that humans genuinely intend. This includes corrigibility (AI being correctable), value learning, and robustness under distributional shift. Misaligned AI at scale is considered one of the most serious existential-level risks.
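The simplest form of misalignment, optimising a proxy measure rather than the intended goal, can be shown with a toy recommender; the video catalogue and field names are invented:

```python
# Invented catalogue. True objective: user satisfaction; proxy: watch time.
videos = [
    {"id": "tutorial", "watch_time": 8, "satisfaction": 9},
    {"id": "clickbait", "watch_time": 12, "satisfaction": 2},
]

# An optimiser given only the proxy picks differently from one given the
# true goal, a toy instance of Goodhart's law.
by_proxy = max(videos, key=lambda v: v["watch_time"])
by_true = max(videos, key=lambda v: v["satisfaction"])
print(by_proxy["id"])  # clickbait
print(by_true["id"])   # tutorial
```

The system is not malicious; it is doing exactly what the poorly specified objective asked of it.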

02

AI Safety

A research discipline focused on ensuring AI systems operate predictably, robustly, and without causing unintended harm. Encompasses technical safety (adversarial robustness, interpretability) and sociotechnical safety (human oversight, deployment safeguards, red-teaming).

03

AI Governance

The constellation of laws, policies, standards, institutions, and norms that regulate how AI is developed, deployed, and used. Effective governance is multi-layered — spanning international bodies, national regulators, corporate compliance, and civil society accountability mechanisms.

04

Responsible AI (RAI)

A practice-oriented framework for AI development that embeds ethical principles — fairness, transparency, accountability, privacy, and inclusiveness — into the design lifecycle. Distinct from AI safety but complementary; RAI is focused on values-by-design rather than purely on preventing catastrophic failures.

05

Interpretability & Explainability

Interpretability is the degree to which a human can understand the internal mechanics of an AI model. Explainability is the capacity to describe its outputs in human-understandable terms. Both are increasingly required under regulatory frameworks (EU AI Act, GDPR) and are fundamental to accountability.
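One widely used interpretability technique is permutation importance: permute one feature's values across rows and measure how much the model's error grows. A toy sketch, assuming an invented model and dataset, with a deterministic cyclic shift standing in for the usual random shuffle:

```python
def mse(model, X, y):
    """Mean squared error of model predictions against targets."""
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature_idx):
    """Error increase after permuting one feature's values across rows.
    A cyclic shift keeps this demo deterministic."""
    col = [x[feature_idx] for x in X]
    col = col[1:] + col[:1]
    X_perm = [list(x) for x in X]
    for row, v in zip(X_perm, col):
        row[feature_idx] = v
    return mse(model, X_perm, y) - mse(model, X, y)

# Invented model that uses feature 0 and ignores feature 1 entirely.
model = lambda x: 2.0 * x[0]
X = [[1.0, 5.0], [2.0, -3.0], [3.0, 7.0], [4.0, 0.0]]
y = [2.0, 4.0, 6.0, 8.0]

print(permutation_importance(model, X, y, 0))  # 12.0: model relies on feature 0
print(permutation_importance(model, X, y, 1))  # 0.0: feature 1 is ignored
```

Because it treats the model as a black box, the same check works on systems whose internals a regulator cannot inspect.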

06

Existential Risk (x-risk)

Scenarios where advanced AI could contribute to outcomes that permanently curtail humanity's long-term potential — ranging from AI-enabled totalitarianism to scenarios where misaligned superintelligent AI systems act contrary to human survival. Once considered fringe, x-risk concerns are now endorsed by leading AI researchers and policymakers.

07

Data Sovereignty & PDPA

The principle that individuals and nations have rights over data generated within their jurisdictions. For South Asia, localisation mandates and enforceable data protection laws (Sri Lanka PDPA, India DPDP) are central governance tools — though implementation gaps remain substantial.

08

Shadow AI & Governance Gaps

The use of AI tools within organisations without formal approval, oversight, or governance structures. Shadow AI mirrors Shadow IT but with amplified risk — including data leakage, unreviewed model outputs informing decisions, and regulatory non-compliance — particularly in sectors like finance, law, and healthcare.
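Detecting Shadow AI often begins with comparing observed tool usage against a formally approved registry. A minimal sketch, assuming usage records have already been parsed from proxy or expense logs; the tool names and registry are hypothetical:

```python
APPROVED_AI_TOOLS = {"copilot-enterprise", "internal-llm"}  # hypothetical registry

def find_shadow_ai(usage_log):
    """usage_log: (user, tool) pairs, e.g. parsed from network proxy logs.
    Returns tools observed in use that were never formally approved."""
    return sorted({tool for _, tool in usage_log} - APPROVED_AI_TOOLS)

# Invented usage records.
log = [("alice", "copilot-enterprise"),
       ("bob", "free-chatbot"),
       ("carol", "free-chatbot")]
print(find_shadow_ai(log))  # ['free-chatbot']
```

In practice the hard part is populating the usage log, not the set difference; governance frameworks pair detection like this with an approval path so flagged tools can be reviewed rather than simply blocked.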

Essential Reading & Resources

A curated library of books, papers, organisations, and tools for anyone wanting to go deeper into AI governance and safety.

📕

The Alignment Problem — Brian Christian

A deeply reported investigation into the challenges of making AI systems do what we actually want. Accessible and essential for non-technical policymakers.

2020 · W. W. Norton & Company
📗

Superintelligence — Nick Bostrom

The foundational text on existential risk from advanced AI. Introduces the concept of an intelligence explosion and analyses paths to catastrophic misalignment.

2014 · Oxford University Press
📘

Human Compatible — Stuart Russell

Proposes a new paradigm for AI — machines that are uncertain about human preferences and defer to humans. A must-read for AI safety researchers.

2019 · Viking
📙

Atlas of AI — Kate Crawford

Examines the material, political, and social costs of artificial intelligence — from mines to data centres to labour exploitation. Critical for a Global South perspective.

2021 · Yale University Press
📓

Weapons of Math Destruction — Cathy O'Neil

An accessible analysis of how opaque, unaccountable algorithms affect real lives — in lending, criminal justice, and employment. Particularly relevant for emerging economies.

2016 · Crown Publishers
📒

The Coming Wave — Mustafa Suleyman

DeepMind co-founder's analysis of the transformative and dangerous wave of AI and synthetic biology — and an argument for containment. Highly relevant for policymakers.

2023 · Crown
📄

Concrete Problems in AI Safety

Amodei et al. (2016) — landmark paper defining practical near-term AI safety challenges including reward hacking, safe exploration, and distributional shift.

arXiv:1606.06565
📄

AI Index Report — Stanford HAI

Annual comprehensive report tracking AI capabilities, investment, policy, and governance worldwide. Includes data on AI adoption across Asia and the Global South.

Published Annually · Stanford University
📄

Five Hard National Security Problems — RAND

Outlines challenges of highly capable AI for national security: strategic advantage, WMD escalation, AI-enabled deception, autonomous systems, and internal misuse.

RAND Corporation · PEA3691-4
📄

UNESCO Recommendation on the Ethics of AI

The first global normative framework adopted by 193 countries. Covers AI lifecycle ethics, data governance, and cultural rights — critical reading for South Asian policymakers.

UNESCO · 2021
📄

EU AI Act — Full Text

The world's first comprehensive AI law. Classifies AI by risk level and imposes obligations on providers and deployers. Sets a global benchmark that influences South Asian legislation.

European Union · 2024
📄

Sri Lanka PDPA: A Critical Analysis

Analysis of Sri Lanka's Personal Data Protection Act (No. 9 of 2022) and its implications for AI systems processing personal data — gaps, strengths, and reform priorities.

AI Governance LK · 2024
🏛️

Centre for AI Safety (CAIS)

Leading research organisation working on reducing societal-scale AI risks, including the landmark statement on AI catastrophic risk signed by hundreds of researchers.

safe.ai
🏛️

MBZUAI — Mohamed bin Zayed University of AI

World's first graduate-level AI university, based in Abu Dhabi. Growing contributor to responsible AI research relevant to the Global South and Arab world.

mbzuai.ac.ae
🏛️

Anthropic

AI safety company behind Claude. Publicly committed to developing AI safely and publishes research on Constitutional AI, scalable oversight, and interpretability.

anthropic.com
🏛️

The Alan Turing Institute

UK's national institute for data science and AI. Produces policy-relevant research on AI ethics, governance, and fairness, with strong international outreach programmes.

turing.ac.uk
🏛️

AI Now Institute — NYU

Interdisciplinary research centre focusing on AI's social implications — power, labour, bias, and accountability. Strong emphasis on policy-relevant research and advocacy.

ainowinstitute.org
🏛️

ERA Fellowship

Existential Risk Alliance supports 10-week research fellowships focused on catastrophic risks from advanced AI, covering technical safety, governance, and policy.

erafellowship.org
🛠️

AI Risk Repository — MIT

Comprehensive database of AI risks catalogued from academic and industry sources. Useful for systematic risk assessment and governance gap analysis.

airisk.mit.edu
🛠️

NIST AI Risk Management Framework

Voluntary framework to better manage risks posed to individuals, organisations, and society from AI. Widely adopted as a baseline for AI governance maturity assessment.

nist.gov/artificial-intelligence
🛠️

Model Cards & Datasheets for Datasets

Standardised documentation templates for AI models and training datasets that promote transparency about intended use, performance characteristics, and limitations.

Google Research · 2018/2019
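A model card can be as simple as a structured record published alongside the model. A sketch using a Python dataclass; the field names paraphrase a subset of the published template, and all values are hypothetical:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelCard:
    """Illustrative subset of model-card headings, not the full template."""
    model_name: str
    intended_use: str
    out_of_scope_uses: list
    training_data: str
    evaluation_metrics: dict
    known_limitations: list

# Hypothetical card for an invented credit model.
card = ModelCard(
    model_name="loan-approval-v1",
    intended_use="Pre-screening of consumer loan applications with human review",
    out_of_scope_uses=["Fully automated rejection", "Employment decisions"],
    training_data="2018-2023 applications; Sinhala and Tamil names transliterated",
    evaluation_metrics={"auc": 0.81, "disparate_impact_ratio": 0.86},
    known_limitations=["Under-represents rural applicants"],
)
print(json.dumps(asdict(card), indent=2))
```

Even this thin structure forces the questions regulators care about: who may use the model, on what data it was trained, and where it is known to fail.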
🛠️

AI Fairness 360 (AIF360)

Open-source toolkit from IBM to examine, report, and mitigate discrimination and bias in ML models throughout the AI application lifecycle.

IBM Research · github.com/Trusted-AI

Latest Articles & Insights

In-depth analysis at the intersection of AI governance, South Asian policy, and responsible technology deployment.

🏛️
Policy Analysis

Sri Lanka's PDPA and Its Implications for AI-Powered Systems

A critical review of Sri Lanka's Personal Data Protection Act (No. 9 of 2022) and how it applies to AI systems that process personal data — gaps, obligations, and reform priorities.

March 2025
🎭
AI Safety

Deepfakes in Tamil: How AI Misinformation Threatens South Asian Communities

An exploration of AI-generated video and audio misinformation targeting Tamil-speaking populations — and how community education can serve as a first line of defence.

February 2025
☁️
Cloud & AI Governance

Generative AI on AWS: Governance Considerations for South Asian Enterprises

How organisations in Sri Lanka, India, and Pakistan can leverage AWS's generative AI services while building robust governance guardrails from day one.

January 2025
🌐
Global Governance

What the EU AI Act Means for Sri Lankan Technology Exporters

Sri Lankan tech companies serving EU clients must understand the EU AI Act's extraterritorial reach. This article unpacks the obligations, timelines, and compliance strategies.

December 2024
👻
Organisational AI

Shadow AI in South Asian Organisations: The Invisible Governance Challenge

Employees across the region are using unauthorised AI tools at scale. Without detection and governance frameworks, organisations face serious data and compliance exposure.

November 2024
🔬
Research

TamilMMLU: Building an AI Benchmark for Tamil Language Evaluation

Introducing TamilMMLU — a benchmark of 118 questions across 5 domains for evaluating large language models in Tamil. Why representation in benchmarking matters for South Asia.

October 2024
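Benchmark scoring of this kind reduces to exact-match accuracy over an answer key, ideally broken down per domain so weak areas are not hidden by the aggregate. A sketch over a hypothetical three-question slice (TamilMMLU itself spans 118 questions across 5 domains):

```python
def score_mcq(predictions, answer_key):
    """Exact-match accuracy over an answer key, overall and per domain.
    answer_key maps question id -> (domain, correct option)."""
    per_domain = {}
    correct = 0
    for qid, (domain, gold) in answer_key.items():
        hit = predictions.get(qid) == gold
        hits, total = per_domain.get(domain, (0, 0))
        per_domain[domain] = (hits + hit, total + 1)
        correct += hit
    overall = correct / len(answer_key)
    return overall, {d: h / t for d, (h, t) in per_domain.items()}

# Hypothetical slice; real benchmark items and model answers would go here.
answer_key = {"q1": ("history", "B"), "q2": ("history", "C"), "q3": ("science", "A")}
predictions = {"q1": "B", "q2": "A", "q3": "A"}
overall, by_domain = score_mcq(predictions, answer_key)
print(overall)    # 2/3
print(by_domain)  # {'history': 0.5, 'science': 1.0}
```

The per-domain breakdown is the point: a model can look competent overall while failing badly in exactly the domains that matter for a given deployment.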
Mohamed Nizzad
The AI Governance Translator
⚖️ Attorney-at-Law 🎓 LL.B · MBA · MSc IT ☁️ 5× AWS Certified 🌏 Sri Lanka · UAE 🔬 Research Supervisor 🏗️ AWS Community Builder

Mohamed Nizzad

"The AI Governance Translator" — Bridging Legal, Technical & Business Domains

I am an AI & Data Governance Specialist and Independent Consultant with a unique cross-disciplinary profile that bridges law, technology, and business strategy. My work is built on a simple conviction: AI governance is not a compliance checkbox — it is fundamentally about organisational culture and societal values.

As an Attorney-at-Law of the Supreme Court of Sri Lanka with an LL.B, MBA, and MSc in Information Technology, I bring a perspective to AI governance that few practitioners possess: the ability to translate between technical realities, legal obligations, and business imperatives simultaneously — hence my positioning as "The AI Governance Translator."

I hold five AWS certifications including the AWS Certified Generative AI Developer – Professional (Early Adopter, among the first globally to earn this certification), AWS Solutions Architect Professional, AWS Machine Learning Specialty, and others. I serve as a Research Supervisor for MSc Data Science (Responsible AI) at Coventry University, and am an AWS Community Builder in the Machine Learning category — the only one from Sri Lanka at the time of acceptance.

I co-founded iExam (Exam.lk), an EdTech platform that scaled to over 4 million users across South Asia, serving as CTO. I have completed over 400 consulting projects across 30+ countries with a 4.9/5 rating on Freelancer.com. My research contributions include the TamilMMLU benchmark for evaluating large language models in Tamil, and reviewing 138 papers for ArabicNLP 2025.

I attended the MBZUAI Machine Learning Winter School 2026 as one of 60 participants selected from over 2,400 applicants, and participated as an All Builders Welcome Grant recipient at AWS re:Invent 2024 in Las Vegas. I am multilingual in English, Arabic, Tamil, and Sinhala — enabling me to communicate AI governance concepts across South Asian and Middle Eastern communities.

AI Governance LK is my contribution to building South Asia's capacity to engage meaningfully with global AI governance debates — ensuring that the region's voices, values, and contexts are represented in the frameworks that will shape our collective AI future.

400+
Consulting Projects Completed Across 30+ Countries
4M+
Users on iExam (Exam.lk) Co-Founded Platform
5
AWS Certifications incl. Generative AI Developer – Professional
30+
Research Citations & 138 Papers Reviewed (ArabicNLP 2025)

AWS Certifications

🏆
AWS Certified Generative AI Developer – Professional
Amazon Web Services
Early Adopter
☁️
AWS Certified Solutions Architect – Professional
Amazon Web Services
🤖
AWS Certified Machine Learning – Specialty
Amazon Web Services
🔒
AWS Certified Security – Specialty
Amazon Web Services
📊
AWS Certified Data Analytics – Specialty
Amazon Web Services
⚠️

Content Disclaimer

Educational Purpose: All content on AI Governance LK is provided strictly for educational and informational purposes. It does not constitute legal advice, regulatory guidance, or professional consultation of any kind.


Accuracy: While every effort is made to ensure accuracy, the field of AI governance, safety, and policy evolves rapidly. Information may become outdated, and readers should verify all material against primary and authoritative sources before relying on it for any purpose.


No Liability: The founder and contributors of AI Governance LK disclaim any liability for decisions made on the basis of content published on this site. References to legislation, regulations, or standards are provided for informational context only and should not be interpreted as definitive legal interpretations.


Third-Party Sources: This site references third-party reports, organisations, and publications. Such references do not constitute endorsement. Readers are encouraged to evaluate all sources critically and independently.


AI-Assisted Content: Some content on this site may be drafted with the assistance of AI writing tools. All such content is reviewed and edited by the founder for accuracy, balance, and appropriateness prior to publication.

Connect & Collaborate

Whether you are a policymaker, researcher, technologist, or student interested in AI governance, responsible AI, or the South Asian policy landscape — I welcome meaningful conversations and collaboration.

Research Supervisor · Coventry University
MSc Data Science (Responsible AI) · Supervising graduate research in AI ethics, data governance, and responsible ML deployment.

Send a Message

I typically respond within 2–3 business days.