Information Security Manager · PhD Researcher, Newcastle University
Founder, Buniah Association · CISO-level Practitioner
"Accountability cannot be automated."
Riyadh, Saudi Arabia
Information Security Manager at Khebra
Founding President, Buniah Association
I sit at the intersection of AI governance, cybersecurity, and public sector accountability. My work spans hands-on security engineering, academic research, and institution-building — with a consistent thread: how do we make intelligent systems answerable for their decisions?
As Information Security Manager at Khebra, a regulated Saudi financial institution, I operate daily within NCA ECC, SAMA CSF, and PDPL frameworks. My research goes a layer deeper — examining how AI systems used in critical decision-making can be audited, governed, and held accountable at the institutional level.
Through Buniah Association for AI and Data Analytics, I am building Saudi Arabia's civic infrastructure for responsible AI — working with youth, regulators, and practitioners to close the gap between policy aspiration and operational reality.
AI governance in cybersecurity incident response within Saudi public sector organisations. Mixed methods: systematic literature review, semi-structured interviews, and survey research.
AI governance, data ethics, and digital capacity building for Saudi institutions and youth.
NCA ECC, SAMA CSF, and PDPL compliance. GRC intelligence and cybersecurity governance.
Thesis: Peer-to-Peer Federated Learning for Intrusion Detection in Autonomous Vehicles. Novel SR-1CNN architecture achieving 99.97% accuracy.
Digital transformation for the government and private sectors. Products include the Midad executive KPI dashboard.
Provided application support across enterprise systems within Safari Group, gaining hands-on experience in IT operations and systems management.
Bridging deep technical research with governance frameworks relevant to regulated environments.
Proposed a fully decentralised federated learning architecture enabling autonomous vehicles to collaboratively detect cyber-attacks without sharing raw data. Introduced the novel SR-1CNN model, achieving 99.97% accuracy on the Car Hacking dataset and 94.39% on NSL-KDD. Tested under both IID and non-IID data distributions across 3-, 4-, and 6-vehicle topologies.
Examining how AI systems used in incident response within Saudi government organisations can be governed, audited, and held accountable. Mixed methods: systematic literature review, semi-structured interviews, and survey research.
A comprehensive framework for evaluating AI governance readiness across Saudi public sector organisations. Ten dimensions, five maturity levels, mapped to SDAIA, NCA ECC, NDMO, PDPL, and ISO/IEC 42001.
Proposes an operational framework for governing AI systems within cybersecurity incident response pipelines. Addresses accountability gaps when automated systems make consequential security decisions.
From GRC intelligence dashboards to AI accountability systems — practical instruments for regulated environments.
A practical audit instrument for documenting, tracing, and reviewing AI-driven decisions within organisational workflows. Built as both a standalone tool and a potential PhD research instrument. Postgres backend via Netlify Functions.
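One common way an audit instrument can make AI-driven decisions traceable and reviewable is to hash-chain each recorded decision, so later tampering with earlier entries is detectable. The sketch below is illustrative only; it assumes nothing about the actual tool's schema or backend, and every name in it is hypothetical.

```python
import hashlib
import json
import time

def append_decision(log, decision):
    """Append an AI-decision record to a hash-chained audit log.
    Each entry commits to the previous entry's hash, so editing
    any earlier record breaks the chain."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "timestamp": time.time(),
        "decision": decision,      # e.g. model id, inputs summary, action taken
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log):
    """Recompute every entry's hash and check the chain links."""
    prev_hash = "0" * 64
    for entry in log:
        if entry["prev_hash"] != prev_hash:
            return False
        body = {k: v for k, v in entry.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True
```

In a real deployment the same chaining logic would live server-side (here, that would mean inside the Netlify Functions layer in front of Postgres), so clients can append but never rewrite history.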
An AI governance readiness assessment platform for Saudi public sector organisations. Grounded in the GARAF framework with ten dimensions and five maturity levels. Mapped to SDAIA, NCA ECC, NDMO, PDPL, and ISO/IEC 42001.
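As a rough illustration of how a dimensions-by-maturity-levels assessment can be scored, the sketch below aggregates per-dimension levels (1 to 5) into an overall band. The dimension names, level labels, and aggregation rule here are assumptions for illustration, not GARAF's published method.

```python
from statistics import mean

# Illustrative level labels; GARAF's actual labels may differ.
LEVELS = {1: "Initial", 2: "Developing", 3: "Defined",
          4: "Managed", 5: "Optimising"}

def readiness(scores):
    """Aggregate per-dimension maturity levels (1-5) into an
    overall readiness level, reported as the floored mean so a
    few strong dimensions cannot round the result upward."""
    if not all(1 <= s <= 5 for s in scores.values()):
        raise ValueError("maturity levels must be between 1 and 5")
    overall = int(mean(scores.values()))
    return overall, LEVELS[overall]
```

For example, an organisation scoring 3, 2, 4, and 3 across four (hypothetical) dimensions lands at overall level 3, "Defined".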
An internal compliance intelligence dashboard targeting NCA ECC and SAMA CSF frameworks. Provides real-time visibility into control status, risk exposure, and regulatory posture for a regulated Saudi financial institution.
An executive-level KPI and operational intelligence platform developed under Orevix Technologies. Two verticals: government performance tracking and debt collection operations. Built for decision-makers who need signal over noise.
Open-source implementation of the MSc thesis framework. SR-1CNN model with ADMM-based asynchronous peer-to-peer federated learning across vehicle mesh networks. Achieves near-perfect detection with zero raw data sharing.
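The serverless coordination at the heart of peer-to-peer federated learning can be illustrated with a minimal sketch. This is not the thesis's ADMM update or the SR-1CNN model; it shows only the gossip-averaging idea that lets peers converge on shared parameters while exchanging model weights rather than raw data (function name and topology are illustrative).

```python
import random

def gossip_average(weights, rounds=2000, seed=0):
    """Decentralised gossip averaging: each round, one random pair
    of peers averages its parameter vectors directly, with no
    central server. All peers converge toward the global mean
    while raw training data never leaves any peer."""
    rng = random.Random(seed)
    w = [list(p) for p in weights]  # copy each peer's parameters
    n = len(w)
    for _ in range(rounds):
        i, j = rng.sample(range(n), 2)  # pick two distinct peers
        for k in range(len(w[i])):
            avg = (w[i][k] + w[j][k]) / 2
            w[i][k] = w[j][k] = avg
    return w
```

Because pairwise averaging preserves the sum of the parameters, every peer drifts toward the network-wide mean; the ADMM-based asynchronous scheme in the thesis replaces this naive average with consensus-constrained local optimisation steps.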
A Firebase-connected correspondence and document management system built for Buniah Association. Deployed on Netlify with a bilingual web portal prototype. Streamlines governance operations for the non-profit.
A licensed Saudi non-profit dedicated to AI governance, data ethics, and digital capacity building for institutions, practitioners, and youth.
Founded and led by Bassam Alotaibi, Buniah exists to close the gap between AI policy aspiration and operational reality in Saudi Arabia. It works across three tracks: governance standards, practitioner capacity, and youth accountability literacy.
Strategic budget: 18.5M SAR over 2026–2030. Institutional partnerships with SDAIA. Home of Saudi Arabia's first AI accountability programme for youth — Dhameer (ضمير).
Developing Saudi-contextualised AI governance frameworks aligned with national regulations (NCA, NDMO, PDPL) and international standards (ISO/IEC 42001, OECD AI Principles).
Training programmes, workshops, and the Buniah Academy LMS for security professionals, data practitioners, and public sector employees working with AI systems.
Dhameer and related initiatives equipping Saudi youth with the conceptual and practical tools to engage critically with AI as citizens, creators, and future decision-makers.
From international financial technology conferences to regulatory engagement and academic forums.
Presented on the intersection of AI adoption and cybersecurity governance in regulated financial environments. Addressed accountability gaps in AI-driven security operations for a global fintech audience.
Engaged with SDAIA on AI governance readiness frameworks. Presented findings from GovAI Pulse and the GARAF framework as practical instruments for Saudi public sector AI accountability.
Pitched Saudi Arabia's first youth-focused AI accountability programme to grant evaluators. Presented a bilingual programme design grounded in ethical AI literacy and civic responsibility.
Presented MSc thesis research on peer-to-peer federated learning for autonomous vehicle intrusion detection. Demonstrated the SR-1CNN architecture and its implications for privacy-preserving AI security.
Whether you are a regulator thinking about AI governance frameworks, a researcher working at the intersection of AI and security, or an organisation looking to build accountability into your AI systems — I would like to hear from you.
I am particularly interested in conversations around AI governance in the Saudi public sector, federated learning for privacy-preserving security, and building civil society capacity for responsible AI.