Policy Consultation for the Office of the Prosecutor of the International Criminal Court on Crimes Under the Rome Statute policy consultation
This policy consultation advances targeted recommendations to strengthen the International Criminal Court's Draft Policy on Cyber-Enabled Crimes, enhancing its capacity to detect corruption, prevent organised-crime infiltration, protect digital evidence, and address cross-border cyber operations, including AI-enabled disinformation, surveillance, and indirect cyber facilitation, thereby improving the Court's responsiveness to evolving technological, jurisdictional, and policy-integrity risks.
The European Union's Approach to Artificial Intelligence Governance and Regulation literature review
The European Union has emerged as a global leader in artificial intelligence governance, pursuing a human-centric and ethical regulatory approach that seeks to balance innovation with the protection of fundamental rights, and this literature review examines the EU’s strategy, regulatory mechanisms, ethical considerations, and its potential “Brussels Effect” on the global AI ecosystem.
Sovereign Risk and Governance Accountability Gap Analysis for Critical Infrastructure Under Statutory National Security Obligations: Security of Critical Infrastructure Act 2018 (SOCI Act) Reforms 2026 Master's research focus
Examines proportionality frameworks for synchronising corporate board governance with SOCI Act s 42B obligations, and analyses the public-private justice accountability gap for critical infrastructure operators under statutory national security obligations.
Decision Making in the Cyber Environment: Strategic Intelligence for Operational Resiliency strategic cyber intelligence cycle
A systematic and continuous governance tool for the strategic cyber intelligence cycle, analysing the means and motives of threat actors and the digital environment in which operations occur. It integrates ethical, legal, and professional trust considerations to ensure activities remain lawful, proportionate, and accountable. By identifying data, intelligence-feed, and supply chain risks, the cycle strengthens strategic resilience and operational readiness while maintaining alliance confidence and integrity, and provides a tool for decision-making under ambiguity, risk weighting, and assurance for high-value funding decisions. Follows on from the AML/CTF Brief and informs the SOCI research area.
Addressing and Countering Digital Money Laundering and Terrorism Financing in Australia policy proposal
This proposal examines Australian policy interventions addressing digital money laundering and terrorism financing, focusing on unexplained wealth, the FATF Travel Rule, and remittance providers, and finds that while asset confiscation measures have been effective in disrupting organised crime, persistent compliance gaps in the remittance sector and capacity constraints at AUSTRAC continue to undermine efforts to address emerging risks posed by virtual asset service providers. Recommendations include enhancing AUSTRAC Suspicious Matter Reporting through AI-enabled dynamic monitoring.
The Imperative of Adaptive AI Governance: Integrating Compute Control and Liability Frameworks literature review
As AI models demand increasingly specialised computational resources, compute governance emerges as a uniquely effective proactive policy lever, offering regulators a tangible, detectable, and quantifiable point of intervention. By implementing compute thresholds, establishing an international AI chip registry, and adopting privacy-preserving monitoring techniques such as differential privacy and federated learning, policymakers can directly regulate the development and deployment of advanced AI, enhancing oversight, preventing misuse, and balancing innovation with ethical and security considerations. Preliminary research informs the SOCI research area.
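The compute-threshold trigger described above can be sketched in code. This is a minimal illustration only: the 1e26 FLOP threshold, registry fields, and figures are hypothetical placeholders, not values drawn from any actual regulatory instrument.

```python
from dataclasses import dataclass

REPORTING_THRESHOLD_FLOP = 1e26  # hypothetical regulatory trigger


@dataclass
class TrainingRun:
    developer: str
    chip_count: int
    flop_per_chip_second: float
    duration_seconds: float

    @property
    def total_flop(self) -> float:
        # Total compute = chips x per-chip throughput x wall-clock time
        return self.chip_count * self.flop_per_chip_second * self.duration_seconds


def requires_registry_report(run: TrainingRun) -> bool:
    """True when a run crosses the (hypothetical) reporting threshold."""
    return run.total_flop >= REPORTING_THRESHOLD_FLOP


# A large frontier-scale run: 25,000 accelerators for 90 days
run = TrainingRun("ExampleLab", chip_count=25_000,
                  flop_per_chip_second=1e15,
                  duration_seconds=90 * 24 * 3600)
print(requires_registry_report(run))  # → True
```

The appeal of this lever, as the review notes, is that every input is physically measurable at the hardware level, which is what makes the intervention point detectable and quantifiable.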
Integrating Cybersecurity Frameworks for Critical Infrastructure Resilience literature review
Critical infrastructure operators face challenges in navigating multiple cybersecurity frameworks (Essential 8, NIST, AESCSF), understanding which controls mitigate specific hazards, and adapting to evolving requirements during the reporting period. Best practice requires linking frameworks to threat types, embedding continuous improvement cycles, and sharing lessons learned from SOCI breaches to strengthen resilience. Intelligence-style MOUs offer a pathway to standardise nomenclature, coordinate threat reporting, and enable timely information sharing between operators and state regulators, leveraging existing investigative powers where necessary for emerging risks. Preliminary research informs SOCI research area.
Governing AI Through Compute Control - A Focus on Global Registry and Disablement policy analysis
This analysis explores the emerging discourse around governing artificial intelligence (AI) through controlling access to computational power, focusing on the concept of a global AI chip registry and the capability to remotely disable AI chips.
Understanding AI Deception Risks and Informing Policy policy analysis
This article analyses AI deception including strategic misrepresentation, sycophancy, deepfakes, deceptive alignment and their legal implications across fraud, consumer protection, and criminal justice applications. Operationalising Safe and Responsible AI Framework Principle 3 (Safety), it proposes human oversight mandates for high-risk deception-capable systems directly addressing Office of the Australian Information Commissioner Generative AI Guidance deepfake/disinformation risks through explainable risk scoring validated against experienced police assessments. By bridging technical deception typology with national regulatory standards, the analysis equips anti-corruption agencies with governance frameworks for deploying AI in misconduct detection while mitigating deception-induced false positives that undermine investigative integrity.
Bridging Global AI Ethics Frameworks with Proportional Surveillance Governance literature review
A review of leading international AI governance standards (IEEE Ethical Standards (2020), UN AI for Good (2021), OECD AI Principles (2020), and the EU AI Act (2021)) addressing the tension between AI-enhanced regulatory visibility and surveillance proportionality. While these frameworks universally mandate human oversight, transparency, and risk-based regulation, their application to enforcement contexts demands nuanced implementation. The analysis proposes a proportionate governance model integrating differential privacy, federated learning, and tiered compute reporting that satisfies EU AI Act high-risk system requirements while aligning with OECD human-centred values. Operationalised through human-in-loop validation mechanisms, this approach ensures AI delivers legitimate regulatory benefits without eroding privacy rights or democratic accountability, providing anti-corruption agencies with legally defensible intelligence capabilities.
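One of the privacy-preserving techniques the review integrates, differential privacy, can be sketched with the standard Laplace mechanism applied to a counting query. The epsilon value, query, and seed below are illustrative; a real deployment would need calibrated privacy budgets and auditing.

```python
import math
import random


def dp_count(records, predicate, epsilon: float = 1.0) -> float:
    """Differentially private count via the Laplace mechanism.

    A counting query has sensitivity 1, so the noise scale is 1/epsilon:
    smaller epsilon means stronger privacy and noisier answers.
    """
    true_count = sum(1 for r in records if predicate(r))
    # Inverse-CDF sampling of a Laplace(0, 1/epsilon) variate
    u = random.random() - 0.5  # u in [-0.5, 0.5)
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise


random.seed(0)  # deterministic seed for demonstration only
# True count is 100 (multiples of 10 below 1000); the answer is close but noisy
noisy = dp_count(range(1000), lambda x: x % 10 == 0, epsilon=1.0)
```

The design point matches the review's argument: the regulator obtains a statistically useful aggregate (the approximate count) without any individual record being exactly recoverable from the released figure.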
AI Tension Between Mitigating Risks and Fostering Innovation literature review
This literature review synthesises academic and policy perspectives on artificial intelligence governance, examining the tension between risk mitigation and innovation, the strategic roles of states, corporations, and individuals in the global AI race, and the implications of open-source development, power concentration, and surveillance, while arguing for a holistic and inclusive governance approach capable of addressing high-stakes societal and humanitarian risks.
A Review of AI Impacts, Ethics, and Governance literature review
Artificial intelligence delivers transformative capabilities across sectors but introduces unprecedented deception risks (strategic misrepresentation, sycophancy, deepfake generation, and deceptive alignment) that demand explicit governance beyond generic ethical principles. This literature review dissects AI deception typologies, their manifestations in real-world systems (CICERO betrayal, AlphaStar feints), and proposes calibrated policy architectures distinguishing learned deception (explicit training) from emergent deception (unintended optimisation consequences or historical bias). By operationalising human oversight thresholds, explainable risk scoring, and mandatory validation against domain expertise, the analysis constructs defensible governance for high-stakes deployments.
The Complex Landscape of AI and Digital Governance literature review
Global digital platform regulation and AI governance constitute a complex and contested landscape in which powerful multinational corporations simultaneously influence policy development at state, national, and international levels, including human rights frameworks such as the ICC Rome Statute, while marketing and deploying AI and surveillance tools to the same stakeholders, creating inherent conflicts between commercial incentives, regulatory oversight, and ethical accountability. Rapid technological innovation outpaces regulation; divergent national strategies, from the EU's interventionist approach to the US's market-driven framework, challenge coordination; and emerging concepts of digital sovereignty and digital nationalism fragment the international ecosystem, highlighting the intricate interplay of technological scale, corporate power, jurisdictional divergence, ethical dilemmas, and geopolitical competition. Preliminary research for the Policy Consultation for the Office of the Prosecutor of the International Criminal Court on Crimes Under the Rome Statute.
Deception, Cybersecurity, and National Intelligence: Australian Defence Force Imperatives Argumentative Essay
Artificial intelligence is reshaping military operations but introduces unprecedented deception risks, strategic misrepresentation (CICERO), deepfake generation, sandbagging, and deceptive alignment, that demand explicit ADF governance beyond generic ethics. This analysis dissects AI deception typologies and their operational manifestations in Australian Defence contexts (TrapRadio, AUKUS ISR, Pacific information operations), proposing calibrated policy architectures distinguishing learned deception from emergent deception in high-stakes military deployments. By operationalising human oversight thresholds, explainable risk scoring, and mandatory ADF intelligence validation, aligned with the Defence AI and Autonomy Principles (2024) and the Method for Ethical AI in Defence (DSTG), the framework constructs defensible governance ensuring sovereign capability protection while maintaining strategic advantage against peer adversaries. Preliminary research for Decision Making in the Cyber Environment: Strategic Intelligence for Operational Resiliency.
Black Market AI Chips, Over-Governance Risks, and Security Control Implications literature review
This report analyses the current landscape of AI chip governance, highlighting the emergence of black markets for restricted chips, the potential dangers of over-regulation stifling innovation, and necessary security controls positioned as supply-chain resilience, sovereign risk mitigation, and long-term investment viability. Australia's high-performance compute import dependency exposes strategic vulnerabilities, alongside supply chain risks from TSMC production concentration and emerging market distortions driven by export controls. The Critical Minerals Strategy 2025 targets Lynas rare earths and copper processing to hedge these risks, securing upstream inputs for sovereign AI compute and TrapRadio systems amid TSMC dependency. Integrating NVIDIA Omniverse digital twin technology into government procurement supports sovereign AI capabilities for national security applications, enabling resilient operations amid geopolitical fragmentation. Preliminary research for Decision Making in the Cyber Environment: Strategic Intelligence for Operational Resiliency.
AI–Human Hybrid Detection of Police Corruption research proposal
This research proposes an AI–human hybrid framework prioritising police judgment for detecting historical police corruption, operationalised as an explainable AI (XAI) pipeline combining transformer‑based NLP models (BERT‑style encoders) with gradient‑boosted decision‑tree classifiers such as XGBoost/LightGBM. The study directly compares AI‑generated risk flags against experienced police officers' assessments in realistic operational settings, evaluating when and how AI insights enhance (rather than replace) human judgment on accuracy, interpretability, and governance. By requiring police officers to validate, contest or override AI‑surfaced patterns, the project informs transparent, accountable AI governance where human expertise remains sovereign for misconduct detection in Australian policing and public service. Insights will directly support refinement of policy under the Crime and Corruption Act 2001 (Qld), ensuring AI tools remain proportionate intelligence aids subordinate to specialist investigative powers and police judgment.
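The human-in-the-loop review step central to this proposal can be sketched as follows. This is a hedged illustration only: the scoring model is a stand-in for the BERT + XGBoost pipeline the proposal describes, and the threshold and field names are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

REVIEW_THRESHOLD = 0.7  # illustrative triage cut-off, not a validated value


@dataclass
class CaseDecision:
    case_id: str
    ai_risk_score: float                        # output of the (assumed) XAI pipeline
    officer_assessment: Optional[bool] = None   # None until an officer reviews

    def final_flag(self) -> bool:
        """Officer judgment is sovereign: a recorded assessment overrides
        the AI score. Absent review, only the AI triage threshold applies,
        and the case is merely surfaced, never auto-actioned."""
        if self.officer_assessment is not None:
            return self.officer_assessment
        return self.ai_risk_score >= REVIEW_THRESHOLD


# AI surfaces the case, but the officer's contest/override decides the outcome
surfaced = CaseDecision("C-001", ai_risk_score=0.91)
overridden = CaseDecision("C-002", ai_risk_score=0.91, officer_assessment=False)
print(surfaced.final_flag(), overridden.final_flag())  # → True False
```

The ordering in `final_flag` encodes the proposal's governance claim directly: the AI output is a proportionate intelligence aid, subordinate to the officer's validation, contest, or override.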
Risk Management in Catastrophic Risks Related to Artificial Intelligence policy proposal
While artificial intelligence offers significant societal benefits, it also introduces catastrophic risks that existing regulatory frameworks are ill-equipped to manage due to their reactive design and reliance on outdated cost–benefit models; this proposal argues for the establishment of a Catastrophic Risk Review process to systematically identify, assess, and mitigate AI-related risks, ranging from autonomous weapons and mass surveillance to uncontrollable systems and socioeconomic disruption, through proactive, interdisciplinary, and scenario-based governance. Preliminary research for Policy Consultation for The Office of the Prosecutor of the International Criminal Court on Crimes Under the Rome Statute.
Integrating Cybersecurity Frameworks and Legislative Levers for Critical Infrastructure Resilience policy evaluation
Research exploring how SOCI reporting and critical infrastructure oversight can be integrated with multi-agency intelligence frameworks to detect and prevent transnational organised crime and board-level corporate corruption. It examines the potential application of coercive powers and asset confiscation to compel evidence and enforce compliance, including novel approaches targeting financial and operational risks at the board level. By evaluating these mechanisms within a coordinated, intelligence-driven model, the study aims to inform future policy development, adaptive governance, and rapid threat response across Australia’s critical infrastructure landscape. Preliminary research informs SOCI research area.
AI-Human Hybrid Detection of Police Corruption: Empirical Methodology for Law Enforcement Oversight research proposal
Research proposal for a mixed-methods approach investigating the accuracy of AI-driven tools versus human oversight in detecting historical corruption in police misconduct investigations, to inform AI governance frameworks in Australian law enforcement.
AML/CTF Brief video of applied strategic and operational cyber intelligence cycles
Anti-Money Laundering (AML) and Counter-Terrorism Financing (CTF) intelligence brief for the New South Wales government, focusing on open-source (OSINT) and financial intelligence (FININT) to identify suspicious financial patterns and laundering techniques in NSW pubs and clubs. Electronic Gaming Machine (EGM) research informed Decision Making in the Cyber Environment.