Updated regularly with the most recent developments related to my research and me.
Medical-purpose software and Artificial Intelligence ('AI')-enabled technologies ('medical AI') raise important social, ethical, cultural, and regulatory challenges. To elucidate these challenges, we present the findings of a qualitative study undertaken to elicit public perspectives and expectations around medical AI adoption and related sociotechnical harm. Sociotechnical harm refers to any adverse implication, including but not limited to physical, psychological, social, and cultural impacts, experienced by a person or by broader society as a result of medical AI adoption. The work is intended to guide effective policy interventions to address, prioritise, and mitigate such harm.
Click here to access the paper ...
INSYTE is our classification framework for AI systems ranging from traditional to agentic. It is designed to support cross-stakeholder communication, to inform design, development, and deployment decisions, to facilitate safety engineering and assurance, to serve as a classification system for regulatory and/or certification purposes, and to help inform decisions about liability.
Click here to access the paper ...
AgileAMLAS tightly integrates Agile software engineering, Agile systems thinking, and ML engineering with our proven safety engineering methodology, AMLAS. This novel approach extends AMLAS by weaving software and ML engineering artefacts and processes together with safety engineering artefacts and processes, creating a truly through-life approach. This through-life development and assurance lifecycle is a crucial missing piece in the current literature.
Our goal with AgileAMLAS is to provide practical guidance to designers, developers, and safety practitioners. AgileAMLAS provides clear, step-by-step guidelines for developing and deploying ML for autonomous systems using DevOps and MLOps principles, and for generating compelling safety cases using the established guidance from AMLAS.
Click here to access the paper ...
Our work is developing robust safety assurance mechanisms and thorough validation processes to argue the reliability of these autonomous robots in dynamic, real-world environments. Our robots will adhere to stringent safety standards and regulatory frameworks to ensure risks are reduced as low as reasonably practicable (ALARP).
By developing a use case using our own on-site solar farm, we aim to demonstrate that we can safely use AMRs for accurate inspections, to trigger timely cleaning and maintenance to improve energy efficiency, to reduce fire risk, to predict where structural repairs are needed, and to extend the lifespans of panels.
Click here to find out more ...
AI-based robots and vehicles are expected to operate safely in complex and dynamic environments, even in the presence of component degradation. In such systems, perception relies on sensors such as cameras to capture environmental data, which is then processed by AI models to support decision-making. However, degradation in sensor performance directly impacts input data quality and can impair AI inference. Specifying safety requirements for all possible sensor degradation scenarios leads to unmanageable complexity and inevitable gaps. In this position paper, we present a novel framework that integrates camera noise factor identification with situation coverage analysis to systematically elicit robustness-related safety requirements for AI-based perception systems. We focus specifically on camera degradation in the automotive domain.
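To make the idea concrete, here is a minimal Python sketch of how camera noise factors and situation coverage might be combined; the noise factors, situations, and perceive() placeholder are illustrative assumptions for the sketch, not the framework's actual implementation.

# Illustrative sketch only: hypothetical camera noise factors applied across
# a small grid of situations and severities, recording what has been covered
# and where the placeholder perception model fails.
import itertools
import numpy as np

NOISE_FACTORS = {
    "gaussian_noise": lambda img, s: np.clip(img + np.random.normal(0, 25 * s, img.shape), 0, 255),
    "low_light":      lambda img, s: np.clip(img * (1.0 - 0.7 * s), 0, 255),
}
SITUATIONS = ["clear_road", "pedestrian_crossing", "lead_vehicle_braking"]
SEVERITIES = [0.0, 0.5, 1.0]

def perceive(image):
    """Placeholder for an AI perception model under test."""
    return image.mean() > 40  # stand-in 'detection' decision

covered = set()
failures = []
for situation, (name, degrade), severity in itertools.product(
        SITUATIONS, NOISE_FACTORS.items(), SEVERITIES):
    frame = np.full((64, 64), 120.0)          # stand-in camera frame
    detected = perceive(degrade(frame, severity))
    covered.add((situation, name, severity))  # record the tested combination
    if not detected:
        failures.append((situation, name, severity))

total = len(SITUATIONS) * len(NOISE_FACTORS) * len(SEVERITIES)
print(f"coverage: {len(covered)}/{total}, robustness failures: {len(failures)}")

Combinations where the placeholder model fails under degradation are the ones that would motivate robustness-related safety requirements.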
Click here to access paper ...
This paper presents a testing approach named SCALOFT for systematically assessing the safety of an autonomous aerial drone in a mine. SCALOFT provides a framework for developing diverse test cases, real-time monitoring of system behaviour, and detection of safety violations. Detected violations are then logged with unique identifiers for detailed analysis and future improvement. SCALOFT helps build a safety argument by monitoring situation coverage and calculating a final coverage measure.
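As a rough illustration of the coverage idea (not SCALOFT's actual data structures or algorithm), the Python sketch below logs test runs against a small set of hypothetical situation dimensions, records safety violations with unique identifiers, and computes a final coverage measure.

# Illustrative only: situation-coverage measure computed from logged test runs.
# The situation dimensions, safety threshold, and log format are assumptions.
from itertools import product
from uuid import uuid4

DIMENSIONS = {
    "tunnel_width": ["narrow", "wide"],
    "dust_level":   ["low", "high"],
    "obstacle":     ["none", "static", "moving"],
}
all_situations = set(product(*DIMENSIONS.values()))

observed = set()
violations = []

def log_run(situation, min_clearance_m):
    """Record one simulated test run and flag any safety violation."""
    observed.add(situation)
    if min_clearance_m < 0.5:  # assumed minimum-clearance safety threshold
        violations.append({"id": str(uuid4()), "situation": situation})

log_run(("narrow", "high", "moving"), min_clearance_m=0.3)
log_run(("wide", "low", "none"), min_clearance_m=2.1)

coverage = len(observed) / len(all_situations)
print(f"situation coverage: {coverage:.0%}, violations logged: {len(violations)}")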
Click here to access paper ...
Underground mines are extremely challenging for autonomous drones, as there is limited infrastructure to support the Simultaneous Localisation and Mapping (SLAM) the drone needs to navigate. For example, there is no Global Navigation Satellite System (GNSS) coverage, lighting is poor, and there are few distinguishing landmarks. Additionally, the physical environment is extremely harsh, affecting the reliability of the drone. This paper describes the impact of these challenges on designing for, and assuring, safety.
Click here to access paper ...
The three-dimensional swimming tracks of motile microorganisms can be used to identify their species, which holds promise for the rapid identification of bacterial pathogens. Digital holographic microscopy (DHM) is a well-established but computationally intensive method for obtaining three-dimensional cell tracks from video microscopy data. We accelerate the analysis by an order of magnitude, enabling its use in real time. This technique opens the possibility of rapid identification of bacterial pathogens in drinking water or clinical samples.
Click here to access article ...
Aerial drones are increasingly being considered as a valuable tool for inspection in safety-critical contexts. Nowhere is this more true than in mining operations, which present a dynamic and dangerous environment for human operators. Drones can be deployed in a number of contexts, including efficient surveying as well as search and rescue missions. Operating in these dynamic contexts is challenging, however, and requires the drone's control software to detect and adapt to conditions at run-time.
In this paper we describe a controller framework and simulation environment, and provide information on how a user might construct and evaluate their own controllers in the presence of disruptions at run-time.
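As an illustrative outline only (the class and method names below are assumptions rather than the framework's actual interface; see the artifact linked below for the real thing), a user-defined controller might be evaluated against run-time disruptions roughly as follows.

# Illustrative outline: a user-supplied controller run inside a simulation
# loop that injects random disruptions at run-time.
import random

class HoverController:
    """Example user controller: tries to hold a target altitude."""
    def __init__(self, target_altitude=5.0):
        self.target = target_altitude

    def step(self, altitude):
        # Simple proportional response to the altitude error.
        return 0.4 * (self.target - altitude)

def simulate(controller, steps=50, seed=0):
    rng = random.Random(seed)
    altitude = 0.0
    for _ in range(steps):
        altitude += controller.step(altitude)
        if rng.random() < 0.1:          # inject a disruption (e.g. a gust)
            altitude -= rng.uniform(0.5, 1.5)
    return altitude

final = simulate(HoverController())
print(f"final altitude after disruptions: {final:.2f} m")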
A virtual machine containing the artifact can be found here: Aloft GitHub repo