Complexity, uncertainty and the Safety of ML - 42nd International Conference on Computer Safety, Reliability and Security
Conference paper, Year: 2023

Complexity, uncertainty and the Safety of ML

Simon Burton
  • Role: Author
  • PersonId : 1278299
Benjamin Herd
  • Role: Author
  • PersonId : 1278300

Abstract

There is currently much debate regarding whether applications based on Machine Learning (ML) can be made demonstrably safe. We assert that our ability to argue the safety of ML-based functions depends on the complexity of the function's task and environment, the observations (training and test data) used to develop the function, and the complexity of the ML models. Our inability to adequately address this complexity inevitably leads to uncertainties in the specification of the safety requirements, the performance of the ML models, and our assurance argument itself. By understanding each of these dimensions as a continuum, can we better judge what level of safety can be achieved for a particular ML-based function?
Main file

SAFECOMP_2023_paper_3459.pdf (267.24 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04191756, version 1 (30-08-2023)

Identifiers

  • HAL Id: hal-04191756, version 1

Cite

Simon Burton, Benjamin Herd. Complexity, uncertainty and the Safety of ML. SAFECOMP 2023, Position Paper, Sep 2023, Toulouse, France. ⟨hal-04191756⟩

Collections

LAAS SAFECOMP2023
77 Views
97 Downloads
