Dec 3, 2023: Workshop location: San Francisco Marriott Marquis, Foothill G2
Aug 30, 2023: Accepted papers and keynotes posted here!
Important Dates
Camera-ready copies due: September 4, 2023 (updated)
Workshop date: December 4, 2023
SE4SafeML: Dependability and Trustworthiness of Safety-Critical Systems with Machine Learned Components
Colocated with FSE 2023
San Francisco, California, United States, December 4, 2023
Location: San Francisco Marriott Marquis, Foothill G2
Overview
Machine learning components are becoming an intrinsic part of safety-critical systems, from driver-assistance vehicles to medical imaging devices. Since undesired behaviours in such systems can lead to fatal accidents, the dependability and trustworthiness of such systems, including qualities such as reliability and integrity, are paramount for their broad and safe adoption. However, contrary to traditional system development, we are ill-equipped to ensure the dependability of systems with learned components. For example, robustness testing of ML components may apply targeted noise to inputs (see the sketch below), but that type of noise may never be encountered in a deployment context. Meanwhile, environments with unexpected features may affect learned components in unsafe ways.
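To make the robustness-testing example above concrete, here is a minimal Python sketch of noise-based robustness testing. The classifier, noise level, and input are hypothetical placeholders, not part of the workshop materials; and, as noted above, Gaussian perturbations of this kind may not reflect what the component encounters in deployment.

    import numpy as np

    rng = np.random.default_rng(0)

    def classify(x):
        # Stand-in classifier (hypothetical): predicts 1 if the mean feature value is positive.
        return int(x.mean() > 0.0)

    def prediction_stability(x, sigma=0.1, trials=100):
        # Fraction of noise-perturbed copies of x whose prediction matches the clean prediction.
        clean = classify(x)
        matches = sum(classify(x + rng.normal(0.0, sigma, size=x.shape)) == clean
                      for _ in range(trials))
        return matches / trials

    sample = rng.normal(0.5, 1.0, size=16)  # one hypothetical input vector
    print(f"stability under Gaussian noise: {prediction_stability(sample):.2f}")

A score near 1.0 only says the prediction is stable under this particular synthetic noise; it says nothing about perturbations drawn from the actual deployment distribution, which is precisely the gap the overview highlights.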
Goal
ML components are intrinsically different from traditional software in many ways: (1) they lack precise requirements (relying instead on proxies); (2) they lack accuracy; (3) they depend on data with multiple sources of provenance; (4) they rely on architectural considerations driven by the capacity to efficiently achieve accuracy rather than by dependability; (5) they are implemented through an optimization process that is fraught with nondeterminism and has many degrees of freedom with subtle dependencies; and (6) their performance does not come with a human-consumable explanation.
These differences create development challenges that are crucial to the deployment of safety-critical systems with ML components and that this workshop will discuss.
The divergence of ML and SE development practices becomes an SE issue when ML components are incorporated into a system. ML components are inherently vulnerable, and features of safety-critical systems, such as fallback routines in the event of a failure, require an understanding of those vulnerabilities in order to be properly created and deployed. Because these features are often written during system development, the vulnerabilities of ML become, at that point, the responsibility of the system.
Topics of interest
Topics include, but are not limited to:
Safety requirements and specification of ML components
Model-based safety analysis of ML components
Architectures to manage scale, uncertainty, and safety of ML components
Dataset development for ML-based safety-critical components
Verification and validation methods of ML components
Trust and trustworthiness of ML-based safety-critical systems
Privacy analysis of ML-based safety-critical systems
Explainability of ML-based safety-critical components
Safety and security guidelines, standards, and certification of systems with ML components
Hazard analysis of ML-based safety-critical systems
Safety and security assurance cases of ML-based systems
Risk assessment and reduction of ML-based safety-critical systems
ML safety education and awareness
Establishing a baseline of techniques for ensuring dependability of ML-enabled systems