1.1 Trustworthy and Controllable Autonomy [Autonomy]

CARS introduce a new computing paradigm that differs from conventional systems in one important way: they must learn, adapt, and evolve with minimal or no supervision. A fundamental question, therefore, is what rules and principles should guide the evolution of CARS. In natural life forms, this is achieved via natural selection – a random trial-and-error method that, over time, ensures that only the fittest survive. That approach, however, may not be acceptable for man-made CARS, so alternative approaches to guide their evolution are necessary.

The key research goal is to ensure adherence to the “Do No Harm” principle.

This raises security-related questions in multiple research areas:

  • CARS Stability: A key requirement for CARS is stability. Like any well-designed control system, a CARS must not spiral into undesirable states (e.g., chain reactions that place the CARS in harmful states). The goal is to regulate autonomous behavior to keep it within acceptable bounds, and to detect and mitigate behavior that falls outside those bounds (a minimal sketch of such a bounds monitor follows this list).
  • CARS Compliance: How should a CARS behave, and how do we ensure that it will do so? The first challenge is how to formally specify the behavior of the CARS and how to ensure that the specification is consistent (i.e., not contradictory) and meets the expectations of regulators and users. The second challenge is how to show that the constructed CARS will behave and remain in compliance with the specification. What are appropriate bounds of autonomy?
  • CARS Accountability: CARS do not exist in a vacuum; they become part of our everyday social infrastructure. A CARS must therefore account for the human factors it affects; in particular, it must be held accountable to some human entity. How do we build CARS that support ethics, legal liability, and audit? How much responsibility should the CARS undertake, and how much the human in the loop? How can this be specified, validated, and enforced? A CARS, by definition, can make decisions about its own operation, so a human entity needs to be able to reconstruct and interpret this decision pathway to determine what happened and why.
  • CARS Risk Management: How do we quantify the risks of CARS and their impact? How do we decide whether those risks are acceptable?
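
To illustrate the stability requirement, the following minimal sketch shows a runtime monitor that checks observed state variables of a CARS node against configured bounds and escalates to a mitigation routine when a value drifts out of bounds. All names and thresholds (SafetyEnvelope, the altitude and speed bounds) are illustrative assumptions, not an existing interface.

    # Hypothetical sketch: a runtime "safety envelope" monitor for a CARS node.
    # All names and thresholds are illustrative assumptions, not an existing API.
    from dataclasses import dataclass

    @dataclass
    class SafetyEnvelope:
        name: str
        low: float   # lowest acceptable value
        high: float  # highest acceptable value

        def check(self, value: float) -> bool:
            """Return True if the observed value is within acceptable bounds."""
            return self.low <= value <= self.high

    def monitor_step(envelopes, observations, mitigate):
        """Check each observed state variable; trigger mitigation when out of bounds."""
        for env in envelopes:
            value = observations[env.name]
            if not env.check(value):
                # Out-of-bounds behavior detected: hand control to a mitigation
                # routine (e.g., fall back to a conservative controller).
                mitigate(env.name, value)

    # Example usage with made-up bounds for a drone's altitude and speed.
    envelopes = [SafetyEnvelope("altitude_m", 1.0, 120.0),
                 SafetyEnvelope("speed_mps", 0.0, 20.0)]
    observations = {"altitude_m": 135.2, "speed_mps": 12.0}
    monitor_step(envelopes, observations,
                 lambda name, v: print(f"MITIGATE: {name}={v} out of bounds"))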

 

1.2 Fair and safe collaboration tolerating failures and attacks [Collaboration and 5G]

This area of research addresses collaborating systems. Its goal is to tolerate misbehavior of individual systems within a group of collaborators. The state of the art in this area is to filter or block individual compromised messages, or to execute complex so-called “Byzantine agreement” protocols.

The research questions that we deem promising to pursue are:

  • How can we design protocols that tolerate failures of, and malicious attacks by, individual players without introducing an undue performance penalty?
  • Can secure hardware improve the quality and efficiency of collaboration? Can, for example, Byzantine agreement protocols be replaced with more efficient versions if hardware security can be leveraged? Although there has been some early work, there has been no comprehensive exploration of this question, nor have any practical systems been built.
  • To what extent can novel agreement approaches be used to defeat, detect or isolate compromised players? Can they benefit from underlying hardware security features?
  • What are the effects of large-scale shutdown/compromise on collaborative infrastructures and how can we support the recovery and re-initialization in such scenarios?
  • One basic challenge in this context, particularly for V2X (vehicle-to-X) communication, is the efficient detection of compromised inputs. Such systems typically receive thousands of incoming messages that may need to be checked for authenticity and, if safety is a concern, also for plausibility (e.g., to achieve a certain level of fault tolerance against undetected compromised peers that send false information with valid authentication credentials). A sketch of such an input filter is given below.

Efficient and secure schemes for agreement are also essential for detecting faulty/malicious nodes among a group of CARS nodes and isolating them.
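
To make the input-filtering challenge concrete, the following minimal sketch checks incoming V2X messages first for authenticity (an HMAC tag standing in for real certificate-based credentials) and then for physical plausibility (a reported position must be reachable from the sender’s last accepted report at a bounded speed). The message format, key, and speed bound are illustrative assumptions.

    # Hypothetical sketch of a two-stage V2X input filter: authenticity first,
    # then physical plausibility. Message format, key, and thresholds are
    # illustrative assumptions, not a real V2X credential scheme.
    import hmac, hashlib, math

    SHARED_KEY = b"demo-key"      # stand-in for certificate-based V2X credentials
    MAX_SPEED_MPS = 70.0          # plausibility bound on reported motion

    def tag_of(msg):
        payload = f"{msg['sender']}|{msg['t']}|{msg['x']}|{msg['y']}".encode()
        return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

    def authentic(msg):
        """Verify the message tag (stand-in for real V2X authentication)."""
        return hmac.compare_digest(tag_of(msg), msg["tag"])

    def plausible(msg, last):
        """Reject positions a peer could not physically have reached in time."""
        if last is None:
            return True
        dt = msg["t"] - last["t"]
        dist = math.hypot(msg["x"] - last["x"], msg["y"] - last["y"])
        return dt > 0 and dist / dt <= MAX_SPEED_MPS

    def filter_messages(messages, history):
        """Keep messages that are both authentic and physically plausible."""
        accepted = []
        for msg in messages:
            if authentic(msg) and plausible(msg, history.get(msg["sender"])):
                history[msg["sender"]] = msg  # remember last accepted report
                accepted.append(msg)
        return accepted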

1.3 Intelligent Security Strategies for Self-Defense and Self-Repair [Security and ML]

Resistance and resilience in CARS require a correctly functioning immune system. The key research questions are: On what principles can such defenses be designed, and how do we ensure that the security systems themselves are robust against attacks? How can machine learning be used to protect against unknown risks?

 

A new area of research is actual Self-Protection and Self-Repair.  The research questions are:

  • Is it possible to design CARS that can detect their own deviations from expected behavior, determine the attack that caused the undesirable behavior, and then mitigate this (previously unknown) attack vector?
  • Can compromised systems be recovered and returned to a known good state?
  • How do traditional intrusion detection techniques need to evolve in order to be applicable to CARS scenarios? For example, can fusion of “physical features” gathered via on-board sensors help?
  • What research in the area of “moving target defenses” can be extended to provide a substantial security advantage for practical systems?
  • Can the notion of adjustable/adaptive autonomy (where the level of autonomy in a system is adjusted based on external conditions) be extended to security by having a CARS node “seek help” when it is under attack?
  • Autonomous systems (robots, cars, drones) use motion planning to determine the best way to operate. Could a similar approach be used to plan security and recovery strategies? (A sketch of this analogy follows this list.)
  • What security and safety guarantees can be made once machine learning is used for safety-critical decisions? How can one validate and certify machine learning algorithms?
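
To illustrate the planning analogy, the following minimal sketch treats recovery as a shortest-path search over system states, where edges are recovery actions with assigned costs. The states, actions, and costs are invented for illustration; the point is only that standard planning machinery (here, Dijkstra’s algorithm) carries over to security and recovery strategies.

    # Hypothetical sketch: planning a recovery strategy as a shortest-path
    # search over system states, by analogy with motion planning. States,
    # actions, and costs are illustrative assumptions only.
    import heapq

    # Each edge: state -> list of (action, next_state, cost).
    RECOVERY_GRAPH = {
        "compromised": [("isolate_node", "isolated", 1.0),
                        ("reboot", "unknown", 5.0)],
        "isolated":    [("reflash_firmware", "clean", 10.0),
                        ("restore_backup", "clean", 4.0)],
        "unknown":     [("run_attestation", "isolated", 2.0)],
        "clean":       [],
    }

    def plan_recovery(start, goal):
        """Dijkstra search for the cheapest action sequence from start to goal."""
        frontier = [(0.0, start, [])]
        best = {start: 0.0}
        while frontier:
            cost, state, plan = heapq.heappop(frontier)
            if state == goal:
                return plan, cost
            for action, nxt, c in RECOVERY_GRAPH[state]:
                if cost + c < best.get(nxt, float("inf")):
                    best[nxt] = cost + c
                    heapq.heappush(frontier, (cost + c, nxt, plan + [action]))
        return None, float("inf")

    plan, cost = plan_recovery("compromised", "clean")
    print(plan, cost)  # ['isolate_node', 'restore_backup'] 5.0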

 

The goal of this area of research is to enable systems to autonomously recover from a compromise and return to their normal state of operation. One simple example, sketched below, is a hardware-enabled trusted computing base that monitors the system and can reinstall and reboot it if an attack is detected. If possible, this should be more resource-efficient (i.e., in cost, area, and power) than simply adding redundancy, e.g., a second system that performs exactly the same operations as the first.
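
As a minimal illustration of this recovery pattern, the sketch below shows a trusted monitor that periodically measures the installed software and, on a mismatch with a known-good value, reinstalls and reboots. The functions read_firmware, reinstall, and reboot are placeholders for platform-specific mechanisms (e.g., a hardware root of trust), not real APIs.

    # Hypothetical sketch of a trusted monitor that detects tampering and
    # restores the system. read_firmware/reinstall/reboot stand in for
    # platform-specific mechanisms; they are not real APIs.
    import hashlib, time

    KNOWN_GOOD_HASH = hashlib.sha256(b"trusted-firmware-v1").hexdigest()

    def measure(firmware_image):
        """Compute a measurement (hash) of the currently installed software."""
        return hashlib.sha256(firmware_image).hexdigest()

    def recovery_loop(read_firmware, reinstall, reboot, interval_s=60):
        """Periodically compare the system measurement against the known-good
        value; on mismatch, reinstall the good image and reboot into it."""
        while True:
            if measure(read_firmware()) != KNOWN_GOOD_HASH:
                reinstall()   # restore the known-good image from protected storage
                reboot()      # restart into the restored image
            time.sleep(interval_s)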

A related area is the safety and security of machine learning in an adversarial environment, specifically robustness against data injection (e.g., via V2X) and the confidentiality guarantees that can be achieved when the adversary obtains a learned model. One standard building block against data injection is sketched below.
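
As one example of a defense against data injection, the sketch below replaces a plain mean over peer-reported values with a trimmed mean, which bounds the influence of a small fraction of poisoned inputs. The values and trim fraction are illustrative; a trimmed mean is one standard robust-aggregation building block, not a complete defense.

    # Hypothetical sketch: robust aggregation of peer-reported values (e.g.
    # received via V2X) using a trimmed mean, which limits the influence of
    # a few poisoned inputs. Values and trim fraction are illustrative only.
    def trimmed_mean(values, trim_frac=0.2):
        """Drop the lowest and highest trim_frac of values, then average."""
        vals = sorted(values)
        k = int(len(vals) * trim_frac)
        kept = vals[k:len(vals) - k] if k > 0 else vals
        return sum(kept) / len(kept)

    # Nine honest speed reports plus one injected outlier.
    reports = [13.1, 12.9, 13.0, 13.2, 12.8, 13.1, 13.0, 12.9, 13.2, 90.0]
    print(sum(reports) / len(reports))  # naive mean, skewed to ~20.7
    print(trimmed_mean(reports))        # trimmed mean stays near 13.0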

1.4 Integration of Safety, Security, and Real-time Guarantees [Resilience]

CARS usually affect real-world systems; as such, misbehavior or failure can affect the safety of persons. Today, safety and security are usually not integrated. This research area aims at developing integrated approaches that guarantee both safety (functional safety in the presence of random errors) and security (mitigation of malicious attacks). A special case of this integrated approach is sustained fail-safety under attack; in particular, research should explore fail-safe strategies in the presence of a malicious adversary.

One example that illustrates the tension between safety and security is parcel delivery using drones. Traditional fail-safety triggers a safe landing if a drone is confused (e.g., if its radio is jammed). If, however, an adversary maliciously jams the radio to steal delivered parcels, the normal fail-safe behavior of a safe landing delivers the parcel directly into the adversary’s hands. A sketch of a threat-aware fail-safe policy follows.
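
A minimal sketch of one way to resolve this tension, assuming the drone can distinguish plain signal loss from suspected malicious jamming (e.g., via an anomaly score on the RF environment), makes the fail-safe action depend on that threat assessment. The score, threshold, and actions are illustrative assumptions.

    # Hypothetical sketch: a fail-safe policy that accounts for a malicious
    # adversary. The threat score, threshold, and actions are illustrative.
    from enum import Enum

    class FailSafeAction(Enum):
        CONTINUE = "continue mission"
        LAND_NOW = "land at current position"
        RETURN_HOME = "autonomously return to a trusted location"

    def fail_safe_action(radio_ok, jamming_score, threshold=0.8):
        """Choose a fail-safe action based on a threat assessment.

        A plain radio loss triggers the traditional safe landing, but a
        suspected malicious jam instead triggers a return to a trusted
        location, denying the adversary access to the payload."""
        if radio_ok:
            return FailSafeAction.CONTINUE
        if jamming_score >= threshold:       # likely adversarial jamming
            return FailSafeAction.RETURN_HOME
        return FailSafeAction.LAND_NOW       # benign failure: classic fail-safe

    print(fail_safe_action(radio_ok=False, jamming_score=0.95))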

Research topics that we envision could contribute to this area:

  • How can we design and implement security systems that provide sufficient safety?
  • What processes, standards, and design guidance can help integrate safety and security?
  • Can we provide practical approaches to security and functional safety that are composable, i.e. under certain conditions, a composition of resilient components yields a resilient system?
  • How do safety and security guarantees translate into safety of CARS?
  • How can we enhance testing and validation to provide stronger security and safety guarantees?
  • How can we ensure determinism, real-time behavior, and fault tolerance for given security subsystems? Security often conflicts with real-time requirements; a key research question is how to overcome this, i.e., how to achieve real-time-compliant security and cryptography.
  • What changes can be introduced to Intel platforms to make safety guarantees easier to establish?

 

1.5 Autonomous Systems, Ecosystem Scenarios, Requirements, Case Studies, and Validation [Applications]

This practical research aims at understanding real-world requirements and validating our results in realistic scenarios and case studies of selected technologies. The organizations that apply are not necessarily required to provide security expertise but are expected to provide a deep understanding of autonomous systems and the related ecosystem. The goal is to gain confidence that the developed technologies indeed meet customer demand. One vehicle to engage with the ecosystem is collaboration in publicly funded projects. Another vehicle is joint prototyping and experimentation with Intel and partners.

Example research questions include:

  • What are realistic usages for autonomous systems?
  • What are realistic real-world resilience requirements for given types of autonomous systems?
  • What are the ecosystem expectations and feedback for newly developed research results?
  • What are the most promising usage scenarios for given results?
  • What are joint scenarios that can focus the collaboration and joint research results on high-value industry problems?
  • How can we demonstrate our research with industrial partners?

 

1.6 Advanced Platform Security for Long-term Autonomy [Platform Security]

Autonomous systems may operate in a hostile environment and may be offline for long periods of time. The goal of this research area is to create truly resilient individual systems that are largely self-sufficient. This is particularly challenging if the systems are long-lived while their manufacturers can invest only little in their maintenance. While some systems can be patched to protect them from newly found vulnerabilities and can have their attack-mitigation rules updated, many systems that are offline or neglected are left without defense. The fact that every published patch is essentially an announcement of newly discovered vulnerabilities makes the risk to offline systems even more pronounced.

The state of the art for mitigating this substantial risk is:

  • Attack Surface Minimization: minimize the attack surface while offline, e.g., by shutting down non-essential connectivity channels (a minimal sketch follows this list).
  • Hardware-enhanced Security: use platform security features to monitor a system while offline; such a trusted computing base can detect attacks.
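
As a minimal sketch of attack surface minimization, assuming a host where services can be stopped by name, the code below disables every non-essential connectivity channel when the system goes offline. The service names and the stop_service mechanism are illustrative assumptions.

    # Hypothetical sketch: shrink the attack surface when going offline by
    # disabling non-essential connectivity channels. Service names and the
    # stop/start mechanism are illustrative assumptions.
    ESSENTIAL = {"watchdog", "secure-storage"}
    ALL_SERVICES = {"watchdog", "secure-storage", "bluetooth",
                    "wifi-ap", "debug-shell", "telemetry-upload"}

    def enter_offline_mode(stop_service):
        """Stop every service that is not strictly required while offline."""
        for service in sorted(ALL_SERVICES - ESSENTIAL):
            stop_service(service)  # e.g. could wrap `systemctl stop <service>`

    enter_offline_mode(lambda s: print(f"stopping {s}"))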

 

This research area focuses on advanced systems and platform security concepts.  This may include but is not limited to:

  • How are the integrity and authenticity of internal and environmental data (e.g., sensor data) that drive the operation of the CARS established?
  • Correct notions of time and position are critical to the correct operation of CARS (especially mobile CARS, e.g., self-driving cars, drones, and guidance systems). How are time and space (e.g., location) represented, secured, determined, and managed in CARS? How are errors (deliberate or otherwise) in time and space detected and corrected? For example, can a hacker exploit features in 5G to track the location of a 5G node?
  • How can we maintain security of systems that are neglected or offline? How can systems that almost never reboot be kept secure? Is hot-patching a solution?
  • How can the basic remote attestation paradigm be extended to CARS scenarios to guarantee assured behavior, such as assured containment within a physical perimeter? For example, can we devise run-time attestation of policy invariants that CARS nodes use to validate other nodes? (A sketch follows this list.)
  • How can we provide SW protection for distributed applications in CARS (exploitation defense, IP protection)?
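
To make run-time attestation of policy invariants concrete, the following minimal sketch has a verifier send a fresh nonce to a peer, which replies with its current state and a keyed digest over state and nonce; the verifier checks both the digest and a geofence invariant. The HMAC stands in for a hardware-backed attestation key, and all fields and bounds are illustrative assumptions.

    # Hypothetical sketch of run-time attestation of a policy invariant (a
    # geofence). The HMAC stands in for a hardware-backed attestation key;
    # message fields and bounds are illustrative assumptions.
    import hmac, hashlib, json, os

    ATTESTATION_KEY = b"device-bound-key"        # stand-in for a hardware-held key
    GEOFENCE = {"x_max": 100.0, "y_max": 100.0}  # policy invariant to attest

    def quote(state, nonce):
        """Prover side: bind the current state to the verifier's fresh nonce."""
        blob = json.dumps(state, sort_keys=True).encode() + nonce
        return hmac.new(ATTESTATION_KEY, blob, hashlib.sha256).hexdigest()

    def verify(state, nonce, tag):
        """Verifier side: check digest freshness and the geofence invariant."""
        authentic = hmac.compare_digest(quote(state, nonce), tag)
        inside = (0 <= state["x"] <= GEOFENCE["x_max"] and
                  0 <= state["y"] <= GEOFENCE["y_max"])
        return authentic and inside

    nonce = os.urandom(16)
    state = {"x": 42.0, "y": 17.5}
    print(verify(state, nonce, quote(state, nonce)))  # True: inside perimeter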