Missed our insightful webinar on 29 November on building trust in AI for automated vehicles? The replay is now available!

This webinar delved into the challenges and opportunities of AI in the context of road safety, congestion reduction, and environmental impact. It primarily focused on what it takes to build trust and the conditions for adoption of AI-driven solutions. It gathered 58 attendees, mostly from industry, along with some researchers (from the CEA, for instance), in addition, of course, to AI4CCAM partners.

The webinar also aimed to shed light on the psychological and emotional factors that influence the adoption of AI in automated vehicles. By understanding these dynamics, we can contribute to an informed and well-rounded perspective on AI-driven mobility.

In this one-hour webinar, join Marc Eynaud, Arnaud Gotlieb, Lucie Regereau, and Isabelle Vallet as they explore the nuances of user acceptance, the emotions at play, and the barriers to adoption in automated mobility.

  • Discover key insights from the European project AI4CCAM (AI for Connected Cooperative and Automated Mobility). Arnaud Gotlieb, the project coordinator, emphasizes the importance of trustworthy AI for autonomous driving.
  • Explore Lucie Regereau’s methodology, delving into the emotions of daily car users in Paris, Berlin, and Warsaw. Understand why user acceptance is crucial for automated driving’s future.
  • Isabelle Vallet shares fascinating results on how traffic regulations, attitudes, and representations influence automated vehicle adoption. See how these factors shape perceptions and trust.

Watch the full replay

AI4CCAM interviewed Nafsica Papakosta, Project Manager and Communications Specialist at INLECOM, leader of WP5, which covers Communication, Dissemination and Exploitation for the AI and CCAM ecosystems.

In this interview, Nafsica addresses the methodology employed to identify and assess the outcomes of the AI4CCAM project with the greatest exploitation and innovation potential.

What is the purpose of the AI4CCAM’s Innovation & Exploitation Methodology?
As AI4CCAM’s Innovation and Exploitation Manager, INLECOM employs the Innovation & Exploitation Methodology (IEM), a well-established and validated framework for efficiently addressing the AI4CCAM project’s objectives in the areas of Exploitation, Innovation Registry, and Patent Filing. This methodology enables the documentation and evaluation of all project outcomes based on the owners’ input, with the purpose of producing a collection of outcomes and assessing their potential for innovation and exploitation. The AI4CCAM IEM comprises three phases and is used to account for both exploitable and innovative project outcomes.

How does the Outcomes Registry phase of the IEM methodology work?
The initial phase of the IEM focuses on the completion of the Outcomes Registry via a web form of 11 to 15 questions, filled out by project partners, which helps us pinpoint AI4CCAM’s innovative outcomes and key exploitable results. The Outcomes Registry is essentially a list of significant expected project outcomes together with preliminary descriptions of their characteristics. Partners submit their expected outcomes to the registry and assist in categorizing them according to a given exploitation path. The first phase of AI4CCAM’s IEM identified 11 expected Key Exploitable Results (KERs).
A feedback procedure will then be established to assist partners in better addressing difficulties, exploitation queries, and concerns. Based on the material collected, the initial Innovation & Exploitation assessment is performed against seven criteria: solution readiness, anticipated market interest, anticipated size of the market in question, level of innovation, competitive landscape, market readiness, and solution reproducibility and re-usability.
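
To make the assessment step more concrete, below is a minimal, purely illustrative Python sketch of how a registry of outcomes could be scored against the seven criteria above. The outcome names, the 1-to-5 scores and the equal weighting are hypothetical examples introduced here for illustration; they are not part of the IEM itself.

```python
# Illustrative sketch only: scoring registry outcomes against the seven IEM
# assessment criteria. Outcomes, scores and equal weighting are hypothetical.

CRITERIA = [
    "solution_readiness",
    "anticipated_market_interest",
    "anticipated_market_size",
    "level_of_innovation",
    "competitive_landscape",
    "market_readiness",
    "reproducibility_and_reusability",
]

def assess(outcome_name, scores):
    """Average the 1-5 scores given for each criterion (equal weights assumed)."""
    missing = [c for c in CRITERIA if c not in scores]
    if missing:
        raise ValueError(f"{outcome_name}: missing scores for {missing}")
    return sum(scores[c] for c in CRITERIA) / len(CRITERIA)

# Hypothetical registry entries, not actual AI4CCAM KERs.
registry = {
    "Trajectory prediction model": {c: 4 for c in CRITERIA},
    "Trustworthy AI methodology": {c: 3 for c in CRITERIA},
}

ranked = sorted(registry, key=lambda k: assess(k, registry[k]), reverse=True)
print(ranked)
```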

What is the purpose of the AI4CCAM Innovation Registry and how will the decision on which patents to file be made?
On the innovation, IP, and IPR management front, INLECOM will use its experience as an Innovation Manager to help establish the AI4CCAM Innovation Registry, which will include novel AI-driven models used in safety-critical CCAM applications.
The methodology for cataloguing and supporting AI4CCAM outputs with high innovation potential will be provided in the third phase of the IEM. The AI4CCAM consortium partners will be able to fully explore and comprehend the innovation potential of their IP assets as a direct result of the innovation management initiatives guided by INLECOM, which will evaluate the outcomes’ innovation and commercial ambitions. A preliminary assessment of the outcomes’ innovative dimension will take place using the criteria of utility, novelty, and non-obviousness.
We will then collaboratively evaluate, score, and prioritize the areas where we will focus the IP-related actions that have a planned commercial trajectory and/or strategic interest to AI4CCAM partners.
INLECOM intends to formally protect the AI4CCAM outputs through the submission and issuance of two patents. Our work will be guided by the EU’s legislative framework and the patent filing procedural requirements, as well as significant experience from past projects.

The CCAM Partnership Multicluster Meeting will take place on 21 and 22 November in Brussels (for members of the CCAM Association only).

The CCAM Partnership Multicluster Meeting will discuss achievements and ongoing initiatives. The CCAM Partnership was launched in 2021 to create a more user-centered and inclusive mobility system, increasing road safety while reducing congestion and the environmental footprint; to develop more collaborative research, testing and demonstration projects in order to accelerate the pace of innovation and the implementation of automated mobility; and to work together at European level to help remove barriers and contribute to the acceptance and efficient rollout of automation technologies and services.

AI4CCAM, represented by the project coordinator Arnaud Gotlieb (Simula Research Laboratory), will attend the event as an auditor. This will be an important opportunity for the project to network and establish a dialogue with sister initiatives.

AI4CCAM continues its series of webinars focused on the different technical aspects of the project, exploring AI’s impact on connected and automated vehicles.

On 29 November, jointly organised with BVA, the webinar “On the Roads of Tomorrow: Securing Trust in AI for Automated Vehicles” will be held!

As a global leader in insights, data & consulting powered by behavioral science, The BVA Family is eager to delve into the challenges and opportunities of AI in road safety, congestion reduction, and environmental impact. This webinar aims to share valuable insights on building trust and conditions for adopting AI-driven solutions. We’ll also discuss the psychological and emotional factors influencing AI adoption in automated vehicles.

The webinar will also be the perfect opportunity for an open discussion, engaging in a dialogue with experts in the field and fellow participants to explore the future of AI in CAVs.

Curious about AI and connected/automated vehicles (CAVs)? Register (for free) here!

AI4CCAM has just released its public deliverable on Methodology for trustworthy AI (Artificial Intelligence) in the scope of Connected, Cooperative and Automated Mobility (CCAM).

The methodology relies on current European guidelines, namely the report Trustworthy Autonomous Vehicles produced by the Joint Research Centre of the European Commission in 2021, a first instantiation in the autonomous vehicles scope of previous initiatives including the AI Act (European Commission, 2021) and the Ethics Guidelines for Trustworthy AI (Expert Group on Artificial Intelligence, 2019). It is also based on the developments of the confiance.ai program, a multi-sector research program tackling the trustworthiness of AI in critical systems.

In this document, the proposed methodology is based on a macro-decomposition of phases in a pipeline designed to ensure trustworthiness when developing a given AI-based system for CCAM, inspired by the confiance.ai program. Within this pipeline, specific activities in the project are circumscribed at a high level, and trustworthiness properties are targeted for each of these phases. These trustworthiness attributes are based on current developments at the European level, namely those published in the 2021 Joint Research Centre report on autonomous vehicles. All properties identified in the confiance.ai program are provided as support to complete the identified trustworthiness attributes, depending on the use case studied.
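
As a rough illustration of this idea, the sketch below encodes a development pipeline in which each phase carries the trustworthiness attributes to be evidenced for it. The phase names and attribute assignments are hypothetical examples; the actual decomposition and attributes are those defined in the deliverable and in the Joint Research Centre report.

```python
# Illustrative sketch only: one way to encode a pipeline in which each phase
# is associated with the trustworthiness attributes to be evidenced for it.
# Phase names and attribute assignments are hypothetical examples, not the
# decomposition actually defined in the AI4CCAM deliverable.

from dataclasses import dataclass, field

@dataclass
class Phase:
    name: str
    trustworthiness_attributes: list
    evidence: dict = field(default_factory=dict)  # attribute -> True once evidenced

pipeline = [
    Phase("data collection", ["privacy and data governance", "diversity and fairness"]),
    Phase("model training", ["technical robustness", "transparency"]),
    Phase("system integration", ["safety", "human oversight"]),
    Phase("validation in simulation", ["accountability", "technical robustness"]),
]

def report(pipeline):
    """List, for each phase, the targeted attributes that still lack evidence."""
    for phase in pipeline:
        pending = [a for a in phase.trustworthiness_attributes
                   if not phase.evidence.get(a)]
        print(f"{phase.name}: pending evidence for {pending or 'none'}")

report(pipeline)
```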

The AI developments will be applied to the use cases in the coming months.
Within the context of the AI4CCAM project, the methodology should be instantiated in three use cases addressing complementary views on AI use and perception. For these first preliminary guidelines, the methodology is instantiated in only one of the project’s use cases: AI-enhanced ADAS for trajectory perception. Subsequent activities in the project should see its application to the other use cases. Following the same logic, one scenario, of many to come, has been modeled for this specific use case.

Read the document!

AI4CCAM interviewed Pavan Vasishta, Akkodis, leader of WP4 of the project, working on “Use Case Implementation and Validation”.

Pavan is a Senior Research Scientist at Akkodis, and in this interview he tells us more about what validation and impact mean when dealing with Artificial Intelligence (AI) for Autonomous Vehicles.

As leader of WP4 of the project, what kind of work did you do to define a validation process able to accommodate a variety of CCAM use cases?

Our work in WP4 of the project deals mainly with validating the various AI models that will come out of this project in perception and trajectory prediction. Along with other project partners, we are developing guidelines on what validation means in terms of AI for Autonomous Vehicles.

For this, we are working on creating a Digital Twin – a recreation of the real world in simulation – that will act as a playground for all these models. Within this microcosm, we will be able to simulate a variety of behaviours, weather conditions and test out many different scenarios. Each use case and scenario will be studied in depth and simulated within the Digital Twin and compared against ethical and technological criteria for Vulnerable Road User acceptance of Connected and Autonomous Mobility.
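
As a purely illustrative sketch of the kind of scenario sweep such a Digital Twin enables, the snippet below enumerates combinations of hypothetical scenario parameters (weather, VRU behaviour, time of day) and checks each simulated run against a placeholder acceptance criterion. The `run_scenario` stub and the threshold are invented for illustration and do not reflect the actual WP4 simulator or criteria.

```python
# Illustrative sketch only: sweeping hypothetical scenario parameters and
# recording whether each simulated run meets a placeholder safety criterion.

import itertools
import random

WEATHER = ["clear", "rain", "fog"]
VRU_BEHAVIOUR = ["crossing", "waiting", "jaywalking"]
TIME_OF_DAY = ["day", "night"]

def run_scenario(weather, behaviour, time_of_day):
    """Stand-in for a Digital Twin run; returns a minimum distance to the VRU in metres."""
    rng = random.Random(f"{weather}-{behaviour}-{time_of_day}")  # deterministic stub
    return rng.uniform(0.5, 5.0)

SAFETY_THRESHOLD_M = 1.5  # hypothetical acceptance criterion

results = {}
for params in itertools.product(WEATHER, VRU_BEHAVIOUR, TIME_OF_DAY):
    distance = run_scenario(*params)
    results[params] = distance >= SAFETY_THRESHOLD_M

failures = [p for p, ok in results.items() if not ok]
print(f"{len(failures)} of {len(results)} scenario variants fall below the threshold")
```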

What is the impact and the role of AI in the use cases you are working on within the project?

Explainable AI is at the heart of the use cases we are working on within the project. A major problem in the acceptance of AI today is its perceived “black box”-ness. One does not know what goes on within an AI model after inputting certain data. We aim to keep explainability at the heart of our work, especially when it comes to perception and trajectory prediction of VRUs.

While we are working on improving and validating Advanced Driver Assistance Systems and the robustness of AI-based perception systems for CAVs, we are also actively contributing to the development of trustworthy AI for safe trajectory prediction. We have managed to get some very good results in predicting pedestrian behaviour in urban scenarios.
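
To give a concrete sense of the trajectory prediction task (though not of the project’s AI models, which this sketch does not represent), here is a minimal constant-velocity baseline that extrapolates a pedestrian’s future positions from past observations; all values are hypothetical.

```python
# Illustrative sketch only: a constant-velocity baseline for pedestrian
# trajectory prediction. It is NOT the AI model developed in AI4CCAM.

import numpy as np

def predict_constant_velocity(observed_xy, horizon, dt=0.4):
    """Extrapolate future positions from the last observed velocity.

    observed_xy: array of shape (T, 2) with past positions in metres.
    horizon: number of future steps to predict.
    dt: time step between observations, in seconds.
    """
    observed_xy = np.asarray(observed_xy, dtype=float)
    velocity = (observed_xy[-1] - observed_xy[-2]) / dt   # last-step velocity (m/s)
    steps = np.arange(1, horizon + 1).reshape(-1, 1)      # (horizon, 1)
    return observed_xy[-1] + steps * velocity * dt        # (horizon, 2)

# Example: a pedestrian walking roughly along +x at about 1.25 m/s.
past = [[0.0, 0.0], [0.5, 0.05], [1.0, 0.1]]
print(predict_constant_velocity(past, horizon=3))
```

Against such a simple baseline, learned models add context awareness, and keeping their predictions explainable is exactly the challenge described above.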

How can AI4CCAM impact the user acceptance of CCAM let us say, in a 5-year horizon?

Autonomous Vehicles can be a game changer for human behaviour in the long run, providing autonomy, independence and safety to many, many people around the world. One of the main issues plaguing user acceptance is the opacity of vehicle behaviour and manoeuvres on open roads and in the presence of other road users. With all the work we are putting into the explainability of the vehicles’ intentions in a variety of scenarios, within the ambit of AI4CCAM, it is my hope that more and more people will feel comfortable around AVs so that we can unleash the full potential of Connected Mobility.

The Covenant of Mayors has recently released the publication “Policy options to reduce emissions from the mobility sector: inspiring examples and learning opportunities.”

The Covenant of Mayors is a European initiative that solicits voluntary commitments by local governments to implement EU climate and energy objectives. With transport as one of its key sectors, the Covenant plays a significant role in climate mitigation. Transport accounts for approximately 16% of actions submitted by Covenant signatories and contributes 26-28% of total emissions, according to the Joint Research Centre’s Baseline Emission Inventories (BEI, Covenant of Mayors 2019 Assessment). The Covenant also tackles transport in its climate adaptation pillar by using transport-related indicators such as the vulnerability of transport infrastructure to extreme weather events.
In 2022, the Covenant of Mayors further expanded its focus by introducing an Energy Poverty Pillar, which includes indicators related to transport poverty. These metrics assess the accessibility and availability of public transport services, giving insights into how mobility influences social inclusion.

In the publication, AI4CCAM is included among the inspiring projects for improving public transport.

According to the International Transport Forum, public transport buses and trains can emit as little as a fifth of the CO2 per passenger-km of ride-hailing services, and about a third of that of a private vehicle. A strong and well-integrated public transport network can also help provide equal access to jobs, education, services and other economic opportunities, particularly for those without private vehicles. Investing in public transport is one of the most effective measures to reduce transport emissions and bring cities closer to reaching their climate targets. It can also increase equity and foster economic development. Therefore, ensuring that public transport is accessible, affordable, and inclusive is of paramount importance to reaching the wider climate and societal goals set by cities.

Download the publication and find AI4CCAM on page 7!

AI4CCAM interviewed Karla Quintero, SystemX, leader of WP1 of the project focusing on “AI-based integrated Trustworthy AI framework for CCAM”.

Karla is Senior Research Engineer/Systems Engineering Architect at SystemX.

In this interview, Karla tells us more about the most innovative aspects of AI4CCAM and the regulation of the use of Artificial Intelligence.

As the project manager of AI4CCAM for IRT SystemX, what do you foresee as the greatest innovation of the project?

The AI4CCAM project addresses the assessment of trustworthiness in artificial intelligence for Cooperative, Connected and Automated Mobility. A very strong element of innovation is a methodology that applies the scenario approach and allows the evaluation of key requirements proposed at the European level for trustworthy AI. The results of the project should provide richer knowledge on the challenges and means for evaluating these key requirements given that, to this day, no consensus has been established and this remains a very active field of research.

The use cases in the project provide solid ground for applying this methodology, as they represent complementary approaches within the mobility scope: on the one hand, AI for trajectory prediction and, on the other hand, user acceptance from the perspective of Vulnerable Road Users.

In Europe, the trend is to regulate the usage of AI. How do you see that coming in the Automotive Industry?

In the automotive industry, a layer of regulation of such a technology is already ensured intrinsically through traditional safety evaluation procedures.

However, many actions are already under way, such as initiatives related to the explainability of AI algorithms embedded in vehicles. Some of the ongoing work, also tackled by IRT SystemX (among others in the PRISSMA project – Research and Investment Platform for the Safety and Security of Autonomous Mobility), consists in providing recommendations to regulate the certification of AI-based systems, from individual sensors up to the entire vehicle and fleets of vehicles. In this sense, regulation emerges from surpassing the black-box paradigm of AI and requiring explainability while still protecting providers’ intellectual property. This is both the goal and the challenge, which is why intellectual property traceability has become a topic of high interest in the automotive industry.

Another issue in the automotive industry, as in many others, is the protection of privacy, which has been of interest since before the GDPR (General Data Protection Regulation) came into effect, and even more so since. In this sense, AI is consequently, if only partly, regulated. Within this regulatory scope, similar initiatives are being researched regarding cybersecurity.

Some initiatives that can be mentioned include the well-known AI Act, but also the “Grand Défi”, which “[ensures] the security, reliability and certification of systems based on artificial intelligence”. It aims to build the processes, methods and tools needed to guarantee the trust placed in products and services that integrate artificial intelligence. Consequently, it aims to provide a “technical” framework for the future European regulation proposal on AI. As part of this “Grand Défi”, Confiance.ai brings together a group of 13 major French industrial and academic partners (Air Liquide, Airbus, Atos, Naval Group, Renault, Safran, Sopra Steria, Thales and Valeo, as well as the CEA, Inria, IRT Saint Exupéry and IRT SystemX) to take up the challenge of industrializing artificial intelligence for critical products and systems.

What is coming in the next few months?

Future advances expected in the ecosystem include:

  • Application of AI-based systems for User Experience assessment, e.g. passenger monitoring, situational awareness, alerting passengers,
  • Addressing trustworthiness through normalized levels and scales for AI-based systems,
  • Using the scope of CCAM in order to extend the Operational Design Domain (ODD), increase the reliability of the current ODD, and normalize interaction with law enforcement authorities.

Some upcoming events of interest where IRT SystemX contributes in the European scope are:

  • The “European AI, Data, Robotics Forum” (https://adr-association.eu/blog/announcing-the-european-ai-data-robotics-forum-08-09-11-2023-versailles/), organised by the AI Data Robotics Association, which will be held in November 2023.
  • The “Confiance.ai Days”, the annual event organised by the Confiance.ai program, gathering conferences, roundtables, an exhibition village and results from all partners involved. The exact date of the event (January 2024) will be announced shortly. The previous edition was held in 2022.

AI4CCAM interviewed Atia Cortes, Barcelona Supercomputing Center (BSC), leader of WP3 of the project, working on “Trustworthy AI for CCAM: Ethical, Social and Cultural Implications”.

Atia Cortes holds a PhD and is a Recognised Researcher in the Social Link Analytics Unit of the Life Sciences Department at BSC.

In this interview, Atia tells us more about the ethical issues related to automated vehicles and how to achieve responsible AI in automated mobility.

As leader of WP3 in the AI4CCAM project, what are the main ethical issues and risks arising from the use of AI in automated vehicles?

The unique nature of future autonomous vehicles, which operate in the physical world, makes ethical considerations as important as legal ones. The safety of passengers and other road users is a significant concern, and the responsibility and accountability for accidents involving autonomous vehicles are complex issues that must be addressed during the design phase. Additionally, the legal and ethical decision-making required of autonomous vehicles, such as deciding between the safety of passengers and that of other road users, raises difficult questions that require careful consideration. Finally, the potential impact of autonomous vehicles on employment and the economy is another ethical consideration that needs to be addressed by designers and society.

How do we reach a responsible AI in automated mobility?

It is essential to embrace responsible AI as defined by the EU. Responsible AI makes it possible to consider societal, ethical, and legal values while designing the system, avoiding undesirable outcomes as much as possible. Responsible AI will bring transparent and accountable autonomous mobility.

How can AI4CCAM impact the EU AI-regulation landscape, let us say, in a 5-year horizon?

AI4CCAM must promote the design of transparent and accountable algorithms for autonomous driving. Establishing accountability for AI-based decisions in autonomous vehicles and making AI-based systems as transparent as possible is crucial. Understanding this process will allow us to support the legislator in designing new regulations for the sector.
AI4CCAM has to play a leading role in setting the global gold standard for AI-based autonomous mobility, which will be essential for the new legislation.