AI4CCAM continues its webinars focused on the different technical aspects of the project, exploring AI’s impact on connected and automated vehicles.

On 29 November, jointly organised with BVA, the webinar “On the Roads of Tomorrow: Securing Trust in AI for Automated Vehicles” will be held!

As a global leader in insights, data & consulting powered by behavioral science, The BVA Family is eager to delve into the challenges and opportunities of AI in road safety, congestion reduction, and environmental impact. This webinar aims to share valuable insights on building trust and conditions for adopting AI-driven solutions. We’ll also discuss the psychological and emotional factors influencing AI adoption in automated vehicles.

The webinar will also be the perfect opportunity for an open discussion, engaging in a dialogue with experts in the field and fellow participants to explore the future of AI in CAVs.

Curious about AI and connected/automated vehicles (CAVs)? Register (for free) here!

AI4CCAM has just released its public deliverable on Methodology for trustworthy AI (Artificial Intelligence) in the scope of Connected, Cooperative and Automated Mobility (CCAM).

The methodology relies on current European guidelines, namely the report Trustworthy Autonomous Vehicles produced by the Joint Research Centre of the European Commission in 2021, a first instantiation in the autonomous vehicles scope of previous initiatives including the AI Act (European Commission, 2021) and the Ethics Guidelines for Trustworthy AI (Expert Group on Artificial Intelligence, 2019). It is also based on the developments of the confiance.ai program, a multi-sector research program tackling the trustworthiness of AI in critical systems.

In this document, the proposed methodology is based on a macro decomposition of phases in a pipeline to ensure trustworthiness when developing a given AI-based system for CCAM, inspired by the confiance.ai program. Within this pipeline, specific activities in the project are circumscribed at a high level, and trustworthiness properties are targeted for each of these phases. These trustworthiness attributes are based on current developments at the European level, namely those published in the Joint Research Centre report on autonomous vehicles in 2021. All properties identified in the confiance.ai program are provided as support to complete the identified trustworthiness attributes, depending on the studied use case.

The methodology will be applied to the project’s use cases in the coming months.
Within the context of the AI4CCAM project, the methodology should be instantiated in three use cases addressing complementary views on AI use and perception. For these first preliminary guidelines, the methodology is instantiated in only one of the project’s use cases: AI-enhanced ADAS for trajectory perception. Subsequent activities in the project should see its application to the other use cases. In the same logic, one scenario of many to come has been modelled for this specific use case.

Read the document!

AI4CCAM interviewed Pavan Vasishta, Akkodis, leader of the project’s WP4, working on “Use Case Implementation and Validation”.

Pavan is a Senior Research Scientist at Akkodis, and in this interview he tells us more about what validation and impact mean when dealing with Artificial Intelligence (AI) for Autonomous Vehicles.

As leader of WP4 of the project, what kind of work did you do to define a validation process able to include a variety of CCAM use cases?

Our work in WP4 of the project deals mainly with validating the various AI models that will come out of this project in perception and trajectory prediction. Along with other project partners, we are developing guidelines on what validation means in terms of AI for Autonomous Vehicles.

For this, we are working on creating a Digital Twin – a recreation of the real world in simulation – that will act as a playground for all these models. Within this microcosm, we will be able to simulate a variety of behaviours, weather conditions and test out many different scenarios. Each use case and scenario will be studied in depth and simulated within the Digital Twin and compared against ethical and technological criteria for Vulnerable Road User acceptance of Connected and Autonomous Mobility.
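As a purely illustrative sketch (the scenario dimensions and names below are assumptions, not the project’s actual Digital Twin tooling), systematically covering combinations of behaviours, weather conditions and traffic could look like:

```python
from itertools import product

# Hypothetical scenario dimensions for a Digital Twin test campaign;
# the real AI4CCAM parameters are defined within the project.
weathers = ["clear", "rain", "fog", "night"]
vru_behaviours = ["adult_crossing", "child_running", "group_walking"]
traffic_densities = ["low", "medium", "high"]

def build_scenarios(weathers, behaviours, densities):
    """Enumerate every combination of conditions to simulate."""
    return [
        {"weather": w, "vru_behaviour": b, "traffic": d}
        for w, b, d in product(weathers, behaviours, densities)
    ]

scenarios = build_scenarios(weathers, vru_behaviours, traffic_densities)
print(len(scenarios))  # 4 weathers x 3 behaviours x 3 densities = 36 scenarios
```

Each generated combination would then be run in simulation and scored against the ethical and technological acceptance criteria mentioned above.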

What is the impact and the role of AI in the use cases you are working on within the project?

Explainable AI is at the heart of the use cases we are working on within the project. A major problem in the acceptance of AI today is its perceived “black box”-ness. One does not know what goes on within an AI model after inputting certain data. We aim to keep explainability at the heart of our work, especially when it comes to perception and trajectory prediction of VRUs.

While we are working on improving and validating Advanced Driver Assistance Systems and the robustness of AI-based perception systems for CAVs, we are also actively contributing to the development of trustworthy AIs in safe trajectory prediction. We have managed to get some very good results in predicting pedestrian behaviour in urban scenarios.

How can AI4CCAM impact the user acceptance of CCAM let us say, in a 5-year horizon?

Autonomous Vehicles can be a game changer in human behaviour in the long run, providing autonomy, independence and safety to many, many people around the world. One of the main issues plaguing user acceptance is the opacity of vehicle behaviour and manoeuvres on open roads and in the presence of other road users. With all the work that we are putting into the explainability of the vehicles’ intentions in a variety of scenarios, within the ambit of AI4CCAM, it is my hope that more and more people will feel comfortable around AVs so that we can unleash the full potential of Connected Mobility.

The Covenant of Mayors has recently released the publication “Policy options to reduce emissions from the mobility sector: inspiring examples and learning opportunities.”

The Covenant of Mayors is a European initiative that solicits voluntary commitments by local governments to implement EU climate and energy objectives. With transport as one of its key sectors, the Covenant plays a significant role in climate mitigation. Transport accounts for approximately 16% of actions submitted by Covenant signatories and contributes to 26-28% of total emissions, according to the Joint Research Centre’s Baseline Emission Inventories (BEI, Covenant of Mayors 2019 Assessment). The Covenant also tackles transport in its climate adaptation pillar by using transport-related indicators such as the vulnerability of transport infrastructure to extreme weather events.
In 2022, the Covenant of Mayors further expanded its focus by introducing an Energy Poverty Pillar, which includes indicators related to transport poverty. These metrics assess the accessibility and availability of public transport services, giving insights into how mobility influences social inclusion.

In the publication, AI4CCAM is included among the inspiring projects for improving public transport.

According to the International Transport Forum, public transport buses and trains can release as little as a fifth of the CO2 emissions per passenger-km of ride-hailing services, and about a third of those of a private vehicle. A strong and well-integrated public transport network can also help provide equal access to jobs, education, services and other economic opportunities, particularly for those without private vehicles. Investing in public transport is one of the most effective measures to reduce transport emissions and bring cities closer to reaching their climate targets. It can increase equity and foster economic development. Therefore, ensuring that public transportation is accessible, affordable, and inclusive is of paramount importance to reach the wider climate and societal goals set by cities.

Download the publication and find AI4CCAM on page 7!

AI4CCAM interviewed Karla Quintero, SystemX, leader of WP1 of the project focusing on “AI-based integrated Trustworthy AI framework for CCAM”.

Karla is Senior Research Engineer/Systems Engineering Architect at SystemX.

In this interview, Karla tells us more about the most innovative aspects of AI4CCAM and the regulation of the use of Artificial Intelligence.

As the project manager of AI4CCAM for IRT SystemX, what do you foresee as the greatest innovation of the project?

The AI4CCAM project addresses the assessment of trustworthiness in artificial intelligence for Cooperative, Connected and Automated Mobility. A very strong element of innovation is a methodology that includes the application of the scenario approach and allows the evaluation of key requirements proposed at the European level for trustworthy AI. The results of the project should provide richer knowledge on the challenges of, and means for, evaluating these key requirements given that, to this day, no consensus has been established, making this a very active field of research.

The use cases in the project provide solid ground on which to apply this methodology, as they take complementary approaches within the mobility scope: AI for trajectory prediction on the one hand and, on the other, user acceptance from the perspective of Vulnerable Road Users.

In Europe, the trend is to regulate the usage of AI. How do you see that coming in the Automotive Industry?

In the automotive industry, a layer of regulation of such a technology is already ensured intrinsically through traditional safety evaluation procedures.

However, many actions are already under way, such as initiatives related to the explainability of AI algorithms embedded in vehicles. Some of the ongoing work, also tackled by IRT SystemX (among others in the PRISSMA project – Research and Investment Platform for the Safety and Security of Autonomous Mobility), consists of providing recommendations to regulate the certification of AI-based systems, going from individual sensors up to the entire vehicle, and fleets of vehicles. In this sense, regulation emerges from surpassing the black-box paradigm of AI and requiring explainability, all the while protecting the intellectual property of providers. This is both the goal and the challenge, which is why intellectual property traceability has become a topic of high interest in the automotive industry.

Another issue in the automotive industry, as in many others, is the protection of privacy, which was of interest before the GDPR (General Data Protection Regulation) came into effect and even more so afterwards. In this sense, AI is consequently, yet only partly, regulated. Within this regulation scope, similar initiatives are under research regarding cybersecurity.

Some initiatives that can be mentioned include the very well-known AI Act, but also the “Grand Défi”, which “[Ensures] the security, reliability and certification of systems based on artificial intelligence”. It aims at building the processes, methods and tools to guarantee the trustworthiness placed in products and services that integrate artificial intelligence. Consequently, it aims to provide a “technical” framework for the future European regulation proposal on AI. As part of this “Grand Défi”, Confiance.ai brings together a group of 13 major French industrial and academic partners (Air Liquide, Airbus, Atos, Naval Group, Renault, Safran, Sopra Steria, Thales and Valeo, as well as the CEA, Inria, IRT Saint Exupéry and IRT SystemX) to take up the challenge of industrializing artificial intelligence for critical products and systems.

What is coming in the next few months?

Expected future advances in the ecosystem include:

  • Application of AI-based systems for User Experience assessment, e.g. passenger monitoring, situational awareness, alerting passengers,
  • Addressing trustworthiness through normalized levels and scales for AI-based systems,
  • Using the scope of CCAM to extend the Operational Design Domain (ODD), increase the reliability of the current ODD, and normalize interaction with law enforcement authorities.

Some upcoming events of interest where IRT SystemX contributes in the European scope are:

  • The “European AI, Data, Robotics Forum” (https://adr-association.eu/blog/announcing-the-european-ai-data-robotics-forum-08-09-11-2023-versailles/), organised by the AI Data Robotics Association, which will be held in November 2023.
  • The “Confiance.ai Days”, the annual event organised by the Confiance.ai program, gathering conferences, roundtables, exhibition villages and results from all partners involved. The exact date of the event (January 2024) will be announced shortly. The previous edition was held in 2022.

AI4CCAM interviewed Atia Cortes, Barcelona Supercomputing Center (BSC), leader of WP3 of the project, working on “Trustworthy AI for CCAM: Ethical, Social and Cultural Implications”.

Atia Cortes holds a PhD and is a Recognised Researcher in the Social Link Analytics Unit, Life Sciences Department, at BSC.

In this interview, Atia tells us more about ethical issues related to automated vehicles and how to achieve responsible AI in automated mobility.

As leader of WP3 in the AI4CCAM project, what are the main ethical issues and risks due to the usage of AI in automated vehicles?

The unique nature of the future autonomous vehicles, which operate in the physical world, makes ethical considerations especially important at the same level as legal considerations. The safety of passengers and other road users is a significant concern, and the responsibility and accountability for accidents involving autonomous vehicles are complex issues that must be addressed during the design phase. Additionally, the legal and ethical decision-making required of autonomous vehicles, such as deciding between the safety of passengers and other road users, raises difficult questions that require careful consideration. Finally, the potential impact of autonomous vehicles on employment and the economy is another ethical consideration that needs to be addressed by the designer and society.

How do we achieve responsible AI in automated mobility?

It is essential to embrace responsible AI as defined by the EU. Responsible AI makes it possible to consider societal, ethical, and legal values while designing the system, avoiding undesirable outcomes as much as possible. Responsible AI will bring transparent and accountable autonomous mobility.

How can AI4CCAM impact the EU AI-regulation landscape, let us say, in a 5-year horizon?

AI4CCAM must promote the design of transparent and accountable algorithms for autonomous driving. Establishing accountability for AI-based decisions in autonomous vehicles and making AI-based systems as transparent as possible is crucial. Understanding this process will allow us to support the legislator in designing new regulations for the sector.
AI4CCAM has to play a leading role in setting the global gold standard for AI-based autonomous mobility, which will be essential for the new legislation.

AI4CCAM is organising a series of webinars focusing on technical aspects of the project.
On 15 September, a new webinar was held on “Detection of Unknown Unknowns or how to make your Perception Safe” by Deepsafety.

What does safety mean in autonomous driving? Ralph Meyfarth answered this question, underlining the role of artificial intelligence and the concept of safe perception.
Deep learning and datasets were discussed as well.

Go to our Library and download the presentation!

AI4CCAM interviewed Ganesh Gowrishankar, CNRS (Centre National de la Recherche Scientifique), leader of WP2 of the project, working on “Advance AI-driving CCAM sense-plan-act predictive models”.

Ganesh Gowrishankar is a Senior Researcher (Directeur de Recherche) in the Interactive Digital Human group at the CNRS-UM Laboratoire d’Informatique, de Robotique et de Microélectronique de Montpellier (LIRMM).

In this interview, Ganesh tells us more about the interaction between humans and automated cars, and the role of human behavior.

As leader of WP2, can you tell us about the research directions explored in the project?

WP2 is the scientific WP of the project. In this WP we are specifically interested in VRU prediction, which is a major challenge for automated vehicles. The WP aims to develop a more explainable and trustworthy AI framework to predict VRU movements. We plan to do this by developing a ‘hybrid AI model’ that integrates traditional end-to-end, data-based AI models with human behavioral models developed using neuroscientific psychophysical experiments and techniques. WP2 involves DEEPS and AKKODIS, who will provide AI models for VRU prediction. VIF and SKODA will help develop the scenarios to be tested and augment data using GANs to help train these models. CNRS will provide a behavioral model of VRUs that will be integrated with the AI model(s). SIMULA will develop techniques to test the explainability of the developed model(s).
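To make the hybrid idea concrete, here is a minimal, purely hypothetical sketch: both predictors below are stand-ins (a velocity extrapolation in place of a learned model, a goal-directed step in place of the neuroscience-informed model), and the simple weighted fusion is an assumption for illustration, not the project’s actual architecture.

```python
import numpy as np

def data_driven_prediction(track: np.ndarray) -> np.ndarray:
    """Stand-in for a learned model: extrapolate the last observed velocity."""
    velocity = track[-1] - track[-2]
    return track[-1] + velocity

def behavioural_prior(track: np.ndarray, goal: np.ndarray) -> np.ndarray:
    """Stand-in for a behavioral model: take a unit step towards an inferred goal."""
    direction = goal - track[-1]
    step = direction / max(np.linalg.norm(direction), 1e-9)
    return track[-1] + step

def hybrid_prediction(track, goal, w_behaviour=0.3):
    """Fuse both predictors; the behavioral term remains inspectable."""
    return (1 - w_behaviour) * data_driven_prediction(track) \
        + w_behaviour * behavioural_prior(track, goal)

track = np.array([[0.0, 0.0], [1.0, 0.0]])  # pedestrian positions over time
goal = np.array([5.0, 5.0])                 # hypothetical inferred destination
print(hybrid_prediction(track, goal))
```

The appeal of such a composition is that the behavioral term can be inspected and reasoned about separately from the data-driven one, which is what makes the hybrid model more explainable.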

You are a specialist in human-machine interactions, and especially the role of neuroscience in this important field of research. Can you summarize the research challenges of the project in this area?

When we are driving and see a pedestrian near the road, for example, we are able to get a good prediction of their next moves just by looking at their physical features and the environment. We predict differently for a kid than for an adult, for example, and differently for an adult walking alone than for one walking in a group. To interact efficiently with humans, automated cars need to do the same.

However, this is a major challenge for automated cars (and for machines and robots that interact with humans) because human behaviors are complex. Human behaviors, both with their environment and with other humans, are characterized by complex dynamics that change with an interacting individual’s physiology, age and pathology, and also depend on emotional factors like fear and anxiety. Furthermore, human behaviors are determined by their current observations as well as by predictions from the behavioral models they possess of their environment and of the agents they interact with (often investigated as Theory of Mind), which are themselves continuously adapted with day-to-day experiences.

Due to this complexity, and diversity of behaviors across humans, VRU behaviors are very difficult for AI systems to predict. We will therefore utilize behavioral experiments to get a better insight into these aspects of VRU movement prediction. Using virtual reality, we will develop experiments in which participants will be put in daily situations of interaction with cars (in the virtual reality) and evaluate how their future behaviors can be predicted from their current behaviors and environmental conditions. We will try to integrate this model with the AI model to improve overall VRU prediction behavior.

What are the main research breakthroughs that can be achieved by the project, and how will they impact the future in a 5-year horizon?

Ideally, we will be able to develop a behavioral model that can be integrated with current state-of-the-art AI models of VRU prediction to form a hybrid VRU prediction model. Such a model can improve VRU prediction while being more explainable thanks to its neuroscientific parts.

AI4CCAM was present at Safecomp 2023, the 42nd International Conference on Computer Safety, Reliability and Security, taking place in Toulouse, France, from 19 to 22 September 2023.

The Conference was established in 1979 by the European Workshop on Industrial Computer Systems, Technical Committee 7 on Reliability, Safety and Security (EWICS TC7).

AI4CCAM was involved in the “Software Testing & Reliability” session, with the project coordinator Arnaud Gotlieb, Simula Research Laboratory, giving a talk on Constraint-guided Test Execution Scheduling: An Experience Report at ABB Robotics.
AI4CCAM was also present in the exhibition area.

During the Conference, a contact was established with the Japanese AI2X Co-evolution project, which advocates for a human-centered AI framework in the automated driving safety context: a potential future internationalization development for AI4CCAM.