The first newsletter of the AI4CCAM project will be out soon.
Make sure you don’t miss any relevant news, events and project developments by subscribing to our newsletter.
Subscribe here
On 26 and 27 June, the AI4CCAM partner SystemX hosted the first General Assembly of the project.
After the January kick-off meeting, this was a great new opportunity for all partners to meet in person and discuss the progress of AI4CCAM towards the development of an open environment for Trustworthy AI for Connected, Cooperative and Automated Mobility.
Several scientific aspects of the project were discussed, such as Automated Driving Scene Interpretation with Qualitative Constraint Acquisition, model robustness against adversarial attacks, and diffusion models, along with dissemination and communication activities, including cooperation with sister CCAM projects and initiatives.
AI4CCAM, represented by its coordinator Arnaud Gotlieb (Simula), participated in the VivaTech event, held at Paris Expo Porte de Versailles in Paris, France, on 15 and 16 June.
VivaTech aims to accelerate innovation by connecting startups, tech leaders, major corporations and investors to respond to big challenges.
Simula presented the Trustworthy AI approach in a panel session titled “Integrating blockchain and trustworthy AI in the manufacturing industry”. The presentation covered the Trustworthy AI approach developed in AI4CCAM and relates to Simula’s participation in the MARS (Manufacturing Architecture for Resilience and Sustainability) project.
To find out more about the event, click here
AI4CCAM, represented by its partner Deepsafety, will attend the ADAS & Autonomous Vehicle Technology Expo 2023, held in Stuttgart, Germany, on 13, 14 and 15 June.
Over 120 expert speakers will discuss key topics in the development and testing of safe autonomous driving and ADAS technologies, including software, AI and deep learning, sensor fusion, virtual environments, verification and validation of autonomous systems, testing and development tools and technologies, real-world testing and deployment, and standards and regulations.
Deepsafety’s Ralph Meyfarth will give a talk on 14 June on “Safe perception AI by detection of unknown unknowns”. Deep learning AI triggered the first revolution in autonomous driving, but its breakthrough on the mass market is currently held back by the fact that the technology is inherently uncertain: modern deep learning models hit a reliability limit of about 95 percent because, given an unknown input, their output is undefined. Deepsafety has developed a very efficient new methodology to detect the unknowns in real time based on uncertainty. With this approach, safety proofs can be developed, enabling certification/homologation of autonomous vehicles without driving billions of kilometres.
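The talk abstract does not detail the methodology, but the general idea of flagging unknown inputs via predictive uncertainty can be illustrated with a minimal sketch. The entropy threshold and function names below are illustrative assumptions, not Deepsafety’s actual method:

```python
import math

def softmax(logits):
    """Convert raw model outputs (logits) to a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predictive_entropy(probs):
    """Shannon entropy of the predictive distribution (high = uncertain)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def flag_unknown(logits, threshold=1.0):
    """Flag an input as 'unknown' when the model's uncertainty is high."""
    return predictive_entropy(softmax(logits)) > threshold

# A confident prediction (low entropy) is accepted...
print(flag_unknown([8.0, 0.5, 0.2]))   # → False
# ...while a near-uniform one (high entropy) is flagged as unknown.
print(flag_unknown([1.0, 1.1, 0.9]))   # → True
```

An output flagged as unknown would then be handled by a safety fallback rather than acted on directly.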
For more information on the event, click here
AI4CCAM is organising a series of Scientific webinars focusing on specific technical aspects of the project.
The first webinar was held on 14 April, focusing on “ODD (Operational Design Domain) concept and usage in the automotive industry”, with a presentation by the project partner IRT SystemX.
SAE J3016 defines ODD as “Operating conditions under which a given driving automation system or feature thereof is specifically designed to function, including, but not limited to, environmental, geographical, and time-of-day restrictions, and/or the requisite presence or absence of certain traffic or roadway characteristics”.
An ODD applies to a system-level feature with automation capabilities. Its aim is to define the restrictions on the operational domain required for the automated feature to work properly, taking technical limitations into account.
The webinar also discussed the link between the ODD and driving automation levels.
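In practice, an ODD can be thought of as a set of operating-condition restrictions that can be checked at runtime. The sketch below is purely illustrative: the field names and conditions are assumptions for the example, not taken from SAE J3016 or the webinar material:

```python
from dataclasses import dataclass

@dataclass
class OperationalDesignDomain:
    """Restrictions under which an automated feature is designed to work."""
    allowed_weather: set        # e.g. {"clear", "rain"}
    allowed_road_types: set     # e.g. {"motorway"}
    daytime_only: bool          # time-of-day restriction
    max_speed_kmh: float        # speed restriction

    def contains(self, weather, road_type, is_daytime, speed_kmh):
        """True when the current operating conditions lie inside the ODD."""
        return (weather in self.allowed_weather
                and road_type in self.allowed_road_types
                and (is_daytime or not self.daytime_only)
                and speed_kmh <= self.max_speed_kmh)

odd = OperationalDesignDomain({"clear", "rain"}, {"motorway"}, True, 130.0)
print(odd.contains("clear", "motorway", True, 110.0))  # → True
print(odd.contains("snow", "motorway", True, 110.0))   # → False
```

Leaving the ODD (for example, snowfall starting) would then trigger a transition of control or a minimal-risk manoeuvre.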
AI4CCAM took part in the EUCAD 2023 exhibition, held at the Autoworld Museum in Brussels on 3 and 4 May. Arnaud Gotlieb, project coordinator, and Victor Talpaert, project WP leader, explained the main ongoing activities of the project to visitors.
EUCAD was the right place and occasion to meet sister projects such as AIthena and Selfy. The dialogue among the project coordinators marked the beginning of fruitful synergies within the CCAM Partnership.
The conference and exhibition attracted many representatives from European institutions, cities, road operators, public transport operators, regulators and insurance companies, as well as researchers and industry participants, coming mostly from Europe but also from the US, Japan and other non-European countries.
Among the main insights of the two-day event was that the CCAM sector’s focus seems to have moved completely from demonstrators to AV operations. Connectivity is likely the next step before large-scale AV deployments, but AI is often an underlying technology spanning most AD components.
AI4CCAM is in line with this trend, focusing on the development of trustworthy Artificial Intelligence for automated driving assistance and pursuing driving automation levels 4–5, meaning high and full automation.
AI4CCAM will apply AI models to ethical, social and cultural choices, focusing on three use cases: the travel assist function; explainable AI and predictability of environment and trajectory; and user acceptance of automated vehicles equipped with vulnerable road-user (VRU) sensing.
Automated Cars must explain their decisions:
- for the safety and comfort of the car driver and passengers
- to inform the other road users such as other vehicles and pedestrians
- to support an audit in the event of an accident
But explanations are often difficult to build, because automated decisions sometimes result from misunderstandings of the situation, and user acceptance can only come from full transparency. Moreover, today’s explanations are based only on quantitative analysis.
AI4CCAM builds explanations from spatio-temporal qualitative constraint sequences. This work is carried out mainly within WP2, which focuses on advanced AI-driven CCAM sense-plan-act predictive models.
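To give a flavour of the idea, a scene can be abstracted into a sequence of qualitative relations between the ego vehicle and a road user, from which a human-readable explanation is derived. The relation names, thresholds and rule below are hypothetical illustrations, not AI4CCAM’s actual formalism:

```python
def qualitative_relation(ego_pos, vru_pos, prev_dist):
    """Map numeric positions (1-D, metres) to a qualitative relation."""
    dist = abs(ego_pos - vru_pos)
    if dist < 5:
        rel = "near"
    elif dist < prev_dist:
        rel = "approaching"
    else:
        rel = "receding"
    return rel, dist

def explain(trajectory):
    """Build an explanation from the sequence of qualitative relations."""
    relations, prev = [], float("inf")
    for ego, vru in trajectory:
        rel, prev = qualitative_relation(ego, vru, prev)
        relations.append(rel)
    if "near" in relations:
        return "Braked because the pedestrian became near after approaching."
    return "No intervention: pedestrian never became near."

# Ego and pedestrian positions over successive timesteps.
print(explain([(0, 20), (5, 18), (10, 14)]))
# → Braked because the pedestrian became near after approaching.
```

The point of the qualitative layer is that the explanation refers to symbolic relations (“approaching”, “near”) rather than raw sensor values, making the decision auditable.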
Simula released a video explaining the work carried out in WP2.
New mobility trends and technologies are driving radical change in our transport systems. This change will have a profound impact on the environment, transport users and businesses. Automated transport is a crucial element in this transformation. It has the potential to reduce road fatalities to near zero, improve accessibility of mobility services and reduce harmful emissions from transport by making traffic more efficient.
The EU is investing substantial financial resources to achieve an intelligent transport network, integrating information and communication technologies with transport infrastructure, vehicles and users. While this creates huge opportunities, it also comes with challenges, such as ensuring the automation of transport in a connected, cooperative and safe way.
CINEA is responsible for the implementation of a growing number of EU projects that develop, test and exploit innovative solutions funded under the EU’s research and innovation programme, Horizon Europe. The EU is contributing around €500 million to support the development and take-up of highly automated and connected driving systems through the programme.
CINEA has released a new brochure presenting a comprehensive overview of the CCAM projects it manages, funded under the first two calls (2021 and 2022) of the EU’s Horizon Europe programme for research and innovation and covering a range of domains, from data ecosystems and infrastructure support to validation methodologies, environmental aspects and more. AI4CCAM is proudly among these projects.
For the brochure, click here
AI4CCAM has released its first video!
Created by the project partner INLECOM, the video gives a general overview of AI4CCAM, which focuses on the development of trustworthy Artificial Intelligence (AI) for automated driving assistance, pursuing driving automation levels 4–5, meaning high and full automation.
How can AI technology improve road users’ safety?
AI4CCAM will apply AI models to ethical, social and cultural choices, focusing on three use cases for:
- travel assist function
- explainable AI and predictability of environment and trajectory
- user acceptance of automated vehicles equipped with vulnerable road-user (VRU) sensing.