Economic valorisation of AI research

UPDATE

We are in the press:

https://www.regional-it.be/detached/trail-trusted-ai-labs-maximise-impact-de-la-recherche-ia-sur-le-tissu-socioeconomique/

A little over a year ago, the TRAIL initiative was launched to boost AI research in the Walloon Region, accelerating this strategic field by facilitating the adoption of AI across the socio-economic fabric.

On Friday 6 May, this ambition became a reality with a meeting of key industrial players such as AISIN, I-care and Sagacify, alongside other AI user companies and AI service providers. The meeting took place at the A6K premises. On this occasion, more than 150 people from academia, industry and the public sector active in this field discussed their technological challenges. The aim was to pool knowledge and resources to enable Wallonia to position itself among the recognised AI ecosystems in Europe.


This event was organised under the auspices of the Vice-President of Wallonia and Minister of Digital Affairs, Willy Borsus.

To help solve many of society’s problems, AI technologies must be of high quality, and they must be developed and used in a way that earns the trust of citizens

As a specialist in signal processing and machine learning, Emmanuel Jean is a researcher in the AI group of Multitel. His areas of interest are data analysis and deep learning. His current research focuses on the development of trustworthy Artificial Intelligence.

In VIADUCT, his work focuses on a new multimodal, adaptive and speech-centric human/machine interface for semi-autonomous cars. Because new advanced driver assistance systems (ADAS) are little used due to a lack of trust, a voice assistant has been developed to reassure the driver by providing the information needed to use them. He also leads the working group on ‘Trusted artificial intelligence for critical systems’ of the ARIAC project.

The industrialization of AI is a crucial issue for industrial and economic competitiveness. However, AI-based systems are increasingly complex and appear as black boxes, which creates mistrust and hinders the adoption of these new technologies, particularly in sensitive sectors such as aeronautics, space or medicine. To make AI-based systems trustworthy, it is necessary to develop tools and methods to industrialize certified AI based on the principles of explainability, robustness and compliance with ethical, legal and regulatory frameworks.

 

Towards an artificial intelligence that integrates

Geraldin Nanfack, a researcher at UNamur, has been working since December 2018 on his thesis as part of the EOS VeriLearn project. His research aims to ensure that Machine Learning algorithms satisfy certain properties or constraints. For example, in one of his works [1], published at the ESANN’21 conference, he proposed a method to force decision tree algorithms to make fair decisions with respect to sensitive characteristics such as race or gender. In another work [2], published at UAI’21, he developed a method to force so-called “black-box” differentiable models such as neural networks to be easily and globally explained by decision rules, so that non-expert users of this artificial intelligence can understand how decisions were made.
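As a rough illustration of the kind of group fairness at stake (a simplified, standard measure for this sketch, not the boundary-based constraint proposed in [1]), one can compare the rates at which a model gives positive predictions to two groups defined by a sensitive attribute:

```python
def demographic_parity_gap(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between the two
    groups defined by a binary sensitive attribute (0 or 1).

    A gap of 0 means both groups receive positive predictions at the
    same rate; larger gaps indicate less fair behaviour in this sense.
    """
    rates = {}
    for group in (0, 1):
        preds = [p for p, s in zip(y_pred, sensitive) if s == group]
        rates[group] = sum(preds) / len(preds)
    return abs(rates[0] - rates[1])

# A classifier that always favours group 0 has the maximal gap of 1.0.
print(demographic_parity_gap([1, 1, 0, 0], [0, 0, 1, 1]))  # → 1.0
```

A fairness-constrained learner, such as the decision trees studied in [1], would keep this kind of gap small while still optimising predictive accuracy.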

His work aims at a more reliable Artificial Intelligence (AI), one endowed with the ability to provide faithful explanations of its behavior. This evolution of AI is valuable for sensitive sectors such as health or banking, which could use an AI predisposed to provide reliable explanations of its decisions while guaranteeing or maximizing fairness in its predictions.

[1] Nanfack, G., Delchevalerie, V., & Frénay, B. (2021). Boundary-Based Fairness Constraints in Decision Trees and Random Forests. In The 29th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, https://proceedings.mlr.press/v161/nanfack21a.html.

[2] Nanfack, G., Temple, P., & Frénay, B. (2021, December). Global explanations with decision rules: a co-learning approach. In Uncertainty in Artificial Intelligence (pp. 589-599). PMLR, https://pure.unamur.be/ws/portalfiles/portal/61249591/ES2021_69.pdf.

Towards new privacy-preserving learning methods in renewable-dominated energy systems

Jean-François Toubeau is a post-doctoral researcher of the Fund for Scientific Research (F.R.S.–FNRS) at the University of Mons.

Eager to leverage artificial intelligence to improve the operation of modern energy systems, he conducts research at the crossroads of Machine Learning and optimization under uncertainty.

His current research focuses on the development of new privacy-preserving Machine Learning algorithms.

Privacy protection is achieved through two main pillars. First, federated learning ensures that local measurements are not exchanged during the training and test procedures. Second, the learning is augmented with differential privacy, which injects calibrated noise to offer formal guarantees that the trained model cannot be reverse-engineered, thus reducing exposure to adversarial attacks.
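As a minimal sketch of that second pillar (an illustrative routine with hypothetical names and parameters, not the project’s actual implementation), differential privacy typically bounds each participant’s contribution by clipping an update and then adds Gaussian noise calibrated to that bound:

```python
import math
import random

def clip_and_noise(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip an update vector to L2 norm clip_norm, then add Gaussian noise
    whose scale is calibrated to that clipping bound (the sensitivity)."""
    rng = rng or random.Random()
    norm = math.sqrt(sum(u * u for u in update))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [u * scale for u in update]
    sigma = noise_multiplier * clip_norm  # noise calibrated to sensitivity
    return [u + rng.gauss(0.0, sigma) for u in clipped]

# With the noise disabled, a [3, 4] update (norm 5) is scaled down to norm 1.
print([round(v, 6) for v in clip_and_noise([3.0, 4.0], noise_multiplier=0.0)])  # → [0.6, 0.8]
```

Because every individual contribution is bounded and then masked by noise, no single participant’s measurements can be reliably inferred from the shared model.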

This project, developed in collaboration with Prof. Yi Wang (The University of Hong Kong), received the IIF-SAS award ($10,000), attributed annually by the International Institute of Forecasters.

The success of our energy transition strongly relies on the proper coordination between stakeholders, who need to team up to unlock the full potential of their individual resources. By collaboratively training a generic Machine Learning model, the value of personal data is unlocked (e.g., by capturing the explanatory power contained in all measurements), which is key to improving performance, and thus to enhancing the satisfaction and engagement of all stakeholders.

This is an essential step towards improved management of modern energy systems, which is key to reducing overall energy costs while ensuring a higher security of supply.

To learn more about his work, visit ResearchGate.

 

In real situations, a single metric is generally not sufficient to describe the performance of a model

Mathilde Brousmiche is a researcher in the AI department of Multitel. She completed a thesis under co-supervision between the University of Mons and the University of Sherbrooke (Canada). During her thesis, she worked with different modalities, such as image and sound, and more particularly on the fusion of audio-visual information with deep neural networks in the context of scene analysis.

The major axis of her current work concerns the tracking of objects. On the one hand, her research consists in tracking several objects regardless of their category, unlike most research, which tracks only one category of objects at a time. On the other hand, the AI models developed must be adapted to real situations with hardware constraints, and the results must be interpretable and explainable to the user.

The AI automatically tracks a large number of objects, allowing a better understanding of the scene or anticipation of upcoming events. Automatic tracking makes it possible, for example, to monitor and model traffic flows in order to improve mobility in urban areas. Several metrics exist to compare the performance of AI models. However, they do not always reflect the expected results in real situations: the value of a metric may be lower, yet the result obtained may better match the user’s expectations. This is why concretely interpreting performance and identifying the limits of the implemented solutions helps users understand the models better and allows models that better match their expectations to be proposed.
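For instance, the widely used Multiple Object Tracking Accuracy (MOTA) from the CLEAR MOT metrics folds three error types into one number, which is exactly why it can hide the failure modes a user actually cares about, as this small sketch shows:

```python
def mota(false_negatives, false_positives, id_switches, num_ground_truth):
    """Multiple Object Tracking Accuracy from the CLEAR MOT metrics:
    MOTA = 1 - (FN + FP + IDSW) / GT, where GT is the total number of
    ground-truth objects over all frames."""
    return 1.0 - (false_negatives + false_positives + id_switches) / num_ground_truth

# Two trackers with identical MOTA can fail in very different ways:
# one mostly misses objects, the other mostly confuses identities.
print(round(mota(15, 0, 1, 100), 2))  # → 0.84
print(round(mota(1, 0, 15, 100), 2))  # → 0.84
```

Depending on the application, an identity switch on a critical object may matter far more than a few missed detections, so the single score must be interpreted in context.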


Innovation of artificial intelligence applied to the analysis of medical images in senology and interventional cardiology

Xavier Lessage is a researcher in the Data Science department at CETIC. His main interests are artificial intelligence, cloud computing and distributed data processing (high-performance computing). A particular focus is health and, more specifically, the use of artificial intelligence in health care.

The major axes of his work concern breast cancer and interventional cardiology, using traditional and federated architectures. His research consists, on the one hand, of evaluating deep learning algorithms (binary classification, anomaly localisation, explainability, etc.) on medical imaging with private databases (retrospective study), and on the other hand, of validating the retained models in hospitals on new images (prospective study), with the aim of analysing the behaviour of the AI in real situations.

This work helps reduce costs and the workload of doctors by combining two intelligences: the first, artificial, makes an initial analysis; the second, human, interprets the results and makes the right diagnosis. In the context of breast cancer in particular, interpreting a mammographic image is a difficult task that requires verification by a second reader, or even a third (in the event of discrepancies), in order to reduce the number of false negatives. The role of the second reader could be taken over by an AI, freeing that reader’s time for other tasks, such as acting as a first reader.

To learn more about his current projects or publications: https://cutt.ly/gOigulY

An artificial intelligence learns with me how to find the causes of genetic diseases in scientific articles

Passionate about the secrets behind genetic diseases, Charlotte Nachtegael, after obtaining a master’s degree in biomedical sciences at UMONS, began a master’s and then a PhD in bioinformatics on the study of complex genetic disorders at the (IB)2 (Interuniversity Institute of Bioinformatics in Brussels) and at the MLG (Machine Learning Group) at ULB.

Her work first consisted in finding, in the scientific literature, combinations of mutations causing complex genetic diseases, which were gathered in a database (publication currently under review). This enormous biocuration effort encouraged her to focus on text-mining techniques to automatically extract this data from text and make it easily available. She also uses the principle of active learning, directly involving the human expert in the development of the artificial intelligence.
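A minimal sketch of the selection step at the heart of active learning (illustrative only; the function name is hypothetical and the real system’s query strategy may differ): the model asks the human expert to label the unlabelled examples it is least certain about, here via least-confidence sampling for a binary classifier:

```python
def select_queries(probabilities, k=1):
    """Return the indices of the k unlabelled samples whose predicted
    probability of the positive class is closest to 0.5, i.e. those the
    model is least certain about; these are sent to the expert for labelling."""
    ranked = sorted(range(len(probabilities)),
                    key=lambda i: abs(probabilities[i] - 0.5))
    return ranked[:k]

# The model is nearly certain about samples 0 and 2, so sample 1 is queried.
print(select_queries([0.95, 0.52, 0.08], k=1))  # → [1]
```

By spending the expert’s limited labelling time on the most informative examples, the model improves faster than it would on randomly chosen ones.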

This automatic extraction of data on complex genetic disorders should benefit the medical and bioinformatics fields, especially with the increase of genetic data and of related publications on the subject. This data can then be used to study the causes of rare genetic diseases or to develop prediction tools during pregnancy… Additionally, the use of active learning opens a direct interaction between human and artificial intelligence, where the human teaches the model directly. This work should then, we hope, increase trust in artificial intelligence.