Doctoral School – Computational Intelligence and Learning [Researchers]
February 17 @ 14:00 – 17:00 | Free
- 14h: Benoît Macq – “Coalitional Active Learning”
Many recent reports and articles show a large gap between the achievements of deep learning and its adoption in clinical practice. We propose a new implementation paradigm to bridge this gap.
Practitioners are committed to continuing medical education for their accreditation. They provide annotations and second opinions in informal coalitions, which also serve as venues for that continuing education. We propose Coalitional Active Learning, which uses labels provided by the coalition rather than only those from individual hospitals, as in classical federated-learning approaches. This will align members on common best clinical practices and increase knowledge transfer between hospitals.
Coalitional Active Learning will provide continuous joint improvement of both the accuracy of the model and the expertise of its human users. It is based on active learning, which relies on parsimonious label harvesting: only the most informative cases are queried. For those cases, the data is routed through an optimal sampler to the coalition. Experts then provide labels, which are used both to make the clinical decision and to update the model. The labelling process itself raises the expertise of the practitioner who did the labelling. Coalitional Active Learning optimises this co-learning under time-budget constraints.
We propose two avenues to reach this goal: an analytical approach, which defines a coalitional gain and derives the corresponding optimal sampling within the coalition, and an approach based on stochastic scheduling. We will also propose a new secure multicast of image data across the hospitals of a coalition, with guarantees of ephemeral use of the data. Tokenisation and anonymity of the labels are essential for acceptance by the practitioners.
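The parsimonious label harvesting described above can be illustrated with uncertainty sampling, a standard active-learning strategy: rank unlabelled cases by the entropy of the model's predictions and send only the top few, within the label budget, to the coalition for annotation. This is a minimal sketch with toy numbers, not the speakers' actual sampler:

```python
import numpy as np

def select_informative_cases(probs, budget):
    """Rank unlabelled cases by predictive entropy and return the indices
    of the `budget` most uncertain ones (classic uncertainty sampling)."""
    eps = 1e-12  # avoid log(0)
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    return np.argsort(entropy)[::-1][:budget]

# Toy model outputs for 5 unlabelled cases (3 classes each).
probs = np.array([
    [0.98, 0.01, 0.01],   # confident -> low entropy, not queried
    [0.34, 0.33, 0.33],   # nearly uniform -> highest entropy
    [0.70, 0.20, 0.10],
    [0.50, 0.45, 0.05],   # two competing classes -> high entropy
    [0.90, 0.05, 0.05],
])
queried = select_informative_cases(probs, budget=2)
print(queried)  # indices of the two cases sent to the coalition for labels
```

Here the budget plays the role of the time-budget constraint: the coalition's labelling effort is spent only where the model is least certain.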
- 15h00: Coffee break
- 15h15: Christine Decaestecker – “Deep learning from multi-expert annotations: need for prior consensus or not? A use case in prostate cancer classification”
Deep learning algorithms rely on large amounts of annotations for the training and validation stages. In the medical imaging domain, the "ground truth" is rarely available, and disagreements between experts affect many segmentation and classification tasks. Often, consensus annotations are produced to serve as "ground truth" for training and performance evaluation. This talk presents a use case in digital pathology (prostate cancer grading) where taking into account the annotations of each expert can be beneficial for learning and for interpreting results, while being more consistent with the complex clinical reality.
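The contrast the talk raises can be made concrete: a hard consensus (majority vote) collapses expert disagreement into a single label, while per-expert soft labels preserve it as a training target. A minimal sketch (the grade scale and vote counts are illustrative, not data from the study):

```python
import numpy as np

def expert_label_matrix(annotations, n_classes):
    """Per-case vote counts over classes, one row per case."""
    return np.stack([np.bincount(row, minlength=n_classes) for row in annotations])

# 4 cases each graded by 3 experts, on an illustrative 3-grade scale (0..2).
annotations = np.array([
    [1, 1, 1],   # full agreement
    [0, 1, 1],   # majority grade 1
    [0, 1, 2],   # full disagreement
    [2, 2, 1],   # majority grade 2
])
counts = expert_label_matrix(annotations, n_classes=3)

consensus = counts.argmax(axis=1)                   # hard majority vote
soft = counts / counts.sum(axis=1, keepdims=True)   # soft targets keep disagreement
print(consensus)  # [1 1 0 2]  (the 3-way tie on case 2 resolves to the lowest grade)
```

Training on `soft` (e.g. with a cross-entropy loss against the per-expert distribution) keeps the information that case 2 is genuinely contentious, which the hard `consensus` vector discards.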
- 16h: Roald Sieberath – "A Perspective on Foundation Models and ChatGPT"
ChatGPT has garnered an incredible amount of interest from a very broad public. Its apparent intelligence has led some to believe it marks the beginning of Artificial General Intelligence, while others have pointed out its numerous errors and limitations. GPT-3 is one of many "large language models," such as OPT, Sparrow, PaLM, and BLOOM, referred to as "foundation models" in a seminal paper by Stanford HAI. These models provide a broad base of capabilities and a new type of platform upon which extensions can be built, e.g. through fine-tuning or few-shot learning. They also suffer from many drawbacks: they typically lack a representation of the world, which leads to errors and in turn raises many ethical, transparency, and security concerns. Moreover, because building a new foundation model requires tremendous computing power (tens of thousands of GPUs), such models will likely be controlled by a few well-funded players, further raising the need for transparency. Going beyond the purely scientific aspects, Roald Sieberath will draw on his experience as a data entrepreneur, trained in venture capital and AI in Silicon Valley, to offer a perspective on why we need a large-scale, open-source, European effort to build foundation models that warrant the level of trust our citizens and companies expect.
- 16h30: End