The symposium aims to foster inspiring discussions on recent advancements and future directions in multi-modal machine learning. This event will feature both theoretical contributions and practical applications, focusing on optimizing, combining, and transferring machine learning models across various modalities. The symposium is co-organized by the Pro-TEXT project.
The main objective is to bring together researchers and practitioners to explore how different learning paradigms can be combined and optimized for greater accuracy and adaptability. The symposium will address how each learning approach, with its specific data-driven constraints, affects model effectiveness, including challenges such as the ill-posedness of the learning process and the way partial data can lead to suboptimal or divergent solutions.
Participants will have the opportunity to examine the latest methods and techniques in optimizing multi-modal models, highlighting their efficiency, diversity, and adaptability. The event is structured to encourage collaborative discussions and knowledge exchange, ultimately driving forward the understanding and application of multi-modal learning in real-world scenarios.
Relevant topics
The following is a non-exhaustive list of relevant topics for the symposium:
Multi-modal learning
Transfer learning, metric learning, and domain adaptation
Optimization of cost functions for ML
Bagging and boosting techniques
Collaborative clustering and learning
Mixtures of distributions or experts
Modular approaches
Multi-task learning
Multi-view learning
Task decomposition …
Invited speakers
Nistor GROZAVU, Full Professor, CY Cergy Paris University
Nicoleta ROGOVSCHI, Associate Professor (HDR), LIPADE, Université de Paris
Issam FALIH, Associate Professor, Université Clermont Auvergne