Thesis projects

Bachelor Thesis

Deep Learning Theory (2)
Application of neural networks for the classification of observable patterns in ionospheric plasma
The project is still to be defined more specifically, but it will concern the automatic detection of events of interest within time series. A "trivial" but important enhancement is to eventually apply explainability techniques to analyse the results and provide some form of physical interpretation.
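A minimal sketch of the kind of pipeline involved, assuming windowed univariate time series with binary event labels; the architecture, window length, and gradient-saliency explainability step are placeholder assumptions, not the project's final design.

```python
# Minimal sketch (assumptions: windowed univariate time series, binary event labels;
# the architecture and sizes are illustrative placeholders).
import torch
import torch.nn as nn

class EventClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, 2),
        )

    def forward(self, x):            # x: (batch, 1, window)
        return self.net(x)

model = EventClassifier()
x = torch.randn(8, 1, 256, requires_grad=True)    # dummy windows
logits = model(x)

# Simple gradient saliency as a first explainability step:
# which time steps drive the "event" logit?
logits[:, 1].sum().backward()
saliency = x.grad.abs().squeeze(1)                # (batch, window)
```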
Stopping the Noise: A Review of Early Stopping Techniques for Learning with Noisy Labels
The primary goal of this thesis is to study the various approaches utilizing early stopping to address the challenge of learning from noisy labels. By examining commonalities and divergences among these methods, we seek to identify potential areas for intervention and improvement.
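A minimal sketch of the generic early-stopping loop these methods build on, assuming a small trusted validation split is available; the reviewed approaches differ mainly in the stopping signal they use in place of the validation loss.

```python
# Minimal sketch of a patience-based early-stopping loop
# (assumption: a clean validation split provides the stopping signal).
import copy

def train_with_early_stopping(model, train_step, val_loss_fn, max_epochs=100, patience=5):
    best_loss, best_state, bad_epochs = float("inf"), None, 0
    for epoch in range(max_epochs):
        train_step(model)                 # one epoch on the noisy training set
        val_loss = val_loss_fn(model)     # stopping signal
        if val_loss < best_loss:
            best_loss, bad_epochs = val_loss, 0
            best_state = copy.deepcopy(model.state_dict())
        else:
            bad_epochs += 1
            if bad_epochs >= patience:    # stop before the model memorizes the noise
                break
    if best_state is not None:
        model.load_state_dict(best_state)
    return model
```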
Information Retrieval (1)
Cross-Domain Similarity using Optimal Transport
Cross-domain recommendation (CDR) has recently emerged as an effective way to alleviate the cold-start and sparsity issues faced by recommender systems, by transferring information from an auxiliary domain to a target domain to improve recommendations. Studying the similarity between domains is a novel direction in CDR research, potentially opening doors for further exploration. In this context, we want to introduce an approach to quantify similarity between a pair of domains using optimal transport and explore how current CDR methods perform with both similar and dissimilar domain combinations.
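A minimal sketch of how such a similarity score could be computed, assuming each domain is summarized by a cloud of item or user embeddings (e.g. from a pretrained recommender) and using the POT (Python Optimal Transport) library; the embedding source and regularization strength are placeholder assumptions.

```python
# Minimal sketch (assumption: each domain is represented by an embedding cloud).
import numpy as np
import ot  # POT: Python Optimal Transport

def domain_distance(emb_a: np.ndarray, emb_b: np.ndarray, reg: float = 0.05) -> float:
    """Entropy-regularized Wasserstein distance between two embedding clouds."""
    a = np.full(len(emb_a), 1.0 / len(emb_a))   # uniform weights over domain A
    b = np.full(len(emb_b), 1.0 / len(emb_b))   # uniform weights over domain B
    M = ot.dist(emb_a, emb_b)                   # pairwise squared Euclidean costs
    return float(ot.sinkhorn2(a, b, M, reg))    # lower value = more similar domains

# Toy usage with random embeddings standing in for two domains
d_books_movies = domain_distance(np.random.randn(100, 32), np.random.randn(120, 32))
```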

Master Thesis

Information Retrieval (1)
Are LLMs Fair Recommenders?
Starting from this work [https://arxiv.org/pdf/2305.07609.pdf] (RecSys '23), we propose to further investigate the role of LLMs in Recommendation.
Deep Learning Theory (6)
Effects of different types of regularization on the complexity of boundaries
Previous work has argued that for classification the proper complexity to regularize is the boundary complexity rather than the functional complexity of F; indeed, the two need not be closely connected. The goal of classification is to recover the Bayes optimal decision boundary, which divides the input space into non-overlapping regions with respect to the labels, so classification is better thought of as estimation of sets in R^d rather than estimation of functions on R^d. The set difference reflects the 0-1 loss much more directly than functional norms on F: we can have an f in the functional space F that approximates the optimal classifier \eta so well that ||f - \eta||_{\infty} < \epsilon, and yet have no guarantee of matching the sign of \eta(x) - 1/2 close to the decision boundary. In this work we aim to study the effect of different types of regularization (l_1 and l_2 norms, dropout, batch norm) on the boundary complexity, and how this impacts the generalization and robustness of the architecture.
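A tiny 1-D numerical illustration of the point above: a function within epsilon of \eta in sup norm can still disagree with the Bayes classifier on a non-negligible region around the decision boundary. The construction and its parameters are illustrative only.

```python
# Toy 1-D example: eta is the true class-1 probability, boundary at eta(x) = 1/2.
import numpy as np

x = np.linspace(-1, 1, 10001)
eta = 0.5 + 0.5 * x                   # Bayes regression function, boundary at x = 0
eps = 0.05
f = eta - eps * np.sign(x)            # ||f - eta||_inf = eps, but signs flip near 0

sup_norm = np.max(np.abs(f - eta))
disagree = np.mean(np.sign(f - 0.5) != np.sign(eta - 0.5))   # fraction mislabelled

print(f"sup-norm error: {sup_norm:.3f}, sign disagreement near boundary: {disagree:.3f}")
```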
Effects of noisy labels on the neural collapse phenomenon
This work starts from studying the relative position of the embeddings of noisy and clean samples in neural networks that have reached zero training error and are trained further. The goal is to check whether the noisy samples are positioned as far as possible from the samples of the other class. The proposed approach starts from the Neural Collapse paper: train the model on a clean dataset and examine the embeddings of the clean data, then train it on a dataset containing noisy samples and examine the embeddings of both clean and noisy data (for example by running the authors' notebook on a noisy dataset). The distance between embeddings can be measured by computing the angle between the centroids of the two classes and of the noisy labels, particularly in 2D. The findings of this study can help in identifying robust and reliable deep learning models.
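A minimal sketch of the proposed centroid-angle measurement, assuming penultimate-layer embeddings are available and the noisy samples are known (which is only possible in a controlled experiment); the function name and tensor sizes are placeholders.

```python
# Minimal sketch: angle between group centroids in embedding space.
import torch
import torch.nn.functional as F

def centroid_angle(emb_a: torch.Tensor, emb_b: torch.Tensor) -> float:
    """Angle (degrees) between the centroids of two groups of embeddings."""
    mu_a, mu_b = emb_a.mean(dim=0), emb_b.mean(dim=0)
    cos = F.cosine_similarity(mu_a, mu_b, dim=0).clamp(-1.0, 1.0)
    return torch.rad2deg(torch.acos(cos)).item()

# Toy usage: clean class 0, clean class 1, and class-0 samples mislabelled as class 1
emb = torch.randn(300, 64)
angle_between_classes = centroid_angle(emb[:100], emb[100:200])
angle_noisy_vs_clean = centroid_angle(emb[100:200], emb[200:])
```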
Approximation properties of CEM
The goal of the thesis is to study which kind of functions can be approximated by concepts embedding models.
Stopping the Noise: A Review of Early Stopping Techniques for Learning with Noisy Labels
The primary goal of this thesis is to study the various approaches utilizing early stopping to address the challenge of learning from noisy labels. By examining commonalities and divergences among these methods, we seek to identify potential areas for intervention and improvement.
Exploring Double Descent via Task Overloading
In this topic, you would be investigating the concept of the double descent curve in the context of neural networks and how overloading tasks within these networks might affect this phenomenon. Double descent refers to the unexpected behavior where the test error of a neural network decreases, increases, and then decreases again as the model's complexity or capacity increases.
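A minimal, runnable toy illustration of a capacity sweep that traces such a curve, using ridgeless random-feature regression rather than a neural network; the thesis would replace this toy setting with neural networks and the additional overloaded tasks.

```python
# Toy double-descent sweep with random ReLU features and a minimum-norm interpolator.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 100, 1000, 20
X, Xt = rng.normal(size=(n_train, d)), rng.normal(size=(n_test, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.5 * rng.normal(size=n_train)
yt = Xt @ w_true

for p in [10, 50, 90, 100, 110, 200, 500, 1000]:          # number of random features
    W = rng.normal(size=(d, p)) / np.sqrt(d)
    Phi, Phit = np.maximum(X @ W, 0), np.maximum(Xt @ W, 0)  # ReLU random features
    beta = np.linalg.pinv(Phi) @ y                          # minimum-norm solution
    err = np.mean((Phit @ beta - yt) ** 2)
    print(f"features={p:4d}  test MSE={err:.3f}")           # error peaks near p = n_train
```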
Enhancing Graph Neural Network Accuracy Using a Dual-Network Approach with NCOD Loss Function
It is well-known and observed in our research that underparameterized networks tend not to fit noise in classification tasks, including Graph Neural Networks (GNNs). To leverage this characteristic, we propose developing a dual-network architecture. The first, underparameterized network identifies reliable samples, while the second network employs the NCOD (Noisy Channel Optimized Distillation) loss function on samples with low prediction scores. This approach aims to enhance accuracy beyond the use of NCOD alone.
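A minimal sketch of the proposed selection step, assuming both networks output per-graph logits; small_gnn, large_gnn, and ncod_loss are hypothetical placeholders, with cross-entropy substituted for the actual NCOD objective only so the snippet runs.

```python
# Minimal sketch: the underparameterized network scores samples, the main network is
# trained with plain cross-entropy on likely-clean samples and the NCOD loss on the rest.
import torch
import torch.nn.functional as F

ncod_loss = F.cross_entropy   # placeholder; substitute the actual NCOD objective here

def dual_network_step(small_gnn, large_gnn, batch, labels, threshold: float = 0.5):
    with torch.no_grad():
        probs = F.softmax(small_gnn(batch), dim=1)              # underparameterized scorer
        conf = probs.gather(1, labels.unsqueeze(1)).squeeze(1)  # confidence in given label
    reliable = conf >= threshold                                # likely-clean samples
    logits = large_gnn(batch)
    loss = logits.new_zeros(())
    if reliable.any():
        loss = loss + F.cross_entropy(logits[reliable], labels[reliable])
    if (~reliable).any():                                       # low-score samples -> NCOD
        loss = loss + ncod_loss(logits[~reliable], labels[~reliable])
    return loss
```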
NLP (1)
Hackathon Creator
This thesis project aims to develop a customized LLM (Large Language Model) dedicated exclusively to the ideation, design, and management of multidisciplinary hackathons for elementary, high school, and university students and early-career professionals. The goal is to create an automated and highly specialized tool to support the entire development and management process of these events.
Graph Theory (2)
Investigating and Optimizing Pooling Layers in GNNs to Mitigate Gradient Bottlenecks
The pooling layer in Graph Neural Networks (GNNs) often acts as a bottleneck, hindering the flow of gradients and impairing model performance. This issue needs to be addressed to enhance the efficiency and effectiveness of GNNs.
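A minimal sketch of one way to quantify such a bottleneck, comparing gradient magnitudes upstream and downstream of the pooling operation; the toy model below uses a plain global mean pool over node features in place of a real GNN pooling layer, which is an assumption for illustration.

```python
# Toy probe: gradient norms before vs. after a (mean) pooling operation.
import torch
import torch.nn as nn

pre_pool = nn.Linear(16, 16)
post_pool = nn.Linear(16, 2)

x = torch.randn(50, 16)                  # 50 node features of one toy graph
h = torch.relu(pre_pool(x))
g = h.mean(dim=0)                        # global mean pooling -> graph embedding
loss = post_pool(g).sum()
loss.backward()

# Compare gradient magnitudes upstream vs. downstream of the pooling operation
print("grad norm before pooling:", pre_pool.weight.grad.norm().item())
print("grad norm after pooling: ", post_pool.weight.grad.norm().item())
```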
Dual-Architecture Framework Development
In our previous research, we observed that training Graph Neural Networks (GNNs) for graph classification under label noise conditions revealed a crucial insight: removing the negative eigenvalues of the learned weight matrix prevents overfitting to noise. However, this approach leads to overall training instability. To address this issue, we propose developing a dual-architecture network to enhance stability and robustness in the presence of label noise.
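A minimal sketch of the eigenvalue-clipping step described above, under the assumption that the relevant weight matrix is square and is symmetrized before the eigendecomposition so that its eigenvalues are real.

```python
# Minimal sketch: project a (symmetrized) weight matrix onto its PSD part.
import torch

def clip_negative_eigenvalues(W: torch.Tensor) -> torch.Tensor:
    """Zero out the negative eigenvalues of a symmetrized weight matrix."""
    S = 0.5 * (W + W.T)                    # symmetrize so eigh applies
    vals, vecs = torch.linalg.eigh(S)
    vals = vals.clamp(min=0.0)             # drop the negative eigenvalues
    return vecs @ torch.diag(vals) @ vecs.T

# Toy usage: apply periodically to a layer's weight matrix during training
W = torch.randn(8, 8)
W_psd = clip_negative_eigenvalues(W)
```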
