23 July 2023

MAIB: Manifold Learning, Artificial Intelligence, and Biology Forum

Presentation Record (a previous presentation will be shown here if the video for this talk has not yet been released)

Dr. Yifan Zhang

Yifan Zhang is working toward his Ph.D. degree in the School of Computing, National University of Singapore. His research interests are broadly in machine learning, with a current focus on distribution shift problems for deep neural networks. He has published papers in top venues, including NeurIPS, ICML, ICLR, CVPR, ECCV, SIGKDD, IJCAI, TPAMI, TIP, and TKDE. In recognition of his achievements, he has received the Research Achievement Award and the Research Excellence Award from the National University of Singapore.

Background

Deep long-tailed learning, one of the most challenging problems in visual recognition, aims to train well-performing deep models from a large number of images that follow a long-tailed class distribution. In the last decade, deep learning has emerged as a powerful recognition model for learning high-quality image representations and has led to remarkable breakthroughs in generic visual recognition. However, long-tailed class imbalance, a common problem in practical visual recognition tasks, often limits the practicality of deep network based recognition models in real-world applications, since they can be easily biased towards dominant classes and perform poorly on tail classes. To address this problem, a large number of studies have been conducted in recent years, making promising progress in the field of deep long-tailed learning. Considering the rapid evolution of this field, this paper aims to provide a comprehensive survey on recent advances in deep long-tailed learning. To be specific, we group existing deep long-tailed learning studies into three main categories (i.e., class re-balancing, information augmentation and module improvement), and review these methods following this taxonomy in detail. Afterward, we empirically analyze several state-of-the-art methods by evaluating to what extent they address the issue of class imbalance via a newly proposed evaluation metric, i.e., relative accuracy. We conclude the survey by highlighting important applications of deep long-tailed learning and identifying several promising directions for future research.

Reference

Zhang Y, Kang B, Hooi B, Yan S, Feng J. Deep long-tailed learning: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2023.

What We Can Learn:

  1. Deep long-tailed learning aims to train high-performing deep learning models for visual recognition on datasets with a long-tailed class distribution, i.e., datasets containing many examples of some classes but only a few examples of most others. This is a challenging problem.

  2. Deep learning has enabled significant progress in generic visual recognition in the last 10 years. However, long-tailed class imbalance limits the practical use of deep learning for recognition in real-world applications. Deep learning models can be biased towards dominant classes with more examples.

  3. Many recent studies have focused on addressing the deep long-tailed learning problem, making promising progress. The studies can be grouped into three main categories:

  • Class re-balancing: Modifying the class distribution to reduce imbalance, e.g. over/under-sampling methods.

  • Information augmentation: Generating additional examples for tail classes to increase representation, e.g. generative models.

  • Module improvement: Developing deep learning modules/architectures less sensitive to class imbalance, e.g. metric learning losses.

  4. The authors evaluate several state-of-the-art deep long-tailed learning methods using a newly proposed metric called “relative accuracy” to determine how well they address class imbalance.

  5. Deep long-tailed learning has important applications in various visual recognition tasks. There are several promising directions for continued research, e.g. domain adaptation, lifelong learning.

Overall Summary:

Deep long-tailed learning refers to the challenge of training deep models for visual recognition tasks when the class distribution is heavily skewed, with a few dominant classes and a long tail of less represented classes. This imbalance poses a practical limitation to deep learning models, as they tend to be biased towards the dominant classes and perform poorly on the tail classes. However, there have been significant advancements in addressing this problem in recent years, which this paper aims to comprehensively survey.

The paper categorizes existing studies in deep long-tailed learning into three main categories: class re-balancing, information augmentation, and module improvement. Class re-balancing techniques focus on mitigating the class imbalance by adjusting the training process. This can be achieved through various methods such as over-sampling minority classes, under-sampling majority classes, or applying re-weighting schemes. Information augmentation techniques aim to generate synthetic data or modify existing data to improve the representation of tail classes. This can involve techniques like data synthesis, data mixing, or data augmentation. Module improvement techniques involve enhancing specific components of the deep learning model to better handle long-tailed distributions, such as adaptive loss functions, feature re-weighting, or prototype learning.
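
As a concrete illustration of the information-augmentation category, here is a minimal mixup-style data-mixing step in PyTorch. The function and its uniform partner sampling are illustrative sketches, not the survey's prescription; long-tailed variants in the literature typically bias the mixing toward tail-class samples.

```python
import torch

def mixup_batch(x, y_onehot, alpha=0.2):
    """Mix each example with a random partner from the same batch.

    x:        (B, C, H, W) image tensor
    y_onehot: (B, num_classes) one-hot (or soft) label tensor
    alpha:    Beta-distribution concentration; small values keep mixes mild
    """
    lam = torch.distributions.Beta(alpha, alpha).sample()  # mixing coefficient in (0, 1)
    perm = torch.randperm(x.size(0))                       # random partner for each example
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix
```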

The paper provides a detailed review of the methods within each category, discussing their underlying principles, advantages, and limitations. Additionally, the authors propose a novel evaluation metric, relative accuracy, to empirically analyze how well state-of-the-art methods address class imbalance. Rather than reporting raw accuracy alone, this metric relates a method's accuracy under long-tailed training to a reference accuracy obtained without class imbalance, isolating how much of the performance gap caused by imbalance a method actually closes.
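
A minimal sketch of this idea in Python follows. The paper derives its reference accuracy from models trained without imbalance; here the reference is simply passed in as a number, and the example values are hypothetical.

```python
def relative_accuracy(method_acc: float, reference_acc: float) -> float:
    """Accuracy reached under long-tailed training, divided by a reference
    accuracy obtained without class imbalance. Values near 1.0 suggest the
    method has largely closed the gap caused by imbalance."""
    return method_acc / reference_acc

# Hypothetical numbers: a method reaching 52% top-1 accuracy under long-tailed
# training, against a 65% balanced-training reference, recovers about 80% of
# the achievable accuracy.
print(relative_accuracy(0.52, 0.65))  # ≈ 0.8
```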

Finally, the survey highlights important applications of deep long-tailed learning and identifies promising directions for future research. The applications range from object recognition and semantic segmentation to face recognition and anomaly detection. Suggested research directions include combining multiple methods, investigating how data characteristics affect long-tailed learning, and applying transfer learning to long-tailed scenarios.

In summary, this paper presents a comprehensive survey of recent advances in deep long-tailed learning. It categorizes existing methods, provides an in-depth review, proposes a new evaluation metric, and suggests future research directions. By addressing the challenge of class imbalance in visual recognition, these advancements pave the way for more practical and effective deep learning models in real-world applications.

What is a long-tailed class distribution, and how can it be handled?

In deep learning, a long-tailed class distribution refers to a dataset whose classes are highly imbalanced. The term “long-tailed” comes from the shape of the class distribution when the classes are sorted by frequency and visualized as a histogram: a few head classes account for most of the instances, while the many remaining tail classes contribute only a handful of instances each, forming a long, thin tail.

When training deep learning models on datasets with long-tailed class distributions, the imbalanced nature of the data can pose challenges and hurt the model’s performance. Deep learning models are typically designed to learn from large amounts of data, and when some classes are heavily underrepresented, the model may not adequately learn to distinguish and generalize to the minority classes.
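
To make the shape concrete, the following snippet simulates a long-tailed label distribution with exponentially decaying class sizes, in the style of the protocols used to build long-tailed versions of standard benchmarks. The decay rule and the specific imbalance ratio are illustrative choices.

```python
import numpy as np

num_classes = 100
imbalance_ratio = 100   # head class has 100x the images of the rarest class
n_max = 500             # images in the most frequent class

# Exponentially decaying class sizes, n_k = n_max * ratio^(-k / (K - 1)):
# the kind of rule used to subsample balanced benchmarks into long-tailed ones.
sizes = np.array([
    int(n_max * imbalance_ratio ** (-k / (num_classes - 1)))
    for k in range(num_classes)
])

print(sizes[:3], sizes[-3:])   # roughly [500 477 455] ... [5 5 5]
print("share held by the 10 largest classes:", sizes[:10].sum() / sizes.sum())
```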

The challenges of long-tailed class distribution in deep learning include:

  1. Bias: The model can become biased towards the majority classes, leading to lower accuracy and recall on the minority classes.

  2. Rare Class Detection: Deep learning models might have difficulty detecting and learning patterns from the rare classes due to the limited number of instances available for those classes.

  3. Overfitting: With only a handful of training examples per tail class, the model tends to memorize those few instances rather than learn generalizable patterns, resulting in poor performance on unseen data from the underrepresented classes.

  4. Loss Function Imbalance: Many deep learning models are trained with cross-entropy loss averaged over all samples, so on an imbalanced dataset the total loss, and hence the gradient, is dominated by the majority classes (see the short numeric demo below).
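
The loss imbalance in point 4 is easy to verify numerically: with a 99:1 class ratio and plain averaged cross-entropy, almost all of the batch loss, and hence the gradient, comes from majority-class samples. The toy tensors below are purely illustrative.

```python
import torch
import torch.nn.functional as F

# Toy batch with a 99:1 class ratio and a maximally uncertain model, so every
# sample contributes the same loss and only the class counts matter.
logits = torch.zeros(100, 2)
labels = torch.cat([torch.zeros(99), torch.ones(1)]).long()

per_sample = F.cross_entropy(logits, labels, reduction="none")
majority_share = (per_sample[labels == 0].sum() / per_sample.sum()).item()
print(f"{majority_share:.0%} of the batch loss comes from the majority class")  # 99%
```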

To address these challenges, various techniques have been proposed to handle long-tailed class distributions in deep learning, including:

  1. Data Augmentation: Generating additional samples for the minority classes through augmentation techniques, such as rotation, flipping, or cropping.

  2. Class Weighting: Assigning higher weights in the loss function to the minority classes to give them more importance during training (shown, together with resampling, in the code sketch at the end of this section).

  3. Oversampling and Undersampling: Applying resampling techniques to balance the class distribution by either oversampling the minority classes or undersampling the majority classes.

  4. Transfer Learning: Using pre-trained models on large-scale datasets and fine-tuning them on the imbalanced dataset can improve the model’s performance on the minority classes.

  5. Focal Loss: A modified form of cross-entropy loss that down-weights the loss contribution from easy-to-classify examples, thereby reducing the dominance of the abundant majority classes (a minimal implementation follows this list).

  6. Ensemble Methods: Combining multiple models, each trained on different data subsets or using different techniques, to improve generalization and performance on the minority classes.
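
Focal loss is compact enough to sketch directly. Below is a standard multi-class formulation in PyTorch; the (1 - p_t)^gamma factor down-weights well-classified samples, and the per-class alpha weights are the optional class-balancing term. The hyperparameter values are illustrative.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=None):
    """Multi-class focal loss: FL = -(1 - p_t)^gamma * log(p_t).

    logits:  (B, num_classes) raw scores
    targets: (B,) integer class labels
    gamma:   focusing parameter; gamma = 0 recovers plain cross-entropy
    alpha:   optional (num_classes,) per-class weights, e.g. inverse frequency
    """
    log_probs = F.log_softmax(logits, dim=-1)
    log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # log p_t per sample
    pt = log_pt.exp()
    loss = -((1.0 - pt) ** gamma) * log_pt
    if alpha is not None:
        loss = loss * alpha[targets]
    return loss.mean()

# Usage: loss = focal_loss(model(images), labels, gamma=2.0)
```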

The choice of technique depends on the specific problem, the dataset, and the available computational resources. It is essential to evaluate the model’s performance on both the majority and minority classes to ensure that the long-tailed class distribution is being handled effectively.
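
As noted in the list above, class weighting and oversampling both reduce to a few lines around standard PyTorch APIs; the sketch below shows both, using toy labels in place of a real training set.

```python
import torch
from torch.utils.data import WeightedRandomSampler

# Toy long-tailed labels standing in for a real training set: class 0 dominates.
labels = torch.cat([torch.zeros(950), torch.ones(40), torch.full((10,), 2.0)]).long()

class_counts = torch.bincount(labels)       # tensor([950, 40, 10])
class_weights = 1.0 / class_counts.float()  # inverse-frequency class weights

# Option A, class weighting: scale each class's loss contribution.
criterion = torch.nn.CrossEntropyLoss(weight=class_weights)

# Option B, oversampling: draw tail-class samples more often, so each class
# is seen roughly equally often per epoch.
sample_weights = class_weights[labels]      # one weight per training sample
sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels),
                                replacement=True)
# Pass `sampler=sampler` to the DataLoader built over the real dataset.
```

In practice, these re-balancing options are often combined with loss-level methods such as the focal loss sketched earlier.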


