XF-Blog
MACHINE LEARNING
PAPER NOTE
[Paper Note] SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models
https://arxiv.org/abs/2411.05007 https://github.com/mit-han-lab/nunchaku SVDQuant is a quantization method where both weights and activations are quantized to 4-bit. Activation outliers are first migrated into the weights with a smoothing transform; however, this makes the weight outliers more pronounced. SVD is then used to d... Read more
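To make the decomposition step concrete, here is a minimal PyTorch sketch of the idea, not the official nunchaku kernels: smooth the activation outliers into the weight, pull the resulting weight outliers into a 16-bit low-rank branch via SVD, and quantize the residual to 4 bits. The `fake_quant_4bit` helper and the rank are illustrative choices.

```python
# Minimal sketch of the SVDQuant idea (illustrative, not the official implementation).
import torch

def fake_quant_4bit(t):
    # symmetric per-tensor 4-bit fake quantization (illustrative only)
    scale = t.abs().max() / 7.0
    return torch.clamp(torch.round(t / scale), -8, 7) * scale

def svdquant_linear(W, X, rank=32):
    # W: [out, in], X: [batch, in]
    s = X.abs().amax(dim=0).clamp(min=1e-5)   # per-channel smoothing factors
    X_s = X / s                               # smoothed activations (fewer outliers)
    W_s = W * s                               # outliers migrate into the weight
    U, S, Vh = torch.linalg.svd(W_s, full_matrices=False)
    L1 = U[:, :rank] * S[:rank]               # low-rank branch, kept in 16 bits
    L2 = Vh[:rank, :]
    R = W_s - L1 @ L2                         # residual is now easier to quantize
    Y = fake_quant_4bit(X_s) @ fake_quant_4bit(R).T + X_s @ (L1 @ L2).T
    return Y

W, X = torch.randn(64, 128), torch.randn(8, 128)
print(svdquant_linear(W, X).shape)  # torch.Size([8, 64])
```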
[Paper Note] The Super Weight in Large Language Models
https://arxiv.org/abs/2411.07191 Outliers with large magnitudes in LLMs significantly impact model performance. Pruning even a single such parameter can dramatically increase perplexity. The most important outliers make up less than 0.01% of the total parameters. These super weights can be identified with just one forward pa... Read more
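A rough sketch of how a one-forward-pass search might look using forward hooks; the `down_proj` module name, the spike threshold, and the toy usage are assumptions rather than the paper's exact procedure.

```python
# Hook candidate layers and flag activation channels whose magnitude dwarfs the rest.
import torch

def find_activation_spikes(model, sample_input, threshold=50.0):
    spikes, hooks = [], []

    def make_hook(name):
        def hook(module, inputs, output):
            a = output.detach().abs()
            peak = a.max().item()
            if peak > threshold * a.mean().item():
                idx = torch.unravel_index(a.argmax(), a.shape)
                spikes.append((name, idx, peak))
        return hook

    for name, module in model.named_modules():
        if "down_proj" in name:                 # assumed naming convention
            hooks.append(module.register_forward_hook(make_hook(name)))

    with torch.no_grad():
        model(sample_input)
    for h in hooks:
        h.remove()
    return spikes  # candidate (layer, index, magnitude) triples for super weights

# toy usage with a stand-in module (low threshold just to produce output)
toy = torch.nn.Sequential()
toy.add_module("down_proj", torch.nn.Linear(16, 16))
print(find_activation_spikes(toy, torch.randn(4, 16), threshold=1.0))
```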
[Paper Note] Titans: Learning to Memorize at Test Time
https://arxiv.org/abs/2501.00663 Attention mechanisms and recurrent models each have their strengths and weaknesses: Attention can attend to the entire context window, but it comes with a high computational cost. Recurrent models compress the state into a fixed size, but they struggle to model depe... Read more
[Paper Note] Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech
https://arxiv.org/abs/2106.06103 Two encoders are used: one to generate a latent variable from the original speech spectrogram, and another to generate one from the text. These two latents should be as similar as possible. To address the problem of variable length between text and speech, an unsupe... Read more
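A minimal sketch of the "two latents should agree" objective under the usual Gaussian assumption; the encoder outputs and shapes below are stand-ins, not the actual VITS modules.

```python
# KL term pulling the speech-side posterior toward the text-side prior.
import torch

def gaussian_kl(mu_q, logs_q, mu_p, logs_p):
    # KL( N(mu_q, e^{2 logs_q}) || N(mu_p, e^{2 logs_p}) ), averaged over elements
    kl = logs_p - logs_q - 0.5
    kl += 0.5 * (torch.exp(2 * logs_q) + (mu_q - mu_p) ** 2) * torch.exp(-2 * logs_p)
    return kl.mean()

# posterior from the speech spectrogram, prior from the (aligned) text
mu_q, logs_q = torch.randn(4, 192, 100), torch.zeros(4, 192, 100)
mu_p, logs_p = torch.randn(4, 192, 100), torch.zeros(4, 192, 100)
z = mu_q + torch.randn_like(mu_q) * torch.exp(logs_q)   # sample fed to the decoder
print(gaussian_kl(mu_q, logs_q, mu_p, logs_p).item())
```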
[Paper Note] Emerging Properties in Self-Supervised Vision Transformers
https://arxiv.org/abs/2104.14294 DINO introduces a self-supervised method that doesn’t require labels or negative samples. It uses a teacher network and a student network, updating the teacher network’s parameters through EMA. The teacher network sees a global view, while the student network only... Read more
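A minimal sketch of the student/teacher update with assumed temperatures and momentum: the student is trained to match the teacher's centered, sharpened output distribution, and the teacher parameters are an EMA of the student's.

```python
# DINO-style loss and EMA teacher update (hyperparameters are illustrative).
import torch
import torch.nn.functional as F

def dino_loss(student_out, teacher_out, center, t_s=0.1, t_t=0.04):
    p_t = F.softmax((teacher_out - center) / t_t, dim=-1).detach()  # sharpened teacher target
    log_p_s = F.log_softmax(student_out / t_s, dim=-1)
    return -(p_t * log_p_s).sum(dim=-1).mean()

@torch.no_grad()
def ema_update(teacher, student, m=0.996):
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(m).add_(p_s, alpha=1 - m)

# toy usage with linear "networks"
student, teacher = torch.nn.Linear(8, 4), torch.nn.Linear(8, 4)
teacher.load_state_dict(student.state_dict())
x, center = torch.randn(16, 8), torch.zeros(4)
print(dino_loss(student(x), teacher(x), center).item())
ema_update(teacher, student)
```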
[Paper Note] Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention
https://arxiv.org/abs/2006.16236 The paper addresses the quadratic complexity of self-attention in Transformers. Self-attention can be expressed as a linear dot-product of kernel feature maps, achieving O(N) complexity. When a kernel with positive similarity scores is applied to the queries and keys, linear a... Read more
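A small sketch of the reformulation with the elu(x) + 1 feature map used in the paper: because the product is associative, K^T V is accumulated once, so the cost is linear in sequence length (the dimensions below are illustrative).

```python
# Linear attention via kernel feature maps.
import torch
import torch.nn.functional as F

def feature_map(x):
    return F.elu(x) + 1.0          # positive similarity scores

def linear_attention(Q, K, V, eps=1e-6):
    # Q, K: [batch, N, d], V: [batch, N, d_v]
    Qf, Kf = feature_map(Q), feature_map(K)
    KV = torch.einsum("bnd,bne->bde", Kf, V)              # accumulate K^T V once
    Z = 1.0 / (torch.einsum("bnd,bd->bn", Qf, Kf.sum(dim=1)) + eps)  # normalizer
    return torch.einsum("bnd,bde,bn->bne", Qf, KV, Z)

Q = K = V = torch.randn(2, 1024, 64)
print(linear_attention(Q, K, V).shape)   # torch.Size([2, 1024, 64])
```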
[Paper Note] Meissonic: Revitalizing Masked Generative Transformers for Efficient High-Resolution Text-to-Image Synthesis
Existing problem: While MaskGIT-like methods offer fast inference, their overall performance isn’t great. Innovation: The method uses a combination of multi-modal and single-modal transformer layers, since language and vision representations are inherently different. Cross-modal transformers are used to und... Read more
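A rough sketch of what such a hybrid stack could look like, not the actual Meissonic architecture: multi-modal blocks attend over the concatenated text and image tokens, then single-modal blocks refine the image tokens alone. Layer counts, dimensions, and the use of stock nn.TransformerEncoderLayer are all assumptions.

```python
# Hybrid multi-modal / single-modal transformer stack (illustrative).
import torch
import torch.nn as nn

class HybridStack(nn.Module):
    def __init__(self, d=512, heads=8, n_multi=2, n_single=2):
        super().__init__()
        self.multi = nn.ModuleList(
            nn.TransformerEncoderLayer(d, heads, batch_first=True) for _ in range(n_multi))
        self.single = nn.ModuleList(
            nn.TransformerEncoderLayer(d, heads, batch_first=True) for _ in range(n_single))

    def forward(self, text_tokens, image_tokens):
        x = torch.cat([text_tokens, image_tokens], dim=1)
        for layer in self.multi:                 # joint text-image attention
            x = layer(x)
        img = x[:, text_tokens.size(1):]
        for layer in self.single:                # image-only refinement
            img = layer(img)
        return img

out = HybridStack()(torch.randn(2, 16, 512), torch.randn(2, 64, 512))
print(out.shape)   # torch.Size([2, 64, 512])
```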
[Paper Note] MaskGIT: Masked Generative Image Transformer
https://arxiv.org/abs/2202.04200 Unlike text, images are not sequential. This makes auto-regressive models unsuitable for image generation tasks. During training, MaskGIT is trained on a masked prediction task, similar to what is used in BERT. Inference: At each iteration, the model predicts a... Read more
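A minimal sketch of the iterative parallel decoding loop: all masked positions are predicted at once, the most confident predictions are kept, and the rest are re-masked for the next iteration. The predict_fn, mask token id, and cosine schedule details are placeholders rather than the paper's exact settings.

```python
# MaskGIT-style iterative decoding (illustrative).
import math
import torch

MASK_ID = 1024

def maskgit_decode(predict_fn, num_tokens=256, steps=8):
    tokens = torch.full((1, num_tokens), MASK_ID)
    for t in range(steps):
        logits = predict_fn(tokens)                       # [1, N, vocab]
        conf, pred = logits.softmax(dim=-1).max(dim=-1)   # confidence per position
        masked = tokens == MASK_ID
        conf = torch.where(masked, conf, torch.full_like(conf, float("inf")))
        # cosine schedule: how many tokens stay masked after this step
        keep_masked = int(num_tokens * math.cos(math.pi / 2 * (t + 1) / steps))
        tokens = torch.where(masked, pred, tokens)        # tentatively accept all
        if keep_masked > 0:
            _, idx = conf.topk(keep_masked, largest=False)  # least confident positions
            tokens.scatter_(1, idx, MASK_ID)              # ...get re-masked
    return tokens

# toy usage with a random "model" over a 1024-token codebook
out = maskgit_decode(lambda x: torch.randn(1, x.size(1), 1024))
print((out == MASK_ID).sum().item())   # 0 masked tokens left
```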
[Paper Note] Learning Transferable Visual Models From Natural Language Supervision
CLIP maps images and text into the same embedding space. It is trained using contrastive learning. Within each batch, the similarity between correct image and text feature pairs is maximized, while the similarity between incorrect pairs is minimized. Both the image and text encoders are trained from... Read more
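A minimal sketch of the in-batch contrastive objective with stand-in features: matching image-text pairs sit on the diagonal of the similarity matrix, and a symmetric cross-entropy pushes diagonal similarities up and off-diagonal similarities down.

```python
# CLIP-style symmetric contrastive loss (encoders are stand-ins).
import torch
import torch.nn.functional as F

def clip_loss(image_feats, text_feats, temperature=0.07):
    img = F.normalize(image_feats, dim=-1)
    txt = F.normalize(text_feats, dim=-1)
    logits = img @ txt.T / temperature          # [batch, batch] similarity matrix
    targets = torch.arange(len(img))            # image i matches text i
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets))

print(clip_loss(torch.randn(8, 512), torch.randn(8, 512)).item())
```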
[Paper Note] Language Models are Unsupervised Multitask Learners
A general-purpose training procedure that can be applied to a variety of NLP tasks in a zero-shot manner.
This paper presents a general-purpose training procedure that can be applied to a variety of NLP tasks, using the task instruction and task input as conditioning factors. A model trained on a massive, diverse, unsupervised dataset can handle many tasks in a zero-shot manner and typically outperfo... Read more
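A small illustration of the conditioning idea using the Hugging Face transformers pipeline (prompt wording and generation settings are my own): the task is specified entirely in the prompt, so a single pretrained model can be pointed at different tasks with no fine-tuning.

```python
# Zero-shot task conditioning via prompts with GPT-2.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompts = [
    "Translate English to French: cheese =>",
    "Question: Who wrote 'Pride and Prejudice'? Answer:",
    "Summarize: The quick brown fox jumps over the lazy dog. TL;DR:",
]
for p in prompts:
    out = generator(p, max_new_tokens=16, do_sample=False)
    print(out[0]["generated_text"])
```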