Generative AI Series

Mixture of Experts

Mixture of Experts orchestrates a set of models, each trained on a specific domain, to cover a broader input space.

A B Vijay Kumar


This blog is part of an ongoing series on Generative AI and serves as an introduction to Mixture of Experts model architectures, which provide scalable, high-performing, and efficient LLMs.

We are witnessing an exponential increase in the usage of Large Language Models (LLMs) in enterprise solutions, and there is a growing demand for more capable LLMs. Simply increasing the size of existing models and building ever more generic ones to cover a broader scope does not scale: as we construct larger models to cover a broader range of use cases, training and using them requires ever higher levels of memory and computational resources. To address this, we need an alternative approach that is both scalable and efficient.

The Mixture of Experts (MoE) model is a new approach that is transforming the way we deal with this situation. It incorporates a dynamic routing mechanism that assigns different “experts” (feed-forward networks) to handle different types of data or tasks, which allows for a more efficient allocation of computational resources and improved performance.

The MoE architecture consists of two main components. The following picture shows how a generic Mixture of Experts network works.

Experts: Experts are individual neural networks that specialize in processing specific types of information. They are relatively small, domain-specific models, each specializing in a particular domain. During the training phase, these experts learn to handle the inputs for which they are best suited, instead of one model trying to learn everything. In transformer models, MoE uses sparse layers, which means that only a subset of the experts is active at any given time. This sparsity allows for more efficient computation and scalability.
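
To make this concrete, here is a minimal PyTorch sketch of a single expert: it is nothing more than a small feed-forward network, several of which sit side by side inside a sparse MoE layer. The class name and dimensions are illustrative assumptions, not taken from any specific model.

```python
import torch
import torch.nn as nn

class Expert(nn.Module):
    """One expert: a small position-wise feed-forward network (illustrative sizes)."""
    def __init__(self, d_model: int = 512, d_hidden: int = 2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_hidden),
            nn.GELU(),
            nn.Linear(d_hidden, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model) -> (num_tokens, d_model)
        return self.net(x)
```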

Routing/Gating (sometimes referred to as the gating network): Routing is responsible for determining which expert should be activated based on the input data. The router is itself a neural network that is trained to pick the right expert based on the context. It acts as a traffic controller, directing each piece of input data to the most appropriate expert.
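
A gating network can be sketched as a single learned linear layer that maps each token’s representation to one affinity score per expert, normalized with a softmax. This is a minimal sketch under assumed names and sizes, not the routing used by any particular model.

```python
import torch
import torch.nn as nn

class Router(nn.Module):
    """Gating network: scores each token against every expert (illustrative sketch)."""
    def __init__(self, d_model: int = 512, num_experts: int = 8):
        super().__init__()
        self.gate = nn.Linear(d_model, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model) -> (num_tokens, num_experts) affinity probabilities
        return torch.softmax(self.gate(x), dim=-1)
```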

The router determines which tokens (pieces of data) are sent to which expert, based on the token’s characteristics and the router’s learned parameters. Various token routing techniques are used; the following are some of the popular ones.

  • Top-k Routing: This approach selects, for each input token, the top ‘k’ experts with the highest affinity scores, where ‘k’ is a fixed, configured constant. The routing algorithm then directs the token to those selected experts (a short sketch follows after this list).
  • Expert Choice Routing: In this approach, instead of tokens selecting experts, the experts select the tokens they are best equipped to process. This method aims to achieve optimal load balancing and allows for heterogeneity in token-to-expert mapping.
  • Sparse Routing: In this approach, only a subset of experts is activated for each token, creating sparsity in the network. Sparse routing reduces computational costs compared to dense routing, where all experts would be activated for each token.
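
As a rough illustration of top-k routing, the snippet below keeps, for each token, only the k experts with the highest affinity scores and renormalizes their weights. The `scores` tensor stands in for the output of a gating network like the Router sketched above; all shapes and values are made up for the example.

```python
import torch

k = 2                                  # experts selected per token
scores = torch.rand(4, 8)              # stand-in router output: (num_tokens=4, num_experts=8)

topk_scores, topk_experts = torch.topk(scores, k, dim=-1)
# Renormalize so each token's selected-expert weights sum to 1.
topk_weights = topk_scores / topk_scores.sum(dim=-1, keepdim=True)

print(topk_experts)   # indices of the k experts each token is routed to
print(topk_weights)   # weight each selected expert contributes for that token
```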

MoE models are trained with a focus on balancing the load among experts so that training remains efficient. During inference, the model combines the outputs of the selected experts based on the gating network’s decisions.
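
Putting the pieces together, the sketch below shows one plausible way a sparse MoE layer could combine the outputs of the top-k selected experts, weighted by the gate scores. It is a simplified, assumed implementation for illustration only; real implementations add auxiliary load-balancing losses, capacity limits, and batched dispatch.

```python
import torch
import torch.nn as nn

class MoELayer(nn.Module):
    """Simplified sparse MoE layer: route each token to k experts and mix their outputs."""
    def __init__(self, d_model: int = 512, num_experts: int = 8, k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(num_experts)
        ])
        self.gate = nn.Linear(d_model, num_experts)
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model)
        scores = torch.softmax(self.gate(x), dim=-1)            # (num_tokens, num_experts)
        weights, selected = torch.topk(scores, self.k, dim=-1)
        weights = weights / weights.sum(dim=-1, keepdim=True)   # renormalize kept weights
        out = torch.zeros_like(x)
        # Each token's output is the weighted sum of its k selected experts' outputs.
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = selected[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out
```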

MoE is a huge paradigm shift and has inspired a number of different architectural patterns; the following are some of them.

  • Enhancing the Transformer architecture with MoE: A class of transformer models (such as Mixtral) uses sparse MoE layers instead of dense feed-forward network layers. This enables faster, more efficient pretraining and inference within the same compute budget as a dense model. Switch Transformers are a type of transformer model that replaces every feed-forward network layer with an MoE layer, enabling the model to handle a larger scale of data and model size efficiently (a sketch of this pattern follows after the list).
  • Ensemble Learning: MoE has been applied in ensemble learning, where it decomposes a predictive modeling task into sub-tasks, with each expert trained on a specific sub-task. A gating/router model then learns which expert each input should be routed to for prediction.
  • Hierarchical MoE: MoE can be an expert within another MoE, creating a hierarchy of MoEs. This allows for a more sophisticated and well-structured approach to handling different aspects of a problem. This is a very good pattern for very complex, diverse domains and datasets.
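
As referenced above, here is a hedged sketch of how a sparse MoE layer might replace the dense feed-forward sublayer inside a transformer block, the pattern used by Switch Transformers and Mixtral. `MoELayer` refers to the earlier sketch; the attention wiring, pre-norm layout, and layer sizes are illustrative assumptions, not the actual Mixtral or Switch Transformer code.

```python
import torch
import torch.nn as nn

class MoETransformerBlock(nn.Module):
    """Transformer block where the dense FFN sublayer is swapped for a sparse MoE layer."""
    def __init__(self, d_model: int = 512, n_heads: int = 8, num_experts: int = 8, k: int = 2):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.moe = MoELayer(d_model, num_experts, k)   # replaces the dense feed-forward layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h)
        x = x + attn_out
        b, s, d = x.shape
        # The MoE layer routes token-wise, so flatten (batch, seq) into one token axis.
        x = x + self.moe(self.norm2(x).reshape(b * s, d)).reshape(b, s, d)
        return x
```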

Advantages of Mixture of Experts

  • Flexible and adaptive: MoE can effectively handle complex data distributions by modeling different subdomains with specialized expert models, while the gating network allows for dynamic selection based on the input features or context.
  • Efficient computation: By selecting only the relevant expert model for each input instance, MoE can significantly reduce the overall computational requirements compared to a single large and complex model.
  • Scalable: MoE can be easily extended to handle large-scale datasets by adding more expert models as needed, while keeping the per-model capacity small and manageable.
  • Improved generalization: By averaging the predictions from multiple experts, MoE can improve generalization performance and reduce overfitting compared to a single expert model.

Limitations of Mixture of Experts

  • Increased model complexity: The addition of a gating network and multiple expert models increases the overall model complexity, which may require more data and computational resources for training and inference.
  • Model interpretability: The dynamic selection of expert models based on input features or context can make it difficult to understand which features are driving the predictions and which expert models are being used.
  • Training challenges: Training MoE models requires careful optimization strategies and large datasets to ensure that both the gating network and the expert models are well calibrated and effective.

It's exciting times for us. I have already been playing around with Mixtral, and it's crazy efficient and fast. That's all for now, hope this is useful. Will be back with more, until then, have fun. ;-)

