--

Hi Francois. The idea is this: if we keep fine-tuning the same base model for various domains over a period of time, the base model's parameters (weights and biases) drift, and the model can unlearn its original purpose. That is what we call catastrophic forgetting. To avoid this issue, enterprises might end up maintaining several versions of the model, each fine-tuned for a different domain. Instead, LoRA provides a modular adapter that can be augmented onto the base model (like plug and play). If you read my Part 2, you will see that I trained the LoRA module and then used "merge" to merge it with the base model. Also, please check out AdapterHub.
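Here is a minimal sketch of that plug-and-play flow, assuming the Hugging Face PEFT library (the model name and adapter path below are placeholders, not the ones from Part 2):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the frozen base model once; its weights stay untouched during LoRA fine-tuning
base_model = AutoModelForCausalLM.from_pretrained("your-base-model")  # placeholder name

# Attach a domain-specific LoRA adapter ("plug and play" on top of the base model)
model = PeftModel.from_pretrained(base_model, "./your-domain-lora-adapter")  # placeholder path

# Optionally fold the adapter weights back into the base weights for deployment
merged_model = model.merge_and_unload()
```

Because the adapter is a small, separate set of weights, you can keep one base model and swap in a different adapter per domain, merging only when you want a single deployable checkpoint.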

--


A B Vijay Kumar

IBM Fellow, Master Inventor, Mobile, RPi & Cloud Architect & Full-Stack Programmer