Make LoRA Configurations Flexible for Users
Current Problem: The lora_trainer file currently hard-codes the attention dimension, alpha, dropout, and number of epochs. This limits flexibility for users with different requirements.
Proposed Solution: Provide a zero-code, configuration-based system where users either select a predefined LoRA configuration by name or customize one through a simple interface (e.g., CLI flags or a JSON/YAML file). This eliminates the need for users to write any code.
The proposal has three parts:

1. Predefined Configurations: Provide a set of predefined LoRA configurations for common use cases (e.g., small, medium, and large attention dimensions), selectable by name.
2. Configuration File Support: Allow users to specify custom configurations in a JSON or YAML file. The SDK reads the file and applies the settings automatically.
3. CLI or API Interface: Provide a CLI or API through which users select a named configuration or pass a configuration file.
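The predefined-configuration idea could be sketched roughly as follows. This is an illustrative draft, not the SDK's actual API: the preset names come from the proposal, while the field names (rank, alpha, dropout, epochs) and their values are assumptions mirroring the parameters currently hard-coded in lora_trainer.

```python
from dataclasses import dataclass

@dataclass
class LoraPreset:
    rank: int        # attention dimension (LoRA rank) -- assumed field name
    alpha: int       # LoRA scaling factor
    dropout: float   # LoRA dropout probability
    epochs: int      # number of training epochs

# Hypothetical preset values; the real defaults would come from lora_trainer.
PRESETS = {
    "small":  LoraPreset(rank=8,  alpha=16,  dropout=0.05, epochs=3),
    "medium": LoraPreset(rank=16, alpha=32,  dropout=0.1,  epochs=5),
    "large":  LoraPreset(rank=64, alpha=128, dropout=0.1,  epochs=10),
}

def get_preset(name: str) -> LoraPreset:
    """Look up a preset by name, with a clear error for unknown names."""
    try:
        return PRESETS[name]
    except KeyError:
        raise ValueError(f"Unknown preset {name!r}; choose from {sorted(PRESETS)}")
```

A user would then write no training code at all: selecting "small" yields a complete, ready-to-use configuration.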
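Configuration-file support might look like the sketch below, which merges a user's JSON file over defaults and rejects unrecognized keys. The key names and default values are assumptions; YAML could be handled identically by swapping json.loads for yaml.safe_load (PyYAML).

```python
import json
from pathlib import Path

def load_lora_config(path: str) -> dict:
    """Read a user-supplied JSON config, filling in defaults for missing keys.

    Hypothetical defaults mirroring the values hard-coded in lora_trainer;
    unknown keys raise an error so typos fail loudly instead of silently.
    """
    defaults = {"rank": 16, "alpha": 32, "dropout": 0.1, "epochs": 5}
    user = json.loads(Path(path).read_text())
    unknown = set(user) - set(defaults)
    if unknown:
        raise ValueError(f"Unknown config keys: {sorted(unknown)}")
    return {**defaults, **user}
```

With this, a file containing only `{"rank": 8}` is still a valid, complete configuration: everything the user omits falls back to the defaults.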
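Finally, a minimal CLI sketch using argparse. The flag names (--preset, --config) and preset choices are illustrative assumptions, not the SDK's real interface; the two options are mutually exclusive since a user either picks a named preset or supplies a file.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Hypothetical CLI for lora_trainer: pick a preset or pass a config file."""
    parser = argparse.ArgumentParser(prog="lora_trainer")
    group = parser.add_mutually_exclusive_group()
    group.add_argument("--preset", choices=["small", "medium", "large"],
                       default="medium",
                       help="named predefined LoRA configuration")
    group.add_argument("--config", metavar="FILE",
                       help="path to a JSON/YAML file with custom settings")
    return parser
```

Example usage: `lora_trainer --preset small` or `lora_trainer --config my_lora.yaml`, with no code written by the user in either case.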