Finetune LLM
POST {{baseUrl}}/finetune/llm
Endpoint to configure and start an LLM fine-tuning job.
Request Body
{"pretrainedmodel_config"=>{"model_path"=>"mistralai/Mistral-7B-v0.1", "other_model_info"=>{"model_size_in_billions"=>"<number>", "model_path"=>"<string>"}, "resume_checkpoint_path"=>"", "use_lora"=>true, "lora_r"=>8, "lora_alpha"=>16, "lora_dropout"=>0, "lora_bias"=>"none", "use_quantization"=>false, "use_gradient_checkpointing"=>false, "parallelization"=>"nmp"}, "deployment_name"=>"Null", "data_config"=>{"data_path"=>"tatsu-lab/alpaca", "data_subset"=>"default", "data_source_type"=>"hub_link", "prompt_template"=>"Here is an example on how to use tatsu-lab/alpaca dataset ### Input: {instruction} ### Output: {output}", "cutoff_len"=>512, "data_split_config"=>{"train"=>0.9, "validation"=>0.1}, "prevalidated"=>false}, "training_config"=>{"early_stopping_patience"=>5, "num_train_epochs"=>1, "gradient_accumulation_steps"=>1, "warmup_steps"=>50, "learning_rate"=>0.001, "lr_scheduler_type"=>"reduce_lr_on_plateau", "group_by_length"=>false}, "logging_config"=>{"use_wandb"=>false, "wandb_username"=>"", "wandb_login_key"=>"", "wandb_project"=>"", "wandb_run_name"=>""}}
HEADERS
| Key | Datatype | Required | Description |
|---|---|---|---|
| Content-Type | string | | application/json |
| Accept | string | | application/json |
RESPONSES
status: OK
```json
{}
```