LM Eval
POST {{baseUrl}}/deploy/evaluation/llm/lm_eval
Uses the EleutherAI LM Evaluation Harness to evaluate Llama 3 8B models. This endpoint is currently hackathon-specific; support will shortly be extended to all models and services.
Request Body
{"basemodel_path"=>"<string>", "per_gpu_vram"=>"<integer>", "gpu_count"=>"<integer>", "task"=>"<string>", "loramodel_path"=>"<string>"}
HEADERS
| Key | Datatype | Required | Description |
|---|---|---|---|
| Content-Type | string | | |
| Accept | string | | |
RESPONSES
status: OK
{"message":"\u003cstring\u003e","servingParams":{"qui6":"\u003cstring\u003e","ad_fa":"\u003cstring\u003e"},"deployment_id":"\u003cstring\u003e"}