LM Eval

POST {{baseUrl}}/deploy/evaluation/llm/lm_eval

Uses the EleutherAI Evaluation Harness (lm-eval) to evaluate Llama 3 8B models. This endpoint is currently hackathon-specific and will shortly be extended to all models and services.

Request Body

{
  "basemodel_path": "<string>",
  "per_gpu_vram": "<integer>",
  "gpu_count": "<integer>",
  "task": "<string>",
  "loramodel_path": "<string>"
}

HEADERS

Key           Datatype  Required  Description
Content-Type  string
Accept        string

RESPONSES

status: OK

{
  "message": "<string>",
  "servingParams": {
    "qui6": "<string>",
    "ad_fa": "<string>"
  },
  "deployment_id": "<string>"
}
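Putting the pieces together, here is a minimal Python sketch of calling this endpoint. The base URL, model paths, task name, and GPU values are illustrative placeholders, not real defaults; substitute your own deployment's values.

```python
import json

# Placeholder values -- replace with your own model paths and GPU settings.
payload = {
    "basemodel_path": "meta-llama/Meta-Llama-3-8B",  # hypothetical example path
    "loramodel_path": "my-org/llama3-8b-lora",       # hypothetical example path
    "per_gpu_vram": 24,
    "gpu_count": 1,
    "task": "hellaswag",  # hypothetical lm-eval task name
}

# Serialize the request body as JSON, matching the Content-Type header.
body = json.dumps(payload)

# To send the request with an HTTP client such as requests:
#   import requests
#   resp = requests.post(
#       f"{base_url}/deploy/evaluation/llm/lm_eval",
#       headers={"Content-Type": "application/json", "Accept": "application/json"},
#       data=body,
#   )
#   resp.json()  # -> {"message": ..., "servingParams": ..., "deployment_id": ...}

print(body)
```

On success the service responds with a `message`, the `servingParams` it resolved, and a `deployment_id` for the evaluation run, as shown in the response example above.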