Answer (RAG Engine)
POST {{baseUrl}}/library/answer
Request Body
{"question"=>"What is GPT-4?", "documentIds"=>["1101a596-7abd-4b7e-a4bb-5f646b5bf5ad"]}
RESPONSES
status: 200 OK
{"id":"1cda3b91-25fe-41e3-5ade-d63756aa34f6","answerInContext":true,"answer":"GPT-4 is a very large multimodal model with human-level performance on certain difficult professional and academic benchmarks.","sources":[{"fileId":"1101a596-7abd-4b7e-a4bb-5f646b5bf5ad","name":"GPT-4.pdf","highlights":["\nWe report the development of GPT-4, a large-scale, multimodal model\nwhich can accept image and text inputs and produce text outputs. While\nless capable than humans in many real-world scenarios, GPT-4 exhibits\nhuman-level performance on various professional and academic benchmarks,\nincluding passing a simulated bar exam with a score around the top 10%\nof test takers. GPT-4 is a Transformer- based model pre-trained to\npredict the next token in a document. The post-training alignment\nprocess results in improved performance on measures of factuality and\nadherence to desired behavior. A core component of this project was\ndeveloping infrastructure and optimization methods that behave\npredictably across a wide range of scales. This allowed us to accurately\npredict some aspects of GPT-4’s performance based on models trained with\nno more than 1/1,000th the compute of GPT-4.\n","\nWe characterize GPT-4, a large multimodal model with human-level\nperformance on certain difficult professional and academic benchmarks.\nGPT-4 outperforms existing large language models on a collection of NLP\ntasks, and exceeds the vast majority of reported state-of-the-art\nsystems (which often include task-specific fine-tuning). We find that\nimproved capabilities, whilst usually measured in English, can be\ndemonstrated in many different languages. We highlight how predictable\nscaling allowed us to make accurate predictions on the loss and\ncapabilities of GPT-4.\nGPT-4 presents new risks due to increased capability, and we discuss\nsome of the methods and results taken to understand and improve its\nsafety and alignment. 
Though there remains much work to be done, GPT-4\nrepresents a significant step towards broadly useful and safely deployed\nAI systems.\n","\nThis report focuses on the capabilities, limitations, and safety\nproperties of GPT-4. GPT-4 is a Transformer-style model [39] pre-trained to predict the next token in a\ndocument, using both publicly available data (such as internet data) and\ndata licensed from third-party providers. The model was then fine-tuned\nusing Reinforcement Learning from Human Feedback (RLHF) [40]. Given both the competitive landscape and\nthe safety implications of large-scale models like GPT-4, this report\ncontains no further details about the architecture (including model\nsize), hardware, training compute, dataset construction, training\nmethod, or similar.\nWe are committed to independent auditing of our technologies, and\nshared some initial steps and ideas in this area in the system card\naccompanying this release.2 We plan\nto make further technical details available to additional third parties\nwho can advise us on how to weigh the competitive and safety\nconsiderations above against the scientific value of further\ntransparency.\n"],"publicUrl":null,"labels":[]}]}
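A client consuming this response would typically check `answerInContext` (whether the answer was grounded in the supplied documents) before trusting `answer`, then walk `sources` for the supporting passages. A minimal sketch over a trimmed copy of the payload above (the highlight text is abbreviated here only to keep the example short):

```python
import json

# Trimmed copy of the response documented above; field names and
# structure match the real payload, highlight text is abbreviated.
response_json = """
{
  "id": "1cda3b91-25fe-41e3-5ade-d63756aa34f6",
  "answerInContext": true,
  "answer": "GPT-4 is a very large multimodal model with human-level performance on certain difficult professional and academic benchmarks.",
  "sources": [
    {
      "fileId": "1101a596-7abd-4b7e-a4bb-5f646b5bf5ad",
      "name": "GPT-4.pdf",
      "highlights": ["We report the development of GPT-4, a large-scale, multimodal model..."],
      "publicUrl": null,
      "labels": []
    }
  ]
}
"""

body = json.loads(response_json)
if body["answerInContext"]:          # answer was grounded in the given documentIds
    print(body["answer"])
    for source in body["sources"]:   # each source names the file and its matched passages
        print(f'{source["name"]}: {len(source["highlights"])} highlight(s)')
```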