Copilot then preprocesses the prompt through an approach called grounding, which improves the specificity of the prompt, so you get answers that are relevant and actionable to your specific task. It does this, in part, by making a call to Microsoft Graph and Dataverse and accessing the enterprise data that you consent and grant permissions to use for the retrieval of your business content and context. We also scope the grounding to documents and data which are visible to the authenticated user through role-based access controls. For instance, an intranet question about benefits would only return an answer based on documents relevant to the employee's role. This retrieval of information is referred to as retrieval-augmented generation and allows Copilot to provide exactly the right type of information as input to an LLM, combining this user data with other inputs such as information retrieved from knowledge base articles to improve the prompt.

Copilot takes the response from the LLM and post-processes it. This post-processing includes additional grounding calls to Microsoft Graph, responsible AI checks, security, compliance and privacy reviews, and command generation. Finally, Copilot returns a recommended response to the user, and commands back to the apps, where a human-in-the-loop can review and assess.
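To make the flow above concrete, here is a minimal Python sketch of the retrieval-augmented generation pattern: filter documents by role-based access, augment the prompt with the retrieved context, and post-process the model's response. Every function and field name here is a hypothetical illustration, not a Microsoft Graph or Dataverse API; the relevance and compliance checks are deliberately naive stand-ins.

```python
# Hypothetical sketch of the grounding / RAG flow described above.
# None of these names correspond to real Microsoft APIs.

def retrieve_grounding(query, documents, user_roles):
    """Return only documents the user may see (role-based access control),
    filtered by a naive keyword-overlap relevance check."""
    visible = [d for d in documents if d["required_role"] in user_roles]
    terms = set(query.lower().split())
    return [d for d in visible if terms & set(d["text"].lower().split())]

def build_grounded_prompt(query, grounding_docs):
    """Augment the user's prompt with the retrieved business context."""
    context = "\n".join(d["text"] for d in grounding_docs)
    return f"Context:\n{context}\n\nQuestion: {query}"

def postprocess(llm_response):
    """Stand-in for the responsible AI / compliance review of a raw response."""
    blocked_terms = {"ssn"}  # hypothetical policy list
    if any(t in llm_response.lower() for t in blocked_terms):
        return "[response withheld by compliance check]"
    return llm_response

# Example: a benefits question is grounded only in employee-visible documents.
docs = [
    {"text": "Benefits enrollment opens in November.", "required_role": "employee"},
    {"text": "Executive compensation plan details.", "required_role": "executive"},
]
query = "When does benefits enrollment open?"
grounded = retrieve_grounding(query, docs, {"employee"})
prompt = build_grounded_prompt(query, grounded)
```

Note how the access check runs before relevance scoring, so a document outside the user's role never reaches the prompt at all, mirroring the scoping described above.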