Retrieval-Augmented Generation (RAG): keep the LLM's answers relevant and current, and reduce hallucinations by grounding them in retrieved sources.
Given the user-supplied query, RAG retrieves domain-specific knowledge from a search component, typically by embedding the query and running a similarity search over a store of pre-computed knowledge embeddings, and adds the retrieved passages as 'context' to the prompt. The LLM service then formulates an answer to the query using both the query and that context.
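A minimal sketch of that pipeline, using a toy bag-of-words 'embedding' and cosine similarity in place of a real embedding model and search engine (the corpus snippets and function names are hypothetical, and the final LLM call is omitted):

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy 'embedding': a term-frequency vector over lowercase words.
    A real system would use a learned embedding model here."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical domain-specific knowledge base.
corpus = [
    "The refund policy allows returns within 30 days of purchase.",
    "Support tickets are answered within 24 hours on business days.",
]

def retrieve(query, k=1):
    """Rank corpus documents by similarity to the query, return top-k."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query):
    """Prepend the retrieved context to the query; this augmented
    prompt is what would be sent to the LLM service."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

prompt = build_prompt("How many days do I have to return a purchase?")
```

The key point is that the retrieval step happens outside the model: the LLM only ever sees the augmented prompt, so the freshness and accuracy of its answer depend directly on what the search step returns.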
Be warned that the LLM is statistical and does not 'understand' the subject matter whatsoever; the retrieved context grounds its answers but does not guarantee correctness.