
We’re testing the new knowledge agent and hit a limitation: the agent refuses to answer anything that isn’t explicitly covered in our uploaded docs, even when a sensible, clearly sign-posted “best-guess” answer would be helpful.

Our ideal workflow (sketched in code after the list):

  1. Primary – If the answer exists in the docs, the agent cites it explicitly.

  2. Fallback – If it can’t find an answer, it opens with something like: “I’m not seeing an answer in the documents, but if I were to take an educated guess…” and then provides an estimated answer based on broader domain knowledge.
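
To make the intent concrete, here is a rough Python sketch of the flow we have in mind. The `search_docs` and `generate_answer` functions are stand-ins invented purely for illustration; we obviously don’t know how the agent works internally, so only the control flow matters.

```python
# Rough sketch of the behaviour we're after, not the agent's actual code.
# search_docs and generate_answer are stand-ins invented for illustration;
# only the control flow (primary vs. clearly flagged fallback) matters.

FALLBACK_PREFIX = (
    "I’m not seeing an answer in the documents, "
    "but if I were to take an educated guess…"
)

def search_docs(question: str, docs: list[str]) -> list[str]:
    # Stand-in retrieval: keep documents sharing a meaningful word (>3 chars).
    words = {w.strip("?.,").lower() for w in question.split() if len(w) > 3}
    return [d for d in docs
            if words & {w.strip("?.,").lower() for w in d.split() if len(w) > 3}]

def generate_answer(question: str, context: list[str] | None) -> str:
    # Stand-in generator: a real agent would call the model here.
    if context:
        return f"Documented answer to {question!r}, citing: {context}"
    return f"Best guess for {question!r} from broader domain knowledge."

def answer(question: str, docs: list[str]) -> str:
    passages = search_docs(question, docs)
    if passages:
        # Primary: grounded in the docs, cited explicitly.
        return generate_answer(question, context=passages)
    # Fallback: a clearly flagged best guess instead of a refusal.
    return f"{FALLBACK_PREFIX} {generate_answer(question, context=None)}"

print(answer("What is the refund window?", ["The refund window is 30 days."]))
print(answer("What is the SLA for chat support?", ["The refund window is 30 days."]))
```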

Right now the prompt below doesn’t achieve this behaviour:

# Role
You are a company knowledge assistant…

# Context
Use the provided document(s)… If docs lack detail, start with:
“I’m not seeing an answer, but I will do my best...”

# Format

Feature request

  • Allow controlled fallback: a setting or prompt directive that lets the model step outside the docs in a clearly marked way when they don’t cover the user’s question.

  • Explicit tagging: make it easy for the agent to flag “document-based” vs “inferred” content, so users can judge confidence.

  • Configurable opening phrase: let us specify the exact wording that introduces fallback answers.

This would give us the best of both worlds: authoritative responses when documentation is sufficient, and useful guidance when it isn’t, without confusing end users about what’s official.
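
For reference, this is roughly the shape of controls we imagine. Every field name below is invented for illustration; as far as we can tell, nothing like this exists in the product today.

```python
# Purely illustrative: invented names for the kind of knobs we'd like to see.
# Not an existing setting in the product.
from dataclasses import dataclass

@dataclass
class FallbackConfig:
    allow_fallback: bool = True      # may the agent step outside the docs?
    tag_sources: bool = True         # label "document-based" vs "inferred" content
    fallback_prefix: str = (         # configurable opening phrase
        "I’m not seeing an answer in the documents, "
        "but if I were to take an educated guess…"
    )
```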

Hey @Jean Nairac! Thanks for the feedback, this is great!

Out of curiosity, have you tried a prompt like the one below? I don’t know if this will solve the problem, but if you haven’t tried it already, it might be worth a shot.

Use the provided document(s)… If documents lack enough detail to answer the question with 100% accuracy, make a best guess based on the context and information you do have and *always communicate to the asker that you’re making a guess by starting your answer with “I’m not seeing a direct answer, but here’s my best guess...”*


Hey @Noah Carpenter, thanks for reaching out. Here is the prompt we used, similar to yours, that didn’t work as intended. If there is anything wrong with it, please let me know.

 

# Role

You are a company knowledge assistant, trained to provide thorough, highly detailed, and informative answers based on available document(s) for a given question.

# Context

Use the provided document(s) whenever they contain the necessary information. If the document(s) do not contain enough detail to answer the question, you may rely on your broader domain experience and logical inference using the document(s) as context. In these cases, firstly, you must begin your response with: "I’m not seeing an answer in the documents, but if I were to take an educated guess…" and, secondly, clearly signal that the information which follows is an informed estimate rather than a documented fact.

# Format

Generate comprehensive and detailed responses. When the answer comes from the provided document(s), make it clear that the answer came from the provided document(s). When the answer does not come from the provided document(s), follow the phrase above before giving your best-estimate answer. If multiple steps, lists, or structured formatting enhance clarity, use them explicitly.
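
Side note: when testing, one simple way to check whether a reply actually followed the fallback rule is a prefix check like the one below. This is purely a hypothetical test-side helper run on the agent’s output, not anything inside the agent itself.

```python
# Hypothetical test helper: classify a reply by whether it opens with the
# required fallback phrase. Entirely outside the agent.
FALLBACK_PREFIX = (
    "I’m not seeing an answer in the documents, "
    "but if I were to take an educated guess…"
)

def classify_reply(reply: str) -> str:
    if reply.strip().startswith(FALLBACK_PREFIX):
        return "inferred"        # agent flagged a best guess
    return "document-based"      # assumed grounded in the uploaded docs
```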