We’re testing the new knowledge agent and hit a limitation: the agent refuses to answer anything that isn’t explicitly covered in our uploaded docs, even when a sensible, clearly sign-posted “best-guess” answer would be helpful.
Our ideal workflow:
- Primary – If the answer exists in the docs, the agent cites it explicitly.
- Fallback – If it can’t find an answer, it opens with something like: “I’m not seeing an answer in the documents, but if I were to take an educated guess…” and then provides an estimated answer based on broader domain knowledge.
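The two-branch flow we’re after can be sketched in a few lines of Python. This is a hypothetical illustration, not an existing API: `compose_answer` and the shape of the retrieval result (a cited answer tuple, or `None` when the docs don’t cover the question) are assumptions.

```python
# Hypothetical sketch of the desired primary/fallback answer flow.
# `retrieved` stands in for whatever the doc-retrieval step returns:
# an (answer_text, citation) tuple, or None when the docs have no answer.

FALLBACK_PREFIX = ("I'm not seeing an answer in the documents, "
                   "but if I were to take an educated guess... ")

def compose_answer(retrieved, guess):
    """Return a cited, document-based answer when available;
    otherwise a clearly marked best-guess answer."""
    if retrieved is not None:
        answer, citation = retrieved
        return f"{answer} [source: {citation}]"  # primary: cite explicitly
    return FALLBACK_PREFIX + guess               # fallback: marked inference
```

The point of the sketch is the hard branch: document-grounded output always carries a citation, and anything else always carries the configurable opening phrase, so the two can never be confused.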
Right now the prompt below doesn’t achieve this behaviour:
    # Role
    You are a company knowledge assistant…
    # Context
    Use the provided document(s)… If docs lack detail, start with:
    “I’m not seeing an answer, but I will do my best...”
    # Format
    …
Feature request:
- Allow controlled fallback: a setting or prompt directive that lets the model step outside the docs in a clearly marked way when they don’t cover the user’s question.
- Explicit tagging: make it easy for the agent to flag “document-based” vs “inferred” content, so users can judge confidence.
- Configurable opening phrase: let us specify the exact wording that introduces fallback answers.
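To make the explicit-tagging request concrete, here is one possible response schema. Everything here is an assumption for illustration (the field names `text`, `origin`, and `source` are invented, and `policy.pdf` is a made-up document), not a description of any existing feature:

```python
# Hypothetical tagged-response schema: each segment declares whether it
# came from the uploaded documents or from the model's broader knowledge.
import json

def tag_segment(text, source=None):
    """Label one answer segment; a non-None `source` means document-based."""
    return {
        "text": text,
        "origin": "document-based" if source else "inferred",
        "source": source,  # e.g. a filename; None for inferred content
    }

response = [
    tag_segment("The refund window is 30 days.", source="policy.pdf"),
    tag_segment("Extensions may be possible for enterprise accounts."),
]
print(json.dumps(response, indent=2))
```

With per-segment tags like these, a UI could render inferred content in a distinct style, which is what would let end users judge confidence at a glance.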
This would give us the best of both worlds: authoritative responses when documentation is sufficient, and useful guidance when it isn’t, without confusing end users about what’s official.