AI "Rules" and "Guide" and hallucination mechanisms/reports
under review
Gerard
We don't have much brand-specific control over how the AI responds beyond "friendly" and the other preset tones.
It would be great if we could teach the AI a specific Voice and Tone. For example (see the sketch after this list):
- Refer to "Agents" as "Heroes"
- Avoid using the word "disabled" and use "deactivated" instead
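A minimal sketch of how such a glossary could work as a post-processing step over the AI's draft, assuming a simple word-level substitution; the glossary entries are just the examples above, and nothing here is an existing Document360 feature:

```python
import re

# Hypothetical brand glossary: keys are the terms the AI should avoid,
# values are the preferred terms. The entries are just the examples
# from the list above.
BRAND_GLOSSARY = {
    "agent": "hero",
    "agents": "heroes",
    "disabled": "deactivated",
}

def apply_brand_voice(text: str) -> str:
    """Rewrite an AI draft so it uses the brand's preferred terms.

    Matches whole words case-insensitively and preserves a leading
    capital, so "Agents" becomes "Heroes"."""
    def replace(match: re.Match) -> str:
        word = match.group(0)
        preferred = BRAND_GLOSSARY[word.lower()]
        return preferred.capitalize() if word[0].isupper() else preferred

    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, BRAND_GLOSSARY)) + r")\b",
        re.IGNORECASE,
    )
    return pattern.sub(replace, text)

print(apply_brand_voice("The Agent account was disabled."))
# -> The Hero account was deactivated.
```

A real implementation would likely also inject the glossary into the system prompt, so the model uses the preferred terms from the start rather than relying only on after-the-fact substitution.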
It would also be great to have hallucination checks. If the AI seems to hallucinate, i.e. it attempts an interpretation of an answer from our KB that is not completely accurate, a feature could flag Admins about the response the AI drafted but withheld because it might be incorrect, then let Admins review that response to determine how it could be improved. A rough sketch of what I mean is below.
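One hedged sketch of how such a check could work, assuming a crude lexical grounding score between the draft answer and the KB passages it was generated from; all names and the 0.6 threshold are hypothetical, not how Eddy actually works:

```python
import re

review_queue: list[dict] = []  # drafts withheld for admin review

def _words(text: str) -> set[str]:
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def grounding_score(draft: str, kb_passages: list[str]) -> float:
    """Fraction of the draft's words that also appear in the source
    passages. A real system would use entailment or embedding
    similarity; word overlap keeps the sketch self-contained."""
    draft_words = _words(draft)
    source_words = set().union(*(_words(p) for p in kb_passages))
    return len(draft_words & source_words) / len(draft_words) if draft_words else 0.0

def answer_or_flag(question: str, draft: str, kb_passages: list[str],
                   threshold: float = 0.6) -> str:
    """Send the draft only when it is well grounded; otherwise withhold
    it and flag it for Admins, as requested above."""
    score = grounding_score(draft, kb_passages)
    if score >= threshold:
        return draft
    review_queue.append({"question": question, "draft": draft, "score": score})
    return "Sorry, I couldn't find a reliable answer in the knowledge base."

print(answer_or_flag(
    "How do I deactivate a Hero?",
    "Go to Settings > Heroes and toggle Deactivate.",
    ["To deactivate a hero, open Settings, choose Heroes, and toggle Deactivate."],
))  # grounded well enough, so the draft is returned
```

A production check would more likely use an entailment model or embedding similarity, but the flow is the same: score the draft, answer only when it is well grounded, and queue everything else for review.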
Gerard
Thanks, Navin Kumar. Please do consider the glossary and additional voice/tone/branding controls for AI as well.
D360 Product Management
Sure, Gerard. Apologies for not addressing this feedback specifically earlier. We will consider this request and try to find a way to roll it out quickly.
Thank you!
//Selvaraaju
Navin Kumar
Hi Gerard,
We are building a feedback loop for our Eddy AI search. You will be able to see the feedback given on Eddy AI responses in our portal. I think this will be a strong step toward solving the problem you have mentioned here. We will keep you posted on the progress of this request.
Gerard
Just to add more color here: in reviewing Eddy responses, I've seen it provide information that is partially correct while also providing, within the same response, instructions that are completely inaccurate.
I think it's really important to have a mechanism in place to detect possible hallucinations so that Eddy programmatically fails to respond, and then to let Admins review those failed responses so we can adjust our articles and teach the AI to do better.
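Continuing the hypothetical sketch above, the admin-facing side could be as simple as a pass over the withheld drafts that records a decision per item; again, every name and action here is illustrative, not an existing Eddy AI API:

```python
# Hypothetical admin review pass over the withheld drafts.

def review_failed_responses(flagged_items: list[dict], decide) -> list[dict]:
    """Attach an admin decision ("approve", "fix-article", "discard")
    to each withheld draft so the source articles can be adjusted."""
    return [{**item, "action": decide(item)} for item in flagged_items]

# Example policy: a very low grounding score usually means the source
# article needs work rather than the draft.
flagged = [
    {"question": "How do I deactivate a Hero?", "draft": "...", "score": 0.2},
]
decisions = review_failed_responses(
    flagged, lambda item: "fix-article" if item["score"] < 0.3 else "approve"
)
print(decisions[0]["action"])  # -> fix-article
```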
D360 Product Management
under review
Thanks for this useful feedback.