Ask Eddy testing feedback
Liam Tanner
We've been testing Ask Eddy on our content to see if it's something we'd want to add to our site. However, we identified a few key issues.
As with other chatbots, Eddy has a major problem with hallucination. It often makes up answers, especially when leading questions are used. In one of our test questions (regarding deleting a category), Eddy made up a response that could cause huge problems for users, potentially including significant data loss. I checked the cited reference pages to make sure we didn't include or allude to the incorrect information.
Additionally, the cited reference articles are often unrelated to the answer provided (sometimes Eddy gives the correct answer even though it is not included in the articles it cites). In other instances, Eddy will pull an answer from one page but miss that the correct answer was provided in another article, and will not include that page in the reference articles.
Eddy also seems to have difficulty pulling answers from HTML tables, which I also found when testing ChatGPT separately. For example, when I asked which versions of PHP are supported, the response ignored the version numbers provided in the table on the system requirements page and instead generated an answer based on the body text on that page.
Finally, the responses are typically very short and give less detail than a standard GPT would; I assume this is intentional, to reduce the risk of hallucination. This wouldn't be much of an issue if the reference articles were more accurate.
I don't know what the solutions to these issues are, but hopefully this feedback is useful. As others have mentioned in their feedback, having some control over the instructions provided to the GPT would help alleviate some of these issues.
Selvaraaju Murugesan
Liam Tanner Thank you for your feedback. We have recently shipped many enhancements that strengthen the foundational capabilities of Eddy AI. Eddy can now generate responses based on table content and code snippets. If Eddy is unsure, it will respond by saying "I do not know". Under the hood, Eddy AI uses a RAG framework and picks up the relevant references (source articles) that help it generate a response. If any section or part of an article is relevant to generating a response, that article will be listed as a reference; the reference article title might be misleading in this case.
Even though Eddy AI sources relevant content from different articles, it only uses the right information to generate a response. Please find more information at
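To illustrate the retrieval behavior described above, here is a minimal sketch of RAG-style reference selection. This is illustrative only, not Eddy AI's actual implementation; the article names, the overlap-based scoring, and the `threshold` parameter are all made up for the example. It also shows why a cited article's title can look unrelated: an article is cited if any one of its sections matches the question.

```python
import re

def tokenize(text):
    """Lowercase and split into word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve_references(question, articles, threshold=2):
    """Return titles of articles that have at least one section whose
    word overlap with the question meets the threshold.

    Hypothetical scoring: real RAG systems typically use embedding
    similarity rather than raw word overlap."""
    q_tokens = tokenize(question)
    cited = []
    for title, sections in articles.items():
        # An article is cited if ANY section is relevant -- which is
        # why the cited article's title alone can look unrelated.
        if any(len(q_tokens & tokenize(s)) >= threshold for s in sections):
            cited.append(title)
    return cited

# Made-up knowledge base for demonstration
articles = {
    "System requirements": [
        "Supported PHP versions are listed in the table below.",
        "A web server such as Apache or Nginx is required.",
    ],
    "Managing categories": [
        "Deleting a category also deletes the articles inside it.",
    ],
}

refs = retrieve_references("Which PHP versions are supported?", articles)
# Only "System requirements" overlaps enough with the question to be cited.
```

The retrieved sections (not just the titles) are then passed to the model as context for generating the answer, which is why relevance is judged per section rather than per article.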
Eddy search analytics https://docs.document360.com/docs/eddy-search-analytics, which helps you identify knowledge gaps in your knowledge base and create new content, since you now know the exact "intent of the ask" from the prompts/questions entered in Eddy AI.