user
@adaoc
I was testing my chatbot in a demo (Storyline Review link), expecting it to evaluate a manager's response based on the given prompt, but instead it only replied with "I didn't quite understand you. Try again, please." no matter what input I used.
Chatbot name: DEMO New Manager Training AFI
Hi Cluelabs Support,

I’ve been using your integration to run a chatbot in Articulate Storyline, and it was working perfectly a month ago. However, now the bot always responds with “I didn’t quite understand you. Try again, please.” no matter what input is given.

Here’s what I’ve checked so far:
- The chatbot's evaluation still works correctly when tested directly in ChatGPT.
- The issue happens even with very simple responses (e.g., “No, I can’t approve that.”).
- There have been no changes on my side to the prompt, Storyline setup, or API key.
- Payments and usage in my account are up to date.

Could you please help me understand what might be happening?
user
@support
Your chatbot "DEMO New Manager Training AFI" is not a ChatGPT chatbot. It is designed to literally search your text for the information you request, not to reason. Based on the instructions you gave it, you should indeed use Google Gemini or ChatGPT for what you are trying to achieve. If you click the "Select a different source" link, both options will be available in the pop-up and you can select one from there. You can also toggle between the two settings to copy your prompt over.
user
@adaoc
Thanks for your answer.
Since it was working for months, though, I don't understand why it stopped.
Same instructions, always through your API: nothing has changed on my side.
Has something changed on yours or ChatGPT's?
user
@support
As explained earlier, the underlying issue is that the chatbot was not set up properly in the first place. You are giving reasoning instructions to a model that is not designed to reason, so it arguably should never have worked. For reasoning tasks, select either Google Gemini or ChatGPT from the list.

The model you have currently selected is not supposed to reason; it only looks up information in the knowledge base. Models are updated and improved all the time, and the fact that it now refuses to reason means it is finally doing what it is supposed to do. Consider the proper applications for this option: when you give it a policy or medication information, for example, you don't want it to make things up, only to respond with the facts you provided about that policy or medication. So it sounds like the model is behaving as intended.

For your use case, though, the solution is to switch to a model that can follow reasoning instructions.
user
@support
And, once again, regarding "Has something changed on yours or ChatGPT's?"
- You never had it set to ChatGPT to begin with; this is a completely different technology.
user
@adaoc
Understood. Thanks.