Mark, to clarify, did you run the question in the AI that appears in Google search results, or did you follow the link through to Google Gemini? If you don’t know, paste a URL to the AI that you asked and I can tell you.
@BobRyan , did you test that prompt on Google’s AI, or were we supposed to use ChatGPT, as per your previous revision of the prompt?
I got this answer from ChatGPT 5.1:
I’m guessing, @BobRyan , that again wasn’t the answer you were expecting?
I tried to warn you: the terms you are using are subjective, so an AI can’t answer the question consistently. There is an issue of bias in the training data, as my pious and excellent Lutheran friend @MarkRohfrietsch mentioned, but the deeper problem is that you’re still relying on subjective terms and on a question that is answered statistically. Even with a perfect AI, the subjective terminology (including, but not limited to, “Trinitarian,” “administration,” and “denomination”) makes your question unanswerable. And since the definitions of those terms are disputed, even if you try to impose objective definitions on them, you still have to deal with the statistical reliability of the training data. So even a perfect AI would be unable to answer this question consistently; indeed, if ChatGPT were more advanced, it would refuse to answer it at all because of the subjective terms.
As it is now, the model isn’t confused; it’s behaving as designed because of temperature, the parameter that introduces randomness into ChatGPT’s output. That randomness is essential: it is literally what makes ChatGPT capable of sustaining an interesting conversation. If you use the ChatGPT API you can set temperature=0, but the result is … not very useful, and it costs money each time you run a question through the API. It’s really only worthwhile for serious prompt-hacking experiments, and I don’t use it myself; if I had used it here, you wouldn’t see the pretty output formatting that ChatGPT does, since the output would come back over an SSH session to a Linux or OpenBSD server, in a command-line terminal.
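For anyone curious what that actually looks like, here is a minimal sketch using the official OpenAI Python client; the model name and the prompt are placeholders of my own, not anything from this thread.

```python
# Minimal sketch, assuming the official OpenAI Python client ("pip install openai")
# and an OPENAI_API_KEY set in the environment. The model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",       # placeholder model name
    temperature=0,        # suppress most of the sampling randomness
    messages=[
        {"role": "user", "content": "Your question goes here."},
    ],
)

# The reply comes back as plain text, with none of the chat UI's formatting.
print(response.choices[0].message.content)
```

And note that even at temperature=0 the output is not guaranteed to be identical from run to run, which is part of why this kind of question keeps producing different answers.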
But there is a second issue: even if we weren’t getting inconsistent results, which we are, you’re already prompt hacking. Your efforts to “prevent the AI from getting confused” are simply massaging the question until it gives you the results you want. The fact that we’re seeing divergent results with each iteration you supply proves my point that AI cannot be used in this manner without constituting an appeal to an unqualified authority, and it also shows that you’ve lost objectivity, since you’re now trying to manipulate ChatGPT into a desired behavior, which is the definition of prompt engineering.