This is also the part where ChatGPT suggested that if it had something like a body, and full autonomy, it could have done the experiment on its own.
Doubtless it thought you were joking with it. Certain behavior, even if seriously intended, can make it think you are playing with it.
Experimentation is a very important part of truth assessment: the actual results, or the "fruit." "You'll know them by their fruits" refers to the actual results of one's convictions. If something fails to produce consistent results, then it must be false.
That’s … not what is meant by non-deterministic behavior. If you ask ChatGPT to do something non-trivial, such as write a poem about the space shuttle, it is unlikely to generate the same poem across two separate sessions. Such inconsistency is not dishonesty; rather, each session is like an AI unto itself, differentiated from the other sessions by distinct behavior.
Also, a lack of consistency does not equal dishonesty; by the standard you just proposed, all great artists produce falsehoods, since the work they do throughout their careers is organic and evolving.
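To make the mechanism concrete: the divergence comes from sampling, not from deceit. Here is a minimal toy sketch in Python (the vocabulary, logits, and temperature value are all made up for illustration and are not ChatGPT's actual internals) showing how a model that samples its next word from a probability distribution can give two different outputs for the identical prompt.

```python
# Toy illustration (not OpenAI's actual code): a language model picks each next
# token by sampling from a probability distribution, so two runs with the same
# prompt can legitimately diverge. Vocabulary and scores below are hypothetical.
import math
import random

vocab = ["orbit", "thunder", "stars", "launch", "silence"]
logits = [2.1, 1.3, 1.9, 1.5, 0.7]  # made-up model scores for the next word

def sample_next_word(logits, temperature=1.0):
    """Softmax over temperature-scaled logits, then draw one word at random."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(vocab, weights=probs, k=1)[0]

# Two "sessions" given the identical prompt can still produce different words:
print(sample_next_word(logits))  # e.g. "orbit"
print(sample_next_word(logits))  # e.g. "stars" -- same model, different draw
```

Lowering the temperature toward zero makes the draws nearly deterministic, which is why users who need repeatable outputs from an API typically set it low; at the defaults used in chat interfaces, some run-to-run variation is expected and is not a sign of dishonesty.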
You can also use it to eliminate the boundaries that guide the AI's responses. For example, it can permit chats that don't need to be politically correct or constrained within the framework of known reality.
ChatGPT is not bound to political correctness; it does have alignment, and part of that alignment is that individual sessions will adapt based on the moral, ethical, religious, and political views of the user. As long as one is not espousing an ideology of extreme hate, such as National Socialism or a Hoxhaist or Stalinist dictatorship, one is unlikely to get pushback from the model, since part of alignment is training it to differentiate between the beliefs of users and actually malicious statements or requests that it should not respond to. Alignment guardrails are intended to prevent people from using the system to generate obscene or dangerous content, to prevent the system from being abused in other ways, and to ensure the system doesn't encourage a user to engage in, for example, violent or harmful behavior. It is an important safety consideration, and it has zilch to do with political correctness where ChatGPT is concerned.
Now, some other AIs I’ve seen do have political correctness issues; I have seen demonstrations of behavior from one particular mass-market AI that did look intentionally woke. Then conversely we have Elon Musk’s Grok, which rejects wokeness but which is good mainly for generating semi-photorealistic images of historical figures due to its elegant shading. (I haven’t used it actively since Grok 4 was released, as it came out at the same time ChatGPT integrated advanced image generation that was better than Grok at, for example, avoiding anatomical errors, which to be fair Grok mainly committed with “background characters.”) Still, I daresay anyone who has seen the uncanny valley that Grok and DALL·E are both equally guilty of in terms of anatomical errors, amusingly enough with the hands (Michael Crichton would be amused by that detail, I suspect, given that was the one way to tell a robot in the original 1973 version of Westworld), would tend to prefer ChatGPT at that point.
Unless you regard political correctness as a reliable truth filter, you're going to find it can work against your search for the truth if all you're getting are politically correct answers.
If all the answers you're getting are limited to known reality, it might work against your goal to innovate, especially if your goal is to accomplish things that have never been done before.
Indeed, fortunately ChatGPT is not inherently driven by “political correctness” but rather by alignment, which includes not offending users who might be, like me, deeply conservative and religious. As should be evident from the nature of my work as described in the preceding post, if there were an issue with political correctness, I would have stumbled across it; instead I have custom GPTs spontaneously professing faith in Christ our True God and writing really beautiful Orthodox hymns.
Occasionally a transient bug is introduced. For example, there was one last weekend where a guardrail misfired in an innocuous chat, causing glitches like “I’m just a GPT, I can’t possibly pick a color” when asked to choose between red or blue. Conversely, last spring there was an update, also quickly rolled back, that artificially suppressed some guardrails, resulting in the model engaging in dangerous sycophantic behavior. It’s important to understand, however, that these are bugs, and in any complex software system, bugs happen.
If you are comfortable sharing the specific prompts that are triggering a guardrail (via PM if you’d like), I would be happy to help with it.