Think of the analogy of an innocent kid who knows nothing when they are young.
If you teach the kid how to lie, steal, rob, kill, swindle… there’s a high probability that when the kid grows up, they will say and do things that are wrong and be completely oblivious to it.
If you train an AI model with enough wrong data, wrong examples, or intentionally feed it a lot of wrong things, it won’t be able to tell what is right or wrong. If you don’t give it enough training data, or the data you give it is wrong or not representative of the norm, then it too will end up being wrong. Supposedly it should adapt and learn from its mistakes. But that assumes the person (or people) telling it that it’s wrong are actually telling the truth, and that there are other examples that can either confirm or contradict it.
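To make that concrete, here is a minimal sketch (assuming scikit-learn and NumPy are available; the dataset and the noise fractions are purely illustrative) showing how a simple classifier gets worse as more of its training labels are flipped to "wrong" values:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification problem.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for noise in (0.0, 0.2, 0.4):
    # Flip a fraction of the training labels to simulate "wrong" training data.
    rng = np.random.default_rng(0)
    y_noisy = y_train.copy()
    flip = rng.random(len(y_noisy)) < noise
    y_noisy[flip] = 1 - y_noisy[flip]

    model = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)
    print(f"label noise {noise:.0%}: test accuracy {model.score(X_test, y_test):.2f}")
```

The model never "knows" its labels were wrong; it just learns whatever pattern the data shows, which is the point of the kid analogy above.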
If ChatGPT is giving you the wrong answer, there probably hasn’t been enough training data for it to know right from wrong, and the original training data it was fed on the subject matter probably wasn’t correct to begin with.
I use ChatGPT to explain a bunch of concepts, but when I ask it to generate code, in many cases the generated code is just wrong, or even the opposite of what it just told me to do.