Trigger Warning: Out of Topic ; Description of Racism
One must understand that ChatGPT 4 is really great when it can iterate over a response, that is, when someone points out the obvious issues. Given a second or third chance, it fixes things pretty fast. But if it can't... it sucks at times.
Here are a few examples:
Q: Why was Chef in South Park afraid of Cartman's ghost costume in Pinkeye?
ChatGPT 4:
It misses the point entirely! It misses the context entirely (Chef being African American, and the ghost costume looking suspiciously like a KKK robe). It misses the chronology too, but those are details. When probed, it finally realized there is more at play:
ChatGPT 4:
These answer bots like @Raptor33 are not holding a conversation but generating answers from a block of knowledge, which precludes conversation-like steps.
The answer is still wrong in the details, like the chronology of the episode: Eric Cartman first dressed as Hitler, then his teacher makes him a "nice costume of a scary ghost". The point being that certain white folks do not even realise the problem of racism that is casually present in society. But that's okay. When probed, it answered the part I was looking for. It fell flat trying to justify its earlier answer. Which is okay!
The trouble is, when these bots are used in very detail-oriented and specific domains like immigration, they can give very broken and very false information to unsuspecting folks, without a mechanism of conversation to correct them.
The worst part? Their perfect English and well-formatted answers fool humans, who take cues of authority from the form of the answer. The answer looks like it could be printed in a textbook! How could it be wrong? Well... that's where it fails and is dangerous.