Originally Posted by teslagirl
I'm an engineer, and my greatest fear with this is that nobody checks the answers they get from AI systems. Believe it or not, those answers can contain significant flaws. Back in the 1980s there was a huge push to use AI Expert Systems to replace expensive engineers. That effort collapsed for exactly this reason: no one checked the answers, so there were catastrophic failures. The only people who could check the answers were the experts the systems had replaced. They had to hire us all back.
For an example with ChatGPT: I presented a coding problem and asked the AI to write me a solution in Java. It did. And, for a couple of test cases, the program worked correctly. But when presented with a use case that was not in its training set, the answer was wrong. Very wrong. If this program had been put into a production system, it would have caused a catastrophic failure. (The error had to do with how Java handles integer division: dividing one int by another truncates toward zero, which silently produces very wrong answers.)
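For anyone who hasn't run into this in Java before, here is a small made-up illustration of the kind of truncation bug I mean (it is not the actual code the AI wrote for me, just a sketch of the failure mode):

public class IntegerDivisionBug {
    // Intended to return the fraction of tests that passed, e.g. 3 of 4 -> 0.75
    static double passRateWrong(int passed, int total) {
        return passed / total; // BUG: int / int truncates to 0 before widening to double
    }

    static double passRateFixed(int passed, int total) {
        return (double) passed / total; // cast first so the division is floating-point
    }

    public static void main(String[] args) {
        System.out.println(passRateWrong(3, 4)); // prints 0.0
        System.out.println(passRateFixed(3, 4)); // prints 0.75
    }
}

The wrong version compiles, runs, and even looks plausible; it only fails when someone actually checks the output against a known answer, which is exactly the step that gets skipped.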
I teach coding, and I know students use this for homework solutions. It forces me to work very hard to devise problems with "quirky" behavior that a student should catch but the AI doesn't. There is also a complementary system to ChatGPT that uses a similar algorithm to detect answers created by the ChatGPT algorithm. That should help cut down on its use for academic cheating, but it will not help in industry if no one is checking and proofreading the answers the AI creates.