Study Finds ChatGPT Is Incorrect for Over Half of Software Questions

A study suggests that while ChatGPT offers convenient conversational answers, it may not be suitable for software engineering questions despite that convenience.

A new study by Purdue University shows that ChatGPT answers less than half of software engineering questions correctly.
The first comprehensive study of ChatGPT's answers to more than five hundred coding questions originally posted on Stack Overflow found that over 50% of its responses were incorrect.
ChatGPT is popular among software engineers for its convenience and quick responses. However, the language model is trained on a wide range of internet text and does not have real-time access to up-to-date information. It may not always provide accurate or reliable answers, particularly in specialized domains like software engineering. Relying solely on ChatGPT and other generative AI tools may cause harm and put programmers' professional reputations at risk. When seeking reliable answers to software engineering questions, it is preferable to confirm them with authoritative sources, such as reputable websites or experienced professionals.

This study and other empirical findings call the validity and dependability of ChatGPT into question. While AI chatbots are a practical tool for quick information lookups, they can also produce false yet persuasive answers. This is a problem common to all AI chatbots: it makes it easier for misinformation to proliferate and can lead to biased or even harmful content. Ultimately, it highlights these tools' lack of context awareness and reasoning, and underscores the value of human expertise, particularly in software engineering.
