ChatGPT answers programming questions incorrectly 52% of the time
A recent study from Purdue University (USA) found that ChatGPT answered up to 52% of computer programming questions incorrectly.
The research was presented at the Computer-Human Interaction Conference held in Hawaii (USA) earlier this month. The researchers collected 517 programming questions from the Stack Overflow platform and asked ChatGPT to answer them.
The results showed that 52% of ChatGPT's answers contained incorrect information, and 77% of the answers were verbose and cumbersome. Even so, thanks to their 'easy to understand' and 'fluent' language style, ChatGPT's answers were still preferred by 35% of survey participants despite the large amount of wrong information they contained.
The study also found that in 39% of cases, the programmers surveyed failed to detect the errors in ChatGPT's answers.
The Purdue findings raise concerns about the reliability of ChatGPT, a tool widely expected to drive a breakthrough in the field of AI.
The race to build the smartest and most reliable chatbot is currently drawing in technology giants such as Meta, Microsoft, and Google.
Google's new AI-integrated search engine is also facing criticism for frequently providing false information from unreliable sources.