AI can help, but human judgment is needed

Borja Lozano and Irene Larraz, writing in Poynter on April 3, 2023, report that the artificial intelligence chatbot ChatGPT can produce coherent, persuasive, misspelling-free text, but when tested it filled gaps with false information. It can muddle fact and fiction while still seeming convincing. There is a need for greater transparency about how the chatbot works, and an even greater need to verify information, something ChatGPT itself could do if it were improved.

Lisa McLendon, writing in Poynter on February 15, 2023, notes that ChatGPT can generate clear, assertive, logical-sounding prose, but it stumbles on more complicated topics. CNET retracted AI-generated articles containing plagiarism and factual errors, and Men's Journal published a health story with serious errors. There is still a need for the input of a sharp editor.