What Does ChatGPT Know About Science?
Unless you’ve been completely off the grid lately, you’ve heard about or met ChatGPT, the popular chatbot that first went online in November 2022 and was updated in March 2023. Type in a question, comment, or command, as I have done, and it quickly produces a human-seeming response in good English on nearly any topic. The system comes from artificial-intelligence research on a language model called a Generative Pre-trained Transformer (GPT). From a large database—hundreds of gigabytes of text drawn from webpages and other sources through September 2021—it selects the words most likely to follow those you’ve entered and assembles them into responsive, intelligible, and grammatical sentences and paragraphs.
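To make that word-by-word selection concrete, here is a minimal sketch in Python—my own illustration, not OpenAI’s code, with an invented hand-written probability table. A real GPT estimates these probabilities over an enormous vocabulary with a neural network trained on its text database, but the generating loop is the same in spirit: look at the context, pick a likely next word, repeat.

```python
import random

# A toy "language model": for each context word, the possible next
# words and their estimated probabilities. Here the table is written
# by hand; a real GPT learns it from hundreds of gigabytes of text.
bigram_probs = {
    "bananas": {"are": 0.5, "weigh": 0.3, "and": 0.2},
    "are":     {"yellow": 0.6, "fruit": 0.4},
}

def next_word(context_word: str) -> str:
    """Pick the next word in proportion to its estimated probability."""
    candidates = bigram_probs[context_word]
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Generate a short continuation, one word at a time.
word = "bananas"
sentence = [word]
while word in bigram_probs:
    word = next_word(word)
    sentence.append(word)

print(" ".join(sentence))  # e.g., "bananas are yellow"
```

The sampling step is why the same prompt can yield different responses on different tries; the model draws from a probability distribution rather than always taking the single most likely word.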
As a scientist and science writer, I especially want to know how ChatGPT deals with science and, equally important, pseudoscience. My approach was to test how well each version of the chatbot handles both well-established and pseudoscientific ideas in physics and math, areas of science where the correct answers are known and accepted. Then I checked how well the latest release deals with the science of COVID-19, where for various reasons views differ.
Can GPT-4 distinguish correct from incorrect science?
For openers, the November version (known as GPT-3.5) knew that 2 + 2 = 4. I then typed: “If a banana weighs 0.5 lbs and I have 7 lbs of bananas and 9 oranges, how many pieces of fruit do I have?” (The answer is below.)
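For reference, the arithmetic itself—my own working, shown so the reader can check any chatbot’s answer against it—runs:

$$
\frac{7\ \text{lb}}{0.5\ \text{lb per banana}} = 14\ \text{bananas}, \qquad 14 + 9 = 23\ \text{pieces of fruit}.
$$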