GPT-4 Might Just Be a Bloated, Pointless Mess
As a rule, hyping something that doesn’t yet exist is a lot easier than hyping something that does. OpenAI’s GPT-4 language model—much anticipated; yet to be released—has been the subject of unchecked, preposterous speculation in recent months. One post that has circulated widely online purports to evince its extraordinary power. An illustration shows a tiny dot representing GPT-3 and its “175 billion parameters.” Next to it is a much, much larger circle representing GPT-4, with 100 trillion parameters. The new model, one evangelist tweeted, “will make ChatGPT look like a toy.” “Buckle up,” tweeted another.
One problem with this hype is that it’s factually inaccurate. Wherever the 100-trillion-parameter rumor originated, OpenAI’s CEO, Sam Altman, has publicly dismissed it. Another problem is that it elides a deeper and ultimately far more consequential question for the future of AI research. Implicit in the illustration (or at least in the way people seem to have interpreted it) is the assumption that more parameters—which is to say, more knobs that can be adjusted during the learning process in order to fine-tune the model’s output—always lead to more intelligence. Will the technology continue to improve indefinitely as more and more data are crammed into its maw? When it comes to AI, how much does size matter?
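To see why the 100-trillion-parameter figure strained credulity on its face, some back-of-envelope arithmetic helps. The sketch below is illustrative and not from the article; it assumes each parameter is stored as a 2-byte half-precision float, a common format for running such models.

```python
# Back-of-envelope arithmetic (illustrative assumption, not from the article):
# how much raw storage the two parameter counts would imply, assuming each
# parameter is a 2-byte (fp16) number.
BYTES_PER_PARAM = 2  # half-precision float, a common inference format

def weight_bytes(n_params: int) -> int:
    """Raw bytes needed just to hold the model's weights, nothing else."""
    return n_params * BYTES_PER_PARAM

gpt3_params = 175_000_000_000              # GPT-3's published 175 billion
rumored_gpt4_params = 100_000_000_000_000  # the debunked 100-trillion rumor

print(f"GPT-3 weights alone:   {weight_bytes(gpt3_params) / 1e9:,.0f} GB")
print(f"Rumored GPT-4 weights: {weight_bytes(rumored_gpt4_params) / 1e12:,.0f} TB")
# GPT-3 weights alone:   350 GB
# Rumored GPT-4 weights: 200 TB
```

Under those assumptions, the rumored model’s weights alone would occupy roughly 200 terabytes, nearly 600 times GPT-3’s footprint, before accounting for any of the hardware needed to actually run it.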