
On the Mathematical Problem of Human Creativity

It is often easier to verify a correct solution to a problem than it is to produce the solution in the first place. The P versus NP challenge, perhaps the most important of the open questions in mathematics, asks whether every problem whose solutions can be checked efficiently by a computer can also be solved efficiently.

To draw an analogy, imagine you are putting together a jigsaw puzzle of a featureless image, such as a picture of clear blue sky. Trying all the possible combinations of pieces to see if they fit together is difficult; to say it would take a long time is an understatement. However, once the jigsaw is complete, it is easy to check that it has been done correctly. More rigorous definitions of efficiency are expressed mathematically in terms of how quickly an algorithm’s running time grows as the problem gets bigger—as more pieces are added to the jigsaw. The set of problems that can be solved quickly (in what is known as Polynomial time) is called P. A bigger group of problems whose solutions can be checked quickly, but not necessarily found quickly, is known as NP (which stands for Nondeterministic Polynomial time). P problems are a subset of NP problems, since by solving a problem quickly we have automatically verified the solution we find.

Now imagine building an algorithm to complete a generic jigsaw puzzle. If the puzzle problem is in P, then the time taken to solve it might depend on the number of pieces, its square, its cube, or even higher powers of the number of pieces. For example, if the algorithm’s running time depends on the square of the number of pieces, then it might take four (2²) seconds to complete a two-piece jigsaw, one hundred (10²) seconds for a ten-piece jigsaw, and ten thousand (100²) seconds for a hundred-piece jigsaw. This sounds like a relatively long time, but it’s still in the realm of only a few hours. However, if the problem is in NP and no polynomial-time algorithm is known for it, then the time taken to solve it might grow exponentially with the number of pieces. A two-piece jigsaw might still take 4 (2²) seconds to solve, but a ten-piece jigsaw could take 1,024 (2¹⁰) seconds and a hundred-piece jigsaw 1,267,650,600,228,229,401,496,703,205,376 (2¹⁰⁰) seconds—vastly outstripping the time elapsed since the Big Bang. Both algorithms take longer to complete with more pieces, but algorithms to solve generic NP problems quickly become unserviceable as the problem size increases. For all intents and purposes, P might as well stand for those problems that can be solved Practically and NP for Not Practically.
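
To see the difference concretely, here is a small Python sketch (not from the book, purely an illustration of the arithmetic above) comparing a running time that grows like the square of the number of pieces with one that doubles with every extra piece:

```python
# Illustrative only: compare polynomial (n squared) and exponential (2**n)
# running times, in seconds, for the piece counts used in the text.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for pieces in (2, 10, 100):
    polynomial = pieces ** 2     # e.g. 100 pieces -> 10,000 seconds
    exponential = 2 ** pieces    # e.g. 100 pieces -> 2**100 seconds
    print(f"{pieces:>3} pieces: polynomial {polynomial:,} s, "
          f"exponential {exponential:,} s "
          f"(about {exponential / SECONDS_PER_YEAR:.2g} years)")
```

For a hundred pieces the exponential column works out to roughly 4 × 10²² years, which is where the comparison with the age of the universe comes from.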

The P versus NP challenge asks whether all the problems in the NP class, which can be checked quickly but for which there is no known quick solution algorithm, are in fact also members of the P class. Could it be that the NP problems have a practical solution algorithm, but we just haven’t found it yet? In mathematical shorthand, does P equal NP? If it does, then, as we shall see, the potential implications, even for everyday tasks, are huge.

*

Rob Fleming, the protagonist of Nick Hornby’s classic 1990s novel, High Fidelity, is the music-obsessed owner of the secondhand-record store Championship Vinyl. Periodically, Rob reorganizes his enormous record collection according to different classifications: alphabetical, chronological, and even autobiographical (telling his life story through the order in which he bought his records). Quite apart from being a cathartic exercise for music lovers, sorting enables data to be interrogated quickly and reordered to display its different nuances. When you click the button that allows you to toggle between ordering your emails by date, sender, or subject, your email service is implementing an efficient sorting algorithm. eBay implements a sorting algorithm when you choose to look at the items matching your search term by “best match,” “lowest price,” or “ending soonest.” Once Google has decided how well web pages match the search terms you entered, the pages need to be quickly sorted and presented to you in the right order. Efficient algorithms that achieve this goal are highly sought after.


One way to sort a number of items might be to make lists with the records in every possible permutation, then check each list to see if the order is correct. Imagine we have an incredibly small record collection comprising one album each by Led Zeppelin, Queen, Coldplay, Oasis, and Abba. With just these five albums we already have 120 possible orderings. With six we have 720, and with ten there are already over 3 million permutations. The number of different orderings grows so rapidly with the number of records that, for any self-respecting record fan’s collection, considering every possible list is simply not feasible.
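
For readers who want to check the arithmetic, the count of possible orderings is the factorial of the number of records, which a couple of lines of Python (again, purely illustrative) will confirm:

```python
# n records can be shelved in n! (n factorial) different orders.
from math import factorial

for n in (5, 6, 10, 30):
    print(f"{n:>2} records: {factorial(n):,} possible orderings")
```

Five records give 120 orderings, ten give 3,628,800, and thirty give a thirty-three-digit number.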

Fortunately, as you probably know from experience, sorting out your record collection, books, or DVDs is a P problem—one of those for which there is a practical solution. The simplest such algorithm is known as bubble sort and works as follows. We abbreviate the artists in our meager record collection to L, Q, C, O, and A and decide to organize them alphabetically. Bubble sort looks along the shelf from left to right and swaps any neighboring pairs of records that it finds out of order. It keeps passing over the records until no pairs are out of order, meaning that the whole list is sorted. On the first pass, L stays where it is as it comes before Q in the alphabet, but when Q and C are compared, they are found to be in the wrong order and swapped. The bubble sort carries on by swapping Q with O and then with A as it completes its first pass, leaving the list as L, C, O, A, Q. By the end of this pass, Q has been “bubbled” to its rightful place at the end of the list. On the second pass, C is swapped with L and A is promoted over O, so that O now resides in the correct place: C, L, A, O, Q. We need two more passes before A makes its way to the front and the list is alphabetically ordered.
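
In code, the whole procedure takes only a few lines. This is a minimal sketch of the bubble sort just described, applied to the abbreviated collection (an illustration, not an excerpt from any production system):

```python
# Bubble sort: sweep left to right, swapping any neighbouring pair that
# is out of order, and repeat until a full pass makes no swaps.
def bubble_sort(records):
    records = list(records)              # work on a copy
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(records) - 1):
            if records[i] > records[i + 1]:
                records[i], records[i + 1] = records[i + 1], records[i]
                swapped = True
    return records

print(bubble_sort(["L", "Q", "C", "O", "A"]))   # ['A', 'C', 'L', 'O', 'Q']
```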

With five records to sort we had to trawl through the unsorted list four times, doing four comparisons each time. With ten records we would have to undertake nine passes, with nine comparisons each time. This means that the amount of work we have to do during the sort grows almost like the square of the number of objects we are sorting. If you have a large collection, then this is still a lot of work, but thirty records would take hundreds of comparisons instead of the trillions upon trillions of possible permutations we might have to check with a brute-force algorithm listing all the possible orders. Despite this huge improvement, the bubble sort is typically derided as inefficient by computer scientists. In practical applications, such as Facebook’s news feed or Instagram’s photo feed, where billions of posts have to be sorted and displayed according to the tech giants’ latest priorities, simple bubble sorts are eschewed in favor of more recent and more efficient cousins. The “merge sort,” for example, divides the posts into small groups, which it then sorts quickly and merges together in the right order.
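
The merge sort itself is only slightly longer to write down. The sketch below shows the textbook version of the divide-and-merge idea (it is not, of course, the actual feed-ranking code used by any of these companies):

```python
# Merge sort: split the list in half, sort each half recursively,
# then merge the two sorted halves back together in order.
def merge_sort(items):
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort(["L", "Q", "C", "O", "A"]))    # ['A', 'C', 'L', 'O', 'Q']
```

Because each level of splitting and merging touches every item only once, the work grows roughly in proportion to n log n rather than the square of n, which is why variants of this approach are preferred for very long lists.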

In the buildup to the 2008 US presidential election, shortly after declaring his candidacy, John McCain was invited to speak at Google to discuss his policies. Eric Schmidt, Google’s then CEO, joked with McCain that running for the presidency was much like interviewing at Google. Schmidt then asked McCain a genuine Google interview question: “How do you determine good ways of sorting one million thirty-two-bit integers in two megabytes of RAM?” McCain looked flummoxed, and Schmidt, having had his fun, quickly moved on to his next serious question.

Six months later, when Barack Obama was in the hot seat at Google, Schmidt threw him the same question. Obama looked at the audience, wiped his eye, and started off, “Well, er . . .” Sensing Obama’s awkwardness, Schmidt tried to step in, only for Obama to finish, looking Schmidt straight in the eye, “No, no, no, no, I think the bubble sort would be the wrong way to go,” to huge applause and cheers from the assembled computer scientists in the crowd. Obama’s unexpectedly erudite response—sharing an in-joke about the inefficiency of a sorting algorithm—was characteristic of the seemingly effortless charisma (afforded by meticulous preparation) that characterized his whole campaign, eventually bubbling him all the way up to the White House.

*

With efficient sorting algorithms at hand, it’s great to know that next time you want to reorder your books or reshuffle your DVD collection it won’t take you longer than the lifetime of the universe.


In contrast to this, some problems are simple to state, but might require an astronomical amount of time to solve. Imagine that you work for a big delivery company such as DHL or UPS and you have a number of packages to deliver during your shift, before dropping your van back at the depot. Since you get paid by the number of packages you deliver and not the time you spend doing the deliveries, you want to find the quickest route to visit all of your drop-off points. This is the essence of an old and important mathematical conundrum known as the traveling salesman problem.

As the number of locations to which you have to deliver increases, the problem gets extremely tough extremely quickly in a “combinatorial explosion.” The number of possible routes grows even faster than exponentially as you add new locations. If you start with thirty places to deliver to, then you have thirty choices for your first drop-off, twenty-nine for your second, twenty-eight for your third, and so on. This gives a total of 30 × 29 × 28 × . . . × 3 × 2 different routes to check. The number of routes with just thirty destinations is roughly 265 nonillion—that’s 265 followed by thirty zeros. This time, though, unlike with the sorting problem, there is no known shortcut—no practical algorithm that runs in polynomial time to find the answer. Verifying that a proposed route really is the shortest is just as hard as finding it in the first place, since in principle every other possible route would need to be checked as well.
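
A brute-force approach can be written down easily enough; the trouble is running it. The sketch below (with made-up coordinates, purely for illustration) tries every ordering of a handful of drop-off points and keeps the shortest round trip, which is exactly the strategy that stops being feasible long before thirty destinations:

```python
# Brute-force traveling salesman: try every ordering of the drop-off
# points and keep the shortest round trip starting and ending at the depot.
from itertools import permutations
from math import dist

depot = (0, 0)
stops = [(2, 3), (5, 1), (6, 6), (1, 5), (4, 4)]   # hypothetical drop-offs

def route_length(order):
    points = [depot, *order, depot]
    return sum(dist(a, b) for a, b in zip(points, points[1:]))

best = min(permutations(stops), key=route_length)
print(round(route_length(best), 2), best)
```

With five stops there are only 120 orderings to try; with thirty there are roughly 265 nonillion, which is why nobody solves real routing problems this way.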

Back at delivery headquarters, a logistics manager might be attempting to assign the deliveries to be made each day to a number of different drivers, while also planning their optimal routes. This related task, the vehicle-routing problem, is even harder than the traveling salesman one. These two challenges appear everywhere—from planning bus routes across a city, collecting the mail from postboxes, and picking items off warehouse shelves, to drilling holes in circuit boards, making microchips, and wiring up computers.

The only redeeming feature of all of these problems is that, for certain tasks, we can recognize good solutions when they are put in front of us. If we ask for a delivery route that is shorter than one thousand miles, then we can easily check if a given solution fits the bill, even if it is not easy to find such a path in the first place. This is known as the decision version of the traveling salesman problem, for which we have a yes-or-no answer. It is one of the NP class of problems for which finding solutions is hard, but checking them is easy.
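
Checking a candidate route against such a threshold really is easy, and a couple of illustrative lines make the point (the coordinates and the thousand-mile figure are just placeholders):

```python
# Decision version: is this proposed route shorter than the threshold?
# Verifying takes just one addition and comparison per leg of the journey.
from math import dist

def is_short_enough(route, threshold):
    legs = zip(route, route[1:])
    return sum(dist(a, b) for a, b in legs) <= threshold

proposed = [(0, 0), (2, 3), (5, 1), (6, 6), (1, 5), (0, 0)]  # placeholder route
print(is_short_enough(proposed, 1000))   # quick to check, hard to find
```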

Despite these difficulties, exact solutions can still be found for some specific sets of destinations, even if no practical general method is known. Bill Cook, a professor of combinatorics and optimization at the University of Waterloo in Ontario, spent nearly 250 years of computer time on a parallel supercomputer calculating the shortest route between all the pubs in the United Kingdom. The giant pub crawl takes in 49,687 establishments and is just forty thousand miles long—on average one pub every 0.8 miles. Long before Cook started on his calculations, Bruce Masters from Bedfordshire in England was doing his own practical version of the problem. He holds the aptly titled Guinness World Record for visiting the most pubs. By 2014, the sixty-nine-year-old had had a drink in 46,495 different watering holes. Since starting in 1960, Bruce estimates that he has traveled over a million miles in his quest to visit all the UK’s pubs—over twenty-five times longer than Bill Cook’s most efficient route. If you’re planning a similar odyssey for yourself, or even just a local pub crawl, it’s probably worth consulting Cook’s algorithm first.

*

The vast majority of mathematicians believe that P and NP are fundamentally different classes of problems—that we will never have fast algorithms to dispatch salespeople or route vehicles. Perhaps that’s a good thing. The yes-no “decision version” of the traveling salesman problem is the canonical example of a subgroup of problems known as NP-complete. A powerful theorem tells us that if we ever come up with a practical algorithm that solves one NP-complete problem, then we would be able to transmute this algorithm to solve any other NP problem, proving that P equals NP—that P and NP are in fact the same class of problems. Since almost all internet cryptography relies on the difficulty of solving certain NP problems, proving P equals NP could be disastrous for our online security.

On the plus side, though, we might be able to develop fast algorithms to solve all sorts of logistical problems. Factories could schedule tasks to operate at maximum efficiency, and delivery companies could find efficient routes to transport their packages, potentially bringing down the price of goods—even if we could no longer safely order them online! In the scientific realm, proving P equals NP might provide efficient methods for computer vision, genetic sequencing, and even the prediction of natural disasters.

Ironically, although science may be a big winner if P equals NP, scientists themselves may be the biggest losers. Some of the most astounding scientific discoveries have relied enormously on the creative thinking of highly trained and dedicated individuals, embedded deeply in their fields: Darwin’s theory of evolution by natural selection, Andrew Wiles’s proof of Fermat’s Last Theorem, Einstein’s theory of general relativity, Newton’s equations of motion. If P equals NP, then computers would be able to find formal proofs of any mathematical theorem that is provable—many of the greatest intellectual achievements of humankind might be reproduced and superseded by the work of a robot. A lot of mathematicians would be out of a job. At its heart, it seems, P versus NP is the battle to discover whether human creativity can be automated.

__________________________________


From The Math of Life and Death: 7 Mathematical Principles That Shape Our Lives by Kit Yates. Used with the permission of Scribner. Copyright © 2020 by Kit Yates.
