
Theoretical computer science is a scientific discipline concerned with the kinds of problems that computers can solve. A computer solves a problem by following a certain set of instructions. Formally, such a set of instructions is known as an algorithm; in everyday life, we often call it a “computer program” or an “app”.

Some algorithms solve a problem with very few instructions, while others need many. The number of instructions that an algorithm has to carry out determines whether it is fast or slow. Because everyone wants a faster computer, there is great interest in finding the fastest algorithm for every problem. This is one of the things that occupies theoretical computer scientists.

It turns out that there are two common cases for a problem: either there exists a “fast” (polynomial-time) algorithm, or only “extremely slow” (NP-complete) algorithms can be found. Problems can therefore be categorised as either fast or extremely slow; how long an algorithm takes, as a function of the size of its input, is known as its time complexity. An example of a fast problem is finding a document in a folder, which is as easy as telling the computer to read a list of documents until it finds the one it is looking for. An example of an extremely slow problem is finding the shortest round trip between a large number of cities, for which the computer has to compute the distance of almost every possible route before it can tell which one is the shortest.
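To make the contrast concrete, here is a small sketch (not from the article; the function names and data are made up for illustration) of the two examples above: scanning a folder takes at most one step per document, while the brute-force round trip tries every ordering of the cities.

```python
from itertools import permutations
import math

# Fast: finding a document by scanning a list -- at most n steps for n documents.
def find_document(documents, wanted):
    for i, doc in enumerate(documents):
        if doc == wanted:
            return i
    return -1  # not found

# Extremely slow: brute-force shortest round trip -- tries (n-1)! routes.
# `dist` is a nested mapping giving the distance between any two cities.
def shortest_round_trip(cities, dist):
    start, rest = cities[0], cities[1:]
    best_route, best_length = None, math.inf
    for perm in permutations(rest):
        route = (start,) + perm + (start,)  # leave from and return to `start`
        length = sum(dist[a][b] for a, b in zip(route, route[1:]))
        if length < best_length:
            best_route, best_length = route, length
    return best_route, best_length
```

With 10 documents the first function does at most 10 steps; with 10 cities the second already has to inspect 362,880 routes, and every extra city multiplies that number again.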

But there exist a handful of problems that computer scientists have found not to be extremely slow, but that they also cannot show to be fast. These problems seem to be stuck somewhere in between.

One of the big questions in computer science is therefore whether these problems are truly special, or whether a fast algorithm exists that simply hasn’t been found yet. One of these in-between problems is the “graph isomorphism problem”. This problem asks the computer to determine whether two graphs (simple networks) can be made to look precisely the same if the computer is only allowed to slide the nodes and stretch the connections (so the computer is not allowed to draw or erase nodes or connections). Basically, this is just a very advanced version of untangling the Christmas decorations.

One drawing of the Petersen graph [1]

Another drawing of the Petersen graph [2]
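The naive way to answer this question, sketched below, is to try every possible relabelling of the nodes of one graph and check whether it reproduces the other. This is emphatically not Babai’s method, just an illustration (with made-up inputs) of why the problem gets hard: the number of relabellings explodes with the number of nodes.

```python
from itertools import permutations

# Brute-force graph isomorphism check: try every bijection between the node
# sets and test whether it maps every edge of A onto an edge of B.
def are_isomorphic(nodes_a, edges_a, nodes_b, edges_b):
    if len(nodes_a) != len(nodes_b) or len(edges_a) != len(edges_b):
        return False
    edge_set_b = {frozenset(e) for e in edges_b}
    for perm in permutations(nodes_b):
        mapping = dict(zip(nodes_a, perm))  # candidate relabelling
        if all(frozenset((mapping[u], mapping[v])) in edge_set_b
               for u, v in edges_a):
            return True
    return False
```

For two drawings of the Petersen graph (10 nodes each) this loop already has to consider up to 10! = 3,628,800 relabellings; for graphs with a few hundred nodes, brute force is hopeless.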

It has been known for a long time that there exists a slow but not extremely slow algorithm for this problem, but there hadn’t been any improvements for more than thirty years. Until now, that is.

In December 2015, Laszlo Babai presented a new algorithm for the graph isomorphism problem that is “almost fast” (quasi-polynomial time). It is not fast enough to be a truly fast algorithm, but it is so close that the difference is hard to spot.
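To get a feel for what “almost fast” means, the sketch below compares three stylised running-time formulas: a polynomial one (n²), a quasi-polynomial one (n raised to the power log n, the general shape of quasi-polynomial growth), and an exponential one (2ⁿ). These particular formulas are chosen for illustration and are not the exact running times of any of the algorithms mentioned above.

```python
import math

# Stylised running times for an input of size n (illustrative formulas only).
def growth(n):
    polynomial = n ** 2                    # "fast"
    quasi_polynomial = n ** math.log2(n)   # "almost fast"
    exponential = 2 ** n                   # "extremely slow"
    return polynomial, quasi_polynomial, exponential

for n in (10, 20, 40):
    p, q, e = growth(n)
    print(f"n={n:3d}  poly={p:,}  quasi-poly={q:,.0f}  exp={e:,}")
```

For large inputs the quasi-polynomial column is vastly smaller than the exponential one, even though it still grows faster than any fixed polynomial: that is the sense in which Babai’s algorithm comes close to fast without quite being fast.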

Many computer scientists view this as one of the biggest breakthroughs of recent years in theoretical computer science, or at least as a huge step forward. Quanta Magazine has written an excellent long article about the graph isomorphism problem, the way the new algorithm works, and what it means to computer scientists.

1. Public Domain, https://commons.wikimedia.org/w/index.php?curid=357408
2. Public Domain, https://commons.wikimedia.org/w/index.php?curid=358837

### How to Plan for a Pandemic? An interview with Hans Heesterbeek•

Prof. dr. ir. Hans Heesterbeek, one of the leading infectious disease epidemiologists in the Netherlands, is a busy man. Fortunately for us, he found a gap in his schedule to once again answer some questions, this time asked not by professional journalists but by us, four students from Eindhoven University of Technology.

### A big breakthrough in the Euclidean Travelling Salesman Problem••

Recently there has been a breakthrough in the field of algorithms for geometric network problems, concerning the complexity of the Euclidean Travelling Salesman Problem.

### Is it easier to find half a needle than a full needle in a random network?•••

Common sense tells us that objects of comparable size should be equally hard to find. Yet, when searching inside a random network, surprises await . . .