The collaboration, reignited after the challenges posed by the pandemic, had its roots in a successful joint meeting in 2018 at the Indian Statistical Institute, Kolkata, where 70 participants engaged in fruitful discussions. Building on this momentum, several one-day workshops were organized in The Netherlands in 2017 and 2018, setting the stage for the latest gathering in Bangalore.

The ICTS campus, established in 2015, provided an ideal backdrop for the workshop. Nestled away from the bustling downtown, the campus exuded a serene atmosphere, boasting beautiful housing facilities and a welcoming canteen. The venue, with its quiet charm and adorned with blackboards, proved to be the perfect setting for discussions among participants.

A total of 45 attendees participated in the workshop, one-third of whom came from The Netherlands and are affiliated with NETWORKS. The agenda featured 20 talks and a comprehensive discussion session. The director of ICTS extended a warm welcome on the second day, acquainting the participants with the various facilities available.

This workshop presented an opportunity for European participants to meet their Indian colleagues, and, arguably more importantly, also for young Indian researchers to meet senior European researchers on the topic of the mathematics of networks. Such an exchange would not have been possible without this physical meeting and the vibrant discussions that it has sparked.

The workshop covered a diverse array of topics, delving into the static and dynamic properties of networks, geometric features, optimization and control issues, and the intersections of network science with physics, computer science, social sciences, and medical sciences.

Tuesday saw students presenting posters, followed by two-minute speeches and engaging discussions with the audience. Midway through the week, a strategic afternoon unfolded, focusing on major future questions in network science. Spearheaded by Remco van der Hofstad (TU/e), Rajesh Sundaresan (IISc), and Diego Garlaschelli (IMT, Lucca and Leiden), this two-hour animated discussion session sparked new research ideas and promising directions. How to decide what is a good network model? How can the mathematical networks community collaborate with its more applied counterparts to understand network statistics? How to use clever control to steer a network in a wanted direction? How can statistical physics ideas be fruitfully used in order to understand network modelling and functionality?

Wednesday morning offered a refreshing break as many participants embarked on a scenic walk to a nearby lake, culminating in a rejuvenating sip of coconut water from a local shop. Thursday brought the group together for a well-deserved evening of drinks and dinner at a pub, providing a delightful respite after four intensive days. The culinary journey throughout the workshop was a highlight, with the canteen offering delicious South Indian breakfasts, diverse lunch options, and satisfying dinners. For those seeking refreshment, the beautiful swimming pool provided a perfect escape.

During the workshop ongoing collaborations were strengthened and new collaborations were set up. Talks were followed by extensive discussion sessions, and for those eager to revisit the presentations, the ICTS YouTube channel offered a digital archive. Looking ahead, there are plans to organize a follow-up workshop in January 2026, promising another exciting chapter in the collaborative journey of NETWORKS.

It's well known that traditional publishing is cut-throat. In general, competition is fierce, margins are tight, and it is costly, specialised, painstaking work to source, polish, market, and distribute high-quality material.

In startling contrast, the segment devoted to **scientific** publishing, with a global market capitalisation similar to that of the film industry, is booming. In one of the most inefficient markets in the world, the main players consistently report to their shareholders profit margins around 30%! Even more startlingly, despite all the most costly/valuable work done for journals on an **unpaid** basis by academics and despite decreasing infrastructure costs for publishers due to technological advances, such eye-watering profits are extracted purely out of the (government-funded) budgets of knowledge institutions and funding agencies worldwide.

To offer an idea of the scale, the annual Dutch share of the profit margin sent to just one of the big academic publishing houses (starting with the letter E) is currently in the ballpark of 5 million euro.

Clearly something has gone awry (and has been for decades), but what are the alternatives?

A new journal project, Innovations in Graph Theory, was conceived against this backdrop. After extensive and meticulous preparations, this high-standard journal launched, with a stellar founding editorial board, in August 2023 at an established European conference in combinatorics. It is hosted on a French nonprofit publishing platform, Centre Mersenne, with startup costs supported through a grant from NWO.

Crucially, IGT is a **diamond** open access journal. The term **diamond** means that neither contributors nor readers incur any costs. The journal is owned by a nonprofit organisation governed by editorial board members.

The diamond model of open access is a key plank in building up sustainable, fair access to scientific publishing. The essential idea for the IGT journal came about by asking the following: can a scientific professional in the field establish and/or maintain their career solely through publication in diamond journals? (One can obviously ask this too for fields other than graph theory.)

Within a chorus of closely-aligned, recent diamond journal launches/flips in discrete mathematics (Discrete Analysis, Advances in Combinatorics, Algebraic Combinatorics, Combinatorial Theory), the IGT initiative is aimed at further fostering the chances that a graph theorist can answer yes to the above question.

If you see six people when you enter a class, it might be difficult to guess their personalities, but there is an amazing thing you can be sure of (and mathematically prove): there are three of them who either all know each other, or who have never met. By contrast, if there are only five people, you might fail to find such a triple: consider five people sitting around a circular table in such a way that everybody knows only their two neighbors. Whichever three people you choose, there will always be two who know each other and two who don't, never all three.

*Figure 1: Five people sitting around a circular table.*

Why does this argument fail when you have six people sitting around a circular table? Think about this for a moment.
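For the five-person circle, the claim can be checked exhaustively with a few lines of code; a minimal sketch in Python, assuming the seating from Figure 1 (each person knows only their two neighbors):

```python
from itertools import combinations

# Five people around a table; each knows only their two neighbours (a 5-cycle).
n = 5
knows = {frozenset((i, (i + 1) % n)) for i in range(n)}

def homogeneous(triple):
    """True if the three people all know each other, or are all strangers."""
    pairs = [frozenset(p) for p in combinations(triple, 2)]
    return all(p in knows for p in pairs) or all(p not in knows for p in pairs)

# None of the C(5,3) = 10 triples is homogeneous, confirming the counterexample.
assert not any(homogeneous(t) for t in combinations(range(n), 3))
```

Changing `n = 5` to `n = 6` shows why the circular table no longer works as a counterexample for six people.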

This simple observation, due to Frank Ramsey in 1930, was the first step of a great theory that mathematicians have been building ever since -- nowadays called Ramsey theory. It mainly concerns the structures that emerge as the group gets larger. People in a class who may (or may not) have met before can be represented as a network of points that may (or may not) be linked to each other by an edge. There is a whole branch of mathematics that represents mathematical problems about networks in that way, called *graph theory*, and Ramsey theory is a subfield of graph theory.

Let us call a group of people *homogeneous* if its members either all know each other or have never met. Suppose instead of a group of three, you want to find a homogeneous group of size four. You can easily convince yourself that if, for instance, there are 10000 people in the class, this should be an easy task. But what is the minimum class size that guarantees the existence of such four people? The example in the figure below shows that you might fail if there are only 17 people:

*Figure 2:* *An example of a graph on 17 vertices with no homogeneous set of size four*

Indeed, 17 is the best possible example! It can be shown that one can always find a homogeneous group of size four in every group of 18 people.
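The 17-vertex example of Figure 2 is, in fact, the standard extremal construction known as the Paley graph on 17 vertices: link $i$ and $j$ whenever $i - j$ is a nonzero square modulo 17. A short script can verify that it contains no homogeneous set of size four (a sketch, assuming the figure shows this standard construction):

```python
from itertools import combinations

# Paley graph on 17 vertices: i ~ j iff (i - j) is a nonzero square mod 17.
# Note -1 is a square mod 17, so the relation is symmetric.
squares = {(x * x) % 17 for x in range(1, 17)}
adj = lambda i, j: (i - j) % 17 in squares

def homogeneous(four):
    pairs = list(combinations(four, 2))
    return all(adj(i, j) for i, j in pairs) or not any(adj(i, j) for i, j in pairs)

# None of the C(17,4) = 2380 quadruples is a clique or an independent set.
assert not any(homogeneous(q) for q in combinations(range(17), 4))
```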

Ready to be surprised? Mathematicians have no answer (yet) for the next obvious question:

**Question:** What is the minimum number of people to guarantee the existence of a homogeneous group of size five?

The question above is just one example of problems in graph theory which are innocent-looking but enormously difficult! Mathematicians have only been able to establish that the answer is a number between 43 and 48. How can this problem not be fully answered? I can hear you saying that a computer could do the job, but it turns out that the number of cases to check is so astronomically large that it is quite beyond the abilities of today's computers.

Let us start formalizing the concepts. In graph theory language, homogeneous groups correspond to substructures in graphs which are either all connected or all disconnected, respectively called *cliques* and *independent sets*. Letting $R(k)$ be the minimum integer $n$ such that every graph on $n$ vertices has a homogeneous substructure of size $k$, the previous discussion is mathematically equivalent to saying that $R(3) = 6$ and $R(4) = 18$. Recall that all we could say about $R(5)$ was that $43 \le R(5) \le 48$, so it seems there is no hope for computing $R(k)$ when $k \ge 5$, desperate! But, mathematicians never give up.

If something is quite difficult to determine exactly, it is natural to try to understand what it looks like by giving good upper and lower bounds. The term *asymptotic behavior* of a function $f(k)$ informally means the categorization of $f(k)$, as $k$ grows towards infinity, in terms of well-understood functions such as $2^k$ or $k^2$. For the case of our problem, the behavior of $R(k)$ as $k$ gets larger, the famous mathematician Paul Erdős, as the founder of the *probabilistic method*, used a probabilistic argument to show that $R(k) \ge 2^{k/2}$. For the upper bound, Ramsey himself proved that $R(k) \le 4^k$ by using a mathematical method called induction. Still a huge gap!
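Erdős' lower-bound argument is short enough to sketch here (a standard calculation in the probabilistic method). Decide for each pair of $n$ people independently, with probability $1/2$ each way, whether they know each other. A fixed set of $k$ people is then homogeneous with probability $2 \cdot 2^{-\binom{k}{2}}$, so by a union bound

$$\Pr[\text{some set of } k \text{ people is homogeneous}] \;\le\; \binom{n}{k}\, 2^{1-\binom{k}{2}} \;\le\; \frac{n^k}{k!}\, 2^{1-\binom{k}{2}}.$$

For $n = 2^{k/2}$ the right-hand side equals $2^{1+k/2}/k!$, which is less than $1$ for every $k \ge 3$. So for such $n$ there must exist a graph with no homogeneous set of size $k$, i.e. $R(k) > 2^{k/2}$.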

The exact asymptotics of $R(k)$ -- whether $\lim_{k \to \infty} R(k)^{1/k}$ exists, and what its value is -- is still one of the biggest open problems in Ramsey theory, but now we move on to another aspect of homogeneous structures. You can look at proofs of the aforementioned upper and lower bounds in the great reference *Proofs from THE BOOK, Chapter 45: Probability makes counting (sometimes) easy*. You can also read Nicos Starreveld's article about the latest breakthrough on the upper bound for $R(k)$ and further discussion.

Let us interpret the upper and lower bounds for $R(k)$ from another perspective: $R(k) \le 4^k$ shows that every graph on $n$ vertices has a homogeneous substructure of size $\frac{1}{2}\log_2 n$. On the other hand, Erdős' result says that in an $n$-vertex graph, you can avoid homogeneous substructures of size $2\log_2 n$. The main message from these two facts can be summarized as follows:

- In an arbitrary graph, logarithmic sized homogeneous substructures are unavoidable.
- There are graphs whose largest homogeneous substructure has logarithmic size.

Erdős and his collaborator András Hajnal asked a question in a paper published in 1989 that is still quite open. Take any substructure you want, say $H$. If a graph does not have a part that looks exactly like $H$ in terms of edges and non-edges, it is called *$H$-free*. They conjectured that any $H$-free graph has to have a homogeneous substructure of polynomial size -- rather than the merely logarithmic size you would see in an arbitrary graph (note that for any fixed positive numbers $c$ and $\epsilon$, $n^{\epsilon}$ becomes much larger than $c \log n$ as $n$ gets larger).

**Erdős-Hajnal conjecture**

For every graph $H$, there exists a constant $\epsilon(H) > 0$ such that every $H$-free graph on $n$ vertices has a homogeneous substructure of size $n^{\epsilon(H)}$.

At first glance, it might not be intuitive why forbidding a substructure can drastically change the size of the largest homogeneous substructure in a graph compared to arbitrary graphs of the same size, but, of course, there are (mathematical) reasons for it. Maybe it is best to think about it first through an example. Let $P$ be a graph that contains three vertices and two edges, i.e. a substructure that looks like a path with two edges, depicted in Figure 3 below.

*Figure 3: A path graph with two edges.*

Can we describe $P$-free graphs? Let $G$ be a $P$-free graph. Now we will try to understand how $P$-freeness affects the structure of $G$. Consider a vertex $v$ in $G$ which has at least two other neighbors (vertices adjacent to $v$ through a link). Call them $u_1, \dots, u_k$, depicted in Figure 4 below.

*Figure 4: The vertex $v$ and its neighbors $u_1, \dots, u_k$.*

If $u_i$ and $u_j$ are not linked for some $i \neq j$, then observe that the path formed by $u_i$, $v$, and $u_j$ would be a copy of $P$ (see Figure 5), a contradiction.

*Figure 5: If the vertices $u_i$ and $u_j$ are not connected, then $G$ contains $P$.*

Therefore, all the neighbors of $v$ should be connected to each other, and this is true for any vertex with at least two neighbors. If you think about it for a few more seconds, you can conclude that $G$ should be a disjoint union of parts such that each part is a clique -- all vertices linked to each other within the part (treat a single vertex as a clique as well). Suppose $G$ consists of $t$ cliques. Observe that the largest part has at least $n/t$ vertices, so we find a homogeneous substructure of size $n/t$. On the other hand, we can choose one vertex from each part; together they form again a homogeneous substructure, of size $t$, because they are pairwise disconnected. Since $t \cdot (n/t) = n$, either $t \ge \sqrt{n}$ or $n/t \ge \sqrt{n}$, so we can always find a homogeneous substructure of size $\sqrt{n}$. So, we proved the conjecture (with $\epsilon = 1/2$) when $H$ is the path graph consisting of two edges! One case is done among infinitely many graphs. You can try to prove one more case yourself.
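The argument above is constructive enough to be turned into a few lines of code: in a $P$-free graph the connected components are cliques, so either the largest component or the set of "one vertex per component" has size at least $\sqrt{n}$. A minimal sketch in Python (function name and graph encoding are my own, for illustration):

```python
import math

# A P-free graph (P = the two-edge path) is a disjoint union of cliques.
# Either the largest clique, or one vertex per clique, has size >= sqrt(n).
def homogeneous_set(n, edges):
    # Find connected components with union-find; each one is a clique.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    comps = {}
    for v in range(n):
        comps.setdefault(find(v), []).append(v)
    largest_clique = max(comps.values(), key=len)
    independent = [c[0] for c in comps.values()]  # one vertex per clique
    return max(largest_clique, independent, key=len)

# Two triangles plus two isolated vertices: n = 8, sqrt(8) ~ 2.83.
best = homogeneous_set(8, [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)])
assert len(best) >= math.ceil(math.sqrt(8))  # here best has size 4
```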

**Exercise:** Let $T$ be the triangle graph, as depicted in Figure 6. Prove that every $T$-free graph on $n$ vertices has a homogeneous set of size $\sqrt{n}$.

*Figure 6: A triangle graph consisting of three vertices which are all connected.*

**Solution:** Let $G$ be a triangle-free graph on $n$ vertices. We will prove that $G$ has an independent set of size $\sqrt{n}$.

- Suppose there exists a vertex with at least $\sqrt{n}$ neighbors. Then, any two such neighbors …

- Suppose all the vertices have fewer than $\sqrt{n}$ neighbors. Consider the largest independent set $S$, then …

You can see that the conjecture is very strong because it is about any graph $H$. Frustratingly, since Erdős and Hajnal posed it, there has been no significant progress, with the exception of some special cases of the graph $H$, just like the one we examined. Even the case of $H$ being a path with four edges (see Figure 7) is still unknown!

*Figure 7: A path graph with four edges.*

You could say that even if we solved this case, it would not be a huge step towards the general solution of the conjecture (because there are infinitely many graphs $H$). You would be right, but as mathematicians we do what we can!

You can look at the papers listed at the end of this article to have an idea about the recent progress towards the conjecture, which makes mathematicians more optimistic about a proof in the near future. But who knows, maybe we still need to wait for a long time...

- The Erdős-Hajnal Conjecture for Bull-free Graphs by Maria Chudnovsky and Shmuel Safra.
- The Erdős-Hajnal Conjecture by Maria Chudnovsky.
- Erdős-Hajnal for graphs with no 5-hole by Maria Chudnovsky, Alex Scott, Paul Seymour, and Sophie Spirkl.
- Towards the Erdős-Hajnal conjecture for $P_5$-free graphs by Pablo Blanco and Matija Bucić.
- Induced subgraph density I: A loglog step towards Erdős-Hajnal by Matija Bucić, Tung Nguyen, Alex Scott, and Paul Seymour.

This is the opening paragraph of the by-laws of the EMS (European Mathematical Society) Young Academy. From July 15th to 19th the European Congress of Mathematics (9ECM) will take place in Sevilla. The programme looks amazing!

On the website of the event we read:

The European Mathematical Young Academy organizes the following activities during the 9ECM:

**EMYA Lightning talks**

An opportunity for PhD students and early career researchers to present their research in a short, concise, and energetic format. Speakers are tasked with presenting the key ideas and/or results of their research in just 5 minutes and with a maximum of 3 slides. Keep in mind that session chairs will be strict about the 5-minute time limit.

To be eligible to give a lightning talk you must be a PhD student or early career researcher (from 2nd year PhD up to 3 years post PhD) and not be presenting elsewhere at the ECM.

Submissions for this activity must be sent through the “abstracts submission” form, selecting “EMYA Lightning talks” as the desired thematic session.

**EMYA ice breaking session**

An occasion for young people to get to know each other and connect with peers in an informal environment. The main target group of the event is PhD students and early stage researchers, especially if they have never participated in big conferences. During the session, there will be the opportunity to talk not only about Mathematics, and participants will be encouraged to interact through organised activities like games as well.

**Sustainability panel & group discussion**

The theme of sustainability of research life is growing in importance in the academic debate. Sustainability can be understood in different ways: in terms of the mental health of researchers or, for example, from an environmental point of view. This session is intended as an occasion to discuss such themes in small groups, sharing our own experiences and ideas on what it is like to live and work in academia and what kind of actions can be taken to mitigate climate change.

**Young KWG**

Also in the Netherlands the mathematics association (KWG) is trying to create a community of early career mathematicians, by establishing the young division Young KWG. The main goal of Young KWG is to attract and support young mathematicians in the early stages of their careers. They aim to create a vibrant network in the Netherlands, a place where individuals can connect, interact, and learn from one another.

Are you one of those people who always seem to pack just a bit too much for your car to handle when packing for a vacation? Then you're probably familiar with the packing-your-car-Tetris game: given a number of items and bags, can you fit all of these in the back of your car? Some will argue that this is a fun game; for others it might be a straight-up nightmare. However, most will agree that this can be a pretty hard game to win. In this article, I will introduce you to the magical world of computational complexity and let you *feel* what makes packing your car generally hard.

So why is packing your suitcases into the car so complex? Obviously, it is a 3-dimensional packing problem: we want to pack certain 3D items called suitcases into a 3D space called the trunk. I claim that this is hard even in 2 dimensions. For example, take the puzzle by Stewart Coffin on the right, with only five(!) pieces which need to be placed into a square tray. You can see these pieces as 2-dimensional suitcases that need to fit into a 2D space, namely the square.

Even if the pieces are just rectangles, for example in the figure on the left, this is an extremely hard problem. You can __try this for yourself here__!

If it is already hard to do this in two dimensions, I hope you agree three dimensions is even harder.

The Arithmeum museum in Bonn has an exposition on Chip Design, which can be partly viewed on their website. If you go there and choose **Placement** and then **Game**, you can play a 2D packing game with rectangles, which is part of this exposition. I would encourage you to start with only a few items and after each success increase the number of items to pack by two. Hopefully you'll notice it gets a **lot** harder with each extra item!

When we play this game in only 1 dimension, we call it 'bin packing'. In this case, we are given so-called bins with a certain capacity, and items of given lengths that need to be distributed over these bins. This type of problem occurs, for example, when you are packing several suitcases for flying, where each suitcase can weigh at most 23kg. In business, this problem appears when distributing tasks (the items) over the workdays of employees (which can be seen as the bins).

Even though Bin Packing is a problem played in one dimension, it remains hard to solve. Let us first look at a simple example: say I have 6 items of lengths 8, 7, 7, 5, 2, and 1; can we fit them into two 'bins' of length 15? Try to solve this in the interactive game below; you can move the items with your cursor.

Probably you can solve this in a couple of seconds, either by just trying, or by remarking that $8 + 7 = 15$ and that $7 + 5 + 2 + 1 = 15$, so indeed everything fits.

In the following game you can try it yourself. You can choose the number of bins you want, and you have to put the items in them so that they can fit precisely.

However, what if I give you the following set of lengths of items: {4, 12, 17, 28, 34, 37, 49, 54, 59, 65, 96} and I want to distribute them over two bins of capacity 228? Can you do that? How much harder did the problem get, even though the number of items only doubled?

Can we not simply let a computer do this for us? Well, yes and no. Of course we can use computers for this; however, a computer needs a set of instructions to follow, also called an *algorithm*. In other words, we need a structured way to find a solution. An example of an algorithm for Bin Packing with two bins could be:

* For all combinations $S$ of items:
  a. compute whether all of $S$ fits into one bin, and
  b. compute whether all items **not** in $S$ fit into one bin.
* If both answers are yes: we have found a way to distribute the items. If the answer is no for every combination of items, then no valid distribution exists.

In the example of the eleven items above and two bins of capacity 228, the algorithm will find the solution when it chooses $S$ as {4, 28, 34, 37, 59, 65}.
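The brute-force algorithm above fits in a few lines; a minimal sketch in Python (the function name is my own), trying every subset $S$ via a bitmask:

```python
def two_bin_packing(items, capacity):
    """Try every subset S of the items; return a feasible split or None."""
    n = len(items)
    for mask in range(2 ** n):                              # all 2^n combinations S
        s = [x for i, x in enumerate(items) if mask >> i & 1]
        rest = [x for i, x in enumerate(items) if not mask >> i & 1]
        if sum(s) <= capacity and sum(rest) <= capacity:    # checks a. and b.
            return s, rest
    return None                                             # no combination works

items = [4, 12, 17, 28, 34, 37, 49, 54, 59, 65, 96]
s, rest = two_bin_packing(items, 228)
assert sum(s) <= 228 and sum(rest) <= 228                   # a feasible split exists
```

With 11 items this loop checks 2048 subsets in a blink; the next section shows why the same idea collapses for larger inputs.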

So, we can use a computer to solve Bin Packing. How long will the computer take to compute this?

This obviously depends on which computer you use and how fast it is: the computers produced today are many times faster than those from 2001 that ran on Windows XP. However, we **can** say something about the *number of computations* the algorithm (and therefore the computer) performs. The algorithm above does two computations (namely a. and b.) per combination of items. So how many combinations of items are there?

Well, if $n$ is the number of items, then each item can either be part of a given combination or not. This gives us a total of $2^n$ different combinations to check. Hence, the number of computations scales **exponentially** in the number of items. If you recall exponential growth from the corona pandemic, then you're probably aware that this is not a good thing. Because we check $2^n$ combinations, the number of combinations **doubles** if we add one item to the problem (see also the table below). So, if my computer takes 1 second to do Bin Packing with 10 items, Bin Packing with 20 items will take around $2^{10} = 1024$ seconds, which is around 17 **minutes**. And if you want to compute this for 50 items, well… it would take about $2^{40}$ seconds, which is roughly 35,000 **years**. I don't think we have time to wait for that. So yes, we are able to solve it using computers, but only for a small number of items.

| items $n$ | combinations $2^n$ |
|---|---|
| 1 | 2 |
| 2 | 4 |
| 3 | 8 |
| 4 | 16 |
| 5 | 32 |
| 10 | 1,024 |
| 11 | 2,048 |
| 12 | 4,096 |
| 13 | 8,192 |
| 14 | 16,384 |
| 15 | 32,768 |
| 16 | 65,536 |
| 17 | 131,072 |
| 18 | 262,144 |
| 19 | 524,288 |
| 20 | 1,048,576 |

So, is there a smarter algorithm that avoids this exponential blow-up? Well, not really! Or at least: not that we know of. Of course, the fact that we do not know of an efficient algorithm does not necessarily prove that one does not exist.

For two bins, the best known algorithms need about $2^{n/2}$ computations, and for three bins somewhat more. For any other number of bins, there was a recent breakthrough by my co-authors and me, where we present an algorithm that needs fewer than $2^n$ computations.

However, the computer science community believes that efficient algorithms (those that only need, for example, around $n^2$ or $n^3$ computations) do not exist for Bin Packing. This belief is based on a hypothesis referred to as "$P \neq NP$": if you were able to design an efficient algorithm for Bin Packing, it would show that $P = NP$. Why does that matter?

Well, first of all, you would earn a million dollars, as deciding whether $P = NP$ is one of the Millennium Prize Problems. That would be nice. However, it would also imply that we can solve **many other problems** efficiently. That may sound wonderful in theory, until you realize that our security systems would then suddenly also be easy to crack! So, giving an efficient algorithm for Bin Packing would actually have a lot of impact on our society.

Is Bin Packing the only hard problem out there? No! There are actually many, many so-called $NP$-hard problems like Bin Packing. Giving an efficient algorithm for any of these $NP$-hard problems would also show $P = NP$ and therefore have the same impact. These are often fun games to solve, with many applications. Let me give you some examples of such $NP$-hard problems.

**Steiner Tree:** In this problem, you're given points that need to be connected using as few connecting lines as possible. You can play this game here by clicking 'Routing'. The website explains one of the applications of this problem: connecting parts of a chip while using the least amount of connecting fluids.

**Travelling Salesman Problem:** In this problem, you want to find the fastest route visiting a set of cities. You can play this game here. You can encounter this problem for example when a Picnic or PostNL car has to visit a set of customers.

**3-coloring:** In this problem, you’re given a set of points and lines between the points. You need to color the points red, blue or green, such that for each line, the endpoints receive different colors.

There is actually some good news: not all packing problems are hard! Sometimes the solution is actually relatively easy to find, and we humans seem to be remarkably good at finding these types of structures. So: maybe you’re lucky and with a bit of hard Tetris work, you can manage to pack your car. But if you can’t, there is actually a really easy solution:

**Just pack a bit less next time.**

Thanks for reading this article about complexity theory. If you’re searching for a subject for a school project or a 'profielwerkstuk’ related to computer science or mathematics, I can recommend studying any of the problems above, matching problems, or for example puzzles like Sudoku. Understanding (the complexity of) the problem, being able to implement/compare some of the algorithms for it, and looking for applications for the problem can all be part of the project.

With this one-day event for students in both theoretical and applied mathematics, we aim to showcase the benefits of doing a PhD and help students decide whether or not they want to apply for one. We have invited speakers from many different areas of mathematics, from both academia and industry.

**Organizer**: Young KWG

**When:** February 2, 2024, ±10.00h – 18.00h

**Where:** Koningsbergergebouw, Utrecht University

**Target audience:** master students in mathematics studying at a university in the Netherlands. Especially women and students from minority groups in mathematics are encouraged to join. If we have enough spaces, (3rd year) bachelor students will also be considered.

**Registration:** Register for the event before 23.59, January 12th, 2024 via the registration form: https://tilburgss.co1.qualtrics.com/jfe/form/SV_839QjPA5QYGLOXs

*Note that the number of participants is limited so we may need to make a selection from the registrations. Hence, registration does not guarantee a place. We will inform everyone after the registration deadline.*

- Cecilia Salgado, Associate Professor Groningen University
- Corinne Meerman, Dutch Healthcare Authority

*Do’s and don’ts when applying for a PhD*

Speaker: **Katrijn van Deun**, Full Professor, Tilburg University

Having trouble writing a motivation letter? How long should it be? What to include and what to omit? Our speaker, professor at Tilburg University, will give advice on what to focus on when writing a CV and motivation letter for a PhD position. She will also share examples of good and bad practices.

*The next two workshops run in parallel, and students indicate which they prefer to attend.*

**Common hurdles during a PhD and how to prevent them**

Speaker: Sabrina Genz

Have you ever had doubts about being good enough for a PhD position? Or maybe felt the pressure of a competitive environment in academia bringing you down? In this workshop we will discuss some of the main struggles that students face in academia and that create big obstacles throughout their development. This will include topics such as how to recognize and deal with impostor syndrome before it becomes a real problem, the relationship with supervisors, and relationships with fellow PhD students, amongst others.

*Outcomes of a PhD*

Speaker: Chiat Cheong

Common myth: “After 4 years of a PhD I have only developed skills to continue with research.” In this workshop we focus on the wide range of skills that PhD students develop throughout their research years. Our speaker, Chiat Cheong, is a scientist who transitioned out of academia; her professional mission became making PhDs and postdocs aware of their potential and providing support in their career development.

**Time / Activity**

- 09:30 – 10:00: Arrival and coffee
- 10:00 – 10:05: Welcome
- 10:05 – 10:50: Industry speaker: **Corinne Meerman**, Dutch Healthcare Authority
- 10:50 – 11:00: *Break*
- 11:00 – 11:45: Academic speaker: **Cecília Salgado Guimarães da Silva**, University of Groningen
- 12:00 – 13:30: Parallel workshops: **Sabrina Genz** (Common hurdles during a PhD and how to prevent them) and **Chiat Cheong** (Transferable skills)
- 13:30 – 14:30: *Lunch break*
- 14:30 – 15:30: Workshop: **Katrijn van Deun** (Do's and don'ts when applying for a PhD)
- 15:30 – 15:45: PhD panel
- 16:45 – 17:00: Closing
- 17:00 – 18:00: *Drinks*

The DNA profiles that eventually led to the arrest of the suspect were not his own, but those of very distant relatives who had voluntarily uploaded their genetic data to public genealogy websites. Forensic investigators used the DNA profiles of distant family matches to narrow down their search and identified Joseph DeAngelo as the prime suspect. The method of forensic investigative genetic genealogy (FIGG) led to his arrest in 2018, where he admitted to committing thirteen murders and more than fifty rapes of women.

Maybe you’ve heard of companies like 23andMe, MyHeritage, or AncestryDNA. They can help you learn about your family history and find relatives you may not know about. They do this by analyzing your DNA, the genetic material that makes you who you are. Although most genetic material is the same for everyone, small sections can differ between individuals; these small sections are called *loci*. Everyone's DNA is built in the same way, out of 23 chromosome pairs (and thus 46 chromosomes in total): 22 pairs of autosomal chromosomes and one pair of sex chromosomes. One chromosome of each autosomal pair comes from your mother and the other from your father.

*Figure 1:* *Inheritance of DNA*

The sex chromosomes determine your sex (XX or XY). Within each chromosome pair, one chromosome carries information from each parent. In each generation, the DNA recombines, meaning that the child receives a random mixture of the DNA of each parent. Since DNA is inherited over generations, your DNA is also related to that of your grandparents, cousins, and other relatives. The percentage of DNA you share with someone decreases as the relationship gets more distant.

You may have seen crime shows where investigators collect samples from the crime scene and send them to a lab; within a day, a suspect is identified using a computer analysis. Although these shows don't always accurately depict the real process, they tend to use a method called DNA fingerprinting, which has been used in real forensic investigations for over 30 years. A forensic researcher can establish a DNA fingerprint of a sample by counting the number of repeats of a specific sequence found on a chromosome; such a repeated stretch is known as a tandem repeat. The regions on a chromosome chosen for fingerprinting are loci, those small sections mentioned above, which differ among individuals. For example, at a specific locus on chromosome 18, an individual may have 3 repeats of the sequence AAGT on one chromosome and 5 repeats of the same sequence at the same locus on the other chromosome.

*Figure 2: Tandem repeats on chromosome 18 of an individual*

When you do this for more loci, also on other chromosomes, you can establish a DNA fingerprint, which can be used to identify an individual. The sample found at the crime scene is compared to reference samples in databases of public genealogy websites, and when the fingerprint aligns, a match is established.

Advancements in DNA analysis have expanded its use beyond traditional fingerprinting, for instance with forensic investigative genetic genealogy (FIGG). By comparing specific regions of the DNA, like the ones used for DNA fingerprinting, you can indicate inheritance from a common ancestor (identical-by-descent).

Going back to the example in Figure 2, you expect a child to share at least one of the repeats with each parent. The chromosome with 3 repeats came from the mother and the one with 5 repeats from the father, so at least half of the repeat counts are shared between child and parent. Let's now see what happens with the descendants of two individuals, where each child receives one chromosome of each parent's pair at random. Suppose the two individuals have tandem repeats as shown below.

*Figure 3: Tandem repeats of two individuals*

Given the two parents (individuals 1 and 2), the child has a probability of 25% to have each of these combinations: 3 and 2 repeats, 3 and 4 repeats, 5 and 2 repeats, or 5 and 4 repeats. Let’s say child 1 has repeats 3 and 2. Child 2 then has a 25% probability of having the same repeats, a 50% probability of sharing one of the repeats, and a 25% probability of sharing no repeats.
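
These percentages can be checked by enumerating the four equally likely inheritance outcomes. The sketch below uses the illustrative repeat counts from Figure 3 (parent 1: 3 and 5; parent 2: 2 and 4) and takes as example a first child with 3 and 2 repeats, one value from each parent:

```python
from itertools import product
from collections import Counter

parent1 = (3, 5)  # repeat counts of individual 1 at this locus
parent2 = (2, 4)  # repeat counts of individual 2

# A child inherits one repeat count from each parent; all four
# combinations are equally likely (probability 1/4 each).
genotypes = list(product(parent1, parent2))

child1 = (3, 2)  # one possible outcome for the first child

# For a second child, count how many repeat values are shared with child 1.
shared = Counter(len(set(child1) & set(g)) for g in genotypes)
for k in (2, 1, 0):
    print(f"P(share {k} repeat values) = {shared[k]}/4")
```

The enumeration recovers exactly the 25%–50%–25% split described above.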

Using this logic, a suspect does not have to be in a DNA database themselves. Based on the similarity between specific segments of the DNA profile found at the crime scene and the DNA profiles of relatives in a public database, you can “predict” who the suspect might be. You can even use this technique with distant relatives with whom you share less than 1% of your DNA!

A huge advantage of this method is that you expand the search range beyond a simple 1-to-1 match by considering partial matches at different degrees of relatedness.

However, since this matching method was used in the Golden State Killer case, a larger conversation has been sparked among forensic DNA experts, genealogy website users, and genealogy hobbyists about the scope of the work and about privacy concerns. While genealogy has traditionally been a hobby pursued by passionate volunteers, when it comes to solving complex crimes the challenges might go beyond their expertise.

When FIGG emerged in criminal cases in 2018, there were no guidelines or policies on its use. Since then, there have been changes in regulations surrounding the use of genetic databases for law enforcement purposes. Some companies updated their policies to require a court order or warrant before disclosing genetic information to law enforcement agencies. For example, 23andMe informs customers that they closely examine each law enforcement request, and only act when they deem the request legally valid. Now, when individuals submit their genetic information to companies like 23andMe, MyHeritage, and AncestryDNA, they must specifically “opt in” (instead of agreeing by default) to the use of their genetic information for law enforcement purposes. However, an issue arises when the “calculated” genetic information of family members who did not provide their consent is used to identify suspects.

This highlights the importance of establishing clear regulations and guidelines to ensure that forensic genealogy is used responsibly and ethically. After 15 years of no leads, FIGG was used in 2019 to solve a double-murder case in Sweden. Despite this success, the Swedish Authority for Privacy Protection now prevents the use of FIGG to solve other cold cases, because the current law does not allow genetic information to be used that way.

On March 6th, 2023, the Dutch Public Prosecution Service (OM) and the Dutch Forensic Institute (NFI) announced their intention to use genealogical databases to solve cold cases involving unidentified victims and suspects. While the use of these databases is already allowed under current Dutch legislation, the OM will first seek permission from a judge before accessing the DNA profiles, to take privacy concerns into account. The OM has already selected two cases for a pilot program: one involving an unidentified victim and one involving an unknown suspect. If the pilot is successful, the OM will consider using this method in additional cases.

Our DNA can tell a lot about us and our family history. It can also be used in crime solving. With the help of FIGG, investigators can use DNA samples to identify unknown suspects and victims by looking for shared ancestry among distant relatives in genealogy databases. Since the Golden State Killer case, this technique has been used to solve over 200 cases that might never have been solved otherwise. While this technology is very promising, it also raises some important concerns about privacy and ethics. Therefore, we need to take steps to establish national and international guidelines and to be transparent about the use of genetic information in law enforcement.

Trust is also required when we buy a used car from a personal connection or via-via. We (used to) accept cookies when browsing without a second thought, until we learnt that these may indeed be used against us. Even when we vote for a politician to act in our favour, we trust that they will use their position (gained through our trust in them) to enact policies which are in our favour. In this article we will explore how Game Theory has been used in attempts to model trust and cooperation.

The framework of Game Theory dates back to the Hungarian-American mathematician John von Neumann in 1928, though it was another John, John Nash, who developed the ubiquitous equilibrium concept, the Nash Equilibrium (elaborated upon shortly). Game Theory is concerned with the rational decision making of individuals in the context of a game. Game is a broad term: it can mean a literal game (like chess), but it can also describe any interaction between two or more people who have a variety of courses of action. It is successfully employed by policy makers in fields as varied as auctions, industrial organization and political science. Academics have also used it to model and predict the behaviour of animals and organisms in biology.

In Game Theory, the players are rational individuals who take the action that maximizes their (expected) reward. A game is defined by a set of actions available to the players involved and a rule for assigning rewards to the players based on the combination of actions chosen. When a game is between two individuals, it may be succinctly represented by a game matrix. Take as an example the fantasy favourite Boulder-Parchment-Shears, in which boulder crushes the shears, shears cut the parchment and parchment covers the boulder. This game has the game matrix:

Represented in this way, we can easily read off the reward given to either player by looking at the appropriate entry in the matrix. A 1 is the reward for winning the round, a −1 the ‘reward’ for losing, and a zero indicates a tie (draw). The individual who chooses their action from the rows is the row player, and similarly the player who chooses their action from the columns is the column player. The entries of the payoff matrix are ordered giving first the row player's and then the column player's reward. If the column player chooses to play Parchment while the row player chooses Shears, we read off the table that the column player gets −1 and the row player gets 1. In other words, the row player wins, which we may recognize as scissors winning against paper in the slightly less fantastical rock-paper-scissors.

In this game of Boulder-Parchment-Shears, I maximize my expected reward by playing each of the three strategies with probability 1/3 (completely randomly). Similarly, my opponent should do the same. If one of us were to adjust, playing Boulder more frequently, then the other can take advantage of this by increasing the rate at which they play Parchment to win more often. Playing each strategy with probability 1/3 describes the unique Nash Equilibrium for the game of Boulder-Parchment-Shears. A Nash Equilibrium is a set of actions, one for each player, such that each player's action is the best response to all the other players' actions. In the turn-based Tic-Tac-Toe, we know that games always end in a tie when both players know the optimal (Nash Equilibrium) strategy.
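
A few lines of Python make this concrete. Using the +1/0/−1 rewards above, the expected reward of every pure strategy against a uniformly mixing opponent is the same (zero), so no deviation is profitable — exactly the best-response property that defines the Nash Equilibrium:

```python
# Row-player payoffs in Boulder-Parchment-Shears,
# with actions ordered Boulder, Parchment, Shears.
payoff = [
    [0, -1,  1],   # Boulder: covered by Parchment, crushes Shears
    [1,  0, -1],   # Parchment: covers Boulder, cut by Shears
    [-1, 1,  0],   # Shears: crushed by Boulder, cuts Parchment
]

mix = [1/3, 1/3, 1/3]  # opponent plays each action with probability 1/3

# Expected reward of each pure strategy against the uniform mixture.
expected = [sum(p * q for p, q in zip(row, mix)) for row in payoff]
print(expected)  # [0.0, 0.0, 0.0] — no pure deviation does better
```

Were the opponent to favour Boulder instead, the Parchment row would earn a positive expected reward, which is precisely the exploitation described above.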

But these are all descriptions of actual games and I promised you a story about trust. Soon I will describe the Trust Game, but before then I want to highlight the relevance of the Nash Equilibrium beyond the idea of an actual game. The strategies of a Nash Equilibrium are best responses to each other. If before an interaction (like a game), I announce that I will be playing the Nash-Equilibrium strategy, then assuming you believe me, you would be best served by also playing the Nash Equilibrium strategy.

Alice is visiting Amsterdam for the first time in her life, and after a visit to the Maritime Museum she stumbles upon the Marineterrein, a well-known hotspot for swimming in Amsterdam. She is lucky: it is one of the three scorching days, and a swim would be fantastic right now. She makes for a group of strangers and asks them to watch her stuff while she takes a dip. Unfortunately for Alice, this is where her luck ends: she was identified as a tourist, and The Group of students take advantage of this. They remove the cash from her purse and disappear, never to be spotted by Alice again. The interaction between Alice and The Group can be modelled as the Trust Game.

The Trust Game also involves two parties, though they each have only two actions to choose from, and these differ per party. The truster (Alice) chooses whether or not to trust The Group (asking them to watch her items at the gracht). Subsequently, if Alice chose to trust, the trustee (The Group) chooses whether to honour the trust (watch the items) or to abuse it (make a swift getaway with the cash on their swapfiets). This game is represented in matrix form by:

This game in particular may be even better represented by a game tree, because the moves in this game are sequential: The Group only makes their decision (and indeed only has a decision to make) after Alice has placed trust. The game tree of the trust game is shown in the figure below, with as an example the rewards possible when taking a swim on a warm summer’s day.

*The trust game in game tree format.*
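
The game tree can be solved by backward induction: first ask what the trustee would do, then what the truster should do knowing this. The payoffs below are hypothetical stand-ins (the figure's exact numbers are not reproduced here), ordered as (truster, trustee):

```python
# Hypothetical trust-game payoffs, ordered (truster, trustee).
payoffs = {
    ("trust", "honour"):  (1, 1),    # a pleasant swim, trust honoured
    ("trust", "abuse"):   (-2, 2),   # The Group leaves with the cash
    ("no trust", None):   (0, 0),    # Alice keeps her stuff, no swim
}

# Backward induction, step 1: a rational trustee maximizes their own payoff.
trustee_choice = max(["honour", "abuse"], key=lambda a: payoffs[("trust", a)][1])

# Step 2: the truster anticipates this and trusts only if it pays off.
if payoffs[("trust", trustee_choice)][0] > payoffs[("no trust", None)][0]:
    truster_choice = "trust"
else:
    truster_choice = "no trust"

print(trustee_choice, truster_choice)  # abuse no trust
```

Whatever specific numbers one picks, as long as abusing beats honouring for the trustee and being abused is worse than staying home for the truster, rational play predicts no trust at all.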

Game Theory predicts that we would never swim on such a day, or indeed buy a used car, accept cookies when browsing, or vote for a politician to act in our favour. Though of course we have in some instances produced mechanisms to minimize the risk we take when performing the above actions, we clearly do still rely on being able to trust others in our daily lives. This is of course no mistake, but simply shows us that some models do not capture everything required to make an accurate prediction. There could be various errors we make when modelling such a situation as a trust game; for instance, The Group of students (trustee) could get great joy out of letting tourists enjoy their favourite swimming spot, so there is some benefit to keeping watch which was not captured in the original formulation. Whether it is the trust game which is modelled incorrectly, or the assumption that the players act rationally, I will not attempt to answer here. There are additions, modifications and developments in the field of game theory which try to answer the question of how it can be that there is in fact quite a lot of trust in our societies.

Various possible solutions to this conundrum have been hypothesized from an evolutionary perspective (strategies as genes fighting for survival): family relations between individuals make trust possible, direct (tit-for-tat) or indirect (reputation) reciprocity makes trust possible, network structure makes trust possible by getting to know the individuals close by, and finally, because humans are somewhat ‘pack’ animals, a pack of cooperators outperforms a pack of defectors. (Nicky Case has created a wonderful game which illustrates the simple and powerful effect of repeated interactions.) The research on these antecedents of trust and cooperation is not yet concluded. For now, I leave you with a thought: models can be useful, but humans are wonderful, possibly irrational and definitely not so easily modelled.

Photo by micheile henderson on Unsplash.

To overcome this dilemma, companies use exhaustive testing like assessment centres, or they require external qualifications and certificates. However, these methods aren't always feasible or reflective of a candidate's true potential. Enter the realm of social networks – not just the connections one has, but the strength of endorsements from those within these networks.

A lot of research has gone into the nexus between social networks and the labour market. Often, however, these networks are estimated based on socio-economic indicators: if, say, two people live in the same block, or belong to the same minority ethnicity residing in the same suburb, then they are more likely to know each other.

Additionally, social networks are notoriously hard to study because they work both ways: not only do hirers learn about candidate quality, but potential candidates learn about job openings.

Our new study in the journal *Labour Economics* takes a different approach. Our research isolates the latter function of social networks by looking at a very special labour market: the market for fresh Economics PhD students. In this market, a central platform stores all openings; every institution with an interest in hiring a new professor registers its job ad there. This transparency allows us to focus on the pure effect of network endorsements. Additionally, since this is academia, we can actually connect people in a large network simply based on prior work experience, i.e., a joint publication. So these two things set us apart: an actual network, and a study subject where social networks work in just one direction.

We then focus on the academic adviser, who plays an important role in the hiring process. Advisers write recommendation letters, call colleagues at different institutions, and advertise all their students who “are on the market”. Each year this affects about 1,000 students in Economics alone.

Our question basically is: Do students benefit from the connectedness of their advisers in terms of first academic employment after graduate school?

*This letter of recommendation was sent to the Department of Mathematics, Princeton University in 1948, recommending John Nash. Taken from The Abel Prize.*

The network that we look at is composed of more than 250k research articles from 466 journals relevant to Economists. Writing academic papers is what academics do all the time, and mostly they don’t do it alone. As this journey easily lasts multiple years, collaborator bonds can be said to be pretty strong, with frequent exchanges also about other topics. From these 250k documents we extract who worked with whom in which year and establish the corresponding connections. This way we link between 41k and 52k academics.

The connectedness we refer to is simply the Eigenvector centrality of the adviser. The concept of Eigenvector centrality has often been featured on networkpages.nl. It is not merely the number of co-authors an adviser has; it also considers the quality of these connections. In essence, it measures the influence an individual has within the network. The higher the centrality, the greater the adviser's potential to open doors for their students.
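
To make the concept tangible, here is a minimal sketch computing Eigenvector centrality by power iteration on a tiny made-up co-author network (the names and edges are illustrative only, not data from our study):

```python
# Toy co-author network: an edge means a joint publication.
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")]
nodes = sorted({n for edge in edges for n in edge})
idx = {n: i for i, n in enumerate(nodes)}

# Symmetric adjacency matrix.
n = len(nodes)
adj = [[0.0] * n for _ in range(n)]
for u, v in edges:
    adj[idx[u]][idx[v]] = adj[idx[v]][idx[u]] = 1.0

# Power iteration: repeated multiplication by the adjacency matrix
# converges to the leading eigenvector, i.e. the centrality scores.
x = [1.0] * n
for _ in range(200):
    x = [sum(adj[i][j] * x[j] for j in range(n)) for i in range(n)]
    top = max(x)
    x = [v / top for v in x]

centrality = dict(zip(nodes, x))
print(max(centrality, key=centrality.get))  # C: connected to well-connected nodes
```

Note that node C wins not just because it has the most co-authors, but because its co-authors are themselves well connected — the "quality of connections" effect described above.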

The findings are illuminating: advisers with higher Eigenvector centrality significantly boost their mentees' chances of securing positions at prestigious institutions. That is, applicants whose adviser is better connected tend to be hired by better institutions. But there is more to it. We also show that students are more often hired at institutions that are closer to the adviser in the social network: a student is more likely to end up at an institution when there are just two people on the shortest path between the adviser and the closest faculty member than when there are three. The causal underpinning requires some detail which is out of scope for this article, but let us add this: we conduct numerous statistical tests to support our theory.
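
The "closeness" here is ordinary shortest-path distance in the co-author network, which can be computed with a breadth-first search. A minimal sketch with made-up names:

```python
from collections import deque

# Made-up co-author network: the adviser is two steps away from a
# faculty member at the hiring institution (one person in between).
graph = {
    "adviser":        ["coauthor_1", "coauthor_2"],
    "coauthor_1":     ["adviser", "faculty_member"],
    "coauthor_2":     ["adviser"],
    "faculty_member": ["coauthor_1"],
}

def distance(start, goal):
    """Breadth-first search: length of the shortest path, or None."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, d = queue.popleft()
        if node == goal:
            return d
        for neighbour in graph[node]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, d + 1))
    return None

print(distance("adviser", "faculty_member"))  # 2
```

A distance of 2 means one intermediary sits between adviser and faculty member; our result says hiring becomes less likely as this count grows.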

To sum up, our study confirms empirically that this one aspect of social networks exists not only in theory but in reality, too: they help decrease the uncertainty about applicant quality. Why is that? Whether it is because of information transmission, reputation, or something else, we cannot say. Theory from prior literature, however, suggests that information diffusion is the primary mechanism.

The implications of this research extend beyond academia. It suggests that in any labour market where candidate quality is challenging to assess, the social capital of third parties could play a crucial role in the hiring process. For companies, this underscores the importance of tapping into networks when evaluating candidates, going beyond the surface level of qualifications and certificates.

Moreover, the study sheds light on the strategic value of building and nurturing professional networks. For job seekers, it's a call to actively engage with mentors and industry peers, not just for the opportunities they directly provide, but for the doors their vouching can open.

It was three weeks ago when I received an email from a colleague of mine saying that I had been chosen by the committee to receive the award in the category "Teaching". This was an incredible surprise, since I didn't know I had been nominated. Many emotions and thoughts made that moment unique; it was a moment I wanted to celebrate!

In the days that followed, I thought a lot about "how do I teach?". The first thought that came to mind is that I am enthusiastic about mathematics and I try to convey this enthusiasm during my teaching. This is probably how I used to teach during my time as a PhD student, or when I first started as a teacher in the bachelor program. But I quickly realized that there is much more to say than just being enthusiastic. After my first teaching experiences, I realized that enthusiasm is important but often not enough.

When I had to structure a whole course, I realized how important it is to dive into educational theory to properly organize and set up your teaching: from establishing clear and feasible learning objectives to developing all the material students would use, choosing the learning and teaching activities that would yield the best results, and setting up the examination. This was a very complex procedure, mostly because my background back then was in mathematics and not in educational research. Thankfully, the university offers very good supervision for new teaching staff.

After working for five years as a teacher, I realize that developing and properly coordinating an educational program demands time and energy. Only after teaching the same course for years do I feel that it has taken its final form! The first time was incredibly stressful and energy-consuming; the second year went more smoothly, but I still didn't feel entirely comfortable making improvements; only after the third year did I feel comfortable experimenting with new teaching methods that could improve the quality of the course.

As I keep teaching, and keep growing in my teaching, I realize how important it is to do educational research. Educational research helps you understand how students perceive your teaching, and whether the desired learning outcomes are reached. Through good quality educational research students and teachers develop a common vision about learning, teaching, and doing mathematics. Such a shared vision can empower both students and teachers and create a community where we motivate each other to excel and reach our full potential. At the same time, a shared vision shows the way to an academic community where everyone can grow not only as a scientist but also as a person. A community where mutual respect, well-being, and learning are the core principles the educational program is built upon.

The event took place on the evening of Tuesday 7 November in the Hortus Botanicus in Amsterdam. During the evening the new members of AYA were introduced in a very joyful and energetic atmosphere. There was so much energy in the room, and people were so eager to meet each other and interact. It was an amazing occasion to attend. At the closing of the event, three prizes were awarded, in the categories of Teaching, Societal impact, and Academic community support. Next to receiving the award in Teaching, it was amazing to learn about the work of other colleagues. In the category of Societal impact, the prize was awarded to Dr. Katja Tuma, assistant professor in the Computer Systems and Network Institute at the VU. For two years she has been organizing the women-only hackathon Hack4her, a multitude of social, learning and skill-practice events where women can engage in a safe space in a discipline that is often male-dominated. In the category of Academic community support, the prize was awarded to Dr. Abbey Steele, associate professor in Political Science at the UvA, for her active involvement in anti-discrimination and racial justice work, including innovating the recruitment procedures for both staff and students. It was an honour to stand next to these amazing people!

While I received the award, the rector of the University of Amsterdam read a short nomination text that close colleagues of mine had submitted. Hearing these wonderful words, and knowing that my colleagues had nominated me for this award, I realized that being recognized and nominated by your colleagues is the greatest honour! I want to thank them all for the pleasant and motivating working environment they have created at the Korteweg-de Vries Institute for Mathematics! The energy and the morale are high, and we keep going to achieve an educational program where everyone feels welcome and safe to learn and grow!
