Which country voted the best at Eurovision?

Eurovision was on yesterday. I’ve never been much interested in the musical side, but the weird political dynamics of Eurovision voting have always fascinated me; I tune in each year just for them and for the somewhat snarky commentary of Graham Norton, the UK commentator.

As I was watching the jury votes come in, a question popped into my head: Which country has voted the best in Eurovision? That is, which country was best at picking the eventual top 10 and in the right order?

Strangely enough, while there’s plenty of work on voting blocs and bilateral biases at Eurovision, most of it asks who votes for whom; I wanted to ask who votes accurately. I couldn’t find anyone asking the question that way, so I decided to do some data analysis myself.

The metric

To begin to answer this question, I first needed to formalize what “best” even means. That is, some mathematical notion of “good” and “bad”.

The simplest measure I could come up with is “of the 10 countries you gave points to, how many were in the actual top 10?”. The problem is that it can’t tell apart a voter who gave the eventual winner 12 from one who gave them 1. That feels wrong: an allocation is a ranking, not just a set, so position ought to matter.

To weight position, I needed some scheme for how much position 1 matters compared to position 10. Turns out Eurovision itself has already answered this question: the 12, 10, 8, 7, 6, 5, 4, 3, 2, 1 it hands out is exactly such a scheme. So I just reused it.

So I tried the next simplest thing. For each country there are two numbers: the points the voter awarded it, and the points it actually received at the end. Multiply them together, sum across all countries, and divide by the maximum value that sum could possibly take if the voter had ranked everything perfectly (awarded 12 to the actual winner, 10 to second place, and so on). The result is between 0 and 1, where 1 is a perfect vote.
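A minimal sketch of this metric in Python (the function name `vote_score` is mine, not anything official; the dictionaries map country names to points):

```python
def vote_score(awarded: dict[str, int], actual: dict[str, int]) -> float:
    """Score one voter's slate against the final totals (1.0 = perfect)."""
    # Points earned by pairing the voter's allocation with reality.
    achieved = sum(pts * actual.get(country, 0)
                   for country, pts in awarded.items())
    # Best possible sum: the voter's point values, largest first,
    # paired with the highest actual totals.
    best = sum(p * t for p, t in zip(sorted(awarded.values(), reverse=True),
                                     sorted(actual.values(), reverse=True)))
    return achieved / best
```

The `zip` in the "best possible" line is what encodes the perfect vote: 12 goes to the actual top scorer, 10 to second, and so on down the voter's slate.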

A worked example. The actual jury top 5 in 2023, by points received, was Sweden (340), Israel (177), Italy (176), Finland (150), Estonia (146). Imagine a voter’s jury panel awarded (12 → Italy, 10 → Sweden, 8 → Estonia, 7 → Israel, 6 → Finland).

Here’s the voter’s score:

Voter awarded  Country  × Actual points  = Product
12             Italy    × 176            = 2,112
10             Sweden   × 340            = 3,400
8              Estonia  × 146            = 1,168
7              Israel   × 177            = 1,239
6              Finland  × 150            = 900
Sum                                      = 8,819

And here’s the perfect score for this voter, what they’d have got if they had awarded the 12 to the actual top scorer, the 10 to second, and so on:

Voter awarded  Country  × Actual points  = Product
12             Sweden   × 340            = 4,080
10             Israel   × 177            = 1,770
8              Italy    × 176            = 1,408
7              Finland  × 150            = 1,050
6              Estonia  × 146            = 876
Sum                                      = 9,184

So this voter’s score is 8,819 / 9,184 = 0.96. Close to perfect: the same five countries are there, just slightly out of order.
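The same arithmetic in a few lines of Python (a sketch; the dictionaries just transcribe the 2023 numbers above):

```python
# 2023 jury worked example: the voter's slate and the actual totals.
awarded = {"Italy": 12, "Sweden": 10, "Estonia": 8, "Israel": 7, "Finland": 6}
actual = {"Sweden": 340, "Israel": 177, "Italy": 176, "Finland": 150, "Estonia": 146}

# What the voter's allocation actually earned.
achieved = sum(pts * actual[c] for c, pts in awarded.items())  # 8,819
# What a perfectly ordered allocation would have earned.
perfect = sum(p * t for p, t in zip(sorted(awarded.values(), reverse=True),
                                    sorted(actual.values(), reverse=True)))  # 9,184
print(f"{achieved / perfect:.2f}")  # 0.96
```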

Is it the right metric?

Obviously this is not the only metric I could have used, and there are plenty of other candidates: least-squares distance to the perfect vote, NDCG@10 (a standard ranking measure used in search), and Pearson correlation.

I went with the one above because I wanted the simplest thing that still worked. Least-squares is the only “simpler” alternative I can think of, but its scores end up squashed into a narrow range: the best voter across nine contests only reaches around 0.42 out of a theoretical max of 1.0, which is the highest score in the data but doesn’t read like a good number. The metric above sits more naturally on a 0-to-1 scale: the best voters land in the 0.8s, the worst in the 0.6s, and 1.0 still means a perfect vote.

In any case, I did run all the metrics listed above and they mostly agreed, with the same countries clustered at the top and bottom regardless of which one I used. But the headline #1 did shift between the top three or four countries depending on which metric I picked.

The answer

I picked 2016 as the start because that’s when Eurovision split each country’s contribution into separate jury and televote slates, each awarding its own 12, 10, 8, … points. Older contests used a single combined vote, so the data doesn’t decompose the same way.

Across these nine contests, here’s the full table:

Rank  Country          Score
1     Spain            0.815
2     Lithuania        0.810
3     Belgium          0.804
4     Germany          0.798
5     Netherlands      0.797
6     Malta            0.792
7     Sweden           0.790
8     Austria          0.789
9     Iceland          0.788
10    Norway           0.783
11    Israel           0.778
12    Latvia           0.777
13    Finland          0.777
14    Moldova          0.774
15    Poland           0.773
16    Romania          0.771
17    Estonia          0.770
18    San Marino       0.770
19    Ireland          0.762
20    Armenia          0.762
21    Cyprus           0.761
22    Denmark          0.757
23    Albania          0.754
24    Slovenia         0.751
25    Australia        0.749
26    Azerbaijan       0.746
27    Serbia           0.745
28    Portugal         0.743
29    Switzerland      0.739
30    Czechia          0.735
31    Croatia          0.733
32    United Kingdom   0.723
33    France           0.712
34    Georgia          0.707
35    North Macedonia  0.706
36    Greece           0.691
37    Bulgaria         0.688
38    Ukraine          0.675
39    Italy            0.666
40    Montenegro       0.625
Spain edges it, but only barely. The top seven are within 0.025 of each other, narrower than the year-to-year variation of any individual country, so calling Spain “the” best voter is too strong a statement.

A few countries often discussed in bloc-voting analyses do show up near the bottom: parts of the Balkans (Croatia, North Macedonia, Bulgaria, Montenegro) and Greece. But it’s only suggestive: others from the same blocs (Cyprus, Albania, Serbia, Armenia) sit comfortably mid-table, so this metric isn’t really measuring bloc voting; it’s measuring accuracy, which is a different thing. Properly disentangling the two would need more analysis work than I really want to do!

A note on data

All voting data is from EurovisionAPI/dataset, a scraped mirror of the EBU’s published results. As a sanity check on the two most recent contests, I parsed the Wikipedia voting tables for 2024 and 2025, diffed every cell against the dataset, and found no differences.

All my code is available on GitHub.