It's sad, but inevitable, that the global leaderboard had to be pulled. It also helps that this year is just 12 days, which takes some pressure off.
If you've never done it before, I recommend it. Don't try and "win", just enjoy the problem solving and the whimsy.
Write the simple code you want to write, and think about what makes the prior step possible in the easiest way and build your structures from there, filling in the gaps.
- install like this
- initialize a directory with this command
- here are the VSCode extensions (or whatever IDE) that are the bare minimum for the language
- here's the command for running tests
I think either the author believes people appreciate the two-stage challenges more than having one problem each day, or, more likely, the whole "infrastructure" is already built around two stages per day, and changing that would mean more work, potentially touching literally ten-year-old code. The reason for the reduced number of days is exactly the lack of time. I assume he preferred to have 12 days and modify the old code as little as possible. One stage per day might have been possible at the expense of having fewer challenges, which again defeats the purpose.
Frankly I'm better off with it being this way instead of the sweaty cupstacking LLM% speedrun it became as it gained popularity.
It's totally fine not to care, but I can't quite get why you would then want to be an active member in a community of people who care about this stuff for no other reason than they fundamentally find it interesting.
Thing is, it may have some interesting challenges. I, too, wouldn't want to solve some insane string-parsing problem with no interesting idea behind it. For today's problem, I did the naive version and it worked; the modular version created issues with some corner cases.
There should be more events like AoC. Self-contained problems are very educational.
Huge thanks to those involved!
One thing I do think would be interesting is to see solution rate per hour block. It'd give an indication of how popular advent of code is across the world.
Got nowhere near the leaderboard times so gave up after four days!
I only came close to a decent ranking once, top 300 or so.
Sadly it's 5am for me as I'm in the UK.
In 8 years I can say I've never once tried to be awake at 5am in order to do the puzzle. The one time I happened to still be awake at 5am during AoC I was quite spectacularly drunk so looking at AoC would have been utterly pointless.
Anything before 6.45am and I'm hopefully asleep. 7am isn't great as 7am-8am I'm usually trying to get my kid up, fed and out the door to go to school. Weekends are for not waking up at 7am if I don't need to.
9am or later and it messes with the working day too much.
Looking back at my submission times from 2017 onwards (I only found AoC in 2017 so did 2015/2016 retrospectively) I've only got two submissions under 02:xx:xx (i.e. before 7am for me). Both were around 6.42am, so I guess I was up a bit earlier that day (6.30am) and was waiting for my kid to wake up and managed to get part 1 done quickly.
My usual plan was to get my kid out of the door sometime between 7.30am and 8am and then work on AoC until I started work around 9am. If I hadn't finished it then I'd get a bit more time during my lunch hour and, if still not finished, find some time in the evening after work and family time.
Out of the 400 submissions from 2017-2024 inclusive I've only got 20 that are marked as ">24h" and many of these were days where I was out for the entire day with my wife/kid so I didn't get to even look at the problem until the next day. Only 4 of them are where I submitted part 1 within 24h but part 2 slipped beyond 24h.
Enormous understatement: if I were unencumbered by wife/kids then my life would be quite a bit different.
https://perladvent.org/archives.html
Advent of Code is awesome also of course -- and was certainly inspired by it.
Python is extremely suitable for these kinds of problems. C++ is also often used, especially by competitive programmers.
Which "non-mainstream" or even obscure languages are also well suited for AoC? Please list your weapon of choice and a short statement why it's well suited (not why you like it, why it's good for AoC).
- Array languages such as K or Uiua. Why they're good for AoC: Great for showing off, no-one else can read your solution (including yourself a few days later), good for earlier days that might not feel as challenging
- Raw-dogging it by creating a Game Boy ROM in ASM (for the Game Boy's 'Z80-ish' Sharp LR35902). Why it's good for AoC: All of the above, you've got too much free time on your hands
Just kidding, I use Clojure or Python, and you can pry itertools from my cold, dead hands.
It has many of the required structures (hashes/maps, ad hoc structs, etc) and is great for knocking up a rough and ready prototype of something. It's also quick to write (but often unforgiving).
I can also produce a solution for pretty much every problem in AoC without needing to download a single separate Perl module.
On the negative side there are copious footguns available in Perl.
(Note that if I knew Python as well as I knew Perl I'd almost certainly use Python as a starting point.)
I also try and produce a Go and a C solution for each day too:
* The Go solution is generally a rewrite of the initial Perl solution but doing things "properly" and correcting a lot of the assumptions and hacks that I made in the Perl code. Plus some of those new fangled "test" things.
* The C solution is a useful reminder of how much "fun" things can be in a language that lacks built-in structures like hashes/maps, etc.
Example: find the first example for when this "game of life" variant has more than 1000 cells in the "alive" state.
Solution: generate infinite list of all states and iterate over them until you find one with >= 1000 alive cells.
let allStates = iterate nextState beginState  -- infinite list of consecutive states
let solution = head $ dropWhile (\currentState -> numAliveCells currentState < 1000) allStates
But yeah, if you're looking to solve the puzzle in under a microsecond you probably want something like Rust or C and to keep all the data in L1 cache like some people do. If solving it in under a millisecond is still good enough, Haskell is fine.
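The same iterate-then-dropWhile idea translates to other languages too; here is a sketch in Python with a generator, where `next_state` and `num_alive` are toy stand-ins rather than actual puzzle code:

```python
def iterate(f, x):
    """Infinite stream: x, f(x), f(f(x)), ... (mirrors Haskell's iterate)."""
    while True:
        yield x
        x = f(x)

# Toy stand-in for a "game of life" step: the alive count doubles each step.
def next_state(state):
    return state * 2

def num_alive(state):
    return state

# First state with at least 1000 alive cells, starting from 1.
solution = next(s for s in iterate(next_state, 1) if num_alive(s) >= 1000)
print(solution)  # 1024
```

Because generators are lazy, only as many states as needed are ever computed, just like the Haskell version.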
* The expressive syntax helps keep the solutions short.
* It has extensive standard library with tons of handy methods for AoC style problems: Enumerable#each_cons, Enumerable#each_slice, Array#transpose, Array#permutation, ...
* The bundled "prime" gem (for generating primes, checking primality, and prime factorization) comes in handy for at least a few problems each year.
* The tools for parsing inputs and string manipulation are a bit more ergonomic than what you get even in Python: first class regular expression syntax, String#scan, String#[], Regexp::union, ...
* You can easily build your solution step-by-step by chaining method calls. I would typically start with `p File.readlines("input.txt")` and keep executing the script after adding each new method call so I can inspect the intermediate results.
This year I've been working on a bytecode compiler for it, which has been a nice challenge. :)
When I want to get on the leaderboard, though, I use Go. I definitely felt a bit handicapped by the extra typing and lack of 'import solution' (compared to Python), but with an ever-growing 'utils' package and Go's fast compile times, you can still be competitive. I am very proud of my 1st place finish on Day 19 2022, and I credit it to Go's execution speed, which made my brute-force-with-heuristics approach just fast enough to be viable.
That was impressive! Do you have a public repo with your language, anywhere?
I wrote a bit more about it here https://laszlo.nu/blog/advent-of-code-2024.html
AoC is a great opportunity for exploring languages!
Scheme is fairly well suited to both general programming, and abstract math, which tends to be a good fit for AoC.
OCaml is strong too. Stellar type system, fast execution and sane semantics, unlike 99% of all programming languages. If you want to create elegant solutions to problems, it's a good language.
For both, I recommend coming prepared. Set up a scaffold and create a toolbox which matches the typical problems you see in AoC. There's bound to be a 2d grid among the problems, and you need an implementation. If it can handle out-of-bounds access gracefully, things are often much easier, and so on. You don't want to hammer your head against the wall not solving the problem, but solving parsing problems. Having a combinator-parser library already in the project will help, for instance.
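One common way to get that graceful out-of-bounds behaviour, sketched here in Python for concreteness, is to store the grid as a dict keyed by (x, y) so that lookups off the edge just return a default:

```python
def parse_grid(text):
    """Parse a block of text into a {(x, y): char} dict."""
    return {(x, y): ch
            for y, line in enumerate(text.splitlines())
            for x, ch in enumerate(line)}

grid = parse_grid("#.#\n.#.")

# Out-of-bounds reads return a harmless default instead of raising.
assert grid.get((0, 0), ".") == "#"
assert grid.get((99, 99), ".") == "."

# Counting live neighbours then needs no bounds checks at all.
def neighbours(x, y):
    return [grid.get((x + dx, y + dy), ".")
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]
```

The same trick works with a hash map in most languages; the point is that the data structure absorbs the edge cases instead of your puzzle logic.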
Any recommendations for Go? Traditionally I've gone for Python or Clojure with an 'only builtins or things I add myself' approach (e.g. no NetworkX), but I've been keen to try doing a year in Go; however, I was a bit put off by the verbosity of the parsing and didn't want to get caught spending more time futzing with input lines and err.
Naturally later problems get more puzzle-heavy so the ratio of input-handling to puzzle-solving code changes, but it seemed a bit off-putting for early days, and while I like a builtins-only approach it seems like the input handling would really benefit from a 'parse, don't validate' type approach (goparsec?).
Once you have something which can "load \n separated numbers into an array/slice" you are mostly set for the first few days. Go has verbosity. You can't really get around that.
The key thing in typed languages is to cook up the right data structures. In something without a type system, you can just wing things and work with a mess of dictionaries and lists. But trying to do the same in a typed language is just going to be uphill, as you don't have the tools to manipulate the mess.
Historically, the problems have had some inter-linkage. If you built something on day 3, then it's often used on days 4-6 as well. Hence, you can win by spending a bit more time on elegance on day 3, which makes the work on days 4-6 easier.
Mind you, if you just want to LLM your way through, then this doesn't matter since generating the same piece of code every day is easier. But obviously, this won't scale.
Yeah, this is essentially it for me. While it might not be a 'type-safe and correct regarding error handling' approach with Python, part of the interest of the AoC puzzles is the ability to approach them as 'almost pure' programs - no files except for puzzle input and output, no awkward areas like date time handling (usually), absolutely zero frameworks required.
> you can just wing things and work with a mess of dictionaries and lists.
Checks previous years' type-hinted solutions with map[tuple[int, int], list[int]]
Yeah...
> but all of the AoC problems aren't parsing problems.
I'd say for the first ten years at least the first ten-ish days are 90% parsing and 10% solving ;) But yes, I agree, and maybe I'm worrying over a few extra visible err's in the code that I shouldn't be.
> if you just want to LLM your way through
Totally fair point if I constrain LLM usage to input handling and the things that I already know that I know how to do but don't want to type, although I've always quite liked being able to treat each day as an independent problem with no bootstrapping of any code, no 'custom AoC library', and just the minimal program required to solve the problem.
How do you parse the puzzle input into a data structure of your choice?
I write most as pure functional/immutable code unless a problem calls for speed. And with extension functions I've made over the years and a small library (like 2d vectors or grid utils) it's quite nice to work with. Like, if I have a 2D list (List<List<E>>), and my 2d vec, like a = IntVec(5,3), I can do myList[a] and get the element due to an operator overload extension on list-lists.
I'm plodding my way through the 2015 challenge here: https://git.thomasballantine.com/thomasballantine/Advent_of_... , it's really sharpened me up on a number of points.
https://github.com/taolson/Admiran https://github.com/taolson/advent-of-code
A lot of the problems involve manipulating sets and maps, which Clojure makes really straightforward.
Things like `partition`, `cycle` or `repeat` have come in so handy when working with segments of lists or the Conway's Game-of-Life type puzzles.
Is there a way to drop into a repl like with python and pdb.set_trace()? I couldn't find one last time I played around with Rust.
Common Lisp. Using 'iterate' package almost feels like cheating.
I have done half a year in (noob level) Haskell long ago. But can't find the code any more.
Most mind blowing thing for me was looking at someone's solutions in APL!
Downsides: The debugging situation is pretty bad (hope you like printf debugging), smaller community means smaller package ecosystem and fewer reference solutions to look up if you're stuck or looking for interesting alternative ideas after solving a problem on your own, but there's still quality stuff out there.
Though personally I'm thinking of trying Go this year, just for fun and learning something new.
Edit: also a static type system can save you from a few stupid bugs that you then spend 15 minutes tracking down because you added a "15" to your list without converting it to an int first or something like that.
So.. a language that you're interested in or like?
Reminds me of "gamers will optimize the fun out of a game"
I'm pretty clojure-curious so might mess around with doing it in that
I tried AoC out one year with the Wolfram language, which sounds insane now, but back then it was just a "seemed like the thing to do at the time" and I'm glad I did it.
AoC-like problems are well suited to quick and dirty languages with lots of builtin features, the code doesn't have to be maintained and it is small enough to not require organization.
(post title: "Designing a Programming Language to Speedrun Advent of Code", but starts off "The title is clickbait. I did not design and implement a programming language for the sole or even primary purpose of leaderboarding on Advent of Code. It just turned out that the programming language I was working on fit the task remarkably well.")
> I solve and write a lot of puzzlehunts, and I wanted a better programming language to use to search word lists for words satisfying unusual constraints, such as, “Find all ten-letter words that contain each of the letters A, B, and C exactly once and that have the ninth letter K.” I have a folder of ten-line scripts of this kind, mostly Python, and I thought there was surely a better way to do this.
I'll choose to remember it was designed for AoC :-D
Historically good candidates are:
- Rust (despite its popularity, I know a lot of devs who haven't had time to play with it).
- Haskell (though today I'd try Lean4)
- Racket/Common Lisp/Other scheme lisp you haven't tried
- Erlang/Elixir (probably my choice this year)
- Prolog
Especially for those langs that people typically dabble in but never get a chance to write non-trivial software in (Haskell, Prolog, Racket), AoC is fantastic for really getting a feel for the language.
It's a great language. Its dependent-types / theorem-proving-oriented type system combined with AI assistants makes it the language of the future IMO.
The spatial and functional problem solving makes it easy to reason about how a single cell is calculated. Then simply apply that logic to all cells to come up with the solution.
Neon Language: https://neon-lang.dev/ Some previous AoC solutions: https://github.com/ghewgill/adventofcode
I think it lends itself very well to the problem set, the language is very expressive, the standard library is extensive, you can solve most things functionally with no state at all. Yet, you can use global state for things like memoization without having to rewrite all your functions so that's nice too.
In the past I used it to try out Swift, J, R, ...
I saw someone on Twitter use Excel.
Most problems are 80%-90% massaging the input, with a little data modeling which you might have to rethink for the second part; the algorithms used to play a significant role only in the last few days.
That heavily favours languages which make manipulating strings effortless and have very permissive data structures like Python dicts or JS objects.
I know people who make some arbitrary extra restriction, like “no library at all” which can help to learn the basics of a language.
The downside I see is that suddenly you are solving algorithmic problems, which sometimes are not trivial, while at the same time struggling with a new language.
Sure, Haskell comes packaged with parser combinators, but a new user having to juggle immutability, IO and monads all at once will almost certainly find it impossible.
Having smaller problems makes it possible to find multiple solutions as well.
Also, dune makes pulling in build dependencies easy these days, and there's no shame in pulling in other support libraries. It's years since I've written anything in Haskell, but I'd guess the same goes for cabal, though OCaml is still more approachable than Haskell for most people, I'd say. A newbie is always going to be at some kind of disadvantage regardless.
I think that's the best example of anemic built-in utilities. Tried AoC two years ago with OCaml; string splitting, character matching and string slicing were very cumbersome coming from Haskell. Whereas the convenient mutation and for-loops in OCaml provide an overall better experience.
Given you're already well-versed in the ecosystem you'll probably have no issues working with dune, but for someone picking up OCaml/Haskell, having to also delve into the package management part of the system is not a productive or pleasant experience.
Bonus points for those trying out Haskell successfully, then in later challenges having to completely rewrite their solution due to space leaks, whereas Go, Rust (and probably OCaml) solutions just brute-force the work.
I'm probably just that bad at programming.
Or MUMPS.
> If you're posting a code repository somewhere, please don't include parts of Advent of Code like the puzzle text or your inputs.
The text I get, but the inputs? Well, I will comply, since I am getting a very nice thing for (almost) free, so it is polite to respect the wishes here. But since I commit the inputs into the repository (you know, since I want to be able to run tests), it is a bit of a shame the repo must be private.
But there are enough possible inputs that most people shouldn't come across anyone else with exactly the same input.
Part of the reason why AoC is so time consuming for Eric is that not only does he design the puzzles, he also generates the inputs programmatically, which he then feeds through his own solver(s) to ensure correctness. There is a team of beta testers that work for months ahead of the contest to ensure things go smoothly.
(The adventofcode subreddit has a lot more info on this.)
He's also described, over the years, his process of making the inputs. Related to your comment, he tries to make sure that there are no features of some inputs that make the problem especially hard or easy compared to the other inputs. Look at some of the math ones, a few tricks work most of the time (but not every time). Let's say after some processing you get three numbers and the solution is their LCM, that will probably be true of every input, not just coincidental, even if it's not an inherent property of the problem itself.
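To make that LCM shape concrete (hypothetical numbers here, just showing the mechanics of the final step), Python's math module makes it a one-liner once the cycle lengths are extracted:

```python
import math

# Suppose the three cycle lengths extracted from the input are:
cycles = [18, 28, 44]

# The answer is when all the cycles align, i.e. their least common multiple.
answer = math.lcm(*cycles)
print(answer)  # 2772
```

`math.lcm` accepts any number of arguments (Python 3.9+), which fits this "combine a handful of periods" pattern nicely.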
There has been the odd puzzle where some inputs have allowed simpler solutions than others, but those have stood out.
If we just look at the last three puzzles: day 23 last year, for example, admitted the greedy solution, but only for some inputs. Greedy clearly shouldn't work (shuffling the vertices in a file that admits it causes it to fail).
I have a solve group that calls it "Advent of Input Roulette" because (back when there was a global leaderboard) you can definitely get a better expected score by just assuming your input is weak in structural ways.
The example input(s) is part of the "text", and so committing it is also not allowed. I guess I could craft my own example inputs and commit those, but that exceeds the level of effort I am willing to expend trying to publish a repository no one will likely ever read. :)
I'm also surprised there are a few Dutch language sponsors. Do these show up for everyone or is there some kind of region filtering applied to the sponsors shown?
I plan on doing this year in C++ because I have never worked with it and AoC is always a good excuse to learn a new language. My college exams just got over, so I have a ton of free time.
Previous attempts:
- in Lua https://github.com/Aadv1k/AdventOfLua2021
- in C https://github.com/Aadv1k/AdventOfC2022
- in Go https://github.com/Aadv1k/AdventOfGo2023
Really hope I can get all the stars this time... Cheers, and Merry Christmas!
Having only started using python in the last few months (and always alongside agents to help me learn the new language) I am enjoying this opportunity/invitation to challenge myself to write the code from scratch, because it is helping me reinforce my understanding of the fundamentals of a language that is new to me.
On the one hand I do love how (in general nowadays) I can tell an agent to “implement a grammar parser for this example input stream” yet on the other hand, it’s too easy to just use the code without bothering to understand how it works. Likewise, it is so pleasantly easy these days to paste an error message into a chat window instead of figuring out for myself what it means / how to fix it. I love being able to get help (from agents) with that kind of stuff, but I also love being able to do it on my own.
Thank you to the folks who organize this event, for giving me that extra motivation to tie a ribbon around my understanding of various topics enough to be able to write python without help from agents or reference guides.
I’d also like to add that having never participated when the global leaderboard existed, I cannot compare this to that, other than to say that I appreciate how this way encourages me to come up with “personal challenges” like not using an IDE with autocomplete, or not looking up any info from reference sources, or not including any libraries beyond the core language functionality.
The part I enjoy the most is after figuring out a solution for myself is seeing what others did on Reddit or among a small group of friends who also does it. We often have slightly different solutions or realize one of our solutions worked "by accident" ignoring some side case that didn't appear in our particular input. That's really the fun of it imho.
I've never stressed out about the leaderboard. Ive always taken it as an opportunity to learn a new language, or brush up on my skills.
In my day-to-day job, I rarely need to bootstrap a project from scratch, implement a depth first search of a graph, or experiment with new language features.
It's for reasons like these that I look forward to this every year. For me it's a great chance to sharpen the tools in my toolbox.
Sometimes it's nice to have a break by writing a load of error handling, system architecture documentation, test cases, etc.
> For me it's a great chance to sharpen the tools in my toolbox.
That's a good way of putting it.
My way of taking it a step further and honing my AoC solutions is to make them more efficient whilst ensuring they are still easy to follow, and to make sure they work on as many different inputs as possible (to ensure I'm not solving a specific instance based on my personal input). I keep improving and chipping away at the previous years problems in the 11 months between Decembers.
I'm having a problem where the webpage switches me to a different dataset. I've been logged in the entire time in the same session, same user. I get "wrong answer but that's the answer for a different dataset". It seems to be switching me between two "problem sets". Then it keeps telling me I'm submitting answers too fast.
Originally I saved my dataset as "prob1-test.txt". It kept telling me I had the wrong answer. I did some debugging, perhaps I fixed a bug but I'm not sure.
Then I downloaded the dataset again after I really thought I had it right and tried a bunch of other things. I got a completely different data set! Call this "prob1-test-ds2.txt". I submitted an answer on data set 2 and it was accepted.
So I'm on to part 2 of the day 1 problem. I have the same problem again, I think it's right (but not impossible it could be wrong ;-)). It's giving me this feedback:
"That's not the right answer; your answer is too low. Curiously, it's the right answer for someone else; you might be logged in to the wrong account or just unlucky. In any case, you need to be using your puzzle input. If you're stuck, make sure you're using the full input data; there are also some general tips on the about page, or you can ask for hints on the subreddit. Please wait one minute before trying again. [Return to Day 1]"
If I had to guess, as part of debugging this I get a "go back to problem 1 definition" after getting a "wrong answer" result.
Pretty sure it's working now that I have the "this is the answer for another dataset" message, but I keep getting either "this is the answer for another dataset" or "too many submissions". I'm waiting a few minutes between them.
How can I fix this switcheroo problem?
I know how you might fix it: put a dataset ID at the start of the dataset and tell people to skip it, like (# 9481818), and have people include that in their answer. This way you could detect this bug.
Every time I see this I wonder how many amateur/hobbyist programmers it sets up for disappointment. Unless your definition of “pretty far” is “a small number of the part ones”, it’s simply not true.
In the programming world I feel like there's a lot of info "for beginners" and a lot of folks / activities for experts.
But that middle ground world is strange... a lot of it is a combo of filling in "basics" and also touching more advanced topics at the same time and the amount of content and just activities filling that in seems very low. I get it though, the middle ground skilled audience is a great mix of what they do or do not know / can or can not solve.
I don't know if that made any sense.
Advanced level stuff usually gets recommended directly by experts or will be interesting to beginners too as a way of seeing the high level.
Mid level stuff doesn't have that wide appeal, the freshness in the mind of the experts, or the ease of getting into, so it's not usually worth it for creators if the main metric is reach/interest
Structured (taught) learning is better in this regard, it at least gives you structure to cling on to at the mid level
But also, the middle ground is often just years of practice.
I used to program competitively and while that's the case for a lot of the early day problems, usually a few on the later days are pretty tough even by those standards. Don't take it from me, you can look at the finishing times over the years. I just looked at some today because I was going through the earlier years for fun and on Day 21/2023, 1 hour 20 minutes got you into the top 100. A lot of competitive programmers have streamed the challenges over the years and you see plenty of them struggle on occasion.
People just love to BS and brag, and it's quite harmful honestly because it makes beginner programmers feel much worse than they should.
According to Eric last year (https://www.reddit.com/r/adventofcode/comments/1hly9dw/2024_...) there were 559 people that had obtained all 500 stars. I'm happy to be one of them.
The actual number is going to be higher as more people will have finished the puzzles since then, and many people may have finished all of the puzzles but split across more than one account.
Then again, I'm sure there's a reasonable number of people who have only completed certain puzzles because they found someone else's code on the AoC subreddit and ran that against their input, or got a huge hint from there without which they'd never solve it on their own. (To be clear, I don't mind the latter as it's just a trigger for someone to learn something they didn't know before, but just running someone else's code is not helping them if they don't dig into it further and understand how/why it works.)
There's definitely a certain specific set of knowledge areas that really helps solve AoC puzzles. It's a combination of classic Comp Sci theory (A*/SAT solvers, Dijkstra's algorithm, breadth/depth first searches, parsing, regex, string processing, data structures, dynamic programming, memoization, etc) and Mathematics (finite fields and modular arithmetic, Chinese Remainder Theorem, geometry, combinatorics, grids and coordinates, graph theory, etc).
Not many people have all those skills to the required level to find the majority of AoC "easy". There's no obvious common path to accruing this particular knowledge set. A traditional Comp Sci background may not provide all of the Mathematics required. A Mathematics background may leave you short on the Comp Sci theory front.
My own experience is unusual. I've got two separate bachelor's degrees, one in Comp Sci and one in Mathematics, with a 7-year gap between them. Those degrees and 25+ years of doing software development as a job mean I do find the vast majority of AoC quite easy, but not all of it; there are still some stinkers.
Being able to look at an AoC problem and think "There's some algorithm behind this, what is it?" is hugely helpful.
The "Slam Shuffle" problem (2019 day 22) was a classic example of this that sticks in my mind. The magnitude of the numbers involved in part 2 of that problem made it clear that a naive iteration approach was out of the question, so there had to be a more direct path to the answer.
As I write the code for part 1 of any problem I tend to think "What is the twist for part 2 going to be? How is Eric going to make it orders of magnitude harder?" Sometimes I even guess right, sometimes it's just plain evil.
Just checked my copy of TAOCP (Vol 3 - Sorting and Searching) and it doesn't mention A* or SAT.
Ref: https://en.wikipedia.org/wiki/The_Art_of_Computer_Programmin...
A quick google shows that the newer volumes (Volume 4 fascicles 6 and 7) seem to cover SAT. Links to downloads are on the Wikipedia page above.
Maybe the planned 4C Chapter 7 "Combinatorial searching (continued)" might cover A* searching. Ironically googling "A* search" is tricky.
Hopefully someone else will chip in with a better reference that is somewhere in the middle of Wikipedia's brevity and TAOCP's depth.
I don't mean to say my solution was good, nor was it performant in any way - it was not, I arrived at adjacency (linked) lists - but the problem is tractable to the well-equipped with sufficient headdesking.
Operative phrase being "a computer science education," as per GGP's point. Easy is relative. Let's not leave the bar on the floor, please, while LLMs are threatening to hoover up all the low hanging fruit.
I have a computer science education and I have no idea what you're talking about. The prompt "Proof." ?
Most people who study Comp Sci never use any of what they learned ever again, and most will have forgotten most of what they learned within one or two years. Most software engineers never use any comp sci theory at all, but especially not graph theory or shit like Dijkstras algorithms, DFS, BFS etc.
> Most software engineers never use any comp sci theory at all, but especially not graph theory or shit like Dijkstras algorithms, DFS, BFS etc.
But we are talking about Advent of Code here, which is a set of fairly contrived, theoretical, in vitro learning problems that you don't really see in the real software engineering world either.
> The prompt "Proof." ?
See this paper on the Stoer-Wagner min-cut algorithm from graph theory, for the last problem in a previous year's Advent of Code: https://www.cs.dartmouth.edu/~ac/Teach/CS105-Winter05/Handou...
> I have a computer science education and I have no idea what you're talking about.
A post-secondary computer science education? I don't mean bootcamp. I mean a course of study in mathematics.
My only assumption is that you're really out of touch with the ordinary world of humanity if you think most people are aware of stuff like this:
https://www.cs.dartmouth.edu/~ac/Teach/CS105-Winter05/Handou...
Maybe when I was in college (if AoC had existed back then) I could have kept pace. But now part of my life is also running a household: between wrapping up projects for work, finalizing the various commitments I want done for the year, getting together with family and friends for celebrations, and finally travelling and/or preparing my own house for guests, I'm lucky if I have time to sit down with a cocktail and a book the week before Christmas.
Seeing the format changed to 12 days makes me think this might be the first time in years I could seriously consider doing it (to completion).
I have no evidence to say this, but I'd guess a lot more people give up on AoC because they don't want to put in the time needed than give up because they're not capable of progressing.
I think it comes down to experience, exposure to problems, and the ability to recognise what the problem boils down to.
A colleague who is an all-round better coder than me might spend 4 hours bashing away trying to solve a problem that I might be able to look at and quickly recognise is isomorphic to a specific classic Comp Sci or Maths problem, and know exactly how best to attack it, saving me a huge amount of time.
Spoiler alert: Take the "Slam Shuffle" in 2019 Day 22 (https://adventofcode.com/2019/day/22). I was lucky that I quickly recognised that each of the actions could be represented as '(a*n + b) mod numcards' (with a and b specific to the action) and therefore any two such actions can be combined into the same form. The optimal solution follows relatively simply from this.
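The trick generalises nicely: every shuffle action is an affine map modulo the deck size, and affine maps compose into affine maps. A minimal Python sketch of that composition (all names here are mine, not from the puzzle, and the deck size is the part 1 value):

```python
# Each action maps a card's position n to (a*n + b) mod numcards, so a
# whole sequence of actions collapses into a single (a, b) pair.

def compose(f, g, numcards):
    """Apply f, then g: g(f(n)) = (ga*fa)*n + (ga*fb + gb), all mod numcards."""
    fa, fb = f
    ga, gb = g
    return ((ga * fa) % numcards, (ga * fb + gb) % numcards)

numcards = 10007

# "deal into new stack" reverses: n -> -n - 1
new_stack = (numcards - 1, numcards - 1)
# "cut k" shifts positions down by k: n -> n - k
cut = lambda k: (1, -k % numcards)
# "deal with increment k" spreads cards out: n -> k*n
increment = lambda k: (k % numcards, 0)

# Collapse a small sequence of actions into one affine map.
shuffle = compose(compose(new_stack, cut(3), numcards), increment(7), numcards)
a, b = shuffle
card = 2019
print((a * card + b) % numcards)  # prints 5853, the final position of card 2019
```

Part 2 then reduces to composing the full shuffle with itself a huge number of times, which fast (binary) exponentiation of the (a, b) pair handles in O(log n) compositions.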
Doing all of the previous years means there's not much new ground although Eric always manages to find something each year.
There have also been some absolutely amazing inventions along the way. The IntCode Breakout game (2019) and the adventure game (can't remember the year) both stick in my mind as amazing constructions.
And then something shiny and fun comes along during a problem that I'm having trouble with, and I just never come back.
But, speaking to the original question as to the number of newbies that go all the way, I'd say one cannot expect to increase their skills in anything if one sticks in their comfort zone. It should be hard, and as a newbie who participated in previous years, I can confirm it often is. But I learned new things every time I did it, even if I did not finish.
The idea that anyone who doesn't know any code would:
1) Compete in Advent of Code at all.
2) Complete a single part of a single problem.
let alone complete the whole thing without it being a "tremendous challenge"...
is so completely laughable it makes me question whether you live on the same planet as the rest of us here.
Getting a person who has never coded to write a basic sort algorithm (e.g. bubble sort) is already basically impossible. I work with highly talented non-coder co-workers who all attended tier-1 universities (e.g. Oxford, Harvard, Stanford) but for finance/business related degrees; I cannot get them to write while/foreach loops in Python, and simply using Claude Code is way too much for them.
If you are even fully completing one Advent of Code problem, you are in the top 0.1% of coders, completing all of them puts you in the top 0.001%.
Wishing you best of luck in AoC, Life and Love but I imagine someone like you doesn't need it, being a complete toolbox and all.
P.S.: Tell your coworkers I'm sorry they have to put up with you.
You're the person saying Advent of Code is "so easy" that anyone, even people with no coding ability at all, should find it do-able. That totally diminishes the difficulty of the problems while asserting your own genius, i.e. that you found it totally trivial.
I am the person saying that actually, stuff like Advent of Code is incredibly difficult and 99% of active programmers aren't able to complete it, let alone people who don't code.
I am not an elitist at all, unlike yourself, I don't find completing "Advent of Code" easy, in fact, it would take me a long time to complete it, more time than I have available in my busy life in the average December. And I doubt I would be able to complete it 100% without looking up help, getting hints, or using LLMs to help.
Heck, I even talked about having to be serious about completion, and you couldn't be bothered to read the whole comment, then proceeded to call me delusional? FFS, I am now praying for your co-workers and I'm not even religious.
Did you realize only roughly 500 people of the > 1M who are registered for advent of code even complete it?
You said "it should not be a tremendous challenge", i.e. not that big of a deal even if you don't know how to code. Which is absolutely diminishing the difficulty of the event, I mean, come on man...
This is why I'm asserting you are quite oblivious to the abilities of most people. I am asserting that most people who CAN code cannot complete the event, let alone non-coders. I am a very active coder (mostly for fun these days, but also sometimes for work), but I could not complete Advent of Code. Maybe if I took all of December off work to dedicate serious time, but even then I wonder if it's possible without looking at hints/LLM-help etc.
I often try and help my co-workers who are working on AI based side-projects for fun, so I have a strong insight into the abilities of non-coding smart people, and the reality is that yes, they get very turned off as soon as you get anything more complex than for-loops and if-statements. This isn't me being mean to co-workers, this is the reality of things I have experienced. It's not a brains thing, they can understand more complex stuff, but they don't want to, they find it annoying, boring, not worth the time/effort etc. So the idea of them learning dynamic programming, DFS/BFS, more complex data structures etc, is well, just not going to happen.
My point is that you are effectively saying, "oh just about anyone can do Advent of Code if they want to", is totally not grounded in any sort of reality.
Try to have a better day.
https://adventofcode.com/2020/day/1 for example. It's not hard to do part 1 by hand.
You need two numbers from the input list (of 200 numbers) that add to 2020.
For each number n in the list you just have to check if (2020-n) is in the list.
A quick visual scan showed my input only had 9 numbers that were less than 1010, so I'd only have to consider 9 candidate numbers.
It would also be trivial for anyone who can do relatively simple things with a spreadsheet.
That's not the same as saying they're easy, but it's a different kind of barrier, and (in my opinion) more a test of 'can you think?' than 'did you do a CS degree?'
In this sense it's accessible: you won't get stuck because of a word you don't understand or a concept you've never heard of.
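The complement scan described above is only a few lines in any language. A sketch in Python, using the small example list from the puzzle page (real inputs differ per user):

```python
# AoC 2020 day 1, part 1: find the two entries that sum to 2020.
entries = [1721, 979, 366, 299, 675, 1456]

seen = set()
for n in entries:
    if 2020 - n in seen:       # the complement already appeared in the list
        print(n * (2020 - n))  # the puzzle asks for the product; prints 514579
        break
    seen.add(n)
```

The set makes each membership check O(1), so even a full 200-number input is effectively instant, though at that size a plain list scan works fine too.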
I very much disagree here. To make any sort of progress in AoC, in my experience, you need at least:
- awareness of graphs and how to traverse them
- some knowledge of a pathfinding algorithm
- an understanding of memoisation and how it can be applied to make deeply recursive computations feasible
Those types of puzzle come up a lot, and it’s not anything close to what I’d expect someone with “just a little programming knowledge” to have.
Someone with just a little programming knowledge is probably good with branches and loops, some rudimentary OOP, and maybe knows when to use a list vs a map. They’re not gonna know much about other data structures or algorithms.
They could learn them on the go of course, but then that’s why I don’t think basic coding knowledge is enough.
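To illustrate the last bullet, here is a minimal sketch of what memoisation buys on the kind of path-counting recursion AoC puzzles love (the graph here is made up for illustration):

```python
from functools import lru_cache

# Count distinct paths from "start" to "end" in a small DAG. The naive
# recursion revisits shared subtrees over and over; caching results
# (here via lru_cache) makes each node cost O(1) after its first visit.
graph = {
    "start": ["a", "b"],
    "a": ["b", "end"],
    "b": ["end"],
    "end": [],
}

@lru_cache(maxsize=None)
def count_paths(node):
    if node == "end":
        return 1
    return sum(count_paths(nxt) for nxt in graph[node])

print(count_paths("start"))  # prints 3
```

Without the cache the same subproblems get recomputed exponentially often; on the deeply branching inputs in real puzzles that is the difference between milliseconds and hours.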
It's only going to be 12 problems rather than 24 this year and there isn't going to be a global leaderboard, but I'm still glad we get to take part in this fun Christmas season tradition, and I'm thankful for all those who put in their free time so that we can get to enjoy the problems. It's probably an unpopular stance, but I've never done Advent of Code for the competitive aspect, I've always just enjoyed the puzzles, so as far as I'm concerned nothing was really lost.
Is this an unpopular stance? Out of a dozen people I know that did/do AoC every year, only one was trying to compete. Everyone else did it for fun, to learn new languages or concepts, to practice coding, etc.
Maybe it helps that, because of timezones, in Europe you need to be really dedicated to play for a win.
No, it's not. At most 200 people could end up on the global leaderboard, and there are tens of thousands of people who participate most days (though it drops off by the end, it's over 100k reliably for the first day). The vast majority of participants are not there for the leaderboard. If you care about competing, there are always private leaderboards.
Premises:
(i) I love Advent of Code and I'm grateful for its continuing existence in whatever form its creators feel like it's best for themselves and the community;
(ii) none of what follows is a request, let alone a demand, for anything to change;
(iii) what follows is just the opinion of some random guy on the Internet.
I have a lot of experience with competitions (although more on the math side than on the programming side), and I've been involved essentially since I was in high school, as a contestant, coach, problem writer, organizer, moving tables, etc. In my opinion Advent of Code simply isn't a good competition:
- You need to be available for many days in a row for 15 minutes at a very specific time.
- The problems are too easy.
- There is no time/memory check: you can write ooga-booga code and still pass.
- Some problems require weird parsing.
- Some problems are pure implementation challenges.
- The AoC guy loves recursive descent parsers way too much.
- A lot of problems are underspecified (you can make assumptions not in the problem statement).
- Some problems require manual input inspection.
To reiterate once again: I am not saying that any of this needs to change. Many of the things that make Advent of Code a bad competition are what make it an excellent, fun, memorable "Christmas group thing". Coming back every day creates community and gives people time to discuss the problems. Problems being easy and not requiring specific time complexities to be accepted make the event accessible. Problems not being straight algorithmic challenges add welcome variety.
I like doing competitions but Advent of Code has always felt more like a cozy problem solving festival, I never cared too much for the competitive aspect, local or global.
One could probably build a separate service that provides a leaderboard for solution runtimes.
I agree that it’s more of a cozy activity than a hardcore competition, that’s what I appreciate about it most.
The vast majority (though not all) of the inputs can be parsed with regex or no real parsing at all. I actually can't think of a day that needed anything like recursive descent parsing.
LOL!!
I agreed with a lot of what you wrote, but also a lot of us strive for beautiful solutions regardless of time/memory bounds.
In fact, I’m (kind of) tired of leetcode flagging me for one ultra special worst-case scenario. I enjoy writing something that looks good and enjoying the success.
(Not that it’s bad to find out I missed an optimization in the implementation, but… it feels like a lot of details sometimes.)
Just a random example: https://open.kattis.com/problems/magicallights
But the Kattis website is great. The program runs on their server without you getting to know the input (you just get right/wrong back), so a bit different. But also then gives you memory and time constraints which you for the more difficult problems must find your way out of.
The problems are pretty difficult in my book (I never make it past day 3 or so). So I definitely would hope they never increase the difficulty.
[0] https://www.jerpint.io/blog/2024-12-30-advent-of-code-llms/
The difference when working on larger tasks that require reasoning is night and day.
In theory it would be very interesting to go back and retry the 2024 tasks, but those will likely have ended up in the training data by now...
I see people assert this all over the place, but personally I have decreased my usage of LLMs in the last year. During this change I’ve also increasingly developed the reputation of “the guy who can get things shipped” in my company.
I still use LLMs, and likely always will, but I no longer let them do the bulk of the work and have benefited from it.
It's true this was 4 months after AoC 2024 was out, so it may have been trained on the answer, but I think that's way too soon.
Day 3 in 2024 isn't a Math Olympiad tier problem or anything but it seems novel enough, and my prior experience with LLMs were that they were absolutely atrocious at assembler.
But as others have said, it’s a night and day difference now, particularly with code execution.
From watching them work, they read the spec, write the code, run it on the examples, refine the code until it passes, and so on.
But we can’t tell whether the puzzle solutions are in the training data.
I’m looking forward to seeing how well current agents perform on 2025’s puzzles.
Doing things for the fun of it, for curiosity's sake, for the thrill of solving a fun problem - that's very much alive, don't worry!
Maybe just have a cool advent calendar thingy like a digital tree that gains an ornament for each day you complete. Each ornament can be themed for each puzzle.
Of course I hope it goes without saying that the creator(s) can do it however they want and we’re nothing but richer for it existing.
It becomes a race when you start seeing it as a race :) One can just... ignore the leaderboard
Instead, getting gold stars for solving the puzzles is incentive enough, and can be done as a relaxing thing in the morning.
No matter what you do, as the puzzles get harder, you won't solve them in a day (or even a lifetime) if you don't come up with good algorithms/methods/heuristics.
> Having a leaderboard also leaks into the puzzle design.
Is it your opinion? Can you give an example? Or did Eric say that?

Even before LLMs I knew the leaderboard was filled with results faster than you can blink.
So for some of us (from gut feeling, the vast majority) it was always just for fun. Usually I spent at least until March to finish as much as I did each year.
Many people do - well, did - AoC while ignoring the leaderboard.
I'm just glad they're keeping this going.
I am still updating it for this year, so please feel free to submit a PR or share some here.
Could either be really recreational and relaxing.. or painful and annoying.
Though I don't care even if it takes me all of next year, it's all in order to learn :)
In the IEEEXTREME university programming competition there are ~10k participating teams.
Our university has a quite strong Competitive Programming program and the best teams usually rank in the top 100. Last year a team ranked 30th, and it wasn't even our strongest team (which didn't participate).
This year none of our teams was able to get in the top 1000. I would estimate close to 99% of the teams in the Top 1000 were using LLMs.
Last year they didn't seem to help much, but this year they rendered the competition pointless.
I've read blogs/seen videos of people who got onto the AoC global leaderboard last year without using LLMs, but I think this year it wouldn't be possible at all.
Cheating is rampant anywhere there’s an online competition. The cheaters don’t care about respecting others, they get a thrill out of getting a lot of points against other people who are trying to compete.
Even in the real world, my runner friends always have stories about people getting caught cutting trails and all of the lengths their running organizations have to go through now to catch cheaters because it’s so common.
The thing about cheaters in a large competition is that it doesn’t take many to crowd out the leaderboard, because the leaderboard is where they get selected out. If there are 1000 teams competing and only 1% cheat, that 1% could still fill the top 10.
> Should I use AI to solve Advent of Code puzzles? No. If you send a friend to the gym on your behalf, would you expect to get stronger? Advent of Code puzzles are designed to be interesting for humans to solve - no consideration is made for whether AI can or cannot solve a puzzle. If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.
reminds me of something I read in "I’m a high schooler. AI is demolishing my education." [0,1] emphasis added:
> During my sophomore year, I participated in my school’s debate team. I was excited to have a space outside the classroom where creativity, critical thinking, and intellectual rigor were valued and sharpened. I love the rush of building arguments from scratch. ChatGPT was released back in 2022, when I was a freshman, but the debate team weathered that first year without being overly influenced by the technology—at least as far as I could tell. But soon, AI took hold there as well. Many students avoided the technology and still stand against it, but it was impossible to ignore what we saw at competitions: chatbots being used for research and to construct arguments between rounds.
high school debate used to be an extracurricular thing students could do for fun. now they're using chatbots in order to generate arguments that the students can just regurgitate.
the end state of this seems like a variation on Dead Internet Theory - Team A is arguing the "pro" side of some issue, Team B is arguing the "con" side, but it's just an LLM generating talking points for both sides and the humans acting as mouthpieces. it still looks like a "debate" to an outside observer, but all the critical thinking has been stripped away.
0: https://www.theatlantic.com/technology/archive/2025/09/high-...
High school debate has been ruthless for a long time, even before AI. There has been a rise in the use of techniques designed to abuse the rules and derail arguments for several years. In some regions, debates have become more about teams leveraging the rules and technicalities against their opponents than organically trying to debate a subject.
Imagine the shitshow that gaming would be without any kind of anti-cheat measures, and that's the state of competitive programming.
If the rules don't allow that and yet people do then well, you need online qualifiers and then onsite finals to pick the real winners. Which was already necessary, because there are many other ways to cheat (like having more people than allowed in the team).
It's not much different than outlawing performance enhancing drugs. Or aimbots in competitive gaming. The point is to see what the limits of human performance are.
If an alien race came along and said "you will all die unless you beat us in the IEEE programming competition", I would be all for LLM use. Like if they challenged us to Go, I think we'd probably / certainly use AI. Or chess - yeah, we'd be dumb to not use game solvers for this.
But that's not in the spirit of the competition if it's University of Michigan's use of Claude vs MIT's use of Claude vs ....
Imagine if the word "competition" meant "anything goes" automatically.
They're just different types of fun. The problem is if one type of fun is ruined by another.
With products I want actual correctness. And not something thrown away.
Which is why I think it’s great they dropped the competitive part and have just made it an advent calendar. Much better that way.
(I did a couple of these in college, though we didn't practice outside of competition so we weren't especially good at it.)
The Regional Finals and World Finals are in a single venue with a very controlled environment. Just like the IOI and other major competitions.
National High School Olympiads have been dealing with bigger issues because there are too many participants in the first few phases, and usually the schools themselves host the exams. There has been rampant cheating. In my country I believe the organization has resorted to manually reviewing all submissions, but I can only see this getting increasingly less effective.
This year the Canadian Computing Competition didn't officially release the final results, which for me is the best solution:
> Normally, official results from the CCC would be released shortly after the contest. For this year’s contest, however, we will not be releasing official results. The reason for this is the significant number of students who violated the CCC Rules. In particular, it is clear that many students submitted code that they did not write themselves, relying instead on forbidden external help. As such, the reliability of “ranking” students would neither be equitable, fair, or accurate.
Available here: [PDF] https://cemc.uwaterloo.ca/sites/default/files/documents/2025...
Online competitions are just hopeless. AtCoder and Codeforces have rules against AI but no way to enforce them. A minimally competent cheater is impossible to detect. Meta Hacker Cup has a long history and is backed by a large company, but had its leaderboard crowded by cheaters this year.
I don’t see why competitive debate or programming would be different. (But I understand why a fair global leaderboard for AOC is no longer feasible).
I solved a few problems with it last year, and it is amazing how compact the solutions are. It also messes with your head, and the community surrounding it is interesting. Highly recommended.
Uiua – A stack-based array programming language - https://news.ycombinator.com/item?id=42590483 - Jan 2025 (6 comments)
Uiua: A minimal stack-based, array-based language - https://news.ycombinator.com/item?id=37673127 - Sept 2023 (104 comments)
Much easier so far than it was in 2023 when just basic string wrangling was basically nonexistent.
From https://adventofcode.com/2025/about:
" Why did the number of days per event change? It takes a ton of my free time every year to run Advent of Code, and building the puzzles accounts for the majority of that time. After keeping a consistent schedule for ten years(!), I needed a change. The puzzles still start on December 1st so that the day numbers make sense (Day 1 = Dec 1), and puzzles come out every day (ending mid-December). "
We (Depot) are sponsoring this year and have a private leaderboard [0]. We’re donating $1k/each for the top five finishers to a charity of their choice.
>What happened to the global leaderboard? The global leaderboard was one of the largest sources of stress for me, for the infrastructure, and for many users. People took things too seriously, going way outside the spirit of the contest; some people even resorted to things like DDoS attacks. Many people incorrectly concluded that they were somehow worse programmers because their own times didn't compare. What started as a fun feature in 2015 became an ever-growing problem, and so, after ten years of Advent of Code, I removed the global leaderboard. (However, I've made it so you can share a read-only view of your private leaderboard. *Please don't use this feature or data to create a "new" global leaderboard.*)
It's kotlin and shik for me this year, probably a bit of both. And no stupid competitions, AoC should be fun.
No thanks.
It used to be that reddit had a user creation screen that looked like you needed to input an email address, but you could actually just click "Next" to skip it.
The last time I had cause to make a reddit account, they no longer allowed this.
But it is true that at any time they could make using an email address or phone number mandatory, and then creating an Advent of Code account will be gated behind that.
Having done auth myself, I can also understand why auth is being externalised like this. The site was flooded with bots and scrapers long before LLMs gained relevance and adding all the CAPTCHAs and responding to the "why are you blocking my shady CGNAT ISP when I'm one of the good ones" complaints is just not worth it. Let some company with the right expertise deal with all of that bullshit.
I'd wish the site would have more login options, though. It's a tough nut to crack; pick a small, independent OAuth login service not under the control of a big tech company and you're basically DDoSing their account creation page for all of December. Pick a big tech company and you're probably not gaining any new users. You can't do decentralized auth because then you're just doing authentication DDoS with extra steps.
If I didn't have a github account, I'd probably go with a throwaway reddit account to take part. Reddit doesn't really do the same type of tracking Twitter tries to do and it's probably the least privacy invasive of the bunch.
https://gist.github.com/rtfeldman/f46bcbfe5132d62c4095dfa687...
while true; do kill -9 $((rnd * 100000)); sleep 5; done

Probably needs some external tool for the rnd function.

On a serious note, I just saw this: https://linuxupskillchallenge.org
while true; do
kill -9 $RANDOM
sleep 5
done
Or to kill running PIDs each time:

while true; do
  rnd=$(ps -e -o pid= | shuf -n 1)
  kill -9 $rnd
  sleep 5
done

And yet I expect the whole leaderboard to be full of AI submissions...
Edit: No leaderboard this year, nice!
There are plenty of programming competitions and hackathons out there. Let this one simply be a celebration of learning and the enjoyment of problem solving.
> The global leaderboard was one of the largest sources of stress for me, for the infrastructure, and for many users. People took things too seriously, going way outside the spirit of the contest; some people even resorted to things like DDoS attacks. Many people incorrectly concluded that they were somehow worse programmers because their own times didn't compare. What started as a fun feature in 2015 became an ever-growing problem, and so, after ten years of Advent of Code, I removed the global leaderboard.
There will be no global leaderboard this year.
I always put it down to overthinking and never arriving at a solution but maybe it was actually a much tougher problem!
I've got 500 stars (i.e. I've completed every day of all 10 previous years) but not always on the day the puzzles were available, probably 430/500 on the day. (I should say I find the vast majority of AoC relatively easy as I've got a strong grounding in both Maths and Comp Sci.)
First of all I only found out about AoC in 2017 and so I did 2015 and 2016 retrospectively.
Secondly I can keep up with the time commitments required up until about the 22nd-24th (which is when I usually stop working for Christmas). From then time with my wife/kids takes precedence. I'll usually wrap up the last bits sometime from the 27th onwards.
I've never concerned myself with the pointy end of the leaderboards due to timezones as the new puzzles appear at 5am local time for me and I've no desire to be awake at that time if I can avoid it, certainly not for 25 days straight. I expect that's true of a large percentage of people participating in AoC too.
My simple aim every day is that my rank for solving part 2 of a day is considerably lower than my rank for solving part 1.
(To be clear, even if I was up and firing at 5am my time every day I doubt I could consistently get a top 100 rank. I've got ten or so 300-1000 ranks by starting ~2 hours later but that's about it. Props to the people who can consistently appear in the top 100. I also start most days from scratch whilst many people competing for the top 100 have lots of pre-written code to parse things or perform the common algorithms.)
I also use the puzzles to keep me on my toes in terms of programming and I've completed every day in one of Perl, C or Go and I've gone back and produced solutions in all 3 of those for most days. Plus some random days can be done easily on the command-line piping things through awk, sed, sort, grep, and the like.
The point of AoC is that everyone is free to take whatever they want from it.
Some use it to learn a new programming language. Some use it to learn their first language and only get a few days into it. Some use it to make videos to help others learn how to program in a specific language. Some use it to learn how/when to use structures like arrays, hashes/maps, red-black trees, etc, and then how/when to use classic Comp Sci algorithms like A* or SAT solvers, Dijkstra's, etc, all the way to some random esoteric things like Andrew's monotone chain convex hull algorithm for calculating the perimeter of a convex hull. There are also the mathsy type problems, often involving the Chinese Remainder Theorem and/or some variation of finite fields.
My main goal is to come up with code that is easy to follow and performs well as a general solution rather than overly specific to my individual input. I've also solved most years with a sub 1 second total runtime (per year, so each day averages less than 40msec runtime).
Anyway, roll on tomorrow. I'll get to the day 1 problem once I've got my kid up and out the door to go to school as that's my immediate priority.
Of course, folks may use it to visualise the puzzles but not to solve them.
Maybe it's useful for people trying to learn but also becoming pointless now as all Junior dev roles can be done with AI.
I mean do plumbers have an advent of plumbing where they try and unblock shit filled toilets for fun?
Yes, plumbers and other types of craftspeople and technicians do also have these little fun competitions. Why shouldn't they?
I think the reason some of us programmers do these things, is likely because many (myself included) entered the field as enthusiasts and hobbyists in the first place.
No, but you’ll see it for writers, musicians, and the like.
Engineering (software or not) can be an intellectually rewarding experience for many. I don’t know why some people find this something to scoff at, would you rather have no pleasure derived from your work?
You've obviously never watched "Drain Cleaning Australia" on YouTube!
Yes, some people find this stuff fun, because they find coding fun, and don't typically get to do the fun kind of coding on company time. Also, there'd be a hell of a lot less open source software in the world if people didn't code for fun.
Let people enjoy things. Just because you don't like that part of your job as much as them doesn't mean they're wrong.
I enjoy programming a lot, but most of it comes from things like designing APIs that work well and that people enjoy using, or finding things that allow me to delete a ton of legacy code.
I did try to do Advent of Code many times. Usually I get bored halfway through reading the first problem, and then when I finally get through, I realize that these usually involve tradeoffs that are annoying to make in terms of memory/CPU usage, and also several edge cases to deal with.
it really feels more like work than play.
Why would you use a site called HackerNews if you are not a hacker? No idea.
[1] https://web.archive.org/web/20241201070128/https://adventofc...
Although there are now rumours of hidden motors in Tour de France bicycles. So, I guess it's the same.