mark_l_watson 8 minutes ago
I enjoy reading Peter’s ‘Python studies’ and was surprised to see a comparison of different LLMs solving Advent of Code problems here, but the linked article is pretty cool.

Peter and a friend of his wrote an article over a year ago discussing whether or not LLMs are already AGI, and after re-reading that article my opinion shifted a bit toward: LLMs are AGI in broad digital domains. I still need to see embodied AI in robots and physical devices before I think we are 100% of the way there. Still, I apply Gemini and a lot of open-weight models to two things: (1) coding problems, and (2) after I read or watch material on philosophy, I almost always ask Gemini for a summary, references, and a short discussion based on what it knows about me.

az09mugen 4 hours ago
I'm sorry, but what's the point here? It's not for a job, or to improve an LLM, or to do something useful per se, just to "enjoy" how version X or Y of an LLM can solve problems.

I don't want to sound grumpy, but it doesn't achieve anything; this is just a showcase of how a "calculator with a small probability of failure can succeed".

Move on, do something useful, don't stop being amazed by AI but please stop throwing it at my face.

mathgeek 3 hours ago
The author is Peter Norvig, who has definitely done "something useful" when it comes to AI. He's earned some time for play.
pbalau 3 hours ago

    > Move on, do something useful, don't stop being amazed by AI but please stop throwing it at my face.

Do you see the irony in what you did?

So, how about you move on, do something useful, don't stop being annoyed by AI, but please stop throwing your opinion in anyone's face.

s1mplicissimus 2 hours ago
One could argue that pointing out the pointlessness of LLM hype is in fact useful, while producing that same hype is not.