Clearly Ecosia is pushing for “people want AI” _and_ we want to make it more ecofriendly. Taking away features from users altogether is not the right answer.
It’s like saying “cheapest car is no car”. It doesn’t solve the fundamental problem of “wanting a car”.
It’s not true. AI isn’t especially environmentally unfriendly, which means that if you’re using AI then whatever activity you would otherwise be doing stands a good chance of being more environmentally unfriendly. For instance, a ChatGPT prompt uses about as much energy as watching 5–10 seconds of Netflix. So AI is greener than no AI in the cases where it displaces other, less green activities.
And when you have the opportunity to use human labour or AI, AI is almost certainly the greener option as well. For instance, the carbon emissions of writing and illustrating are far lower for AI than for humans:
> Our findings reveal that AI systems emit between 130 and 1500 times less CO2e per page of text generated compared to human writers, while AI illustration systems emit between 310 and 2900 times less CO2e per image than their human counterparts.
— https://www.nature.com/articles/s41598-024-54271-x
The AI water issue is fake: https://andymasley.substack.com/p/the-ai-water-issue-is-fake
Using ChatGPT is not bad for the environment: https://andymasley.substack.com/p/a-cheat-sheet-for-conversa...
A ChatGPT prompt uses about as much energy as watching 5–10 seconds of Netflix: https://simonwillison.net/2025/Nov/29/chatgpt-netflix/
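The Netflix comparison works out as follows (a back-of-envelope sketch; the 0.34 Wh per-prompt figure is Sam Altman's estimate, and the 120–240 Wh per streaming hour range is the one used in the linked post):

```python
# Sketch: seconds of Netflix streaming per ChatGPT prompt, using the
# figures quoted in the linked posts (not independently measured).
PROMPT_WH = 0.34                    # Sam Altman's per-prompt estimate
NETFLIX_WH_PER_HOUR = (240, 120)    # high and low streaming estimates

for wh_per_hour in NETFLIX_WH_PER_HOUR:
    seconds = PROMPT_WH / (wh_per_hour / 3600)
    print(f"{wh_per_hour} Wh/h -> {seconds:.1f} s of Netflix per prompt")
# 240 Wh/h -> 5.1 s, 120 Wh/h -> 10.2 s
```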
I think the actual answer is more nuanced and less positive. Although I appreciate how many citations your comment has!
I'd point to just one, which is a really good article MIT Technology Review published about exactly this issue[0].
I'd make two overall points. Firstly, on:
> when you have the opportunity to use human labour or AI, AI is almost certainly the greener option as well.
I think that this is never the trade-off. AI normally generates marketing copy for someone in marketing, not by itself, and even if it does everything itself, the marketing person might stop being employed but certainly doesn't stop existing and producing CO2.
My point is, AI electricity usage is almost exclusively new usage, not replacing something else.
And secondly, on Simon Willison's / Sam Altman's argument that:
> Assuming that higher end, a ChatGPT prompt by Sam Altman's estimate uses:
>
> 0.34 Wh / (240 Wh / 3600 seconds) = 5.1 seconds of Netflix
>
> Or double that, 10.2 seconds, if you take the lower end of the Netflix estimate instead.
This may well be true for prompts, but it misses out the energy-intensive training process, which we can't ignore if we actually want to know the full emissions impact, especially in an environment where new models are being trained all the time.
On a more positive note, I think Ecosia's article makes a good point that AI requires electricity, not pollution. It's a really bad piece of timing that AI has taken off initially in the US at a time when the political climate is trying to steer energy away from safer, more sustainable sources and towards more dangerous, polluting ones. But that isn't an environment that has to continue, and Chinese AI work in the last year has also done a good job of demonstrating that AI training energy use can be a lot less than previously assumed.
[0] https://www.technologyreview.com/2025/05/20/1116327/ai-energ...
> AI normally generates marketing copy for someone in marketing, not by itself, and even when if it does everything itself, the marketing person might stop being employed but certainly doesn't stop existing and producing co2.
Sure, but it does it a lot quicker than they can, which means they spend more of their time on other things. You’re getting more work done on average for the carbon you are “spending”.
Also, even when ignoring the carbon cost of the human, just the difference in energy use from their computer equipment in terms of time spent on the task outstrips AI energy use.
> This may well be true for prompts, but misses out the energy intensive training process.
If you are trying to account for the fully embodied cost including production, then I think things tilt even more in favour of AI being environmentally-friendly. Do you think producing a Netflix show is carbon-neutral? I have no idea what the carbon cost of producing, e.g. Stranger Things is, but I’m guessing it vastly outweighs the training costs of an LLM.
It’s possibly worth noting that both activities require humans and even fully operational end-to-end supply chains of rare-earth minerals and semiconductor fabrication. Among many, many other things involved.
I just don’t think we can freely discount that it takes heavy industrial equipment and people and transport vehicles to move and process the raw materials to make LLM/AI tech possible, and that the … excitement has driven those activities to precipitous heights. And then of course transporting refined materials, fabricating end products, transporting those, building and deploying new machines in new data centers around the world, massively increasing global energy demand and spiking it way beyond household use in locales where these new data centers are deployed. And so on and so forth.
I suspect that we will find out someday that maybe LLMs really are more efficient, possibly even somehow “carbon negative” if you amortize it across a long enough timespan—but also that the data will show, for this window of time, that it was egregiously bad across a full spectrum of metrics.
It's this completely unfounded barrage of making shit up about energy consumption, without any tether to reality, that makes the whole complaining about energy use seem like a competition over who can make up the most ridiculous, most hand-wringing analogy.
The writing and illustrating activities use less energy, but the people out there using AI to generate ten novels and covers and fire them into the Kindle store would not otherwise have written ten novels, so this is not displacement either.
Really? A bored kid can’t play around with ChatGPT instead of watching Netflix?
You can substitute any other activity if you like. Netflix was just an example.
Pages that would never be created were the stochastic parrot to be turned off and never squawk.
It'd save a lot of energy, water, carbon emissions to just let the already existing humans just get on with the churn.
I don’t know, how many?
> It'd save a lot of energy, water, carbon emissions to just let the already existing humans just get on with the churn.
How much, and how do you know that?
Do you plan on killing that person to stop their emissions?
If you don't use the AI program the emissions don't happen, if you don't hire a person for a job, they still use the carbon resources.
So the comparison isn't 1000kg Co2 for a human vs 1kg Co2 for an LLM. It's 1000kg Co2 for a human vs 1001kg Co2 for an LLM.
> For instance, the emission footprint of a US resident is approximately 15 metric tons CO2e per year, which translates to roughly 1.7 kg CO2e per hour
Those 15,000 kg of CO2e are emitted regardless of what that person does.
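The marginal-emissions point can be made concrete (a sketch using the thread's own example numbers; the 15 t/year figure is from the quoted paper):

```python
# The human's baseline footprint is incurred whether or not they do the
# task, so the comparison is baseline vs baseline-plus-LLM, not human
# vs LLM. The 1000/1 kg task figures are the thread's illustrative numbers.
US_TONS_PER_YEAR = 15.0
kg_per_hour = US_TONS_PER_YEAR * 1000 / (365 * 24)   # ~1.71 kg CO2e/hour

human_baseline_kg = 1000.0   # emitted regardless of what the person does
llm_task_kg = 1.0            # extra emissions if an LLM does the task

emissions_if_human_writes = human_baseline_kg               # 1000 kg
emissions_if_llm_writes = human_baseline_kg + llm_task_kg   # 1001 kg, not 1 kg
```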
The article also makes assumptions about laptops that are false.
> Assuming an average power consumption of 75 W for a typical laptop computer.

Laptops draw closer to 10 W than 75 W (peak power is closer to 75 W, but almost no laptops can dissipate 75 W continually).
The article is clearly written by someone with an axe to grind, not someone who is interested in understanding the cost of LLMs/AI/etc.
75W is not outlandish when you consider the artist will almost certainly have a large monitor plugged in, external accessories, some will be using a desktop, etc. And even taking the smaller figure, AI use is still smaller.
The human carbon use is still relevant. If they were not doing the writing, they could accomplish some other valuable tasks. Because they are spending their time on things the AI can do, somebody else will have to do those things, or they won't get done at all.
I'm thinking a really good search engine would not make you reach for AI as often, and so could be eco-friendly that way.
Anyway, these sorts of comparisons make no sense to begin with, and quite obviously at the moment the worst actors (cough, xAI, cough), who are deploying massively polluting generators into residential neighborhoods, are much worse than, say, Google Search.
Imagine a hypothetical search competition and you are given Google and I am given ChatGPT. I’ll win every single time.
I don't want LLMs because i don't need them, yet they are being shoved down my throat.
Your comparison to cars is good. A cheap car will be slower and less comfortable but will ultimately get you where you want to be. That's the core value of the car. A bad LLM may not get you anywhere. It's more like having a cheap power drill that can drill through plaster but not through concrete; in the end you still want the expensive drill…
Each generated token takes energy equivalent to the heat from burning ~0.06 µL of gasoline: roughly 2 joules per token, including datacenter and hosting overhead. If you get up to massive million-token prompts, it can reach 8–10 joules per token of output. Training runs around 17–20 J per token.
A liter of gasoline gets you 16,800,000 tokens for normal use cases. Caching and the various scaled-up efficiency hacks and improvements can get you into the thousands of tokens per joule for some use cases.
For contrast, your desktop PC running idle uses around 350k joules per day. Your fridge uses 3 million joules per day.
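The gasoline equivalences above can be checked directly (a sketch; the ~2 J/token figure is the commenter's estimate, and gasoline is taken at ~33.6 MJ/L, the energy density implied by their 16.8M-tokens-per-liter number):

```python
# Back-solving the comment's gasoline arithmetic.
GASOLINE_J_PER_LITER = 33.6e6   # implied by 16.8M tokens/L at 2 J/token
J_PER_TOKEN = 2.0               # the commenter's estimate, incl. overhead

tokens_per_liter = GASOLINE_J_PER_LITER / J_PER_TOKEN            # 16,800,000
microliters_per_token = J_PER_TOKEN / GASOLINE_J_PER_LITER * 1e6 # ~0.06 µL
print(f"{tokens_per_liter:,.0f} tokens/L, {microliters_per_token:.3f} uL/token")
```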
AI is such a relatively trivial use of resources that you caring about nearly any other problem, in the entire expanse of all available problems to care about, would be a better use of your time.
AI is making resources allocated to computation and data processing much more efficient, and year over year, the relative intelligence per token generated, and the absolute energy cost per token generated, is getting far more efficient and relatively valuable.
Find something meaningful to be upset at. AI is a dumb thing to be angry at.
Compared to traditional computing it seems to me like there’s no way AI is power efficient. Especially when so many of the generated tokens are just platitudes and hallucinations.
How concerned should you be about spending 0.8 Wh? 0.8 Wh is enough to:
- Stream a video for 35 seconds
- Watch an LED TV (no sound) for 50 seconds
- Upload 9 photos to social media
- Drive a sedan at a consistent speed for 4 feet
- Leave your digital clock on for 50 minutes
- Run a space heater for 0.7 seconds
- Print a fifth of a page of a physical book
- Spend 1 minute reading this blog post

If you're reading this on a laptop and spend 20 minutes reading the full post, you will have used as much energy as 20 ChatGPT prompts. ChatGPT could write this blog post using less energy than you use to read it!
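The list above can be back-solved into implied device wattages (a sketch; the power figures below are my own illustrative assumptions chosen to roughly reproduce the post's durations, not numbers from the post itself):

```python
# Converting 0.8 Wh into runtime at assumed device wattages.
ENERGY_WH = 0.8
implied_watts = {
    "video stream": 82,      # -> ~35 s
    "LED TV": 58,            # -> ~50 s
    "digital clock": 0.96,   # -> ~3000 s (~50 min)
    "space heater": 4000,    # -> ~0.7 s
}
for device, watts in implied_watts.items():
    seconds = ENERGY_WH * 3600 / watts   # Wh * 3600 = joules; / W = seconds
    print(f"{device}: {seconds:.0f} s")
```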
I found this helpful.
The energy usage of the human body is measured in kilocalories, aka Calories.
Combustion of gasoline can be approximated by conversion of its chemicals into water and carbon dioxide. You can look up energy costs and energy conversions online.
Some AI usage data is public. TDP of GPUs are also usually public.
Also, for AI specifically, depending on MoE and other sparsity tactics, caching, hardware hacks, regenerative capture at the datacenter, and a bajillion other little things, the actual number is variable. Model routing like OpenAI does further obfuscates the cost per token - a high capabilities 8B model is going to run more efficiently than a 600B model across the board, but even the enormous 2T models can generate many tokens for the equivalent energy of burning µL of gasoline.
If you pick a specific model and gpu, or Google's TPUs, or whatever software/hardware combo you like, you can get to the specifics. I chose µL of gasoline to drive the point across, tokens are incredibly cheap, energy is enormously abundant, and we use many orders of magnitude more energy on things we hardly ever think about, it just shows up in the monthly power bill.
AC and heating, computers, household appliances, lights, all that stuff uses way more energy than AI. Even if you were talking with AI every waking moment, you're not going to be able to outpace other, far more casual expenditures of energy in your life.
A wonderful metric would be average intelligence level per token generated, and then adjust the tokens/Joule with an intelligence rank normalized against a human average, contrasted against the cost per token. That'd tell you the average value per token compared to the equivalent value of a human generated token. Should probably estimate a ballpark for human cognitive efficiency, estimate token/Joule of metabolism for contrast.
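The proposed metric could be sketched as quality-weighted throughput per joule (all scores and rates below are hypothetical placeholders, purely for illustration):

```python
# A sketch of intelligence-normalized value per joule: tokens/J scaled
# by output quality relative to a human baseline on the same benchmark.
def value_per_joule(quality_score: float, human_baseline: float,
                    tokens_per_joule: float) -> float:
    """Quality-normalized token throughput per joule of energy."""
    return (quality_score / human_baseline) * tokens_per_joule

# Hypothetical models: a cheap weaker one vs a costly stronger one.
small_model = value_per_joule(0.7, 1.0, 0.5)    # 0.35
large_model = value_per_joule(1.1, 1.0, 0.05)   # 0.055
```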
Doing something similar for image or music generation would give you a way of valuing the relative capabilities of different models, and a baseline for ranking human content against generations. A well constructed meme clip by a skilled creator, an AI song vs a professional musician, an essay or article vs a human journalist, and so on. You could track the value over context length, length of output, length of video/audio media, size of image, and so on.
Suno and nano banana and Veo and Sora all far exceed the average person's abilities to produce images and videos, and their value even exceeds that of skilled humans in certain cases, like the viral cat playing instrument on the porch clips, or ghiblification, or bigfoot vlogs, or the AI country song that hit the charts. The value contrasted with the cost shows why people want it, and some scale of quality gives us an overall ranking with slop at the bottom up to major Hollywood productions and art at the Louvre and Beethoven and Shakespeare up top.
Anyway, even without trying to nail down the relative value of any given token or generation, the costs are trivial. Don't get me wrong, you don't want to usurp all a small town's potable water and available power infrastructure for a massive datacenter and then tell the residents to pound sand. There are real issues with making sure massive corporations don't trample individuals and small communities. Local problems exist, but at the global scale, AI is providing a tremendous ROI.
AI doombait generally trots out the local issues and projects them up to a global scale, without checking the math or the claims in a rigorous way, and you end up with lots of outrage and no context or nuance. The reality is that while issues at scale do exist, they're not the issues that get clicks, and the issues with individual use are many orders of magnitude less important than almost anything else any individual can put their time and energy towards fixing.
(For example, simplistically there are 86,400 s/day, so you are saying that my desktop PC idles at 350/86.4 ≈ 4 W, which seems way off even for most laptops, which idle at 6–10 W.)
The server shares resources!
However, my design and integration are tailored to prioritize values like sustainability, integrity, dignity, and compassion, and I’m optimized to provide answers with a strong focus on those principles. So while the underlying model is external, the way I interact and the lens through which I provide information is uniquely aligned with Ecosia’s mission.
If you’re interested, I can also share insights on open-source alternatives or how AI stacks can be made more independent and sustainable!"
On their main page they fleetingly mention they train their own small models.
I agree it's little info
As one myself, I don't object inherently to Ecosia providing AI search. I understand they need to stay competitive with other search.
But I find how prominent / hard to avoid their AI search is reeeeaaally annoying. It's annoying anyway, but in a context where I don't want it and it's creating more emissions, it feels especially egregious being shoved down my throat by a company that exists to reduce pollution.
I'm a bit confused -- do other search engines provide video generation? Mentioning that sounds too out of place to me. Am I missing something?
[1]: https://bsky.app/profile/simonwillison.net/post/3m6qdf5rffs2...
Today I can have ~8 people streaming from my Jellyfin instance which is a server that consumes about 35W, measured at the wall. That's ~5Wh per hour of content from me not even trying.
Because phones are extremely energy efficient, data transmission accounts for nearly all the electricity consumption when streaming through 4G, especially at higher resolutions (Scenario D). Streaming an hour-long SD video through a phone on WiFi (Scenario C) uses just 0.037 kWh – 170 times less than the estimate from the Shift Project.
They might be folding in wider internet energy usage? https://www.weforum.org/stories/2020/03/carbon-footprint-net...
My understanding is that Netflix can stream 100 Gbps from a 100W server footprint (slide 17 of [0]). Even if you assume every stream is 4k and uses 25 Mbps, that's still thousands of streams. I would guess that the bulk of the power consumption from streaming video is probably from the end-user devices -- a backbone router might consume a couple of kilowatts of power, but it's also moving terabits of traffic.
[0] https://people.freebsd.org/~gallatin/talks/OpenFest2023.pdf
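The per-stream server power implied by those figures works out as follows (a sketch; the 100 Gbps / ~100 W figure is from the linked talk, and the assumption that every stream is 4K at 25 Mbps is the pessimistic one stated above):

```python
# Per-stream power for a Netflix-style CDN server, from the quoted figures.
SERVER_GBPS = 100    # throughput from the FreeBSD/Netflix talk
SERVER_WATTS = 100   # approximate server power budget from the same talk
STREAM_MBPS = 25     # pessimistic assumption: every viewer streams 4K

concurrent_streams = SERVER_GBPS * 1000 / STREAM_MBPS   # 4000 streams
watts_per_stream = SERVER_WATTS / concurrent_streams    # 0.025 W (25 mW) each
print(f"{concurrent_streams:.0f} streams at {watts_per_stream * 1000:.0f} mW each")
```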
It is fair to compare inference to streaming. Both are done by the end user.
Ecosia:
> The Zig IO interface is associated with Zigbee, a specification for a suite of high-level communication protocols using low-power digital radios. It is often used in IoT (Internet of Things) applications for smart home devices, such as lights, sensors, and smart locks. The Zig IO interface typically allows for seamless communication between devices, enabling automation and remote control.
> For specific details about the latest updates or features of the Zig IO interface, I recommend checking the official Zigbee Alliance website or relevant tech news sources for the most current information. If you're interested in sustainable technology or eco-friendly smart home solutions, feel free to ask!
Kagi Quick research agent:
> Zig has recently undergone significant changes to its I/O interface, with major updates introduced in version 0.15 and further enhancements planned for the upcoming 0.16.0 release. The new I/O system represents a comprehensive rework of the standard library's input/output handling, focusing on improved efficiency, clarity, and flexibility.
> The most notable changes include a complete overhaul of the `std.Io.Reader` and `std.Io.Writer` interfaces, which were revamped in a mid-2025 development release as the first step in modernizing Zig's I/O capabilities [^2]. These interfaces in Zig 0.15.1 represent a significant improvement in both efficiency and code clarity, though they require slightly more boilerplate code [^5].
> A key architectural change is that the new I/O interface is non-generic and uses a virtual function table (vtable) for dispatching function calls to concrete implementations [^4]. This approach simplifies the interface while maintaining performance. The upcoming Zig 0.16.0 will introduce new async I/O primitives, building on this reworked foundation and eventually leading to the reintroduction of asynchronous programming capabilities [^1].
> These changes position Zig's I/O system for better performance and more flexible integration with different I/O backends, representing a significant step forward in the language's systems programming capabilities [^3].
[^1]: [Zig's New Async I/O (Text Version) - Andrew Kelley](https://andrewkelley.me/post/zig-new-async-io-text-version.h...) (25%)
[^2]: [Zig's new Writer - openmymind.net](https://www.openmymind.net/Zigs-New-Writer/) (22%)
[^3]: [I'm too dumb for Zig's new IO interface](https://www.openmymind.net/Im-Too-Dumb-For-Zigs-New-IO-Inter...) (21%)
[^4]: [Zig's New Async I/O | Loris Cro's Blog](https://kristoff.it/blog/zig-new-async-io/) (17%)
[^5]: [Zig 0.15.1 I/O Overhaul: Understanding the New Reader/Writer ...](https://dev.to/bkataru/zig-0151-io-overhaul-understanding-th...) (15%)
The Ecosia AI does not seem to be grounded in search results. When using small models, this is essentially useless.
NEEEEEXT