https://extremelearning.com.au/unreasonable-effectiveness-of...
(There's no TAA in my use case, so there's no advantage for interleaved gradient noise there.)
EDIT: Actually, I remember trying R2 sequences for dither. I didn't think it looked much better than interleaved gradient noise, but my bigger problem was figuring out how to add a temporal component. I tried generalizing it to 3 dimensions, but the result wasn't great. I also tried shifting it around, but I thought animated interleaved gradient noise still looked better. This was my shadertoy: https://www.shadertoy.com/view/33cXzM
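For reference, here's roughly what the two patterns being compared look like in code (a minimal sketch: the IGN constants are Jimenez's published ones, the R2 constants come from the plastic number in the linked article, and the golden-ratio temporal shift is just one illustrative way to animate it, not necessarily what my shadertoy does):

    #include <math.h>

    /* Fractional part, like GLSL's fract(). */
    static double frac(double x) { return x - floor(x); }

    /* Interleaved gradient noise (Jimenez): per-pixel value in [0,1). */
    double ign(double x, double y)
    {
        return frac(52.9829189 * frac(0.06711056 * x + 0.00583715 * y));
    }

    /* 2D R2 (plastic-constant) low-discrepancy value for pixel (x, y). */
    double r2(double x, double y)
    {
        const double g = 1.32471795724474602596;  /* plastic number */
        return frac(x / g + y / (g * g));
    }

    /* One illustrative way to add a temporal component: shift the whole
     * pattern each frame by a golden-ratio step (an assumption for
     * illustration, not necessarily the approach tried above). */
    double r2_animated(double x, double y, int frame)
    {
        return frac(r2(x, y) + frame * 0.61803398875);
    }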
Looks pretty good! It looks a bit like a dither, but with fewer artifacts. Definitely a "sharper" look than blue noise, though in places like the transitions between the text boxes you can see a few more artifacts (it almost looks like the boxes have a staggered edge).
Thanks for bringing this to my attention!
Absolutely - there's a reason why traditional litho printing uses a clustered dot screen (dots at a constant pitch with varying size).
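To make that concrete (a toy sketch, not anything from the article): a clustered-dot screen is just an ordered dither whose threshold matrix grows each dot outward from the centre of its cell, so tone is reproduced by dot size at a constant pitch rather than by scattered isolated pixels:

    #include <stdint.h>

    /* 4x4 clustered-dot threshold matrix: the lowest thresholds sit in the
     * middle of the cell, so the "dot" grows outward as the input level
     * rises (a Bayer matrix would disperse them instead). */
    static const uint8_t clustered4[4][4] = {
        { 12,  5,  6, 13 },
        {  4,  0,  1,  7 },
        { 11,  3,  2,  8 },
        { 15, 10,  9, 14 },
    };

    /* Threshold an 8-bit grayscale image to 1 bit with the screen above. */
    void clustered_dot(const uint8_t *in, uint8_t *out, int w, int h)
    {
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                int t = clustered4[y & 3][x & 3] * 16 + 8;  /* 0..15 -> 8..248 */
                out[y * w + x] = (in[y * w + x] > t) ? 255 : 0;
            }
    }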
I've spent some time tinkering with FPGAs, and I've been interested in the parallels between two-dimensional halftoning of graphics and the various approaches to doing audio output with a 1-bit IO pin: pulse width modulation (largely analogous to the traditional printer's dot screen) seems to cope better with imperfections in filters and asymmetries in output drivers than pulse density modulation (analogous to error diffusion dithers).
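A toy illustration of that analogy (not FPGA code, just the two bitstreams side by side): PWM bunches all the high samples into one pulse per fixed period, like one printer dot per screen cell, while PDM via a first-order accumulator carries the error forward so the ones get spread out, much like error diffusion does:

    #include <stdio.h>

    /* PWM: one pulse per fixed period, high for `level` of `period` slots. */
    void pwm_period(int level, int period, int *bits)
    {
        for (int i = 0; i < period; i++)
            bits[i] = (i < level) ? 1 : 0;
    }

    /* PDM with a first-order accumulator: the quantisation error is carried
     * forward so the 1s end up spread as evenly as possible. */
    void pdm_stream(double level, int n, int *bits)  /* level in 0..1 */
    {
        double acc = 0.0;
        for (int i = 0; i < n; i++) {
            acc += level;
            bits[i] = (acc >= 1.0);
            if (bits[i]) acc -= 1.0;
        }
    }

    int main(void)
    {
        int pwm[16], pdm[16];
        pwm_period(5, 16, pwm);        /* 1111100000000000  (one wide pulse) */
        pdm_stream(5.0 / 16, 16, pdm); /* 0001001001001001  (spread out)     */
        for (int i = 0; i < 16; i++) printf("%d", pwm[i]);
        printf("  PWM\n");
        for (int i = 0; i < 16; i++) printf("%d", pdm[i]);
        printf("  PDM\n");
        return 0;
    }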
Dithering - Part 1
[1] https://blue-noise.blode.co [2] https://github.com/mblode/blue-noise-rust [3] https://github.com/mblode/blue-noise-typescript
It's OK for people to get excited about shared passions.
When this happens, you need to stop and appreciate the sheer genius of the creator.
This is one of those posts.
I can't wait until the next installment on error diffusion. I still think Atkinson dithering looks great, so much so that I made a web component to dither images.
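For anyone who hasn't seen it, the Atkinson kernel itself is simple (a rough sketch below, not necessarily how that web component does it): it diffuses only 6/8 of the quantisation error, 1/8 to each of six forward neighbours, which is what gives it those punchy, slightly blown-out highlights:

    #include <stdint.h>

    /* Atkinson error diffusion on an 8-bit grayscale buffer, in place.
     * Only 3/4 of the error is diffused; 1/8 goes to each neighbour:
     *      .  *  1  1
     *      1  1  1  .
     *      .  1  .  .
     */
    void atkinson_dither(uint8_t *img, int w, int h)
    {
        static const int dx[6] = { 1, 2, -1, 0, 1, 0 };
        static const int dy[6] = { 0, 0,  1, 1, 1, 2 };

        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                int old   = img[y * w + x];
                int quant = (old < 128) ? 0 : 255;
                int err   = old - quant;
                img[y * w + x] = (uint8_t)quant;

                for (int k = 0; k < 6; k++) {
                    int nx = x + dx[k], ny = y + dy[k];
                    if (nx < 0 || nx >= w || ny >= h) continue;
                    int v = img[ny * w + nx] + err / 8;
                    img[ny * w + nx] = (uint8_t)(v < 0 ? 0 : v > 255 ? 255 : v);
                }
            }
    }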
If the author stops by, I'd be interested to hear about the tech used.
Interestingly enough, despite the GPU being completely incapable of "true" 24-bit rendering, Sony decided to ship the PS1 with a 24-bit video DAC and the ability to display 24-bit framebuffers regardless. This ended up being used mainly for title screens and video playback, as the PS1's hardware MJPEG decoder retained support for 24-bit output.
[1]: https://psx-spx.consoledev.net/graphicsprocessingunitgpu/#24...
Look at it this way, though: this site is low-key a CV/portfolio piece. He isn't just writing about dithering; he's demonstrating that he can research, analyze, and then both code and build a site at a level most vibe coders cannot.