This is not how email works, though.
I wonder if it's a generation-gap thing. Young folks these days have probably only used Gmail, Proton, or one of the other big email services that abstract away all the technical details of sending and receiving email. Without some visibility into how emails are composed and sent, they may never have learned that email headers are not some definitive source of truth; they are entirely user-defined and can be set to anything.
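To make that concrete, here's a rough Python sketch (untested; the relay host and credentials are placeholders). The From: header is just a string the client writes; nothing in the protocol itself verifies it, and only the receiving side's SPF/DKIM/DMARC checks might flag it.

    # The headers below are whatever I type; the From: line is pure fiction.
    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "ceo@some-big-company.example"   # arbitrary, unverified
    msg["To"] = "victim@example.org"
    msg["Subject"] = "Headers are whatever I say they are"
    msg.set_content("Nothing in SMTP itself checks the From: header above.")

    # "mail.example.org" stands in for any relay that will accept you.
    with smtplib.SMTP("mail.example.org", 587) as smtp:
        smtp.starttls()
        smtp.login("me", "password")  # placeholder credentials
        smtp.send_message(msg)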
If the sending server doesn't do DKIM, it's fundamentally broken; move your email somewhere else. If the sending server lets any user send with an arbitrary local part, that's either intended and desired, or also fundamentally broken. If there are other senders registered on the domain with valid DKIM and you can't trust them, you have bigger problems.
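If you want a quick sanity check on whether your server actually signs outgoing mail, something like this works (a rough sketch assuming the third-party dkimpy package; "message.eml" is a placeholder for any raw message you've saved):

    # Verifies the DKIM-Signature header of a saved raw message against
    # the public key published in the sending domain's DNS.
    import dkim  # pip install dkimpy

    with open("message.eml", "rb") as f:
        raw = f.read()

    if dkim.verify(raw):
        print("DKIM signature verifies against the domain's DNS key")
    else:
        print("No valid DKIM signature - sending server may be broken")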
No, it just won't get very good deliverability, because everything it talks to is now fundamentally broken.
DKIM shouldn't exist. It was a bad idea from day one.
It adds very little real anti-spam value over SPF, but the worst part is exactly the model you describe. DKIM was a largely undiscussed, back-door change to the attributability and repudiability of email, and at the same time the two-tiered model it created is far, far less effective and usable than just signing messages end-to-end at the MUA.
If you as a user can't trust your email server, you've already lost, regardless of whether something is authorized by an outbound email or by a click on an inbound link. If your mail server is evil or hacked, it can steal your OTP token or activation link just as easily as it can send an email in your name.
Yes, end-to-end authentication is definitely better, but that isn't what people are discussing here. With enforced DKIM, "send me an email" has a nearly identical security profile to "I've emailed you a link, click on it". Both are inferior to end-to-end crypto.
Funny enough, these days it indicates the article was written by a human. I had a dev join my team who made a few typos, and it gave me a chuckle, as it's a whole class of mistake I hadn't seen in a while.
I can't open an issue (to ask the service to remove my email) without logging in to an account I don't have control over.
I don't want to use "forgot my password", because I don't want my IP address to be associated with a login to the account, because in some cases (particularly Shopify), the services were obviously used for fraud.
> I don't want to use "forgot my password", because I don't want my IP address to be associated with a login to the account
As a fellow victim of worldwide technically-illiterate namesakes, I used to do this with the Tor Browser until I got a paid VPN service, which is what I use now. Out of sheer paranoia, I always use a secondary browser profile with an extension that fakes my user agent.
I once wrote to the FTC for guidance as to whether this includes requiring unsubscribers to solve a CAPTCHA, disable ad blockers, or enable JavaScript, but did not get a response. I believe the law is plain with regard to this, but a lot of companies seem willing to risk it.
See: https://www.ecfr.gov/current/title-16/chapter-I/subchapter-C...
edit: I feel their pain - I've spent the past week fighting AI scrapers on multiple sites hitting routes that somehow bypass Cloudflare's cache. Thousands of requests per minute, often to URLs that have never even existed. Baidu and OpenAI, I'm looking at you.
Plus they keep hitting the authentication endpoints that return 403, over and over.
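A crude way to surface those clients, sketched in Python (the log format, path prefix, and threshold are all assumptions about a common combined-format access log; adjust for your setup):

    # Count 403s per IP on auth endpoints in an access log, then print
    # repeat offenders as candidates for blocking.
    from collections import Counter

    hits = Counter()
    with open("access.log") as log:
        for line in log:
            parts = line.split()
            if len(parts) < 9:
                continue
            ip, path, status = parts[0], parts[6], parts[8]
            if status == "403" and path.startswith("/auth"):
                hits[ip] += 1

    for ip, count in hits.most_common():
        if count >= 100:  # arbitrary threshold
            print(ip)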
Oh, you're so deterministic.
I also don't have a robots.txt, so Google doesn't index.
Got some scanners that left a message about how to index or de-index, but it was like 3 lines total in my log (that's not abusive).
But yeah, blocking the whole of Asia stopped soooo much of the net-shit.
That doesn't sound right. I don't have a robots.txt either, but Google indexes everything for me.
I think this is a recent change.
I import them into iptables and wholesale block them all.
I don't deal with eastdakota's pile of shit.
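For anyone wanting to do the same: one netblock per iptables rule gets slow, so a common approach is to load the list into an ipset and match the whole set with a single rule. A rough sketch (file and set names are placeholders):

    # Turn a downloaded list of CIDR blocks (one per line) into an
    # `ipset restore` file; one iptables rule then matches the whole set.
    with open("blocklist.txt") as src, open("blocklist.restore", "w") as dst:
        dst.write("create blockset hash:net\n")
        for line in src:
            cidr = line.strip()
            if cidr and not cidr.startswith("#"):
                dst.write(f"add blockset {cidr}\n")

    # Then, outside Python:
    #   ipset restore < blocklist.restore
    #   iptables -I INPUT -m set --match-set blockset src -j DROP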
Look at how many sites still get "HN hugged" (formerly known as "slashdotted").
At this point, I have to assume that most software is too inefficient to be exposed to the Internet, and that becomes obvious with any real load.
If you get a clear notice that a user wants you to delete something, you act on it. It doesn't matter if it was sent by carrier pigeon. Can't automate it? Tough doo-doo. Interferes with your business model? Change your model or close.
trollbridge's point about scrapers using residential IPs and targeting authentication endpoints matches what we've seen. The scrapers have gotten sophisticated. They're not just crawling, they're probing.
The economics are broken. Running a small site used to cost almost nothing. Now you need to either pay for CDN/protection or spend time playing whack-a-mole with bad actors.
ronsor hosting a front-page HN project on 32 MB of RAM is impressive, and it also highlights how much bloat we've normalized. The scraper problem is real, but so is the software efficiency problem.