They are forbidden from buying foreign equipment beyond their current process node, which is already obsolete: die sizes are ~40% bigger than Samsung's, not to mention lithography, where the big three are using EUV while they are stuck with lobotomized DUV.
They can start making some decent money now, but vastly expanding capacity as-is means enormous losses if the cycle turns downward a few years later; that's how all the previous makers went bankrupt.
They can squeeze out a bit more performance if they are willing to go beyond their current node using only domestic equipment and get blacklisted by the US government.
But the cap is there: unless they can make a working EUV machine within 5 years, they are doomed to be a minor player, if the current cycle even lasts that long.
Except for NAND from YMTC, which the USA handed a near-death sentence by placing it on the Dept. of Commerce “Entity List”, so no US-associated business can do business with YMTC now.
We use ASRock Rack servers, mainly because the only options for our industry are OEMs like Supermicro and ASRock Rack. Dell and HPE are non-starters, except for our "storage" offering.
Back in 2019, HPE was a good midrange option. Then came ASRock Rack, which obliterated HPE with the X470D4U and relegated HPE to high-end enterprise servers. It also enabled Ryzen-based VPS hosts, including yours truly, BuyVM, et al.
One problem with US sanctions is that they can hurt US companies too, as in the case of cutting-edge EUV and CXMT. This is a case where China is actually a hero and not a villain.
But, hypothetically, if they had a ton of previous-gen GPUs (so, less efficient ones) and a ton of intermittent energy (from solar or wind), maybe it could be a good tradeoff to run them intermittently?
Ultimately, a workload that can profitably consume "free" watts (and therefore FLOPS) from renewable overprovisioning would be good for society, I guess.
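If you want to sanity-check that intuition, here's a toy calculation; every number in it is a made-up placeholder for illustration (relative GPU efficiency, grid price), not taken from anywhere:

    # Toy model of "old GPUs on surplus renewable power".
    # All numbers below are hypothetical placeholders (swap in real ones).

    new_gpu_units_per_kwh = 2.0   # compute units per kWh, current generation (assumed)
    old_gpu_units_per_kwh = 1.0   # previous gen, half as energy-efficient (assumed)

    grid_usd_per_kwh = 0.10       # normal grid price (assumed)
    surplus_usd_per_kwh = 0.0     # curtailed solar/wind, effectively free

    def usd_per_compute_unit(units_per_kwh: float, usd_per_kwh: float) -> float:
        """Energy cost of producing one arbitrary unit of compute."""
        return usd_per_kwh / units_per_kwh

    print(usd_per_compute_unit(new_gpu_units_per_kwh, grid_usd_per_kwh))      # 0.05
    print(usd_per_compute_unit(old_gpu_units_per_kwh, surplus_usd_per_kwh))   # 0.0
    # With free surplus power, the old GPU's efficiency penalty stops mattering
    # for energy cost; the catches are intermittency and the capital/maintenance
    # costs this toy model ignores.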
What I found is that it is cheap because the cores, and presumably the RAM, are old. Like, 2013-era Xeon E3-1275 v3 old. But that's fine! Old hardware like this uses old RAM that is less affected by the current shortage. It's good enough for my needs.
Unlike Fourplex.net, which uses modern ASRock Ryzen 9000 servers, Qeru.net used older HPE DL360 Gen9 servers.
I offered 3GB of RAM for $3-4/mo back then. But those servers weren't very fast. I ended up selling the business, and I'm happy I did.
IPv4 shortages didn’t kill it, and I don’t think this will either.
But for a new provider like us, we'd have to spend more than an established player like BuyVM or RackNerd who bought most of their servers pre-AI-boom.
It's 12 dedicated cores of a modern EPYC CPU, 32GB RAM, 1TB NVMe.
I got that offer during their Black Friday sale and pay €25/month (price before VAT), plus the offer I got has a 2TB NVMe instead of the 1TB one.
CPU launch price: 11,000 USD. RAM will likely be another 10,000 USD.
20,000 / 8 customers / 40 USD/mo ≈ 62 months just to recoup the CPU and RAM, let alone other components.
Weird; whenever I do the napkin math on renting any hardware, I find that I could buy it myself for 1-2 years' worth of rent. Sometimes less.
Do they not intend to recoup the costs of HW? :)
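For anyone who wants to redo that napkin math with their own numbers, a minimal sketch; the ~20,000 USD hardware figure comes from the comment above, while the customer count and monthly price are assumptions:

    # Break-even estimate for renting out slices of one dedicated-core server.
    hardware_usd = 20_000   # CPU launch price (~11k) + RAM (~10k), rounded as in the comment above
    customers = 8           # assumed number of dedicated-core customers on the box
    price_per_month = 40.0  # assumed USD per customer per month

    monthly_revenue = customers * price_per_month        # 320 USD/month
    months_to_recoup = hardware_usd / monthly_revenue
    print(f"{months_to_recoup:.1f} months to recoup CPU + RAM alone")   # 62.5
    # That's over five years before the chassis, NVMe, power, bandwidth,
    # and anyone's time are even counted.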
Having not used AWS for years, I logged in to check it out, navigated through the Kafkaesque maze of their services until I found what I was looking for:
A lone S3 storage bucket, with one file, "Squirrel.jpg". A 200kB picture of a squirrel that I uploaded 8 years ago and can't remember why.
I wonder what it cost AWS to keep track of that and run your credit card. There's no way they made money off you; that 12 cents/year cost them *at least* 12 cents to collect every year.
One of the hard rules we learned pre-pandemic was that services attached to usage-based billing should really exit on error. It's a lesson I'm keeping in mind working with agents and routing (and the main reason I'm local-first).
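For what it's worth, a minimal sketch of what "exit on error" looks like in practice; the metered operation here is a hypothetical stand-in, not any particular billing API:

    import sys

    def do_billable_work() -> None:
        """Hypothetical stand-in for whatever metered, usage-billed call the service makes."""
        raise RuntimeError("upstream error")  # simulate a persistent failure

    MAX_ATTEMPTS = 3

    def main() -> None:
        for attempt in range(1, MAX_ATTEMPTS + 1):
            try:
                do_billable_work()
                return
            except RuntimeError as err:
                print(f"attempt {attempt} failed: {err}", file=sys.stderr)
        # Crash loudly instead of retrying all night and turning a bug
        # into a surprise invoice on a usage-billed service.
        sys.exit(1)

    if __name__ == "__main__":
        main()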
If I need to host something small, I don't want to mess around with the many permissions and quirks required to deal with AWS. It is often much easier to just set up the server on a standalone service.
When I worked at Microsoft, I seldom used Azure for personal use due to it being expensive and complicated.
Whereas I have plenty of Fourplex.net servers because even on half the salary, it's affordable enough for 16 Tor exit relays and two personal web/email/Mastodon servers.
There's also the human touch in terms of who you talk to: a lot of the smallest VPS hosts are 1-2 people, both technical, so customer support = sysadmin = contact for everything.
Does it have a billion nines of reliability? No, but I don't care; it has literally never not worked when I've used it.
Customer service so far has been human, but that will vary greatly by provider.
I also use a different provider for work-related hosting, and the reduced latency of being within 20 ms of the DC has probably been the single biggest (perceived) perf improvement my users have ever seen, especially on the legacy WebForms platform we recently decommissioned. (We're a bit too geographically far from the datacenters of most large providers.)
Never had problems with downtime, and I paid, like, 40 bucks a year over 3 years. I think I had to restart the thing once because of something dumb I did on my end.
If you're a government agency or a company, you don't care about saving $14/month; you want a secure provider. And these hosts are not secure: you're basically just on your own.
That is a nice way to have a static IP on the internet and enough resources to do small things, like host a nameserver and/or OpenVPN/WireGuard.
I may have had 4 hours of downtime in one year, always announced days in advance.
AWS does offer Lightsail, which has similar pricing.
Outbound data pricing is a potentially huge saving.
AWS is as much as $90/TB outbound with 1GB free. Hetzner is $1.20/TB (in EU and US) with 1TB/20TB (US/EU) free.
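Plugging those quoted prices into a quick comparison (free-allowance handling simplified; real bills have tiers and more nuance):

    def egress_cost_usd(tb_out: float, usd_per_tb: float, free_tb: float) -> float:
        """Outbound-traffic cost with a flat per-TB price and a free allowance."""
        return max(tb_out - free_tb, 0.0) * usd_per_tb

    tb = 10  # example month: 10 TB outbound
    print("AWS:    ", egress_cost_usd(tb, 90.00, 0.001))  # ~1 GB free
    print("Hetzner:", egress_cost_usd(tb, 1.20, 20.0))    # EU location, 20 TB included
    # -> roughly $900 vs $0 for the same 10 TB of traffic.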
(Good) Smaller places are more likely to have actual technical staff you can talk to.
I use them to run WireGuard to evade geoblocks when I'm travelling, and for a few redundant monitoring scripts that alert me to reachability issues with the more critical stuff I care about; they also serve as contingency access channels to my home (and Home Assistant) if my primary channels are down.
I get no support, no updates, it's all on me, which is fine; it lets me stay current and keep hands-on practice with skills I need for my job anyway (and which are my passion anyway). I don't even get an entire IPv4 address; I get... 1/3000th of one? (21 ports; the rest are forwarded to other customers.) Suits me fine.
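As an illustration of the monitoring-script part, a minimal reachability check of the kind that can run from cron on a box like that; the target URL and the alert command are placeholders, not anything specific from the comment:

    # Minimal reachability monitor meant to run from cron on a cheap VPS.
    import subprocess
    import urllib.request

    TARGETS = ["https://example.com/health"]  # placeholder for the "more critical stuff"

    def reachable(url: str, timeout: float = 10.0) -> bool:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status < 500
        except Exception:
            return False

    for url in TARGETS:
        if not reachable(url):
            # Replace with whatever actually reaches you: mail, ntfy, XMPP, ...
            subprocess.run(["mail", "-s", f"{url} unreachable", "me@example.com"],
                           input=b"reachability check failed\n", check=False)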
VPS services are usually really, really simple and fairly cheap.
I'd say VPS pricing is actually where we see computing prices going down, rather than at the big 3.
AWS used to keep optimizing and pass the savings down to customers back in the day; now they don't do it anymore.
I replaced it with a home server, and it costs way more just in power, hahaha.
Not including the faster SSD and the included traffic.
But if all you really do with cloud stuff is "ssh into a server I have" (which covers a ton!), then you'll find much cheaper and more performant options elsewhere.
If you're using AWS/GCP/Azure to just host a couple of VMs for a small group you're massively overpaying.
Personally, the only thing I know of that is a true deal vs. the competition is cold storage of data. Using the S3 Glacier tiers for long-term data that is saved solely for emergencies is really cheap, something like $1/100GB a month or less.
AWS is usually not the cheapest EVER when it comes to offerings like EC2. If you aren't doing cloud-native or serverless at AWS, you're probably spending too much.
I view a large percentage of "cloud" usage like Tesla's stock price: it's completely detached from reality by people who have drunk the Kool-Aid and can't get out.
How can you trust Gary from GaryHosting not to just steal all your data? How can you trust him to have redundant networks? You just can’t.
A. rendezvous services so clients can connect to one another,
B. storage/retrieval of encrypted data where the host does not have the key to decrypt (see the sketch after this list),
C. transport of encrypted data which cannot be known by the host due to B above.
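A sketch of point B: encrypt on the client so the host only ever stores ciphertext it cannot read. This uses the third-party cryptography package (pip install cryptography) purely as an example; it's not anything the comment above prescribes.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # stays with the client, never uploaded
    f = Fernet(key)

    ciphertext = f.encrypt(b"backup contents the host must not be able to read")
    # ...upload ciphertext to Gary's box, S3, wherever...

    plaintext = f.decrypt(ciphertext)  # only possible with the client-held key
    assert plaintext == b"backup contents the host must not be able to read"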
> How can you trust him to have redundant networks
You can't, so abstract that away at the application layer. Make it not dependent on a single host or network.
Update: Fourplex (this host) uses a 1GB minimum.
Can't get a Linux box to idle (or even install) under 512M these days.
Can't find a web developer worth a shit who doesn't think he needs a Python backend application server to print "Hello, world" when you could do it with a static page served by something like OpenBSD with two-digit-megabyte RAM requirements.
It's not the RAM that's changed; it's everyone around the RAM.
A coddled generation who were taught that AWS is the Internet and live in abstractions certainly hasn't helped.
The install can be tricky indeed, but once you have an installed system, it's easier.
USER         RES▽  Command
root        70436  systemd-journald
root        14268  amazon-ssm-agent
root        13508  systemd
root        12160  systemd --user
root        10240  sshd: root@pts/0
root         9088  sshd: root [priv]
root         8944  systemd-udevd
root         8704  systemd-logind
root         8320  nix-daemon --daemon
systemd-ti   8192  systemd-timesyncd
systemd-oo   7808  systemd-oomd
root         6492  -zsh
nscd         6272  nsncd
messagebus   5888  dbus-daemon --system --address=systemd: --nofork --nopidfile -
root         5888  htop
sshd         4904  sshd: root [net]
root         4736  sshd: sshd -D -f /etc/ssh/sshd_config [listener] 1 of 10-100
root         2960  (sd-pam)
root         2816  agetty --login-program login ttyS0 --keep-baud
root         2192  dhcpcd: [privileged proxy]
dhcpcd       1680  dhcpcd: [manager] [ip4] [ip6]
dhcpcd       1468  dhcpcd: [BPF ARP] ens5 172.31.8.86
dhcpcd       1168  dhcpcd: [control proxy]
dhcpcd       1040  dhcpcd: [network proxy]
Windows NT was routinely run with 32 MB of RAM TOTAL and the event log is basically unchanged 30 years later.
Edit: Haha, within a handful of seconds I got a downvote. :-D