unrelenting.technology

Shower thought: Google App Engine was the original AWS Lambda of the late 2000s.

Okay, it literally was not; it was just “platform as a service”, morally equivalent to Heroku, right? Well, sure, it didn’t have all the non-web event handling stuff, but it was in many ways closer to Lambda than to Heroku. Namely, it did not run just any app: it would “insert itself into your code” – e.g. Python WSGI apps had to be adapted with a special module – which is exactly how Lambda works!

And around 2010 GAE did feel like the place for letting someone else run your random hobby project for free “forever”. Well, hm, any PaaS or FaaS with a free tier should be like that. Soooo I just went to check if this ancient project is up. Nope. I guess with the absorption of GAE into Google Cloud, things changed so much that not ever signing into GCloud (and so not accepting new terms etc.) leads to the app being shut down. Well, that kind of thing is very much expected from Google by now.

This website is now fully owned by Bezos, i.e. very very Serverless™! Built using a mildly forked Zola in GitHub Actions, uploaded to S3, content-delivered via CloudFront. And even DNS is now on Route 53, since otherwise properly having CloudFront on an apex (bare) domain would be difficult. Webmentions are outsourced to Webmention.io (plus a tiny endpoint converting their webhook into a GitHub rebuild command, hosted on Glitch), and Micropub is gone, at least for now. (When I have nothing better to do I might just make an app running in Lambda that would edit the git repo in response to Micropub requests, send outgoing webmentions, be a custom auth endpoint, eventually also handle incoming webmentions, and so on.)
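That tiny Glitch endpoint boils down to forwarding any Webmention.io webhook call into a GitHub rebuild trigger. A hypothetical sketch (the repo slug and event type are made up, not the real glue code):

```javascript
// Build the GitHub "repository_dispatch" request that kicks off an Actions
// rebuild. Webmention.io's webhook payload can be ignored entirely — any
// incoming mention just means "rebuild the site".
function buildRebuildDispatch(repo, token) {
  return {
    url: `https://api.github.com/repos/${repo}/dispatches`,
    options: {
      method: 'POST',
      headers: {
        Authorization: `token ${token}`,
        Accept: 'application/vnd.github+json',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ event_type: 'webmention-received' }),
    },
  };
}

// In the actual webhook route you would do something like:
//   const { url, options } = buildRebuildDispatch('user/site', process.env.GITHUB_TOKEN);
//   await fetch(url, options); // then respond 202 to Webmention.io
```

The corresponding GitHub Actions workflow would need `on: repository_dispatch` with a matching event type to pick this up.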

I’m glad I only did this now: CloudFront Functions did not exist until a few months ago. With this functionality, it is quite a capable CDN. Still clunky and weird in some aspects though (e.g. the “Default Root Object” applying even after a Function rewrites /something/ to / was quite surprising).
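For reference, the kind of rewrite mentioned above looks roughly like this as a viewer-request CloudFront Function (a sketch, not the exact function this site uses):

```javascript
// Viewer-request CloudFront Function: map directory-style URLs to the
// index object in S3, since "Default Root Object" only covers the root.
function handler(event) {
  var request = event.request;
  var uri = request.uri;
  if (uri.endsWith('/')) {
    request.uri = uri + 'index.html';   // /posts/    -> /posts/index.html
  } else if (!uri.includes('.')) {
    request.uri = uri + '/index.html';  // /posts     -> /posts/index.html
  }                                     // /style.css -> unchanged
  return request;
}
```

CloudFront Functions run a restricted JavaScript runtime, but these string methods are among the supported ones.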

Burstable Graviton2 instances are now a thing. Cool! Changed the instance type for this website from a1.medium to t4g.micro so that Jeff Bezos gets less of my money :P (Basically no money until the end of this year, even — there’s a free trial for t4g.micro for all AWS accounts!)

Wow, about a month ago Spot (ex-Spotinst), the service that can auto-restore an EC2 spot instance after it gets killed, fixed their arm64 support! (Used to be that it would always set the AMI’s “architecture” metadata to amd64, haha.)

And of course their support didn’t notify me that it was fixed, the service didn’t auto-notify me that an instance was finally restored successfully after months of trying and failing, and AWS didn’t notify me either (it probably can, but I haven’t set anything up?), so I wasted a few bucks running a spare, inaccessible clone server of my website. Oh well, at least now I can use a spot instance again without worrying about manual restores.

UPD: hmm, it still tried i386 on another restore! dang it.

AWS CloudFormation looks rather disappointing:

  • the import functionality is a joke?? you have to make the template yourself, for some reason there’s no “make template from this real thing” button??
  • even that import thing cannot import an ACM certificate at all, literally says that’s unsupported.
  • the GUI designer thing does not know anything about CloudFront!

What.

New image upload/optimization for sweetroll2

Website update: imgroll image optimization has been deployed. Now I can finally properly share pics! :D

Meme: I CAN HAS IMAGE PROCESSING?

How it works: the micropub media endpoint in sweetroll2 uploads to S3 (with a callback URL in the metadata) and returns an S3 URL. The imgroll Lambda notices the upload, extracts metadata, does the processing, uploads resized versions to S3, and POSTs a rich object with metadata and links to the sizes back to the callback. From there, there are three ways of getting the object into the post in place of the URL:

  • if everything goes right, it’s processed quickly: the callback is forwarded to the post editor via Server-Sent Events and the URL gets replaced with the object right in the browser;
  • if the post is saved with the S3 URL before the processing is done: the callback handler modifies all posts with that URL in any field;
  • same but after the processing is done: the micropub endpoint replaces all URLs for which these callbacks have happened.
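All three paths end in the same operation: walking a post and swapping the bare S3 URL for the rich object. A sketch of that step (sweetroll2 itself is Elixir; this JavaScript version and its property names are illustrative only):

```javascript
// Recursively replace every occurrence of a media URL in any field of a
// post with the rich media object that imgroll's callback delivered.
function replaceMediaUrl(value, url, mediaObject) {
  if (value === url) return mediaObject;
  if (Array.isArray(value)) {
    return value.map(v => replaceMediaUrl(v, url, mediaObject));
  }
  if (value && typeof value === 'object') {
    const out = {};
    for (const [key, v] of Object.entries(value)) {
      out[key] = replaceMediaUrl(v, url, mediaObject);
    }
    return out;
  }
  return value; // strings that aren't the URL, numbers, etc. pass through
}
```

Because it checks *any* field, it covers the case where the S3 URL was saved into a post before processing finished.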

Also, the images are served from CloudFront now, on a CNAME subdomain (with a certificate issued by AWS ACM). Which has required… switching DNS providers: the 1984 FreeDNS was being buggy and wouldn’t apply my changes. Now I’m on desec.io, which is currently API-only and has no web UI, but that’s actually cool, because I now have all the DNS records in a script that deploys them using curl.
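That curl deploy script essentially boils down to requests like this against deSEC’s REST API (the domain and record below are made-up examples; check their docs for the exact endpoint semantics):

```javascript
// Build a request that creates an RRset via deSEC's API; the equivalent of
// `curl -X POST -H "Authorization: Token ..." https://desec.io/api/v1/...`.
function desecRrsetRequest(token, domain, rrset) {
  return {
    url: `https://desec.io/api/v1/domains/${domain}/rrsets/`,
    options: {
      method: 'POST',
      headers: {
        Authorization: `Token ${token}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(rrset),
    },
  };
}

// e.g. desecRrsetRequest(token, 'example.com',
//        { subname: 'www', type: 'CNAME', ttl: 3600,
//          records: ['d123.cloudfront.net.'] })
```

Keeping all records as data like this makes the whole zone reproducible from one script, which is exactly the appeal over a web UI.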

Looks like NetBSD is already working on the EC2 AArch64 instances! My attempt at running FreeBSD there failed: for mysterious reasons, the system reboots just after the last loader.efi message…

Trying to do anything system-level on EC2 is incredibly frustrating. There is STILL no read-write access to the serial console, because Bezos doesn’t believe in debugging or something >_<

Also, about the ARM instances themselves. I am happy to see a big player enter the ARM space. And with custom (Annapurna) chips, even. (Though they’d have much better performance if they just bought some Ampere eMAGs or Cavium ThunderX2s.)

But what’s up with that price? Did anyone at AWS ever look at Scaleway’s pricing page?! On-demand pricing for a single-core EC2 ARM instance is almost 20 bucks per month, while Scaleway offers four ThunderX cores for three euros per month!! Sure, sure, Scaleway is not a big player, doesn’t have a huge ecosystem, and is getting close to being out of stock on these ARM instances… but still, 1/4 the cores for 5x the price.

(Spot pricing is better of course.)

So AWS Lambda has a 6 MB limit on request (and response) size. Binary files have to be Base64 encoded (LOL), which shrinks the effective limit even further, to about 4.5 MB! So my micropub media endpoint chokes on full DSLR resolution photos. Yeah, the "right way" is to have the API Gateway endpoint upload to S3, have the upload event trigger the Lambda processing which would download from S3, and use a separate Lambda for authentication on that endpoint… but I need the processed URLs in the response body. I need everything to happen in one request! How did AWS engineers not see that use case coming?!
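The arithmetic behind that complaint: Base64 turns every 3 bytes into 4 ASCII characters, so only about three quarters of the 6 MB cap is usable for actual binary data. A quick check in Node:

```javascript
// Base64 overhead vs. the Lambda payload cap.
const LIMIT = 6 * 1024 * 1024;               // the 6 MB request/response cap
const maxBinary = Math.floor(LIMIT * 3 / 4); // largest raw body that fits: ~4.5 MB

// Verify the 4/3 expansion empirically on a fake 3 MiB "photo":
const raw = Buffer.alloc(3 * 1024 * 1024);
const encoded = raw.toString('base64');
console.log(encoded.length / raw.length); // 4/3 ≈ 1.333
```

A full-resolution DSLR JPEG easily exceeds 4.5 MB, hence the choking.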

AWS Route 53 looks like a nice DNS hosting service because of its API (automated ACME DNS verification is pretty cool) and the automatic config when adding domains to other AWS things, but they're really slow with new DNS record types. Still no CAA! And no SSHFP! What in the hell, the most powerful Cloud™ company in the world can't add a simple record type?

Quake Champions is awesome (as in the gameplay — performance is meh).

Amazon Web Services is not awesome: it wasn't really obvious that promotional credits aren't spent on reserved EC2 instances :( Also HardenedBSD was behaving weird on it (secadm kernel panic, Python libssl segfaults).

But with regular FreeBSD I've set up a Matrix homeserver (Synapse) on EC2! I am now @greg:unrelenting.technology :) It's working as my new IRC bouncer, so with that I've been able to say goodbye to the previous VPS that served this website (which was still running my ZNC).