Firstup Recruitment

In a little over a week I’ll be starting my new role at Firstup, who use some of my favourite Web technologies to deliver tools that streamline employee communication and engagement.

I’m sure there’ll be more to say about that down the line, but for now: let’s look at my recruitment experience, because it’s probably the fastest and most-streamlined technical recruitment process I’ve ever experienced! Here’s the timeline:

Firstup Recruitment Timeline

  • Day 0 (Thursday), 21:18 – One evening, I submitted an application via jobs listing site Welcome To The Jungle. For comparison, I submitted an application for a similar role at a similar company at almost the exact same time. Let’s call them, umm… “Secondup”.
  • 21:42 – I received an automated response to say “Firstup have received your application”. So far, so normal.
  • 21:44 – I received an email from a human – only 26 minutes after my initial application – inviting me to an initial screener interview the following week and offering a selection of times (including a reminder of the relative timezone difference between the interviewer and me).
  • 21:55 – I replied to suggest meeting on Wednesday the following week1.
  • Day 6 (Wednesday), 15:30 – Half-hour screener interview, mostly an introduction, “keyword check” (can I say the right keywords about my qualifications and experience to demonstrate that, yes, I’m able to do the things they’ll need), and – because it’s 2025 and we live in the darkest timeline – a confirmation that I was a real human being and not an AI2. The TalOps person, Talia, said she’d like to progress me to an interview with the person who’d become my team lead, and arranged the interview then-and-there for Friday. She talked me through all the stages (max points to any recruiter who does this), and gave me an NDA to sign so we could “talk shop” in interviews if applicable.
Dan peeps out from behind a door, making a 'shush' sign with his finger near his lips. On the door is a printed sign that reads 'Interview happening in library. Please be quiet. Thanks!'
I only took the initial stages of my Firstup interviews in our library, moving to my regular coding desk for the tech tests, but I’ve got to say it’s a great space for a quiet conversation, away from the chaos and noise of our kids on an evening!
  • Day 8 (Friday), 18:30 – My new line manager, Kirk, is on the Pacific Coast of the US, so rather than wait until next week to meet I agreed to this early-evening interview slot. I’m out of practice at interviews and I babbled a bit, but apparently I had the right credentials because, at a continuing breakneck pace…
  • 21:32 – Talia emailed again to let me know I was through that stage, and asked to set up two live coding “tech test” interviews early the following week. I’ve been enjoying all the conversations and the vibes so far, so I try to grab the earliest available slots that I can make. This put the two tech test interviews back-to-back, to which Ruth raised her eyebrows – but to me it felt right to keep riding the energy of this high-speed recruitment process and dive right in to both!
  • Day 11 (Monday), 18:30 – Not even a West Coast interviewer this time, but because I’d snatched the earliest possible opportunity I spoke to Joshua early in the evening. Using a shared development environment, he had me doing a classic data-structures-and-algorithms style assessment: converting a JSON-based logical inference description, sort-of reminiscent of a Reverse Polish Notation tree, into something that looked more like pseudocode of the underlying boolean logic. I spotted early on that I’d want a recursive solution, considered a procedural approach, and eventually went with a functional one. It was all going well… until it wasn’t! Working at speed, I made a frustrating early mistake that left me with the wrong data “down” my tree, and I needed to do some log-based debugging (the shared environment didn’t support a proper debugger, grr!) to get back on track… but I managed to deliver something that worked within the window, and talked at length through my approach every step of the way.
  • 19:30 – The second technical interview was with Kevin, and was more about systems design from a technical perspective. I was challenged to make an object-oriented implementation of a car park with three different sizes of spaces (for motorbikes, cars, and vans); vehicles can only fit into their own size of space or larger, except vans which – in the absence of a van space – can straddle three car spaces. The specification called for a particular API that could answer questions about the numbers and types of spaces available. Now warmed-up to the quirks of the shared coding environment, I started from a test-driven development approach: it didn’t actually support TDD, but I figured I could work around that by implementing what was effectively my API’s client, hitting my non-existent classes and their non-existent methods and asserting particular responses before going and filling in those classes until they worked. I felt like I really “clicked” with Kevin as well as with the tech test, and was really pleased with what I eventually delivered.
  • Day 12 (Tuesday), 12:14 – I heard from Talia again, inviting me to a final interview with Kirk’s manager Xiaojun, the Director of Engineering. Again, I opted for the earliest mutually-convenient time – the very next day! – even though it would be unusually-late in the day.
  • Day 13 (Wednesday), 20:00 –  The final interview with Xiaojun was a less-energetic affair, but still included some fun technical grilling and, as it happens, my most-smug interview moment ever when he asked me how I’d go about implementing something… that I’d coincidentally implemented for fun a few weeks earlier! So instead of spending time thinking about an answer to the question, I was able to dive right in to my most-recent solution, for which I’d conveniently drawn diagrams that I was able to use to explain my architectural choices. I found it harder to read Xiaojun and get a feel for how the interview had gone than I had each previous stage, but I was excited to hear that they were working through a shortlist and should be ready to appoint somebody at the “end of the week, or early next week” at the latest.
Illustration of a sophisticated Least Recently Used Cache class with a doubly-linked list data structure containing three items, referenced from the class by its head and tail item. It has 'get' and 'put' methods. It also has a separate hash map indexing each cached item by its key.
This. This is how you implement an LRU cache.
  • Day 14 (Thursday), 00:09 – At what is presumably the very end of the workday in her timezone, Talia emailed me to ask if we could chat at what must be the start of her next workday. Or as I call it, lunchtime. That’s a promising sign.
  • 13:00 – The sun had come out, so I took Talia’s call in the “meeting hammock” in the garden, with a can of cold non-alcoholic beer next to me (and the dog rolling around on the grass). After exchanging pleasantries, she made the offer, which I verbally accepted then and there and (after clearing up a couple of quick queries) signed a contract a few hours later. Sorted.
  • Day 23 – You remember that I mentioned applying to another (very similar) role at the same time? This was the day that “Secondup” emailed to ask about my availability for an interview. And while 23 days is certainly a more-normal turnaround for the start of a recruitment process, I’d already found myself excited by everything I’d learned about Firstup: there are some great things they’re doing right; there are some exciting problems that I can be part of the solution to… I didn’t need another interview, so I turned down “Secondup”. Something something early bird.

Wow, that was fast!

With only eight days between the screener interview and the offer – and barely a fortnight after my initial application – this has got to be the absolute fastest I’ve ever seen a tech role recruitment process go. It felt like a rollercoaster, and I loved it.

Photograph of a looping steel rollercoaster against a setting sun. A logo of a flaming 'F' in the corner declares it Firstup: World's First Recruitment-Themed Rollercoaster. The track is annotated with the steps of the process in order, from application to screener to interview and so on.
Is it weird that I’d actually ride a recruitment-themed rollercoaster?

Footnotes

1 The earliest available slot for a screener interview, on Tuesday, clashed with my 8-year-old’s taekwondo class, which I’d promised to go along to and join in with as part of their “dads train free in June” promotion. This turned out to be a painful and exhausting experience which I thoroughly enjoyed, but more on that some other time, perhaps.

2 After realising that “are you a robot” was part of the initial checks, I briefly regretted taking the interview in our newly-constructed library because it provides exactly the kind of environment that looks like a fake background.


The Huge Grey Area in the Anthropic Ruling

This week, AI firm Anthropic (the folks behind Claude) found themselves the focus of attention of U.S. District Court for the Northern District of California.

New laws for new technologies

The tl;dr is: the court ruled that (a) piracy for the purpose of training an LLM is still piracy, so there’ll be a separate case about the fact that Anthropic did not pay for copies of all the books their model ingested, but (b) training a model on books and then selling access to that model, which can then produce output based on what it has “learned” from those books, is considered transformative work and therefore fair use.

Fragment of court ruling with a line highlighted that reads: This order grants summary judgment for Anthropic that the training use was a fair use.

Compelling arguments have been made both ways on this topic already, e.g.:

  • Some folks are very keen to point out that it’s totally permitted for humans to read, and even memorise, entire volumes, and then use what they’ve learned when they produce new work. They argue that what an LLM “does” is not materially different from an impossibly well-read human.
  • By way of counterpoint, it’s been observed that such a human would still be personally liable if the “inspired” output they subsequently created was derivative to the point of violating copyright, but we don’t yet have a strong legal model for assessing AI output in the same way. (The BBC News article about Disney & Universal vs. Midjourney is going to be very interesting!)
  • Furthermore, it might be impossible to conclusively determine that the way GenAI works is fundamentally comparable to human thought. And that’s the thing that got me thinking about this particular thought experiment.

A moment of philosophy

Here’s a thought experiment:

Suppose I trained an LLM on all of the books of just one author (plus enough additional language that it was able to meaningfully communicate). Let’s take Stephen King’s 65 novels and 200+ short stories, for example. We’ll sell access to the API we produce.

Monochrome photograph showing a shelf packed full of Stephen King's novels.
I suppose it’s possible that Stephen King was already replaced long ago with an AI that was instructed to churn out horror stories about folks in isolated Midwestern locales being harassed by a pervasive background evil?

The output of this system would be heavily-biased by the limited input it’s been given: anybody familiar with King’s work would quickly spot that the AI’s mannerisms echoed his writing style. Appropriately prompted – or just by chance – such a system would likely produce whole chapters of output that would certainly be considered to be a substantial infringement of the original work, right?

If I make KingLLM, I’m going to get sued, rightly enough.

But if we accept that (and assume that the U.S. District Court for the Northern District of California would agree)… then this ruling on Anthropic would carry a curious implication. That if enough content is ingested, the operation of the LLM in itself is no longer copyright infringement.

Which raises the question: where is the line? What size of corpus must a system be trained upon before its processing must necessarily be considered transformative of its inputs?

Clearly, trying to answer that question leads to a variant of the sorites paradox. Nobody can ever say that, for example, an input of twenty million words is enough to make a model transformative but just one fewer and it must be considered to be perpetually ripping off what little knowledge it has!

But as more of these copyright holder vs. AI company cases come to fruition, it’ll be interesting to see where courts fall. What is fair use and what is infringing?

And wherever the answers land, I’m sure there’ll be folks like me coming up with thought experiments that sit uncomfortably in the grey areas that remain.


Smug Interview Moment

I’ve been in a lot of interviews over the last two or three weeks. But there’s a moment that stands out and that I’ll remember forever as the most-smug I’ve ever felt during an interview.

Close-up of part of a letter, the visible part of which reads: Dear Dan, We are pleased to offer you a position as Senior Softwa... / and reporting to the Company's Manager, Software E... / (the "Commencement Date"). You will receive an... / By accepting this offer you warrant and agree...
There’ll soon be news to share about what I’m going to be doing with the second half of this year…

This particular interview included a mixture of technical and non-technical questions, but a particular technical question stood out for reasons that will rapidly become apparent. It went kind-of like this:

Interviewer: How would you go about designing a backend cache that retains in memory some number of most-recently-accessed items?

Dan: It sounds like you’re talking about an LRU cache. Coincidentally, I implemented exactly that just the other week, for fun, in two of this role’s preferred programming languages (and four other languages). I wrote a blog post about my design choices: specifically, why I opted for a hashmap for quick reads and a doubly-linked-list for constant-time writes. I’m sending you the links to it now: may I talk you through the diagrams?

Interviewer:

'Excuse me' GIF reaction. A white man blinks and looks surprised.

That’s probably the most-overconfident thing I’ve said at an interview since before I started at the Bodleian, 13 years ago. In the interview for that position I spent some time explaining that for the role they were recruiting for they were asking the wrong questions! I provided some better questions that I felt they should ask to maximise their chance of getting the best candidate… and then answered them, effectively helping to write my own interview.

Anyway: even ignoring my cockiness, my interview the other week was informative and enjoyable throughout, and I’m pleased that I’ll soon be working alongside some of the people that I met: they seem smart, and driven, and focussed, and it looks like the kind of environment in which I could do well.

But more on that later.


Scarecrows

As time has gone by, a great many rural English villages have been consumed by their nearest towns, or else become little more than dormitory villages: places where people do little more than eat and sleep in-between their commutes to-and-from their distant workplaces1.

And so it pleases me at least a little that the tiny village I’ve lived in for five years this week still shows great success in how well it clings on to its individual identity.

Panoramic view of a village green, flanked by houses, around which several decorative boards have been erected: made to look like old-fashioned televisions, each shows a photograph of an event in the village's past.
Right now our village green is surrounded by flags, bunting, and thematic decorations.

Every summer since time immemorial, for example, it’s hosted a Village Festival, and this year it feels like the community’s gone all-out. The theme this year is A Century in Television, and most of the festivities seem to tie-in to the theme.

Human-sized scarecrows of classic children's TV characters Bill and Ben the Flower Pot Men and their friend Little Weed, constructed mostly out of plant pots, standing at the corner of a road by a wood-panelled outbuilding.
If you recognise these characters from their first time around on British television, you’re probably older than I am. If you recognise them from their 2001 “reboot”, then you’re probably younger.

I’ve been particularly impressed this year by entrants into the (themed) scarecrow competition: some cracking scarecrows (and related decorations) have started popping up around the village in advance of festival week!

A lifesize mannequin of kids' TV character Bob the Builder stands on scaffolding that's being used by an actual builder who's constructing hip-roof dormer windows.
Bob the Builder’s helping out with the reconstruction of the roof of one of the houses down towards the end of my hamlet, just outside the village proper.

There’s a clear bias towards characters from children’s television programmes, but that only adds to the charm. Not only does it amuse the kids when we walk by them, but it feeds into the feeling of nostalgia that the festival theme seems to evoke (as well, perhaps, as a connection to the importance of this strange village tradition).

A scarecrow of Postman Man, alongside a cardboard cutout of Jess (his black-and-white cat), sits atop a stone wall. The wall contains an in-wall Royal Mail postbox, and the thatched house behind is called Letterbox Cottage, contributing to the theme for this scarecrow.
Well-played, Letterbox Cottage. Well-played.

If you took a wrong turning and found your way through our village when you meant to be somewhere else, you’d certainly be amused, bemused, or both by the plethora of figures standing on street corners, atop hedgerows, and just generally around the place2.

Large scarecrows of two anthropomorphic sheep, the smaller one holding a teddy bear, stood atop a hedgerow.
Shaun the Sheep and what I believe must be his cousin Timmy stand atop a hedge looking down on a route used by many children on their way to school.

The festival, like other events in the local calendar, represents a collective effort by the “institutions” of the village – the parish council, the church, the primary school, etc.

But the level of time and emotional investment from individual households (whether they’re making scarecrows for the Summer festival… decorating windows as a Christmas advent calendar… turning out for a dog show last week, I hear3…) shows the heart of a collective that really engage with this kind of community. Which is really sweet.

Red-headed 'thing-on-a-spring' Zebedee, from The Magic Roundabout, in scarecrow form.
An imaginative use of a coloured lampshade plus some excellent tinfoil work makes Zebedee here come to life. He could only have been more-thematic if he’d been installed on the village’s (only) roundabout!

Anyway, the short of it is that I feel privileged to live in a village that punches above its weight class when it comes to retaining its distinctive personality. And seeing so many of my neighbours, near and far, putting these strange scarecrows out, reminded me of that fact.

Composite photograph showing 11 more scarecrows, including a BBC newscaster, a Casualty surgeon, Spongebob Squarepants, and the bar at the Rovers Return.
I’m sure I’m barely scraping the surface – there are definitely a few I know of that I’ve not managed to photograph yet – but there are a lot of scarecrows around my way, right now.

Footnotes

1 The “village” in which our old house resided certainly had the characteristic feel of “this used to be a place of its own, but now it’s only-barely not just a residential estate on the outskirts of Oxford”, for example. Kidlington had other features, of course, like Oxford’s short-lived zoological gardens… but it didn’t really feel like it had an identity in its own right.

2 Depending on exactly which wrong turn you took, the first scarecrow you saw might well be the one dressed as a police officer – from some nonspecific police procedural drama, one guesses? – that’s stood guard shortly after the first of the signs to advertise our new 20mph speed limit. Holding what I guess is supposed to be a radar gun (but is clearly actually a mini handheld vacuum cleaner), this scarecrow might well be having a meaningful effect on reducing speeding through our village, and for that alone it might be my favourite.

3 I didn’t enter our silly little furball into the village dog show, for a variety of reasons: mostly because I had other things to do at the time, but also because she’s a truculent little troublemaker who – especially in the heat of a Summer’s day – would probably just try to boss-around the other dogs.


Bored Gay Werewolf by Tony Santorella

Cover for Bored Gay Werewolf by Tony Santorella. It's bright green with a mixture of pink and black uppercase letters, with a colour sketch of the head of a slavering wolf in the centre.

What can I possibly say about Bored Gay Werewolf, which caught my attention with the garish colours of its front cover when I saw it in Waterstones and whose blurb suggested that it might, perhaps, be a queer fantasy romp with a Buffy-esque sense of humour?

Werewolf? Sure, it’s got a few of those. There’s even a bit of fun, offbeat humour each time the protagonist reflects on their curious monthly cycle and tries to work out whether they attacked or even killed anybody this time around. But mostly it’s not a story about a werewolf: it’s a story about a slacker who gets suckered into a pyramid scheme, with just a hint of lycanthropy around the fringes.

Gay? I mean: the protagonist’s gay, and many of their friends are queer… and while the representation is good, sexuality doesn’t feel like it’s a particularly significant issue to the storyline. I enjoyed the parallels that were drawn between Brian’s coming-out as gay versus his (for most of the story) closeted werewolf nature – which even though I saw them coming from the first chapter onwards were still well-presented – but apart from that it almost felt like gayness wasn’t a central theme to the story. A smidge of homophobia, some queer culture references, and a throwaway Grindr hookup with a closeted MSM dude do not contribute enough homosexuality to justify “gay” being the largest, pinkest word on a novel’s cover, if you ask me.

Bored? I was, at some points in the book, but I’m not convinced that’s what was intended. The pacing’s a little inconsistent: a long and drawn-out description of an exercise routine overshadows an exploration of the impact of werewolf super-senses, for example. And a long-foreshadowed fight scene finale feels like it’s over in an instant (with a Van Helsing ex Machina twist that felt simultaneously like the brakes being slammed on and a set-up for an inevitable sequel).

I sound pretty negative about it, I’m sure. But it’s not actually bad. It’s just not actually good, either. It’s a passable, middle-of-the-road time-filler with an interesting hook, a few funny set pieces (I laughed out loud a couple of times, for sure), and a set of misfit characters who spend most of the book feeling a little… incomplete? Though it’s possible that latter point’s at-least partially deliberate, as this is without a doubt a “Gen-Z Grows Up” story. Maybe if I were younger and didn’t yet have my shit together the story would appeal more.


I Wrote the Same Code Six Times!

I was updating my CV earlier this week in anticipation of applying for a handful of interesting-looking roles1 and I was considering quite how many different tech stacks I claim significant experience in, nowadays.

There are languages I’ve been writing in every single week for the last 15+ years, of course, like PHP, Ruby, and JavaScript. And my underlying fundamentals are solid.

But is it really fair for me to be able to claim that I can code in Java, Go, or Python: languages that I’ve not used commercially within the last 5-10 years?

Animated GIF showing Sublime Text editor flicking through six different languages, with equivalent code showing in each.
What kind of developer writes the same program six times… for a tech test they haven’t even been asked to do? If you guessed “Dan”, you’d be correct!

Obviously, I couldn’t just let that question lie2. Let’s find out!

The Test

I fished around on Glassdoor for a bit to find a medium-sized single-sitting tech test, and found a couple of different briefs that I mashed together to create this:

In an object-oriented manner, implement an LRU (Least-Recently Used) cache:

  • The size of the cache is specified at instantiation.
  • Arbitrary objects can be put into the cache, along with a retrieval key in the form of a string. Using the same string, you can get the objects back.
  • If a put operation would increase the number of objects in the cache beyond the size limit, the cached object that was least-recently accessed (by either a put or get operation) is removed to make room for it.
  • putting a duplicate key into the cache should update the associated object (and make this item most-recently accessed).
  • Both the get and put operations should resolve within constant (O(1)) time.
  • Add automated tests to support the functionality.
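
Just to illustrate the behaviour the brief describes (this is a standalone sketch, not one of the implementations from my repo), Python’s standard library can satisfy the whole spec in a few lines, because collections.OrderedDict gives you O(1) reordering and eviction for free:

```python
from collections import OrderedDict


class LRUCache:
    """Illustrative LRU cache: most-recently accessed items live at the end."""

    def __init__(self, size):
        self.size = size
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)  # mark as most-recently accessed
        return self.items[key]

    def put(self, key, value):
        self.items[key] = value      # insert, or update a duplicate key
        self.items.move_to_end(key)  # either way, it's now most-recent
        if len(self.items) > self.size:
            self.items.popitem(last=False)  # evict least-recently accessed
```

Of course, leaning on a batteries-included data structure rather misses the point of a data-structures exercise, hence the theory that follows.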

My plan was to implement a solution to this challenge, in as many of the languages mentioned on my CV as possible in a single sitting.

But first, a little Data Structures & Algorithms theory:

The Theory

Simple case with O(n) complexity

The simplest way to implement such a cache might be as follows:

  • Use a linear data structure like an array or linked list to store cached items.
  • On get, iterate through the list to try to find the matching item.
    • If found: move it to the head of the list, then return it.
  • On put, first check if it already exists in the list as with get:
    • If it already exists, update it and move it to the head of the list.
    • Otherwise, insert it as a new item at the head of the list.
      • If this would increase the size of the list beyond the permitted limit, pop and discard the item at the tail of the list.
Illustration of a simple Cache class with a linked list data structure containing three items, referenced from the class by its head item. It has 'get' and 'put' methods.
It’s simple, elegant and totally the kind of thing I’d accept if I were recruiting for a junior or graduate developer. But we can do better.
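
Sketched in Python (a hypothetical illustration of the approach, not code from my repo), that naive design looks something like this – note the linear scans in both methods:

```python
class SimpleLRUCache:
    """Naive O(n) LRU cache: a plain list of (key, value) pairs, head-first by recency."""

    def __init__(self, size):
        self.size = size
        self.items = []  # index 0 = most-recently accessed

    def get(self, key):
        for i, (k, value) in enumerate(self.items):      # O(n) scan for the key
            if k == key:
                self.items.insert(0, self.items.pop(i))  # promote to the head
                return value
        return None

    def put(self, key, value):
        for i, (k, _) in enumerate(self.items):  # O(n) scan for a duplicate key
            if k == key:
                self.items.pop(i)
                break
        self.items.insert(0, (key, value))       # new/updated item goes to the head
        if len(self.items) > self.size:
            self.items.pop()                     # discard the least-recently-accessed tail
```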

The problem with this approach is that it fails the requirement that the methods “should resolve within constant (O(1)) time”3.

Of particular concern is the fact that any operation might need to re-sort the list to put the just-accessed item at the top4. Let’s try another design:

Achieving O(1) time complexity

Here’s another way to implement the cache:

  • Retain cache items in a doubly-linked list, with a pointer to both the head and tail
  • Add a hash map (or similar language-specific structure) for fast lookups by cache key
  • On get, check the hash map to see if the item exists.
    • If so, return it and promote it to the head (as described below).
  • On put, check the hash map to see if the item exists.
    • If so, promote it to the head (as described below).
    • If not, insert it at the head by:
      • Updating the prev of the current head item and then pointing the head to the new item (which will have the old head item as its next), and
      • Adding it to the hash map.
      • If the number of items in the hash map would exceed the limit, remove the tail item from the hash map, point the tail at the tail item’s prev, and unlink the expired tail item from the new tail item’s next.
  • To promote an item to the head of the list:
    1. Follow the item’s prev and next to find its siblings and link them to one another (removes the item from the list).
    2. Point the promoted item’s next to the current head, and the current head‘s prev to the promoted item.
    3. Point the head of the list at the promoted item.
Illustration of a more-sophisticated Cache class with a doubly-linked list data structure containing three items, referenced from the class by its head and tail item. It has 'get' and 'put' methods. It also has a separate hash map indexing each cached item by its key.
Looking at a plate of pointer-spaghetti makes me strangely hungry.

It’s important to realise that this alternative implementation isn’t better. It’s just different: the “right” solution depends on the use-case5.
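
To make all that pointer-juggling concrete, here’s a minimal Python sketch of the doubly-linked-list-plus-hash-map design (again, an illustration rather than one of the six implementations in my repo; niceties like validation are omitted):

```python
class Node:
    """One cached item: a doubly-linked-list node that remembers its own key."""
    __slots__ = ("key", "value", "prev", "next")

    def __init__(self, key, value):
        self.key, self.value = key, value
        self.prev = self.next = None


class LRUCache:
    def __init__(self, size):
        self.size = size
        self.index = {}               # hash map: key -> Node, for O(1) lookups
        self.head = self.tail = None  # most- and least-recently accessed

    def _unlink(self, node):
        """Remove a node from the list, linking its siblings to one another."""
        if node.prev:
            node.prev.next = node.next
        if node.next:
            node.next.prev = node.prev
        if self.head is node:
            self.head = node.next
        if self.tail is node:
            self.tail = node.prev
        node.prev = node.next = None

    def _push_head(self, node):
        """Make a detached node the new head of the list."""
        node.next = self.head
        if self.head:
            self.head.prev = node
        self.head = node
        if self.tail is None:
            self.tail = node

    def get(self, key):
        node = self.index.get(key)
        if node is None:
            return None
        self._unlink(node)     # promote to head: unlink from its current spot…
        self._push_head(node)  # …then re-insert at the front
        return node.value

    def put(self, key, value):
        node = self.index.get(key)
        if node:                      # duplicate key: update, then promote
            node.value = value
            self._unlink(node)
        else:
            node = Node(key, value)
            self.index[key] = node
            if len(self.index) > self.size:
                expired = self.tail   # evict the least-recently accessed item
                self._unlink(expired)
                del self.index[expired.key]
        self._push_head(node)
```

Every operation is a constant number of pointer updates plus a hash-map lookup, which is where the O(1) guarantee comes from.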

The Implementation

That’s enough analysis and design. Time to write some code.

GitHub repo 'Languages' panel, showing a project that includes Java (17.2%), Go (16.8%), Python (16.4%), Ruby (15.7%), TypeScript (13.7%), PHP (11.5%), and 'Other' (8.7%).
Turns out that if you use enough different languages in your project, GitHub begins to look like it wants to draw a rainbow.

Picking a handful of the more-useful languages on my CV6, I opted to implement in:

  • Ruby (with RSpec for testing and Rubocop for linting)
  • PHP (with PHPUnit for testing)
  • TypeScript (running on Node, with Jest for testing)
  • Java (with JUnit for testing)
  • Go (which isn’t really an object-oriented language but acts a bit like one, amirite?)
  • Python (probably my weakest language in this set, but which actually ended up with quite a tidy solution)

Naturally, I open-sourced everything if you’d like to see for yourself. It all works, although if you’re actually in need of such a cache for your project you’ll probably find an alternative that’s at least as good (and more-likely to be maintained!) in a third-party library somewhere!

What did I learn?

This was actually pretty fun! I might continue to expand my repo by doing the same challenge with a few of the other languages I’ve used professionally at some point or another7.

And there are a few takeaways I got from this experience:

Lesson #1: programming more languages can make you better at all of them

As I went along, one language at a time, I ended up realising improvements that I could make to earlier iterations.

For example, when I came to the TypeScript implementation, I decided to use generics so that the developer can specify what kind of objects they want to store in the cache, rather than just a generic Object, and better benefit from type-safety. That’s when I remembered that Java supports generics, too, so I went back and used them there as well.

In the same way as speaking multiple (human) languages or studying linguistics can help unlock new ways of thinking about your communication, being able to think in terms of multiple different programming languages helps you spot new opportunities. When in 2020 PHP 8 added nullsafe operators, union types, and named arguments, I remember feeling confident using them from day one because those features were already familiar to me from Ruby8, TypeScript9, and Python10, respectively.

Lesson #2: even when I’m rusty, I can rely on my fundamentals

I’ve applied for a handful of jobs now, but if one of them had invited me to a pairing session on a language I’m rusty on (like Java!) I might’ve felt intimidated.

But it turns out I shouldn’t need to be! With my solid fundamentals and a handful of other languages under my belt, I understand when I need to step away from the code editor and hit the API documentation. Turns out, I’m in a good position to demo any of my language skills.

I remember when I was first learning Go, I wanted to make use of a particular language feature that I didn’t know whether it had. But because I’d used that feature in Ruby, I knew what to search for in Go’s documentation to see whether it was supported (it wasn’t!) and, if it had been, what the syntax would be11.

Lesson #3: structural rules are harder to gearshift than syntactic ones

Switching between six different languages while writing the same application was occasionally challenging, but not in the ways I expected.

I’ve had plenty of experience switching programming languages mid-train-of-thought before. Sometimes you just have to flit between the frontend and backend of your application!

But this time around I discovered: changes in structure are apparently harder for my brain than changes in syntax. E.g.:

  • Switching in and out of Python’s indentation caught me out at least once (it might’ve gone better had I taken the time to install the language’s tools into my text editor first!).
  • Switching from a language without enforced semicolon line ends (e.g. Ruby, Go) to one with them (e.g. Java, PHP) had me make the compiler sad several times.
  • This gets even tougher when not writing the language but writing about the language: my first pass at the documentation for the Go version somehow ended up with Ruby/Python-style #-comments instead of Go/Java/TypeScript-style //-comments; whoops!

I’m guessing that the part of my memory that looks after a language’s keywords, how a method header is structured, and which equals sign to use for assignment versus comparison… is stored in a different part of my brain than the bit that keeps track of how a language is laid-out?12

Okay, time for a new job

I reckon it’s time I got back into work, so I’m going to have a look around and see if there’s any roles out there that look exciting to me.

If you know anybody who’s looking for a UK-based, remote-first, senior+, full-stack web developer with 25+ years experience and more languages than you can shake a stick at… point them at my CV, would you?

Footnotes

1 I suspect that when most software engineers look for a new job, they filter to the languages and frameworks they feel they’re strongest at. I do a little of that, I suppose, but I’m far more-motivated by culture, sector, product and environment than I am by the shape of your stack, and I’m versatile enough that technology specifics can almost come second. So long as you’re not asking me to write VB.NET.

2 It’s sort-of a parallel to how I decided to check the other week that my Gutenberg experience was sufficiently strong that I could write standard ReactJS, too.

3 I was pleased to find a tech test that actually called for an understanding of algorithm growth/scaling rates, so I could steal this requirement for my own experiment! I fear that recruiters, in their drive to be pragmatic and representative of “real work”, sometimes overlook the value of a comprehension of computer science fundamentals.

4 Even if an algorithm takes the approach of creating a new list with the inserted/modified item at the top, that’s still just a very-specific case of insertion sort when you think about it, right?

5 The second design will be slower at writing but faster at reading, and will scale better as the cache gets larger. That sounds great for a read-often/write-rarely cache, but your situation may differ.

6 Okay, my language selection was pretty arbitrary. But if I’d also come up with implementations in Perl, and C#, and Elixir, and whatever else… I’d have been writing code all day!

7 So long as I’m willing to be flexible about the “object-oriented” requirement, there are even more options available to me. Probably the language that I last wrote longest ago would be Pascal: I wonder how much of that I remember?

8 Ruby’s safe navigation/“lonely” operator has done the same thing as PHP’s nullsafe operator since 2015.

9 TypeScript got union types back in 2015, and apart from them being more-strictly-enforced they’re basically identical to PHP’s.

10 Did you know that Python has had keyword arguments since its very first public release, way back in 1994? How did it take so many other interpreted languages so long to catch up?

11 The feature was the three-way comparison or “spaceship operator”, in case you were wondering.

12 I wonder if anybody’s ever laid a programmer in an MRI machine while they code? I’d be really interested to see if different bits of the brain light up when coding in functional programming languages than in procedural ones, for example!


The Difference Between Downloading and Streaming

What’s the difference between “streaming” and “downloading” video, audio, or some other kind of linear media?1

Screenshot from Vimeo's settings, showing the 'What can people do with your videos?' section. The 'Download them' checkbox is highlighted and a question mark has been scrawled alongside it.
Many platforms make a firm distinction between streaming and downloading, implying that they’re very different. But they’re not.

They’re basically the same thing

Despite what various platforms would have you believe, there’s no significant technical difference between streaming and downloading.

Suppose you’re choosing whether to download or stream a video2. In both cases3:

  • The server gets frames of video from a source (file, livestream, etc.)
  • The server sends those frames to your device
  • Your device stores them while it does something with them
Want to keep a copy of this animation? You don’t have to trick your computer into retaining it as it streams because I’ve open-sourced it, and the code used to produce it.

So what’s the difference?

The fundamental difference between streaming and downloading is what your device does with those frames of video:

Does it show them to you once and then throw them away? Or does it re-assemble them all back into a video file and save it into storage?

Screenshot from YouTube music video of Rick Astley's "Never Gonna Give You Up", 34 seconds in, showing Rick singing outdoors at night. the red YouTube progress bar goes a little over half way through the darker grey buffering indicator along the timeline bar.
When you’re streaming on YouTube, the video player running on your computer retains a buffer of frames ahead of and behind your current position, so you can skip around easily: the darker grey part of the timeline shows which parts of the video are stored on – that is, downloaded to – your computer.

Buffering is when your streaming player gets some number of frames “ahead” of where you’re watching, to give you some protection against connection issues. If your WiFi wobbles for a moment, the buffer protects you from the video stopping completely for a few seconds.

But for buffering to work, your computer has to retain bits of the video. So in a very real sense, all streaming is downloading! The buffer is the part of the stream that’s downloaded onto your computer right now. The question is: what happens to it next?

All streaming is downloading

So that’s the bottom line: if your computer deletes the frames of video it was storing in the buffer, we call that streaming. If it retains them in a file, we call that downloading.
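A toy model makes the point (hypothetical code, nothing to do with any real player): the chunks that arrive are identical either way; only their fate differs.

```typescript
// Simulate receiving a stream of chunks. In "download" mode we keep
// every chunk; in "stream" mode we "play" each one and let it go.
function receive(
  chunks: Uint8Array[],
  mode: "stream" | "download",
): Uint8Array[] {
  const retained: Uint8Array[] = [];
  for (const chunk of chunks) {
    // ...decode and display the frames in this chunk (not modelled)...
    if (mode === "download") retained.push(chunk); // keep a copy
    // otherwise the chunk simply falls out of scope: buffer discarded
  }
  return retained;
}

const transmission = [new Uint8Array([1, 2]), new Uint8Array([3, 4])];
const streamed = receive(transmission, "stream");     // retains nothing
const downloaded = receive(transmission, "download"); // retains everything
```

Either way, every byte crossed the network and sat in the client’s memory; the honour system governs what happens to it next.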

That definition introduces a philosophical problem. Remember that Vimeo checkbox that lets a creator decide whether people can (i.e. are allowed to) download their videos? Isn’t that somewhat meaningless, if all streaming is downloading?

Because the difference between streaming and downloading comes down to whether the device belonging to the person watching the video deletes the media when they’re done. And in virtually all cases, that’s done on the honour system.

Comic showing a conversation between Firefox and Netflix, as represented by their respective logos. Firefox says 'Hey, send me a copy of Despicable Me 4?'. Netflix replies: 'Promise you'll delete it when you're done watching it?'. Firefox responds: 'Umm... sure!'
This kind of conversation happens, over the HTTP protocol, all the time. Probably most of the time the browser is telling the truth, but there’s no way to know for certain.

When your favourite streaming platform says that it’s only possible to stream, and not download, their media… or when they restrict “downloading” as an option to higher-cost paid plans… they’re relying on the assumption that the user’s device can be trusted to delete the media when the user’s done watching it.

But a user who owns their own device, their own network, their own screen or speakers has many, many opportunities to not fulfil the promise of deleting media after they’ve consumed it: to retain a “downloaded” copy for their own enjoyment, including:

  • Intercepting the media as it passes through their network on the way to its destination device
  • Using client software that’s been configured to stream-and-save, rather than stream-and-delete, the content
  • Modifying “secure” software (e.g. an official app) so that it retains a saved copy rather than deleting it
  • Capturing the stream buffer as it’s cached in device memory or on the device’s hard disk
  • Outputting the resulting media to a different device, e.g. using an HDMI capture device, and saving it there
  • Exploiting the “analogue4 hole”5: using a camera, microphone, etc. to make a copy of what comes out of the screen/speakers6

Okay, so I oversimplified (before you say “well, actually…”)

It’s not entirely true to say that streaming and downloading are identical, even with the caveat of “…from the server’s perspective”. There are three big exceptions worth thinking about:

Exception #1: downloads can come in any order

When you stream some linear media, you expect the server to send the media in strict chronological order. Being able to start watching before the whole file has downloaded is a big part of what makes streaming appealing to the end-user. This means that media intended for streaming tends to be stored in a way that facilitates that kind of delivery. For example:

  • Media designed for streaming will often be stored in linear chronological order in the file, which impacts what kinds of compression are available.
  • Media designed for streaming will generally use formats that put file metadata at the start of the file, so that it gets delivered first.
  • Video designed for streaming will often have frequent keyframes so that a client that starts “in the middle” can decode the buffer without downloading too much data.

No such limitation exists for files intended for downloading. If you’re not planning on watching a video until it’s completely downloaded, the order in which the chunks arrive is arbitrary!

But these limitations make the set of “files suitable for streaming” a subset of the set of “files suitable for downloading”. They only make it challenging or impossible to stream some media intended for downloading… they do nothing to prevent downloading of media intended for streaming.

Exception #2: streamed media is more-likely to be transcoded

A server that’s streaming media to a client is engaged in a sort-of dance: the client keeps the server updated on which “part” of the media it cares about, so the server can jump ahead, throttle back, pause sending, etc. and the client’s buffer can be kept filled to the optimal level.

This dance also allows for a dynamic change in quality levels. You’ve probably seen this happen: you’re watching a video on YouTube and suddenly the quality “jumps” to something more (or less) like a pile of LEGO bricks7. That’s the result of your device realising that the rate at which it’s receiving data isn’t well-matched to the connection speed, and asking the server to send a different quality level8.

The server can – and some do! – pre-generate and store all of the different formats, but some servers will convert files (and particularly livestreams) on-the-fly, introducing a few seconds’ delay in order to deliver the format that’s best-suited to the recipient9. That’s not necessary for downloads, where the user will often want the highest-quality version of the media (and if they don’t, they’ll select the quality they want at the outset, before the download begins).

Exception #3: streamed media is more-likely to be encumbered with DRM

And then, of course, there’s DRM.

As streaming digital media has become the default way for many people to consume video and audio content, rights holders have engaged in a fundamentally-doomed10 arms race of implementing copy-protection strategies to attempt to prevent end-users from retaining usable downloaded copies of streamed media.

Take HDCP, for example, which e.g. Netflix use for their 4K streams. To download these streams, your device has to be running some decryption code that only works if it can trace a path to the screen it’ll be outputting to in which every link also supports HDCP, and both your device and that screen promise that they’re definitely only going to show the video and not make it possible to save it. That promise is then enforced by Digital Content Protection LLC granting decryption keys, and licenses to use them, only to manufacturers.11

Fingers hold a small box, about half the size of a deck of cards, labelled "ezcoo 4K HDML 2.0 Splitter 1x2". On the side facing the camera can be seen a "HDMI In" port and an "EDID" switch that can be set to 4K7.1, 4K5.1, or COPY1.
The real hackers do stuff with software, but people who just want their screens to work properly in spite of HDCP can just buy boxes like this (which I bought for a couple of quid on eBay). Obviously, I suppose, you could also use something like this with a capture card to download content that was “protected” to ensure that it could only be streamed.

Anyway, the bottom line is that all streaming is, by definition, downloading, and the only significant difference between what people call “streaming” and “downloading” is that when “streaming” there’s an expectation that the recipient will delete, and not retain, a copy of the video. And that’s it.

Footnotes

1 This isn’t the question I expected to be answering. I made the animation in this post for use in a different article, but that one hasn’t come together yet, so I thought I’d write about the technical difference between streaming and downloading as an excuse to use it already, while it still feels fresh.

2 I’m using the example of a video, but this same principle applies to any linear media that you might stream: that could be a video on Netflix, a livestream on Twitch, a meeting in Zoom, a song in Spotify, or a radio show in iPlayer, for example: these are all examples of media streaming… and – as I argue – they’re therefore also all examples of media downloading because streaming and downloading are fundamentally the same thing.

3 There are a few simplifications in the first half of this post: I’ll tackle them later on. For the time being, when I say sweeping words like “every”, just imagine there’s a little footnote that says, “well, actually…”, which will save you from feeling like you have to say so in the comments.

4 Per my style guide, I’m using the British English spelling of “analogue”, rather than the American English “analog” which you’ll often find elsewhere on the Web when talking about the analog hole.

5 The rich history of exploiting the analogue hole spans everything from bootlegging a 1970s Led Zeppelin concert by smuggling recording equipment in inside a wheelchair (definitely, y’know, to help topple the USSR and not just to listen to at home while you get high) to “camming” by bribing your friendly local projectionist to let you set up a video camera at the back of the cinema for their test screening of the new blockbuster. Until some corporation tricks us into installing memory-erasing DRM chips into our brains (hey, there’s a dystopic sci-fi story idea in there somewhere!) the analogue hole will always be exploitable.

6 One might argue that recreating a piece of art from memory, after the fact, is a very-specific and unusual exploitation of the analogue hole: the one that allows us to remember (or “download”) information to our brains rather than letting it “stream” right through. There’s evidence to suggest that people pirated Shakespeare’s plays this way!

7 Of course, if you’re watching The LEGO Movie, what you’re seeing might already look like a pile of LEGO bricks.

8 There are other ways in which the client and server may negotiate, too: for example, what encoding formats are supported by your device.

9 My NAS does live transcoding when Jellyfin streams to devices on my network, and it’s magical!

10 There’s always the analogue hole, remember! Although in practice this isn’t even remotely necessary and most video media gets ripped some-other-way by clever pirate types even where it uses highly-sophisticated DRM strategies, and then ultimately it’s only legitimate users who end up suffering as a result of DRM’s burden. It’s almost as if it’s just, y’know, simply a bad idea in the first place, or something. Who knew?

11 Like all these technologies, HDCP was cracked almost immediately and every subsequent version that’s seen widespread rollout has similarly been broken by clever hacker types. Legitimate, paying users find themselves disadvantaged when their laptop won’t let them use their external monitor to watch a movie, while the bad guys make pirated copies that work fine on anything. I don’t think anybody wins, here.


Google Shared My Phone Number!

Podcast Version

This post is also available as a podcast. Listen here, download for later, or subscribe wherever you consume podcasts.

Earlier this month, I received a phone call from a user of Three Rings, the volunteer/rota management software system I founded1.

We don’t strictly offer telephone-based tech support – our distributed team of volunteers doesn’t keep any particular “core hours” so we can’t say who’s available at any given time – but instead we answer email/Web-based queries pretty promptly at any time of the day or week.

But because I’ve called-back enough users over the years, it’s pretty much inevitable that a few probably have my personal mobile number saved. And because I’ve been applying for a couple of interesting-looking new roles, I’m in the habit of answering my phone even if it’s a number I don’t recognise.

Dan sits at his laptop in front of a long conference table where a group of people are looking at a projector screen.
Many of the charities that benefit from Three Rings seem to form the impression that we’re all just sat around in an office, like this. But in fact many of my fellow volunteers only ever see me once or twice a year!

After the first three such calls this month, I was really starting to wonder what had changed. Had we accidentally published my phone number, somewhere? So when the fourth tech support call came through, today (which began with a confusing exchange when I didn’t recognise the name of the caller’s charity, and he didn’t get my name right, and I initially figured it must be a wrong number), I had to ask: where did you find this number?

“When I Google ‘Three Rings login’, it’s right there!” he said.

Google Search results page for 'Three Rings CIC', showing a sidebar with information about the company and including... my personal mobile number and a 'Call' button that calls it!
I almost never use Google Search2, so there’s no way I’d have noticed this change if I hadn’t been told about it.

He was right. A Google search that surfaced Three Rings CIC’s “Google Business Profile” now featured… my personal mobile number. And a convenient “Call” button that connects you directly to it.

'Excuse me' GIF reaction. A white man blinks and looks surprised.

Some years ago, I provided my phone number to Google as part of an identity verification process, but didn’t consent to it being shared publicly. And, indeed, they didn’t share it publicly, until – seemingly at random – they started doing so, presumably within the last few weeks.

Concerned by this change, I logged into Google Business Profile to see if I could undo it.

Screenshot from Google Business Profile, with my phone number and the message 'Your phone number was updated by Google.'.
Apparently Google inserted my personal mobile number into search results for me, randomly, without me asking them to. Delightful.

I deleted my phone number from the business listing again, and within a few minutes it seemed to have stopped being served to random strangers on the Internet. Unfortunately deleting the phone number also made the “Your phone number was updated by Google” message disappear, so I never got to click the “Learn more” link to maybe get a clue as to how and why this change happened.

Last month, high-street bank Halifax posted the details of a credit agreement I have with them to two people who aren’t me. Twice in two months seems suspicious. Did I accidentally click the wrong button on a popup and now I’ve consented to all my PII getting leaked everywhere?

Spoof privacy settings popup, such as you might find on a website, reading: We and our partners work very hard to keep your data safe and secure and to operate within the limitations of the law. It's really hard! Can you give us a break and make it easier for us by consenting for us to not have to do that? By clicking the 'I Agree' button, you consent to us and every other company you do business with to share your personal information with absolutely anybody, at any time, for any reason, forever. That's cool, right?
Don’t you hate it when you click the wrong button. Who reads these things, anyway, right?

Such feelings of rage.

Footnotes

1 Way back in 2002! We’re very nearly at the point where the Three Rings system is older than the youngest member of the Three Rings team. Speaking of which, we’re seeking volunteers to help expand our support team: if you’ve got experience of using Three Rings and an hour or two a week to spare helping to make volunteering easier for hundreds of thousands of people around the world, you should look us up!

2 Seriously: if you’re still using Google Search as your primary search engine, it’s past time you shopped around. There are great alternatives that do a better job on your choice of one or more of the metrics that might matter to you: better privacy, fewer ads (or more-relevant ads, if you want), less AI slop, etc.


Dynamic Filters in Pure CSS

While working on something else entirely1, I had a random thought:

Could the :checked and :has pseudo-classes and the subsequent-sibling (~) selector be combined to perform interactive filtering without JavaScript?

Turns out, yes. Have a play with the filters on the side of the demo below. You can either use:

  • “OR” mode, so you can show e.g. “all mammals and carnivores”, or
  • “AND” mode, so you can show e.g. “all mammals that are carnivores”.

Filter the animals!

(if it doesn’t work right where you are, e.g. in a feed reader, you can view it “standalone”)

  • Alpaca
  • Anteater
  • Bat
  • Beetle
  • Butterfly
  • Camel
  • Cat
  • Chameleon
  • Cobra
  • Cow
  • Crab
  • Crocodile
  • Dog
  • Duck
  • Elephant
  • Elk
  • Fish
  • Frog
  • Giraffe
  • Hippo
  • Husky
  • Kangaroo
  • Lion
  • Macaw
  • Manatee
  • Monkey
  • Mouse
  • Octopus
  • Ostrich
  • Owl
  • Panda
  • Pelican
  • Penguin
  • Pig
  • Rabbit
  • Raccoon
  • Ray
  • Rhino
  • Rooster
  • Shark
  • Sheep
  • Sloth
  • Snake
  • Spider
  • Squirrel
  • Swan
  • Tiger
  • Toucan
  • Turtle
  • Whale

The source code is available to download under the Unlicense, but the animal images are CC-BY licensed (with thanks to Aslan Almukhambetov).

How does it work?

There’s nothing particularly complicated here, although a few of the selectors are a little verbose.

First, we set the initial state of each animal. In “OR” mode, they’re hidden, because each selected checkbox is additive. In “AND” mode, they’re shown, because checking a checkbox can only ever remove an animal from the result set:

#filters:has(#filter-or:checked) ~ #animals .animal {
  display: none;
}

#filters:has(#filter-and:checked) ~ #animals .animal {
  display: flex;
}

The magic of the :has pseudo-class is that it doesn’t change the scope, which means that after checking whether “AND” or “OR” is checked within #filters, the #animals container is still a subsequent sibling that the ~ combinator can reach.

Touchscreen interactive restaurant menu with filter categories on the left and dishes for selection in the centre.
Next time you’re implementing a filter interface, like this restaurant menu, perhaps ask whether you actually need JavaScript.

Then all we need to do is daisy-chain :has pseudo-classes to show animals with a particular class if that class is checked in “OR” mode, or to hide animals that don’t have a particular class in “AND” mode. Here’s what that looks like:

#filters:has(#filter-or:checked):has(#aquatic:checked)  ~ #animals .aquatic,
#filters:has(#filter-or:checked):has(#bird:checked)     ~ #animals .bird,
...
#filters:has(#filter-or:checked):has(#reptile:checked)  ~ #animals .reptile {
  display: flex;
}

#filters:has(#filter-and:checked):has(#aquatic:checked) ~ #animals .animal:not(.aquatic),
#filters:has(#filter-and:checked):has(#bird:checked)    ~ #animals .animal:not(.bird),
...
#filters:has(#filter-and:checked):has(#reptile:checked) ~ #animals .animal:not(.reptile) {
  display: none;
}
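If it helps to see the set logic spelled out, here’s the same OR/AND semantics as a plain function (a hypothetical sketch for illustration only – the demo itself needs no script):

```typescript
// Decide whether an animal is visible, given the classes it carries,
// the set of checked filter names, and the current mode.
function visible(
  animalClasses: string[],
  checked: string[],
  mode: "OR" | "AND",
): boolean {
  // Initial states: OR mode starts with everything hidden,
  // AND mode with everything shown.
  if (checked.length === 0) return mode === "AND";
  return mode === "OR"
    ? checked.some((c) => animalClasses.includes(c))   // any match shows it
    : checked.every((c) => animalClasses.includes(c)); // all must match
}
```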

It could probably enjoy an animation effect to make it clearer when items are added and removed2, but that’s a consideration for another day.

Many developers would be tempted to use JavaScript to implement the client-side version of a filter like this. And in some cases, that might be the right option.

But it’s always worth remembering that:

  • A CSS solution is almost-always more-performant than a JS one.
  • A JS solution is usually less-resilient than a CSS one: a CDN failure, unsupported API, troublesome content-blocker or syntax error will typically have a much larger impact on JavaScript.
  • For the absolutely maximum compatibility, consider what you can do in plain HTML, or on the server-side, and treat anything on the client-side as progressive enhancement.

Footnotes

1 The thing I was actually working on when I got distracted was an OAuth provider implementation for Three Rings, connected with work that took place at this weekend’s hackathon to (eventually) bring single-sign-on “across” Three Rings CIC’s products. Eventually being the operative word.

2 Such an animation should, of course, be wrapped in a @media (prefers-reduced-motion: no-preference) media query!


Avanti’s Accessibility Failure

I’m travelling by train in just over a week, and I signed up for an account with Avanti West Coast, who proudly show off their Shaw Trust accessibility certificate.

Clearly that certificate only applies to their website, though, and not to e.g. their emails. When you sign up for an account with them, you need to verify your email address. They send you an (HTML-only) email with a link to click. Here’s what that link looks like to a sighted person:

Tail end of a formatted email with a black-on-orange "Verify email address" button.

So far, so good. But here’s the HTML code they’re using to create that button. Maybe you’ll spot the problem:

<a href="https://www.avantiwestcoast.co.uk/train-tickets/verify-email?prospectId=12345678&customerKey=12345:1234567"
 target="_blank"
  style="font-family:Averta, Arial, Helvetica, sans-serif;">

  <div class="mobile_image"
       style="font-family:Averta, Arial, Helvetica, sans-serif; display:none; border:none; text-align:center;">

    <img width="40%"
           src="https://image.e.avantiwestcoast.co.uk/lib/fe371170756404747d1670/m/1/9069c7a2-b9ed-4188-9809-bf58592ec002.png"
           alt=""
         style="font-family:Averta, Arial, Helvetica, sans-serif;width:143px; height:40px;" />

  </div>
</a>

Despite specifying the font to use three times, they leave the image’s alt text empty. So for somebody who can’t see that image, the link is completely unusable1.
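For comparison, here’s a minimal sketch of what an accessible version of that link could look like (URLs abridged; the important part is the alt attribute):

```html
<a href="https://www.avantiwestcoast.co.uk/train-tickets/verify-email?…">
  <img src="https://image.e.avantiwestc.astd.co.uk/…"
       alt="Verify email address"
       width="143" height="40" />
</a>
```

With that one attribute filled in, a screen reader announces the link as “Verify email address”, and people who don’t load remote images get usable text in place of the missing picture.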

This made me angry enough that I gave up on my transaction and bought my train tickets from LNER instead.

Accessibility matters. And that includes emails. Do better, Avanti.

Footnotes

1 Incidentally, this also makes the email unusable for privacy-conscious people who, like me, don’t routinely load remote images in emails. But that’s a secondary concern, really.


Variable-aspect adaptive-bitrate video… in vanilla HTML?

The video below is presented in portrait orientation, because your screen is taller than it is wide.

The video below is presented in landscape orientation, because your screen is wider than it is tall.

The video below is presented in square orientation (the Secret Bonus Square Video!), because your screen has approximately the same width as its height. Cool!

This is possible (with a single <video> element, and without any JavaScript!) thanks to some cool HTML features you might not be aware of, which I’ll briefly explain in the video. Or scroll down for the full details.

Variable aspect-ratio videos in pure HTML

I saw a 2023 blog post by Scott Jehl about how he helped Firefox 120 (re)gain support for the <source media="..."> attribute. Chrome added support later that year, and Safari already had it. This means that it’s pretty safe to do something like this:

<video controls>
  <source src="squareish.mp4"
        media="(min-aspect-ratio: 0.95) and (max-aspect-ratio: 1.05)" />
  <source src="portrait.mp4"
        media="(orientation: portrait)" />
  <source src="landscape.mp4" />
</video>
This code creates a video with three sources: squareish.mp4 which is shown to people on “squareish” viewports, failing that portrait.mp4 which is shown to people whose viewports are taller than wide, and failing that landscape.mp4 which is shown to anybody else.

That’s broadly-speaking how the video above is rendered. No JavaScript needed.

Browsers only handle media queries on videos when they initially load, so you can’t just tip your phone over or resize the window: you’ll need to reload the page, too. But it works! Give it a go: take a look at the video in both portrait and landscape modes and let me know what you think1.

Adding adaptive bitrate streaming with HLS

Here’s another cool technology that you might not have realised you could “just use”: adaptive bitrate streaming with HLS!

You’ve used adaptive bitrate streaming before, though you might not have noticed it. It’s what YouTube, Netflix, etc. are doing when your network connection degrades and you quickly get dropped-down, mid-video, to a lower-resolution version2.

Turns out you can do it on your own static hosting, no problem at all. I used this guide (which has a great description of the parameters used) to help me:

ffmpeg -i landscape.mp4 \
       -filter_complex "[0:v]split=3[v1][v2][v3]; [v1]copy[v1out]; [v2]scale=w=1280:h=720[v2out]; [v3]scale=w=640:h=360[v3out]" \
       -map "[v1out]" -c:v:0 libx264 -x264-params "nal-hrd=cbr:force-cfr=1" -b:v:0 5M -maxrate:v:0 5M -minrate:v:0 5M -bufsize:v:0 10M -preset slow -g 48 -sc_threshold 0 -keyint_min 48 \
       -map "[v2out]" -c:v:1 libx264 -x264-params "nal-hrd=cbr:force-cfr=1" -b:v:1 3M -maxrate:v:1 3M -minrate:v:1 3M -bufsize:v:1 3M -preset slow -g 48 -sc_threshold 0 -keyint_min 48 \
       -map "[v3out]" -c:v:2 libx264 -x264-params "nal-hrd=cbr:force-cfr=1" -b:v:2 1M -maxrate:v:2 1M -minrate:v:2 1M -bufsize:v:2 1M -preset slow -g 48 -sc_threshold 0 -keyint_min 48 \
       -map a:0 -c:a:0 aac -b:a:0 96k -ac 2 \
       -map a:0 -c:a:1 aac -b:a:1 96k -ac 2 \
       -map a:0 -c:a:2 aac -b:a:2 48k -ac 2 \
       -f hls -hls_time 2 -hls_playlist_type vod -hls_flags independent_segments -hls_segment_type mpegts \
       -hls_segment_filename landscape_%v/data%02d.ts \
       -master_pl_name landscape.m3u8 \
       -var_stream_map "v:0,a:0 v:1,a:1 v:2,a:2" landscape_%v.m3u8
This command splits the H.264 video landscape.mp4 into three different resolutions: the original “v1” (1920×1080, in my case, with 96kbit audio), “v2” (1280×720, with 96kbit audio), and “v3” (640×360, with 48kbit audio), each with a resolution-appropriate maximum bitrate, and forced keyframes every 48th frame. Then it breaks each of those into HLS segments (.ts files) and references them from a .m3u8 playlist.

The output from this includes:

  • Master playlist landscape.m3u8, which references the other playlists with reference to their resolution and bandwidth, so that browsers can make smart choices,
  • Playlists landscape_0.m3u8 (“v1”), landscape_1.m3u8 (“v2”), etc., each of which references the “parts” of that video,
  • Directories landscape_0/, landscape_1/, etc., each of which contains
  • data00.ts, data01.ts, etc.: the actual “chunks” that contain the video segments, which can be downloaded independently by the browser as-needed
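For illustration only – the exact BANDWIDTH values (and the CODECS strings ffmpeg also emits) are computed from your actual encode, so yours will differ – the generated master playlist looks something like this:

```
#EXTM3U
#EXT-X-VERSION:6
#EXT-X-STREAM-INF:BANDWIDTH=5500000,RESOLUTION=1920x1080
landscape_0.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=3300000,RESOLUTION=1280x720
landscape_1.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1100000,RESOLUTION=640x360
landscape_2.m3u8
```

The browser reads this one small file first, then picks whichever variant playlist best suits the current connection.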

Bringing it all together

We can bring all of that together, then, to produce a variable-aspect, adaptive bitrate, HLS-streamed video player… in pure HTML and suitable for static hosting:

<video controls>
  <source src="squareish.m3u8"
         type="application/x-mpegURL"
        media="(min-aspect-ratio: 0.95) and (max-aspect-ratio: 1.05)" />

  <source src="portrait.m3u8"
         type="application/x-mpegURL"
        media="(orientation: portrait)" />

  <source src="landscape.m3u8"
         type="application/x-mpegURL" />
</video>
You could, I suppose, add alternate types, poster images, and all kinds of other fancy stuff, but this’ll do for now.

That’ll “just work” in Safari and a handful of mobile browsers… but won’t display anything for most desktop browsers. Boo!

One solution is to also provide the standard .mp4 files as an alternate <source>, and that’s fine I guess, but you lose the benefit of HLS (and you have to store yet more files). But there’s a workaround:

Polyfill full functionality for all browsers

If you’re willing to use a JavaScript polyfill, you can make the code above work on virtually any device. I gave this a go, here, by:

  1. Including the polyfill hls.js, and
  2. Adding some JavaScript code that detects affected `<video>` elements and applies the fix where necessary:
// Find all <video>s which have HLS sources:
for( const hlsVideo of document.querySelectorAll('video:has(source[type="application/x-mpegurl"]), video:has(source[type="application/vnd.apple.mpegurl"])') ) {
  // If the browser has native support, do nothing:
  if( hlsVideo.canPlayType('application/x-mpegurl') || hlsVideo.canPlayType('application/vnd.apple.mpegurl') ) continue;

  // If hls.js can't help fix that, do nothing:
  if ( ! Hls.isSupported() ) continue;

  // Find the best source: the first one to match any applicable CSS media query
  // (a <source> with no media attribute matches everything):
  const bestSource = Array.from(hlsVideo.querySelectorAll('source')).find(source => window.matchMedia(source.media).matches);
  if ( ! bestSource ) continue;

  // Use hls.js to attach the best source:
  const hls = new Hls();
  hls.loadSource(bestSource.src);
  hls.attachMedia(hlsVideo);
}
It makes me feel a little dirty to make a <video> depend on JavaScript, but if that’s the route you want to go down while we wait for HLS support to become more widespread (rather than adding different-typed sources) then that’s fine, I guess.

This was a fun dive into some technologies I’ve not had the chance to try before. A fringe benefit of being a generalist full-stack developer is that when you’re “between jobs” you get to play with all the cool things when you’re brushing up your skills before your next big challenge!

(Incidentally: if you think you might be looking to employ somebody like me, my CV is over there!)

Footnotes

1 There definitely isn’t a super-secret “square” video on this page, though. No siree. (Shh.)

2 You can tell when you get dropped to a lower-resolution version of a video because suddenly everybody looks like they’re a refugee from Legoland.

Deprecate React

I’m keeping an eye out for my next career move (want to hire me?). Off the back of that I’ve been brushing up on the kinds of skills that I might be asked to showcase in any kind of “tech test”.

Not the kind of stuff I can do with one hand tied behind my back1, but the things for which I’d enjoy feeling a little more-confident2. Stuff that’s on my CV that I’ve done and can do, but where I’d like to check before somebody asks me about it in an interview.

React? Sure, I can do that…

LinkedIn, GlassDoor, and bits of the Fediverse are a gold mine for the kinds of things that people are being asked to demonstrate in tech tests these days. Like this post:

On LinkedIn, Avantika Raj shares a coding question asked during their React Developer interview with Volkswagon Software Solutions. It reads: Create a traffic light component with green, yellow, and red lights. On clicking a button, the light should change. Initially, it should show green. After 2 minutes, it should automatically switch to red for 30 seconds, then yellow for 10 seconds, and repeat this cycle continuously.
I’d describe myself as a “stack-agnostic senior/principal full-stack/backend web developer/security engineer”3, and so this question – which feels like it’s a filter for a junior developer with a React specialisation – isn’t really my wheelhouse. Which makes it a perfect excuse for an hour of playing about with React.

My recent React experience has mostly involved Gutenberg blocks and WordPress theme components. This seemed like an excuse to check that I can wrangle a non-WordPress React stack.

Animated GIF showing traffic lights changing through their phases on-demand or on a toggleable timer.
This isn’t particularly sophisticated. I added customisable durations for each light, but otherwise it’s pretty basic.

Half an hour later, I’d proven to myself that yes, I could throw together a fresh application with React DOM and implement some React components, pass state around and whatnot.
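(For reference, the underlying cycle logic from that question is a tiny state machine that doesn’t need any framework at all. A rough sketch in plain JavaScript – names are mine, timings as the question specifies:)

```javascript
// The interview question's traffic light cycle as a plain state machine:
// green for 120s, then red for 30s, then yellow for 10s, repeating.
const PHASES = [
  { colour: 'green',  duration: 120_000 },
  { colour: 'red',    duration:  30_000 },
  { colour: 'yellow', duration:  10_000 },
];

// Given the current phase index, return the next one (wrapping around):
const nextPhase = index => ( index + 1 ) % PHASES.length;

// Drive the cycle with setTimeout; `onChange` receives each new colour:
function startCycle(onChange, index = 0) {
  onChange(PHASES[index].colour);
  return setTimeout(() => startCycle(onChange, nextPhase(index)), PHASES[index].duration);
}
```

Everything a framework would add on top of that is just rendering and wiring.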

Time to move on to the next thing, right? That’s what a normal person would do.

But that’s not the kind of person I am.

Let’s reimplement this as Web Components

What I found myself thinking was… man, this is chunky. React is… not the right tool for this job.

(Or, increasingly, any job. But I’ll get back to that.)

A minified production build of my new component and its dependencies came in at 202kB (62.3kB compressed). That feels pretty massive for something that does so-little. So as an experiment, I re-implemented my new React component as a vanilla JS Web Component using a custom element. Identical functionality, but no third-party library dependencies. Here’s what I got:

This one’s interactive. Press a button or two!

The Web Component version of this control has no dependency chain and uses no JSX, and so it has no transpilation step: the source version is production-ready. You could minify it, but modern HTTP compression makes the impact of that negligible anyway: the whole thing weighs in at 19.5kB (5.2kB compressed) without minification.
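If you’ve not written one before, the skeleton of a custom element is pleasingly small. Here’s a hedged sketch (a hypothetical `<traffic-light>` tag, not my actual component; the fallback base class is only there so the file can also be loaded outside a browser):

```javascript
// Skeleton of a framework-free custom element. HTMLElement only exists in
// browsers, so fall back to a plain class elsewhere (a convenience for this
// sketch, not something you'd need in production):
const Base = globalThis.HTMLElement ?? class {};

class TrafficLight extends Base {
  // Attributes whose changes trigger attributeChangedCallback:
  static observedAttributes = [ 'colour' ];

  connectedCallback() {
    // Called when the element is inserted into the document:
    this.textContent = `Current light: ${this.getAttribute('colour') ?? 'green'}`;
  }

  attributeChangedCallback(name, oldValue, newValue) {
    // Re-render whenever the observed attribute actually changes:
    if( name === 'colour' && oldValue !== newValue && this.isConnected ) this.connectedCallback();
  }
}

// Register the tag name wherever the Custom Elements API exists:
globalThis.customElements?.define('traffic-light', TrafficLight);
```

In a page, you’d then just write `<traffic-light colour="red"></traffic-light>`: no build step, no dependencies.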

And while I appreciate, of course, that there’s much more to JavaScript complexity and performance than file sizes… and that there’s a lot more to making great components than the resulting bundle size… it’s hard to argue that delivering the same functionality (with less fragility) in a twelfth of the payload isn’t significant.

Composite screenshots showing the Chrome performance metrics and Network download sizes for the React and Web Components versions of my traffic lights. LCP - React 0.06s, Web Components 0.04s. INP - React 16ms, Web Components 8ms. Transferred - React 62.3kb (compressed), 202kB (uncompressed), in 37ms, Web Components 5.2kB (compressed), 19.5kB (uncompressed), in 22ms.
By any metric you like, the Web Components version outperforms the React version of my traffic light component. And while it’s a vastly-simplified example, it scales. Performance is a UX concern, and if you favour “what we’re familiar with” over “what’s best for our users”, that has to be a conscious choice.

But there’s a bigger point here:

React is the new jQuery

I’m alarmed by the fact that I’m still seeing job ads for “React developers”, with little more requirement than an ability to “implement things in React”.

From where I’m sitting, React is the new jQuery. It:

  • Was originally built to work around missing or underdeveloped JavaScript functionality
    • e.g. React’s components prior to Web Components
    • e.g. jQuery’s manipulation prior to document.querySelectorAll
  • Continued to be valuable as a polyfill and as standard middleware while that functionality became commonplace
  • No longer provides enough value to be worth using in a new project
    • And yet somehow gets added “out of habit” for many years

If you’ve got a legacy codebase with lots of React in it, you’re still going to need React for a while. Just like how you’re likely to continue to need jQuery for a while until you can tidy up all those edge-cases where you’re using it.

(You might even be locked-in to using both React and jQuery for some time, if say you’ve got a plugin architecture that demands backwards-compatibility: I’m looking at you, WordPress!)

But just as you’re already (hopefully) working to slowly extricate your codebases from any now-unnecessary jQuery dependencies they have… you should be working on an exit plan for your React code, too. It’s done its time; it’s served its purpose: now it’s just a redundant dependency making your bundles cumbersome and harder to debug.

Everything React gives you on the client-side – components, state/hooks, routing4, etc. – is possible (and easy) in modern JavaScript supported in all major browsers. And if you still really want an abstraction layer, there are plenty of options (and they’re all a lot lighter than React!).
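Even React-style reactive state takes only a few lines of plain JavaScript. As a minimal sketch (a hypothetical helper of my own, not any real library’s API):

```javascript
// A minimal "reactive state" helper in plain JavaScript — a hypothetical
// sketch, not a real library. Components subscribe and re-render on change:
function createStore(initialState) {
  let state = initialState;
  const listeners = new Set();
  return {
    get: () => state,

    // Merge a partial update into the state and notify every subscriber:
    set(patch) {
      state = { ...state, ...patch };
      listeners.forEach(listener => listener(state));
    },

    // Register a change listener; returns an unsubscribe function:
    subscribe(listener) {
      listeners.add(listener);
      return () => listeners.delete(listener);
    },
  };
}
```

Pair that with custom elements for components and the History API for routing, and there’s not much left for a framework to do.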

The bottom line is, I suppose…

You shouldn’t be hiring “React developers”!

If you’re building a brand new project, you shouldn’t be using React. It should be considered deprecated.

If you’ve got an existing product that depends on React… you should be thinking about how you’ll phase it out over time. And with that in mind, you want to be hiring versatile developers. They’ll benefit from some experience with React, sure, but unless they can also implement for the modern Web of tomorrow, they’ll just code you deeper into your dependency on React.

It’s time you started recruiting “Front-End Developers (React experience a plus)”. Show some long-term thinking! Or else the Web is going to move on without you, and in 5-10 years you’ll struggle to recruit people to maintain your crumbling stack.

You can download all my code and try it for yourself, if you like. The README has lots more information/spicy rants, and the whole thing’s under a public domain license so you can do whatever you like with it.

Footnotes

1 Exploiting or patching an injection vulnerability, optimising an SQL query, implementing a WordPress plugin, constructing a CircleCI buildchain, expanding test coverage over a Rubygem, performing an accessibility audit of a web application, extending a set of high-performance PHP-backed REST endpoints, etc. are all – I’d hope! – firmly in the “hold my beer” category of tech test skills I’d ace, for example. But no two tech stacks are exactly alike, so it’s possible that I’ll want to brush up on some of the adjacent technologies that are in the “I can do it, but I might need to hit the docs pages” category.

2 It’s actually refreshing to be learning and revising! I’ve long held that I should learn a new programming language or framework every year or two to stay fresh and to keep abreast of what’s going on in the world. I can’t keep up with every single new front-end JavaScript framework any more (and I’m not sure I’d want to!). But in the same way as being multilingual helps unlock pathways to more-creative thought and expression even if you’re only working in your native tongue, learning new programming languages gives you a more-objective appreciation of the strengths and weaknesses of what you use day-to-day. tl;dr: if you haven’t written anything in a “new” (to you) programming language for over a year, you probably should.

3 What do job titles even mean, any more? 😂 A problem I increasingly find is that I don’t know how to describe what I do, because with 25+ years of building stuff for the Web, I can use (and have used!) most of the popular stacks, and could probably learn a new one without too much difficulty. Did I mention I’m thinking about my next role? If you think we might “click”, I’d love to hear from you…

4 Though if you’re doing routing only on the client-side, I already hate you. Consider for example the SlimJS documentation which becomes completely unusable if a third-party JavaScript CDN fails: that’s pretty fragile!


Halifax Shared My Credit Agreement!

Remember that hilarious letter I got from British Gas in 2023 where they messed-up my name? This is like that… but much, much worse.

Today, Ruth and JTA received a letter. It told them about an upcoming change to the agreement of their (shared, presumably) Halifax credit card.

Except… they don’t have a shared Halifax credit card. Could it be a scam? Some sort of phishing attempt, maybe, or perhaps somebody taking out a credit card in their names?

I happened to be in earshot and asked to take a look at the letter, and was surprised to discover that all of the other details – the last four digits of the card, the credit limit, etc. – all matched my Halifax credit card.

Carefully-censored letter from Halifax, highlighting the parts that show my correct address, last four digits of my card, and my credit limit... and where it shows a pair of names that are not mine.
Halifax sent a letter to me, about my credit card… but addressed it to… two other people I live with

I spent a little over half an hour on the phone with Halifax, speaking to two different advisors, who couldn’t fathom what had happened or how. My credit card is not (and has never been) a joint credit card, and the only financial connection I have to Ruth and JTA is that I share a mortgage with them. My guess is that some person or computer at Halifax tried to join-the-dots from the mortgage outwards and re-assigned my credit card to them, instead?

Eventually I had to leave to run an errand, so I gave up on the phone call and raised a complaint with Halifax in writing. They’ve promised to respond within… eight weeks. Just brilliant.


Breakups vs Layoffs

I’ve had a few breakups, but I’ve only been made redundant once. There’s a surprising overlap between the two…

Venn-Euler diagram with circles representing Romantic Breakups and Open-Source Tech Layoffs (the former overlaid with a broken heart icon, the latter with a background reminiscent of the Automattic logo). The items unique to each are 'paired': Who keeps the dog? / Who keeps the laptop?; Half your friends take "their side" / Half your friends still work for them; Slim chance of make-up sex / Slim chance of meaningful reference; Find out in person (unless they're a monster) / Find out by email (with 10 minutes notice); Comes with a brutal list of your flaws / Comes with no useful feedback whatsoever; "I'm happier on my own anyway!" / "Oh fuck, how am I going to pay the bills?"; Bags in the hallway / Bags under your eyes. The intersection of both circles includes: Trash talk; Risk of rebound; "How did I not see this coming?"; Sadness when you wake up without them; Still feel responsible for the things you produced; Emotionally gruelling.

And with that, I’d better get back to it. Today’s mission is to finish checking-in on my list of “companies I’ve always admired and thought I should work for” and see if any of them are actively looking for somebody like me!

(Incidentally: if you’re into open source, empowering the Web, and making the world a better place, my CV is over here. I’m a senior/principal full-stack engineer with a tonne of experience in some radically diverse fields, and if you think we’d be a good match then I’d love to chat!)


Geocities Live

I used Geocities.live to transform the DanQ.me homepage into “Geocities style” and I’ve got to say… I don’t hate what it came up with

90s-style-homepage version of DanQ.me, as generated by geocities.live. It features patterned backgrounds, Comic Sans, gaudy colours, and tables.
Sure, it’s gaudy, but it’s got a few things going for it, too.

Let’s put aside for the moment that you can already send my website back into “90s mode” and dive into this take on how I could present myself in a particularly old-school way. There’s a few things I particularly love:

  • It’s actually quite lightweight: ignore all the animated GIFs (which are small anyway) and you’ll see that, compared to my current homepage, there are very few images. I’ve been thinking about going in a direction of fewer images on the homepage anyway, so it’s interesting to see how it comes together in this unusual context.
  • The page sections are solidly distinct: they’re a mishmash of different widths, some of which exhibit a horrendous lack of responsivity, but it’s pretty clear where the “recent articles” ends and the “other recent stuff” begins.
  • The post kinds are very visible: putting the “kind” of a post in its own column makes it really clear whether you’re looking at an article, note, checkin, etc., much more-so than my current blocks do.
Further down the same page, showing the gap between the articles and the other posts, with a subscribe form (complete with marquee!).
Maybe there’s something we can learn from old-style web design? No, I’m serious. Stop laughing.

90s web design was very-much characterised by:

  1. performance – nobody’s going to wait for your digital photos to download on narrowband connections, so you hide them behind descriptive links or tiny thumbnails, and
  2. pushing the boundaries – the pre-CSS era of the Web had limited tools, but creators worked hard to experiment with the creativity that was possible within those limits.

Those actually… aren’t bad values to have today. Sure, we’ve probably learned that animated backgrounds, tables for layout, and mystery meat navigation were horrible for usability and accessibility, but that doesn’t mean that there isn’t still innovation to be done. What comes next for the usable Web, I wonder?

Geocities.live interpretation of threerings.org.uk. It's got some significant design similarities.
As soon as you run a second or third website through the tool, its mechanisms for action become somewhat clear and sites start to look “samey”, which is the opposite of what made 90s Geocities great.

The only thing I can fault it on is that it assumes that I’d favour Netscape Navigator: in fact, I was a die-hard Opera-head for most of the nineties and much of the early naughties, finally switching my daily driver to Firefox in 2005.

I certainly used plenty of Netscape and IE at various points, but I wasn’t a fan of the divisions resulting from the browser wars. Back in the day, I always backed the ideals of the “Viewable With Any Browser” movement.


88x31 animated GIF button in the Web 1.0 style, reading "DAN Q". The letter Q is spinning. Best Viewed With Any Browser button, in original (90s) style.

I guess I still do.
