Status Update: November 2024



I decided a little bit ago that I’m going to try to write at least one article per month. Lo and behold, I have made my bed and now I must lie in it, as I have (as per usual) too many ideas I hold to high standards, and insufficient time to actually execute them to my liking. I actually had around 1000 words written up for another post when I realized it really deserved more effort than that (ever wonder how I end up agonizing for years to produce 100 lines of distilled code? this is how).

Anyway, as an alternative, what I decided to do is monthly updates à la Simon Ser (aka emersion)’s blog. If I have a “serious” article ready, that’s what I publish for the month (in which case, probably no monthly update). If I do not (or if I feel like it despite that), I talk about the stuff I’ve done that month¹. With that out of the way, let’s get this show on the road.


As a consequence of catching up on some Clojure/conj talks, I discovered VSAs (Vector Symbolic Architectures). The actual talk wasn’t too interesting on its own, so I’m not going to talk about it much. Instead, I’ll just give you the interesting stuff I got to after some digging into the details. The starting point for this stuff, as best as I can tell, is Smolensky’s 1990 paper on tensor products. He finds a lot of interesting things to do with algebraic fields composed of a whole lot of dimensions. Importantly, one of the operations on the field ends up being a tensor product. If you know what that means, you understand why no one bothered to implement them. However, it draws a really interesting parallel between VSAs and quantum mechanics (where tensor products are commonly used to represent all possible interactions between quantum states). Which is exactly the kind of place this stuff ends up being talked about nowadays, such as the recent-ish progress with language recognition that Kanerva participated in. There’s a lot to unpack in this field, but here are a few more papers to read.
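
To make the “why no one bothered” concrete, here’s a minimal numpy sketch (my own illustration, not anything from Smolensky’s paper) of how tensor-product binding blows up in size, and the kind of dimensionality-preserving binding modern VSAs tend to use instead:

```python
import numpy as np

# Tensor-product binding: binding two vectors is their outer product,
# so the representation grows multiplicatively with every binding.
rng = np.random.default_rng(0)
role = rng.standard_normal(1000)    # a 1,000-dimensional "role" vector
filler = rng.standard_normal(1000)  # a 1,000-dimensional "filler" vector

bound = np.outer(role, filler)      # 1,000 x 1,000 = 1,000,000 numbers
print(bound.size)                   # 1000000 -- and it compounds per binding

# Later VSAs sidestep this with dimensionality-preserving bindings,
# e.g. elementwise multiplication, which stays at 1,000 numbers:
cheap_bound = role * filler
print(cheap_bound.size)             # 1000
```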

To summarize very tersely, VSAs tend to model biological memory (so lossy and probabilistic, with high dimensionality). This is primarily interesting in connectionist AI (the 90s tech), which is also where neural networks came out of. Unlike neural networks, however, the actual processing bit can be done intelligently: the way we get to an answer using them can be explained (this update just isn’t the place for that). There’s also not much of a “training” step: in practice it mostly looks the same as running things, and adding to the “training” data can be done whenever, without necessarily starting over (contrast that with fine-tuning, which can cause models to lose accuracy). For these reasons, I found it quite interesting. Over a weekend I built a very simple prototype (it’s something like 40 LOC, so a truly rough prototype) that performs fuzzy searching over text and that I might be able to extend later. I might work on it again at some point in order to build a linearly-searchable fuzzy index of a dictionary or something like that, and then see which of my ideas for making it indexable are worthwhile. The real issue is that if I wanted to really deep-dive into this subject, I’d have to write compute shaders, and I’m not about that life.
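
For flavor, here’s a rough sketch of what such a fuzzy-search prototype can look like. To be clear, this is my own illustration and not the actual 40-LOC prototype: random bipolar hypervectors per character, character n-grams bound by permute-and-multiply, bundling by summation, and cosine similarity for lookup.

```python
import numpy as np

DIM = 10_000  # high dimensionality is what makes random vectors quasi-orthogonal
rng = np.random.default_rng(42)
_codebook = {}

def hv(symbol):
    """Random bipolar (+1/-1) hypervector for a symbol, memoized."""
    if symbol not in _codebook:
        _codebook[symbol] = rng.choice([-1, 1], size=DIM)
    return _codebook[symbol]

def encode(text, n=3):
    """Bundle all character n-grams of `text` into one hypervector.
    Each n-gram binds its characters by rotating the j-th character's
    vector j positions and multiplying elementwise, preserving order."""
    acc = np.zeros(DIM)
    for i in range(len(text) - n + 1):
        gram = np.ones(DIM)
        for j, ch in enumerate(text[i:i + n]):
            gram = gram * np.roll(hv(ch), j)
        acc += gram
    return acc

def similarity(a, b):
    """Cosine similarity: near 0 for unrelated texts, higher for overlap."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# Fuzzy search: the misspelled query still lands on the right entry,
# because most of its n-grams survive intact.
index = {w: encode(w) for w in ["hyperdimensional", "tensor", "quantum"]}
query = encode("hyperdimensionnal")  # typo on purpose
print(max(index, key=lambda w: similarity(index[w], query)))
```

Note how “adding to the training data” here is literally just encoding one more entry into the index; nothing already stored needs recomputing.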


I had my vision tested for the first time since I immigrated. It either got MUCH better (from -4.25 to -1.5 in my left eye, right eye mostly the same as before), or I was severely overprescribed back in Canada. I still haven’t had the time to actually go and order the new glasses, but hey, at least it’s something!

I’ve also played Lies of P. I am a known enjoyer of soulslikes, and this is one made by a studio other than FROMSOFT that actually delivers. It’s very similar to Bloodborne in vibes and a lot of the gameplay, but is actually preferable in some respects (Bloodborne is very glad it has a DLC in this comparison). As is tradition, I will be going for 100%, which I expect to complete in December, since I tend to play it tired and after work (which is to say, often enough to make significant progress).

I’ve been thinking about actually starting work on the alternative package/project manager for Janet. I don’t really like JPM for a variety of reasons (neither does Calvin, to my understanding), and I have some very specific ideas on how I would like it to be. Chances are I won’t be able to start on it per se, though, and will instead first have to write a few other libraries, such as a libgit2 wrapper and a libpkgconf wrapper to begin with. I think you know where this is going (2026).

Finally, some additional progress has been made on ParaDice, the tabletop roleplaying system my friend Tricky and I have been working on for a couple of years now. I’m not going to talk about it too much, since we’re approaching a point where we can actually publish the damn system (and that’s with my standards!). The recently solved problem is making the system’s mechanical model of magic correspond to how magic works in the lore. The actual way magic works hasn’t changed since the start of the project (and we remain happy with it, mostly because it’s based on EldritchPunk, a meta-setting I’ve been working on for around a decade now), and neither has the way we represent it mechanically; what we’ve finally fixed up is how we represent the progression of mages. This is going to be used in the next playtesting session (so after we finish the current ones)!

That’s it for this one, see you in the next one (even if it’s not next month)!


  1. In the converse case it’s kind of obvious: I was writing the article in question! My “serious” articles tend to go through approximately 5 rewrites and 3 rounds of editing before I actually post them. I’ve been trying to lower the standards a little bit, but there’s a limit to how far I’m willing to go (not that far, as it turns out, thus the creation of this initial status update). ↩︎