Hola chicas/os,
Consider this a very delayed January edition...these ideas have been floating around for a while, but it’s been hard to find the time to get them down. Lots on AI and brains (again), picking up on some previous Anticipant themes, tentatively treading some new paths for me as well.
Some things I’ve been up to since last time:
I wrote a piece for the Paradigm Junction blog on the Apple Vision Pro, suggesting ways it might interact with developments in GenAI. This feels like an under-discussed angle in coverage so far. I also did a lot of the drafting for our latest AI Digest email, which is worth subscribing to if you’re interested in AI developments and their likely impacts on organisations / the world of work (the Sora and Gemini 1.5 news came a bit late in the day for us; we’ll cover those next time).
Kinesis club night. Thanks so much to those of you who came! We’re aiming to run a second event sometime in April; keep an eye on the Kinesis IG for updates. In the meantime, we’ve uploaded recordings of the sets on our SoundCloud (crowd noises included!) so you can relive it…or catch up :) Vas’s and my closing set as VL++ is here. (This piece from my music newsletter, setting out a provisional typology of club nights, gives some sense of what we were going for — and achieved, I think! — with the night, if you’re interested.)

Assistance end-games
I went to see Stuart Russell — a kind of doyen of AI, co-author of the most popular AI textbook globally — talk at the London Institute for Safe AI at the start of January. His talk didn’t make me concerned about super-intelligent AI wiping out humanity. It did, however, make me concerned about AI-boosted human individuals or communities generally f*cking things up in pursuit of their version of the good.
Much of Russell’s talk was about AI alignment, or how to get AI systems to do what we want them to do. Narrowly conceived, this is a question of setting clear objectives that they can follow and deliver on. But the problem comes when machines realise those objectives in ways that are unexpected and potentially harmful (Bostrom’s ‘paper clip’ scenario is the archetypal thought experiment here, where a super-intelligent paper-clip-production-optimising AI ends up repurposing all of the Earth’s resources to this end). More generally, research suggests that with incompletely or incorrectly defined objectives, better AI can actually lead to worse outcomes.
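To make that last point concrete, here’s a toy numerical sketch (entirely my own invention, not something from Russell’s talk): an optimiser is given a proxy objective that rewards paperclip output alone, while the objective we actually care about also counts the resources consumed. The more capable the optimiser, the better the proxy score and the worse the true one.

```python
# A toy illustration (numbers entirely mine) of how a more capable
# optimiser can score better on a misspecified proxy objective while
# doing worse on the objective we actually care about.

def proxy_score(paperclips):
    # What the AI was told to maximise: paperclip output, nothing else.
    return paperclips

def true_score(paperclips, resources_used):
    # What we actually value: paperclips, but not at any resource cost.
    return paperclips - 3 * resources_used

def run_ai(optimisation_power):
    # In this toy world, a stronger optimiser finds plans that make more
    # paperclips, but its output scales with the resources it consumes.
    resources_used = optimisation_power
    paperclips = 2 * optimisation_power
    return paperclips, resources_used

for power in (1, 10, 100):
    clips, used = run_ai(power)
    print(f"power={power:>3}  proxy={proxy_score(clips):>4}  true={true_score(clips, used):>5}")
```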
Russell’s proposed solution to this (I’m paraphrasing slightly) is to flip into a new model where you value a system not only on the basis of its intelligence, but according to how beneficial it is. A good AI system will not simply deliver results according to pre-programmed and invariably incomplete objectives, but will take actions that help achieve our objectives in a broader sense. And broad is very much the word: Russell suggested that the goal of AI systems should be to act in the best interests of humanity.
To facilitate this, he argued that machines should be designed as solvers of ‘assistance games’ (this podcast episode is a long and detailed discussion of these). An AI system should be explicitly uncertain about what the best interests of humanity, or just of its users/designers, are, learning this over time through observation and interactions with the human(s) in question, and updating its internal model accordingly. Some character traits of such a system include deference, minimally invasive behaviours, and willingness to be switched off.
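To make the mechanism slightly more concrete, here’s a minimal sketch of that loop as I understand it (all names, numbers and thresholds below are mine, invented for illustration; this isn’t Russell’s formalism): the system holds a probability distribution over candidate human objectives, updates it as it watches the human’s choices, and defers while it remains too uncertain.

```python
import math

# A minimal sketch of the core loop of an 'assistance game': the system
# is uncertain which objective the human holds, updates its belief by
# watching their choices, and defers (does nothing) while that belief
# is still too spread out. All values here are invented for illustration.

OPTIONS = ["make_paperclips", "preserve_resources", "do_nothing"]

# Candidate human objectives: each assigns a utility to every option.
CANDIDATES = {
    "wants_paperclips":   {"make_paperclips": 1.0, "preserve_resources": 0.2, "do_nothing": 0.0},
    "wants_conservation": {"make_paperclips": 0.0, "preserve_resources": 1.0, "do_nothing": 0.3},
}

# Start maximally uncertain: a uniform prior over the candidates.
belief = {name: 1 / len(CANDIDATES) for name in CANDIDATES}

def observe(choice, rationality=1.0):
    """Bayesian update: objectives under which the human's observed
    choice was likely gain probability (noisily rational human)."""
    for name, utils in CANDIDATES.items():
        weights = {o: math.exp(rationality * u) for o, u in utils.items()}
        belief[name] *= weights[choice] / sum(weights.values())
    total = sum(belief.values())
    for name in belief:
        belief[name] /= total

def act(confidence_threshold=0.8):
    """Maximise expected utility under the current belief, deferring
    while no candidate objective is sufficiently probable."""
    if max(belief.values()) < confidence_threshold:
        return "do_nothing"
    return max(OPTIONS, key=lambda o: sum(belief[n] * CANDIDATES[n][o] for n in belief))

observe("preserve_resources")   # the human keeps choosing conservation...
print(belief, "->", act())      # still unsure: defers
observe("preserve_resources")
print(belief, "->", act())      # now confident: preserve_resources
```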
Solving assistance games sounds sensible as a means of improving alignment between systems and their individual users (or small groups of users). But what weirded me out in the talk was the notion that the best interests of humanity as a whole could be learned and optimised for in this way, too. There was a passing mention of utilitarianism, but no discussion of politics, despite this being (I think?) the closest analogue for what Russell was talking about: a human-designed system intended to facilitate beneficial outcomes for large groups of people by aggregating diverse preferences in various ways.
Maybe there’s some future in which an uber-AI is able to aggregate the preferences of every member of the human community and tweak systems on Earth to optimise for everyone. I can see many ways in which this would be good, assuming everyone’s preferences are weighed equally. I would hope for a much greater degree of equality, the excesses of the world’s richest curbed and the poverty of millions alleviated. Actions taken in self-interest by some that hinder the well-being or future survival of others would, surely, be curtailed.
But I’m already showing my own biases here. We’d first need to agree on what the good looks like. Some goods (equality, liberty, beauty, whatever) will be competing at least some of the time. Which trumps which, and under what conditions, will need to be negotiated. We currently try (and frequently fail) to do this through politics. Why would AI make it any easier?
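To see how much hangs on those negotiations, here’s a deliberately silly toy example (all numbers invented by me): three people score two policies, and under a weighted-sum aggregation the ‘best’ policy flips depending entirely on how the aggregator weighs each person.

```python
# A toy illustration of why 'aggregate everyone's preferences and
# optimise' smuggles in politics: the winning policy depends on the
# weights, and choosing the weights is itself a political act.

policies = ["curb_wealth", "laissez_faire"]

# Each person's utility for each policy (invented for illustration).
utilities = {
    "ana":  {"curb_wealth": 0.9, "laissez_faire": 0.1},
    "bo":   {"curb_wealth": 0.8, "laissez_faire": 0.2},
    "carl": {"curb_wealth": 0.0, "laissez_faire": 1.0},  # the billionaire
}

def best_policy(weights):
    """Pick the policy with the highest weighted sum of utilities."""
    totals = {p: sum(weights[person] * utilities[person][p] for person in utilities)
              for p in policies}
    return max(totals, key=totals.get)

print(best_policy({"ana": 1, "bo": 1, "carl": 1}))  # equal weights -> curb_wealth
print(best_policy({"ana": 1, "bo": 1, "carl": 5}))  # power-weighted -> laissez_faire
```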
There’s also the big question of which beings are included in scope. The humanity framing does not account for the interests of other life forms, despite increasing evidence of many animals’ intelligence and conscious experience. To a significant degree, it will be in the interests of humans to maintain the survival of other living species. But sometimes species’ interests will come into conflict. How are these different interests weighed? (I’ve just had a vision of future super-intelligences working as Earth stewards — treating humans as we do other large, wild mammals in ecosystem management now, with benign observation, interest, and culling.)
A super-intelligent AI, able to see all of the world’s complexity and anticipate the broader consequences of all its actions, might, if it were truly minimally invasive, end up not acting at all. It just wouldn’t be able to reconcile the competing preferences and value systems satisfactorily. Better to leave it all alone.
A system with this degree of sophistication is a way off yet, though. We’ve got at least some number of decades (2? 3??) in which individuals, or groups of individuals, will be able to pursue their own version of the good with not-super but still-highly-intelligent AI helping their actions have maximum impact. The issue, in this framing, is less that these systems will go off and do something rogue (although this remains a concern) than that they will faithfully learn and act according to their users’ preferences and prove very beneficial for those users, at the expense of others whose interests are not being factored in adequately. Either this becomes something of an arms race, with the best technology not available to everyone, creating opportunities for those with access to jump ahead in realising their preferred world (→ further wealth and power consolidation). Or this is a democratising technology (yay open source!) and we miraculously all get a similar degree of access at the same time, prompting an era of AI-accelerated individual/inter-group competition over resources, which sounds to me like a pretty good recipe for chaos… (You think regulation is slow and ineffectual now? Just you wait!)
Asides
It seems I’ve had the tab open since October last year(!), but on understanding human thoughts, this research from Meta is interesting. They developed a system which can reconstruct, from brain activity, the images being ‘perceived and processed’ by the brain. This is most effective with fMRI, which captures brain activity every few seconds: if you spent those few seconds looking at a plane, the system will process that brain activity and auto-generate, you guessed it, an image of a plane. But the results are also impressive using MEG, which takes thousands of readings of brain activity every second: the system can interpret these to generate a shifting image of what it thinks you’re looking at in real time, with reasonable accuracy (the videos in the post are worth a look to make sense of this). No doubt Twitter/X at the time was full of hot takes on mind-reading etc.
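For the technically curious, the broad recipe behind this kind of decoding, as I understand it, is to train a ‘brain encoder’ to map recordings into the embedding space of a pretrained image model, then hand the predicted embedding to a generative image decoder. The sketch below is my own simplification (not Meta’s code), and every module, shape and parameter in it is a placeholder:

```python
import torch
import torch.nn as nn

class BrainEncoder(nn.Module):
    """Maps a window of MEG/fMRI sensor readings to an image embedding."""
    def __init__(self, n_sensors=270, embed_dim=768):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_sensors, 128, kernel_size=9, padding=4),
            nn.GELU(),
            nn.AdaptiveAvgPool1d(1),  # pool over time
            nn.Flatten(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, brain):  # brain: (batch, sensors, timepoints)
        return self.net(brain)

def contrastive_loss(pred, target, temperature=0.1):
    """Pull each predicted embedding towards the embedding of the image
    the person was actually looking at, away from the other images."""
    pred = nn.functional.normalize(pred, dim=-1)
    target = nn.functional.normalize(target, dim=-1)
    logits = pred @ target.T / temperature
    labels = torch.arange(len(pred))
    return nn.functional.cross_entropy(logits, labels)

# Dummy training step on random data, standing in for (brain, image) pairs.
encoder = BrainEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)
brain = torch.randn(8, 270, 180)        # 8 recordings of 270 sensors
image_embeddings = torch.randn(8, 768)  # from a frozen pretrained image model
opt.zero_grad()
loss = contrastive_loss(encoder(brain), image_embeddings)
loss.backward()
opt.step()
# At inference: feed encoder(brain) to an image generator conditioned on
# embeddings, to reconstruct (an approximation of) what was being seen.
```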
There’s oodles of research atm trying to work out what GenAI can and can’t do, but little that I’ve seen on the impact of using these tools on our own brains — surprising, given ongoing concerns about the impact of smartphones and social media on brain health and development. When this is discussed, it’s usually in terms of observed behavioural changes (e.g. falling asleep at the wheel when using AI); actual monitoring of brain activity doesn’t seem to be involved. But my impression is that brain monitoring technology + neuroscience is in a place now, in a way that it wasn’t in the early days of smartphones or social media, where we could get some quite concrete findings to feed into guidance on AI use. Maybe I’m wrong, and we’re not there yet. Or it’s just too expensive, and there’s no funding. Either way, I assume it’s a gap someone is trying to fill?

Changing lane
A question I’ve been toying with a lot lately, and that I include here as an invitation for advice (and probably evidence of my overthinking):
When your focus is split across a range of different projects or types of work, how do you continue putting energy into all of them consistently?
(This is on the assumption that consistency is a key driver of improvement and ‘success.’ But perhaps it’s not — or at least not for all types of work. I’m very open to hearing that e.g. shorter, focused bursts on a rotating cycle of activities, where you might go weeks or months without doing some of the things, also produces good results.)
This isn’t just a question of time allocation and discipline, although that’s obviously important. I have two other sub-questions which feel under-discussed:
How do you harness natural fluctuations in interest and enthusiasm (some might say inspiration)? My brain, at least, seems to have natural cycles; my interests spark up at different points. This means that whether I like it or not, there are times when I’m able to work on something very efficiently, and others when it might take me twice as long (and I’ll probably end up doing it half as well). Over-prescriptive time management misses this dimension.
How can you make the code/context-switch from one domain to the other as smooth as possible? Switching between some things doesn’t feel too difficult (futures work -> tech consulting is an example for me). But in other cases, switching from one area of activity to another feels less like a lane change, and more like trying to get from the outside lane of a busy motorway to the inside in time for a fast-approaching junction, before getting out of your car and mounting a horse. It’s a change of speed and mindset as well as direction, one that requires greater concentration and more time — and more energy as a result (e.g. futures work -> music production is a struggle, in my experience).
These two sub-questions are, perhaps unsurprisingly, linked. My impression is that the context switch is easier if it’s following your current interests. Forcing yourself to do a task that’s really different from what you’ve just been doing (a big lane or even vehicle change) AND not something you really want to do right now is tough going. The problem is compounded by the fact that, often, what I want to do is pretty close to what I’ve been doing lately: the longer I stay in one lane, the more comfortable I become in that lane and the less willing I am to move (a tendency exhibited all-too-often by real-life motorway drivers) — at least until I finally tire of it and immediately lament my neglect of all the other things. Similarly, the more recently I’ve been in one lane, the easier it is to swing back into it.
The obvious solution is to do a narrower range of activities. In other words, to specialise. But while this is the direction that education and the world of work try to push most of us in, I’m not convinced it suits everyone. I’ve gradually come to accept that this isn’t how my brain works, at least, and I see a similar pattern in many of my (especially freelance) friends and family. But I’m not aware of any concrete advice on how to manage this, or any consistency of approach across the people I know: we’re all just muddling through, frequently stressed and overwhelmed but just about making it work. I suspect this is true for many, many others.
At least with respect to changing lanes, I’ve been trying to listen to the advice on changing location. Going to different spaces for different activities, prompting my brain to enter the correct mode in the process (in much the same way that people try not to work in their bedrooms, keeping them as a place for sleep). There are a load of other quite obvious re-setting activities you could engage in: going outside, having a tea, meditating, etc. And I’ve seen people recommend creating your own rituals (lighting candles, eating certain foods, changing outfit), at least before doing creative work, to signal the shift. But this all seems pretty light-touch; it feels as though we’ve got a lot to learn before we can properly help people navigate a non-specialised world.
I’m not sure what that help really looks like. In a techno-solutionist vein, I’m imagining some future productivity app that:
1. does activity-based spaced repetition (as has been found to be effective in language learning), keeping you doing the things you want to be doing with sufficient regularity that the lanes don’t become too far away (a toy sketch follows after this list)
2. determines your schedule based on your mental state and cycles of interest — which I presume are predictable, but perhaps not (obviously, patterns around the time of day that you do your best work etc. should also get baked in here)
3. analyses data from a real-time brain-reading device to determine how much time you’re spending in different activity zones, facilitating points 1 and 2 (brain reading has clearly been on my mind lately, unsure why)
4. reminds you to engage in pre-set habits or move to the appropriate location when you are anticipating a lane change…
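On point 1, here’s a toy sketch of what activity-based spaced repetition might look like (entirely my own invention, loosely modelled on the SM-2 scheduling idea used by flashcard apps): after each session you rate how rusty you felt, and the gap until the next scheduled session grows or shrinks accordingly.

```python
from datetime import date, timedelta

class Lane:
    """One lane of activity, scheduled with expanding intervals so that
    neglected lanes resurface before they drift too far away."""
    def __init__(self, name, interval_days=2, ease=2.0):
        self.name = name
        self.interval = interval_days   # days until the next session
        self.ease = ease                # how fast the interval grows
        self.next_due = date.today()

    def review(self, rustiness):
        """Update the schedule after a session.
        rustiness: 0 (felt fluent) .. 5 (felt totally rusty)."""
        if rustiness >= 4:
            self.interval = 1           # felt rusty: come back tomorrow
            self.ease = max(1.3, self.ease - 0.2)
        else:
            self.interval = round(self.interval * self.ease)
            self.ease = min(3.0, self.ease + 0.05)
        self.next_due = date.today() + timedelta(days=self.interval)

lanes = [Lane("futures work"), Lane("music production"), Lane("writing")]
lanes[1].review(rustiness=4)   # music felt rusty -> scheduled for tomorrow
lanes[2].review(rustiness=1)   # writing felt fluent -> longer gap
for lane in sorted(lanes, key=lambda l: l.next_due):
    print(lane.next_due, lane.name)
```

The appeal, to me, is that the schedule rather than willpower carries the burden of keeping the distant lanes within reach.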
Would love to hear how other people manage this!
Asides
There’s an obvious parallel between this and discussions about technology and hyper-stimulation, attention + possible increases in ADHD… I feel as though digital technology supports my non-specialisation. I can span a pretty wide variety of lanes from the comfort of my room, on my laptop. But it also seems likely that the recurrent sense of overwhelm is as much due to the vehicle(s) I’m navigating the lanes in as the lanes themselves. Both laptop and smartphone are incredibly powerful tools, and I feel lucky to have them. But my brain hasn’t evolved to spend all its time indoors with artificially illuminated screenery full of views that are designed to attract my attention. Two superpowers I haven't yet developed, but would like to (I assume we’ll all get increasing support with these, throughout our education and beyond): 1) being able to function effectively in a stimulus-flooded environment, focusing on the right things and not getting overwhelmed, and 2) being able to slow the brain and body down, at will, to rest and engage with the world in different ways (see Anticipant #3 for more on mental resilience and ways of ‘reconnecting’).
A bit of a tangent, but while unwell late last year I bought a Nintendo Switch and started gaming for the first time in AGES. This was supposed to be a relaxing activity, but I spent a lot of the time stressed. Zelda: Breath of the Wild, which I’d been particularly excited to play, put me in fight-or-flight mode constantly — precisely the opposite of what I was going for. When I tried slower, pootle-about games, they felt insufficiently compelling compared to e.g. reading a novel. Turn-based strategy games like Into the Breach were closer to my sweet spot, but still felt a bit intense and too much like hard work. Which made me wonder if there’s any research on what gaming preferences say about our characters: whether we prefer to act quickly or think first, our degree of conflict-aversion, what makes us feel stressed or relaxed... If anyone is aware of work on this, please send it my way!

Aiming to get a February update out in the next couple of weeks, with some open tabs in between. So should catch you here again very soon.
In the meantime, if you’re in London, get down to The Yard theatre for Rhianna’s exceptional dark comedy, Samuel Takes a Break …In Male Dungeon No. 5 After A Long But Generally Successful Day of Tours, on until 23rd March. Belly laughs and internal wrangling with the history of slavery guaranteed.
This club night run by the wonderful Alex/szoryn on 8th March should also be a fun experiment with the form, doing away with set times and headliners to welcome in a free-flowing chaos. Maybe see you there :)
Hope 2024 is treating you well so far!
Lewis