Kerry Buckley
What’s the simplest thing that could possibly go wrong?

18 March 2026

Zooming in on slow code with profiling and benchmarking tools

Filed under: Elixir, Software — Kerry Buckley @ 2:59 pm

I recently spent a bit of time trying to speed up some slow tests. There’s still a long way to go, but I managed to tweak some code in an integration test to speed it up, and better still improve the efficiency of some actual production code where the slow test was a symptom of slow code.

The integration test performs various steps, and having previously tried a few things more or less at random to make it take less time, this time I was slightly more organised, and adopted the time-honoured approach of sprinkling IO.puts("Doing thing: #{DateTime.utc_now()}") lines through the code, then staring intently at the bit that took longest to see what it was up to.

The second one was less obvious. Once I’d realised that the bottleneck was the function under test, rather than something inefficient going on in the test itself, it felt like I needed to be a bit more scientific and get some actual stats on where the time was being spent – which brings me, finally, to the point of this post.

I knew I’d used tools to do this in the past, but for the life of me couldn’t remember the name of the generic type of thing I was looking for (yes, I’m getting old). I went round the houses a bit, but for the benefit of the reader – which may just be future me – the word you want to google, along with your language of choice, is profiler. I’m writing in Elixir, and opted to use ExProf, which is a thin wrapper round Erlang’s built-in eprof.
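(In case you want to follow along: ExProf is a Hex package, so assuming a standard Mix project it just needs adding to your dependencies. The version constraint below is illustrative – check hex.pm for the current release.)

```elixir
# In mix.exs – a minimal sketch, assuming a standard Mix project.
# The version constraint is illustrative; check hex.pm for the latest release.
defp deps do
  [
    {:exprof, "~> 0.2", only: [:dev, :test]}
  ]
end
```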

The offending function was Premonition.ManageTasks.dashboard/1, so with some representative data in the database I ran this function inside ExProf, which produced the following output:

iex(1)> import ExProf.Macro
ExProf.Macro
iex(2)> profile do: Premonition.ManageTasks.dashboard("MCC PCC-SM")
FUNCTION                                                  CALLS        %     TIME  [uS / CALLS]
--------                                                  -----  -------     ----  [----------]
maps:merge_with_1/3                                           2     0.00        0  [      0.00]
maps:to_list_internal/1                                       2     0.00        0  [      0.00]
maps:iterator/1                                               2     0.00        0  [      0.00]

[ ... lots more uninteresting lines ... ]

'Elixir.Premonition.Tasks.Config.TaskDef':get_task/1       9475     0.03      764  [      0.08]
'Elixir.Premonition.Tasks.Config':tasks/0                     1     0.03      838  [    838.00]
lists:keyfind/3                                            3004     0.04      998  [      0.33]
'Elixir.Access':get/3                                      9578     0.05     1168  [      0.12]
'Elixir.Keyword':get/3                                     2738     0.08     2154  [      0.79]
erlang:module_loaded/1                                     1087     0.09     2208  [      2.03]
'Elixir.Access':get/2                                      9578     0.10     2459  [      0.26]
lists:member/2                                             4239     0.25     6299  [      1.49]
re:import/1                                               66340     0.91    23117  [      0.35]
'Elixir.Premonition.Tasks.Config':task/1                   9475    97.95  2490083  [    262.81]
--------------------------------------------------------  -----  -------  -------  [----------]
Total:                                                   212736  100.00%  2542126  [     11.95]

The profiler lists all the functions that end up getting called, both in our application code and in the standard libraries (the table is generated by the underlying Erlang tool, so if you’re only used to Elixir syntax the format will look a bit odd). Each line shows the number of times the function was called, the total time (in microseconds) spent in that function, and what percentage of the total time that represents. Clearly Premonition.Tasks.Config.task/1 looks suspicious, being called 9,475 times and accounting for nearly 98% of the total time.

Here’s the code:

  def dashboard(node_type) do
    disabled_tasks = Repo.all(DisabledTask)
    threshold_overrides = threshold_overrides()
    
    tasks =
      Config.tasks()
      |> Map.keys()
      |> Enum.filter(&(TaskDef.node_type(&1) == node_type))
      |> Enum.sort_by(&Tasks.reference_for_sorting/1)
      |> Enum.map(&TaskInfo.new(&1, threshold_overrides, disabled_tasks))
      
    nodes =
      from(n in Node, where: n.type == ^node_type, order_by: n.name)
      |> Repo.all()
      |> Enum.map(&node_with_tasks(&1, tasks))
      
    %{nodes: nodes, tasks: tasks}
  end

At the top of the pipeline, Config.tasks() returns a compile-time map of around 2,800 task identifiers to task configuration structs. We take the keys from that map, and filter for the ones with the correct node type before sorting them and building the structs we need. Without going into all the details, it turns out that TaskDef.node_type/1 looks up the task in the same map we started with, using the Premonition.Tasks.Config.task/1 function that the profiler highlighted. So we’re taking every key in the map, and looking it up to get its value – which we already had, until we discarded the values with Map.keys/1. What if we filtered on the map itself, making a single pass through it instead of one lookup per identifier? Then we could just take the keys of the small subset we’re interested in.
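To make the difference concrete, here’s a toy sketch with a made-up three-entry map standing in for the real 2,800-entry one (the task structs are simplified to bare maps):

```elixir
# Hypothetical stand-in for Config.tasks/0 – the real map has ~2,800 entries.
tasks = %{
  "task_a" => %{node_type: "MCC PCC-SM"},
  "task_b" => %{node_type: "Other"},
  "task_c" => %{node_type: "MCC PCC-SM"}
}

# Before: discard the values, then look each one back up (one lookup per key).
matching_ids =
  tasks
  |> Map.keys()
  |> Enum.filter(&(tasks[&1].node_type == "MCC PCC-SM"))

# After: a single pass over the map, then take the keys of the subset we want.
matching_ids =
  tasks
  |> Map.filter(fn {_id, task} -> task.node_type == "MCC PCC-SM" end)
  |> Map.keys()
```

Both pipelines return the same identifiers; the second just never repeats a lookup (rebinding matching_ids like this is idiomatic Elixir, shown here only to contrast the two versions).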

Here’s a slightly-modified version of the function that still passes the tests:

  def dashboard(node_type) do
    disabled_tasks = Repo.all(DisabledTask)
    threshold_overrides = threshold_overrides()
    
    tasks =
      Config.tasks()
      |> Map.filter(fn {_id, task} -> task.node_type == node_type end)
      |> Map.keys()
      |> Enum.sort_by(&Tasks.reference_for_sorting/1)
      |> Enum.map(&TaskInfo.new(&1, threshold_overrides, disabled_tasks))
      
    nodes =
      from(n in Node, where: n.type == ^node_type, order_by: n.name)
      |> Repo.all()
      |> Enum.map(&node_with_tasks(&1, tasks))
      
    %{nodes: nodes, tasks: tasks}
  end

There are no doubt more efficient approaches than creating a new map just to discard its values, but we’ll just make a note of that to return to later – it’s safer to only measure the effect of one change at a time.

Let’s run the profiler again (I’ve hidden everything apart from that one function call at the bottom of the table):

FUNCTION                                    CALLS        %     TIME  [uS / CALLS]
--------                                    -----  -------     ----  [----------]
...
'Elixir.Premonition.Tasks.Config':task/1     6632    96.80  1742984  [    262.81]
------------------------------------------  -----  -------  -------  [----------]
Total:                                     181503  100.00%  1800633  [      9.92]

Well, it’s better (about 1.7 seconds rather than 2.5), but it’s still not great, and Premonition.Tasks.Config.task/1 still accounts for the vast majority of that time. Let’s dig a bit deeper – here’s the source again, this time also showing the private node_with_tasks/2 function that we call from the nodes pipeline:

  def dashboard(node_type) do
    disabled_tasks = Repo.all(DisabledTask)
    threshold_overrides = threshold_overrides()
    
    tasks =
      Config.tasks()
      |> Map.filter(fn {_id, task} -> task.node_type == node_type end)
      |> Map.keys()
      |> Enum.sort_by(&Tasks.reference_for_sorting/1)
      |> Enum.map(&TaskInfo.new(&1, threshold_overrides, disabled_tasks))
      
    nodes =
      from(n in Node, where: n.type == ^node_type, order_by: n.name)
      |> Repo.all()
      |> Enum.map(&node_with_tasks(&1, tasks))
      
    %{nodes: nodes, tasks: tasks}
  end
  
  defp node_with_tasks(node, tasks) do
    tasks_for_node =
      tasks
      |> Enum.filter(&(node.subtype in TaskDef.node_subtypes(&1.task) and TaskDef.run_on_node?(&1.task, node)))
      |> Enum.sort_by(&Tasks.reference_for_sorting(&1.task))
      
    %{node | tasks: tasks_for_node}
  end

We’re calling another TaskDef function inside that filter, and node_subtypes/1 behaves very much like node_type/1, just looking at a different field in the struct. Once again, we’re looking up each task identifier in the full map (and repeating that lookup for each node), after throwing away the task struct earlier with Map.keys/1.

Let’s split up that first pipeline and hold onto the filtered tasks map. After a bit of fiddling around until the tests passed again, here’s another version. We’re now using a list of {task, task_info} tuples, so we can filter based on the task, then return the TaskInfo structs. We could have added more fields to the struct instead, but that didn’t quite feel right: the struct is used externally and passed between processes (which in Elixir don’t share data), so we don’t want to increase the amount of data that has to be copied for the sake of something that’s only used internally here.

The code’s starting to get pretty messy, but let’s stick to one thing at a time – I know the mantra is “make it work; make it right; make it fast”, but we’re on the “make it fast” bit at the moment, and we can go back and “make it right” once we’re finished with the performance improvements.

  def dashboard(node_type) do
    disabled_tasks = Repo.all(DisabledTask)
    threshold_overrides = threshold_overrides()
    
    tasks_with_info =
      Config.tasks()
      |> Map.filter(fn {_id, task} -> task.node_type == node_type end)
      |> Enum.map(fn {id, task} -> {task, TaskInfo.new(id, threshold_overrides, disabled_tasks)} end)
      |> Enum.sort_by(fn {task, _task_info} -> Tasks.reference_for_sorting(task) end)
      
    nodes =
      from(n in Node, where: n.type == ^node_type, order_by: n.name)
      |> Repo.all()
      |> Enum.map(&node_with_tasks(&1, tasks_with_info))
      
    %{nodes: nodes, tasks: Enum.map(tasks_with_info, &elem(&1, 1))}
  end
  
  defp node_with_tasks(node, tasks_with_info) do
    tasks_for_node =
      tasks_with_info
      |> Enum.filter(fn {task, _task_info} ->
        node.subtype in task.node_subtypes and TaskDef.run_on_node?(task.identifier, node)
      end)
      |> Enum.map(&elem(&1, 1))
      |> Enum.sort_by(&Tasks.reference_for_sorting(&1.task))
      
    %{node | tasks: tasks_for_node}
  end

Time for another profiler run:

FUNCTION                                    CALLS        %    TIME  [uS / CALLS]
--------                                    -----  -------    ----  [----------]
...
'Elixir.Premonition.Tasks.Config':task/1     2312    95.15  644231  [    278.65]
------------------------------------------  -----  -------  ------  [----------]
Total:                                     132140  100.00%  677048  [      5.12]

We’re down to well under a second now, but most of the time is still spent in calls to that one function. It’s important to consciously think about when to stop, but it feels like there’s still a decent improvement to be made (and in production there’ll be more nodes, so it’ll take longer than it does with the dummy data we’ve been using for these runs).

There’s only one place left where we call a TaskDef function, in the filter in node_with_tasks/2:

  defp node_with_tasks(node, tasks_with_info) do
    tasks_for_node =
      tasks_with_info
      |> Enum.filter(fn {task, _task_info} ->
        node.subtype in task.node_subtypes and TaskDef.run_on_node?(task.identifier, node)
      end)
      |> Enum.map(&elem(&1, 1))
      |> Enum.sort_by(&Tasks.reference_for_sorting(&1.task))
      
    %{node | tasks: tasks_for_node}
  end

Let’s look at TaskDef.run_on_node?/2:

  def run_on_node?(task_id, node) do
    module = module(task_id)
    
    if function_exported?(module, :run_on_node?, 1) do
      module.run_on_node?(node)
    else
      true
    end
  end

It’s a bit more complicated than the other ones we effectively inlined, because it checks whether the implementing module for a task has a run_on_node?/1 function, and calls it if so, otherwise defaulting to true. We don’t really want to replicate this logic, so instead we can allow the function to operate on either a task identifier or a TaskDef struct. Let’s add a test:

  describe "Premonition.Tasks.Config.TaskDef.run_on_node?/2" do
    ...
    test "supports TaskDef structs as well as identifiers" do
      assert TaskDef.run_on_node?(Config.task("SignallingMaintenanceServer.NumberOfEPAPBackups"), %Node{id: 1}) ==
               false
    end
    ...
  end

And make it pass:

  def run_on_node?(%TaskDef{} = task, node), do: do_run_on_node?(task.module, node)
  def run_on_node?(task_id, node), do: do_run_on_node?(module(task_id), node)
  
  defp do_run_on_node?(module, node) do
    if function_exported?(module, :run_on_node?, 1) do
      module.run_on_node?(node)
    else
      true
    end
  end

Now we can pass in the struct from node_with_tasks/2:

  def dashboard(node_type) do
    disabled_tasks = Repo.all(DisabledTask)
    threshold_overrides = threshold_overrides()
    
    tasks_with_info =
      Config.tasks()
      |> Map.filter(fn {_id, task} -> task.node_type == node_type end)
      |> Enum.map(fn {id, task} -> {task, TaskInfo.new(id, threshold_overrides, disabled_tasks)} end)
      |> Enum.sort_by(fn {task, _task_info} -> Tasks.reference_for_sorting(task) end)
      
    nodes =
      from(n in Node, where: n.type == ^node_type, order_by: n.name)
      |> Repo.all()
      |> Enum.map(&node_with_tasks(&1, tasks_with_info))
      
    %{nodes: nodes, tasks: Enum.map(tasks_with_info, &elem(&1, 1))}
  end
  
  defp node_with_tasks(node, tasks_with_info) do
    tasks_for_node =
      tasks_with_info
      |> Enum.filter(fn {task, _task_info} ->
        node.subtype in task.node_subtypes and TaskDef.run_on_node?(task, node)
      end)
      |> Enum.map(&elem(&1, 1))
      |> Enum.sort_by(&Tasks.reference_for_sorting(&1.task))
      
    %{node | tasks: tasks_for_node}
  end

Let’s check we’re still heading in the right direction:

FUNCTION                                    CALLS        %    TIME  [uS / CALLS]
--------                                    -----  -------    ----  [----------]
...
'Elixir.Premonition.Tasks.Config':task/1     1231    91.03  339719  [    275.97]
------------------------------------------  -----  -------  ------  [----------]
Total:                                     116725  100.00%  373176  [      3.20]

Looks good – we’ve halved the number of calls to Config.task/1 again, and we’re now down to a third of a second. Just one more step to go! At least I think so – I’m genuinely writing this up as I work through the process.

We still call Tasks.reference_for_sorting/1 twice, and this seems like another likely candidate for eliminating unnecessary lookups. Looking more closely, though, the sort in dashboard/1 passes in a struct while the one in node_with_tasks/2 passes a task ID. The function clearly supports a struct argument already, so we could just pass in a struct both times to avoid the lookups. But hang on a minute – why are we repeating the sort in node_with_tasks/2 at all? We already sorted the structs before passing them into this function. Sure enough, we can remove that line altogether without causing any tests to fail:

  defp node_with_tasks(node, tasks_with_info) do
    tasks_for_node =
      tasks_with_info
      |> Enum.filter(fn {task, _task_info} ->
        node.subtype in task.node_subtypes and TaskDef.run_on_node?(task, node)
      end)
      |> Enum.map(&elem(&1, 1))
      
    %{node | tasks: tasks_for_node}
  end

One last run of the profiler:

FUNCTION                                   CALLS        %   TIME  [uS / CALLS]
--------                                   -----  -------   ----  [----------]
...
'Elixir.Premonition.Tasks.Config':task/1     144    63.41  39575  [    274.83]
-----------------------------------------  -----  -------  -----  [----------]
Total:                                     60015  100.00%  62413  [      1.04]

Great! We’ve gone from a function call taking a glacial 2.5s to a much more reasonable 62ms (a 40× improvement). In the words of James Cromwell, “That’ll do, Pig.”

Before we go, though, remember way back near the beginning when I said “there are no doubt more efficient approaches than creating a new map just to discard its values, but we’ll just make a note of that to return to later”? Let’s take a quick look at that now.

When it comes to choosing between algorithms based on performance, the profiler we’ve been using up to now isn’t quite the right tool for the job. Instead, we’re going to reach for a benchmarker – in this case Benchee. This tool runs multiple potential implementations many times each, to give a clear picture of which is faster.

We could just create a duplicate module, tweak the code in one of them and compare performance of the whole function, but let’s isolate the specific lines we want to experiment with, so we’re only looking at the time spent on that one operation.

The code’s changed a bit since we noticed the potentially inefficient creation of a new map, but in the current version it’s the Map.filter and Enum.map steps at the top of the tasks_with_info pipeline:

  def dashboard(node_type) do
    disabled_tasks = Repo.all(DisabledTask)
    threshold_overrides = threshold_overrides()
    
    tasks_with_info =
      Config.tasks()
      |> Map.filter(fn {_id, task} -> task.node_type == node_type end)
      |> Enum.map(fn {id, task} -> {task, TaskInfo.new(id, threshold_overrides, disabled_tasks)} end)
      |> Enum.sort_by(fn {task, _task_info} -> Tasks.reference_for_sorting(task) end)
      
    nodes =
      from(n in Node, where: n.type == ^node_type, order_by: n.name)
      |> Repo.all()
      |> Enum.map(&node_with_tasks(&1, tasks_with_info))
      
    %{nodes: nodes, tasks: Enum.map(tasks_with_info, &elem(&1, 1))}
  end

Another version, which doesn’t create an intermediate map, might look like this:

  def dashboard(node_type) do
    disabled_tasks = Repo.all(DisabledTask)
    threshold_overrides = threshold_overrides()
    
    tasks_with_info =
      Config.tasks()
      |> Enum.flat_map(fn
        {id, task} when task.node_type == node_type -> [{task, TaskInfo.new(id, threshold_overrides, disabled_tasks)}]
        _ -> []
      end)
      |> Enum.sort_by(fn {task, _task_info} -> Tasks.reference_for_sorting(task) end)
      
    nodes =
      from(n in Node, where: n.type == ^node_type, order_by: n.name)
      |> Repo.all()
      |> Enum.map(&node_with_tasks(&1, tasks_with_info))
      
    %{nodes: nodes, tasks: Enum.map(tasks_with_info, &elem(&1, 1))}
  end

Having confirmed that this version passes all the tests, let’s compare its performance with the original. We can copy the relevant bits of code into a script, hard-coding some values, and save it as benchmark.exs:

defmodule Benchmark do
  @moduledoc false
  alias Premonition.ManageTasks.TaskInfo
  
  def filter_and_map(tasks) do
    tasks
    |> Map.filter(fn {_id, task} -> task.node_type == "MCC PCC-SM" end)
    |> Enum.map(fn {id, task} -> {task, TaskInfo.new(id, [], [])} end)
  end
  
  def flat_map(tasks) do
    Enum.flat_map(tasks, fn
      {id, task} when task.node_type == "MCC PCC-SM" -> [{task, TaskInfo.new(id, [], [])}]
      _ -> []
    end)
  end
end

tasks = Premonition.Tasks.Config.tasks()
 
Benchee.run(%{
  "Filter and map" => fn -> Benchmark.filter_and_map(tasks) end,
  "Flat map" => fn -> Benchmark.flat_map(tasks) end
})

The important bit is at the bottom, where we tell Benchee to measure the two functions, which we’ve given friendly labels for the report. By default it will repeatedly call each function for two seconds to make sure any caches are warmed up, then keep going for another five seconds, counting how many times the call completes (there are loads of configuration options that we’re ignoring here).
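(If you do want to change those defaults, they’re passed as options to Benchee.run/2. The values here are purely for illustration:)

```elixir
# Illustrative only: shorter warmup and measurement times, plus memory stats.
Benchee.run(
  %{
    "Filter and map" => fn -> Benchmark.filter_and_map(tasks) end,
    "Flat map" => fn -> Benchmark.flat_map(tasks) end
  },
  warmup: 1,
  time: 3,
  memory_time: 1
)
```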

Running the script with iex -S mix run benchmark.exs produces the following output:

Operating System: macOS
CPU Information: Apple M1 Pro
Number of Available Cores: 10
Available memory: 16 GB
Elixir 1.19.5
Erlang 28.3.3
JIT enabled: true

Benchmark suite executing with the following configuration:
warmup: 2 s
time: 5 s
memory time: 0 ns
reduction time: 0 ns
parallel: 1
inputs: none specified
Estimated total run time: 14 s
Excluding outliers: false

Benchmarking Filter and map ...
Benchmarking Flat map ...
Calculating statistics...
Formatting results...

Name                     ips        average  deviation         median         99th %
Filter and map         25.45       39.29 ms     ±2.15%       39.15 ms       41.46 ms
Flat map               24.20       41.32 ms     ±4.22%       40.83 ms       51.64 ms

Comparison:
Filter and map         25.45
Flat map               24.20 - 1.05x slower +2.04 ms

So it turns out the flat_map version is actually marginally slower! We’ll stick to the code we already had, and remember the importance of actually measuring the effect when we get tempted to write “clever” code that we think ought to be faster.

And now it’s time to stop focussing on performance, decide whether our modifications need any refactoring for readability, get the changes deployed so the users can see the page load faster, and move on to the next feature!

Epilogue

A few hours later, and the new code is now live. I visited the Manage Tasks page a few times before the change, and the Chrome developer tools were showing the Largest Contentful Paint as around 25–30 seconds (which it not unreasonably described as “bad”). Now it’s down to just over a second, which qualifies as “good”.

15 March 2026

Weeknotes 2026-11

Filed under: Weeknotes — Kerry Buckley @ 7:15 pm

Back to the Spread Eagle on Wednesday for the quiz. We came joint third out of seven or eight teams, so very much mid-table, but it was a fun night out anyway.

I spent some time hacking my way through the vegetation that had overgrown the passage round the side of my house, to the extent that it’s now possible to walk round that way. This was mainly to see whether I could find any evidence of where the water pipe to the garage goes, but no luck on that front. Looks like I’ll have to get a professional in to sniff out the leak.

To Stowupland on Sunday for the Stowmarket half marathon. I think I’ve finally come out the other side of the cold that I’d been suffering with for a few weeks, but still wasn’t sure how well I was likely to go. The weather would have been perfect if it hadn’t been for the force four wind. We knew it was going to be behind us for most of the first half, then in our faces on the way back, which is also the hilliest bit, and sure enough it was very much a game of two halves. I set off with Holly and Maria, at a quicker pace than I’d intended, and spent the first ten miles or so wondering how long I was going to be able to keep up. Then somehow I managed to get a second wind at around mile ten, and it was Holly that dropped back first, then Maria (so much for youth and carbon shoes!). I ended up with a slightly quicker time than in 2024, which I see I described at the time as “OK but not great”. I’m two years older now though, and may have lowered my expectations a bit!

Enjoying the early tail wind

As is traditional, some of us stopped at the Willow Tree for lunch on the way home. It was my turn to drive, so I got to sample Guinness 0.0 for the first time. It does a pretty passable impression of normal Guinness, although I wouldn’t say that’s a massively high bar.

Once again it feels like more stuff must have happened, but that’s all I can think of.

12 March 2026

Weeknotes 2026-10

Filed under: Uncategorized,Weeknotes — Kerry Buckley @ 8:23 pm

Oops, somewhat late again this week. Still suffering with a cold/cough (not that that’s related to the tardiness). Switched themes on the blog too, because I started writing a more technical post and code examples didn’t really work with the narrow column.

I went to the doctor’s on Friday for a follow-up on high cholesterol numbers from my recent blood test. Looks like I’ll probably be prescribed statins, but I’ve got to have a fasting blood test first (Friday week) to check the numbers. I keep thinking of the old Not The Nine O’Clock News sketch.

I thought I’d be cutting it a bit fine to get back from the 8.20 doctor’s appointment in time to get Casper and Nobby to the vet for 9.30, but as it turns out I was back home by 8.35, so no rushing required. Routine vaccinations for Casper, but confirmation that Nobby (who’s nearly 18) can’t see, which turns out to be due to detached retinas from high blood pressure (there seems to be a theme developing). He’s now got some tablets to bring the blood pressure down, but that probably won’t fix his eyesight.

Not a lot else to report as far as I can remember!

1 March 2026

Weeknotes 2026-09

Filed under: Weeknotes — Kerry Buckley @ 6:59 pm

I started the week with a missing cat, but all is now well. Nobby, who’s nearly 18, has been increasingly showing his age, and losing his sight to the point that I think he’s now more or less blind. I’d left the cat flap open though, as he only ever briefly popped out to the garden, and he seemed to be able to find his way round by touch and smell. On Saturday night he’d been helpfully trying to sleep on my pillow, then in the early hours of Sunday I was woken by a noise from outside, and in my half-awake state couldn’t work out whether it was him or a fox. By the time I’d thrown on some clothes and gone out to look, the noise had stopped and there was no sign of him, but he didn’t appear to be in the house either.

I had a brief look round the neighbourhood before heading off to Tarpley on Sunday morning, but when he still hadn’t come back by the time I got home I started to worry, put up some posters and reported him missing in a couple of local Facebook groups. A few people rang or messaged with potential sightings, but I never managed to spot him when I went to look in the roads they mentioned. Then, just as I was about to head out on Tuesday evening, I got a notification that I’d been tagged in a post: a couple one road along had found him in their garden and asked whether anyone recognised him, and someone who’d seen my missing-cat post connected the two. I rushed round with a cat carrier to collect him, and found him in their kitchen, where they’d kindly given him some water and tuna. He doesn’t seem to have any lasting ill effects from his adventures, but that’s the last time he’s allowed outside on his own!

Nobby safely home

I took Thursday and Friday off work, so naturally the cold I’ve been fighting off for a few days finally won the battle. Apart from a bit of a runny nose it’s mostly jumped straight to the cough stage, which is playing havoc with trying to sleep.

On Thursday I had a follow-up nurse practitioner appointment at the doctor’s after my various elevated blood pressure readings. They took some blood samples, which I haven’t had results back from yet, and did an ECG. Impressively, the latter was sent straight to a GP to have a look at, and after about five minutes of waiting (which was much longer than I’d had to wait when I first arrived) I was sent along the corridor for a quick chat with the GP. Apparently it was mostly fine, but with one abnormal bit that might indicate some minor effects from high blood pressure. The current advice seems to be to try to improve my diet a bit and hope that makes it drop down again.

Far too much running this week, with the Tarpley 20 still in my legs (and a cold), but unfortunately the marathon won’t train for itself. Normal club training on Tuesday, a track session on Wednesday (which I took very easy) and the standard Thursday Tempo Ten which I ended up doing solo. Then a slow parkrun and an even slower 17 with the usual suspects on Sunday. And of those five runs, only three ended up with beers in the Cricketers!

Track
Long run, after crossing the Orwell Bridge

22 February 2026

Weeknotes 2026-08

Filed under: Weeknotes — Kerry Buckley @ 8:10 pm

I may have finally got to the bottom of some slow-running code at work. After trying a variety of things to speed it up, I decided to do what I should have done in the first place, and pasted a version of the troublesome module into a script, with all the functions made public so I could run it against the production cluster a bit at a time to narrow down exactly where the bottleneck was. Without a huge amount of work I pinpointed it to one function, then noticed that instead of pulling out just the failed results from a big list and creating a struct from each, it was creating structs for everything, then filtering the list of structs. Switching two lines of code into the order they should have been in the first place sped it up by a factor of about 30.

Fat Cat meet-up again on Wednesday, with Anders, Tony, Mel, Rupert, Joe and Dave.

My smart water meter was activated this week, and immediately triggered a warning that I seem to have a leak, so that’s another annoying thing to get fixed. There’s nothing obvious going on in the house, so my money’s on the pipe that goes under the garden (somewhere!) to the garage. Sigh.

The Tarpley 20 was on Sunday, and I managed a personal worst, coming in three or four minutes slower than last year, which itself was six or seven minutes slower than 2024. Ah well, still seven weeks until the marathon!

15 February 2026

Weeknotes 2026-07

Filed under: Weeknotes — Kerry Buckley @ 9:05 pm

I submitted the week’s worth of home blood pressure readings that they asked for after my health check, and once again they were hovering around a slightly elevated level. When I submitted the results, the form had one of those “I am not a robot” checkboxes – I suppose that’s important context. Anyway, I’ve now got another appointment in a couple of weeks for a blood test and ECG. I also finally got round to having an eye test on Friday (probably a couple of years late). No major issues, but they did spot a couple of tiny spots on the back of my eyes that (if I hadn’t already told them about the doctor’s appointment) would apparently have led them to recommend a blood pressure check.

Despite a far-from-hectic social life, I thought I’d ended up with three things I should have been at on Wednesday evening. It turned out though that the pub quiz I’d put in my diary was actually on the 11th of March (curse you February and your multiple-of-seven number of days!), and the Fat Cat meet up got punted to next week, so I ended up at the least interesting of the three: the running club AGM.

On Friday I got talked into joining various other friends at the Harpers’ to watch Ipswich play Wrexham in the FA Cup. Not sure the match was much more entertaining for the rest of them, who actually like football, and the general opinion seemed to be that Town hadn’t tried that hard, and were secretly quite keen to get knocked out so they could concentrate on the league. Did you see that ludicrous display? etc.

Six days in a row of running, for a total of 54 miles. I knew I’d be too tired for a long run with Holly and Maria on Saturday (glorious sunshine), so just did parkrun and ran with the boys on Sunday (sleet) instead. An easy week coming up though, at least until the Tarpley 20 next Sunday.

Apparently Gibraltar now has its own parkrun, so when I finally get round to visiting my sisters in Spain I won’t have to miss my Saturday morning ritual!

8 February 2026

Weeknotes 2026-06

Filed under: Weeknotes — Kerry Buckley @ 8:55 pm

After filling my brown bins early before Christmas, then missing the day-early collection and realising I then had another month to wait for the next one, this week I was back to frantically shoving in more bits of dead apple tree detritus in a last-minute rush before leaving for work on Wednesday. But then I did remember to fill them again at the weekend, so one-all to the demons of procrastination.

The cat that lives on site at Adastral Park (who is rumoured to be called Scoop) obviously found the damp weather as tiresome as everyone else, and had sneaked into the office on Wednesday to sleep in the foyer. It’s not the first time he’s come in – someone had even provided a cat bed for him the other day – but it’s the first time I’ve seen him in there.

Office cat

I’d decided I should probably just get a new rear wheel for my bike, as after over 13,000 miles the cones are worn, it still wobbles slightly after my attempts to clean and adjust the (non-sealed) bearings, and now a spoke seems to have broken too. Then once I started looking I realised that a single-speed wheel with a disc brake fitting is what you send someone out to get once they’ve come back with the hen’s teeth you asked them for. I can see why in retrospect, and it explains why my bike has an eccentric bottom bracket to adjust the chain tension rather than horizontal dropouts. It turns out that it is possible to get an appropriate hub though, which I can see leading to the purchase of more workshop equipment and me attempting to get into wheel building, which may or may not end well.

Another 40 miles of running, with far too high a proportion of relatively hard efforts: a club session on Tuesday, the Thursday Tempo Ten, 13 miles on Saturday including a much quicker parkrun than I’d been intending, then the Pakenham cross country on Sunday, which was nowhere near as much of a quagmire as I’d been expecting. Must try to fit in some easy miles between the efforts next week!

4 February 2026

Save/write all before test in Neotest

Filed under: Software,vim — Kerry Buckley @ 8:01 pm

Neotest is an excellent plugin for working with tests in a variety of languages in Neovim, but it was driving me mad having to remember to run :wa before running any tests. Why would anyone want to edit a file then run tests against the last saved version rather than the one in the editor?

Anyway, after a bit of familiarisation with how the Lua config for lazy.nvim works, it turned out to be pretty easy to override the default keybindings that run tests (copied from the full spec in the LazyVim docs for Neotest) to save everything first. I ended up with this in ~/.config/nvim/lua/plugins/neotest.lua:

return {
  "nvim-neotest/neotest",
  keys = {
    {
      "<leader>tt",
      function()
        vim.cmd("wa")
        require("neotest").run.run(vim.fn.expand("%"))
      end,
      desc = "Run File (Neotest)",
    },
    {
      "<leader>tT",
      function()
        vim.cmd("wa")
        require("neotest").run.run(vim.uv.cwd())
      end,
      desc = "Run All Test Files (Neotest)",
    },
    {
      "<leader>tr",
      function()
        vim.cmd("wa")
        require("neotest").run.run()
      end,
      desc = "Run Nearest (Neotest)",
    },
    {
      "<leader>tl",
      function()
        vim.cmd("wa")
        require("neotest").run.run_last()
      end,
      desc = "Run Last (Neotest)",
    },
  },
}
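Since every binding starts with the same vim.cmd("wa") call, the repetition could be factored out with a small wrapper function. This is just a sketch of that alternative (the saving_first name is my own invention, not anything from Neotest or LazyVim):

```lua
-- Wrap any test-running function so that all modified buffers
-- are written before it runs.
local function saving_first(run)
  return function()
    vim.cmd("wa")
    run()
  end
end

return {
  "nvim-neotest/neotest",
  keys = {
    {
      "<leader>tr",
      saving_first(function()
        require("neotest").run.run()
      end),
      desc = "Run Nearest (Neotest)",
    },
    -- ...the other bindings wrapped the same way
  },
}
```

Whether that’s clearer than just repeating the one-liner four times is debatable; with only four bindings the explicit version above is arguably easier to scan.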

1 February 2026

Weeknotes 2026-05

Filed under: Weeknotes — Kerry Buckley @ 7:58 pm

February already! I just about managed to eke out my Christmas cake for all of January, and ate the last piece today.

I had the bright idea of routing all the read-only database queries in my work app to the replica database, in the hope that it would improve UI performance when there were a lot of background writes going on. Unfortunately, despite it looking promising locally and in the test environment, once I deployed it to the live instance a not-insignificant number of queries were cancelled by Postgres because of potential conflicts with new data received from the primary. This is the kind of thing that turns out to be very easy to learn about once you have the error message to search for, but is less obvious beforehand. In the end I backed the whole thing out – it’s nice to be able to do that after a couple of days’ work, rather than in the old days when it would have gone into a massive release several months later and been an absolute nightmare to unpick.

I finally went for my first over-40 health check on Friday, a mere 16 years after I became eligible for one. I expected a repeat of the message from my flu jab appointment that my blood pressure was a bit high, but this time (measured the old-fashioned way with a manually-operated cuff, analogue manometer and stethoscope) it was merely borderline (or, as the nurse amusingly called it, “on the cuff”). They’ve asked me to take two readings a day for a week and submit them, to decide whether it’s worth worrying about.

The road racing season kicked off on Sunday with the Great Bentley Half. I didn’t have Holly and Maria to keep me honest this year, and came in a couple of minutes slower, but still quicker than I expected to at the start.

Forgot to mention the Big Garden Bird Watch last week. It was a damp grey day, and I managed to see the sum total of one blackbird, one wood pigeon and one collared dove, which is a pretty poor showing. There were loads of gulls wheeling over the garden, but it doesn’t count if they don’t land. Also at one point a heron flew over, which I’ve never seen here before – I reckon it was just trolling me.

26 January 2026

Weeknotes 2026-04

Filed under: Weeknotes — Kerry Buckley @ 6:30 pm

First pub quiz with the team of coffee-running people for a while on Wednesday. We tried the one at the Spread Eagle for the first time, and enjoyed it. It was at the “most questions are pretty easy, so the winner is whoever gets the fewest wrong” end of the spectrum, but that made for a nice relaxing evening. We came third out of not very many, but it’s the taking part that counts!

On Saturday a group of us took the train out to Bury St Edmunds for a tour of the Greene King brewery. I’d been there before, but I think in about 2002, so I couldn’t remember much about it!

Mash tuns
Town, cathedral and sugar beet factory from the brewery roof
Beer tasting

After drinking the pint that was included in the tour price we retired to the Corn Exchange for a few more, before catching the train home and finishing the day off with a takeaway curry.

Back to the Fat Cat on Sunday, to celebrate (a day early) Robin’s 50th.

Also lots of running (53 miles’ worth, in fact), which I won’t bore you with.
