What is software?

What actually is software? It’s obviously not a physical thing you can point at. If I imagine a specific piece of software, where does the software stop and not-software begin?

I recently read Sapiens, a fantastic book on the history of humankind. One of the things its author talks about is the “legend of Peugeot”. When we think of Peugeot the company, what do we mean? It’s not the cars they produce – the company would exist and would keep on making cars even if all the Peugeot cars in existence were scrapped overnight. It’s not the factories and offices and assembly lines, which could be rebuilt if they all suddenly burned down. It’s not the employees either – if all the employees resigned en masse, the company would hire more staff and carry on making cars. Peugeot is a fiction – a legal fiction we all choose to believe in.

Back to software, then: what is software? Maybe it’s the compiled binary artefact – an executable or DLL or JAR file. But is that really what software is? Software is a living, growing, changing thing – a single binary is merely a snapshot at a given point in time.

Perhaps then software includes the source code. Without the source code, what we have is dead software – a single binary that can never (easily) be changed. Sure, we could in theory reverse engineer something resembling source code from the binary, but for any reasonably sized piece of software, would making anything beyond a trivial change be feasible, without the original source?

Even with the source code, could I just pick up, say, the source to Chrome or Excel and start hacking away? It seems unlikely – I’d need to spend time familiarising myself with the code and reading documentation. So maybe documentation is part of what makes software.

Even better than reading documentation, I’d talk to other developers who are already familiar with the code – they would be able to explain it to me and answer my questions. Perhaps even more importantly, developers would be able to explain why certain things are the way they are – this tells the story of the software, the history of how it got to be the way it is: the decisions taken along the way, the mistakes made and the paths not taken.

So the knowledge developers have of how the software works and how it got there is part of what makes up software. What other knowledge makes up software? How about the process for releasing a new version? Without that knowledge, modified source code is useless. To be real, live software, new versions need to get into the hands of users.

When it comes to interacting with the real world, how much of that context is part of what defines the software? Look at Uber, for example – without physical cars, what use is the software? In some sense, the physical cars and their drivers are part of a software stack.

But is software even a single thing, at a single scale? Is the web front end a single piece of software? Without its backend service it is rendered useless. Does that make front end and backend a single piece of software or two?

How about as software evolves? Does it become something different, distinct from the software that went before? The original version is part of the history of the software, but it isn’t a separate piece of software. Only if the old version is forked do we end up with a new piece of software – at that point their histories diverge. Over time they will adapt to subtly different contexts, have different dependencies, different decisions and goals: they will become two different pieces of software.

What about if software is re-written? Imagine the team responsible for the backend service decide the only solution to their technical debt problem is a re-write. So they begin re-writing it from scratch. Eventually, they switch over to the new version – the old one is archived, only kept in version control for the curious. Is this a new piece of software? Or logically just a new version? The software stack still performs the same overall purpose, there’s still only one backend service. The original, debt-laden version has become part of the history of the software: we don’t have two systems – we have one. This suggests that the actual source code is not what makes software.

If software isn’t the source code, what is it? It can’t be the team that owns it: team members may come and go, but the software lives on. It isn’t the documentation either – the documentation could be re-written and the software would live on. Software is defined by its context, but it is more than its context and processes.

Software is all these things and none of them. Just like Peugeot, software is a fiction we all believe in. We all pretend we know what we mean when we talk about “software”, but what actually is it?

If software is anything it is a story. It is the history of how the code got to where it is today: the decisions that were taken, the context it sits within, the components it interacts with. Documentation is an attempt to preserve this history; processes an attempt to codify lessons learned.

If software is a story, the team are the medium through which the story is kept alive. If you’ve ever seen what happens when a new team takes over legacy software, you’ll know what happens when the story dies: zombie software, not quite dead but not quite alive; still evolving and changing, but full of risk that any change could bring about disaster.

What is software? Software is a story: a story of how this got to be the way it is, whatever “this” might be.

ActiveMQ Performance Testing

We use ActiveMQ as our messaging layer – sending large volumes of messages with a need for low latency. Generally it works fine; however, in some situations we’ve seen performance problems. After spending too much time testing our infrastructure, I think I’ve learned something interesting about ActiveMQ: it can be really quite slow.

Although in general messages travel over ActiveMQ without problems, we’ve noticed that when we get a burst of messages we start to see delays. It’s as though we’re hitting some message rate limit – when we burst above it, messages get delayed, only being delivered at the limit. From the timestamps ActiveMQ puts onto messages we could see the broker was accepting messages quickly, but was delayed in sending them to the consumer.

I set up a test harness to replicate the problem – which was easy enough. However, the throughput I measured in the test system seemed low: 2,500 messages/second. With a very simple consumer doing basically nothing, there was no reason for throughput to be so low. For comparison, using our bespoke messaging layer in the exact same setup, we hit 15,000 messages/second. The second puzzle was that in production the message rate we saw was barely 250 messages/second. Why was the test system 10x faster than production?
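
The harness was essentially just a producer blasting small non-persistent messages at a topic while a consumer counts them and we time the run. The sketch below gives the flavour – broker URL, topic name and message counts are illustrative rather than the real harness, and in the real test the consumer ran on a separate server:

    import java.util.concurrent.CountDownLatch;
    import javax.jms.BytesMessage;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.DeliveryMode;
    import javax.jms.MessageConsumer;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import javax.jms.Topic;
    import org.apache.activemq.ActiveMQConnectionFactory;

    // Rough throughput probe: publish a fixed number of ~1kB non-persistent
    // messages to a topic and time how long the consumer takes to see them all.
    public class ThroughputProbe {
        private static final int MESSAGE_COUNT = 100_000;
        private static final byte[] PAYLOAD = new byte[1024];

        public static void main(String[] args) throws Exception {
            ConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://broker-host:61616"); // illustrative
            Connection connection = factory.createConnection();
            connection.start();

            // Consumer does basically nothing: it just counts messages.
            Session consumerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = consumerSession.createTopic("perf.test"); // illustrative
            CountDownLatch received = new CountDownLatch(MESSAGE_COUNT);
            MessageConsumer consumer = consumerSession.createConsumer(topic);
            consumer.setMessageListener(message -> received.countDown());

            // Producer publishes as fast as it can from a separate session.
            Session producerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = producerSession.createProducer(topic);
            producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);

            long start = System.nanoTime();
            for (int i = 0; i < MESSAGE_COUNT; i++) {
                BytesMessage message = producerSession.createBytesMessage();
                message.writeBytes(PAYLOAD);
                producer.send(message);
            }
            received.await();
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;

            System.out.printf("%d messages in %d ms = %d msg/s%n",
                    MESSAGE_COUNT, elapsedMs, MESSAGE_COUNT * 1000L / elapsedMs);
            connection.close();
        }
    }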

I started trying to eliminate possibilities:

  • Concurrent load on ActiveMQ made no difference
  • Changing producer flow control settings made no difference
  • Changing the consumer prefetch limit only made the behaviour worse (we write data onto non-durable topics, so the default prefetch limit is high) – see the sketch after this list
  • No component seemed to be bandwidth- or CPU-limited
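
For reference, the prefetch change was along these lines – either on the connection factory or per destination. The broker URL, topic name and value of 100 are illustrative:

    import org.apache.activemq.ActiveMQConnectionFactory;

    public class SmallerPrefetch {
        public static void main(String[] args) {
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://broker-host:61616"); // illustrative
            // Non-durable topic consumers default to a very large prefetch;
            // this caps how many messages the broker will push ahead of the acks.
            factory.getPrefetchPolicy().setTopicPrefetch(100);

            // Alternatively the limit can be set per destination via a URI option:
            //   topic://prices?consumer.prefetchSize=100
        }
    }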

As an experiment I tried moving the consumer onto the same server as the broker and producer: message throughput doubled. Moving the consumer onto a server with a higher ping time: message throughput plummeted.

This led to an insight: the ActiveMQ broker was behaving exactly as though there was a limit to the amount of data it would send to a consumer “at one time”. Specifically, I realised there seemed to be a limit to the amount of unacknowledged data on the wire. If the wire is longer, it takes longer for data to arrive at the consumer and longer for the ack to come back: so the broker sends less data per second.

This behaviour highlighted our first mistake. We use Spring Integration to handle message routing on the consumer side. We upgraded Spring a year ago, and one of the changes we picked up in that version bump was a change to how the message-driven channel adapter acknowledges JMS messages. Previously our messages were auto-acknowledged, but now the acknowledgement mode was “transacted”. This meant our entire message handling chain had to complete before the ack was sent to the broker.

This explained why the production system (which does useful work with the messages) had a much lower data rate than the test system. It wasn’t just the 1ms ping time the message had to travel over: the consumer wouldn’t send an ack until it had finished processing the message, which could take a few milliseconds more.

But much worse, transacted acknowledgement appears to prevent the consumer prefetching data at all! With transacted acknowledgement there is only ever one unacknowledged message on the wire at a time: the broker does not send a new message until it has received an acknowledgement of the previous one, so if we move the consumer further away our throughput plummets. Instead of the consumer prefetching hundreds of messages from the broker and dealing with them in turn, the broker is patiently sending one message at a time! No wonder our performance was terrible.

This was easily fixed with a Spring Integration config change. In the test system our message throughput went from 2,500 messages/second to 10,000 messages/second. A decent improvement.
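
For illustration, assuming the message-driven channel adapter is backed by a plain Spring DefaultMessageListenerContainer, the change amounts to switching the container back from transacted sessions to auto-acknowledge. This is a minimal sketch – the bean wiring and destination name are illustrative:

    import javax.jms.ConnectionFactory;
    import javax.jms.Session;
    import org.springframework.jms.listener.DefaultMessageListenerContainer;

    public class ConsumerJmsConfig {
        // Listener container sitting behind the message-driven channel adapter.
        // With sessionTransacted=true the broker only gets its ack once the whole
        // downstream flow has completed; switching back to AUTO_ACKNOWLEDGE lets
        // the consumer ack (and the broker dispatch) independently of processing.
        public DefaultMessageListenerContainer listenerContainer(ConnectionFactory connectionFactory) {
            DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
            container.setConnectionFactory(connectionFactory);
            container.setDestinationName("prices");   // illustrative topic name
            container.setPubSubDomain(true);          // non-durable topic consumer
            container.setSessionTransacted(false);    // was true after the upgrade
            container.setSessionAcknowledgeMode(Session.AUTO_ACKNOWLEDGE);
            return container;
        }
    }

If you configure the adapter via the XML namespace, the same setting is exposed as the adapter’s acknowledge attribute.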

But I was curious: do we still see the broker behaving as though there is a limit on the amount of unacknowledged data on the wire? So I moved the consumer to successively more distant servers to test. The result? Yes: the broker still limits the amount of unacknowledged data on the wire. Even with messages auto-acknowledged, there is a hard cap on the amount of data the broker will send without seeing an acknowledgement.

And the size of the cap? About 64KB. Yes, in 2018, my messaging layer is limited to 64KB of data in transit at a time. This is fine when broker and consumer are super-close. But increase the ping time between consumer and broker to 10ms and our message rate drops to 5,000 messages/second. At 100ms round trip our message rate is 500 messages/second.
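
That drop-off is just the usual window-over-round-trip arithmetic. A quick back-of-the-envelope check against the numbers above, assuming ~1kB messages:

    public class WindowMath {
        public static void main(String[] args) {
            long windowBytes = 64 * 1024;  // observed unacknowledged-data cap
            int messageBytes = 1024;       // ~1kB messages
            for (double rttSeconds : new double[] {0.010, 0.100}) {
                // Upper bound on throughput ~ window / round-trip time.
                double maxMsgPerSec = (windowBytes / (double) messageBytes) / rttSeconds;
                System.out.printf("RTT %.0fms -> at most ~%.0f msg/s%n",
                        rttSeconds * 1000, maxMsgPerSec);
            }
            // Prints roughly 6,400 msg/s at 10ms and 640 msg/s at 100ms: the same
            // order of magnitude as the ~5,000 and ~500 msg/s we actually saw.
        }
    }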

This behaviour feels like what the prefetch limit should control, but we were seeing significantly fewer messages (no more than sixty 1kB messages) than the prefetch limit would suggest. So far, I haven’t been able to find any confirmation that this “consumer window size” exists, nor any way of modifying the behaviour. Increasing the TCP socket buffer size on the consumer increased the amount of data in flight to about 80KB, but no higher.
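
For anyone wanting to repeat that experiment: with ActiveMQ’s standard TCP transport, one way to grow the buffer is the socketBufferSize URI option on the consumer’s connection. The 128KB value here is just illustrative:

    import org.apache.activemq.ActiveMQConnectionFactory;

    public class BiggerSocketBuffer {
        public static void main(String[] args) {
            // The TCP transport exposes the socket buffer size as a URI option;
            // this asks for ~128KB buffers on the consumer's connection. In our
            // tests this only pushed the in-flight data from ~64KB to ~80KB.
            ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                    "tcp://broker-host:61616?socketBufferSize=131072");
        }
    }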

I’m puzzled: plenty of people use ActiveMQ, and surely someone else would have noticed a data cap like this before? But maybe most people use ActiveMQ with a very low ping time between consumer and broker and simply never notice it?

And yet, people must be using ActiveMQ in globally distributed deployments – how come nobody else sees this?