"How hard can it be?"

You'd be surprised.

As a Software Engineer, I've encountered many situations where I had to work on a User Story that's effectively a one-liner or a short description of what someone wants you to build. The Product Owner presents it during a meeting and mentions that this should be an easy fix for the team.

You just listen and, in your mind, explore all the possible ways it can go wrong: how it could interact badly with pre-existing code, or how it might not actually solve the underlying problem and at best only mitigate the symptoms of a deeper one.

If you've written any non-trivial application, you've probably experienced first hand how incredibly 'dumb' computers can be, and how extremely precise you need to be when describing what you want the computer to do. It's a realm where a single incorrect bit can make or break your application. You know this. Your fellow Software Engineers know this. And most likely, everyone else ... simply does not.
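As a toy illustration of that single-bit fragility (the scenario and variable names here are invented for this post, not taken from any real system):

```python
# Hypothetical example: a single flipped bit corrupts a configuration value.
max_upload_mb = 100                    # intended limit: binary 0b1100100

# Flip bit 6 (value 64), e.g. due to memory corruption or a typo'd constant.
corrupted = max_upload_mb ^ (1 << 6)   # 100 ^ 64 == 36

print(corrupted)                       # 36: uploads over 36 MB now get rejected
```

One wrong bit and a generous 100 MB limit silently becomes a restrictive 36 MB one, with no error message anywhere.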

From their perspective, you're asking for needless details, making it bigger or more complicated than it needs to be. Granted, this might be true when we're trying to refocus the conversation away from treating symptoms and towards solving the underlying root cause; sometimes there just isn't budget to deal with the root cause and we'll have to settle for mitigating the impact of the problem.

For those situations where you're having a hard time explaining to your stakeholders that it truly isn't as simple as they think it is, I've collected a couple of YouTube videos that can serve as analogies or war stories about the complexities of Software Engineering and the importance of getting those annoying and seemingly unimportant details right.

Analogies of dealing with computers

Our core added value as Software Engineers is solving problems and automating things. Our tool of choice is the computer. Computers can execute instructions lightning fast, but those instructions need to be pinpoint precise.

Human language, however, is often far from pinpoint precise. It's full of unmentioned assumptions and expectations, due to, for example, shared history, company culture, social stigma, religion or jargon and domain-specific language. We humans have learnt to fill in the blanks.

This ability to fill in the blanks does not exist in computers, and exists only to a very limited degree in even the most powerful IDEs we work with. So we have to give the computer very exact instructions on how to perform a certain task. As Software Engineers we've trained ourselves to do that for years on end. Most of our stakeholders have not, which makes the conversion from 'vague, hand-wavy description' to 'machine-compatible specification' part of our job.

The "Exact Instructions Challenge" videos show just how quickly our 'regular language' becomes inadequate:

These videos illustrate what I would describe as an 'underconstrained problem description': a description that's effectively incomplete.
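A tiny example of such an underconstrained description (the data here is invented): "sort the names alphabetically" sounds complete, yet it leaves case sensitivity unspecified, and the computer fills that blank differently than a human would:

```python
names = ["alice", "Bob", "claire"]

# Python's default sort compares code points, so uppercase letters
# sort before lowercase ones -- probably not what the stakeholder meant.
print(sorted(names))                    # ['Bob', 'alice', 'claire']

# The 'obvious' human reading needs the constraint spelled out explicitly:
print(sorted(names, key=str.casefold))  # ['alice', 'Bob', 'claire']
```

The one-line request was missing a constraint the requester didn't even know they were assuming.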

War stories of seemingly simple problems actually being very complex

Sometimes a feature starts out seemingly innocent and simple, because people have only thought about the case common to their own situation. When that feature actually needs to support more people than just that individual, the actual required work can at times completely spiral out of control.

Wonderful examples are:

These videos illustrate what I would describe as an 'overconstrained problem description': a description that's only effective for the stakeholder issuing the User Story, but not effective enough for the intended target audience.
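A classic instance of such an overconstrained description (the rule and names below are illustrative, not from any real system): a 'simple' name validation rule that works fine for the stakeholder who wrote it, but quietly rejects large parts of the actual audience:

```python
import re

# 'Names consist of letters' -- true for the stakeholder, false in general.
NAME_RE = re.compile(r"^[A-Za-z]+$")

def is_valid_name(name: str) -> bool:
    return NAME_RE.fullmatch(name) is not None

print(is_valid_name("Alice"))    # True
print(is_valid_name("O'Brien"))  # False: apostrophes rejected
print(is_valid_name("José"))     # False: accented letters rejected
print(is_valid_name("李"))        # False: non-Latin scripts rejected
```

Supporting the real target audience means revisiting the constraint itself, and suddenly the 'easy fix' touches localization, storage, and every form in the product.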

The Alignment Problem

Now, with tooling like ChatGPT, perhaps a stakeholder will think that AI is a magic solution. Most likely (at least at the time of writing), it is not. This is because we as humanity have not solved the 'alignment problem': we assume that all people and AIs think as we do, and that we're aligned to achieve the same thing within the same constraints we are willing to respect.

This video lays out how wrong this can go if this problem remains unsolved: