So I build IT solutions for a living. I enjoy mastering new technologies. I enjoy understanding the complexities of a new domain. I enjoy working with smart people. Still, life in the IT trenches is often full of frustration: misunderstandings and surprises when people and technology do not behave as you expect them to. It takes patience and resilience to get through most days.
The key to any successful project I have worked on is to build the right system and to build it right. Sounds simple, right?
It’s not. The challenge is that you need to understand enough about the business domain you are working in and enough about the technology you are working with. At the beginning of a project, no single person has that insight. You need to grasp the as-is system landscape and business processes at the customer, the to-be vision of systems and processes after the solution has been delivered, and the steps to get there (development, deployment, project management). That is a lot of complexity to overcome.
The best way for the project to meet a business requirement is always constrained by the people and technology available. Often the solution is simple, but it takes a lot of hard work to get there.
Here is a recent example from real life:
An organisation is implementing a new HR system that will replace the existing HR system over time. For the first phase, some data is still to be sourced from the existing system.
The organisation has middleware that connects its systems: a master data platform. The new system should read data from the old system through this platform.
The team implementing the new HR system has worked with business stakeholders on how to configure the new HR system and is now ready with a first draft of requirements for the integration.
“We want data to be imported with at most a 15 minute delay.”
“Data should be validated at import. Invalid data should not be imported, instead, HR should be notified with an email that includes information about the invalid data.”
Both sound reasonable, right?
Imagine it is implemented as written. Then something goes wrong on a Saturday morning: an automatic update in the source system causes validation to fail for a large number of records.
Every 15 minutes, hundreds of emails are sent to an HR mailbox. No data is imported. By Monday morning, when HR opens their computers, their inbox is spammed with 100 000 emails. An unknown number of emails to that mailbox from other sources have been lost because the mailbox overflowed.
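The arithmetic is brutal. Here is a back-of-the-envelope sketch; the 500 invalid records per run and the 50 unattended hours are my assumptions, not numbers from the requirements:

```python
# Back-of-the-envelope: how many emails pile up between Saturday morning
# and Monday morning if every invalid record triggers one email?

INVALID_RECORDS_PER_RUN = 500   # assumed size of the failing batch
RUNS_PER_HOUR = 60 // 15        # one import run every 15 minutes
HOURS_UNATTENDED = 50           # Saturday ~06:00 to Monday ~08:00 (assumed)

runs = RUNS_PER_HOUR * HOURS_UNATTENDED
emails = runs * INVALID_RECORDS_PER_RUN
print(f"{runs} runs, {emails} emails")  # 200 runs, 100000 emails
```

The exact numbers don’t matter; any realistic combination of batch size and polling interval floods the mailbox within a weekend.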
You got exactly what you asked for.
Even worse, HR data is typically sensitive data. Your nice, user-friendly implementation has pushed sensitive data into a shared mailbox. Maybe you also wrote some of that data into a log file just to make debugging easier. Forget about tracking who has accessed that data and where it is stored. This is a GDPR incident waiting to happen.
In an iteration or two, I’m sure we will discover that only a fraction of the data updates need to be replicated within 15 minutes, and then only during business hours. And HR should simply receive one notification on mornings where there are validation errors to fix, then log into a system to review and fix the errors there, with an option to quickly silence or dismiss false alarms.
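A minimal sketch of what that saner behaviour could look like. Everything here is invented for illustration (the class name, the 08:00 cut-off, the message wording); the point is the shape: errors accumulate in a store, the email is a single digest without the sensitive payload, and it goes out at most once per business-day morning.

```python
from dataclasses import dataclass, field
from datetime import date, datetime
from typing import Optional

@dataclass
class ValidationErrorStore:
    """Collects validation errors for review in a system, not in email."""
    errors: list = field(default_factory=list)
    last_notified: Optional[date] = None

    def record(self, record_id: str, reason: str) -> None:
        # Store only an identifier and a reason, never the sensitive payload.
        self.errors.append((record_id, reason))

    def should_notify(self, now: datetime) -> bool:
        # At most one digest per business-day morning, never one email per record.
        is_business_morning = now.weekday() < 5 and now.hour == 8
        already_sent_today = self.last_notified == now.date()
        return bool(self.errors) and is_business_morning and not already_sent_today

    def digest(self, now: datetime) -> str:
        # The email says only how many errors there are and where to fix them.
        self.last_notified = now.date()
        return (f"{len(self.errors)} HR records failed validation. "
                f"Review and fix them in the integration dashboard.")
```

With this shape, the weekend scenario above produces exactly one email on Monday morning, pointing HR at the system where the errors (and any false alarms) can actually be handled.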
Getting the requirements right requires knowing a lot of context that is not stated explicitly.
Isn’t the solution just to write more explicit requirements? Well, it takes a lot of effort to write explicit requirements, and you are still limited by the knowledge of the person writing them.
In my experience, the only reasonable way forward is for the subject matter experts to build that shared context in a series of sessions looking at real systems with real data. Hard but crucial work.