As mentioned before, I was committed long-term to developing Windows applications in C# and/or C/C++ and not at all eager to make a big change. But in 2018, for six months, I worked on developing my first web module with ASP.NET Core 2.1, IIS and Angular, and in September 2019 (last year) I was given an opportunity to work with microservices.
Here is the third part of the microservices saga, where the mist gradually starts rising and my team and I can finally see the light at the end of the tunnel. Overview of this chapter:
- Concurrent access
- Creating build configurations for component tests
- Creating individual setups and the main installer
- Manual testing
- Writing requirements
- Installing the application on client site
- Configuring the application on client site
- Fixing bugs
- Wrapping up
- Take away lessons
When we speak about asynchronous programming with internal or external resources, we can’t ignore concurrent access.
First, we were reminded of this by the Lamar library that we were using for Dependency Injection. A bug inside this library delayed us for a couple of days. The reason? Instances of transient types were sometimes reused. More specifically, when we tried to retrieve an object instance that was configured as transient (one instance per usage), sometimes it would indeed be a unique instance, and sometimes not.
For this reason, in multiple cases, the application tried to reuse either a database context – which is forbidden in Entity Framework – or various other objects that were configured, and needed, to be transient.
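To illustrate the expectation that the bug violated, here is a minimal sketch of a transient registration with Lamar. The type names (`OrderDbContext`, `IReportBuilder`) are illustrative, not from the real project; Lamar's `ServiceRegistry` accepts the standard `AddTransient`/`AddDbContext` style registrations.

```csharp
using Lamar;
using Microsoft.Extensions.DependencyInjection;

public interface IReportBuilder { }
public class ReportBuilder : IReportBuilder { }

public static class ContainerSetup
{
    public static Container Build()
    {
        return new Container(services =>
        {
            // Transient: a fresh instance on every resolution. An EF Core
            // DbContext, for example, must never be shared between parallel
            // operations, so reusing a "transient" instance is a real bug.
            services.AddTransient<IReportBuilder, ReportBuilder>();
        });
    }
}

// Expectation: two resolutions yield two distinct instances.
// var container = ContainerSetup.Build();
// var a = container.GetInstance<IReportBuilder>();
// var b = container.GetInstance<IReportBuilder>();
// ReferenceEquals(a, b) should be false for a transient registration.
```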
To solve this, we addressed each case individually with a specific fix. One solution was to apply a locking mechanism to particular pieces of code, so that asynchronous execution could not enter those areas twice simultaneously.
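The article does not show the exact locking code, but for async code the usual shape looks like the sketch below: C#'s `lock` statement cannot contain an `await`, so an async-compatible primitive such as `SemaphoreSlim` is used to keep a critical section from running twice at once. The class and method names are hypothetical.

```csharp
using System.Threading;
using System.Threading.Tasks;

public class EmailProcessor
{
    // A semaphore with capacity 1 acts as an async-friendly mutex.
    private static readonly SemaphoreSlim Gate = new SemaphoreSlim(1, 1);

    public async Task ProcessAsync(string message)
    {
        await Gate.WaitAsync();    // only one caller may enter at a time
        try
        {
            // Critical section: code that must not execute concurrently,
            // e.g. work against a shared, non-thread-safe resource.
            await Task.Delay(10);  // placeholder for the real async work
        }
        finally
        {
            Gate.Release();        // always release, even on exceptions
        }
    }
}
```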
Creating component tests was a big challenge, as mentioned previously. Another challenge, just as big, was creating build configurations for those tests. This was somewhat different from creating build configurations for compiling the NuGet packages and microservices, and for creating the setup. It also made us realize that the tests we were creating worked only in some cases; in others, they failed. This behavior was extremely frustrating, all the more because it happened before we had realized we needed to handle all kinds of exceptions.
Each of the ten component tests we created had its own setback. By investigating each one individually, we managed to discover multiple application bugs and have them fixed one by one. From this point of view, we considered these tests very useful, as well as the configurations that ran them automatically.
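For readers unfamiliar with the style, a component test of this kind typically starts the service against a disposable environment, feeds it real input, and asserts on observable side effects. The sketch below shows only the schematic shape in xUnit; `TestEnvironment` and its methods are hypothetical helpers, not part of the real project or any library.

```csharp
using System.Threading.Tasks;
using Xunit;

public class ImportServiceComponentTests
{
    [Fact]
    public async Task Valid_input_message_produces_one_output_record()
    {
        // Arrange: start the microservice against a disposable test
        // environment (hypothetical helper wrapping queues and databases).
        await using var env = await TestEnvironment.StartAsync();

        // Act: publish an input message the way the real system would.
        await env.PublishInputAsync("sample-order.eml");

        // Assert: the expected side effect appears within a timeout,
        // e.g. a record written to the database or a message on a queue.
        var records = await env.WaitForDatabaseRecordsAsync(expectedCount: 1);
        Assert.Single(records);
    }
}
```

Tests like this exercise the whole service through its real inputs and outputs, which is why they surfaced bugs that unit tests had missed.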
As I mentioned before, when we encountered problems in our application, our first impulse was to blame the application dependencies. Sometimes we were right, but other times we were very wrong. One thing was for sure, and we learned it the hard way: it was our responsibility to write quality code.
Regarding the installer, we also had consistent help from our colleagues who had created the infrastructure application. The approach was as follows:
- First, we created a setup for each of the ten microservices.
- Then, we created another application to run the database scripts.
- We also needed to create a utility to run at the beginning and modify a configuration file specific to the infrastructure application, allowing it to interact with our application.
- In the end, we needed to create an installer to contain and run all the above, with a graphical user interface for various configurations – like the database server location and authentication details.
Although we had help, we still encountered some surprises.
On the one hand, we had to build the big installer's interface screens using WPF – a technology none of us had ever worked with before.
On the other hand, while developing the setup, repeated attempts to install the application on the same system caused problems due to residue from partial installations. This was to be expected to some degree, but it was still unpleasant. The problem persisted until we found where the “debris” was located.
Manual testing required us to adapt to the lack of an application-specific graphical user interface. We had to monitor several elements at once:
- the interface of the infrastructure application, created for monitoring microservices;
- the input and output e-mail inboxes;
- the two databases;
- the log files;
- the RabbitMQ interface.
Keeping an eye on all of these in parallel made manual testing a new challenge.
A funny thing happened, caused by one of my colleagues, shortly before the delivery date. The database backup we had from the client contained the e-mail addresses of his clients (the clients of our client). The application had to use these addresses for sending e-mails to those clients. Those e-mail addresses reaching recipients from our side would normally have been considered a security breach.
Internally, we made it our responsibility to replace these addresses, after each backup restore, with one or more internal addresses to use for testing. My colleague, of whom I spoke earlier, forgot to do this. The result was that some of our customer’s clients received e-mails generated by our application – about 50 e-mails to 10 e-mail addresses in total. That was the moment when we realized our application was working. 😊
The requirements were written by our team’s project manager. She based them on the old requirements document but had to make multiple changes, both to adapt to new features introduced by the new architecture and, especially, to correct some functionality errors that had not been discovered at first.
In this phase, we had challenges with the accuracy of the text, more specifically with keeping the requirements synchronized with the written code. While the requirements were being elaborated, the code was also verified, to make sure the two corresponded. There were several cases of partially or incorrectly implemented features on our side. We investigated and solved them one by one.
After the first delivery, we had a short breather, of about a week. Then, we installed the application on the client site. That was the moment when a new problem occurred.
Some operating system files on the virtual machine dedicated to installing the application for testing purposes were corrupted. It took some time to understand this issue. We got help from a colleague who had developed the infrastructure application, and we are grateful to him for that investigation; it would have taken us considerably more time to discover the problem on our own.
After one day of intense investigations, we solved this problem and we managed to install the application on the virtual machine.
As I mentioned before, our actual application didn’t have a graphical user interface – except for the Exception Handling configuration settings tool.
Configuring the application involved editing some values in the database – values we had centralized from the old application, which had them spread across a variety of sources, also mentioned in the beginning: the registry, a .config file, an .ini file, ODBC configuration, code, and the database.
Our client proposed a Skype session in which we offered support for correctly configuring the application. The session was efficient, and the client understood all the settings.
A period of fixing unforeseeable bugs followed. For example, one bug was caused by the library that read the e-mails: more specifically, some e-mails were not read correctly. We had not reproduced this scenario on our side during internal testing, and we didn’t know exactly why it occurred at the client site. We came up with a workaround that replaced the missing information in those e-mails with custom data. Luckily, those pieces of information were not vital to our needs; otherwise, we might have been forced to replace the whole e-mail-reading library with another one.
We also fixed some situations caused by the lack of resources in the testing environment, especially memory. Those situations could have been caught during internal testing, had we had more time for performance testing – which we had almost none of.
At the end of this experience, I can be certain of one thing: it would have been much too hard to foresee all the challenges we had to face. We partially relied on the experience of our colleagues who had developed the infrastructure application – experience that proved useful in many respects, but insufficient for our challenges.
For this reason, we delivered one month after the initially estimated date.
This experience helped us learn some valuable lessons:
- Give yourself a generous time buffer when developing applications from scratch. We had a buffer, but not a generous enough one.
- You can’t always be in control. When everything is new and you have a lot to learn, you can be sure that anything that can catch you unprepared will do so, when you least expect it. Unless you have the gift of foresight, of course.
- You must rely on your intuition more often than not, keep your fingers crossed a lot, and draw on any help you can from your past experience and that of your colleagues.
- The more detail-oriented, analytical and disciplined you are, the higher the chances to correctly estimate work from the beginning.
- Get the client on board with these aspects as early as possible (if there is a client – that’s not always the case). For us, the relationship with the client was definitely a plus.
- Constant communication and collaboration within the team are vital – along with taking care that every member of the team maintains high morale, and that no one is left behind. In my case, I was and am very lucky to work with people with high integrity standards.
I hope that sharing our adventure with microservices with you has been useful and interesting. If you want to find out more details about this project, don’t hesitate to contact us.
Remember, you can also revisit How I learned to work with microservices: Part I – The opportunity and Working with microservices: Part II – A sequence of challenges. Enjoy!