Making Good Software

A blog by Alberto G (Alberto Gutierrez)

Archive for December, 2009

Loose coupling is overrated.

with 2 comments

In my opinion, coupling is a very dangerous metric. Loose coupling is something desirable, but if it drives the architecture, it will make your project overcomplicated. To try to prove my point, I am going to start from the basics.

In object oriented programming, coupling refers to how strongly linked two objects that need to talk to each other are.

The degree of coupling is important because, given a change in one of the components, the chance of having to change the other linked object is proportional to the degree of coupling. When talking about coupling, objects are considered loosely coupled when they are designed so that a change in one object doesn't require the other object to change.

Loose coupling is one of the most desired qualities in modern software development, but because of its very subjective nature, and because of the lack of analysis of the different types of coupling, wrong decisions may be made in the architecture just for the sake of making the application more loosely coupled.

Considering "the looser the architecture, the better" is wrong. For each pair of objects that need to be coupled, their circumstances should be analyzed, and based on those circumstances, the most appropriate type of coupling should be used.

Types of coupling.

Direct coupling.

Direct coupling happens when the two objects that need to be linked talk to each other directly.

From a hardware point of view, this coupling always happens between two objects from the same application, sitting in the same shared memory area.

Direct coupling is the simplest coupling to implement, but it is also the one with the highest degree of coupling: a change in either of the linked objects is very likely to cause a change in the other.

There are two styles of direct coupling. Black box coupling and White box coupling.

  • Black box coupling is a style of direct coupling where the link between the two objects is created without revealing any implementation detail. The most common form of black box coupling is when object A directly calls a public method on object B.
  • White box coupling is a style of direct coupling where the link between the objects carries some implementation information. The most common form of white box coupling is when object A directly accesses a member variable of object B. White box coupling is discouraged because it creates even higher coupling than black box coupling, while the complexity and effort remain the same.
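The difference can be sketched in a few lines of Python (the Account class and the pay_salary_* functions are hypothetical names, just for illustration):

```python
class Account:
    def __init__(self, balance):
        self._balance = balance          # implementation detail

    def deposit(self, amount):           # public behaviour
        self._balance += amount

    def balance(self):
        return self._balance


def pay_salary_black_box(account):
    # Black box coupling: we only call public methods of Account,
    # so its internals can change without breaking this function.
    account.deposit(1000)


def pay_salary_white_box(account):
    # White box coupling: we reach into Account's member variable.
    # Renaming or restructuring _balance now breaks this function too.
    account._balance += 1000
```

Both functions achieve the same thing with the same amount of code, which is exactly why the white box version is a bad deal: you pay the same effort for a tighter link.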

Indirect coupling.

Indirect coupling uses a third element, or more, to couple the two objects that need to talk to each other. The purpose of adding additional layers between the two components is that even if one of the linked objects changes, the intermediary doesn't; or if it does, it changes internally, so from the other linked object's perspective everything remains the same.

Indirect coupling adds complexity to your application, but it offers the loosest coupling.
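A minimal Python sketch of indirect coupling through an intermediary (the Storage interface and the FileStorage/ReportGenerator classes are hypothetical names):

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """The intermediary: an abstract interface both sides agree on."""
    @abstractmethod
    def save(self, name, data): ...

class FileStorage(Storage):
    def __init__(self):
        self.files = {}            # stand-in for a real file system

    def save(self, name, data):
        self.files[name] = data

class ReportGenerator:
    def __init__(self, storage: Storage):
        self.storage = storage     # knows only the interface

    def publish(self, name, text):
        self.storage.save(name, text)
```

If FileStorage changes, or is replaced by, say, a database-backed implementation, ReportGenerator doesn't need to change: the interface absorbs the impact, at the cost of one extra layer.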

Indirect coupling can be classified depending on:

  1. Location of the components.
    • In memory. The two objects that need to be linked are part of the same application and are loaded in memory at the same time.
    • Remote. The two objects are part of different applications, so they don't share the memory space. Remote coupling adds complexity, but it is considered to have lower coupling than in-memory coupling.

  2. Synchronicity.
    • Synchronous coupling. The standard indirect coupling: the caller waits for the response.
    • Asynchronous coupling. Also known as fire and forget. If asynchronous coupling is required, the complexity of the application will be higher, but the coupling is likely to be lower.
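A minimal fire-and-forget sketch in Python, using a queue as the intermediary (the message content and names are made up for illustration):

```python
import queue
import threading

messages = queue.Queue()
processed = []

def worker():
    # The consumer: picks messages up whenever it gets to them.
    while True:
        msg = messages.get()
        if msg is None:            # shutdown sentinel
            break
        processed.append(msg.upper())
        messages.task_done()

t = threading.Thread(target=worker)
t.start()

messages.put("order placed")       # sender fires and forgets: no waiting
messages.put(None)                 # tell the worker to stop
t.join()
```

The sender only knows about the queue, not about the worker, which is why this style of coupling is so loose, and also why it is harder to reason about than a plain method call.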

Conclusion.

The main conclusion is: the lower the coupling, the higher the complexity. The "Oh yeah! Let's just take a loose coupling approach" attitude doesn't work. What is really important in your design is how simple it is, so the simplest possible approach should be prioritized, which, funnily enough, will probably carry a higher coupling.

The degree of coupling should be dictated by the requirements, and the simplest approach should be preferred.

Written by Alberto Gutierrez

December 30th, 2009 at 2:26 am

Top 3 considerations to deal with uncertainty in software development.

with 2 comments

Not dealing with uncertainty efficiently is one of the main causes of failure in software development projects. Traditional approaches assert that uncertainty can be defeated by designing and planning ahead, but that's wrong. Even in a small development, uncertainty is so high that discovering all of it up front is impossible. That's why classic approaches, such as waterfall, fail at dealing with uncertainty, and that's why, in my opinion, they also fail at dealing with change.

Change and uncertainty are at the core of any software development. As Heraclitus said, "Change is the only constant", and software development is no exception. That's why dealing with uncertainty is so important. Some of the most common consequences of not dealing properly with uncertainty are false expectations and bad estimations.

False expectations. Uncertainty is going to cause change, and change needs to be fed back to all the stakeholders, but it is very easy to leave information and people out of the loop. If this happens, expectations will diverge across the parties involved in the project, and some of them will end up with false expectations.

Bad estimations. As I've already said before on this blog, I strongly believe that big planning up front is a waste of time, and that's mainly because of uncertainty. The cone of uncertainty is a very well known diagram which shows this graphically.

[Figure: the cone of uncertainty]

Source: http://www.codinghorror.com/blog/archives/000623.html

These are my 3 pieces of advice for dealing with uncertainty.

Small steps

It is better to take many small steps in the right direction than to make a great leap forward only to stumble backward.

[Figure: baby steps]

Source: http://atriskliving.blogspot.com/2008/09/goodbye-baby-step-1.html

It is impossible to get rid of all the uncertainty, so the best way to deal with it is to take on as little uncertainty as possible at a time; your estimations will then be more accurate, and the expectations of all the stakeholders in the project will be aligned. Small steps will also provide quick feedback, so you can correct the direction of your project as soon as necessary, and they will help you reduce the total remaining amount of uncertainty.

As a rule of thumb, I don't like to have tasks longer than 2-3 days. These tasks should cover a whole end-to-end scenario in your application and should have clear acceptance criteria. An example of a task for an online book store could be: "Add the option of paying by credit card to the checkout page".

Iterations

Can’t See the Forest for the Trees

Small steps need to have some sort of higher purpose; if not, it would be like trying to climb a mountain by never looking further than 5 meters ahead, always taking the steepest path, which is usually not the best path to climb a mountain.

Iterations are short, time-boxed periods that wrap small steps. Their purpose is to serve as control points to demo functionality to the product manager and to ensure that the direction of the project is correct.

Communication

Good communication plays a primary role when dealing with uncertainty. It is key that all the parties involved in the software development are aware of how uncertainty develops.

Written by Alberto Gutierrez

December 19th, 2009 at 2:47 pm

Testing facts and principles

with 5 comments

What follows is a summary of my own high level approach to testing software. This strategy is based on facts and principles.

Facts

1. It is impossible to detect all the bugs in an application.

The closer you get to 100% coverage, the harder it is to find the remaining bugs.

[Figure: test curve]

2. The most important bugs are in the core layer and in the integration layer (backend).

That's where the testing needs to be focused. Core layer bugs and integration layer bugs are the most important because they create a cascade effect, causing several parts of the application to fail.

3. Using automated UI tests makes the detection of bugs harder.

Even though they are still very popular, automated UI tests are not very effective at finding bugs because they test the core layer and the integration layer indirectly. Testing the backend indirectly makes it difficult to exercise, and makes it hard to tell where an error is coming from. It is also important to notice that automated UI tests are slow and expensive to maintain.

4. Manual testing is still necessary.

There are some important bugs that can only be detected through manual testing, that’s the case of the bugs that can be found doing usability testing and exploratory testing.

5. Testing is worthless if it is not executed on a continuous basis.

What's been proven correct through testing now is going to change very soon, so it will have to be proven right again. If this feedback is not fast enough, new changes won't get verified, and eventually new bugs will enter the system.

Principles

1. Prioritize what’s going to be tested.

Never have a test strategy that expects to cover 100% of the application.

2. Have as many automated tests as possible.

From your previous prioritization, automate as much as you can.

3. Schedule time for the necessary manual testing to be performed on the project.

Two of the main manual testing activities that are necessary to perform are usability testing and exploratory testing.

4. Prefer backend automated tests to UI automated tests.
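To illustrate this last principle, here is a minimal sketch of a backend test in Python (checkout_total is a hypothetical core layer function; a real suite would use a test runner such as pytest):

```python
def checkout_total(prices, discount=0.0):
    """Core-layer logic: sum the cart and apply a discount."""
    return round(sum(prices) * (1 - discount), 2)

# Backend test: exercises the core logic directly, runs in milliseconds,
# and a failure points straight at the function under test. A UI test
# covering the same rule would have to drive the whole checkout page.
def test_checkout_total():
    assert checkout_total([10.0, 5.0]) == 15.0
    assert checkout_total([10.0, 5.0], discount=0.2) == 12.0
```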

Written by Alberto Gutierrez

December 16th, 2009 at 5:59 pm

Programming, is it still fun for you?

with 15 comments

I am privileged to work on something which I actually enjoy. It is like being a professional sportsman; I get paid to do stuff that I love.

Actually, based on the majority of programmers I know, I would say that most of us feel the same. We are basically a bunch of geeks trying to prove to each other who is a better programmer, we somehow see the day to day in the office as a grown up version of…

<geekstuff>
     <choices>
          <rts>Civilization</rts>
          <mmorpg>World of warcraft</mmorpg>
          <rolegame>Lord of the rings</rolegame>
          <movie>Star wars</movie>
     </choices>
</geekstuff>

Going to the office is then like a game, and as in any game, if you are not having fun, what's the point? So, are you still having fun? Having fun at work is, in my opinion, one of the things that make a great software developer.

If you are not having fun, you probably won't be motivated, and if you are not motivated, you are going to do a poor job. So, to try to help you, let me present my four golden rules to keep it fun at work!

See your colleagues as friends, not as competitors, and show yourself as a friend too.

Try to avoid getting too emotional when you have arguments with your colleagues. As Dale Carnegie said, "The only way to get the best of an argument is to avoid it."

Change your mindset from "what can I do to show they are wrong" to "what can I do to help my colleagues".

Look for challenges.

Doing easy and repetitive stuff is simply boring.

Don't take it too seriously, it is only a job.

In the end, it is only a job; don't get too stressed if you don't want to eventually find yourself having an anxiety attack in the office.

If it still is not fun, just find another job.

To me, not having fun is critical. We spend a huge amount of time at work, so don't waste it: if you are not having fun, just find another job, even if the money is not as good!!!

Written by Alberto Gutierrez

December 3rd, 2009 at 5:50 pm