My takeaways from a TDD debate between Cope & Uncle Bob

As a developer, chances are high that you have debated the value of TDD in building software, especially if you practice it!

I had a lot of those debates!

A couple of months ago, I came across such a debate between Jim Coplien and Robert Martin (Uncle Bob). I found the discussion particularly interesting, especially since it involves two leaders in software engineering.

You can watch the debate here:

Takeaways

Here are my takeaways from the discussion:

Three Rules of TDD

Uncle Bob defines the following three rules for applying TDD:

  1. Don’t write a line of production code without having a corresponding failing test
  2. Don’t write too many failing tests without writing production code
  3. Don’t write more production code than is sufficient to pass the currently failing test
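To make those rules concrete, here is a small hypothetical illustration in Java (the FizzBuzz example is mine, not one used in the debate). Suppose the currently failing test asserts that fizzBuzz(3) returns "Fizz"; rule 3 says we write only enough production code to make it pass:

```java
// Hypothetical FizzBuzz fragment illustrating the three rules
// (my example, not one from the debate).
public class FizzBuzz {

    // Rule 3: just enough production code to pass the currently failing
    // test (fizzBuzz(3) -> "Fizz"); a later test for 5 -> "Buzz" would
    // force the next small increment.
    public static String fizzBuzz(int n) {
        if (n % 3 == 0) return "Fizz";
        return String.valueOf(n);
    }

    public static void main(String[] args) {
        // Rule 1: these checks existed (and failed) before the code above.
        if (!fizzBuzz(3).equals("Fizz")) throw new AssertionError();
        if (!fizzBuzz(1).equals("1")) throw new AssertionError();
        System.out.println("all tests pass");
    }
}
```

Notice there is no handling of 5 or 15 yet; per rule 1, that code may only appear once a corresponding failing test exists.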

Architecture is Important

Jim points out that he has no problem with those rules; his concerns are more architecture related. Jim and Uncle Bob argue for more than ten minutes before finally reaching an agreement on the importance of architecture. The five points below summarize what they agreed on:

  1. Architecture is very important
  2. It is entirely wrong to assume that the code will assemble itself magically just by writing a lot of tests and doing quick iterations
  3. Design evolves with time and should be assembled one bit at a time
  4. An object should have properties that give it meaning. At the beginning, those properties should be minimal.
  5. Architecture shouldn’t be created based on speculation

TDD and Professionalism

Probably the only disagreement you can sense in this debate is about what defines a professional software engineer. For Uncle Bob, it is irresponsible for a software engineer to deliver a single line of code without writing a unit test for it. Jim, on the other hand, considers ‘Design by Contract’ to be more powerful than TDD.

My Point of View!

Personally, I have been applying TDD since I joined my team three years ago. After experiencing the benefits of this practice, we got to a point where we don’t write or refactor any line of code without having a corresponding unit test!

In addition to that, I had the chance to coach other developers by running coding dojo sessions at work.

All that makes me say that I agree more with Uncle Bob on the topic of professionalism!

References
  1. Coplien and Martin Debate TDD, CDD and Professionalism

Coding dojo… One year later

Last week’s coding dojo session was a special one, not only because we brought a cake 🙂 but because it marked the first anniversary of those sessions at Murex Beirut. One year ago, I wrote a post sharing my experience on how we started this activity. Today, I am writing this post to share what has changed during this year.

Widening the scope

After twenty-eight sessions of practicing TDD, we decided it was time to adjust the scope a bit! Frequent attendees grasped TDD pretty well and suggested that we might benefit from those sessions to learn new technologies and practices.

After a brainstorming session, we agreed to label each session with one of four themes: Algorithms & TDD, New Languages, Kata & Workshops, and New Projects. We thought that by applying those themes we would increase the technical knowledge of all attendees, involve volunteers in session preparation (thus improving their presentation skills), and probably attract a wider audience.

Themes

Algorithms & TDD


We are all aligned that the primary purpose of this activity is practicing TDD, but for complex algorithms TDD is not always the optimal approach (Sudoku, for example). Thus, we decided that in such cases we would first agree on the full algorithm, then implement the code and write sufficient tests to cover the different cases!

Lately, we’ve been using CodinGame to solve complex puzzles and algorithms, as the tests are already written, and they have an excellent graphical simulation of the puzzle execution.

New Languages


Learning new programming languages is one of the best ways to improve one’s programming skills, especially when it includes learning different programming models (OO, functional, procedural, etc.). For those sessions, a volunteer learns the language alone and then presents it to the group in any form he/she prefers.

So far, we have learned two languages: Scala and Ruby! For those sessions, we are using the book Seven Languages in Seven Weeks as our learning reference and CodinGame as an exercising platform.

Kata & Workshops


We defined a kata as a presentation or short talk on a particular subject given by someone knowledgeable on that topic. A workshop, on the other hand, is a collaborative session where everyone takes part in experimenting with something new.

So far, we’ve had five of those sessions:

  1. A workshop on Git.
  2. Two kata sessions on machine learning.
  3. One presentation on Java agent and one on Android development.

Design patterns, development processes (Agile, Lean, pair programming, etc.), and tools (Docker, Gradle) are some of the candidates to be presented in upcoming sessions.

New Projects

This is probably the hardest theme! The idea here is to implement an internal tool or app that can benefit other employees. Contributors to this theme will benefit from practicing TDD on a large-scale project and from learning new tools and technologies.

At this point, we are preparing a short list of proposals to get approval for one! Hopefully, we can kickstart it soon 🙂

In Numbers

We initially started with thirty-four registered participants, but that number dropped to fourteen. Honestly, I think this isn’t a bad number!

The graphs below show the distribution of all forty-seven sessions we have had so far, grouped by theme and by topic / language.

By Theme:
By Topic / Language:

More to Come

Thanks to the participants’ commitment and their eagerness to learn and improve their technical skills we celebrated the first anniversary of the coding dojo sessions. We will keep running those sessions, and we will keep improving them as we go! I hope that in one year from now, I will be posting another blog to share what we have done 🙂

Newcomers’ Training Program

Recently, I was in charge of training two fresh-graduate newcomers to our department. My mission was to prepare a two-week program to ease their integration with their teams.

After a short brainstorming, I decided to break the training into the following seven topics:

  1. Agile Practices: Their first assignment was getting acquainted with the Agile methodologies (mainly XP, our working process). For that, I asked them to read a couple of chapters from two books, “The Art of Agile Development” and “Extreme Programming Explained.”
  2. Dev Tools: Configuring some dev tools on their machines was the second step. This involved the installation and configuration of Java, IntelliJ, Maven, and Perforce. Some of those tools, such as Perforce and Maven, were relatively new to them, so they took some time to learn more about them.
  3. TDD: By now, they were ready to write some code! And what better way to do that than following TDD? Most of our teams have started adopting TDD, thus coaching newcomers on TDD with simple dev problems is a must! For that purpose, I picked the following two problems:
    • Mars Rover: This might be an easy problem, but I find it well suited for practicing TDD, especially for TDD newbies, as it has a lot of cases to be covered by tests.
    • Coffee Machine: The beauty of this problem is that it simulates what happens in the life cycle of an agile project, such as:
      • Defining new requirements at the start of each iteration
      • Writing the minimum code to implement the required features
      • Continuous code refactoring
      • Writing sufficient tests at each iteration
  4. Design Patterns and Code Refactoring: The two problems above may not be complex and can be solved in a short time, but the solution wasn’t the primary purpose; rather, it was introducing new concepts and practices to them. To make sure this purpose was achieved, I performed multiple code review sessions during each iteration and suggested enhancements each time. This process elongated each iteration, but it was worth it. Some of the concepts I focused on were:
    • Test coverage
    • Builder pattern
    • Visitor pattern
    • Factory pattern
    • Bad and good code practices
    • Mocking
  5. Maven: They used Maven to build the code they wrote previously, but only through its basic commands. At this phase of the training, I asked them to dig deeper into Maven to get a better understanding of how it works, mainly focusing on:
    • Phases of build lifecycle
    • Dependency management
    • Plugins
    • Local and remote repositories
  6. SCM: Whether it is Git or Perforce, there are a couple of must-know operations for any developer joining a development team. As practice with those operations, they simulated a real dev-cycle scenario by:
    • Sharing a common working directory on Perforce
    • Creating branches
    • Merging/Integrating changes
    • Resolving conflicts
  7. Continuous Integration (CI): As fresh graduates, continuous integration was a new concept for them, whereas for us it is an essential part of our development cycle. It wasn’t possible to use an existing Jenkins instance for their testing; thus they executed the steps below:
    • Download and configure Jenkins locally on their machines
    • Submit their code to Perforce
      • Add a new job that syncs, compiles the code, and executes the tests
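As a sketch of the SCM scenario in step 6, here is what the equivalent flow could look like with Git (our newcomers actually used a shared Perforce workspace; the repository, file, and branch names below are made up for illustration):

```shell
# Hypothetical Git equivalent of the dev-cycle scenario (names are illustrative)
mkdir scm-demo
git -C scm-demo init -q -b master              # -b requires Git >= 2.28
git -C scm-demo config user.email "dev@example.com"
git -C scm-demo config user.name "Dev"
echo "v1" > scm-demo/feature.txt
git -C scm-demo add feature.txt
git -C scm-demo commit -qm "initial commit"
git -C scm-demo checkout -qb feature-branch    # create a branch
echo "v2" > scm-demo/feature.txt
git -C scm-demo commit -qam "change on the branch"
git -C scm-demo checkout -q master
git -C scm-demo merge -q feature-branch        # merge/integrate the changes
```

Resolving conflicts, the last operation in the list, would come into play if master had received its own change to feature.txt before the merge.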

 

I noticed the benefits of this training from the emails they sent me at the end of the program. They detailed what they learned and, most importantly, highlighted the advantages of those practices and tools.

I hope you find this post helpful for your next newcomers’ training!

 

An XP Interview

While reading the book “Extreme Programming Explained,” I came across an interview with Brad Jensen, a Senior Vice President of Airline Products Development at Sabre Airline Solutions. During the interview, Brad explained how he applied XP within his company and some of the difficulties he faced.

In this blog, I will share my takeaways from this interview, because I think it is worth sharing with others, especially those willing to apply XP.


The Interview

Brad explained that he was able to apply XP in his company, which consisted of 300 employees: 240 developers, 25 in management, and 35 in testing. His primary purpose was to bring 13 products into one organization with one architecture and one UI.

They started by giving a one-week training to each of the 13 groups, followed by coaching once the groups started applying XP. However, he advises doing the training for one team at a time (making sure each time that the team has grasped the concepts of XP).

My Takeaways

  • How he made it work
    • XP was a perfect fit for Java projects with a motivated team
    • Testing and refactoring the C++ legacy code were very hard due to the complex design and the lack of refactoring tools. Thus, XP had to be applied in a waterfall-ish way for such projects, which meant:
      • more design and requirement gathering up-front
      • formal testing phase before deploying
    • Using XP decreased the number of defects to zero in some Java projects. For the C++ legacy code, defects dropped to one or two per thousand lines of code
    • They noticed a 40% increase in productivity
    • It wasn’t easy to adopt XP! At first, only a third bought in, whereas the rest were either skeptical or waiting to see. Eventually, 80-90% bought in, 10-20% used XP grudgingly, and 3-5% never bought in
    • If programmers refuse to pair or insist on owning code, have the courage to fire them
  • On-site Customers
    • A project manager should represent all of his customers
    • Having on-site customers is the most valuable part of XP, because it gives you the ability to properly manage the scope and visibility into whether you are going to make it
    • Without careful watching, on-site customers can cause the most problems as scope management can turn into scope creep
  • Advice for Executives
    • Plan by feature
    • Plan release once a quarter
    • Plan fixed-iterations more frequently
    • Have customers sit with the team
    • Put team in one open space

Give It a Try

In my team, we have been using this methodology for a while now, and it is working perfectly for us. We maintain clean code with test coverage up to 85%. We also managed to tune it to fit our needs, for example by doing remote pair programming (you can check some of Philippe’s posts for more insights into the way we work).

The importance of this interview is that it provides a real example of applying XP, not just theory. Thus, it might encourage leaders to give XP a try!

I highly advise you to read this book. If you are new to XP, it will give you insights into what this methodology is, why it is important, and how to adopt it within your team. If you are already an XP developer, this book is your reference for knowing whether you are applying it correctly.

“You will only value XP when you give it a try!”

 

Using H2 In-Memory to test your DAL

How should we test the Data Access Layer code?

Many developers ask that question. Like the other layers of your system, the DAL should be fully tested to prevent unexpected and random behavior in production. There are many ways to achieve that, among which are mocks and in-memory databases.

The problem with mocks is that instead of testing the validity of your SQL queries (syntax and execution), you will only be testing the validity of the system’s flow. An in-memory database, on the other hand, validates both! That is why I prefer it over mocking. But keep in mind that SQL syntax might differ from one database engine to another.

In this blog, I will be giving an example of using “H2 In-Memory” in unit tests. The code of this example is available on my GitHub account.

H2 In-Memory in Action

So, let us see H2 in action 🙂

For the sake of this blog, I will assume we want to test three SQL operations on the table “Members” having the following model:

MEMBERS
| ID      | NAME        |
| INTEGER | VARCHAR(64) |

The SQL operations to be covered in this blog are:

  1. Create Table
  2. Insert (batch of prepared statements)
  3. Select *

Code Explanation

The example is based on five files (pom.xml, Member.java, SqlRepository.java, H2Repository.java, and H2MembersRepositoryTest.java). In this section, I will give a brief explanation of each file.

pom.xml (Maven Dependencies):

First, let us modify our pom file (as shown below) to make our project depend on three artifacts:

  1. com.h2database: Using this dependency, Maven will take care of downloading the H2 jar file we will be referencing in our tests.
  2. junit: a unit testing framework for Java
  3. fest-assert: provides the fluent assertThat API used in our test class
<dependencies>
    <dependency>
        <groupId>com.h2database</groupId>
        <artifactId>h2</artifactId>
        <version>1.4.191</version>
    </dependency>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.11</version>
    </dependency>
    <dependency>
        <groupId>org.easytesting</groupId>
        <artifactId>fest-assert</artifactId>
        <version>1.4</version>
    </dependency>
</dependencies>

Member.java:

This class represents a Member record. It consists of a factory method that returns an instance of the Member class, two getter methods that return the values of the Id and Name fields, and equals/hashCode overrides so that Member instances can be compared by value in the tests.

package dal;

import java.util.Objects;

public final class Member {
    private final int id;
    private final String name;

    private Member(int id, String name) {
        this.id = id;
        this.name = name;
    }

    public static Member aMember(int id, String name) {
        return new Member(id, name);
    }

    public int id() {
        return id;
    }

    public String name() {
        return name;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        Member member = (Member) o;
        return id == member.id &&
                Objects.equals(name, member.name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(id, name);
    }
}

SqlRepository.java

In this class, we establish a connection to our database. What is important here is that we pass the connection string as a parameter to the constructor, thus keeping our code unbound to any specific database engine (MySql, H2, Sybase, Oracle, etc.). This will make writing our tests much easier!

The class consists of:

  1. Constructor: initializes an instance of SQL Connection using the connection string passed as a parameter
  2. Three public methods:
    1. createTable: executes an update query to create the “MEMBERS” table.
    2. allMembers: executes a select query and returns the found records in a list of Members.
    3. insertMembers: takes a list of “Member” as a parameter and inserts the values into the “MEMBERS” table.
package dal;

import java.sql.*;
import java.util.ArrayList;
import java.util.List;

public class SqlRepository {

    protected final Connection connection;
    private static final String CREATE_MEMBERS = "CREATE TABLE MEMBERS(ID INTEGER, NAME VARCHAR(64))";
    private static final String SELECT_MEMBERS = "SELECT * FROM MEMBERS";
    private static final String INSERT_MEMBERS = "INSERT INTO MEMBERS(ID, NAME) VALUES(?, ?)";

    public SqlRepository(String connectionString) throws SQLException {
        connection = DriverManager.getConnection(connectionString);
    }

    public boolean createTable() throws SQLException {
        Statement createStatement = connection.createStatement();
        return createStatement.execute(CREATE_MEMBERS);
    }

    public List<Member> allMembers() throws SQLException {
        List<Member> allMembers = new ArrayList<>();
        Statement selectStatement = connection.createStatement();
        ResultSet membersResultSet = selectStatement.executeQuery(SELECT_MEMBERS);
        while (membersResultSet.next()) {
            allMembers.add(Member.aMember(membersResultSet.getInt(1), membersResultSet.getString(2)));
        }
        return allMembers;
    }

    public void insertMembers(List<Member> members) throws SQLException {
        final PreparedStatement insertMembers = connection.prepareStatement(INSERT_MEMBERS);
        members.forEach(member -> insertMember(member, insertMembers));
        insertMembers.executeBatch();
    }

    private void insertMember(Member member, PreparedStatement insertMembers) {
        try {
            insertMembers.setInt(1, member.id());
            insertMembers.setString(2, member.name());
            insertMembers.addBatch();
        } catch (SQLException e) {
            throw new UnsupportedOperationException(e.getMessage());
        }
    }
}

H2Repository.java

I added this class under “Test Sources Root” because it is only used by the tests.

As you can see, it extends the SqlRepository class implemented previously; thus, we don’t have a lot to implement here. The only addition is a new method, “closeConnection”, that drops all the existing tables from the database.

You might wonder why we would need that since we are using an in-memory database. Dropping the tables might seem unnecessary for this simple example, but it becomes a necessity when running multiple test classes. That is because, in Java, all the tests run in the same JVM, which means that the H2 instance initialized in the first test class is shared with the next ones. This can lead to unexpected behavior when the same tables are used in different test classes.

package dal;

import java.sql.SQLException;

public final class H2Repository extends SqlRepository {
    public H2Repository(String connectionString) throws SQLException {
        super(connectionString);
    }

    public void closeConnection() throws SQLException {
        connection.createStatement().execute("DROP ALL OBJECTS");
    }
}

H2MembersRepositoryTest.java

This is the simple test class! Our single test (it_correctly_inserts_members_to_a_database) invokes the three methods we implemented before (createTable, insertMembers, and allMembers). If any of those methods were badly written, our test would fail.

package dal;

import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

import static dal.Member.aMember;
import static org.fest.assertions.Assertions.assertThat;

public class H2MembersRepositoryTest {

    private static final String H2_CONNECTION_STRING = "jdbc:h2:mem:test";
    private static final List<Member> MEMBERS = new ArrayList<>(10);
    private static H2Repository h2Repository;

    @BeforeClass
    public static void
    setup_database() throws SQLException {
        h2Repository = new H2Repository(H2_CONNECTION_STRING);
        initializeMembers();
    }

    @Test
    public void
    it_correctly_inserts_members_to_a_database() throws SQLException {
        h2Repository.createTable();
        h2Repository.insertMembers(MEMBERS);

        assertThat(h2Repository.allMembers()).isEqualTo(MEMBERS);
    }

    private static void initializeMembers() {
        for (int index = 0; index < 10; index++) {
            MEMBERS.add(aMember(index, "Name_" + index));
        }
    }

    @AfterClass
    public static void
    tear_down_database() throws SQLException {
        h2Repository.closeConnection();
    }
}

TearDown

If you can test your DAL, you can test anything! (I just came up with this ;))

Keep the tests going!

References:

  1. H2-Database Engine
  2. H2-Maven
  3. JUnit Maven
  4. GitHub

How to move a git repository to subdirectory of another repository

You might come across a situation when working with Git where you want to move a repository into another repository (i.e., make it a subdirectory of the other repository) while maintaining its full history.

In this blog, I will walk step by step through an example of how to achieve that.

Use Case

Let us assume we have two repositories on Git, repoA and repoB, and we want to make repoB a subdirectory of repoA while maintaining its full history.

To achieve that, we will move the content of repoB into a subfolder of repoB itself and then merge repoA and repoB.

Below is the detailed Git process:

  1. The first step is to clone repoA and repoB on your machine
    $ git clone repoA
    $ git clone repoB

    Note: For the purpose of this blog, I submitted a README.md file in each of the two repositories on Bitbucket. Below are two snapshots taken from both repositories:

    RepoA:
    REPO-A-Commits

    RepoB:
    REPO-B-Commits

  2. The second step is to move the content of repoB into a subfolder. Go to the folder repoB and apply the following steps:
    • create a subfolder repoB
      $ mkdir repoB
    • move everything from the parent repoB to the child repoB (except the .git folder)
      $ mv README.md repoB/
    • stage the files (the added folder and deleted file(s)) for a later commit
      $ git stage repoB/
      $ git stage README.md
    • commit and push those changes to git
      $ git commit -am '[REPO-B] Move content to a subfolder'
      $ git push origin master
  3. Go to the folder repoA and do the following:
    • add a remote pointing to the content of repoB
      $ git remote add repoBTemp (path_to_repoB)
    • fetch repoBTemp, the temporary remote we added in the previous step
      $ git fetch repoBTemp
    • merge repoBTemp with repoA
      $ git merge repoBTemp/master
    • delete the remote repoBTemp
      $ git remote rm repoBTemp
    • push the changes to the server

      $ git push origin master

  4. Now we can check our Git history to verify that the trick worked as intended. In the right image below, you can see that the history of repoB is now part of repoA’s history. In the left image, you can see that repoB is now a subdirectory of repoA.
  5. Now it should be safe to delete the repository repoB
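To tie the steps together, here is a hypothetical end-to-end script that reproduces the whole process with two freshly created local repositories (the repository names match the example above; the --allow-unrelated-histories flag, available since Git 2.9, is needed because the two histories share no common ancestor):

```shell
# Reproduce the walkthrough locally (illustrative; assumes Git >= 2.28 for init -b)
mkdir repoA repoB
(cd repoA && git init -q -b master \
  && git config user.email "a@example.com" && git config user.name "A" \
  && echo "Repo A" > README.md && git add README.md \
  && git commit -qm "[REPO-A] Add README")
(cd repoB && git init -q -b master \
  && git config user.email "b@example.com" && git config user.name "B" \
  && echo "Repo B" > README.md && git add README.md \
  && git commit -qm "[REPO-B] Add README")
# Step 2: move repoB's content into a subfolder named repoB
(cd repoB && mkdir repoB && git mv README.md repoB/ \
  && git commit -qm "[REPO-B] Move content to a subfolder")
# Step 3: merge repoB into repoA through a temporary remote
(cd repoA && git remote add repoBTemp ../repoB \
  && git fetch -q repoBTemp \
  && git merge -q --allow-unrelated-histories -m "Merge repoB into repoA" repoBTemp/master \
  && git remote rm repoBTemp)
```

After the script runs, repoA contains both its own README.md and repoB/README.md, and `git log` in repoA shows the commits from both repositories.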

I hope you find this blog helpful!

 

 

My First Audible Experience

The featured image is a screenshot of the last chapter of “Soft Skills: The software developer’s life manual” on Audible. It was an indication that I had just finished my first Audible book!! Honestly, I never thought I would be listening to books; it always sounded weird to me.

It all started a couple of weeks ago when Philippe suggested that I listen to this book in parallel to the other books I am reading (The Power of Full Engagement and Refactoring: Improving the Design of Existing Code). Knowing Philippe, it has to be worth a try if it is something he is suggesting 🙂 After our discussion, I directly logged in to my Amazon account and noticed that I had two free Audible books!! Within minutes, I had both the book and the Audible application downloaded on my phone.

My Feedback

It turned out I was wrong. It was an exciting and enjoyable experience and here is why:

  1. It is more accessible than reading books. Unlike reading a hard copy of a book, you can play Audible at almost any time and place (while driving, jogging, onboard a plane, etc.)
  2. You can listen even when you are too exhausted to read. I was on a night flight back home; after spending the first thirty minutes of the flight reading a book, I reached a point where I couldn’t digest the words anymore. So I switched to my phone and started listening to the book, and I was surprised at how quickly the remaining two hours of the flight passed. That was when I realized that even if you are too tired to read, you can still listen and learn.
  3. You can finish books faster. The above two points play a significant role in speeding up the process. I managed to finish this book in 15 days; it might not be a record, but it is definitely faster than reading the book.
  4. Audible is a great application. I haven’t explored each and every feature of this tool yet, but I think it is a great one for the following reasons:
    • You can quickly add bookmarks with notes if needed
    • Synchronization between the web and phone applications makes it easy to switch between the two
    • It motivates the listener by rewarding badges and suggesting bragging about it on social media
    • It has an excellent playback system especially when you get interrupted by a phone call or notification
  5. Soft Skills was a good selection. The book I picked for my first Audible experience probably helped build this positive impression. The book itself is excellent, and a must-read for all developers. The narrator being the author as well added a unique taste, as he improvised and shared personal experiences from outside the book.

I highly advise anyone to give Audible a try. Personally, I am now looking for the next book to listen to! Any suggestions? 🙂