My takeaways from a TDD debate between Cope & Uncle Bob

If you are a developer, there is a high chance that you have had a debate about the value of TDD in building software, especially if you practice it!

I had a lot of those debates!

A couple of months ago, I came across such a debate between Jim Coplien and Robert Martin (Uncle Bob). I found this discussion quite interesting, especially since it involves two leaders in software engineering.

You can watch the debate via the link in the references at the end of this post.

Takeaways

Here are my takeaways from the discussion:

Three Rules of TDD

Uncle Bob defines the following three rules for applying TDD (see the short sketch after the list):

  1. Don’t write a line of production code without having a corresponding failing test
  2. Don’t write too many failing tests without writing production code
  3. Don’t write more production code than is sufficient to pass the currently failing test
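
As a rough illustration of how these rules play out in practice (my own sketch, not taken from the debate), a first failing JUnit test and the minimal production code that makes it pass could look like this:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class StringCalculatorTest {
    // Rule 1: start with a failing test that describes the next small behaviour.
    @Test
    public void an_empty_string_sums_to_zero() {
        assertEquals(0, new StringCalculator().add(""));
    }
}

// Rule 3: write only enough production code to make that single test pass.
class StringCalculator {
    int add(String numbers) {
        return 0;
    }
}

The next failing test would then force the next small increment of production code, and so on.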

Architecture is Important

Jim points out that he has no problem with those rules; his concerns are more architecture related. Jim and Uncle Bob argue for more than ten minutes before finally reaching an agreement on the importance of architecture. The five points below summarize what they agreed on:

  1. Architecture is very important
  2. It is entirely wrong to assume that the code will assemble itself magically just by writing a lot of tests and doing quick iterations
  3. Design evolves with time and should be assembled one bit at a time
  4. An Object should have properties to give it meaning. At the beginning, those properties should be minimal.
  5. Architecture shouldn’t be created based on speculation

TDD and Professionalism

Probably the only disagreement you can sense from this debate is what defines a professional software engineer. For Uncle Bob, it is irresponsible for a software engineer to deliver a single line of code without writing a unit test for it. Jim, on the other hand, considers ‘Design by Contract’ to be more powerful than TDD.

My Point of View!

Personally, I have been applying TDD since I joined my team three years ago. After experiencing the benefits of this practice, we got to a point where we don’t write or refactor any line of code without having a corresponding unit test!

In addition to that, I had the chance to coach other developers by running coding dojo sessions at work.

All that makes me say that I agree more with Uncle Bob on the topic of professionalism!

References
  1. Coplien and Martin Debate TDD, CDD and Professionalism

How to handle Java exceptions in clean code? – Part 2

In my previous post, I explained the different types of exceptions in Java and left two questions for this post!

Which exception to use?

Java documentation and Uncle Bob’s book (Clean Code) were my references when preparing this tutorial. I noticed a slight difference between what is recommended in each!

Java Documentation

While reading the Java documentation, you sense a preference for Checked Exceptions. The documentation recommends reserving UnChecked Exceptions for conditions from which the caller cannot reasonably be expected to recover; in the cases where recovery is still possible, Checked Exceptions should be used.

Robert Martin’s opinion

Uncle Bob has a different opinion!  He argues that although Checked Exceptions might have some benefits and can be useful in special cases, like writing critical libraries, they are not necessary for building robust software.

Breaking the ‘Open/Closed Principle‘ and ‘Encapsulation‘ are the two main reasons that make using Checked Exceptions a bad idea!

Let’s see how!

In the below example, the MainBookReader.main method calls BookJsonLoader.readJsonObject to get the book’s description from a JSON file. And since readJsonObject throws an exception, we had to add a throws FileNotFoundException clause to the signature of the main method and to the interface JsonLoader!

// MainBookReader.java
import java.io.FileNotFoundException;

public class MainBookReader {
    public static void main(String[] args) throws FileNotFoundException {
        BookJsonLoader jsonLoader = new BookJsonLoader();
        Book bookFromJson = jsonLoader.readJsonObject("books.json");
        System.out.println(bookFromJson.name());
    }
}

// BookJsonLoader.java
import com.google.gson.Gson;
import java.io.FileNotFoundException;
import java.io.FileReader;

public class BookJsonLoader implements JsonLoader {
    public Book readJsonObject(String fileName) throws FileNotFoundException {
        FileReader jsonFile = new FileReader(fileName);
        Gson gson = new Gson();
        return transformToBook(gson.fromJson(jsonFile, JsonBook.class));
    }

    private Book transformToBook(JsonBook jsonBook) {
        return new Book(jsonBook.getName());
    }
}

// JsonLoader.java
import java.io.FileNotFoundException;

public interface JsonLoader {
    Book readJsonObject(String fileName) throws FileNotFoundException;
}

Adding the throws clause to most of our methods means two things!

  1. Our top-level methods and classes have to know a lot about the lower-level methods and classes!
  2. If any change is applied to the lower methods, this change has to be propagated to all the methods above them!

This is a violation of the ‘Open/Closed Principle‘ and ‘Encapsulation‘!

Is there a better way?

No matter how careful we are, things can still go wrong. Thus, we still need to deal with exceptions when they occur! The alternative to the code above is writing clean code!

In this section, I will refactor the above code to make the code look better, although it might involve more code!

Wrap the Exception

The first step is to wrap the exception in one place! In our case, we can replace the throws clause with a try-catch!

In the catch clause, I’m throwing a JsonFileNotFoundException (an unchecked exception) that I created based on the needs of this system (i.e. providing an informative message to the user).

import com.google.gson.Gson;
import java.io.FileNotFoundException;
import java.io.FileReader;

public class BookJsonLoader implements JsonLoader {
    public Book readJsonObject(String fileName) {
        try {
            FileReader jsonFile = new FileReader(fileName);
            Gson gson = new Gson();
            return transformToBook(gson.fromJson(jsonFile, JsonBook.class));
        } catch (FileNotFoundException e) {
            // Wrap the checked FileNotFoundException in our own unchecked exception
            throw new JsonFileNotFoundException("Can't find the file " + fileName + ". Please make sure it exists!", e);
        }
    }

    private Book transformToBook(JsonBook jsonBook) {
        return new Book(jsonBook.getName());
    }
}
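
The JsonFileNotFoundException class itself is not shown here; a minimal sketch of it, assuming it simply extends RuntimeException and forwards the message and the original cause, could look like this:

// Hypothetical sketch: an application-specific unchecked exception.
// Extending RuntimeException keeps it out of the callers' throws clauses.
public class JsonFileNotFoundException extends RuntimeException {
    public JsonFileNotFoundException(String message, Throwable cause) {
        super(message, cause);
    }
}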

The benefits of this refactoring are:

  • Remove all ‘throws’ clauses
  • Hide the logic and information of our class from the outer classes
  • Send an informative message to the user in case the file was not found
  • Minimize the dependency (remove imports from other classes)

Don’t return null

In some cases, developers don’t want to throw an unchecked exception, so they tend to return ‘null’ instead! That is a bad practice because it will eventually result in a NullPointerException later on.

So, what to do? The answer is simple, return a SPECIAL CASE!

The below class ‘MissingBook‘ is our special case!

public class MissingBook extends Book {

    private MissingBook(String name) {
        super(name);
    }

    public static MissingBook aMissingBook() {
        return new MissingBook("MISSING BOOK");
    }
}

This allows us to return an instance of the MissingBook in the catch instead of throwing an exception!

import com.google.gson.Gson;
import java.io.FileNotFoundException;
import java.io.FileReader;

public class BookJsonLoader implements JsonLoader {
    public Book readJsonObject(String fileName) {
        try {
            FileReader jsonFile = new FileReader(fileName);
            Gson gson = new Gson();
            return transformToBook(gson.fromJson(jsonFile, JsonBook.class));
        } catch (FileNotFoundException e) {
            // Return the special case instead of throwing an exception
            return MissingBook.aMissingBook();
        }
    }

    private Book transformToBook(JsonBook jsonBook) {
        return new Book(jsonBook.getName());
    }
}

Don’t pass null

In your code, you shouldn’t pass null as a parameter; use Special Cases instead, as we did before. But it is harder to prevent your clients from passing nulls, so you might need to assert some function parameters before proceeding with your code execution, as sketched below!
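
As an illustration (my own sketch, not from the original post), such an assertion can be as simple as a small guard helper built on java.util.Objects, which fails fast with an informative message instead of letting the null surface later:

import java.util.Objects;

// Hypothetical helper class, used here only to illustrate parameter assertions.
public final class Guards {

    private Guards() {
    }

    // Fails immediately with a clear message if the caller passes null.
    public static String requireFileName(String fileName) {
        return Objects.requireNonNull(fileName, "fileName must not be null");
    }
}

readJsonObject could then start with Guards.requireFileName(fileName) before touching the file system.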

Finally

Doing the refactoring for the simple example above might look like over-engineering! But, it becomes beneficial when writing more complex code!

I think that the excessive usage of Checked Exceptions pollutes the code and should thus be avoided! For me, the benefits of Checked Exceptions can still be achieved through:

  1. Following the rules of Clean Code
  2. Increasing test coverage of the code by following the TDD practice! With TDD, all the edge cases should be covered, thus eliminating the possibility of unexpected exceptions during execution!

To better understand the importance of clean code and how to write it, I highly recommend reading the book Clean Code by Robert Martin!

References:

  1. Clean Code: A Handbook of Agile Software Craftsmanship
  2. Unchecked Exceptions — The Controversy
  3. Java Exceptions
  4. The Clean Code Blog

How to handle Java exceptions in clean code? – Part 1

Reading the chapter ‘Error Handling‘ from the book ‘Clean Code‘ made me wonder whether developers follow the clean code rules when writing production code. It also appeared to me that the difference between the Java exception types might not be clear to some!

The purpose of this short tutorial is to clarify those two points. And to make it clearer, I will be dividing it into two posts. In the first one, I will explain the different types of Java exceptions. Whereas in the second one, I will show how to write clean exception code!

Java Exceptions

In Java, the class Throwable is at the top of the class hierarchy of all exceptions. The classes Error and Exception directly extend Throwable. All the subclasses of Error and Exception are grouped into two types, ‘Checked Exceptions‘ and ‘UnChecked Exceptions‘, as displayed in the image below.

[Image: Java exception class hierarchy diagram]
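
To make the hierarchy concrete, the following small program (a sketch of my own, not from the original post) checks the same relationships:

public class ExceptionHierarchyDemo {
    public static void main(String[] args) {
        // Error and Exception both extend Throwable directly.
        System.out.println(Throwable.class.isAssignableFrom(Error.class));            // true
        System.out.println(Throwable.class.isAssignableFrom(Exception.class));        // true
        // RuntimeException extends Exception, yet it and its subclasses are
        // treated as unchecked, just like Error and its subclasses.
        System.out.println(Exception.class.isAssignableFrom(RuntimeException.class)); // true
    }
}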

What is the difference between Checked and UnChecked Exceptions?

UnChecked Exceptions

UnChecked Exceptions extend the classes ‘Error‘ or ‘RuntimeException‘. In this case, developers don’t have to worry about catching or handling the exceptions at compile time, as the compiler won’t report any error. But if they are not caught, those exceptions may result in a complete failure of the application when thrown during execution.

Below are two examples of UnChecked Exceptions!

1. Runtime Exception

The below method simply divides two numbers:

private static int divide(int numerator, int denominator) {
    return numerator / denominator;
}

The code is fine and will compile with no errors! So what is the problem?

The problem might occur at runtime if we try to call the method while passing zero as the denominator. Since this exception is not caught, our system will crash throwing the below arithmetic exception:

Exception in thread "main" java.lang.ArithmeticException: / by zero
 at MainUnCheckedException.divide(MainUnCheckedException.java:7) ...

2. Error

The below method compiles, but since I missed writing the base cases, calling it will crash the system and throw a StackOverflowError (as shown below)!

private static int fibonacci(int number) {
    return fibonacci(number - 1) + fibonacci(number - 2);
}

Exception in thread "main" java.lang.StackOverflowError
 at MainUnCheckedException.fibonacci(MainUnCheckedException.java:10)...

Checked Exceptions

On the other hand, all other classes extending the ‘Exception‘ class are considered to be Checked Exceptions. In this case, the code will not compile if the developer doesn’t explicitly handle the exception.

This adds a level of safety to your code, as you are forced to specify how your code should behave when an exception occurs, thus decreasing the chance of having an unrecoverable failure in the system.

Let’s see an example!

import java.io.FileInputStream;

public class MainCheckedException {
    public static void main(String[] args) {
        FileInputStream fileInputStream = new FileInputStream("foo.txt");
    }
}

Error:(6, 43) java: unreported exception java.io.FileNotFoundException;
must be caught or declared to be thrown

In the above code, we are trying to read a file ‘foo.txt’, but our compiler complains that we need to handle the FileNotFoundException.

Solving this compilation error can be done in two ways:

  1. Try Catch: The first solution requires surrounding the code with a try-catch.
    private static void readFileTryCatch() {
        try {
            FileInputStream fooFile = new FileInputStream("foo.txt");
        } catch (FileNotFoundException e) {
            System.out.println(e.getMessage());
        }
    }

    Even if the file was not there, this code would not break thanks to the try-catch. As shown in the below code and output, the system will print out the exception and continue execution!

    public static void main(String[] args) {
        readFileTryCatch();
        System.out.println("Done");
    }
    ------------Output------------
    foo.txt (No such file or directory)
    Done
  2. Declare the exception: The other solution is adding a ‘throws FileNotFoundException‘ clause to the method signature.
    private static void readFileThrow() throws FileNotFoundException {
        new FileInputStream("foo.txt");
    }

    Using this solution will propagate handling the exception to the calling methods as illustrated in the two options below:
    Option 1: Adding throws to all methods signatures 

    public static void main(String[] args) throws FileNotFoundException {
        readFileThrow();
        System.out.println("Done");
    }
    ------------Output------------
    Exception in thread "main" java.io.FileNotFoundException: foo.txt (No such file or directory)
    at java.io.FileInputStream.open0(Native Method) ...

    Option 2: Catching the exception in one of the methods in the call stack

    public static void main(String[] args)  {
        try {
            readFileThrow();
        } catch (FileNotFoundException e) {
            System.out.println(e.getMessage());
        }
        System.out.println("Done");
    }
    ------------Output------------
    foo.txt (No such file or directory)
    Done

I hope this post gave you a better understanding of Java exceptions! In the 2nd part, I will be sharing more details on how to properly write exceptions in clean code and which of the two exception types to use!

Using H2 In-Memory to test your DAL

How should we test the Data Access Layer code?

Many developers ask that question. Like the other layers of your system, the Data Access Layer should be fully tested to prevent unexpected and random behavior in production. There are many ways to achieve that, among which are mocks and an in-memory database.

The problem with mocks is that instead of testing the validity of your SQL queries (syntax and execution), you will only be testing the validity of the system’s flow. On the other hand, using the in-memory database will validate both! That is why I prefer it over mocking. But, you should keep in mind that the SQL syntax might differ from one database engine to another.

In this blog, I will be giving an example of using “H2 In-Memory” in unit tests. The code of this example is available on my GitHub account.

H2 In-Memory in Action

So, let us see H2 in action 🙂

For the sake of this blog, I will assume we want to test three SQL operations on the table “Members” having the following model:

MEMBERS
ID      | NAME        |
INTEGER | VARCHAR(64) |

The SQL operations to be covered in this blog are:

  1. Create Table
  2. Insert (batch of prepared statements)
  3. Select *

Code Explanation

The example will be based on five files (pom.xml, Member.java, SqlRepository.java, H2Repository.java and H2MembersRepositoryTest.java). In this section, I will give a brief explanation of each file.

Pom.xml (Maven Dependencies):

First, let us modify our pom file (as shown below) to make our project depend on two projects:

  1. com.h2database: Using this dependency, Maven will take care of downloading the h2 jar file we will be referencing in our tests.
  2. JUnit: a unit testing framework for Java
<dependencies>
    <dependency>
        <groupId>com.h2database</groupId>
        <artifactId>h2</artifactId>
        <version>1.4.191</version>
    </dependency>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.11</version>
    </dependency>
</dependencies>

Member.java:

This class represents a Member record. It consists of a factory method that returns an instance of the Member class, two getter methods that return the values of the Id and Name fields, and equals/hashCode overrides so that two members with the same values are considered equal (which the test below relies on).

package dal;

import java.util.Objects;

public final class Member {
    private final int id;
    private final String name;

    private Member(int id, String name) {
        this.id = id;
        this.name = name;
    }

    public static Member aMember(int id, String name) {
        return new Member(id, name);
    }

    public int id() {
        return id;
    }

    public String name() {
        return name;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        Member member = (Member) o;
        return id == member.id &&
                Objects.equals(name, member.name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(id, name);
    }
}

SqlRepository.java

In this class, we establish a connection to our database. What is important in this class is that we pass the connection string as a parameter to the constructor, thus keeping our code unbound to any specific database engine (MySql, H2, Sybase, Oracle, etc.). This will make writing our tests much easier!

The class consists of:

  1. Constructor: initializes an instance of SQL Connection using the connection string passed as a parameter
  2. Three public methods:
    1. createTable: executes an update query to create the “MEMBERS” table.
    2. allMembers: executes a select query and returns the found records in a list of Members.
    3. insertMembers: takes a list of “Member” as a parameter and inserts the values into the “MEMBERS” table.
package dal;

import dal.Member;

import java.sql.*;
import java.util.ArrayList;
import java.util.List;

public class SqlRepository {

    protected final Connection connection;
    private static final String CREATE_MEMBERS = "CREATE TABLE MEMBERS(ID INTEGER, NAME VARCHAR(64))";
    private static final String SELECT_MEMBERS = "SELECT * FROM MEMBERS";
    private static final String INSERT_MEMBERS = "INSERT INTO MEMBERS(ID, NAME) VALUES(?, ?)";

    public SqlRepository(String connectionString) throws SQLException {
        connection = DriverManager.getConnection(connectionString);
    }

    public boolean createTable() throws SQLException {
        Statement createStatement = connection.createStatement();
        return createStatement.execute(CREATE_MEMBERS);
    }

    public List<Member> allMembers() throws SQLException {
        List<Member> allMembers = new ArrayList<>();
        Statement selectStatement = connection.createStatement();
        ResultSet membersResultSet = selectStatement.executeQuery(SELECT_MEMBERS);
        while (membersResultSet.next()) {
            allMembers.add(Member.aMember(membersResultSet.getInt(1), membersResultSet.getString(2)));
        }
        return allMembers;
    }

    public void insertMembers(List<Member> members) throws SQLException {
        final PreparedStatement insertMembers = connection.prepareStatement(INSERT_MEMBERS);
        members.forEach(member -> insertMember(member, insertMembers));
        insertMembers.executeBatch();
    }

    private void insertMember(Member member, PreparedStatement insertMembers) {
        try {
            insertMembers.setInt(1, member.id());
            insertMembers.setString(2, member.name());
            insertMembers.addBatch();
        } catch (SQLException e) {
            throw new UnsupportedOperationException(e.getMessage());
        }
    }
}

H2Repository.java

I added this class under “Test Sources Root” because it is only used by the tests.

As you can see, it extends the SqlRepository class implemented previously, so we don’t have a lot to implement here. The only addition is a new method, “closeConnection”, which drops all the existing tables from the database.

You might wonder why we would need that since we are using an in-memory database. That might be true for this simple example, but it becomes a necessity when running multiple test classes. That is because, in Java, all the tests run in the same JVM, which means that the H2 instance initialized in the first test class will be shared with the following test classes. This can lead to unexpected behavior when different test classes use the same tables.

package dal;

import java.sql.SQLException;

public final class H2Repository extends SqlRepository {
    public H2Repository(String connectionString) throws SQLException {
        super(connectionString);
    }

    public void closeConnection() throws SQLException {
        connection.createStatement().execute("DROP ALL OBJECTS");
    }
}

H2MembersRepositoryTest.java

This is the simple test class! Our single test (it_correctly_inserts_members_to_a_database) invokes the three methods we implemented before (createTable, insertMembers and allMembers). If any of those methods were badly written, our test would fail. Note that the assertThat used below comes from FEST-Assert, which needs its own Maven dependency alongside JUnit and H2.

package dal;

import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

import static dal.Member.aMember;
import static org.fest.assertions.Assertions.assertThat;

public class H2MembersRepositoryTest {

    private static final String H2_CONNECTION_STRING = "jdbc:h2:mem:test";
    private static final List<Member> MEMBERS = new ArrayList<>(10);
    private static H2Repository h2Repository;

    @BeforeClass
    public static void
    setup_database() throws SQLException {
        h2Repository = new H2Repository(H2_CONNECTION_STRING);
        initializeMembers();
    }

    @Test
    public void
    it_correctly_inserts_members_to_a_database() throws SQLException {
        h2Repository.createTable();
        h2Repository.insertMembers(MEMBERS);

        assertThat(h2Repository.allMembers()).isEqualTo(MEMBERS);
    }

    private static void initializeMembers() {
        for (int index = 0; index < 10; index++) {
            MEMBERS.add(aMember(index, "Name_" + index));
        }
    }

    @AfterClass
    public static void
    tear_down_database() throws SQLException {
        h2Repository.closeConnection();
    }
}

TearDown

If you can test your DAL, you can test anything! (I just came up with this ;))

Keep the tests going!

References:

  1. H2-Database Engine
  2. H2-Maven
  3. JUnit Maven
  4. GitHub

How to move a git repository to subdirectory of another repository

You might come across a situation when working with Git where you want to move a repository into another repository (i.e., make it a subdirectory of the other repository) while maintaining its full history.

In this blog, I will be pointing out step by step with an example how to achieve that.

Use Case

Let us assume we have two repositories on Git, repoA and repoB, and we want to make repoB a subdirectory of repoA while maintaining its full history.

To achieve that we will move the content of repoB into a subfolder under repoB and then merge repoA and repoB. 

Below is the detailed git process:

  1. The first step is to clone repoA and repoB on your machine
    $ git clone repoA
    $ git clone repoB

    Note: For the purpose of this blog, I committed a README.md file in each of the two repositories on Bitbucket. Below are two snapshots taken of both repositories:

    RepoA:
    [Image: repoA commit history]

    RepoB:
    [Image: repoB commit history]

  2. The second step is to copy the content of repoB to a subfolder. Go to the folder repoB and apply the following steps:
    • create a subfolder repoB
      $ mkdir repoB
    • move everything from the parent repoB into the child repoB folder (except the .git folder); for the README.md used in this example, that is:
      $ mv README.md repoB/
    • stage the files (the added folder and deleted file(s)) for a later commit
      $ git stage repoB/
      $ git stage README.md
    • commit and push those changes to git
      $ git commit -am '[REPO-B] Move content to a subfolder'
      $ git push origin master
  3. Go to the folder repoA and do the following:
    • add a remote branch with the content of repoB
      $ git remote add repoBTemp (path_to_repoB)
    • fetch repoBTemp, the temporary remote we added in the previous step
      $ git fetch repoBTemp
    • merge repoBTemp with repoA (note: on recent Git versions you may need to add the --allow-unrelated-histories flag to the merge command)
      $ git merge repoBTemp/master
    • delete the remote repoBTemp
      $ git remote rm repoBTemp
    • push the changes to the server

      $ git push origin master

  4. Now we can check the git history to verify that our trick worked as intended: the history of repoB is now part of repoA’s history, and repoB is now a subdirectory of repoA.
  5. Now it should be safe to delete the original repoB repository.

I hope you find this blog helpful!

Let’s make the tests green

Two weeks ago we started running coding dojo sessions at our offices in Beirut, and the first sessions we ran were much better than we expected. In this article, I will share my experience (as the one running those sessions), from the preparation all the way to the feedback on those sessions.

What is CodingDojo?

CodingDojo is a meeting where a group of people gets together to solve a programming challenge following TDD. We select those challenges from several online sites like Google Code Jam and Cyber Dojo. Each dojo session is planned to take two hours, including a 15-minute retrospective at the end, and is scheduled once a week. All the participants are contributors, as we rotate the driver (the one who is coding) every five minutes; it is therefore important to limit the number of participants per session to a maximum of eight. We carefully chose the session timing to be during lunchtime, from 12:00 to 14:00, in order not to interrupt participants’ usual tasks. Finally, it is important to note that participation in those sessions is not restricted to specific teams; anyone, regardless of their programming expertise, is welcome to join. After all, CodingDojo is a place where we take knowledge sharing to the extreme.

From Paris to Beirut

It started almost two years ago as an activity within our team only, but eight months ago it was opened up to interested people from other teams (in Paris only) as well. At the end of each session, a 15-minute retrospective was held, during which we always received positive feedback from the participants. For them, it was a chance to learn how to write code following TDD, learn new programming languages and techniques, and enjoy a teamwork spirit while solving a programming challenge.
We decided to build on the experience we gained in Paris and start similar sessions in Beirut. At that point, we had no clue about the participation level or the success rate, but it was a risk we were willing to take.

Preparation

The first step of the preparation was to contact the HR team and ask for their sponsorship, as such an activity involves the communication with, and involvement of, many teams. We had a meeting with them and explained what CodingDojo is and what is expected from those sessions; as expected, they were very supportive of such an initiative. They granted us their sponsorship and offered to help with company-wide communication, with coordinating lunch for participants with the administration, and with whatever else might be needed later.

Second came the technical preparation, to better run and manage such an activity. Writing code following TDD has become a habit, as I have been doing it for more than a year and a half (since I joined my new team). Still, I decided to practice more by solving some challenges outside working hours.
For me, managing such sessions was the main challenge and a new experience. For that, I started attending (via video conference) the sessions run by Philippe (my teammate in Paris), during which I tried to benefit from his experience and note down the tips and advice he provided. Finally, Antoine (another teammate in Paris) also agreed to visit Beirut and assist me in running the first two sessions.

Communication

After setting the start date to Wednesday, July 15, 2015, we sent a communication email to all Beirut employees announcing the kick-off of CodingDojo and opening the door for registration. In less than one day, more than 30 participants had registered. That was a surprise!!! We weren’t expecting that many participants, especially since many of them were not developers. Since running a single session with 30 participants was impossible, we decided to divide the participants into four groups/sessions over the following four weeks.

First Two Sessions

For the first two sessions, Antoine and I prepared very well by selecting and solving the problems ahead of time. Despite that, we made sure not to impose our solution; rather, we tried to have an open discussion and agree with the rest on the next tests to write.

The first 30 minutes of both sessions were the hardest. Almost none of the participants were familiar with the concept of writing the test before the code, which is why we made sure to be the first two to drive; when others were coding, you would often hear us saying, “That is correct, but you forgot to write the test!” After that, things started to go more smoothly: participants got acquainted with the concept and noticed whenever they skipped the tests.

For us, the first two sessions were great and much better than we expected. We expected to see someone trying to control the keyboard or dictate their solution; on the contrary, everyone respected the five-minute limit they had, and they all shared the solution with the others.
It was clear to us, from the feedback (listed below) we got from the participants during the retrospectives, that they enjoyed their time and are willing to attend future sessions:

  • Nice, good experience; you learn from others
  • Nice technique to learn when trying to write the first test, and a nice way to think incrementally
  • It is hard to do it alone; it is easier with a group
  • I am not a developer and not familiar with Java, but it was good to see the mindset of the developers
  • Nice to meet people from other teams
  • Nice practice because you are forced to follow a convention; it taught us new best practices
  • I like the idea of refactoring; the complex code was refactored into a couple of lines that made it more readable
  • TDD is a bit slow; the pro is the interaction with each other

My Feedback

Running those sessions is not an easy task; it requires a lot of effort before and during each session. Before any session, the leader should prepare the right environment for the participants. During the session, the leader should stay focused on what the driver is writing while at the same time listening to the suggestions proposed by others and answering the questions raised.
Despite that, it is a great experience from which you learn and gain a lot.
In coming sessions we might introduce some new challenges, like increasing the problem complexity or trying new programming languages. I hope that people will remain motivated and enthusiastic about attending the coming sessions!

P.S: The code we write during those sessions will be committed to this GitHub repository.