A good commit message can make a difference!

Some developers have a habit of committing their code without a clear commit message.

I was one of them! Now, I consider that to be a bad habit.

With time, I realized that it is hard to remember the intention of an old commit by just looking at the code. It becomes nearly impossible when reviewing a colleague’s code.

In this post, I will be sharing a template message I adopted with one of my previous teams.

For simplicity, I will be referring to one of my commits to a fork of the GildedRose-Refactoring-Kata on my GitHub account.

Message Template

Below is the template. As you can see, it is split into four sections. In the rest of the post, I will break down the message and explain each section separately.

[Issue Tracker #] One-line summary of the commit description 

Why is this change necessary? 
  - Detailed description 1
  - Detailed description 2
  ... 

Section I – ‘[Issue Tracker #]’:

Between the two brackets, I write the issue number corresponding to this commit. The format here varies depending on which issue tracking system I am using (e.g., [ProjectName-123] for Jira and [#123] for GitHub).

Message so far

[#5] One-line summary of the commit description   

Why is this change necessary? 
   - Detailed description 1 
   - Detailed description 2 
   ... 

Section II – ‘One-line summary of the commit description’:

In the second section, I try to summarize the full commit in just one sentence. Failing to do so means that my commit is doing more than one task and needs to be broken down.

Message so far

[#5] Move the constants out of the GildedRose class

Why is this change necessary? 
   - Detailed description 1 
   - Detailed description 2 
   ... 

Section III – ‘Why is this change necessary’: 

In this line, I try to answer the question ‘Why is this change necessary?’ In most cases, this line is a copy of the ‘Issue Title’.

Message so far

[#5] Move the constants out of the GildedRose class 

In order to refactor and clean the code of GildedRose
   - Detailed description 1 
   - Detailed description 2 
   ... 

Section IV – ‘Detailed Description’:

Finally, I replace the items of the list in this section with a detailed description of the important technical changes I made in the code. Here, I try to think of what I would need to remember if I were to read this message a couple of months later.

The final commit message looks like

[#5] Move the constants out of the GildedRose class 

In order to refactor and clean the code of GildedRose
   - Make MAX_QUALITY, MIN_QUALITY & MIN_SELL_IN_DATE local variables of ItemVisitor
   - Move AGED_BRIE string to the AgedBrie class
   - Move SULFURAS string to the Sulfuras class
   - Move CONCERT string to the Concert class
   - Adapt the tests accordingly

By just reading this last message, anyone can deduce that, first, this commit only moved some constants out of the GildedRose class and, second, that the commit was part of a larger refactoring of the GildedRose class.

How is this helpful?

So, why do I consider this to be helpful?

Here are my reasons:

  1. Forces me to code in baby steps and thus have small commits.
  2. Having small commits makes it easier, later on, to track down bugs.
  3. Makes code review easier and faster.
  4. Most SCMs and issue-tracking tools are now integrated, which makes linking commits to an issue seamless. This link has two benefits:
    1. By just adding the issue number to the message, I can open the issue’s details and view all the corresponding commits.
    2. It limits my development to the features and bugs in the project’s backlog, so I won’t be submitting any useless code.
  5. Serves as documentation for my code. More than once, I referred to these messages when writing a detailed technical report on our product.

Finally…

At first, it was a bit annoying to write this message for each commit. But as we realized its benefits, it became a habit for us!

You don’t have to adopt this exact message; you can come up with any format that works for you, as long as it is clear and concise.

Enjoy coding 🙂

Tutorial: How to configure the maven-surefire-plugin to work with JUnit 5

Recently, I had a task to migrate the unit tests in our project from ‘JUnit 4’ to ‘JUnit 5’. Like many developers, I researched a bit to learn about the major differences between the two versions and drafted a plan for a smooth migration.

At first glance, I thought my only job was refactoring the tests and probably some helper classes. But I was wrong! I ran into an issue with the maven-surefire-plugin configuration.

In this post, I will be sharing the encountered issue and its fix. Note that I will not cover the detailed steps of how to migrate from ‘JUnit 4’ to ‘JUnit 5’.

Tests run: 0!

We, like many other developers, use the maven-surefire-plugin to run our tests during the test phase of Maven’s build lifecycle. We rely heavily on this plugin because it fails the build when one of the tests is broken!

With JUnit 4, everything was working perfectly! But after the migration, the plugin was always reporting that no tests were run:

'Tests run: 0, Failures: 0, Errors: 0, Skipped: 0 ...'!

Calculator Class

For the purpose of this blog, let us assume we want to build a Calculator. For now, we only have the ‘add‘ method implemented as shown below.

Calculator.java

public final class Calculator {
    public static int add(int firstNumber, int secondNumber) {
        return firstNumber + secondNumber;
    }
}

JUnit 4 Code


I will start with the ‘JUnit 4’ version of the code.

pom.xml

We need to depend on the junit artifact and add the surefire plugin. Here is the required pom file:

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <version>2.19.1</version>
        </plugin>
    </plugins>
</build>

<dependencies>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.12</version>
        <scope>test</scope>
    </dependency>
</dependencies>

CalculatorTest.java

In the test class, we have one test that asserts our method is doing the correct addition.

import org.junit.Assert;
import org.junit.Test;

public class CalculatorTest {
    @Test
    public void
    our_calculator_should_add_2_numbers() {
        Assert.assertEquals(5, Calculator.add(2, 3));
    }
}

Build Output

Now, if we try to run ‘mvn clean install’ on the above pom file, we get the below output:

-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running CalculatorTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.037 sec - in CalculatorTest

Results :

Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

****
[INFO] --------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] --------------------------------------------------

Everything works perfectly!

JUnit 5 Code


It’s time to migrate our code to ‘JUnit 5’!

pom.xml

The first step is changing our dependencies in the pom file.

Some significant changes were applied to the ‘JUnit 5’ dependency metadata. The framework’s functionality has been split across several artifacts:

  • junit-jupiter-api
  • junit-jupiter-engine
  • junit-platform-suite-api
  • junit-jupiter-params
  • junit-platform-surefire-provider

For this simple example, it is enough to depend on ‘junit-jupiter-engine‘ & ‘junit-platform-surefire-provider‘.

So, our pom becomes:

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <version>2.19.1</version>
        </plugin>
    </plugins>
</build>

<dependencies>
    <dependency>
        <groupId>org.junit.jupiter</groupId>
        <artifactId>junit-jupiter-engine</artifactId>
        <version>5.0.0</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.junit.platform</groupId>
        <artifactId>junit-platform-surefire-provider</artifactId>
        <version>1.0.0</version>
     </dependency>
</dependencies>

P.S. Check the ‘JUnit 5 User Guide’ if you are interested in more details on the ‘JUnit 5’ artifacts.

CalculatorTest.java

A good portion of the test code has to be changed to migrate to ‘JUnit 5’. Again, I am not going to cover those changes in this blog.

For the simple test we have written before, we need to change two things:

  1. Change the import statements
  2. Change the assertion call

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;

public class CalculatorTest {
    @Test
    public void
    our_calculator_should_add_2_numbers() {
        Assertions.assertEquals(5, Calculator.add(2, 3));
    }
}

Build Output

Although the ‘mvn clean install’ command still returns a ‘BUILD SUCCESS’ message, it is actually not running any tests, which makes the whole build process suspicious.

This is our issue!

-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running CalculatorTest
Tests run: 0, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.001 sec - in CalculatorTest

Results :

Tests run: 0, Failures: 0, Errors: 0, Skipped: 0

****
[INFO] ------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------

The Fix

The fix for this problem is simple: we need to modify the build section in our pom and add the two dependencies to the ‘maven-surefire-plugin’ plugin section, as shown below.

By doing so, we force the maven-surefire-plugin to use the JUnit Platform provider and the Jupiter engine, and thus run the JUnit 5 tests.

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <version>2.19.1</version>
            <dependencies>
                <dependency>
                    <groupId>org.junit.platform</groupId>
                    <artifactId>junit-platform-surefire-provider</artifactId>
                    <version>1.0.0</version>
                </dependency>
                <dependency>
                    <groupId>org.junit.jupiter</groupId>
                    <artifactId>junit-jupiter-engine</artifactId>
                    <version>5.0.0</version>
                </dependency>
            </dependencies>
        </plugin>
    </plugins>
</build>

Running ‘mvn clean install‘ will return the correct output now:

-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running CalculatorTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.044 sec - in CalculatorTest

Results :

Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

****
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------

 

I hope this blog post will save you some time when migrating to ‘JUnit 5’!

Good luck 🙂

My takeaways from a TDD debate between Cope & Uncle Bob

As a developer, there is a high chance that you had a debate on the value of TDD in building software, especially if you apply it!

I had a lot of those debates!

A couple of months ago, I came across such a debate between Jim Coplien and Robert Martin (Uncle Bob). I found this discussion quite interesting, especially since it involves two leaders in software engineering.

You can find a link to the full debate in the references at the end of this post.

Takeaways

Here are my takeaways from the discussion:

Three Rules of TDD

Uncle Bob defines the following three rules for applying TDD:

  1. Don’t write a line of production code without having a corresponding failing test
  2. Don’t write too many failing tests without writing production code
  3. Don’t write more production code than is sufficient to pass the currently failing test
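
To make the rules concrete, here is a minimal sketch of a single red/green step (my own illustration, not an example taken from the debate). First, a failing test is written for behavior that does not exist yet (rule 1); then just enough production code is added to make that one test pass (rule 3):

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;

// Step 1 (red): this test fails because Stack does not exist yet
public class StackTest {
    @Test
    public void a_new_stack_should_be_empty() {
        Assertions.assertTrue(new Stack().isEmpty());
    }
}

// Step 2 (green): the simplest production code that makes the test pass
public final class Stack {
    public boolean isEmpty() {
        return true;
    }
}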

Architecture is Important

Jim points out that he has no problem with those rules; his concerns are more architecture-related. Jim and Uncle Bob argue for more than ten minutes before finally reaching an agreement on the importance of architecture. The five points below summarize what they agreed on:

  1. Architecture is very important
  2. It is entirely wrong to assume that the code will assemble itself magically just by writing a lot of tests and doing quick iterations
  3. Design evolves with time and should be assembled one bit at a time
  4. An Object should have properties to give it meaning. At the beginning, those properties should be minimal.
  5. Architecture shouldn’t be created based on speculation

TDD and Professionalism

Probably the only disagreement you can sense from this debate is what defines a professional software engineer. For Uncle Bob, it is irresponsible for a software engineer to deliver a single line of code without writing a unit test for it. Jim, on the other hand, considers ‘Design by Contract’ to be more powerful than TDD.

My Point of View!

Personally, I have been applying TDD since I joined my team three years ago. After experiencing the benefits of this practice, we got to a point where we don’t write or refactor any line of code without having a corresponding unit test!

In addition to that, I had the chance to coach other developers by running coding dojo sessions at work.

All that makes me say that I agree more with Uncle Bob on the topic of professionalism!

References
  1. Coplien and Martin Debate TDD, CDD and Professionalism

How to handle Java exceptions in clean code? – Part 2

In my previous post, I explained the different types of exceptions in Java and left two questions for this post!

Which exception to use?

The Java documentation and Uncle Bob’s book (Clean Code) were my references when preparing this tutorial. I noticed a slight difference between what each recommends!

Java Documentation

While reading the Java documentation, you sense a preference for Checked Exceptions. The documentation states that the usage of UnChecked Exceptions should be restricted to the case where crashing the system is intentional if an exception occurs. In other cases where recovery is still possible, Checked Exceptions should be used.

Robert Martin’s opinion

Uncle Bob has a different opinion! He argues that although Checked Exceptions might have some benefits and can be useful in special cases like writing critical libraries, they are not necessary for robust software.

Breaking the ‘Open/Closed Principle’ and ‘Encapsulation’ are the two main reasons that make using Checked Exceptions a bad idea!

Let’s see how!

In the below example, the MainBookReader.main method calls BookJsonLoader.readJsonObject to get the book’s description from a JSON file. And since readJsonObject throws a checked exception, we had to add a throws FileNotFoundException clause to the signature of the main method and to the JsonLoader interface!

import java.io.FileNotFoundException;

public class MainBookReader {
    public static void main(String[] args) throws FileNotFoundException {
        BookJsonLoader jsonLoader = new BookJsonLoader();
        Book bookFromJson = jsonLoader.readJsonObject("books.json");
        System.out.println(bookFromJson.name());
    }
}

import com.google.gson.Gson;
import java.io.FileNotFoundException;
import java.io.FileReader;

public class BookJsonLoader implements JsonLoader {
    public Book readJsonObject(String fileName) throws FileNotFoundException {
        FileReader jsonFile = new FileReader(fileName);
        Gson gson = new Gson();
        return transformToBook(gson.fromJson(jsonFile, JsonBook.class));
    }

    private Book transformToBook(JsonBook jsonBook) {
        return new Book(jsonBook.getName());
    }
}

import java.io.FileNotFoundException;

public interface JsonLoader {
    Book readJsonObject(String fileName) throws FileNotFoundException;
}

Adding the throws clause to most of our methods means two things!

  1. Our top-level methods and classes have to know a lot about the lower-level ones!
  2. If any change is applied to the lower-level methods, it has to be replicated to all the methods above them!

This is a violation of the ‘Open/Closed Principle’ and ‘Encapsulation’!

Is there a better way?

No matter how careful we are, things can still go wrong. Thus, we still need to deal with exceptions when they occur! The alternative to the code above is writing clean code!

In this section, I will refactor the above code to make it look better, even though the result might involve more code!

Wrap the Exception

The first step is to wrap the exception in one place! In our case, we can replace the throws clause with a try-catch!

In the catch clause, I’m throwing a JsonFileNotFoundException (an unchecked exception) that I created based on the needs of this system (i.e., providing an informative message to the user).

public class BookJsonLoader implements JsonLoader {
    public Book readJsonObject(String fileName){
        try {
            FileReader jsonFile = new FileReader(fileName);
            Gson gson = new Gson();
            return transformToBook(gson.fromJson(jsonFile, JsonBook.class));
        } catch (FileNotFoundException e) {
            throw new JsonFileNotFoundException("Can't find the file " + fileName + ". Please make sure it exists!", e);
        }
    }

    private Book transformToBook(JsonBook jsonBook) {
        return new Book(jsonBook.getName());
    }
}
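
The JsonFileNotFoundException class itself is not shown in the original code; a minimal version of such an unchecked exception, matching the way it is used above, could look like this:

public class JsonFileNotFoundException extends RuntimeException {
    // Wraps the original checked exception while keeping an informative message
    public JsonFileNotFoundException(String message, Throwable cause) {
        super(message, cause);
    }
}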

The benefits of this refactoring are:

  • Remove all ‘throws’ clauses
  • Hide the logic and information of our class from the outer classes
  • Send an informative message to the user in case the file was not found
  • Minimize the dependencies (remove imports from other classes)

Don’t return null

In some cases, developers don’t want to throw an unchecked exception, so they tend to return ‘null’ instead! That is a bad practice because it will eventually result in a NullPointerException later.
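
For example (a hypothetical caller, not part of the original code), the failure then shows up far away from its real cause:

Book book = jsonLoader.readJsonObject("books.json"); // suppose this returned null
System.out.println(book.name()); // NullPointerException here, not where the file was missing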

So, what to do? The answer is simple: return a SPECIAL CASE!

The below class ‘MissingBook’ is our special case!

public class MissingBook extends Book {

    private MissingBook(String name) {
        super(name);
    }

    public static MissingBook aMissingBook() {
        return new MissingBook("MISSING BOOK");
    }
}

This allows us to return an instance of MissingBook in the catch clause instead of throwing an exception!

public class BookJsonLoader implements JsonLoader {
    public Book readJsonObject(String fileName){
        try {
            FileReader jsonFile = new FileReader(fileName);
            Gson gson = new Gson();
            return transformToBook(gson.fromJson(jsonFile, JsonBook.class));
        } catch (FileNotFoundException e) {
            return MissingBook.aMissingBook();
        }
    }

    private Book transformToBook(JsonBook jsonBook) {
        return new Book(jsonBook.getName());
    }
}

Don’t pass null

In your code, you shouldn’t pass null as a parameter; instead, use Special Cases as before. However, it is harder to prevent your clients from passing nulls, so you might need to check some function parameters before proceeding with your code execution!
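
For instance, the first line of readJsonObject could fail fast with a clear message using java.util.Objects from the standard library (a sketch, not part of the original code):

Objects.requireNonNull(fileName, "fileName must not be null");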

Finally

Doing the refactoring for the simple example above might look like over-engineering! But, it becomes beneficial when writing more complex code!

I think that excessive usage of Checked Exceptions pollutes the code and thus should be avoided! For me, the benefits of Checked Exceptions can still be achieved through:

  1. Following the rules of Clean Code
  2. Increasing test coverage by following the TDD practice! With TDD, all the edge cases should be covered, thus reducing the possibility of unexpected exceptions during execution!

To better understand the importance of clean code and how to write it, I highly recommend reading the book Clean Code by Robert Martin!

References:

  1. Clean Code: A Handbook of Agile Software Craftsmanship
  2. Unchecked Exceptions — The Controversy
  3. Java Exceptions
  4. The Clean Code Blog

How to handle Java exceptions in clean code? – Part 1

Reading the chapter ‘Error Handling’ from the book ‘Clean Code’ made me wonder whether developers follow the clean code rules when writing production code. It also appeared to me that the difference between the Java exception types might not be clear to some!

The purpose of this short tutorial is to clarify those two points. To make it clearer, I am dividing it into two posts. In the first one, I will explain the different types of Java exceptions, whereas in the second one, I will show how to write clean exception-handling code!

Java Exceptions

In Java, the class Throwable is at the top of the class hierarchy of all exceptions. The classes Error and Exception directly extend Throwable. All the subclasses of Error and Exception are grouped into two types, ‘Checked Exceptions’ and ‘UnChecked Exceptions’, as displayed in the image below.

[Image: Java exception class hierarchy diagram]

What is the difference between Checked and UnChecked Exceptions?

UnChecked Exceptions

UnChecked Exceptions extend the classes ‘Error’ or ‘RuntimeException’. In this case, developers don’t have to worry about catching or handling the exceptions at compile time, as the compiler won’t report any error. But, if they are not caught, those exceptions may result in a complete failure of the application when thrown during execution.

Below are two examples of UnChecked Exceptions!

1. Runtime Exception

The below method simply divides two numbers:

private static int divide(int numerator, int denominator) {
    return numerator / denominator;
}

The code is fine and will compile with no errors! So what is the problem?

The problem might occur at runtime if we try to call the method while passing zero as the denominator. Since this exception is not caught, our system will crash throwing the below arithmetic exception:

Exception in thread "main" java.lang.ArithmeticException: / by zero
 at MainUnCheckedException.divide(MainUnCheckedException.java:7) ...
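
A call as simple as the following (my own sketch) is enough to reproduce it:

public static void main(String[] args) {
    divide(10, 0); // throws ArithmeticException: / by zero
}
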
2. Error

The below method compiles, but since I missed writing the base cases, calling it will crash the system and throw a StackOverflowError (an Error), as shown below:

private static int fibonacci(int number) {
    return fibonacci(number - 1) + fibonacci(number - 2);
}

Exception in thread "main" java.lang.StackOverflowError
 at MainUnCheckedException.fibonacci(MainUnCheckedException.java:10)...

Checked Exceptions

On the other hand, all other classes extending the ‘Exception’ class (i.e., excluding ‘RuntimeException’ and its subclasses) are considered Checked Exceptions. In this case, the code will not compile if the developer doesn’t explicitly handle the exception.

This adds a level of security to your code, as you are forced to specify how your code should behave when an exception occurs and thus decreases the chance of having an unrecoverable failure in the system.

Let’s see an example!

public class MainCheckedException {
    public static void main(String[] args) {
        FileInputStream fileInputStream = new FileInputStream("foo.txt");
    }
}

Error:(6, 43) java: unreported exception java.io.FileNotFoundException;
must be caught or declared to be thrown

In the above code, we are trying to read a file ‘foo.txt’, but our compiler complains that we need to handle the FileNotFoundException.

Solving this compilation error can be done in two ways:

  1. Try Catch: The first solution requires surrounding the code with a try-catch.
    private static void readFileTryCatch() {
        try {
            FileInputStream fooFile = new FileInputStream("foo.txt");
        } catch (FileNotFoundException e) {
            System.out.println(e.getMessage());
        }
    }

    Even if the file was not there, this code would not break thanks to the try-catch. As shown in the below code and output, the system will print out the exception and continue execution!

    public static void main(String[] args) {
        readFileTryCatch();
        System.out.println("Done");
    }
    ------------Output------------
    foo.txt (No such file or directory)
    Done
  2. Throw an exception: The other solution is adding a ‘throws FileNotFoundException’ clause to the method signature.
    private static void readFileThrow() throws FileNotFoundException {
        new FileInputStream("foo.txt");
    }

    Using this solution will propagate handling the exception to the calling methods as illustrated in the two options below:
    Option 1: Adding throws to all method signatures

    public static void main(String[] args) throws FileNotFoundException {
        readFileThrow();
        System.out.println("Done");
    }
    ------------Output------------
    Exception in thread "main" java.io.FileNotFoundException: foo.txt (No such file or directory)
    at java.io.FileInputStream.open0(Native Method) ...

    Option 2: Catching the exception in one of the methods in the call stack

    public static void main(String[] args)  {
        try {
            readFileThrow();
        } catch (FileNotFoundException e) {
            System.out.println(e.getMessage());
        }
        System.out.println("Done");
    }
    ------------Output------------
    foo.txt (No such file or directory)
    Done

 

I hope this post gave you a better understanding of Java exceptions! In the second part, I will share more details on how to properly handle exceptions in clean code and which of the two exception types to use!

Using H2 In-Memory to test your DAL

How should we test the Data Access Layer code?

Many developers ask that question. Like the other layers of your system, the DAL should be fully tested to prevent unexpected and random behavior in production. There are many ways to achieve that, among which are mocks and in-memory databases.

The problem with mocks is that instead of testing the validity of your SQL queries (syntax and execution), you will only be testing the validity of the system’s flow. On the other hand, using an in-memory database validates both! That is why I prefer it over mocking. But you should keep in mind that the SQL syntax might differ from one database engine to another.

In this blog, I will be giving an example of using “H2 In-Memory” in unit tests. The code of this example is available on my GitHub account.

H2 In-Memory in Action

So, let us see H2 in action 🙂

For the sake of this blog, I will assume we want to test three SQL operations on the table “Members” having the following model:

MEMBERS
ID      | NAME        |
INTEGER | VARCHAR(64) |

The SQL operations to be covered in this blog are:

  1. Create Table
  2. Insert (batch of prepared statements)
  3. Select *

Code Explanation

The example will be based on five files (pom.xml, Member.java, SqlRepository.java, H2Repository.java and H2MembersRepositoryTest.java). In this section, I will give a brief explanation of each file.

Pom.xml (Maven Dependencies):

First, let us modify our pom file (as shown below) to make our project depend on two projects:

  1. com.h2database: Using this dependency, Maven will take care of downloading the h2 jar file we will be referencing in our tests.
  2. JUnit: a unit testing framework for Java

<dependencies>
    <dependency>
        <groupId>com.h2database</groupId>
        <artifactId>h2</artifactId>
        <version>1.4.191</version>
    </dependency>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.11</version>
    </dependency>
</dependencies>
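
Note that the test class shown later uses FEST-Assert (org.fest.assertions) for its assertions, so the project also needs that test dependency, for example (coordinates given as an illustration):

    <dependency>
        <groupId>org.easytesting</groupId>
        <artifactId>fest-assert</artifactId>
        <version>1.4</version>
    </dependency>

Alternatively, you can stick with the dependencies above and use plain JUnit assertions (Assert.assertEquals) in the test.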

Member.java:

This class represents a Member record. It consists of a factory method that returns an instance of the Member class and two getter methods to return the values of the Id and Name fields.

package dal;

import java.util.Objects;

public final class Member {
    private final int id;
    private final String name;

    private Member(int id, String name) {
        this.id = id;
        this.name = name;
    }

    public static Member aMember(int id, String name) {
        return new Member(id, name);
    }

    public int id() {
        return id;
    }

    public String name() {
        return name;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        Member member = (Member) o;
        return id == member.id &&
                Objects.equals(name, member.name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(id, name);
    }
}

SqlRepository.java

In this class, we establish a connection to our database. What is important in this class is that we pass the connection string as a parameter to the constructor, thus not binding our code to any specific database engine (MySQL, H2, Sybase, Oracle, etc.). This makes writing our tests much easier!

The class consists of:

  1. Constructor: initializes an instance of SQL Connection using the connection string passed as parameter
  2. Three public methods:
    1. createTable: executes an update query to create the “MEMBERS” table.
    2. allMembers: executes a select query and returns the found records as a list of Members.
    3. insertMembers: takes a list of “Member” objects as a parameter and inserts their values into the “MEMBERS” table.

package dal;

import java.sql.*;
import java.util.ArrayList;
import java.util.List;

public class SqlRepository {

    protected final Connection connection;
    private static final String CREATE_MEMBERS = "CREATE TABLE MEMBERS(ID INTEGER, NAME VARCHAR(64))";
    private static final String SELECT_MEMBERS = "SELECT * FROM MEMBERS";
    private static final String INSERT_MEMBERS = "INSERT INTO MEMBERS(ID, NAME) VALUES(?, ?)";

    public SqlRepository(String connectionString) throws SQLException {
        connection = DriverManager.getConnection(connectionString);
    }

    public boolean createTable() throws SQLException {
        Statement createStatement = connection.createStatement();
        return createStatement.execute(CREATE_MEMBERS);
    }

    public List<Member> allMembers() throws SQLException {
        List<Member> allMembers = new ArrayList<>();
        Statement selectStatement = connection.createStatement();
        ResultSet membersResultSet = selectStatement.executeQuery(SELECT_MEMBERS);
        while (membersResultSet.next()) {
            allMembers.add(Member.aMember(membersResultSet.getInt(1), membersResultSet.getString(2)));
        }
        return allMembers;
    }

    public void insertMembers(List<Member> members) throws SQLException {
        final PreparedStatement insertMembers = connection.prepareStatement(INSERT_MEMBERS);
        members.forEach(member -> insertMember(member, insertMembers));
        insertMembers.executeBatch();
    }

    private void insertMember(Member member, PreparedStatement insertMembers) {
        try {
            insertMembers.setInt(1, member.id());
            insertMembers.setString(2, member.name());
            insertMembers.addBatch();
        } catch (SQLException e) {
            throw new UnsupportedOperationException(e.getMessage());
        }
    }
}

H2Repository.java

I added this class under “Test Sources Root” because it is only used by the tests.

As you can see, it extends the SqlRepository class implemented previously, so there is not much to implement here. The only addition is a “closeConnection” method that drops all the existing objects from the database.

You might wonder why we would need that since we are using an in-memory database. It might not matter for this simple example, but it becomes a necessity when running multiple test classes. That is because the tests typically run in the same JVM, which means the H2 instance initialized in the first test class is shared with the next ones. This can lead to unexpected behavior when the different test classes use the same tables.

package dal;

import java.sql.SQLException;

public final class H2Repository extends SqlRepository {
    public H2Repository(String connectionString) throws SQLException {
        super(connectionString);
    }

    public void closeConnection() throws SQLException {
        connection.createStatement().execute("DROP ALL OBJECTS");
    }
}
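
An alternative worth noting (not used in this example) is to give each test class its own named in-memory database, so that no state is shared between classes in the first place. For instance, the connection string in the test could look like this (the database name is purely illustrative):

private static final String H2_CONNECTION_STRING = "jdbc:h2:mem:members_repository_test";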

H2MembersRepositoryTest.java

This is the simple test class! Our single test (it_correctly_inserts_members_to_a_database) invokes the three methods we implemented before (createTable, insertMembers and allMembers). If any of those methods were badly written, our test would fail.

package dal;

import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

import static dal.Member.aMember;
import static org.fest.assertions.Assertions.assertThat;

public class H2MembersRepositoryTest {

    private static final String H2_CONNECTION_STRING = "jdbc:h2:mem:test";
    private static final List<Member> MEMBERS = new ArrayList<>(10);
    private static H2Repository h2Repository;

    @BeforeClass
    public static void
    setup_database() throws SQLException {
        h2Repository = new H2Repository(H2_CONNECTION_STRING);
        initializeMembers();
    }

    @Test
    public void
    it_correctly_inserts_members_to_a_database() throws SQLException {
        h2Repository.createTable();
        h2Repository.insertMembers(MEMBERS);

        assertThat(h2Repository.allMembers()).isEqualTo(MEMBERS);
    }

    private static void initializeMembers() {
        for (int index = 0; index < 10; index++) {
            MEMBERS.add(aMember(index, "Name_" + index));
        }
    }

    @AfterClass
    public static void
    tear_down_database() throws SQLException {
        h2Repository.closeConnection();
    }
}

TearDown

If you can test your DAL, you can test anything! (I just came up with this ;))

Keep the tests going!

References:

  1. H2-Database Engine
  2. H2-Maven
  3. JUnit Maven
  4. GitHub

How to move a git repository to subdirectory of another repository

You might come across a situation when working with Git where you want to move one repository into another (i.e., make it a subdirectory of the other repository) while maintaining its full history.

In this blog, I will be pointing out step by step with an example how to achieve that.

Use Case

Let us assume we have two Git repositories, repoA and repoB, and we want to make repoB a subdirectory of repoA while maintaining its full history.

To achieve that, we will move the content of repoB into a subfolder (also named repoB) and then merge repoB into repoA.

Below is the detailed git process:

  1. The first step is to clone repoA and repoB on your machine
    $ git clone repoA
    $ git clone repoB

    Note: For the purpose of this blog, I committed a README.md file to each of the two repositories on Bitbucket. Below are two snapshots taken of the repositories:

    RepoA:
    [Screenshot: repoA commit history]

    RepoB:
    [Screenshot: repoB commit history]

  2. The second step is to copy the content of repoB to a subfolder. Go to the folder repoB and apply the following steps:
    • create a subfolder repoB
      $ mkdir repoB
    • move everything from the parent repoB to the child repoB (except the .git folder)
    • stage the files (the added folder and deleted file(s)) for a later commit
      $ git stage repoB/
      $ git stage README.md
    • commit and push those changes to git
      $ git commit -am '[REPO-B] Move content to a subfolder'
      $ git push origin master
  3. Go to the folder repoA and do the following:
    • add a remote pointing to repoB
      $ git remote add repoBTemp (path_to_repoB)
    • fetch repoBTemp, the temporary remote we created in the previous step
      $ git fetch repoBTemp
    • merge repoBTemp into repoA
      $ git merge repoBTemp/master
    • delete the remote repoBTemp
      $ git remote rm repoBTemp
    • push the changes to the server

      $ git push origin master

  4. Now we can check our git history to verify that our trick worked as intended: the history of repoB is now part of repoA’s history, and repoB is now a subdirectory of repoA.
  5. Now it should be safe to delete the repository repoB.

I hope you find this blog helpful!