In my previous article Why Should We Follow Method Overloading Rules, I discussed method overloading and the rules we need to follow to overload a method. I also discussed why we need to follow these rules and why some method overloading rules are mandatory while others are optional.

Similarly, in this article we will see what rules we need to follow to override a method and why we should follow them.

Method Overriding and its Rules

As discussed in Everything About Method Overloading Vs Method Overriding, every child class inherits all the inheritable behaviour from its parent class, but the child class can also define its own new behaviours or override some of the inherited ones.

Overriding means redefining a behaviour (method) in the child class which was already defined by its parent class, but to do so the overriding method in the child class must follow certain rules and guidelines.

With respect to the method it overrides, the overriding method must follow the rules below.
Why We Should Follow Method Overriding Rules

To understand these reasons properly, let's consider the example below, where we have a class Mammal which defines a readAndGet method that reads a file and returns an instance of class Mammal.

Class Human extends class Mammal and overrides readAndGet to return an instance of Human instead of an instance of Mammal.

class Mammal {
    public Mammal readAndGet() throws IOException {
        // read the file and return a Mammal object
        return new Mammal();
    }
}

class Human extends Mammal {
    @Override
    public Human readAndGet() throws FileNotFoundException {
        // read the file and return a Human object
        return new Human();
    }
}

And we know that in the case of method overriding we can make polymorphic calls, which means that if we assign a child instance to a parent reference and call an overridden method on that reference, the method from the child class will get called.

Let's do that

Mammal mammal = new Human();
try {
    Mammal obj = mammal.readAndGet();
} catch (IOException ex) { /* handle it */ }

As discussed in How Does JVM Handle Method Overloading and Overriding Internally, up to the compilation phase the compiler thinks the method is getting called on the parent class, and during the bytecode generation phase the compiler creates a constant pool which maps every method string literal and class reference to a memory reference.

During runtime, the JVM creates a vtable, or virtual table, to identify which method is actually getting called. The JVM creates one vtable per class, common to all objects of that class, and every row in the vtable holds a method name and the memory reference of that method.

First the JVM creates a vtable for the parent class, then copies the parent's vtable into the child class's vtable and updates just the memory reference of the overridden method while keeping the same method name.

You can read about this in more detail in How Does JVM Handle Method Overloading and Overriding Internally if it seems hard.
So as of now we are clear that
  • For the compiler, mammal.readAndGet() means the method is getting called on an instance of class Mammal.
  • For the JVM, mammal.readAndGet() is getting called from the memory address which the vtable holds for Mammal.readAndGet(), and that address points to the method defined in class Human.

Why overriding method must have same name and same argument list

Well, conceptually mammal is pointing to an object of class Human and we are calling the readAndGet method on mammal, so to get this call resolved at runtime Human must also have a readAndGet method. If Human has inherited that method from Mammal there is no problem, but if Human is overriding readAndGet, it must provide the same method signature as Mammal, because the call has already been compiled against that signature.

But you may be asking how this is handled physically in the vtables. As said above, the JVM creates a vtable for every class, and when it encounters an overriding method it keeps the same method name (Mammal.readAndGet()) and just updates the memory address for that method. So both the overridden and the overriding method must have the same method name and argument list.

Why overriding method must have same or covariant return type

So we know that for the compiler the method is getting called from class Mammal, while for the JVM the call goes to the instance of class Human, but in both cases the readAndGet call must return an object which can be assigned to obj. And since obj is of type Mammal, it can hold either an instance of Mammal or an instance of a child class of Mammal (children of Mammal are covariant to Mammal).

Now suppose the readAndGet method in the Human class returned something else. At compile time mammal.readAndGet() would not create any problem, but at runtime it would cause a ClassCastException, because mammal.readAndGet() gets resolved to new Human().readAndGet() and that call would not return an object of type Mammal.

And this is why having a different (non-covariant) return type is not allowed by the compiler in the first place.
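Here is a compact, runnable version of the safe (covariant) case, with the file reading replaced by a stub so the example stays self-contained (class names reused from above, bodies hypothetical):

```java
import java.io.IOException;

public class CovariantDemo {
    static class Mammal {
        public Mammal readAndGet() throws IOException {
            // stub: a real implementation would read a file
            return new Mammal();
        }
    }

    static class Human extends Mammal {
        @Override
        public Human readAndGet() { // covariant return type: Human is-a Mammal
            return new Human();
        }
    }

    public static void main(String[] args) throws IOException {
        Mammal mammal = new Human();
        // Resolved to Human.readAndGet() at runtime; the returned Human
        // is still a Mammal, so this assignment is always type-safe
        Mammal obj = mammal.readAndGet();
        System.out.println(obj.getClass().getSimpleName()); // prints Human
    }
}
```

If Human instead tried to return, say, a String, the compiler would reject the override outright, which is exactly the protection described above.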

Why overriding method must not have a more restrictive access modifier

The same logic is applicable here as well: the call to readAndGet will be resolved at runtime, and as we can see, readAndGet is public in class Mammal. Now suppose
  • we define readAndGet as default or protected in Human, and Human is defined in another package, or
  • we define readAndGet as private in Human.
In both cases the code would compile successfully, because for the compiler readAndGet is getting called on class Mammal, but at runtime the JVM would not be able to access readAndGet on Human because it would be restricted.

So to avoid this uncertainty, assigning a more restrictive access level to the overriding method in the child class is not allowed at all.

Why overriding method may have less restrictive access modifier

If readAndGet is accessible on Mammal and we are able to execute mammal.readAndGet(), the method is accessible. Making readAndGet less restrictive in Human only makes it more open to being called.

So making the overriding method less restrictive cannot create any problem in the future, and that's why it is allowed.
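A minimal sketch of the allowed direction, using a hypothetical name() method widened from protected in the parent to public in the child:

```java
public class AccessDemo {
    static class Mammal {
        protected String name() { return "Mammal"; }
    }

    static class Human extends Mammal {
        @Override
        public String name() { return "Human"; } // protected -> public: widening is fine
    }

    public static void main(String[] args) {
        Mammal m = new Human();
        // The call was legal against Mammal's protected method, and the
        // public override in Human can only be *more* reachable at runtime
        System.out.println(m.name()); // prints Human
    }
}
```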

Why overriding method must not throw new or broader checked exceptions

Because IOException is a checked exception, the compiler will force us to catch it whenever we call readAndGet on mammal.

Now suppose readAndGet in Human threw some other checked exception, e.g. Exception, and we know readAndGet will get called on the instance of Human because mammal is holding new Human().

Because for the compiler the method is getting called on Mammal, the compiler will force us to handle only IOException, but at runtime the method could throw Exception, which would go unhandled, and our code would break if the method actually threw it.

That's why this is prevented at the compiler level itself: we are not allowed to throw any new or broader checked exception, because nothing would handle it at runtime.

Why overriding method may throw narrower checked exceptions or any unchecked exception

But if readAndGet in Human throws a sub-exception of IOException, e.g. FileNotFoundException, it will be handled, because catch (IOException ex) can handle all children of IOException.

And we know that unchecked exceptions (subclasses of RuntimeException) are called unchecked precisely because we are not forced to handle them.

And that's why overriding methods are allowed to throw narrower checked exceptions and any unchecked exceptions.
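A runnable sketch of this, reusing the Mammal/Human names with stub bodies (the file name in the exception message is made up):

```java
import java.io.FileNotFoundException;
import java.io.IOException;

public class ExceptionDemo {
    static class Mammal {
        public String readAndGet() throws IOException {
            return "mammal-data"; // stub body
        }
    }

    static class Human extends Mammal {
        @Override
        public String readAndGet() throws FileNotFoundException { // narrower than IOException
            throw new FileNotFoundException("human.txt");
        }
    }

    public static void main(String[] args) {
        Mammal mammal = new Human();
        try {
            mammal.readAndGet();
        } catch (IOException ex) {
            // FileNotFoundException is-a IOException, so the handler written
            // against Mammal's contract still catches what Human throws
            System.out.println("Handled: " + ex.getMessage()); // prints Handled: human.txt
        }
    }
}
```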

To force our code to adhere to the method overriding rules, we should always put the @Override annotation on our overriding methods; @Override forces the compiler to check whether the method is a valid override or not.

You can find complete code on this Github Repository and please feel free to provide your valuable feedback.
In my previous articles, Everything About Method Overloading Vs Method Overriding and How Does JVM Handle Method Overloading and Overriding Internally, I discussed what method overloading and overriding are, how they differ from each other, how the JVM handles them internally, and what rules we should follow in order to implement these concepts.

In order to overload or override a method we need to follow certain rules; some of them are mandatory while others are optional, and to become good programmers we should always try to understand the reason behind these rules.

I am going to write two articles in which I will look into the method overloading and overriding rules and try to figure out why we need to follow them.

In this article, we will see what rules we should follow to overload a method, and we will also try to understand why we should follow these rules.

Method Overloading

In general, method overloading means reusing the same method name to define more than one method, where all the methods have different argument lists.

We can take the example of the print method present in the PrintStream class, which gets called when we call System.out.print() to print something. Because we can call this method on several data types, it seems like there is just one print method which accepts all types and prints their values.

But actually, there are 9 different print methods, as shown in the image below.

[Image: method-overloading]

Well, the creator of the PrintStream class could have created methods like printBoolean, printInt, or printFloat, but the idea behind giving all 9 methods the same name is to let the user think that there is only one method which prints whatever we pass to it.
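Each call in the tiny demo below compiles against a different one of those print overloads in java.io.PrintStream; you can run it as-is:

```java
public class PrintDemo {
    public static void main(String[] args) {
        System.out.print(true);   // resolves to print(boolean)
        System.out.print(' ');    // resolves to print(char)
        System.out.print(42);     // resolves to print(int)
        System.out.print(' ');
        System.out.print("text"); // resolves to print(String)
        System.out.println();     // prints: true 42 text
    }
}
```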

This sounds like polymorphism, and since, as discussed in the article How Does JVM Handle Method Overloading and Overriding Internally, method overloading gets resolved at compile time, some people also term method overloading compile-time polymorphism.

Method Overloading Rules

While defining a method we need to provide it with a proper method signature, which includes the access specifier, return type, method name, argument list, and the exceptions the method might throw. Based on these five things, method overloading has some mandatory rules and some optional rules, which we are going to see below.

Mandatory Rules

  • Overloaded methods must have the same method name: Having the same name lets us reuse one method name for different purposes and lets the user believe that there is only one method which accepts different kinds of input and works according to that input.
  • Overloaded methods must have different argument lists: Since all overloaded methods must have the same name, a different argument list becomes necessary because it is the only way to differentiate the methods from each other. The Java compiler differentiates one method from another based on its method name and argument list, so different argument lists let the compiler recognize which method is getting called at compile time.

Optional Rules

The compiler knows that at the time of a method call the JVM needs to know the method name, and since the JVM will pass some arguments to that method, it must also know the argument list. The other elements of the method signature, e.g. the return type, the access modifier, and the exceptions the method throws, also matter, but at the time of the method call they are optional.

So a different argument list is sufficient for the compiler to differentiate between methods even when they have the same name, which makes the rules below optional; we are free to follow them or not. Following them depends entirely on your requirements; they are there just to provide additional flexibility.
  • Overloaded methods can have different return types: The return type matters when the method call is finished and the JVM assigns the returned value back to some variable. It is not needed to make the call, so the JVM cannot differentiate between methods based on return type alone. Hence we can return the same type as the other overload, something different, or nothing at all.
  • Overloaded methods can have different access modifiers: If a method is getting called by the JVM, it has already passed the compilation phase, because the JVM executes bytecode which is already compiled. The access specifier of a method matters to the compiler but not to the JVM, and the JVM cannot differentiate between methods based on access modifier. So an overloading method can have any access modifier, and we can choose it according to our needs.
  • Overloaded methods can throw different checked or unchecked exceptions: Again, the exceptions a method might throw cannot differentiate it from another method. Also, although overloaded methods usually perform the same operation on different data, they may perform some different operations as well, which may throw different exceptions.
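As a sketch of all three freedoms at once, here is a hypothetical Parser class (not from the article's repository) whose overloads differ in return type, access modifier, and thrown exceptions:

```java
import java.io.IOException;
import java.io.InputStream;

public class Parser {
    // public, returns int, no checked exception
    public int parse(String s) {
        return Integer.parseInt(s);
    }

    // different return type AND different access modifier
    protected double parse(String s, double fallback) {
        try {
            return Double.parseDouble(s);
        } catch (NumberFormatException e) {
            return fallback;
        }
    }

    // different return type AND declares a checked exception
    private byte[] parse(InputStream in) throws IOException {
        return in.readAllBytes();
    }

    public static void main(String[] args) {
        Parser p = new Parser();
        System.out.println(p.parse("42"));        // prints 42
        System.out.println(p.parse("oops", 1.5)); // prints 1.5
    }
}
```

The compiler tells all three apart purely by their argument lists; the return types, modifiers, and throws clauses play no role in that choice.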
You can find complete code on this Github Repository and please feel free to provide your valuable feedback.
In a previous article, Everything About ClassNotFoundException Vs NoClassDefFoundError, I explained ClassNotFoundException and NoClassDefFoundError in detail and also discussed their differences and how to avoid them. If you have not read it, please go ahead and give it a look.

Similarly, in this article we will look into one more core concept of Java: method overloading and overriding. As soon as we start learning Java we get introduced to them, and their contracts are pretty simple to understand, but sometimes programmers get confused between them, or cannot tell whether something is a correct overload/override because of the different rules.

Here we will discuss what method overloading and overriding are, what contract one must follow to correctly overload or override a method, what the different rules of method overloading and overriding are, and what the differences between them are.

Method Overloading

Method overloading means providing two or more separate methods in a class with the same name but different arguments, while the return type may or may not differ, which allows us to reuse the same method name.

And this becomes very handy for the consumer of our class: they can pass different types of parameters to what looks like the same method (in their eyes, though the methods are actually different) and get a response according to the input. For example, the System.out.println() method accepts all kinds of objects and primitive types and prints them, but in reality there are several println methods present in the PrintStream class.

public class PrintStream {

    // other methods

    public void println() { /*code*/ }
    public void println(boolean x) { /*code*/ }
    public void println(char x) { /*code*/ }
    public void println(int x) { /*code*/ }
    public void println(long x) { /*code*/ }
    public void println(float x) { /*code*/ }
    public void println(double x) { /*code*/ }
    public void println(char x[]) { /*code*/ }
    public void println(String x) { /*code*/ }
    public void println(Object x) { /*code*/ }
    
    // other methods
}

While overloading has nothing to do with polymorphism, Java programmers also refer to method overloading as Compile Time Polymorphism, because which method is going to get called is decided at compile time.

In the case of method overloading, the compiler decides which method is going to get called based on the reference type on which it is called, the method name, and the argument list (the return type plays no part in overload resolution).

class Human {
    public String speak() { return "Hello"; }

    // Valid overload of speak
    public String speak(String language) {
        if (language.equals("Hindi")) return "Namaste";
        else return "Hello";
    }

    public long calculate(int a, long b) { return a + b; }

    // Nobody should do this, but it is a valid overload of calculate,
    // achieved by just changing the sequence of the arguments
    public long calculate(long b, int a) { return a + b; }
}

Method Overloading Rules

There are some rules which we need to follow to overload a method and some of them are mandatory while some are optional.

Two methods will be treated as Overloaded if both follow below mandatory rule.
  • Both must have same method name
  • Both must have different argument lists
And if both methods follow above mandatory rules then they may or may not
  • Have different return types
  • Have different access modifiers
  • Throw different checked or unchecked exceptions
Usually, method overloading happens inside a single class, but a method can also be treated as overloaded in a subclass of that class, because the subclass inherits one version of the method from the parent class and can then declare another overloaded version in its own class definition.
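A minimal sketch of that situation, using hypothetical Shape and Circle classes; the subclass overloads a method it only inherited:

```java
public class InheritedOverloadDemo {
    static class Shape {
        public String describe() { return "a shape"; }
    }

    static class Circle extends Shape {
        // Overload of the inherited describe(): same name, different argument list
        public String describe(double radius) { return "a circle of radius " + radius; }
    }

    public static void main(String[] args) {
        Circle c = new Circle();
        System.out.println(c.describe());    // inherited version, prints: a shape
        System.out.println(c.describe(2.0)); // overloaded version, prints: a circle of radius 2.0
    }
}
```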

Method Overriding

Method overriding means defining a method in the child class which is already defined in the parent class, with the same method signature, i.e. the same name, arguments, and return type (after Java 5 you can also use a covariant type as the return type).

Whenever we extend a superclass in a child class, the child class automatically gets all the methods defined in the superclass, and we call them derived methods. But in some cases we do not want a derived method to work the way it does in the parent, and then we can override that method in the child class, e.g. we routinely override equals, hashCode, and toString from the Object class. You can read more in Why can't we override clone() method from Object class.

In the case of abstract methods, whether from an abstract parent class or an interface, we do not have any choice: we need to implement, or in other words override, all the abstract methods.

Method overriding is also known as Runtime Polymorphism and Dynamic Method Dispatch, because which method is going to get called is decided at runtime by the JVM.

abstract class Mammal {
    // Well might speak something
    public String speak() { return "ohlllalalalalalaoaoaoa"; }
}

class Cat extends Mammal {
    @Override
    public String speak() { return "Meow"; }
}

class Human extends Mammal {
    @Override
    public String speak() { return "Hello"; }
}

Using the @Override annotation on overriding methods is not mandatory, but using it will tell you when you are not obeying the overriding rules.

Mammal mammal = new Cat();
System.out.println(mammal.speak()); // Will print Meow

At the line mammal.speak(), the compiler says the speak() method of reference type Mammal is getting called, so for the compiler this call is Mammal.speak().

But at execution time the JVM knows clearly that the mammal reference is holding a reference to an object of Cat, so for the JVM this call is Cat.speak(). You can read more in How Does JVM Handle Method Overloading and Overriding Internally.

Method Overriding Rules

Similar to method overloading, we also have some mandatory and some optional rules which we need to follow to override a method.

With respect to the method it overrides, the overriding method must follow the mandatory rules below.
  • It must have the same method name.
  • It must have the same arguments.
  • It must have the same return type; from Java 5 onward the return type can also be a subclass (a subclass is a covariant type to its parent).
  • It must not have a more restrictive access modifier (if the parent method is protected, then private in the child is not allowed).
  • It must not throw new or broader checked exceptions.
And if the overriding method follows the above mandatory rules, then it
  • May have a less restrictive access modifier (if the parent method is protected, then public in the child is allowed).
  • May throw fewer or narrower checked exceptions, or any unchecked exception.
Apart from the above rules, there are also some facts:
  • Only inherited methods can be overridden, meaning methods can be overridden in a child class only.
  • Constructors and private methods are not inherited, so they cannot be overridden.
  • Abstract methods must be overridden by the first concrete (non-abstract) subclass.
  • final methods cannot be overridden.
  • A subclass can use super.overridden_method() to call the superclass version of an overridden method.
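The last fact can be sketched like this, reusing the Cat example from above (the message format is made up for illustration):

```java
public class SuperCallDemo {
    static class Mammal {
        public String speak() { return "ohlllalalalalalaoaoaoa"; }
    }

    static class Cat extends Mammal {
        @Override
        public String speak() {
            // super.speak() reaches the overridden superclass version
            return "Meow (and my ancestors said: " + super.speak() + ")";
        }
    }

    public static void main(String[] args) {
        System.out.println(new Cat().speak());
        // prints: Meow (and my ancestors said: ohlllalalalalalaoaoaoa)
    }
}
```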

Difference Between Method Overloading and Method Overriding


[Image: difference-between-method-overloading-and-method-overriding]

You can find complete code on this Github Repository and please feel free to provide your valuable feedback.
In my previous article, Everything About Method Overloading Vs Method Overriding, I discussed method overloading and overriding, their rules, and their differences.

In this article, we will see how the JVM handles method overloading and overriding internally, i.e. how the JVM identifies which method should get called.

Let's take the example of a parent class Mammal and a child class Human from our previous blog to understand it more clearly.

public class OverridingInternalExample {

    private static class Mammal {
        public void speak() { System.out.println("ohlllalalalalalaoaoaoa"); }
    }

    private static class Human extends Mammal {

        @Override
        public void speak() { System.out.println("Hello"); }

        // Valid overload of speak
        public void speak(String language) {
            if (language.equals("Hindi")) System.out.println("Namaste");
            else System.out.println("Hello");
        }

        @Override
        public String toString() { return "Human Class"; }

    }

    // Code below contains the output and bytecode of the method calls
    public static void main(String[] args) {
        Mammal anyMammal = new Mammal();
        anyMammal.speak();  // Output - ohlllalalalalalaoaoaoa
        // 10: invokevirtual #4 // Method org/programming/mitra/exercises/OverridingInternalExample$Mammal.speak:()V

        Mammal humanMammal = new Human();
        humanMammal.speak(); // Output - Hello
        // 23: invokevirtual #4 // Method org/programming/mitra/exercises/OverridingInternalExample$Mammal.speak:()V

        Human human = new Human();
        human.speak(); // Output - Hello
        // 36: invokevirtual #7 // Method org/programming/mitra/exercises/OverridingInternalExample$Human.speak:()V

        human.speak("Hindi"); // Output - Namaste
        // 42: invokevirtual #9 // Method org/programming/mitra/exercises/OverridingInternalExample$Human.speak:(Ljava/lang/String;)V
    }
}

We can answer this question in two ways, a logical way and a physical way. Let's take a look at the logical way first.

Logical Way

Logically we can say that during the compilation phase the method call is resolved against the reference type, but at execution time the method will be called on the object which the reference is holding.

For example, on the humanMammal.speak() line, the compiler will say Mammal.speak() is getting called because humanMammal is of type Mammal. But during execution the JVM knows that humanMammal is holding a Human object, so Human.speak() will get called.

Well, it is pretty simple as long as we keep it at the conceptual level. But then the doubt arises: how does the JVM handle all this internally, and how does it calculate which method it should call?

Also, we know that overloaded methods are not polymorphic; they get resolved at compile time, and this is why method overloading is sometimes known as compile-time polymorphism or early/static binding.

But overridden methods get resolved at runtime, because the compiler does not know whether the object we are assigning to our reference has overridden the method or not.

Physical Way

In this section, we will try to find physical proof for all the aforementioned statements, and to find it we will read the bytecode of our program, which we can do by executing javap -verbose OverridingInternalExample. The -verbose option gives us descriptive bytecode, similar in detail to our Java program.

The above command shows the bytecode in two sections:

1. Constant Pool: holds almost everything necessary for the program's execution, e.g. method references (#Methodref), class objects (#Class), and string literals (#String).

[Image: java-method-area-or-constant-pool-or-method-table]


2. Program's Bytecode: the executable bytecode instructions.

[Image: method-overloading-overriding-internals-byte-code]

Why Method overloading is called static binding

In the code above, for humanMammal.speak() the compiler says speak() is getting called on Mammal, but at execution time it will be called on the object which humanMammal is holding, which is an object of the Human class.

And by looking at the above code and images, we can see that the bytecodes of humanMammal.speak(), human.speak(), and human.speak("Hindi") are totally different, because the compiler is able to differentiate between them based on the class reference.

So in the case of method overloading, the compiler is able to identify the bytecode instructions and the method's address at compile time, and that is why it is also known as static binding or compile-time polymorphism.

Why Method overriding is called dynamic binding

The bytecode for anyMammal.speak() and humanMammal.speak() is the same (invokevirtual #4 // Method org/programming/mitra/exercises/OverridingInternalExample$Mammal.speak:()V) because, according to the compiler, both methods are called on a Mammal reference.

So now the question arises: if both method calls have the same bytecode, how does the JVM know which method to call?

Well, the answer is hidden in the bytecode itself; it is the invokevirtual instruction. According to the JVM specification:

invokevirtual invokes an instance method of an object, dispatching on the (virtual) type of the object. This is the normal method dispatch in the Java programming language.

The JVM uses the invokevirtual instruction to invoke the Java equivalent of C++ virtual methods. In C++, if we want to allow a method to be overridden in another class, we need to declare it virtual; in Java, all methods are virtual by default (except final and static methods), because we can override every method in a child class.

The invokevirtual operation takes a method reference as its operand (#4, an index into the constant pool):

invokevirtual #4   // Method org/programming/mitra/exercises/OverridingInternalExample$Mammal.speak:()V

And that method reference #4 in turn refers to a method name and a class reference:

#4 = Methodref   #2.#27   // org/programming/mitra/exercises/OverridingInternalExample$Mammal.speak:()V
#2 = Class   #25   // org/programming/mitra/exercises/OverridingInternalExample$Mammal
#25 = Utf8   org/programming/mitra/exercises/OverridingInternalExample$Mammal
#27 = NameAndType   #35:#17   // speak:()V
#35 = Utf8   speak
#17 = Utf8   ()V

All these references are used together to obtain a reference to the method and to the class in which the method is to be found. This is also mentioned in the JVM specification:

The Java virtual machine does not mandate any particular internal structure for objects [4].

And footnote 4 states:

In some of Oracle’s implementations of the Java virtual machine, a reference to a class instance is a pointer to a handle that is itself a pair of pointers: one to a table containing the methods of the object and a pointer to the Class object that represents the type of the object, and the other to the memory allocated from the heap for the object data.

This means every reference variable holds two hidden pointers:
  1. A pointer to a table which holds the methods of the object plus a pointer to the Class object, e.g. [speak(), speak(String), Class object].
  2. A pointer to the memory allocated on the heap for that object's data, e.g. the values of instance variables.
But again the question arises: how does invokevirtual do this internally? Well, no one can answer this in general, because it depends on the JVM implementation and varies from JVM to JVM.

From the above statements we can conclude that an object reference indirectly holds a reference/pointer to a table which holds all the method references of that object. Java borrowed this concept from C++, and this table is known by various names, such as virtual method table (VMT), virtual function table (vftable), virtual table (vtable), or dispatch table.

We cannot be sure how the vtable is implemented in Java, because it is JVM dependent. But we can expect that it follows the same strategy as C++, where the vtable is an array-like structure which holds method names and their references at array indices, and whenever the JVM needs to execute a virtual method it asks the vtable for its address.

There is only one vtable per class, which means it is unique for and shared by all objects of the class, similar to the Class object. I have discussed the Class object more in my articles Why an outer Java class can't be static and Why Java is Purely Object-Oriented Language Or Why Not.

So there is only one vtable for the Object class, and it contains all 11 methods (if we don't count registerNatives) with references to their respective method bodies.

[Image: vtable-of-object]

When the JVM loads the Mammal class into memory, it creates a Class object for it and creates a vtable which contains all the entries from the vtable of the Object class with the same references (because Mammal is not overriding any method from Object), and adds a new entry for the speak method.

[Image: vtable-human]


Now comes the turn of the Human class: the JVM will copy all entries from the vtable of the Mammal class to the vtable of Human and add a new entry for the overloaded version, speak(String).

The JVM knows that the Human class has overridden two methods: toString() from Object and speak() from Mammal. Now, instead of creating new entries for these methods, the JVM modifies the references of the already present entries at the same indices where they were before, keeping the same method names.
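This copy-then-overwrite behaviour can be modelled with a toy sketch; this is purely illustrative, with a Map standing in for the vtable, and is not how any real JVM stores methods:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

public class VtableModel {

    // Toy vtable: method name -> method body
    static Map<String, Supplier<String>> mammalVtable() {
        Map<String, Supplier<String>> vt = new LinkedHashMap<>();
        vt.put("speak()", () -> "ohlllalalalalalaoaoaoa");
        return vt;
    }

    static Map<String, Supplier<String>> humanVtable() {
        // Start from a copy of the parent's table...
        Map<String, Supplier<String>> vt = new LinkedHashMap<>(mammalVtable());
        // ...replace the body behind the *same* key for the overridden method...
        vt.put("speak()", () -> "Hello");
        // ...and add a brand-new entry for the overload
        vt.put("speak(String)", () -> "Namaste");
        return vt;
    }

    public static void main(String[] args) {
        System.out.println(mammalVtable().get("speak()").get()); // ohlllalalalalalaoaoaoa
        System.out.println(humanVtable().get("speak()").get());  // Hello
    }
}
```

The key point the model captures: the lookup key (the method name) never changes between parent and child; only the body behind it does, which is why the same invokevirtual operand can reach different code.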




The invokevirtual instruction causes the JVM to treat the value at method reference #4 not as an address but as a method to look up in the vtable of the current object.

I hope it has now become a bit clearer how the JVM combines constant pool entries and the vtable to decide which method it is going to call.

You can find the complete code on this Github Repository and please feel free to provide your valuable feedback.
We know Java is an object-oriented programming language in which almost everything is an object, and in order to create an object we need a class.

While executing our program, whenever the JVM encounters a class, it will first try to load that class into memory if it has not already done so.

For example, if the JVM is executing the line of code below, then before creating an object of the Employee class the JVM will load this class into memory using a ClassLoader.

Employee emp = new Employee();

In the above example, the JVM will load the Employee class because it is present in the execution path and the JVM wants to create an object of this class.

But we can also ask the JVM to just load a class by its string name using the Class.forName(), ClassLoader.findSystemClass(), or ClassLoader.loadClass() methods. For example, the line of code below will only load the Employee class into memory and do nothing else.

Class.forName("Employee");

Both ClassNotFoundException and NoClassDefFoundError occur when a particular class is not found at runtime, but under different scenarios, and here in this article we are going to study those scenarios.

ClassNotFoundException

ClassNotFoundException is a checked exception that occurs when we tell the JVM to load a class by its string name using the Class.forName(), ClassLoader.findSystemClass(), or ClassLoader.loadClass() methods and the mentioned class is not found on the classpath.

Most of the time, this exception occurs when you try to run an application without updating the classpath with the required JAR files. For example, you may have seen this exception when writing JDBC code to connect to your database, e.g. MySQL, while your classpath does not have the JAR for it.
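A tiny, self-contained demonstration; the class name com.example.DoesNotExist is made up precisely so that it is not on the classpath:

```java
public class CnfeDemo {
    public static void main(String[] args) {
        try {
            // Hypothetical class name that is certainly not on the classpath
            Class.forName("com.example.DoesNotExist");
        } catch (ClassNotFoundException ex) {
            // Checked exception: the compiler forces us to handle it
            System.out.println("Caught: " + ex.getMessage());
        }
    }
}
```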

If we compile the example below, the compiler will produce two class files, Test.class and Person.class. If we then execute the program it will successfully print Hello, but if we delete the Person.class file and try to execute the program again, we will receive a ClassNotFoundException.

public class Test {
    public static void main(String[] args) throws Exception {

        // ClassNotFoundException Example
        // Provide any class name to Class.forName() which does not exist
        // Or compile Test.java and then manually delete Person.class file so Person class will become unavailable
        // Run the program using java Test

        Class<?> clazz = Class.forName("Person");
        Person person = (Person) clazz.newInstance();
        person.saySomething();
    }
}

class Person {
    void saySomething() {
        System.out.println("Hello");
    }
}
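Because ClassNotFoundException is a checked exception, we can also catch it explicitly instead of declaring it. A minimal sketch, assuming the named class is absent from the classpath (the fully-qualified name below is hypothetical):

```java
public class ClassNotFoundDemo {
    public static void main(String[] args) {
        try {
            // This fully-qualified name is hypothetical and assumed
            // not to exist anywhere on the classpath.
            Class.forName("com.example.jdbc.MissingDriver");
        } catch (ClassNotFoundException e) {
            System.out.println("Caught: " + e.getClass().getSimpleName());
        }
    }
}
```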

NoClassDefFoundError

NoClassDefFoundError is a subtype of java.lang.Error, and the Error class indicates abnormal behaviour that really should not happen in an application; application developers should not try to catch it, as it is there for JVM use only.

NoClassDefFoundError occurs when the JVM tries to load a class that is part of your code execution (as part of a normal method call or as part of creating an instance using the new keyword) and that class is not present on your classpath, although it was present at compile time. It must have been present at compile time because, in order to execute your program, you need to compile it first, and if you try to use a class which is not present, the compiler will raise a compilation error.

Similar to the above example, if we compile the below program, we will get two class files, Test.class and Employee.class, and on execution it will print Hello.

public class Test {
    public static void main(String[] args) throws Exception {

        // NoClassDefFoundError Example
        // Do javac on Test.java, 
        // Program will compile successfully because the Employee class exists
        // Manually delete Employee.class file
        // Run the program using java Test
        Employee emp = new Employee();
        emp.saySomething();

    }
}

class Employee {
    void saySomething() {
        System.out.println("Hello");
    }
}

But if we delete Employee.class and try to execute the program, we will get a NoClassDefFoundError.

Exception in thread "main" java.lang.NoClassDefFoundError: Employee
 at Test.main(Test.java:9)
Caused by: java.lang.ClassNotFoundException: Employee
 at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
 ... 1 more

As you can see in the above stack trace, the NoClassDefFoundError is caused by a ClassNotFoundException, because the JVM is not able to find the Employee class on the classpath.
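The class-hierarchy difference between the two can be verified directly in code: one is a checked Exception meant for the application to handle, the other an Error meant for the JVM.

```java
public class HierarchyDemo {
    public static void main(String[] args) {
        // ClassNotFoundException is a checked Exception, meant to be handled by the application.
        System.out.println(Exception.class.isAssignableFrom(ClassNotFoundException.class));
        // NoClassDefFoundError is an Error (more precisely a LinkageError), meant for the JVM.
        System.out.println(Error.class.isAssignableFrom(NoClassDefFoundError.class));
        System.out.println(LinkageError.class.isAssignableFrom(NoClassDefFoundError.class));
    }
}
```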

Conclusion

[Image: Difference Between ClassNotFoundException and NoClassDefFoundError]

You can find the complete code in this GitHub repository, and please feel free to provide your valuable feedback.
In my previous article JPA Auditing: Persisting Audit Logs Automatically using EntityListeners, I discussed how we can use Spring Data JPA to automate auditing, automatically create audit logs or history records, and update the CreatedBy, CreatedDate, LastModifiedBy and LastModifiedDate properties.

So, in order to save history records for our File entity, we tried to auto-wire the EntityManager inside our FileEntityListener class, and we came to know that we cannot do this.

We cannot inject any Spring-managed bean into an EntityListener, because EntityListeners are instantiated by JPA before Spring injects anything into them. EntityListeners are not managed by Spring, so Spring cannot inject any Spring-managed bean, e.g. the EntityManager, into them.

And this is not just the case with EntityListeners: you cannot auto-wire any Spring-managed bean into any other class (e.g. utility classes) which is not managed by Spring.

Because this is a very common problem and can also arise with other classes, I tried to come up with a common solution which not only solves this problem but will also help us get Spring-managed beans in other places.

[Image: AutoWiring Spring Beans Into Classes Not Managed By Spring Like JPA Entity Listeners]

So I have created one utility class to fetch any bean according to our requirement.

@Service
public class BeanUtil implements ApplicationContextAware {

    private static ApplicationContext context;

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        context = applicationContext;
    }

    public static <T> T getBean(Class<T> beanClass) {
        return context.getBean(beanClass);
    }

}

Now, to get any bean in any class, we just need to call BeanUtil.getBean(YourClass.class) and pass the class type to it, and we will get the bean.

For example, in our case we were trying to get the EntityManager bean inside FileEntityListener; we can simply do it by writing BeanUtil.getBean(EntityManager.class).

public class FileEntityListener {

    private void perform(File target, Action action) {
        EntityManager entityManager = BeanUtil.getBean(EntityManager.class);
        entityManager.persist(new FileHistory(target, action));
    }

}

You can find the complete code in this GitHub repository, and please feel free to provide your valuable feedback.
In my previous article Spring Data JPA Auditing: Saving CreatedBy, CreatedDate, LastModifiedBy, LastModifiedDate automatically, I discussed why auditing is important for any business application and how we can use Spring Data JPA to automate it.

I also discussed how Spring Data uses JPA's EntityListeners and callback methods to automatically update the CreatedBy, CreatedDate, LastModifiedBy and LastModifiedDate properties.

Well, here in this article I am going to dig a little bit deeper and discuss how we can use JPA EntityListeners to create audit logs and keep a record of every insert, update and delete operation on our data.

I will take the File entity example from the previous article and walk you through the necessary steps and code portions you will need to include in your project to automate the auditing process.

We will use Spring Boot, Spring Data JPA (because it gives us complete JPA functionality plus some nice customizations by Spring) and MySQL to demonstrate this.

We will need to add the below parent and dependencies to our pom file:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.5.1.RELEASE</version>
    <relativePath/>
</parent>

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>

    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <scope>runtime</scope>
    </dependency>
</dependencies>

Implementing JPA Callback Methods using annotations @PrePersist, @PreUpdate, @PreRemove


JPA provides us the functionality to define callback methods for any entity using the annotations @PrePersist, @PreUpdate and @PreRemove, and these methods will get invoked before their respective lifecycle events.

Similar to pre-annotations, JPA also provides post annotations like @PostPersist, @PostUpdate, @PostRemove, and @PostLoad. We can use them to define callback methods which will get triggered after the event.

[Image: JPA Automatic Auditing: Saving Audit Logs]

The name of each annotation tells you its respective event, e.g. @PrePersist: before the entity persists, and @PostUpdate: after the entity gets updated; the same goes for the other annotations.

Defining callback methods inside entity


We can define callback methods inside our entity class, but we need to follow some rules: internal callback methods should always return void and take no arguments. They can have any name and any access level and can also be static.

@Entity
public class File {

    @PrePersist
    public void prePersist() {
        // persistence logic
    }

    @PreUpdate
    public void preUpdate() {
        // update logic
    }

    @PreRemove
    public void preRemove() {
        // removal logic
    }

}

Defining callback methods in an external class and use @EntityListeners


We can also define our callback methods in an external listener class; these methods should always return void and accept the target object as their argument. They can have any name and any access level and can also be static.

public class FileEntityListener {

    @PrePersist
    public void prePersist(File target) {
        // persistence logic
    }

    @PreUpdate
    public void preUpdate(File target) {
        // update logic
    }

    @PreRemove
    public void preRemove(File target) {
        // removal logic
    }
}


And we will need to register this FileEntityListener class on the File entity or its superclass using the @EntityListeners annotation:

@Entity
@EntityListeners(FileEntityListener.class)
class File extends Auditable<String> {

    @Id
    @GeneratedValue
    private Integer id;
    private String name;
    private String content;

    // Fields, Getters and Setters
}

Advantages of using @EntityListeners


  • First of all, we should not write any kind of business logic in our entity classes; we should follow the Single Responsibility Principle, and every entity class should be a POJO (Plain Old Java Object).
  • We can have only one callback method for a particular event in a single class, e.g. only one callback method with @PrePersist is allowed in a class. But we can define more than one listener class in @EntityListeners, and every listener class can have its own @PrePersist.

For example, I have used @EntityListeners on File and provided the FileEntityListener class to it, and I have also extended the Auditable class in the File class.

The Auditable class itself has @EntityListeners on it with the AuditingEntityListener class, because I am using this class to persist createdBy and the other above-mentioned properties. You can check my previous article Spring Data JPA Auditing: Saving CreatedBy, CreatedDate, LastModifiedBy, LastModifiedDate automatically for more details.

@MappedSuperclass
@EntityListeners(AuditingEntityListener.class)
public abstract class Auditable<U> {

    @CreatedBy
    protected U createdBy;

    @CreatedDate
    @Temporal(TIMESTAMP)
    protected Date createdDate;

    @LastModifiedBy
    protected U lastModifiedBy;

    @LastModifiedDate
    @Temporal(TIMESTAMP)
    protected Date lastModifiedDate;

    // Getters and Setters
}

We will also need to provide getters, setters, constructors, toString and equals methods for all the entities. However, you may like to look at Project Lombok: The Boilerplate Code Extractor if you want to auto-generate these things.

Now we are all set, and we need to implement our logging strategy: we can store history logs of the File in a separate history table, FileHistory.

@Entity
@EntityListeners(AuditingEntityListener.class)
public class FileHistory {

    @Id
    @GeneratedValue
    private Integer id;

    @ManyToOne
    @JoinColumn(name = "file_id", foreignKey = @ForeignKey(name = "FK_file_history_file"))
    private File file;

    private String fileContent;

    @CreatedBy
    private String modifiedBy;

    @CreatedDate
    @Temporal(TIMESTAMP)
    private Date modifiedDate;

    @Enumerated(STRING)
    private Action action;

    public FileHistory() {
    }

    public FileHistory(File file, Action action) {
        this.file = file;
        this.fileContent = file.toString();
        this.action = action;
    }

    // Getters, Setters
}

Here Action is an enum

public enum Action {

    INSERTED("INSERTED"),
    UPDATED("UPDATED"),
    DELETED("DELETED");

    private final String name;

    private Action(String value) {
        this.name = value;
    }

    public String value() {
        return this.name;
    }

    @Override
    public String toString() {
        return name;
    }
}

And we will need to insert an entry into FileHistory for every insert, update and delete operation, and we need to write that logic inside our FileEntityListener class. For this purpose, we will need to inject either a repository class or the EntityManager into the FileEntityListener class.

Injecting Spring Managed Beans like EntityManager in EntityListeners


But here we have a problem: EntityListeners are instantiated by JPA, not Spring, so Spring cannot inject any Spring-managed bean, e.g. the EntityManager, into any EntityListener.

So if you try to auto-wire the EntityManager inside the FileEntityListener class, it will not work:

@Autowired EntityManager entityManager; //Will not work and entityManager will be null always

I have also written a separate article on how to AutoWire Spring Beans Into Classes Not Managed By Spring Like JPA Entity Listeners, you can read it if you want to know more.

And I am using the same idea here to make it work: we will create a utility class to fetch Spring-managed beans for us.

@Service
public class BeanUtil implements ApplicationContextAware {

    private static ApplicationContext context;

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        context = applicationContext;
    }

    public static <T> T getBean(Class<T> beanClass) {
        return context.getBean(beanClass);
    }

}

And now we will write the history-record creation logic inside FileEntityListener:

public class FileEntityListener {

    @PrePersist
    public void prePersist(File target) {
        perform(target, INSERTED);
    }

    @PreUpdate
    public void preUpdate(File target) {
        perform(target, UPDATED);
    }

    @PreRemove
    public void preRemove(File target) {
        perform(target, DELETED);
    }

    @Transactional(MANDATORY)
    private void perform(File target, Action action) {
        EntityManager entityManager = BeanUtil.getBean(EntityManager.class);
        entityManager.persist(new FileHistory(target, action));
    }

}

And now, if we try to persist or update any File object, a history record will automatically get saved.

You can find the complete code in this GitHub repository, and please feel free to provide your valuable feedback.
In any business application, auditing simply means tracking and logging every change we make to the persisted records, which means tracking every insert, update and delete operation and storing it.

Auditing helps us maintain history records, which can later help us track user activities. If implemented properly, auditing can also provide us functionality similar to version control systems.

I have seen projects storing these things manually, and doing so becomes very complex, because you will need to write it completely on your own, which will definitely require lots of code; and lots of code means less maintainability and less focus on writing business logic.

But why should someone go down this path when both JPA and Hibernate provide automatic auditing which can be easily configured in your project?

And here in this article, I will discuss how we can configure JPA to persist the CreatedBy, CreatedDate, LastModifiedBy and LastModifiedDate columns automatically for any entity.

[Image: Spring Data JPA Automatic Auditing: Saving CreatedBy, CreatedDate, LastModifiedBy, LastModifiedDate Automatically]


I will walk you through the necessary steps and code portions you will need to include in your project to automatically update these properties. We will use Spring Boot, Spring Data JPA and MySQL to demonstrate this. We will need to add the below parent and dependencies to our pom file:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.5.1.RELEASE</version>
    <relativePath/>
</parent>

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>

    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <scope>runtime</scope>
    </dependency>
</dependencies>
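Besides the Maven setup above, Spring Boot needs to know how to reach the database. A minimal `application.properties` sketch; the URL, schema name and credentials below are placeholders you would replace with your own:

```properties
# Hypothetical local MySQL instance and credentials
spring.datasource.url=jdbc:mysql://localhost:3306/auditing_demo
spring.datasource.username=root
spring.datasource.password=secret
# Let Hibernate create/update the tables for this demo
spring.jpa.hibernate.ddl-auto=update
```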

Spring Data Annotations @CreatedBy, @CreatedDate, @LastModifiedBy and @LastModifiedDate


Let’s suppose we have a File entity, where a single record in the file table stores the name and content of a file, and we also want to store who created and modified any file and at what time. This way we can keep track of when the file was created and by whom, and when it was last modified and by whom.

So we will need to add name, content, createdBy, createdDate, lastModifiedBy and lastModifiedDate properties to our File entity. To make it more appropriate, we can move the createdBy, createdDate, lastModifiedBy and lastModifiedDate properties to a base class, Auditable, annotate this base class with @MappedSuperclass, and later use the Auditable class in other audited entities.

You will also need to write getters, setters, constructors, toString and equals along with these fields. However, you should take a look at Project Lombok: The Boilerplate Code Extractor if you want to auto-generate these things.

Both classes will look like

@MappedSuperclass
@EntityListeners(AuditingEntityListener.class)
public abstract class Auditable<U> {

    @CreatedBy
    protected U createdBy;

    @CreatedDate
    @Temporal(TIMESTAMP)
    protected Date creationDate;

    @LastModifiedBy
    protected U lastModifiedBy;

    @LastModifiedDate
    @Temporal(TIMESTAMP)
    protected Date lastModifiedDate;

    // Getters and Setters

}

@Entity
public class File extends Auditable<String> {
    @Id
    @GeneratedValue
    private int id;
    private String name;
    private String content;

    // Getters and Setters
}

As you can see above, I have used the @CreatedBy, @CreatedDate, @LastModifiedBy and @LastModifiedDate annotations on the respective fields.

The Spring Data JPA approach abstracts working with JPA callbacks and provides us these fancy annotations to automatically save and update auditing properties.

Using AuditingEntityListener class with @EntityListeners


Spring Data JPA provides a JPA entity listener class, AuditingEntityListener, which contains the callback methods (annotated with the @PrePersist and @PreUpdate annotations) used to persist and update these properties when we persist or update our entity.

JPA provides the @EntityListeners annotation to specify callback listener classes which we use to register our AuditingEntityListener class.

However, we can also define our own callback listener classes if we want to and specify them using the @EntityListeners annotation. In my next article, I will demonstrate how we can use @EntityListeners to store audit logs.

Auditing Author using AuditorAware and Spring Security


JPA can derive createdDate and lastModifiedDate from the current system time, but what about the createdBy and lastModifiedBy fields? How will JPA recognize what to store in these fields?

To tell JPA about the currently logged-in user, we will need to provide an implementation of AuditorAware and override the getCurrentAuditor() method. Inside getCurrentAuditor(), we will need to fetch the currently logged-in user.

As of now, I have provided a hard-coded user, but if you are using Spring Security then you can use it to find the currently logged-in user, as I have mentioned in the comment.

public class AuditorAwareImpl implements AuditorAware<String> {

    @Override
    public String getCurrentAuditor() {
        return "Naresh";
        // Can use Spring Security to return currently logged in user
        // return ((User) SecurityContextHolder.getContext().getAuthentication().getPrincipal()).getUsername()
    }
}

Enable JPA Auditing by using @EnableJpaAuditing


We will need to create a bean of type AuditorAware and will also need to enable JPA auditing by specifying @EnableJpaAuditing on one of our configuration classes. @EnableJpaAuditing accepts one argument, auditorAwareRef, where we need to pass the name of the AuditorAware bean.

@Configuration
@EnableJpaAuditing(auditorAwareRef = "auditorAware")
public class JpaConfig {
    @Bean
    public AuditorAware<String> auditorAware() {
        return new AuditorAwareImpl();
    }
}

And now, if we try to persist or update any File object, the CreatedBy, CreatedDate, LastModifiedBy and LastModifiedDate properties will automatically get saved.

In the next article, JPA Auditing: Persisting Audit Logs Automatically using EntityListeners, I discuss how we can use JPA EntityListeners to create audit logs and generate history records for every insert, update and delete operation.

You can find the complete code in this GitHub repository, and please feel free to give your valuable feedback.
Lombok is a tool that generates code like getters, setters, constructors, equals, hashCode and toString for us, in the same way that our IDE does. While the IDE generates all these things in our source code file, Lombok generates them directly in the class file.

So Lombok basically moves all these things from your source code to the bytecode, so we don't need to write them in our source code, which means less code in our source file. In this article, I am going to explain how Lombok can help us remove this kind of boilerplate code.

To understand it, let's suppose we have an entity class Employee and we want to use it to hold a single employee record. We can use it as a DTO or a persistent entity or anything else we want, but the idea is that we want it to store the id, firstName, lastName and salary fields.

For this requirement, we will need a simple Employee POJO, and according to the general directions for creating a Plain Old Java Object:
  • Each variable in a POJO should be declared as private.
  • The default constructor should be defined with public accessibility.
  • Each variable should have its setter and getter methods with public accessibility.
  • A POJO should override the equals(), hashCode() and toString() methods of Object.
And generally our Employee class will look like

public class Employee {
  private long id;
  private int salary;
  private String firstName;
  private String lastName;

  public Employee() {
  }

  public long getId() {
    return id;
  }
  public void setId(long id) {
    this.id = id;
  }
  public int getSalary() {
    return salary;
  }
  public void setSalary(int salary) {
    this.salary = salary;
  }
  public String getFirstName() {
    return firstName;
  }
  public void setFirstName(String firstName) {
    this.firstName = firstName;
  }
  public String getLastName() {
    return lastName;
  }
  public void setLastName(String lastName) {
    this.lastName = lastName;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) return true;
    if (o == null || getClass() != o.getClass()) return false;

    Employee employee = (Employee) o;

    if (id != employee.id) return false;
    if (salary != employee.salary) return false;
    if (!firstName.equals(employee.firstName)) return false;
    if (!lastName.equals(employee.lastName)) return false;

    return true;
  }

  @Override
  public int hashCode() {
    int result = (int) (id ^ (id >>> 32));
    result = 31 * result + firstName.hashCode();
    result = 31 * result + lastName.hashCode();
    result = 31 * result + salary;
    return result;
  }

  @Override
  public String toString() {
    return "Employee{" +
           "id=" + id +
           ", firstName='" + firstName + '\'' +
           ", lastName='" + lastName + '\'' +
           ", salary=" + salary +
          '}';
  }
}

But generally, we always use the auto-generation features of our IDE to generate getters, setters, the default constructor, hashCode, equals and toString, e.g. Alt+Insert in IntelliJ.

As you can see, the size of the Employee class is more than 50 lines, of which the field declarations contribute only 4. These things do not directly contribute anything to our business logic; they just increase the size of our code.

Project Lombok provides a way to remove the above boilerplate code and simplify the development process while still providing these functionalities at the bytecode level. With Project Lombok, we can achieve all of this within 10 lines:

@Data
public class Employee {
  private long id;
  private int salary;
  private String firstName;
  private String lastName;
}

With the @Data annotation on top of our class, Lombok will process our Java source code and produce a class file which will have getters, setters, a default constructor and hashCode, equals and toString methods in it. So basically Lombok is doing the trick: instead of us adding all those things to our source code and then compiling it into a class file, Lombok automatically adds all these things directly to our class files.

But if we need to write some business code in our getters or setters or in any of the above methods, or we want these methods to behave a little bit differently, we can still write that method in our class, and Lombok will not override it while generating all this stuff in the bytecode.

In order to make it work, we need to:
  1. Install the Lombok plugin in our IDE, e.g. in IntelliJ we can install it from the Settings -> Plugins -> Browse Repositories window.
  2. Enable annotation processing, e.g. in IntelliJ we need to check the “Enable annotation processing” option in the Settings -> Compiler -> Annotation Processors window.
  3. Include the Lombok jar in our build path. We can do this by adding the Lombok dependency to the pom.xml file if we are using Maven, or we can download the Lombok jar manually and add it to our classpath.
    <dependency>
      <groupId>org.projectlombok</groupId>
      <artifactId>lombok</artifactId>
      <version>1.16.12</version>
      <optional>true</optional>
    </dependency>
    
Lombok provides a variety of annotations which we can use according to our needs. Some of these annotations are:
  • @NonNull Can be used with fields, methods, parameters, and local variables to check for NullPointerException.
  • @Cleanup Provides automatic resource management and ensures the variable declaration that you annotate will be cleaned up by calling its close method, similar to Java’s try-with-resources.
    @Cleanup InputStream in = new FileInputStream("filename");
    
  • @Getter/@Setter Can be used on class or field to generate getters and setters automatically for every field inside the class or for a particular field respectively.
    @Getter @Setter private long id;
    
  • @ToString Generates a default toString method
  • @EqualsAndHashCode Generates hashCode and equals implementations from the fields of your object.
    @ToString(exclude = "salary")
    @EqualsAndHashCode(exclude = "salary")
    
  • @NoArgsConstructor , @RequiredArgsConstructor and @AllArgsConstructor Generates constructors that take no arguments, one argument per final / non-null field, or one argument for every field.
  • @Data A shortcut for @ToString , @EqualsAndHashCode , @Getter on all fields, and @Setter on all non-final fields, and @RequiredArgsConstructor .
  • @Value is the immutable variant of @Data, Helps in making our class Immutable.
  • @Builder Generates a nice fluent builder API in our class which we can use to create objects of our class in a more readable manner, e.g. if we add the @Builder annotation to our Employee class then we can create an Employee object in the following manner:
    Employee emp = Employee.builder()
                           .firstName("Naresh")
                           .lastName("Joshi")
                           .build();
    
  • @SneakyThrows Allows us to throw checked exceptions without actually declaring this in our method’s throws clause, e.g.
    @SneakyThrows(Exception.class)
    public void doSomeThing() {
      // According to some business condition throw some business exception
      throw new Exception();
    }
    
  • @Synchronized A safer variant of the synchronized method modifier.
  • @CommonsLog, @JBossLog, @Log, @Log4j, @Log4j2, @Slf4j and @XSlf4j, which produce a log field in our class and let us use that field for logging, e.g. if we mark a class with @CommonsLog, Lombok will attach the below field to our class:
    private static final org.apache.commons.logging.Log log = org.apache.commons.logging.LogFactory.getLog(YourClass.class);
    
You can also go to the official website of project Lombok for the complete feature list and examples.
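To make the builder idea concrete, here is roughly what @Builder generates, hand-written in plain Java without Lombok. This is a sketch of the pattern (limited to two fields), not Lombok's exact output:

```java
// Hand-written equivalent of what Lombok's @Builder roughly produces
// for two of the Employee fields (a sketch, not Lombok's exact output).
public class Employee {
    private final String firstName;
    private final String lastName;

    private Employee(Builder builder) {
        this.firstName = builder.firstName;
        this.lastName = builder.lastName;
    }

    // Static factory method returning a fresh builder
    public static Builder builder() {
        return new Builder();
    }

    public static class Builder {
        private String firstName;
        private String lastName;

        public Builder firstName(String firstName) {
            this.firstName = firstName;
            return this; // returning `this` enables method chaining
        }

        public Builder lastName(String lastName) {
            this.lastName = lastName;
            return this;
        }

        public Employee build() {
            return new Employee(this);
        }
    }

    public static void main(String[] args) {
        Employee emp = Employee.builder()
                               .firstName("Naresh")
                               .lastName("Joshi")
                               .build();
        System.out.println(emp.firstName + " " + emp.lastName);
    }
}
```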

Advantages of Lombok

  • Lombok helps us remove boilerplate code and decreases the number of lines of unnecessary code
  • It makes our code highly maintainable, we don’t need to worry about regenerating hashCode, equals, toString, getters, and setters whenever we change our properties.
  • Lombok provides an efficient builder API to build our object by using @Builder
  • Lombok provides efficient way to make our class Immutable by using @Value
  • Provides other annotations like @Log - for logging, @Cleanup - for cleaning resources automatically, @SneakyThrows  - for throwing checked exception without adding try-catch or throws statement and @Synchronized to make our methods synchronized.

Disadvantages of Lombok

The only disadvantage of Lombok is the dependency it introduces: if you are using it, then everyone on your project must use it and configure it (install the plugin and enable annotation processing) to successfully compile the project. All your project mates need to be aware of it; otherwise, they will not be able to build the project and will receive lots of compilation errors. However, this is only an initial step and will not take more than a couple of minutes.
This is my third article in the Java Cloning series. In my previous articles, Java Cloning and Types of Cloning (Shallow and Deep) in Details with Example and Java Cloning - Copy Constructor versus Cloning, I discussed Java cloning in detail and explained every concept: what cloning is, how it works, the necessary steps we need to follow to implement it, how to use Object.clone(), what shallow and deep cloning are, how to achieve cloning using serialization and copy constructors, and the advantages of copy constructors over Java cloning.

If you have read those articles, you can easily understand why it is good to use copy constructors over cloning or Object.clone(). In this article, I am going to discuss why copy constructors are not sufficient.

[Image: Why Copy Constructors Are Not Sufficient]

Yes, you are reading it right: copy constructors are not sufficient by themselves, because copy constructors are not polymorphic; constructors do not get inherited by the child class from the parent class. If we try to refer to a child object from a parent class reference, we will face problems in cloning it using the copy constructor. To understand this, let’s take the example of two classes, Mammal and Human, where Human extends Mammal. The Mammal class has one field, type, and two constructors: one to create the object and one copy constructor to create a copy of an object.

class Mammal {

    protected String type;

    public Mammal(String type) {
        this.type = type;
    }

    public Mammal(Mammal original) {
        this.type = original.type;
    }

    public String getType() {
        return type;
    }

    public void setType(String type) {
        this.type = type;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;

        Mammal mammal = (Mammal) o;

        if (!type.equals(mammal.type)) return false;

        return true;
    }

    @Override
    public int hashCode() {
        return type.hashCode();
    }

    @Override
    public String toString() {
        return "Mammal{" + "type='" + type + "'}";
    }
}

And the Human class, which extends the Mammal class, has one name field, one normal constructor and one copy constructor to create a copy:

class Human extends Mammal {

    protected String name;

    public Human(String type, String name) {
        super(type);
        this.name = name;
    }

    public Human(Human original) {
        super(original.type);
        this.name = original.name;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        if (!super.equals(o)) return false;

        Human human = (Human) o;

        if (!type.equals(human.type)) return false;
        if (!name.equals(human.name)) return false;

        return true;
    }

    @Override
    public int hashCode() {
        int result = super.hashCode();
        result = 31 * result + name.hashCode();
        return result;
    }

    @Override
    public String toString() {
        return "Human{" + "type='" + type + "', name='" + name + "'}";
    }
}

Here, both copy constructors create a fully independent copy (the String fields are immutable, so copying the references is safe).

Now let’s create objects for both classes

Mammal mammal = new Mammal("Human");
Human human = new Human("Human", "Naresh");

Now, if we want to create a clone of mammal or human, we can simply call the respective copy constructor

Mammal clonedMammal = new Mammal(mammal);
Human clonedHuman = new Human(human);

We will get no error doing this, and both objects will be cloned successfully, as the tests below show

System.out.println(mammal == clonedMammal); // false
System.out.println(mammal.equals(clonedMammal)); // true

System.out.println(human == clonedHuman); // false
System.out.println(human.equals(clonedHuman)); // true

But what if we refer to a Human object through a Mammal reference?

Mammal mammalHuman = new Human("Human", "Mahesh");

In order to clone mammalHuman, we cannot use Human's copy constructor; it gives a compilation error because the declared type of mammalHuman is Mammal, while Human's copy constructor accepts a Human.

Mammal clonedMammalHuman = new Human(mammalHuman); // compilation error

And if we try to clone mammalHuman using Mammal's copy constructor, we will get a Mammal object instead of a Human, even though mammalHuman holds a Human object

Mammal clonedMammalHuman = new Mammal(mammalHuman);

So mammalHuman and clonedMammalHuman are not equal objects, as the output of the code below shows

System.out.println("Object " + mammalHuman + " and copied object " + clonedMammalHuman + " are == : " + (mammalHuman == clonedMammalHuman));
System.out.println("Object " + mammalHuman + " and copied object " + clonedMammalHuman + " are equal : " + (mammalHuman.equals(clonedMammalHuman)) + "\n");

Output:

Object Human{type='Human', name='Mahesh'} and copied object Mammal{type='Human'} are == : false
Object Human{type='Human', name='Mahesh'} and copied object Mammal{type='Human'} are equal : false

As we can see, copy constructors suffer from this inheritance problem; they are not polymorphic. So how can we solve it? Well, there are various solutions, such as creating static factory methods or a generic helper class that does the copying for us, and the list goes on.

But there is a very easy solution that keeps the copy constructors and is polymorphic as well: defensive copy methods. We include such a method in our class and call the copy constructor from it, then override the method in the child class and call the child's copy constructor from there.

Defensive copy methods also give us the advantage of dependency injection: instead of making our code tightly coupled, we can make it loosely coupled. We can even create an interface that declares our defensive copy method, implement it in our class, and override that method in subclasses.
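The interface idea can be sketched as follows. This is a minimal illustration, not the article's own code: the Copyable interface and the Animal/Dog classes are hypothetical names chosen to avoid clashing with the Mammal/Human example above.

```java
// Hypothetical interface declaring the defensive copy method.
interface Copyable<T> {
    T copyInstance();
}

class Animal implements Copyable<Animal> {
    protected String kind;

    Animal(String kind) { this.kind = kind; }
    Animal(Animal original) { this.kind = original.kind; } // copy constructor

    @Override
    public Animal copyInstance() {
        return new Animal(this); // delegates to the copy constructor
    }
}

class Dog extends Animal {
    Dog(String kind) { super(kind); }
    Dog(Dog original) { super(original); } // copy constructor

    @Override
    public Dog copyInstance() {            // covariant return type
        return new Dog(this);
    }
}

public class CopyableDemo {
    public static void main(String[] args) {
        Animal ref = new Dog("Canine");    // child object, parent reference
        Animal copy = ref.copyInstance();  // polymorphic: Dog's override runs
        System.out.println(copy.getClass().getSimpleName()); // Dog
    }
}
```

Because the call dispatches on the runtime type, the copy is always of the correct concrete class, which is exactly what the plain copy constructor could not guarantee.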

So in the Mammal class, we will create a no-argument method cloneObject (we are free to name it anything, such as clone, copy, or copyInstance)

public Mammal cloneObject() {
    return new Mammal(this);
}

And we override the same method in the Human class

@Override
public Human cloneObject() {
    return new Human(this);
}

Now to clone mammalHuman we can simply say

Mammal clonedMammalHuman = mammalHuman.cloneObject();

And for the last two println statements we will get the output below, which is our expected behaviour.

Object Human{type='Human', name='Mahesh'} and copied object Human{type='Human', name='Mahesh'} are == : false
Object Human{type='Human', name='Mahesh'} and copied object Human{type='Human', name='Mahesh'} are equal : true

As we can see, apart from giving us polymorphism, this option also frees us from passing any argument.

You can find the complete code in the CopyConstructorExample Java file on GitHub; please feel free to give your valuable feedback.
In my previous article Java Cloning and Types of Cloning (Shallow and Deep) in Details with Example, I discussed Java cloning in detail and answered how we can use cloning to copy objects in Java, what the two types of cloning (shallow and deep) are, and how to implement both of them; if you haven't read it, please go ahead.

In order to implement cloning, we need to configure our classes to follow the steps below:
  • Implement Cloneable interface in our class or its superclass or interface,
  • Define clone() method which should handle CloneNotSupportedException (either throw or log),
  • And in most cases from our clone() method we call the clone() method of the superclass.
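The three steps above can be sketched in a minimal class (Point is an illustrative name, not from the article's code):

```java
class Point implements Cloneable {              // step 1: implement Cloneable
    int x, y;

    Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public Point clone() {                      // step 2: define clone()
        try {
            return (Point) super.clone();       // step 3: delegate to the superclass
        } catch (CloneNotSupportedException e) {
            // Cannot happen: this class implements Cloneable.
            throw new AssertionError(e);
        }
    }
}

public class CloneStepsDemo {
    public static void main(String[] args) {
        Point p = new Point(1, 2);
        Point q = p.clone();
        System.out.println(q != p);                   // true: a new object
        System.out.println(q.x == p.x && q.y == p.y); // true: same state
    }
}
```

Catching CloneNotSupportedException and rethrowing it as an AssertionError is one common way to "handle" it, since a class that implements Cloneable can never actually trigger it.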
Java Cloning versus Copy Constructor

And super.clone() will call its own super.clone(), and the chain will continue until the call reaches the clone() method of the Object class, which creates a field-by-field memory copy of our object and returns it.

Like everything, cloning comes with its advantages and disadvantages. Java cloning is perhaps more famous for its design issues, but it is still the most common and popular cloning strategy today.

Advantages of Object.clone()

Object.clone() has many design issues, but it is still a popular and easy way of copying objects. Some advantages of using clone() are:
  • Cloning requires very little code, just a 4-or-5-line clone() method, though we will need to override it if we need deep cloning.
  • It is the easiest way of copying objects, especially when applying it to an already developed or old project. We just need to define a parent class, implement Cloneable in it, and provide the definition of the clone() method, and every child of that parent gets the cloning feature.
  • We should use clone to copy arrays because that’s generally the fastest way to do it.
  • As of release 1.5, calling clone on an array returns an array whose compile-time type is the same as that of the array being cloned, which means calling clone on arrays does not require a cast.
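The array advantage can be seen in a short snippet; clone on an array needs no cast and returns a distinct array with the same contents:

```java
import java.util.Arrays;

public class ArrayCloneDemo {
    public static void main(String[] args) {
        int[] original = {1, 2, 3};
        int[] copy = original.clone();   // compile-time type is int[], no cast needed

        System.out.println(copy != original);              // true: distinct array objects
        System.out.println(Arrays.equals(original, copy)); // true: same element values
    }
}
```

Note that for arrays of object references, clone copies only the references (a shallow copy), so the elements themselves are shared.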

Disadvantages of Object.clone()

Below are some drawbacks due to which many developers don't use Object.clone():
  • Using the Object.clone() method requires us to add a lot of syntax to our code: implement the Cloneable interface, define the clone() method, handle CloneNotSupportedException, and finally call Object.clone() and cast the result to our type.
  • The Cloneable interface lacks a clone() method; Cloneable is actually a marker interface without any methods, yet we still need to implement it just to tell the JVM that we can perform clone() on our object.
  • Object.clone() is protected so we have to provide our own clone() and indirectly call Object.clone() from it.
  • We don’t have any control over object construction because Object.clone() doesn’t invoke any constructor.
  • If we are writing the clone method in a child class, e.g. Person, then all of its superclasses should define a clone() method or inherit it from another parent class; otherwise, the super.clone() chain will fail.
  • Object.clone() supports only shallow copy, so the reference fields of our newly cloned object will still point to the same objects as the fields of the original. To overcome this, we need to implement clone() in every class whose reference our class holds and then call their clone() methods separately from our own clone() method, as in the example below.
  • We cannot manipulate final fields in Object.clone() because final fields can only be assigned through constructors. In our case, if we want every Person object to be unique by id, we will get a duplicate id if we use Object.clone(), because Object.clone() does not call a constructor and the final id field cannot be modified from Person.clone().
class City implements Cloneable {
    private final int id;
    private String name;

    public City(int id, String name) {
        this.id = id;
        this.name = name;
    }

    public City clone() throws CloneNotSupportedException {
        return (City) super.clone();
    }
}

class Person implements Cloneable {
    private String name;
    private City city;

    public Person clone() throws CloneNotSupportedException {
        Person clonedObj = (Person) super.clone();
        clonedObj.name = new String(this.name);
        clonedObj.city = this.city.clone(); // deep-copy the reference field
        return clonedObj;
    }
}

Because of the above design issues with Object.clone(), developers often prefer other ways to copy objects, such as:
  • BeanUtils.cloneBean(object) creates a shallow clone similar to Object.clone().
  • SerializationUtils.clone(object) creates a deep clone. (i.e. the whole properties graph is cloned, not only the first level), but all classes must implement Serializable.
  • Java Deep Cloning Library offers deep cloning without the need to implement Serializable.
All these options require an external library, and these libraries internally use serialization, copy constructors, or reflection to copy our object. If you don't want to go with the above options, or you want to write your own code to copy objects, you can use:
  1. Serialization
  2. Copy Constructors

Serialization

As discussed in 5 different ways to create objects in Java, deserializing a serialized object creates a new object with the same state as the serialized one. So, similar to the cloning approaches above, we can achieve deep cloning through object serialization and deserialization, and with this approach we do not have to worry about, or write code for, deep cloning; we get it by default.

We can do it as shown below, or we can use other APIs, like JAXB, that support serialization.

// Method to deep clone an object using in-memory serialization.
public Person copy(Person original) throws IOException, ClassNotFoundException {
    // First serialize the object and its state to memory using ByteArrayOutputStream instead of FileOutputStream.
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    ObjectOutputStream out = new ObjectOutputStream(bos);
    out.writeObject(original);

    // Then deserialize it from memory using ByteArrayInputStream instead of FileInputStream.
    // Deserialization creates a new object with the same state as the serialized one.
    ByteArrayInputStream bis = new ByteArrayInputStream(bos.toByteArray());
    ObjectInputStream in = new ObjectInputStream(bis);
    return (Person) in.readObject();
}

However, cloning an object using serialization comes with some performance overhead. We can improve on it by using in-memory serialization, as above, if we just need to clone the object and don't need to persist it to a file for future use; you can read more in How To Deep Clone An Object Using Java In Memory Serialization.

Copy Constructors

This method of copying objects is the most popular in the developer community; it overcomes every design issue of Object.clone() and provides better control over object construction.

public Person(Person original) {
    this.id = original.id + 1;
    this.name = new String(original.name);
    this.city = new City(original.city);
}

Advantages of copy constructors over Object.clone()

Copy constructors are better than Object.clone() because they
  • Don’t force us to implement any interface or throw any exception, although we can surely do so if required.
  • Don’t require any type of cast.
  • Don’t require us to depend on an unknown object creation mechanism.
  • Don’t require parent class to follow any contract or implement anything.
  • Allow us to modify final fields.
  • Allow us to have complete control over object creation, we can write our initialization logic in it.
By using the copy constructor strategy, we can also create conversion constructors, which allow us to convert one object to another; e.g. the ArrayList(Collection<? extends E> c) constructor creates an ArrayList from any Collection object and copies all items from the Collection into the newly created ArrayList.
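The JDK's own collection classes use this pattern; the snippet below converts a Set into an ArrayList via that conversion constructor:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class ConversionConstructorDemo {
    public static void main(String[] args) {
        // LinkedHashSet keeps insertion order, so the output below is deterministic.
        Set<String> names = new LinkedHashSet<>(Arrays.asList("Naresh", "Mahesh"));

        // ArrayList(Collection<? extends E>) copies every element of the Set
        // into a newly created ArrayList.
        List<String> list = new ArrayList<>(names);

        System.out.println(list); // [Naresh, Mahesh]
    }
}
```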

