Thursday, July 31, 2008

Guarded Block in Java - what's it & why is it used?


Guarded Blocks in Java - what are they & why are they used?

This is one of the most popular mechanisms for co-ordinating the execution of multiple threads in a multithreaded application. Such a block keeps checking a particular condition and lets the actual execution of the thread resume only once that condition becomes true.

Why would a thread's execution need to be co-ordinated with that of other threads? There may be several reasons. One of them: if multiple threads are modifying some shared data and one (or more) of them can't really proceed unless the shared data acquires a particular value (otherwise the execution may cause some inconsistency or business error), then that thread can check a condition on the value of the shared data and be allowed to proceed only when the condition is true.

Now, what will the thread do until then? Based on how the thread reacts to a 'false' condition, guarded blocks are of two types:-

  • synchronized guarded block - if the condition is false, the block (which is a synchronized block) simply calls the Object.wait() method, which releases the monitor acquired on that object and frees the processor to be used by other threads (probably more effectively, as the current thread would not be doing anything significant until the condition becomes true).
  • non-synchronized guarded block - here we simply let the thread keep executing an empty loop until the condition becomes true. This approach has the obvious disadvantage of wasting precious CPU time, which could otherwise have been better utilized by other threads.

Example: synchronized guarded block - Java code snippet

public synchronized void guardedBlock() {
    while (!sharedBooleanFlag) {
        try {
            wait();
        } catch (InterruptedException e) {}
    }
    System.out.println("Shared Boolean Flag is true - you may proceed now!");
}

non-synchronized guarded block - Java code snippet

public void guardedBlock() {
    while (!sharedBooleanFlag) {
        //... empty loop
    }
    System.out.println("Shared Boolean Flag is true - you may proceed now!");
}

A thread which is in the waiting state on an object may be notified (or interrupted) by some other thread, either intentionally or accidentally, and may therefore start executing once it gets re-scheduled. That alone doesn't guarantee that the condition which must be true for the thread to proceed has actually become true, so one needs to make sure that the wait() method is always called inside the loop which checks the condition. If the thread wakes up before the condition becomes true, it immediately goes back into the waiting state, which saves precious CPU time for other threads and at the same time guarantees that the thread proceeds only in the expected manner.

Don't forget to ensure that the application has other threads which acquire locks on the objects (on which other threads have previously called the wait() method) and call either the notify() or notifyAll() method, so that the application can avoid starvation of the threads which have called wait() on the objects under consideration. notifyAll() wakes up all the waiting threads whereas notify() picks one of the waiting threads arbitrarily and wakes it up. Use the method which suits the design of your application. Read more about the differences between the two methods in this article - notify() vs notifyAll() >>.
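For completeness, here is a minimal sketch of the notifying side that would pair with the synchronized guarded block above; the field name sharedBooleanFlag is taken from the snippet above, everything else is illustrative.

public synchronized void setFlagAndNotify() {
    sharedBooleanFlag = true;   // update the shared condition while holding this object's monitor
    notifyAll();                // wake up all threads waiting on this object's monitor
}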





Tuesday, July 29, 2008

Deadlock - what's it? How to deal with this situation?


Deadlock - what's it? How to deal with a Deadlock situation?


What's Deadlock?


It's basically a situation where two or more threads are blocked forever waiting for each other to release an acquired monitor to proceed further.


Let me try to explain it using an example. Suppose we have two threads - threadA and threadB. There are two objects of a class TestClass - objA and objB - and TestClass has two synchronized instance methods named A and B. Method A accepts an argument of type TestClass and calls method B on the passed object reference. Now consider a situation where the first thread, threadA, acquires the monitor of objA and enters the synchronized method A() on objA with objB as the passed reference, and at the same time threadB enters the same synchronized method A() on objB with objA as the passed reference. Now both threads will keep waiting for the monitors of objB and objA respectively in order to complete the execution of the synchronized method A(), and hence such a situation will result in a deadlock. Of course such a situation will probably happen only rarely, but it's definitely a possible scenario. Calling the start() method on a Thread instance only ensures that the particular thread will participate in CPU scheduling; we never know when exactly the thread will actually be allocated the CPU and in turn start its execution.


Example: Java program having a possibility of Deadlock


public class PossibleDeadlockDemo {

    static class TestClass {

        private final String name;   // implied by the constructor calls in main()

        TestClass(String name) {
            this.name = name;
        }

        public synchronized void A(TestClass testClass) {
            //... some work while holding this object's monitor
            testClass.B();   // needs the other object's monitor as well
        }

        public synchronized void B() {
            //...
        }
    }

    public static void main(String[] args) {

        final TestClass objA = new TestClass("ObjectA");
        final TestClass objB = new TestClass("ObjectB");

        Thread threadA = new Thread(new Runnable() {
            public void run() { objA.A(objB); }
        });

        Thread threadB = new Thread(new Runnable() {
            public void run() { objB.A(objA); }
        });

        threadA.start();
        threadB.start();
    }
}


How to deal with a Deadlock situation?


Probably the only way to rectify a deadlock situation is to avoid it. Once the deadlock has already happened, you'll probably be left with the sole option of restarting the JVM process, or it may even require you to restart the underlying OS. A deadlock is very hard to reproduce and debug because one can never be sure about when exactly the threads will be executed; it may not happen while you test the code and may eventually show up once the code goes into production. So, review the design of your multithreaded application before you really start coding. With a better design you may manage to escape such a situation entirely, or at least minimize the possibility to a great extent.
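One common design technique for avoiding a deadlock like the one above (not discussed in the original post, so treat it as an illustrative sketch) is to always acquire the two monitors in a fixed global order, instead of relying on nested synchronized methods:

// Hypothetical re-working of method A: both monitors are always taken in the same
// order (System.identityHashCode is used only as an illustrative tie-breaker; a real
// design would need a definitive ordering), so two threads can no longer hold one
// lock each while waiting for the other.
public void A(TestClass testClass) {
    TestClass first = System.identityHashCode(this) <= System.identityHashCode(testClass) ? this : testClass;
    TestClass second = (first == this) ? testClass : this;
    synchronized (first) {
        synchronized (second) {
            testClass.B();
        }
    }
}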





Wednesday, July 23, 2008

Methods accepting Variable Argument Lists in Java?


How to implement methods accepting Variable Argument Lists in Java?


Java (prior to Java 5, which introduced the varargs '...' syntax) doesn't have a built-in feature to handle variable argument lists in a method call the way we have in C/C++. But there are several alternative ways of achieving this in Java using simple programming constructs. Let's try to discuss these alternative ways:-


Method Overloading - it directly comes to mind whenever we start thinking about a method which has the same name and a different argument list (based on order, type, and number of arguments). We can obviously overload the method so that it accepts various argument lists, but it's quite evident that this approach has two serious problems:-

  • Knowledge of the various argument combinations at design time - unless we are aware of all the combinations of method calls we may encounter at run time, how can we overload the method? So we can't really achieve a generic variable-argument-list call to the method following this approach.
  • Duplication of code - evidently most of the code (except the part which deals with the new arguments of the particular overloaded version) may be duplicated. This won't really cause any performance problem, as the particular overloaded method is bound at compile time itself, but it may certainly increase the size of the .class file of the particular class.


Using an Object array - as we know, every class in Java either directly or indirectly extends the cosmic superclass named Object, and hence an Object array (Object[]) as the parameter type of a method enables the method to accept variable argument lists as an Object[] of the corresponding length. For primitive-type arguments we need to wrap them in their corresponding wrapper classes before adding them to the Object array and unwrap them back inside the method. This approach is almost a complete equivalent of the variable argument lists feature of C/C++; the only problem is that any changes made to the array elements inside the method will actually change the original data as well (as is the case with any method receiving object references).
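A minimal sketch of this approach (the method and argument names are made up for illustration):

// a method accepting a variable number of arguments via an Object[]
public static void printAll(Object[] args) {
    for (int i = 0; i < args.length; i++) {
        System.out.println("Argument #" + (i + 1) + ": " + args[i]);
    }
}

// calling it - primitives have to be wrapped (here via Integer.valueOf)
printAll(new Object[] { "a String", Integer.valueOf(10), new java.util.Date() });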



Using the Collection framework - with the introduction of the Collection framework in Java we have a better way of handling a situation requiring a variable argument list in a method call. The reason is simple: we can handle the problem of the method modifying the original data (which was the case with Object[], as discussed above) by first obtaining an unmodifiable collection containing the variable argument list and passing that as the method parameter. This ensures that any attempt to change the passed collection inside the method raises an exception. Example: most commonly we use a List to handle such a situation. Once we are done adding all the arguments to the List, we simply retrieve an unmodifiable view of it by calling Collections.unmodifiableList(List originalList) and pass this unmodifiable List reference to the method.


Note: if we want to keep a modifiable reference to the List as well, then we need to capture the return value of Collections.unmodifiableList(List) in a separate List reference; if we assign it back to the original reference then we won't have any reference left to change the List, either in the calling method or in the called method. Be sure of what you want before deciding.


...
List originalVAList = new ArrayList();

//... obviously we can add any Object, not just a String
originalVAList.add("Argument #1");
originalVAList.add("Argument #2");
...
originalVAList.add("Argument #n");

//... retrieving an unmodifiable view of the List
List unmodifiableVAList = Collections.unmodifiableList(originalVAList);
...
//... call the method with the unmodifiable List reference
variableArgMethod(unmodifiableVAList);
...


Another advantage of using the Collection framework instead of Object[] is that we don't need to bother about trivial stuff like looping through the entire array to print it, fetching the size, etc., as the Collection framework is rich enough to support all those features with very little coding, and the implementation is supposedly either better than or at least as good as what most of us would write ourselves. So, using the Collection framework to handle such a scenario is always advisable unless of course you have a very strong reason to do otherwise.
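And for completeness: on Java 5 or newer, the language-level varargs syntax mentioned at the beginning covers most of these situations directly. A minimal sketch (the method name is illustrative):

// the compiler packs the trailing arguments into an Object[] for us
public static void variableArgMethod(Object... args) {
    for (Object arg : args) {
        System.out.println(arg);
    }
}

// call sites can pass any number of arguments, including none
variableArgMethod("Argument #1", "Argument #2", Integer.valueOf(3));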







IEEE 754 floating point arithmetic used by Java



IEEE 754 floating point arithmetic used by Java


Be careful while using floating point arithmetic in Java and don't rely on getting mathematically exact results from an expression involving floating point values, for the simple reason that the results are not guaranteed to be identical to their mathematical counterparts. The underlying reason for the discrepancy is that the IEEE 754 floating point arithmetic used by Java is NOT capable of representing every float/double value: it doesn't store values as decimal fractions but as binary fractions and exponents, and hence many floating-point numbers have to be rounded or cut off at some point. You should therefore consider a float/double value to be only nearly equal to the specified value, not exactly equal. Only those float/double values which can be represented exactly by binary fractions will be identical to the specified value. For example:


0.5 = 1/2

0.75 = 1/2 + 1/(2^2)


The above two numbers are exactly representable and hence their internal representation will always be exactly equal to these values. But any arithmetic involving them is again not guaranteed to match its mathematical counterpart exactly, as the result of the expression (or of the intermediate expressions) decides whether that result is exactly representable or not.


0.1 = 1/(2^4) + 1/(2^5) + 1/(2^8) + ... a never-ending series, and hence the value '0.1' is not exactly representable in IEEE 754 floating point arithmetic. This is the reason why the statement below will print "0.1 + 0.2 is not equal to 0.3" and not "0.1 + 0.2 is equal to 0.3" as one might expect (in double arithmetic 0.1 + 0.2 evaluates to 0.30000000000000004; note that 0.1 + 0.1 == 0.2 actually happens to hold, because doubling a value is exact, so that pair would not demonstrate the problem).


if (0.1 + 0.2 != 0.3) {
    System.out.println("0.1 + 0.2 is not equal to 0.3");
} else {
    System.out.println("0.1 + 0.2 is equal to 0.3");
}


How to deal with such a situation?


Good question. Avoid using float/double in those cases where the exact value is of great importance, and use the BigDecimal class instead. The usage of BigDecimal will of course hamper performance to a certain extent, but you probably have no other choice if the accuracy of the decimal arithmetic is important for your application.
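A minimal sketch of the BigDecimal alternative (using the String constructor, since constructing a BigDecimal from a double would carry the binary rounding error along):

import java.math.BigDecimal;
...
BigDecimal a = new BigDecimal("0.1");
BigDecimal b = new BigDecimal("0.2");
BigDecimal sum = a.add(b);

System.out.println(0.1 + 0.2);                              // 0.30000000000000004
System.out.println(sum);                                    // 0.3
System.out.println(sum.compareTo(new BigDecimal("0.3")));   // 0, i.e. exactly equal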


Few strange-looking facts about floating-point arithmetic


Find below a small list of strange-looking facts about floating-point arithmetic in Java. It may be of some use at some point during the design and development of your application (a small snippet demonstrating them follows the list):-

  • -0.0 and +0.0 (or 0.0, which is the same as +0.0) are not always interchangeable, in the sense that some floating-point operations produce different results for them, even though the comparison (-0.0 == +0.0) returns 'true'.
  • any non-zero floating-point value divided by 0.0, -0.0, or +0.0 produces either Infinity or -Infinity; the sign of the result is positive if the dividend and the (signed) zero divisor have the same sign, and negative otherwise. Notice that no exception is thrown.
  • 0.0/0.0, -0.0/0.0, +0.0/-0.0, or any such combination of 0.0, -0.0, and +0.0 results in NaN (Not a Number). Again, no exception is thrown in these cases either.
  • Double.MAX_VALUE and Double.POSITIVE_INFINITY are different. The former has the value 1.7976931348623157E308 whereas the latter prints as Infinity.
  • There are no ordinary numerical equivalents of -Infinity, Infinity, or NaN. These three are special values defined by IEEE 754 floating-point arithmetic to denote special conditions.
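A small sketch illustrating the facts above:

System.out.println(-0.0 == +0.0);              // true
System.out.println(1.0 / 0.0);                 // Infinity
System.out.println(-1.0 / 0.0);                // -Infinity
System.out.println(1.0 / -0.0);                // -Infinity (divisor's sign matters too)
System.out.println(0.0 / 0.0);                 // NaN
System.out.println(Double.MAX_VALUE);          // 1.7976931348623157E308
System.out.println(Double.POSITIVE_INFINITY);  // Infinity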





Monday, July 21, 2008

Concurrent execution of static and non-static synchronized methods


Concurrent execution of static and non-static synchronized methods


We have a scenario with a static synchronized method and a non-static synchronized method. As we know, a static method can only access static members of the class whereas a non-static method can access both static and non-static members. This liberty can put us into a situation where a non-static method might be updating some static data which a static method is also changing. What will happen if two different threads try to access the static and the non-static synchronized method concurrently and in turn try to change the same static data via the two methods? Will synchronization guarantee mutual exclusion in such a case?


No... synchronization won't guarantee mutual exclusion in such a case. Both threads may execute the static and the non-static synchronized method concurrently, because a non-static synchronized method requires the invoking thread to acquire the lock of the particular object on which the method is invoked, whereas a static synchronized method requires the invoking thread to acquire the lock of the java.lang.Class object associated with the class (of which the static method is a part). These two locks have nothing to do with each other and can be held by different threads at the same time, and hence two different threads may execute a static synchronized method and a non-static synchronized method of a class at the same time (of course once they acquire the two locks).


It is certainly not advisable to allow such a thing to happen, and hence the design of such a class needs to be re-checked. Otherwise the errors caused by simultaneous modification of the same static data by different threads may introduce some really-hard-to-detect bugs.


Thus, we see that only declaring the methods as synchronized doesn't ensure mutual exclusion and the programmer should always think about the locks involved to achieve the actual purpose of synchronization.


Have a look at the following simple example where the same static data is being modified by two public synchronized methods - one static and the other non-static. The whole purpose of synchronization is defeated here, as mutual exclusion can't be guaranteed, which is one of the main reasons for using synchronization in the first place.


public final class BadDesign {

    private static int sensitiveData;

    public synchronized static void changeDataViaStaticMethod(int a) {
        //... updating the sensitiveData
        sensitiveData = a;
    }

    public synchronized void changeDataViaNonStaticMethod(int b) {
        //... updating the sensitiveData
        sensitiveData = b;
    }

    public static void showSensitiveDataStatic() {
        System.out.println("Static: " + Thread.currentThread().getName() + " - " + sensitiveData);
    }

    public void showSensitiveData() {
        System.out.println(Thread.currentThread().getName() + " - " + sensitiveData);
    }

    public static void main(String[] args) {
        new Thread(new TestThread11()).start();
        new Thread(new TestThread11()).start();
    }
}

class TestThread11 implements Runnable {

    public void run() {
        int i = 0;
        do {
            BadDesign.changeDataViaStaticMethod(5);
            BadDesign.showSensitiveDataStatic();

            //... new object for every iteration, so synchronization of the
            //... non-static method doesn't really do anything significant here
            BadDesign bd = new BadDesign();
            bd.changeDataViaNonStaticMethod(10);
            bd.showSensitiveData();
        } while (i++ < 100);
    }
}


An excerpt from the output:-

...

Static: Thread-0 - 5

Thread-0 - 10

Static: Thread-0 - 5

Thread-0 - 10

Thread-1 - 10

Static: Thread-0 - 5

Static: Thread-1 - 5

Thread-1 - 10

...


By looking at the above excerpt you can easily conclude that the static and non-static synchronized method calls are intermixed between the two threads, Thread-0 and Thread-1. In this example the non-static synchronized method is always called on a new object (the do-while loop in the run() method creates a new BadDesign object for every call), so synchronization doesn't really do anything significant for those calls: every new object has its own lock, which the invoking thread easily acquires on every iteration. I've kept it this way just to keep the code simple, as the purpose of the example is only to show that static synchronized methods and non-static synchronized methods are completely independent of each other and can be executed at the same time by different threads, for the simple reason that the two are guarded by different locks which may be held at the same time by two different threads.







Tuesday, July 15, 2008

Composition vs Aggregation. What's Association?


Composition vs Aggregation. How is Association related to them?


Association - it's a structural relationship between two classifiers - classes or use cases. A cyclic (reflexive) association is also possible, when the same entity plays two different roles and the association describes the relationship between those two roles.


In UML notation an association is depicted by a solid line. For example, suppose we have two entities, Employee (EmpNo, Name, EmailId, Salary, DeptNo, Address) and Department (DeptNo, DeptName). These two entities have an association. An association can have any cardinality at either of its two ends, depending on the particular relationship.


Aggregation - this is a special type of association used for modelling 'possession' (not ownership), with the restriction that it is NOT cyclic. That means an entity can't have an aggregation relationship with itself. This is the reason why an aggregation relationship doesn't form a graph; instead it forms a tree.


Aggregation is normally understood as a "has-a" relationship. Here both entities continue to have their own independent existence. In UML notation, an aggregation is depicted by an unfilled diamond and a solid line. For example, consider two entities Employee (EmpNo, Name, EmailId, Salary, DeptNo, Address) and Address (FlatNo, PlotNo, StreetName, Area, PinCode). Every Employee has an Address, but the lifetime of an Address instance is not governed by any Employee instance. It may continue to exist even after the Employee instance is reclaimed. Right?


Composition - this is also a special type of association, used for modelling 'ownership'. It is very similar to aggregation, with the only difference that it depicts a whole-part relationship and the 'part' entity doesn't have an independent existence of its own. The existence of the entity depicting the 'whole' governs the lifetime of the 'part' entity as well. For example, consider two entities Employee (EmpNo, Name, EmailId, Salary, DeptNo, Address) and EmailId (ID, domain). An independent instance of the EmailId entity doesn't really make sense unless it's associated with an Employee instance.


Composition is normally understood as a "contains-a" relationship. Similar to aggregation, a composition also forms a tree and not a graph, as it can't be cyclic. In UML notation, a composition is depicted by a filled diamond and a solid line.


Aggregation vs Composition

  • Aggregation represents a "has-a" relationship whereas Composition represents a "contains-a" OR "whole-part" relationship.
  • In case of Aggregation, both the entities will continue to have their independent existence whereas in case of Composition the lifetime of the entity representing 'part' of the "whole-part" is governed by the entity representing 'whole'.
  • Aggregation represents a loose form of relationship as compared to that represented by Composition which depicts a stronger relationship.
  • In UML notation, an Aggregation relationship is depicted by an unfilled diamond and a solid line whereas a Composition relationship is depicted by a filled diamond and a solid line.
  • A 'part' of a Composition relationship can have only one 'whole' at a time i.e., a multiplicity of 1 whereas an Aggregation relationship can have any multiplicity 0..* at the aggregate end.
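To make the difference concrete, here is a rough Java sketch of the Employee example used above (class and field names are illustrative; UML relationships don't map to code in only one way):

class Address {                        // exists independently of any Employee
    String streetName;
    String pinCode;
}

class EmailId {                        // meaningful only as part of an Employee
    final String id;
    final String domain;
    EmailId(String id, String domain) { this.id = id; this.domain = domain; }
}

class Employee {
    Address address;                   // aggregation: Employee "has-a" Address;
                                       // the Address is created and may be shared elsewhere
    private final EmailId emailId;     // composition: Employee owns its EmailId,
                                       // created and discarded along with the Employee
    Employee(String name, Address address) {
        this.address = address;
        this.emailId = new EmailId(name, "example.com");
    }
}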






Java architecture, JRE, Java Platform, JVM, JDK


Java architecture, JRE, Java Platform, JVM, JDK


Java Architecture


Java is much more than just a popular programming language; it's a complete architecture. The entire Java architecture can be broadly categorized into three technologies which work together to make Java a powerful software development architecture. The components of the Java architecture are:-

  • The Java Programming Language - the JLS (Java Language Specification) defines the syntax, constructs, grammar, etc. of the Java language.
  • The Java APIs - the Java Application Programming Interfaces (APIs) make a plethora of functionality available to application developers so that they can focus on their core business functionality without bothering about common tasks in areas like I/O, networking, language constructs, multithreading, serialization, etc. Many vendors have implemented these APIs, including the standard implementation given by Sun Microsystems which is normally used as the benchmark. The Java APIs are simply a collection of Java .class files. A .class file is the bytecode representation of a compiled Java class. Bytecodes are a collection of opcode-operand tuples which are converted into native machine-level instructions by the underlying JVM.
  • The Java Virtual Machine - the JVM is the abstract machine which runs all the bytecodes on a particular machine. The JVM converts the bytecodes into native instructions which are executed by the underlying Operating System.


JVM - Java Virtual Machine


This software component is the key to achieving the Platform Independence feature of Java. Most of the popular platforms have their own JVM implementation, and hence a Java program written and compiled on one Java-compliant platform can be run on any other Java-compliant platform.


The JVM is mainly composed of two components - the Classloader and the Execution Engine. The first component loads the .class files after converting them into implementation-dependent internal data structures. The Execution Engine uses these data structures and converts the bytecode instructions into machine-level instructions for the underlying Operating System, which are subsequently executed by the OS.


Java Runtime Environment (JRE) / Java Platform


Both the terms JRE and Java Platform represent the same thing. The 'Java Platform' naming started with version 1.2, which was called the Java 2 Platform. The JRE is nothing but a combination of the JVM and the Java APIs - the Java SE APIs as well as the Java Web (Deployment) part, which consists of Java Web App Development/Deployment, Java Web Start, and the Applet plug-in.


Java Development Kit (JDK)


As the name suggests, it's a kit, and it contains all the software components used for compiling, documenting, and executing Java programs. It's basically a logical collection of the Java Programming Language, the JRE (JVM + Java APIs), and the Tools & Tool APIs, which include java, javac, javadoc, apt, jar, javap, deployment and monitoring tools (Java VisualVM), scripting support, etc.





Wednesday, July 9, 2008

Loading, Linking, & Initialization of Types in Java


Loading, Linking, & Initialization of Types in Java


The Java Virtual Machine makes Types (we'll discuss here the user-defined Types - classes and interfaces... built-in types also undergo a similar set of phases and are normally loaded as part of the JVM process start-up) available to a Java program under execution, and any Type undergoes the following phases during its life cycle:-

  • Loading - this is the process of locating the bytecodes (the .class file) of the corresponding Type and bringing them into JVM memory.
  • Linking - this is the process of incorporating the loaded bytecodes into the Java runtime system so that the loaded Type can be used by the JVM.
  • Initialization - this is the process of executing the static initializers of the loaded and linked Type.
  • Unloading - this is the process of reclaiming the memory occupied by a loaded, linked, and initialized Type. This happens when there are no references to that Type and hence the Type becomes eligible for garbage collection. It's important to note that garbage collection of objects is different from that of Types. An object of a Type is collected in a garbage collection cycle after it becomes unreachable, but the Type is unloaded only when all objects of that Type have already been garbage collected and the Type itself has no remaining references to it.
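A small sketch showing that Initialization (running the static initializers) is tied to the first active use of a Type; the class names are made up for illustration:

class LazyType {
    static {
        // static initializer - runs exactly once, during the Initialization phase
        System.out.println("LazyType initialized");
    }
    static int value = 42;   // deliberately not a compile-time constant
}

public class InitDemo {
    public static void main(String[] args) throws ClassNotFoundException {
        // loading without initialization: the "initialized" message is not printed yet
        Class.forName("LazyType", false, InitDemo.class.getClassLoader());
        System.out.println("LazyType loaded");

        // first active use (reading a static field) triggers initialization here
        System.out.println(LazyType.value);
    }
}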

Creation of java.lang.Class instance of a Type


An instance of the class java.lang.Class is created for every Type loaded into the underlying JVM, and this instance creation marks the end of the Loading phase of the Type. If an application is distributed across multiple JVMs, then one (or more) of its Types may need to be loaded multiple times (once per JVM), as a JVM can't use a Type without the corresponding java.lang.Class instance. While loading a Type, the bytecodes (of the .class file) are first read in as a binary stream and brought into JVM memory. Once the stream has been brought in successfully, its format is checked to verify that it conforms to the specification. This step helps make Java code more secure, as the runtime can simply raise an alarm if the loaded binary stream doesn't have a valid structure (whether tampered with accidentally or intentionally), and hence it can safely eliminate the possibility of running malicious or dangerous code on the machine. This is not possible in languages like C, where the compiled and linked executables (.exe on Windows, a.out on Linux) are executed directly by the underlying Operating System, as they are composed of native instructions.


But we know Verification is part of the Linking phase, so what does the verification done in the Loading phase mean? How are the two different? Well... verification is certainly part of the Linking phase (it's the first step of that phase), but implementations do perform checks of various kinds during the other stages as well. This is done to ensure the integrity of the running JVM. The specification says that a loaded Type should not cause the JVM process to hang, and if implementations removed all checks from the Loading phase then a malicious bytecode stream might cause the JVM to hang (or do something unpredictable) during the creation of the java.lang.Class instance for the loaded Type.


This java.lang.Class instance is actually the complete representation of the loaded Type, built using implementation-dependent internal data structures. It is what the various steps of the Linking phase (and the Initialization phase) subsequently work with; these two phases don't deal with the raw byte stream loaded during the Loading phase.


java.lang.LinkageError - what does it signify & when is it thrown?


The Java specification doesn't enforce any restriction on the actual timing of the Loading & Linking phases of a Type (nor on the exact timing of the Unloading phase, as that is decided by the garbage collector), but the Initialization phase is required to happen only when the Type gets its first active use. One thing is very obvious: these phases can only be performed in this order - how could a Type be linked before being loaded, and how could its initializers be executed without linking? Most implementations do the Loading (and often also the Linking) in anticipation of the usage of Types, and in such cases only Initialization is deferred until the first use of those Types.


Any problem encountered during any of the three phases is captured by an instance of the LinkageError class or an appropriate subclass of it. Most JVM implementations load Types much earlier and delay the Linking and Initialization phases till the first use of the Type; many others delay only the Initialization and perform both Loading and Linking well before the first use. Irrespective of whether any or all of the phases are completed just before the first use or earlier, any error encountered during any phase (captured by the appropriate subclass of LinkageError) must be thrown only when the Type's first active use is encountered, not before. If a Type is never actively used, the captured error is never thrown and the program proceeds normally.


Thus we see that the loading, linking, and initialization of a Type must give the impression that they happen only when the Type is actually used, even if the Loading and/or Linking phases have already been completed in the particular JVM implementation.


java.lang.LinkageError simply indicates that a loaded class has encountered some problem during the Loading, Linking, or Initialization phase. The name is a bit of a misnomer, as it doesn't indicate only a problem encountered during the Linking phase (this will become clear in the next paragraph).


One possible scenario where java.lang.LinkageError (or a suitable subclass of it) may occur: suppose we have two classes and one is dependent on the other. If, after compilation of the dependent class, the other class is changed and recompiled, then the dependent class may trigger a LinkageError while linking - such a situation actually throws an instance of the subclass named IncompatibleClassChangeError.


java.lang.LinkageError has seven direct subclasses (as of Java SE 6), each indicating a specific kind of error encountered while Loading, Linking, or Initializing a Type. These subclasses are:-

  • ClassCircularityError
  • ClassFormatError
  • ExceptionInInitializerError
  • IncompatibleClassChangeError
  • NoClassDefFoundError
  • UnsatisfiedLinkError
  • VerifyError

As you can easily notice, LinkageError doesn't represent only a problem encountered in the Linking phase; instead it has specific subclasses to capture the potential errors which may occur during any of the three phases - Loading, Linking, and Initialization.







Saturday, July 5, 2008

Synchronization of static and instance methods in Java


Synchronization of static methods/fields in Java

What's an Intrinsic lock OR a monitor lock in Java?

An intrinsic lock, a monitor lock, or a monitor - all three refer to the same internal entity associated with every object in Java, which enables the implementation of the synchronization mechanism. This lock is used to enforce exclusive and consistent access in Java, as a monitor can be acquired by only one thread at a time.

A thread needs to acquire the monitor of the appropriate object before entering a synchronized method/block, and it releases the monitor when it returns from the method (or completes the block). The monitor is released even if an uncaught exception is thrown from the method/block.

Synchronized methods vs Synchronized blocks

These are the two ways of achieving synchronized access in Java. In Java a method body is also just a block, but we normally use 'block' to refer to a part of a method definition. Since a thread needs to acquire the appropriate monitor (which it releases when the method returns) before entering any synchronized method/block, we can easily see that in the case of a synchronized method the thread may need to hold the lock for a longer period than in the case of a synchronized block.

Another difference is that we don't specify the particular object whose monitor must be obtained for entering a synchronized method, whereas we do specify the particular object in the case of a synchronized block.

static synchronized method vs instance synchronized method

When a thread needs to enter a static synchronized method, it acquires the monitor of the Class object associated with the particular class to which the static method belongs, whereas in the case of a synchronized instance method the thread needs to obtain the monitor of the particular object on which the method call is being made. Easy to understand the reason... right?

Synchronization of static fields in an instance synchronized method

As we just saw, a thread needs to acquire the monitor of the Class object of the class in the case of a synchronized static method, but how do you handle the synchronization of static fields accessed inside instance methods?

In Java, an instance method can access static fields as well (the reverse is not possible, for the obvious reason). Suppose a synchronized instance method containing code which accesses static fields is being invoked on different objects by different threads - which is quite possible, as each of those threads would have acquired the monitor of a different object. In such a case we can't guarantee exclusive access to those static fields. The reason is very simple: a static field doesn't belong to an instance but to the class, and hence access to it is not controlled by the monitors associated with instances of the class but by the monitor of the Class object associated with the class. Hence we need to be very careful in such scenarios, and any synchronized instance method should use an explicit synchronized block for accessing static fields if those fields require exclusive access. This synchronized block can specify the Class object associated with the class, so a thread which enters the synchronized instance method after acquiring the monitor of the particular instance will also need to acquire the monitor of the Class object before executing the code which accesses the static fields. This way we can guarantee exclusive access to static fields accessed inside a synchronized instance method.
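A minimal sketch of what such an instance method might look like (the class and field names are made up for illustration):

public class Counter {
    private static int sharedCount;          // static data shared by all instances

    public synchronized void incrementViaInstanceMethod() {
        // the instance monitor alone doesn't protect static data,
        // so also take the Class object's monitor before touching it
        synchronized (Counter.class) {
            sharedCount++;
        }
    }

    public static synchronized void incrementViaStaticMethod() {
        sharedCount++;                        // already guarded by Counter.class's monitor
    }
}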

Read next: Why synchronization of constructors is not allowed? Do you know Re-entrant Synchronization? You may like to read the article - What's Reentrant Synchronization in Java?







Synchronization of constructors not allowed in Java - why?


Synchronization of constructors not allowed in Java - why?


In Java, synchronization of constructors is not allowed (it results in a compile-time error), as only the thread constructing the object should have access to it while it's being constructed; other threads are not supposed to get access until the construction of the object is complete. So no explicit synchronization is needed for constructors.


Caution: even though other threads should be given access to an object only once it's fully constructed, if the constructor itself leaks the 'this' reference then an incomplete object can be accessed by other threads, which is obviously undesirable as the behaviour is undefined. This may happen if, for example, we maintain a Collection of instances of the class (say via a reference 'instances') and the constructor uses the statement 'instances.add(this);' to add the current instance to the Collection - a half-baked object may then be seen by other threads.
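A minimal sketch of this pitfall, assuming a shared, thread-safe list named 'instances' (all names here are illustrative):

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class LeakyConstructor {
    private static final List<LeakyConstructor> instances = new CopyOnWriteArrayList<LeakyConstructor>();

    private final int importantValue;

    public LeakyConstructor() {
        instances.add(this);       // BAD: 'this' escapes before construction is complete
        this.importantValue = 42;  // another thread iterating 'instances' may still observe 0 here
    }
}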


Read next - static method/field synchronization >>. The article tries to answer these questions: How does static method synchronization differ from instance method synchronization? How do we ensure exclusive access to static fields accessed in synchronized instance methods?







What is Reentrant Synchronization in Java?


What is Reentrant Synchronization in Java?

We know that a thread cannot acquire a monitor which is owned by another thread. A thread owns a monitor from the time it acquires it on entering a synchronized method/block until the time it releases it, which happens when the thread either returns from the method (or completes the block) or throws an uncaught exception.

But a thread is allowed to acquire a monitor it already owns. Confused? Why would a thread need to acquire a monitor which it already owns? It happens when synchronized code, directly or indirectly, invokes a synchronized method/block which requires the same monitor - for example, a recursive synchronized method. Allowing a thread to re-acquire a monitor it already owns is called Reentrant Synchronization, and without it it would be very difficult to ensure that a thread in Java doesn't block itself.
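A minimal sketch: the recursive call below re-enters the monitor the thread already holds, so it doesn't block itself (the class and method names are illustrative):

public class ReentrantDemo {
    public synchronized long factorial(int n) {
        if (n <= 1) {
            return 1;
        }
        // the current thread already owns this object's monitor,
        // so the recursive synchronized call simply re-enters it
        return n * factorial(n - 1);
    }
}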

Read Next - Synchronization of static fields/methods in Java >> - the article discusses how static methods and static fields are synchronized in Java and how we can ensure exclusive and consistent access to static fields in synchronized instance methods. To understand why constructors can't be synchronized and how a half-baked object can be exposed to other threads, read the article - Why constructors can't be synchronized in Java?





Puzzle: 2 crystal balls, 100 storey building, find no. of drops required?


Puzzle: There are two identical crystal balls and one needs to find the highest floor of a 100-storey building from which a ball can be dropped without breaking. Using the most efficient strategy, what is the maximum number of drops required to find the right floor in any possible scenario? The balls are allowed to be broken (if required) while finding the right floor.

Solution:
The maximum drops required will be 14 to find the right floor considering all possible scenarios.

The approach should be to drop the first crystal ball from some floor and, if it breaks, to try dropping the second crystal ball (until it breaks, of course) from each floor starting just above the previously tested floor and going up to one floor below the one from which the first crystal ball broke.

In the most efficient strategy, the worst-case number of trials should be the minimum possible. We have 100 floors, so we need to divide them in such a way that the sum of the trials consumed by the first crystal ball and by the second crystal ball (if the first breaks) remains minimal in the worst case. For this to happen we need to reduce the gap between the previously tested floor and the next floor to test the first ball from by 1 every time, because the first ball has already been dropped that many times. Confused? Let's try to understand it this way: suppose we drop the first ball from the Nth floor the first time. Then we'll require a maximum of N drops if the first ball breaks, as the second ball will be tried from floor #1 up to floor #(N-1). If the first ball doesn't break, we have to select the next floor to test it from, and that floor can't be more than N-1 floors above the previously tested one (then N-2 above that, N-3 above that, and so on for every subsequent step). The reason: if we test the first ball from a higher floor and it breaks, we may need to test the second ball more times and the maximum attempts may exceed N; similarly, if we always test the first ball from lower floors, we'll need more than N drops of the first ball if it never breaks.

Obviously this N depends on the maximum number of floors, which is 100 in our case. Putting N = 14, we test the first ball from floor #14, #(14 + 14 - 1 = 27), #(27 + 14 - 2 = 39), #(39 + 14 - 3 = 50), #(50 + 14 - 4 = 60), #(60 + 14 - 5 = 69), #(69 + 14 - 6 = 77), #(77 + 14 - 7 = 84), #(84 + 14 - 8 = 90), #(90 + 14 - 9 = 95), #(95 + 14 - 10 = 99), and finally from #100. The test continues until we reach #100 OR the first ball breaks, in which case the second crystal ball is used for the floors lying between the previously tested floor and the floor where the first ball broke.

There is in fact a simple way to find this number N: with N drops available and the decreasing-gap strategy above, the first ball can cover at most N + (N-1) + ... + 1 = N(N+1)/2 floors, so we need the smallest N with N(N+1)/2 >= 100. Since 13*14/2 = 91 < 100 and 14*15/2 = 105 >= 100, N = 14. The key point is that we consume a trial at every stage, which is why the gap between successive first-ball floors must keep shrinking by 1 rather than staying constant.

For N = 14, if the first ball doesn't break at all then we need at most 12 drops, and if it breaks the very first time we may require a maximum of 14 drops (1 by the first ball from floor #14 and at most 13 by the second ball from floors #1 ... #13). Similarly, if the first ball breaks at floor #27 the maximum attempts required are again 14 (2 by the first ball [from floor #14 and floor #27] plus at most 12 by the second [from floor #15 ... #26]), and so on. Thus, if the first ball breaks at any of the floors we need at most 14 attempts to figure out the correct floor, and if it never breaks we're done within 12 attempts.
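A small sketch that computes this minimum worst-case number of drops for any number of floors (a quick check of the reasoning above, not part of the original post):

public class CrystalBalls {

    // smallest N such that N + (N-1) + ... + 1 >= floors
    static int minWorstCaseDrops(int floors) {
        int n = 0;
        int covered = 0;
        while (covered < floors) {
            n++;
            covered += n;
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println(minWorstCaseDrops(100));   // prints 14
    }
}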





Puzzle: Deck of 52 cards, two players, who will win?


Puzzle: The rules of a card game (with a deck of 52 cards) played between two people are: they turn over two cards at a time. If both cards are black they go to the first player's pile, and if both are red they go to the second player's pile. If one is red and one is black, both cards are simply discarded.

The process of turning over two cards at a time is repeated till all 52 cards are exhausted. Whoever has more cards in their pile wins the game. In case of a tie the first player is declared the winner. What's the chance of the second player winning the game?

Solution: Zero... yeah, you read that right. The second player can never win the game. We can have only the following possibilities:-

  • If all the pairs are perfectly matched - 13 black pairs and 13 red pairs - it's a tie, and therefore the first player wins the game.
  • Else, if 12 pairs of each colour are matched: with 12 matched black pairs we'll also have 12 matched red pairs (if the remaining two pairs were matched as well, it would be the case above). Both mixed pairs are discarded, it's a tie again, and hence the first player wins in this case as well.
  • Else, if 11 pairs of each colour are matched: we'll have 4 mixed (and hence discarded) pairs and 11 matched pairs each of red and black... again a tie and the first player wins.
  • Else, if 10 pairs of each colour are matched... a tie and the first player wins.
  • Else, if 9 pairs of each colour are matched... a tie and the first player wins.
  • ... and so on.
Thus we see that irrespective of how mixed up the deck is, every discarded pair removes exactly one red and one black card, so the two piles always end up with the same number of cards - it'll always be a tie and hence the first player will always emerge as the winner.





Thursday, July 3, 2008

Thread.State in Java? BLOCKED vs WAITING


What is Thread.State in Java? What's it used for?

Thread.State - this is a static nested class (read more about nested classes in the article - Nested Classes & Inner Classes in Java >>) of the Thread class. It is one of the additions of Java 5, and it actually extends the abstract class Enum, which is the common base class of all Java language enumeration types - i.e., Thread.State is actually an enumeration type.

The Thread.State enumeration contains the possible states of a Java thread in the underlying JVM. These states are different from the Operating System thread states. The possible values of Thread.State are:-

  • NEW - this state represents a new thread which has not yet been started.
  • RUNNABLE - this state represents a thread which is executing in the underlying JVM. Executing in the JVM doesn't mean the thread is always executing in the OS as well - it may be waiting for a resource from the Operating System, such as the processor, while in this state.
  • BLOCKED - this state represents a thread which is blocked, waiting for a monitor in order to enter (or re-enter, after a wait) a synchronized block/method. Note that a thread waiting inside Object.wait is in the WAITING (or TIMED_WAITING) state, not BLOCKED; it moves to BLOCKED only when, after being notified, it competes to re-acquire the monitor.
  • WAITING - this state represents a thread in the waiting state; the wait is over only when some other thread performs an appropriate action. A thread gets into this state by calling Object.wait (without a timeout), Thread.join (without a timeout), or LockSupport.park.
  • TIMED_WAITING - this state represents a thread which is required to wait at most for a specified time limit. A thread gets into this state by calling one of these methods: Thread.sleep, Object.wait (with a timeout), Thread.join (with a timeout), LockSupport.parkNanos, or LockSupport.parkUntil.
  • TERMINATED - this state represents a thread which has completed its execution, either by returning from the run() method normally or by throwing an exception which propagated out of the run() method and hence caused the termination of the thread.
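A small sketch showing how these states can be observed at run time via Thread.getState():

public class StateDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(new Runnable() {
            public void run() {
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException ignored) {}
            }
        });

        System.out.println(t.getState());   // NEW
        t.start();
        Thread.sleep(100);                  // give the thread time to reach its sleep call
        System.out.println(t.getState());   // TIMED_WAITING
        t.join();
        System.out.println(t.getState());   // TERMINATED
    }
}
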
Difference between BLOCKED state and WAITING / TIMED_WAITING states?

When a thread calls the Object.wait method, it releases the monitor of that object (note: only the monitor of the object on which wait was called, not every monitor it holds) and is put into the WAITING (or TIMED_WAITING, if a timed version of wait is used) state. When the thread is later notified by a notify() or notifyAll() call on the same object, its waiting state ends and it starts attempting to re-acquire the monitor of that object. At that moment several threads may be trying to acquire (or re-acquire) the same monitor; only one thread (selected by the JVM scheduler) is granted the monitor and all the other threads are put into the BLOCKED state. Got the difference?

Difference between WAITING and TIMED_WAITING states?

The difference is quite obvious between the two. A thread in the TIMED_WAITING state waits at most for the specified timeout period, whereas a thread in the WAITING state keeps waiting for an indefinite period of time. For example, if a thread has called the Object.wait method (without a timeout) to put itself into the WAITING state, it keeps waiting until it is woken by a notify() (or notifyAll()) call on the same object by another thread, or until it is interrupted. Similarly, if a thread has put itself into the WAITING state by calling Thread.join (without a timeout), it keeps waiting until the joined thread terminates.

We can easily see that a thread in the WAITING state is always dependent on an action performed by some other thread, whereas a thread in TIMED_WAITING is not completely dependent on another thread's action, as its wait ends automatically once the timeout period elapses.





Ways of combining Web Services. Orchestration vs Choreography


Ways of combining Web Services. Orchestration vs Choreography

If you want to refresh your understanding of Web Services, you may read this article - Web Services - What, How, Why, & Shortcomings >>

As we know, Web Services are application components, each of which normally performs one discrete piece of the overall application's functionality. So we definitely need some way of combining these individual components to make the entire application work. There are two popular ways of combining Web Services:-

  • Orchestration - we have a central controller in this case, and the composed process can be executed directly
  • Choreography - we don't have any controller here, and the composition can't be executed directly
Orchestration

In this case we have a central controller process which controls and co-ordinates all the Web Services involved in the application. This central process can be a Web Service as well. The point to note is that the other Web Services don't really know that they are participating in a higher-level business process. How the participating Web Services will be called, what the control flow will be, what transformations will take place - all of this is known only to the central controller process. The other Web Services simply honor the requests whenever they are called. The diagram below makes the overall process easier to understand.



Since Orchestration provides a controlled environment, alternative scenarios can be used in case a fault occurs in the normal flow. For example, suppose we need to call a Web Service which may result in a fault; in such a case we may need to either call another Web Service or simply use a default value. In Orchestration this is very easy to achieve - maybe by just having a switch activity that transfers control either to the alternative Web Service or to computing the required default value.

Choreography

Here we don't have any central controller process, and hence all the participating Web Services themselves know when to execute their operations and whom to interact with. We can visualize choreography as a collaborative effort of many participating Web Services; since there is no controller, all the Web Services need to know the actual business process and everything involved in it, like the message exchanges, the timing of calls, etc. Find below a diagram depicting a typical Choreography process.


Orchestration vs Choreography

Easy to figure out, right? Orchestration has a central controller process, and the other participating Web Services don't know about the actual business process. They are called only by the controller process and don't know anything about the other Web Services involved in the application. Choreography, on the other hand, doesn't have any controller process/service; all the participating Web Services know the actual business process and are well aware of which Web Services they need to interact with, when to execute their operations, etc.

So, we can say that Orchestration is a controlled and co-ordinated way of utilizing the services of all the participating Web Services whereas Choreography is just a collaborative effort of utilizing the services of the participating Web Services.

Fault handling is easier in Orchestration, as the execution is controlled, which is not the case with Choreography. Web Services can also be easily and transparently replaced in Orchestration, as the involved Web Services don't know the actual business process, whereas this is difficult in Choreography.

Thus we see that Orchestration has quite a few advantages over Choreography, and hence Orchestration is normally preferred for business process implementations, but Choreography may certainly have its own usage in some selected scenarios.





Tuesday, July 1, 2008

Alternatives of stop, suspend, resume of a Thread


Did you go through the first part of the article, which discussed: Why is the Thread.stop() method deprecated in Java? Why is ThreadDeath a subclass of Error and not of Exception? Can't we catch ThreadDeath and fix the damaged objects? What will happen if the stop() method is called on a thread which is yet to start? ... You may like to go through the first part before proceeding with this one.

This part of the article tries to answer the following questions:-

  • What should we do instead of using stop() method?
  • How can we stop a long waiting (maybe due to I/O) thread?
  • What if a thread doesn't respond to Thread.interrupt() call?
  • Why are Thread.suspend & Thread.resume deprecated?
  • What should be done instead of using suspend & resume?
  • What's Thread.destroy()?
What should be done instead of using stop() method?

If stop() is deprecated and therefore not to be used, then how should a situation requiring the stop() method be handled? Good question... we can follow the approach recommended by Sun, which requires having a variable that is either volatile or accessed only under synchronization. The thread checks this variable regularly, and hence it can be stopped in an orderly fashion by setting a value which tells the thread that it should now stop gracefully.

For example:

Instead of having the stop() as

...
private volatile Thread theCurrentThread;   // volatile, as recommended above
...
public void run() {
    theCurrentThread = Thread.currentThread();
    ...
    ...
}

public void stop() {
    theCurrentThread.stop();
}

we can have a graceful and safe stop as

...
public void stop() {
    theCurrentThread = null;
}
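For this graceful stop() to take effect, the run() method has to poll the variable, as mentioned above. A minimal sketch of what that polling loop might look like (the loop body stands for whatever work the thread actually does):

public void run() {
    Thread thisThread = Thread.currentThread();
    theCurrentThread = thisThread;
    while (theCurrentThread == thisThread) {
        //... one unit of the thread's actual work per iteration
    }
}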

What'll happen if we use the above mentioned graceful stop alternative on a thread which is into an infinite loop? Read this article to understand that - Impact of assigning null to a running thread executing an infinite loop >>.

How can we stop a thread that waits for long periods maybe for I/O?

We can follow the same graceful-stop technique discussed above in this case as well; we just need to call the interrupt() method of the thread after assigning null to the thread reference. For example, we may have the stop method as:-

...
public void stop() {
    Thread t = theCurrentThread;
    theCurrentThread = null;
    t.interrupt();
}

Why do we need to have another reference to call interrupt() on? Why don't we call on the same thread reference 'theCurrentThread' in this case? Easy question, I know. But leaving for you to think ... just to ensure that you're still awake :-)

And yes, with this approach, if any method catches the InterruptedException triggered by this interrupt() call, then that method should either declare InterruptedException in its throws clause or re-interrupt the thread by calling the interrupt() method again; otherwise the purpose of the interrupt() call made from the stop() method will be defeated.

What if Thread.interrupt doesn't affect a thread?

It's bizarre, but if a thread stops responding to the Thread.interrupt() method then we need to use some application-specific tricks to overcome the situation. For example, if the thread is waiting on a Socket and stops responding to interrupt(), we may close the Socket, which will cause the thread to return immediately.

Be sure that if a thread doesn't respond to interrupt() method, it will not respond to the stop() method either. So, don't be tempted to use stop() in such situations.

Why are Thread.suspend and Thread.resume deprecated?

The suspend() and resume() methods are deprecated because they may cause a deadlock. The reason is that suspend() doesn't release the monitors the thread has acquired, so if a suspended thread already holds the monitor of a critical resource, then any attempt to acquire that same resource by the thread which is supposed to resume the suspended target thread will cause a deadlock.

What should be done instead of using suspend() and resume()?

We can follow the same approach we used for a graceful stop. We simply have a variable which signifies the desired state of the thread; the target thread keeps polling this variable, and if the value indicates a suspended state the thread waits by calling the Object.wait method, while a value indicating the resumed state causes the waiting thread to be notified via the Object.notify method.
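A minimal sketch of this suspend/resume replacement (field and method names are made up for illustration):

public class PausableTask implements Runnable {
    private boolean suspended;   // guarded by the monitor of 'this'

    public synchronized void requestSuspend() { suspended = true; }

    public synchronized void requestResume() {
        suspended = false;
        notify();                // wake the task up if it is waiting
    }

    public void run() {
        while (true) {
            synchronized (this) {
                while (suspended) {
                    try {
                        wait();  // releases this monitor until resumed
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }
            //... do one unit of the actual work here
        }
    }
}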

What does the method Thread.destroy do?

public void destroy() - this method was originally designed to destroy a thread without any clean-up, i.e., without releasing any of the acquired monitors, which could cause deadlocks. The method was never implemented, and a call to it always throws NoSuchMethodError.


