Saturday, January 24, 2009

java.lang.UnsupportedClassVersionError: why is it thrown & how to resolve?


java.lang.UnsupportedClassVersionError: when is it thrown by a JVM & how to resolve it? 

UnsupportedClassVersionError
is a descendant of the Error class in Java: it subclasses java.lang.ClassFormatError, which in turn subclasses java.lang.LinkageError, which is a direct subclass of java.lang.Error. This error is thrown when the JVM (Java Virtual Machine) attempts to read a class file and finds that the major and minor version numbers in that class file are not supported. This happens when a higher version of the Java compiler is used to generate the class file than the JVM version used to execute it. Please notice that the reverse is not a problem, which means you can comfortably use a higher version of the JVM to run a class file compiled with an earlier version of the Java compiler. Let's understand why.

Below is a valid Java program, followed by the error alert and the stack trace thrown in the situation explained above, which causes UnsupportedClassVersionError. FYI: The output (with explanation) of this sample program can be found here - an interesting try-catch-finally usage in Java >>


public class TestFinally {

      public static void main(String[] args) {
              
            try{
                  try{
                        throw new NullPointerException ("e1");
                  }
                  finally{
                        System.out.println("e2");
                  }
            }
            catch(Exception e){
                        e.printStackTrace();
            }
      }
}


The error alert and the stack trace if an older JVM is used to run the class file



java.lang.UnsupportedClassVersionError: TestFinally (Unsupported major.minor version 49.0)
      at java.lang.ClassLoader.defineClass0(Native Method)
      at java.lang.ClassLoader.defineClass(ClassLoader.java:539)
      at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:123)
      at java.net.URLClassLoader.defineClass(URLClassLoader.java:251)
      at java.net.URLClassLoader.access$100(URLClassLoader.java:55)
      at java.net.URLClassLoader$1.run(URLClassLoader.java:194)
      at java.security.AccessController.doPrivileged(Native Method)
      at java.net.URLClassLoader.findClass(URLClassLoader.java:187)
      at java.lang.ClassLoader.loadClass(ClassLoader.java:289)
      at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:274)
      at java.lang.ClassLoader.loadClass(ClassLoader.java:235)
      at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:302)


The major and minor version numbers are represented by the major_version and minor_version items respectively in the Java class file structure. Together these two unsigned integers (2 bytes each) determine the version of the class file format. If a class file has major version number M and minor version number m, we denote the version of its class file format as M.m.

As per the Java VM Spec, "A Java virtual machine implementation can support a class file format of version v if and only if v lies in some contiguous range Mi.0 ≤ v ≤ Mj.m. Only Sun can specify what range of versions a Java virtual machine implementation conforming to a certain release level of the Java platform may support." For example: implementations of version 1.2 of the Java 2 platform can support class file formats of versions in the range 45.0 through 46.0 inclusive.

Major version numbers of the class file formats of the popular Java versions are: J2SE 6.0 = 50, J2SE 5.0 = 49, JDK 1.4 = 48, JDK 1.3 = 47, JDK 1.2 = 46, JDK 1.1 = 45. Now you can see what "major.minor version 49.0" in our example signifies: it simply means J2SE 5.0 was used to compile the source code, as the class file format is 49.0, where major_version is '49' and minor_version is '0'.
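
If you are curious, you can read these two numbers straight out of a compiled class file; they sit right after the 4-byte magic number 0xCAFEBABE. Here is a minimal sketch (reading TestFinally.class from the current directory is just an assumption for illustration):


import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class ClassVersionReader {

      public static void main(String[] args) throws IOException {
            // assumes TestFinally.class is present in the current directory
            DataInputStream in = new DataInputStream(
                        new FileInputStream("TestFinally.class"));
            try {
                  int magic = in.readInt();            // should be 0xCAFEBABE
                  int minor = in.readUnsignedShort();  // minor_version
                  int major = in.readUnsignedShort();  // major_version
                  System.out.println("magic = " + Integer.toHexString(magic)
                              + ", class file format = " + major + "." + minor);
            }
            finally {
                  in.close();
            }
      }
}


Running it against a class compiled with J2SE 5.0 should print a class file format of 49.0, matching the figure in the error above.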

Errors are never detected at compile time, and that is quite easy to understand in this case: how can the compiler know, at compilation time, which version of the JVM will be used to execute the compiled class file? It can't, right? The same holds for other errors as well, which is why all Errors are unchecked. The error alert also rightly shows that it's a JVM Launcher error.

How to resolve UnsupportedClassVersionError?

Whenever you encounter this error, check whether you are using an earlier version of the JVM to execute the class file than the version of the compiler used to compile the source code. The example shown here was compiled with the Java 5.0 compiler, but when I tried to run it on a 1.4 JVM, I got the above error. I just needed to switch either to a JVM of version 5.0 or above, OR to a compiler of JDK 1.4 or below (you of course need to make sure the source code is compatible with the corresponding version of the compiler, otherwise you'll start getting other compiler errors).

A higher JVM version doesn't cause a problem in most cases, unless the class file format is quite old (and hence doesn't lie in the supported range as specified by Sun for that particular JVM version ... as discussed above). But it's always a good practice to have both the compiler and the JVM of the same version.
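
A quick way to see what your runtime actually supports is to print the standard system properties java.version and java.class.version (the latter reports the class file format version of the running JVM); a minimal sketch:


public class JvmVersionCheck {

      public static void main(String[] args) {
            // java.version        -> e.g. "1.4.2_19" or "1.5.0_16"
            // java.class.version  -> supported class file format, e.g. "48.0" or "49.0"
            System.out.println("JVM version          : " + System.getProperty("java.version"));
            System.out.println("Class file format    : " + System.getProperty("java.class.version"));
      }
}


If the number printed here is lower than the major.minor version reported in the error, you know the mismatch is on the JVM side.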






Friday, January 16, 2009

The 25 Most Dangerous Programming Errors


25 Most Dangerous Programming Errors listed by CWE/SANS

The US National Security Agency has helped build a list of the world's 25 most dangerous programming errors, errors which may lead to security vulnerabilities that can be exploited by cyber criminals. This list of 25 errors is now regarded as the minimum set of coding errors that must be fixed completely before letting code go live.

The SANS Institute (based in Maryland) believes that just two of these 25 errors led to more than 1.5 million web-site security breaches during 2008. One can easily imagine how large the total number of security breaches caused by all 25 of these dreaded errors might be. This is the reason why the industry felt a need for a common consolidated list of probable errors which programmers should treat as the minimum set of errors they need to be extremely careful about. It's believed to be the first time the industry has reached such an agreement. More than 30 organisations (including various government, corporate, and educational institutions such as the US National Security Agency, the US Department of Energy, Microsoft, Symantec, McAfee, Purdue, etc.) participated in this collaborative effort. CWE (Common Weakness Enumeration) and the SANS Institute together issued the list on January 12, 2009.

The 25 most dreaded programming errors have been listed under three categories - Insecure Interaction Between Components, Risky Resource Management, and Porous Defenses. The first two categories contain 9 errors each whereas the third category contains 7 errors. The official list of The Top 25 Most Dangerous Programming Errors, listed category-wise, is:-

Insecure Interaction Between Components
  • CWE-20: Improper Input Validation
  • CWE-116: Improper Encoding or Escaping of Output
  • CWE-89: Failure to Preserve SQL Query Structure
  • CWE-79: Failure to Preserve Web Page Structure
  • CWE-78: Failure to Preserve OS Command Structure
  • CWE-319: Cleartext Transmission of Sensitive Information
  • CWE-352: Cross-Site Request Forgery
  • CWE-362: Race Condition
  • CWE-209: Error Message Information Leak
Risky Resource Management
  • CWE-119: Failure to Constrain Operations within the Bounds of a Memory Buffer
  • CWE-642: External Control of Critical State Data
  • CWE-73: External Control of File Name or Path
  • CWE-426: Untrusted Search Path
  • CWE-94: Failure to Control Generation of Code
  • CWE-494: Download of Code Without Integrity Check
  • CWE-404: Improper Resource Shutdown or Release
  • CWE-665: Improper Initialization
  • CWE-682: Incorrect Calculation
Porous Defenses
  • CWE-285: Improper Access Control
  • CWE-327: Use of a Broken or Risky Cryptographic Algorithm
  • CWE-259: Hard-Coded Password
  • CWE-732: Insecure Permission Assignment for Critical Resource
  • CWE-330: Use of Insufficiently Random Values
  • CWE-250: Execution with Unnecessary Privileges
  • CWE-602: Client-Side Enforcement of Server-Side Security
The errors may look quite trivial, but it's not so easy to keep them from creeping into an application of considerable size, especially when a large application is being developed by one or more sizeable teams; it therefore requires a well-defined process of code review (both peer and self), checklist verification, etc. to be in place throughout the development phase. Should you require an explanation of what the above listed errors mean (which you probably won't for most of them), explore this link - CWE/SANS Top 25 errors list. A BBC News report on this list can be found here.
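
To make one of these concrete, here is a small, hedged Java sketch of CWE-89 (Failure to Preserve SQL Query Structure, i.e. SQL injection). The table and column names are just assumptions for illustration, but the pattern of replacing string concatenation with a PreparedStatement is the standard JDBC remedy:


import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class SqlQueryStructure {

      // Vulnerable (CWE-89): user input becomes part of the SQL text itself,
      // so an input like  ' OR '1'='1  changes the structure of the query.
      public static ResultSet findUserUnsafe(Connection con, String name) throws SQLException {
            String sql = "SELECT * FROM users WHERE name = '" + name + "'";
            return con.createStatement().executeQuery(sql);
      }

      // Safer: the query structure is fixed, and the input is bound as a value.
      public static ResultSet findUserSafe(Connection con, String name) throws SQLException {
            PreparedStatement ps = con.prepareStatement("SELECT * FROM users WHERE name = ?");
            ps.setString(1, name);
            return ps.executeQuery();
      }
}


The same "preserve the structure, don't patch the symptom" thinking applies to several other entries in the list, such as CWE-79 and CWE-78.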






Thursday, January 8, 2009

finally contd.: an interesting try-catch-finally scenario


If you are not very comfortable with the finally block and its execution, you may like to go through this article first - playing with try-catch-finally in Java >>. It covers several interesting scenarios involving try, catch, and finally blocks, followed by a precise explanation of their execution in each case. Come back to this article once you finish that, as this article is more of a logical extension of it. If you are already comfortable with try-catch-finally, you may proceed straightaway with this one.

One of our visitors asked about another interesting scenario involving nested try blocks in response to the above mentioned article. I have created two possible cases based on that scenario and have tried to explain the execution of each of them below:

Case #1: nested 'try' with a 'finally' block associated with the inner 'try' and a 'catch' with the outer


public class TestFinally {

      public static void main(String[] args) {

            try{
                  try{
                        throw new NullPointerException ("e1");
                  }
                  finally{
                        System.out.println("e2");
                  }
            }
            catch(Exception e){
                  e.printStackTrace();
            }
      }
}


Output: What do you expect as the output here? As we all know, 'finally' will always get executed irrespective of whether 'try' finishes its execution with or without an exception. In our case the inner 'try' throws an exception which has no matching 'catch' associated with the corresponding (inner) 'try' block. As per the exception mechanism in Java, the uncaught exception will be propagated until it's either handled by some matching 'catch' block on its way back or handled by the default JVM handler, which simply terminates the program after printing a stack trace.

But, before the control leaves the inner 'try' block, the entire inner block must complete its execution, and hence the associated 'finally' block will get executed before the control leaves the inner block, for the simple reason that 'finally' has to be executed in any case and the control won't come back to the inner block once it has left it.

The 'catch' block associated with the outer 'try' block expects an exception of type 'Exception', and hence the 'NullPointerException' (a subclass of the Exception class) thrown from the inner 'try' block (and propagated outward) can be handled by this 'catch' handler, so this 'catch' block will be executed. Do I still need to tell you the output of this code snippet? Here it is for your reference:

e2

java.lang.NullPointerException: e1

at TestFinally.main(TestFinally.java:8)


Case #2: nested 'try' with 'catch' block associated with the inner 'try' and 'finally' with the outer

public class TestFinally2 {

      public static void main(String[] args) {

            try{
                  try{
                        throw new NullPointerException ("e1");
                  }
                  catch(Exception e){
                        e.printStackTrace();
                  }
            }
            finally{
                  System.out.println("e2");
            }
      }
}


Output: This one is pretty straightforward, I think. Since an exception is thrown from the inner 'try' block and we have an associated matching 'catch' handler for it, obviously it will be handled there. The control then leaves the inner block, and hence the outer 'finally' gets executed, as a 'finally' block has to be executed in any case. All right? Please find below the output for your reference:

java.lang.NullPointerException: e1

at TestFinally2.main(TestFinally2.java:8)

e2
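
One more variation you may like to try on your own (this is my own addition, not part of the visitor's original scenario): if the inner 'finally' block itself throws an exception (or executes a 'return'), that replaces the pending exception, so the original "e1" is silently lost. A minimal sketch:


public class TestFinally3 {

      public static void main(String[] args) {

            try{
                  try{
                        throw new NullPointerException ("e1");
                  }
                  finally{
                        System.out.println("e2");
                        // this exception replaces the pending NullPointerException
                        throw new IllegalStateException ("e3");
                  }
            }
            catch(Exception e){
                  // prints java.lang.IllegalStateException: e3 -- "e1" is never seen
                  e.printStackTrace();
            }
      }
}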







Friday, January 2, 2009

XML Schema and its anatomy, Uses of XSDs, Namespaces


XML Schema and its anatomy, Applications of XML Schemas, Namespaces

XML Schema was approved as an official recommendation by the World Wide Web Consortium (W3C) in 2001. W3C defines it as follows: "XML Schemas express shared vocabularies and allow machines to carry out rules made by people. They provide a means for defining the structure, content, and semantics of XML documents."

As is clear from the definition, XML Schema is a mechanism for defining the structure, content and semantics of XML Documents which conform to the Schema under consideration. But DTD (Document Type Definition) also serves a similar purpose, so why was a need for something different felt?

Well... the short answer is that XML Schema evolved from DTDs in order to get a more powerful and flexible mechanism than what DTDs supported. XML Schemas are XML documents themselves, and hence all the benefits of XML - parsing, programmatic access, validation, and extensibility - automatically apply to XML Schemas as well. All these benefits make Schemas a far better alternative to DTDs.

Do we still need DTDs?


Yeah, we need them, but probably only for the legacy applications where they have been in use for a long time. Almost all newer applications now use Schemas, for the simple reason that XML Schemas are much more powerful & flexible and therefore offer many advantages over DTDs. However, DTDs are still supported and they can be used in tandem with XML Schemas as well.

Anatomy of XML Schemas


An XML Schema is made up of the following declarations and definitions. Some of these may be optional as well.

  • Document Type Declaration: Since an XML Schema is itself an XML Document which conforms to the W3C XML recommendations, it may contain a document type declaration, but this is not a mandatory requirement. If present, it is inferred from the root element named <Schema>
  • Namespace Declaration: Again an optional declaration, which is used to provide a context for element names and attribute names used within an XML Document (which conforms to this Schema). This helps avoid ambiguity in name resolution and thereby helps in building and extending XML Documents, using URIs to ensure unique names for elements and attributes. Namespaces can either be declared inline (inline XML namespaces, declared alongside the elements and attributes in the XML) or used as expanded names, where the namespace name is combined with the local name to uniquely identify a particular name. The namespace name and the local name are separated by a colon (e.g., NamespaceName:LocalName).
  • Type Definitions: These definitions are used to define Simple or Complex data types or structures, which are later re-used by all the client content models.
  • Element/Attribute Declarations: This section of an XML Schema defines the elements and their respective attributes which are used for tags in the XML instances using the Schema. Various constraints like id, type, max/minOccurs, substitutionGroup, etc. can also be defined in this section of an XML Schema.
  • Sequence Definition: As the name might suggest this section is used to enforce the order in which the child elements of the XML instances (using the XML Schema in consideration) are required to appear.

XML Schema applications/uses

  • Data Validation: XML Schema is used to define the structure of the elements and of the attributes within the elements, and this definition helps XML parsers validate the XML document syntax, the datatypes used by the XML Document/Instance, and/or the inclusion of mandatory elements or attributes. This obviously helps application designers delegate some of the basic data validation (and data sufficiency) tasks to the XML Schema rather than doing all of them programmatically (see the sketch after this list).
  • Content Model Definition: XML Schema supports both simple and complex data type definitions, which ultimately provides the flexibility of applying concepts like inheritance to the data syntax. This consequently helps in building XML Schemas that define extensible models, highly suitable for large and complex applications.
  • Data Exchange/Integration: Since XML Schemas are themselves XML documents, they can be parsed and accessed just like other XML instances by a variety of XML companion tools for various purposes. For example, an XSD used in conjunction with an appropriate XSLT and an XML-enabled database can allow changes made to the global elements defined by the XSD to be processed consistently. In addition, the output can simultaneously be produced in various formats like PDF, Doc, RTF, HTML, etc. using a single-source publishing methodology. The data-oriented datatypes provided in XML Schema 1.1, in addition to the document-oriented datatypes supported in the previous versions, facilitate complex document exchange and data integration scenarios. Namespaces supported by XML Schema can be used to have more than one vocabulary at a time in an XML instance, as namespaces enable XML documents to contain unique identifiers. Namespaces provide ample opportunities for data exchange and integration by enabling entire XML frameworks to co-exist within the same architecture. This feature is extremely helpful in mergers and acquisitions, and in supply chain requirements, where we generally have a plethora of heterogeneous data constructs.
  • Industry XML Standards: These standards aim to streamline and provide a basis for industry-wide data integration by implementing common XML vocabularies which enable business partners to seamlessly exchange data across different systems and architectures. Several such industry standards are being widely followed and are paving the way for seamless data integration and inter-operability. Some of these standards are DITA, DocBook, SCORM, ACORD, CXML, FIXML, and XBRL.
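
Here is a minimal, hedged Java sketch of the data validation point above, using the standard javax.xml.validation API available since Java 5; the file names order.xsd and order.xml are just assumptions for illustration:


import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import org.xml.sax.SAXException;

public class SchemaValidation {

      public static void main(String[] args) throws Exception {
            // load the XML Schema (assumed to be in order.xsd)
            SchemaFactory factory =
                  SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(new File("order.xsd"));

            // validate an XML instance (assumed to be in order.xml) against it
            Validator validator = schema.newValidator();
            try {
                  validator.validate(new StreamSource(new File("order.xml")));
                  System.out.println("order.xml is valid against order.xsd");
            }
            catch (SAXException e) {
                  System.out.println("Validation failed: " + e.getMessage());
            }
      }
}


This delegates the structural and datatype checks to the parser, exactly as described in the Data Validation point above.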



