How are www, www1, www2, etc. different? Role of CNAME & DNS?
Let's start by discussing what the different parts of a typical web site address mean. Say, for example, we take http://www.google.com: the first part from the left, 'http', denotes the protocol being used for communication over the Internet. Two of the most widely used protocols are HTTP and FTP. Requests using different protocols connect to a separate (default, unless explicitly specified otherwise) port on the targeted server. For example, an HTTP request connects to the server via TCP, by default at port number 80. An 'http' request is also called a Web request.
Now that we have a rough understanding of what the first part of a typical URL means, let's move on to the next parts. Since we're discussing only web site addresses in this article, the first part will always be either 'http' or 'https', the latter being nothing but HTTP running over SSL (Secure Sockets Layer). Following this comes a set of 3 characters (://), usually termed delimiters, which separate the protocol from the web site address.
The web site address in our example is www.google.com. Like every other web site address, this is of the form CNAME.secondaryName.TopLevelDomain, where TopLevelDomain is something like '.com', '.org', etc., and these are resolved by the top-level domain name servers, called the root servers. These root servers are maintained by naming authorities like ICANN. A TopLevelDomain is normally only one word long, but it can be made of two (or maybe more) words separated by '.' as well; for example, 'co.in', 'co.uk', etc. are also treated as top-level domain names.
To the left of the top-level domain name comes the secondaryName, which is normally the unique name you choose for your website. In our example this name is 'google'. These names are resolved using the lower-level domain name servers (DNS). Again, it's not necessary that the secondaryName will always be one word long. It can be made up of several words, maybe to maintain a hierarchy for the purpose of segregating (and then serving) requests of different types based on country, sub-domains, etc.
The leftmost part of a web site address is normally 'www', which is nothing but the name of the web server of the company hosting and running the web site. As it's just a name, it can be anything, like 'aaa', 'aa', 'aaaa', 'www1', 'www2', or any other string unique in the context of the company-specific DNS, because these names are not resolved by the root or intermediate-level DNS; instead it's the company-specific DNS which resolves them. Resolving a name simply means returning the IP address of the computer so that a combination of IP Address and a port (normally a default port) can be used by a client to establish a TCP connection.
This leftmost part of the web site address, which is normally resolved by the company-specific DNS, is usually called the CNAME or Canonical Name. In the case of our example, the CNAME is 'www'. Like every other part, this one is not required to be only one word long either. It all depends on how much of the string is mapped to the actual IP address of the server in the company-specific DNS mapping table. Needless to reiterate, this can be any string (unique in the particular DNS context) you want, and not only 'www'.
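As a quick illustration of what "resolving a name to an IP address" looks like from a client's point of view, here is a minimal Java sketch using the standard InetAddress API. The host names are just examples; whether 'www1' or 'www2' resolves for a given company depends entirely on that company's DNS entries:

import java.net.InetAddress;
import java.net.UnknownHostException;

public class ResolveHost {
    public static void main(String[] args) {
        // Each of these names is resolved through DNS to an IP address.
        // 'www', 'www1', 'www2', etc. are just host names; they may or
        // may not exist depending on the company's DNS configuration.
        String[] hosts = {"www.google.com", "www1.example.com"};
        for (String host : hosts) {
            try {
                InetAddress address = InetAddress.getByName(host);
                System.out.println(host + " -> " + address.getHostAddress());
            } catch (UnknownHostException e) {
                System.out.println(host + " could not be resolved");
            }
        }
    }
}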
Let's take another example. The address of this blog is http://geekexplains.blogspot.com, so for this address 'com' is the top-level domain, 'blogspot' is the secondaryName, and 'geekexplains' is the canonical name which identifies this blog uniquely within the context of the secondary domain named 'blogspot'. The same physical server is almost certainly used for many other blogs running under 'blogspot'. This is typically achieved not by separate ports but by name-based virtual hosting: the server inspects the requested host name and routes the request to the process (or handler) serving that particular blog.
Ever wondered why the address 'http://www.geekexplains.blogspot.com' also takes you to the same blog? Well... because the two names 'geekexplains' and 'www.geekexplains' are probably mapped to the same destination, which services the GeekExplains blog. The DNS at the secondary domain requires that all the canonical names are unique in its context, which is why you're asked to pick a unique name for your blog when you create one. Another interesting point to note here is that the client browser establishes the TCP connection on the default port number 80 only, as it connects to the server hosting 'blogspot.com', and the request is further resolved there to return the HTTP response of the actual blog to the client. The 'blogspot' DNS and server configuration together make sure that requests to any of the blogs running under it are intercepted and subsequently serviced as expected. Hope this helps. Any doubts? Or, have anything to add/modify here? Feel free to reach me by dropping a comment.
Monday, December 22, 2008
How does Password Encryption work in real world?
Password encryption in work: illustration using SBI Sign In process
Last week I received an email (from one of our visitors, Anil) inquiring about what actually takes place to ensure that the password (or any other sensitive data, for that matter) gets encrypted before the request is sent to the Web/App Server. Thanks, Anil, for raising such a nice point.
In this article, I'll try to discuss how the password encryption feature typically works. A few details might be implementation-dependent and hence might be a little different in your case than what is mentioned below (and in the follow-up article), but the underlying idea will probably remain the same (more or less) for most real-world applications which require authentication.
Okay, let's start by thinking about the places where we actually need to put encryption into action and how we might implement it. Apart from any encryption done at the Database end, there are two popular approaches to implementing encryption: one done at the client side (the one we will mainly talk about in this article), and one done at the server side (i.e., the request carries the actual password and it is encrypted at the server to be processed further).
The former of the two is obviously safer, as it reduces the risk of the request being intercepted in the middle before it actually reaches the web/app server. Well... you can say that the data packaged in an HTTP POST request is automatically encrypted in the case of HTTPS, but an extra level of encryption will only add to the security of the web application. Of course, the implementation should not be too time-consuming, otherwise the benefits of having a more secure application will be outweighed by the frustration it might cause to its end-users.
Though it depends upon the actual implementation, the preferred choice (in highly secure systems) is that the actual password should not be exposed anywhere in the system. This means the encrypted password stored in the DB is fetched and probably not decrypted back to the actual password which the end-user uses, but instead to some other form, which is matched against the decrypted one at the middle tier to authenticate the user. Find below a pictorial representation of how such a password authentication scenario works:
The entered password is first encrypted at the client side using the Public Key ('public key1' in the above diagram), and the encrypted password then reaches the App Server, where it's decrypted using a corresponding Private Key ('private key1' in the above diagram). The App Server also fetches the password stored in the database, which might need to be decrypted using another Private Key ('private key2' in the above diagram). Now, the implementation of the algorithms and the generation of the keys should be such that both the decrypted values 'decryptedpwd1' and 'decryptedpwd2' match for all the valid cases and are unequal otherwise.
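To make the first leg of the diagram concrete, here is a minimal Java sketch of the public-key/private-key round trip using the standard java.security and javax.crypto APIs. This only illustrates the general pattern, not the scheme of any particular application; key sizes, padding, and key management would all need far more care in a real system:

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;

public class PasswordEncryptionSketch {
    public static void main(String[] args) throws Exception {
        // Generate an RSA key pair; the public key would be shipped to
        // the client, the private key kept on the server.
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
        generator.initialize(2048);
        KeyPair keyPair = generator.generateKeyPair();

        // Client side: encrypt the entered password with the public key.
        Cipher encryptCipher = Cipher.getInstance("RSA");
        encryptCipher.init(Cipher.ENCRYPT_MODE, keyPair.getPublic());
        byte[] encrypted = encryptCipher.doFinal("secretPwd".getBytes("UTF-8"));

        // Server side: decrypt with the corresponding private key.
        Cipher decryptCipher = Cipher.getInstance("RSA");
        decryptCipher.init(Cipher.DECRYPT_MODE, keyPair.getPrivate());
        byte[] decrypted = decryptCipher.doFinal(encrypted);
        System.out.println(new String(decrypted, "UTF-8")); // prints secretPwd
    }
}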
How can the encryption be implemented? Will JavaScript/VBScript suffice?
The next question arises: how can we do it effectively and, at the same time, without consuming much time? The fastest possible way would probably be to have some mechanism in place so that the encryption can take place at the client side only. Okay, so how can the encryption take place at the client side? If we put both the encryption algorithm's definition and the public key in the JavaScript (or VBScript) code, then one can easily see everything just by viewing the page source. Did you think that making the JavaScript external can solve the problem, in that you then only declare the JS file and don't list down its contents? Well... if you did think this, you've got to think again. An external JS file is equally exposed for viewing by the clients, as you can simply type its path (if the path is absolute, or just append the relative path to the root URL) in the browser window and the complete JS will be there for you to view. We'll see examples below. Evidently, the encryption won't really carry any benefit here.
How else can we do it at the client side? How good would Applets be?
Yeah... now you've got a better way of handling the situation. As we all know, applets are also downloaded to the client machine and executed on the client machine itself, so we can now make use of the Java programming language and its security mechanism to implement the encryption in a far better manner. What's probably even more appealing about this approach is that you can NOT see the source of the applet class(es) directly. They are normally shipped in JAR bundles, which gives an extra layer of security. You could claim that since the JARs are downloaded and the .class file of the applet class runs within the client JVM, the bytecodes would certainly be available and could then be de-compiled to have a look at the source. Yeah, you're right that the bytecodes are available at the client machine and can be decompiled. But what makes this approach better than the JavaScript approach can be understood from the following points:
- Bytecode tampering is automatically detected: if an intruder somehow gets hold of the bytecodes and changes them, the changed bytecode will throw an exception while running, whereas any such changes in the JavaScript (or VBScript) source won't be detected.
- Security Mechanism of Java is much more versatile: what we talked about in the above point is an integral part of the Java Security mechanism, but the mechanism is not limited to this alone; it's quite versatile and layered. No such benefit is available with JavaScript (or VBScript).
- JVM offers a far more consistent environment: bytecodes run within a Java Virtual Machine, which obviously offers a far more consistent and stable environment compared to the environment in which JavaScript (or VBScript) code runs.
- Impl of different public/private keys for every new request: you are probably aware of the Secure Key concept which forms a part of the password on many systems. The underlying idea in such implementations is to have a part of the password keep changing on a continuous basis, thus making it virtually impossible for attackers to guess it. Similarly, if we want to step up the encryption strength to an even higher level, we can put in place a different public/private key combination for every new request (see the sketch right after this list).
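A minimal sketch of what "a fresh key pair per request" could look like on the server, again using only the standard java.security API. The class and method names here are purely illustrative; a real implementation would also have to protect, expire, and audit the issued keys:

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class PerRequestKeyService {
    // Maps a key id (sent to the client along with the public key)
    // to the key pair generated for that request.
    private final Map<String, KeyPair> issuedKeys = new ConcurrentHashMap<String, KeyPair>();

    // Called when the login page is served: generate a fresh pair,
    // remember it, and hand the public key plus its id to the client.
    public String issueKeyPair() throws Exception {
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
        generator.initialize(2048);
        KeyPair pair = generator.generateKeyPair();
        String keyId = UUID.randomUUID().toString();
        issuedKeys.put(keyId, pair);
        return keyId; // the public key itself would be embedded in the page
    }

    // Called when the login request comes back: look up the private key
    // by the id the client echoed back (e.g. in a hidden field), then
    // discard it so the pair is never reused.
    public KeyPair redeem(String keyId) {
        return issuedKeys.remove(keyId);
    }
}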
Now that we have understood the underlying concept of password encryption, let's move on and see a pictorial representation of how password encryption has been implemented in a real-world live scenario. We'll take the example of SBI Net Banking and try to understand how the user-entered password gets encrypted there - Diagrammatic representation of Password Encryption >>
Sunday, December 21, 2008
Pictorial rep of Client-side password encryption
This article is part #2 of the article 'How does Password Encryption work in real world?'. If you have landed directly on this article then you would probably like to go through the first part - Complete working of Password Encryption >>
Diagrammatic rep of how password encryption works for SBI Net Banking
#1: typing the Login URL of Online SBI will get you a web page which has a JS function named encrypt() and an applet named encryptApplet. Find below the code-snippet as obtained from the Page Source of the Login page:
#2: Once you enter the password and click the 'Login' button, the entered password first goes through the basic checks (minimum and maximum length) and, if it passes those, it is encrypted by the applet before being sent to the web/app server. Notice that the public key id travels to the server in a hidden field, where it is used for identifying the corresponding private key id for decrypting the password. This lets the web app use a different public/private key combo for every new request. Find below the relevant code-snippets doing these tasks:
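Purely as a hypothetical illustration of this pattern (not SBI's actual code, and with every name invented for the example), the applet side of such a page could look roughly like this; page JavaScript would call the public encrypt method and replace the clear-text field value with the returned ciphertext before submitting the form:

import java.applet.Applet;
import java.util.Base64;

public class EncryptApplet extends Applet {
    // Hypothetical entry point that page JavaScript could invoke, e.g.
    // document.encryptApplet.encrypt(pwdField.value, keyIdField.value).
    public String encrypt(String password, String publicKeyId) {
        byte[] cipherText = doRsaEncrypt(password, publicKeyId);
        return Base64.getEncoder().encodeToString(cipherText);
    }

    private byte[] doRsaEncrypt(String password, String publicKeyId) {
        // Real code would load the public key identified by publicKeyId
        // and run an RSA encryption, as sketched in the previous article.
        return password.getBytes(); // placeholder only, NOT encryption
    }
}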
#3: see the change in password length before and after pressing the 'Login' button, which shows that the encryption takes place before the request is sent to the server. Notice that the password is encrypted when the 'Login' button is clicked (it turns grey when clicked). Clicking the button first performs the basic validations, then the password encryption, and finally submits the request to the web/app server.
#4: see below a snapshot showing how an external JavaScript file looks when opened in the browser versus how an applet JAR file looks when opened in the browser. Evidently the JavaScript code is easily readable as it's plain text, whereas the downloaded JAR/bytecodes will mostly show special characters, and you've got to try hard to get hold of the source, if that's possible at all:
Note: On the face of it (going through the HTML source of the Login page), this is how the password encryption process seems to work for SBI, but this is as per my understanding and of course I can't claim to know the actual process. Anyway, the intention here was just to discuss a typical client-side encryption process.
Thursday, December 18, 2008
Constructor Injection: how it works? Impl using Pico
Implementation of Constructor Injection using Pico. How CI works?
Constructor Injection is mainly used by a highly embeddable, lightweight, and full-service IoC Container named PicoContainer. Should you need to refresh your understanding of DI and its types, go through this article - Dependency Injection & its types >>
Let's see how the Constructor Injection mechanism works. We will take one example scenario and implement it using PicoContainer. First we'll see the source code, and subsequently we'll discuss how the various pieces of the code fit together and how they actually work.
Example Scenario: Suppose we have a requirement to display bank account details which might come from various sources. The requirement is fairly simple, and to implement it we can easily think of having two interfaces: one which would take care of the presentation of the account details, and another which would capture the details. Say, for example, we have named these interfaces 'DisplayAccount' and 'BankAccount'.
interface DisplayAccount{
void displayAccountDetails(); /* the BankAccount dependency arrives via the constructor */
}
interface BankAccount{
Map getAccountDetails(); /* the account number arrives via the constructor */
}
Source Code: How to code Constructor Injection using PicoContainer?
Since the injection is handled by the constructor, the constructor should declare all the objects that the component depends on.
class DisplayAccountImpl implements DisplayAccount{
private BankAccount bankAccount;
...
public DisplayAccountImpl(BankAccount bankAccount){
this.bankAccount = bankAccount;
}
public void displayAccountDetails(){
/*... display the details as required ...*/
Map acDetails = bankAccount.getAccountDetails();
...
}
...
}
class BankAccountImpl implements BankAccount{
private long accountNumber;
private Map acDetails;
...
...
public BankAccountImpl(long accountNumber){
this.accountNumber = accountNumber;
}
public Map getAccountDetails(){
Map acDetails = new HashMap();
/*... get the details maybe from a DB...*/
/*... form a Map object out of the details ...*/
...
...
...
return acDetails;
}
...
}
Having these two classes well in place, we can now use PicoContainer (or any other IoC Container supporting Constructor Injection such as Spring) to implement the Constructor Injection.
class TestConstructorInjectionWithPico{
private static MutablePicoContainer getConfiguredContainer() {
/*... Step #1: get a default Container instance ...*/
MutablePicoContainer picoContainer = new DefaultPicoContainer();
/*... Step #2: populate the Container parameters ...*/
/*... we need to make the parameter value(s) available in this scope ...*/
ConstantParameter constantParameter1 = new ConstantParameter(accountNumber);
...
Parameter[] parametersForBankAccountImpl = {constantParameter1, ...};
/*... Step #3: registering the components with the Container ...*/
picoContainer.registerComponentImplementation(BankAccount.class, BankAccountImpl.class, parametersForBankAccountImpl);
picoContainer.registerComponentImplementation(DisplayAccountImpl.class);
/*... Step #4: return the configured Container instance ...*/
return picoContainer;
}
...
public static void main(String[] args){
/*... get an instance of configured Container ...*/
MutablePicoContainer picoContainer = getConfiguredContainer();
/*... get the components having the injected dependencies ...*/
DisplayAccountImpl displayAccountImpl = (DisplayAccountImpl) picoContainer.getComponentInstance(DisplayAccountImpl.class);
/*... use ready-to-be used components: using for display ac details ...*/
displayAccountImpl.displayAccountDetails();
}
}
Let's try to understand how the above code-segments work. The sample class and interface definitions are fairly simple and straightforward, so let's come directly to the getConfiguredContainer method, where we are creating an instance of the PicoContainer and configuring that instance. PicoContainer is actually an interface having MutablePicoContainer as one of its various sub-interfaces. The MutablePicoContainer interface has many variants of the method registerComponentImplementation. The two variants which we have used have these signatures:
- ComponentAdapter registerComponentImplementation(Object componentKey, Class componentImplementation, Parameter[] parameters) throws PicoRegistrationException;
- ComponentAdapter registerComponentImplementation(Class componentImplementation) throws PicoRegistrationException;
The second variant is internally translated into a call to the variant 'registerComponentImplementation(Object componentKey, Class componentImplementation)', which means the same type is used as the key to register the component with the Container. Another point to note here is that we didn't need to pass a parameter of type 'BankAccount', as it's already registered and the Container will pick that up automatically. Moreover, the parameters passed serve just as hints, and how many of them are honoured (or ignored) by the Container depends on the particular implementation of the Container.
Once we have the configured PicoContainer instance, we use the getComponentInstance method (a member of the PicoContainer interface with the signature 'Object getComponentInstance(Object componentKey);', which finds the component registered with the specified key) to fetch the well-built instances of the classes having all their dependencies injected into them. Notice that we have used 'DisplayAccountImpl.class' to fetch the instance of type 'DisplayAccountImpl', as we registered the component with the same key.
Now we can use those instances as per the application requirements - in our case we are using the instance to call a public method named 'displayAccountDetails'.
The code-snippets listed above are just to give you an idea of how we can actually use an IoC Container to exploit the Constructor Injection feature. I've not compiled or tested the code. You may need to make some changes, but I suppose most of them would be cosmetic in nature.
Saturday, December 13, 2008
Spring: Beans, Container, Metadata, IoC Instantiation
Spring fwk: Beans, Containers, Configuration Metadata, IoC Instantiation
Beans - what are they and why do we call them beans?
Any object in your Spring application which is instantiated, assembled, and otherwise managed by the Spring IoC Container is called a Bean. There is nothing special about these objects except that the configuration metadata of the Spring framework contains the details of how they can be instantiated and managed by the Spring IoC Container; otherwise these objects are just like any other objects used by your application.
Such objects were called Beans because the Spring framework was designed and developed in response to the complexities of Enterprise JavaBeans (EJB).
The Spring IoC Container
The central interface of the Spring IoC Container is BeanFactory, which is responsible for instantiating the application objects, configuring such objects, and managing the dependencies among these objects.
There are a number of implementations of this interface available today, one of the most popular being the XmlBeanFactory, which allows developers to specify the details of object instantiation and inter-object dependencies in an XML file. The XmlBeanFactory takes care of interpreting the XML configuration metadata and subsequently provides a fully configured application, taking the heavy load of object instantiation and inter-object dependency management off the developers.
Configuration Metadata
This covers the object instantiation details of the application as well as the details about inter-object dependencies. Depending on the form in which the metadata is supplied, the Spring framework would be required to use the corresponding type of IoC Container. The simplest and most popular form in which the metadata is supplied to the Spring framework is a simple and intuitive XML format, and evidently the XmlBeanFactory implementation of the BeanFactory interface is used as the Spring IoC Container in this case (a minimal example follows).
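For illustration, a minimal sketch of what such an XML metadata file (often named beans.xml) could look like; the bean ids and class names are invented for the example, and the nested 'ref' element is what tells the container to inject one managed bean into another:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE beans PUBLIC "-//SPRING//DTD BEAN//EN"
    "http://www.springframework.org/dtd/spring-beans.dtd">
<beans>
    <!-- a hypothetical data-access object -->
    <bean id="accountDao" class="com.example.AccountDaoImpl"/>
    <!-- a service whose 'accountDao' property is injected by the container -->
    <bean id="accountService" class="com.example.AccountServiceImpl">
        <property name="accountDao">
            <ref bean="accountDao"/>
        </property>
    </bean>
</beans>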
The other forms in which the configuration metadata can be supplied to the Spring framework are the Java Properties format and Spring's public APIs, used to specify the metadata programmatically. Below is an image (taken from the Spring Framework documentation) which shows how all these pieces (POJOs, Config Metadata, and Container) magically fit together to produce a fully configured system ready to be used.
Instantiating the Spring IoC Container
You would not always be required to instantiate a Spring IoC Container explicitly; in the case of a Web Application, some 8 lines of boilerplate code in the Web Descriptor file named web.xml will do that for us. Otherwise, we would be required to write just a few lines explicitly to instantiate the Container.
Configuration metadata files are passed to the BeanFactory or ApplicationContext constructors in the form of objects of type Resource, which encapsulate the metadata details as obtained from external resources like the local file system, the Java CLASSPATH, etc.
A few examples of Container instantiation are:
Resource resource = new FileSystemResource("beans.xml");
BeanFactory beanFactory = new XmlBeanFactory(resource);
or
ClassPathResource classPathResource = new ClassPathResource("beans.xml");
BeanFactory beanFactory = new XmlBeanFactory(classPathResource);
or
String[] appContextResource = new String[] {"applicationContext.xml", "applicationContext-part2.xml"};
ApplicationContext applicationContext = new ClassPathXmlApplicationContext(appContextResource);
BeanFactory beanFactory = (BeanFactory)applicationContext;
Which all beans/objects will have their configuration details in the Spring configuration metadata?
Well... the configuration metadata needs to have the definition of at least one bean which the container would manage. How many other bean definitions are captured by the configuration metadata depends upon your application requirements. The Spring framework supports enormous possibilities, so it's probably limited only by your application requirements and your imagination.
Some of the common entries in the configuration metadata include service layer objects, DAOs, presentation layer objects such as Struts Action instances, infrastructure objects such as Hibernate SessionFactory instances, JMS Queue references, etc.
Tuesday, December 9, 2008
Spring: Inversion of Control, Dependency Injection
Spring IoC - Inversion of Control. DI - Dependency Injection.
Spring IoC - Inversion of Control
The IoC pattern is defined as "A software design pattern and set of associated programming techniques in which the flow of control of a system is inverted in comparison to the traditional interaction mode", which means that instead of the application calling the framework for services (as is the case with traditional patterns), it's the framework which calls the components as specified by the application.
DI - Dependency Injection
As you can deduce from the above paragraph, IoC is quite generic in nature and perhaps not used by the Spring framework in its entirety. The aspect of IoC used by the Spring framework is usually termed DI, or Dependency Injection, as it focuses on the injection of dependencies (required resources) into the dependent components at run time.
There are three main ways through which dependencies are injected into the dependent components. All these forms are supported in Spring either directly or indirectly. These forms are:
- Constructor Injection - as the name suggests, in this form the dependencies are injected using a constructor, and hence the constructor needs to have all the dependencies (either scalar values or object references) declared in it. This form of DI is directly supported by the Spring IoC Container, but Constructor Injection is mainly associated with a highly embeddable, lightweight, and full-service IoC Container named PicoContainer. Read more about how we actually code Constructor Injection using Pico in this article - Implementing CI using Pico. How CI works?
- Setter/Mutator Injection - this form of DI uses the setter methods (also known as mutators) to inject the dependencies into the dependent components. Evidently, all the dependent objects will have setter methods in their respective classes, which would eventually be used by the Spring IoC Container. This form of DI is also directly supported by the Spring IoC Container. Though the Spring framework supports both Constructor Injection and Setter/Mutator Injection directly, the IoC Container prefers the latter over the former (a small sketch contrasting the two follows this list).
- Interface Injection - in this case any implementation of the interface can be injected, and hence it's relatively more complex than the other two, where objects of specified classes are injected. It's not supported directly by the Spring IoC Container. Interface Injection was used by Apache Avalon. In 2004 the Apache Avalon project closed after growing into several sub-projects including Excalibur, Loom, Metro, and Castle.
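As a quick side-by-side, here is a minimal Java sketch (class names invented for the example) of the same dependency handed over once via a constructor versus injected, and re-injectable, via a setter, which is the style Spring's XML 'property' configuration drives:

// Constructor Injection: the dependency is mandatory and handed
// over exactly once, at instantiation time.
class ReportService {
    private final AccountDao accountDao;
    public ReportService(AccountDao accountDao) {
        this.accountDao = accountDao;
    }
}

// Setter/Mutator Injection: the object is created first and the
// dependency is injected (and can be re-injected) afterwards.
class BillingService {
    private AccountDao accountDao;
    public void setAccountDao(AccountDao accountDao) {
        this.accountDao = accountDao;
    }
}

interface AccountDao { }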
Setter vs Constructor Injection. Why Spring prefers the former?
- Hard to manage a large number of constructor arguments: the Spring framework prefers Setter/Mutator Injection over Constructor Injection because a large number of constructor arguments can become really hard to manage, especially when many of them are optional. The Spring IoC Container will of course not have any such problem with Setter Injection.
- Re-injection or Amendment: One important thing to note here is that in the case of Constructor Injection the dependencies are handed over at the time of instantiation of the dependent objects, whereas in the case of Setter/Mutator Injection the dependencies are injected after the dependent object has already been created. Now if a re-injection or re-configuration of a dependent object is required, it can easily be done by calling the respective setters in the case of Setter Injection, but not in the case of Constructor Injection. So, should you need any re-configuration of the dependent objects, you would be better off with Setter Injection.
Why at all do we have Constructor Injection then? Any real need?
Well... from Spring IoC preferring Setter/Mutator Injection you should not deduce that Constructor Injection has no real uses. Constructor Injection has its own benefit(s). This form of injection is preferred by purists for the reason that the dependent objects are always handed over in a completely initialized state, and no less than that. Right?
The obvious disadvantage in this case is that the dependent object becomes less flexible and hence any re-configuration becomes difficult.
How has the IoC pattern (or DI) been implemented in Spring framework?
Spring framework has two packages supporting this core functionality. These packages are: org.springframework.beans and org.springframework.context and the two interfaces which are the building blocks of Spring IoC are: BeanFactory and ApplicationContext.
The ApplicationContext interface is actually a sub-interface of the BeanFactory interface. The top-level interface, BeanFactory, takes care of supporting an advanced configuration mechanism to manage objects of any type. The ApplicationContext interface, on the other hand, is responsible for functionalities like integration with Spring AOP, message resource handling, event propagation, and other enterprise-centric functionalities.
There are other specialized sub-interfaces as well, which are targeted to add specific sets of functionalities on top of the functionalities provided by the ApplicationContext and BeanFactory interfaces. One such example is WebApplicationContext, which is a sub-interface of the ApplicationContext interface and is responsible for providing enterprise-centric functionalities to be used in Web Applications.
Saturday, December 6, 2008
Session impl of a Web App served by multiple JVMs
Session implementation of a Web App spread on multiple JVMs/Servers
Should you need a refresher on what HTTP Sessions are and why they are actually needed, please refer to the article - Need for Session Tracking and Session Tracking Mechanism >>
All three techniques of HTTP session maintenance, namely Cookies, URL Rewriting, and Hidden Fields, have quite a few limitations. Some of them are:
- Support for small volume of data - especially in case of URL Rewriting as can be easily understood.
- Support for simple character data only - URLs are made up of character strings.
- Vulnerable to attacks - since all the data is visible to the clients, attackers may get hold of it and change it before the request reaches the server.
- Developer needs to manage the session - the entire responsibility of maintaining the session (when to use an existing one and when to create a new one, ensuring uniqueness, consistent client-session integration, etc.) falls on the developer, which obviously adds to the complexity of the overall application development.
How can we manage the session more effectively and more easily?
Using the HttpSession interface of the Java Servlet Specification is one of the possible ways of maintaining sessions more effectively and far more easily. All J2EE-compliant containers provide the implementation of the underlying classes, and the user simply needs to call the APIs to use the services. For example, calling getSession() will return the session for the corresponding client if it already exists, otherwise it will create a new session. Similarly, we can use the APIs setAttribute(String attrName, Object attrValue) and getAttribute(String attrName) to save attribute values to the session or to get them from the session, respectively. The important point to note here is that we can save a value of type Object (i.e., effectively any data type supported by Java), which means we are no longer limited to character-based name-value pairs. Almost all of the above-mentioned limitations either get eliminated or at least reduced to a considerable extent by using this approach. Right? A minimal sketch follows.
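A minimal servlet sketch of the calls just mentioned; the attribute name and the visit-counting logic are invented for the example:

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class VisitCounterServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        // Returns the existing session for this client, or creates one.
        HttpSession session = request.getSession();

        // Any Object can be stored, not just character data.
        Integer visits = (Integer) session.getAttribute("visitCount");
        visits = (visits == null) ? 1 : visits + 1;
        session.setAttribute("visitCount", visits);

        response.getWriter().println("Visits in this session: " + visits);
    }
}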
Managing the Session Manager of your Application Server
Almost all Application Servers come with an integrated Session Manager, which facilitates an even easier way of configuring session handling for your web application via simple GUI screens. You may refer to your Web/App Server manual for more details. Some common tasks which we can do via the App Server Session Manager are:
- Enable Sessions - if this is not enabled then the runtime system will throw an exception if the servlet tries to create a session (and of course no existing session would be returned, as there wouldn't be any existing... right?). Why at all do we need this configuration? Well... because not all your web applications require session support, and for those it's wiser to disable this feature to avoid the extra overhead which the runtime system incurs for session management.
- Enable Cookies - Ohh, back to Cookies again? Yeah... the reason why we still need cookies is that we need the client to maintain the unique Session ID of the session object created and maintained on the server. But now we are left with storing just a single ID, either in a Cookie or via URL Rewriting, and not the entire session information. That definitely makes a web developer's life easier.
- Enable URL Rewriting - again for the same purpose of passing the unique Session ID to and from the individual clients. This of course requires a piece of code to be written, as the Session ID needs to be added to the URL programmatically, and hence this approach is not supported for static pages (plain HTML pages). How to overcome this limitation? Pretty simple... convert all the plain HTML pages to JSP pages, as every HTML page is a valid JSP page, so this should not be a problem. You would of course not like to convert your static HTML pages to Servlets :-) You do of course need to encode the URL in these converted JSP pages as well. Any JSP page or Servlet which doesn't do the URL encoding will not be able to access the session if it relies on the URL Rewriting approach (see the encoding sketch right after this list). Beware of this!
- Enable Persistent Sessions - sessions are maintained in the Application/Web Server memory, and hence the data will be lost if the server is shut down. If you're interested in retaining the session data then you need to store it in some database or some other persistent medium. Almost all Application Servers support this feature, and you just need to specify the Data Source which would be used to store the session data.
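The URL encoding mentioned above is done with HttpServletResponse.encodeURL(), which appends the session id to a link only when the container has to fall back to URL Rewriting; the link target below is invented for the example:

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SessionAwareLinkServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        // encodeURL appends the session id (e.g. ";jsessionid=...") only
        // when the container must fall back to URL Rewriting; if the client
        // accepts cookies, the URL is returned unchanged.
        String sessionAwareUrl = response.encodeURL("/account/details.jsp");
        PrintWriter out = response.getWriter();
        out.println("<a href=\"" + sessionAwareUrl + "\">Account details</a>");
    }
}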
What will happen if both 'Enable Cookies' and 'Enable URL Rewriting' are enabled?
If both these features are selected, then the approach actually used to maintain the session will depend upon the browser setting (whether it accepts cookies or not) and also on whether the Servlet in use does URL encoding or not. Below are the two possible cases of how the session will be maintained if both 'Enable Cookies' and 'Enable URL Rewriting' are enabled:
- Servlet has URL Encoding code - if this is the case then the URL Rewriting approach will be used irrespective of whether Cookies are enabled or not, even if the browser is set to accept Cookies.
- No URL Encoding code in Servlet and Browser accepting Cookies - the cookies approach will be followed to maintain the sessions.
Implementing HTTP sessions for Web Applications spread across multiple physical servers (or JVMs)
Say your Web Application is spread across multiple physical servers (or maybe on the same server but using different JVMs, which is of course a rarity), either to balance the load of your application or because the requirements make separate physical servers a need rather than a luxury. Whatever the case, in such a scenario, if a user logs into one of the machines (JVMs, to be specific) and is then taken to some other Servlet/JSP running on some other server (JVM) to fulfill the client request, and if that Servlet/JSP also requires authentication (which it would in most practical scenarios), then the user would be prompted to enter his/her credentials once again, which s/he would of course not like. It's the application's responsibility to transfer the credentials from one server (JVM) to another... right?
Using Persistent Sessions, we can easily achieve a solution to this complex problem. This approach requires the session to be saved in a data source which can easily be accessed by any of the scattered servers (JVMs), so the client gets the feeling that the application is virtually running on a single server (JVM). It's better practice to have a completely separate Data Source just for the purpose of session persistence, and not to integrate session data with the application data source(s), for the obvious reason of keeping the overall implementation loosely coupled and hence more maintainable and more scalable.
Though the Session Manager discussed above behaves more or less the same for all the popular Application Servers like BEA WebLogic, IBM WebSphere, Oracle Application Server, etc., this article is based on IBM WebSphere Application Server.
References: Maintaining Session Data with WebSphere, The WebSphere InfoCenter
Creating documents directly from emails in GMail
Creating documents straight out of your emails in GMail
If you have not noticed (or used) this feature so far then you are probably missing a really nice feature of GMail Labs. The feature was introduced to the world on the Official GMail Blog on Dec 16, 2008.
Okay, what's this used for? As the name suggests, it's for creating documents straight from your emails. So, if you think any of the email conversations in your GMail a/c need to be saved, edited, and/or shared as a document, then this feature will make your life easier. You just need to click a link named 'Create a document', which appears to the right of your opened email along with the 'New window', 'Print all', 'Expand all', and 'Forward all' links.
This feature is not enabled by default; you need to turn it on by selecting 'Enable' against 'Create a document' and subsequently clicking 'Save Changes' on the Settings -> Labs page. Find below a screenshot showing all these steps in the required order.
If you want to create a blank document, you need to press 'G' and, while this key is pressed, press 'W'. Make sure that Keyboard Shortcuts are ON for this to work.
The official GMail Blog post which introduces and talks in detail about the feature can be found here - GMail Labs: Create a document. The need for this feature and the potential benefits of using this feature are summarized by the official blog post as below:
"More than once, I've had a conversation over email and later realized that the information contained in the messages would make a great starting point for a document. So I built an experimental feature for Gmail Labs that does just that: with one simple click, "Create a document" converts an email into a Google Docs document.
No more copying and pasting the text from your email -- just open the message you wish to convert, click the "Create a document" link on the right side of the page, and voila, you have a brand new document which you can then modify and share!"
Enjoy this cool GMail Labs feature, which might eventually save you time in your sincere efforts to Go Green by making/keeping/managing as many documents online as possible.
Saturday, November 29, 2008
Implicit vs Explicit Cursors. Static vs Dynamic Cursors.
Life Cycle of DB Cursors. Implicit vs Explicit, Static vs Dynamic Cursors.
What are Database Cursors?
When a SQL statement is executed from a PL/SQL block, Oracle RDBMS assigns a private work area to contain related information and the data returned or affected by that statement. A Cursor is a mechanism for naming that private area so that its contents can be manipulated programmatically.
A Cursor in its simplest form can be thought of as a pointer to the records in a database table, or in a virtual table represented by the result of a SELECT statement (in the case of a JOIN, for example). A sample Cursor declaration/definition:
DECLARE
CURSOR sample_cur /*... explicit cursor ...*/
IS
SELECT T2.description
FROM SampleTable1 T1, SampleTable2 T2
WHERE T1.id = T2.id;
/*...declaring a variable to hold data...*/
sample_var VARCHAR2(50);
BEGIN
/*... open the cursor if not opened already ...*/
IF NOT sample_cur%ISOPEN
THEN
OPEN sample_cur;
END IF;
/*... fetching data from a cursor into a variable ...*/
FETCH sample_cur INTO sample_var;
/*... once finished using the cursor then close it ...*/
CLOSE sample_cur;
END;
Once defined/declared, a Cursor can be OPENed, FETCHed from, and finally CLOSEd quite easily using the obvious-looking commands. You can put the FETCH inside a loop if you wish to iterate through all the rows returned by the cursor. This of course holds true only for Explicit Cursors, as these operations are (and can only be) performed implicitly by the PL/SQL engine for Implicit Cursors. Read more about the types of Cursors in the subsequent sections.
Life Cycle of a Cursor. How PL/SQL Engine executes a Cursor?
The PL/SQL Engine performs the following operations (either automatically or as specified in the program depending upon whether the cursor is implicit or explicit) during the entire life cycle of a database cursor:
- PARSE: making sure that the statement is valid and determining the execution plan. Implicitly done by the engine.
- BIND: associating values with placeholders in the SQL statement which is implicitly done by the engine for static SQL and needs to be specified explicitly for dynamic SQL.
- OPEN: acquiring memory for the cursor, initializing the cursor pointer and making the SQL statement ready to be executed. Implicitly done by the engine in case of an Implicit Cursor otherwise it's required to be specified for Explicit Cursors.
- EXECUTE: executing the SQL statement in the engine and setting the pointer to point just above the first row. Can't be specified explicitly.
- FETCH: retrieving the next row from the cursor's ResultSet. In the case of an Explicit Cursor, this operation doesn't raise an exception/error when it reaches the end of the ResultSet, so you need to explicitly have some code making sure that the end of the ResultSet has been reached.
- CLOSE: closing the cursor and releasing all memory used by it. Once closed, you can of course not use the cursor anymore. Implicit Cursors are automatically closed by the engine, but Explicit Cursors require the programmer to close them explicitly.
Implicit vs Explicit Cursors
For every INSERT, UPDATE, and DELETE in your code, the PL/SQL engine creates an implicit cursor; the developer doesn't need to create these cursors explicitly. In fact, they can't create them explicitly even if they want to. Such cursors are called Implicit Cursors. If you have a SELECT statement returning only a single row, then also an implicit cursor is created automatically. Implicit cursors are OPENed, FETCHed from, and CLOSEd automatically, and they don't offer any programmatic control.
For SELECT statements returning more than one row, you need to create cursors explicitly in order to iterate through all the returned rows/records. Such cursors are called Explicit Cursors. You can, however, create explicit cursors for SELECT statements which return only a single row as well. It's advisable to do so if you're using PL/SQL Release 2.2 or earlier (read the reason below). But if you're heavily re-using the same SELECT query, then you would probably be better off with the implicit one for single-row SELECT statements.
Drawbacks of Implicit Cursors
- Less Control: Implicit Cursors can't be OPENed, FETCHed, or CLOSEd explicitly and hence they give less programming control.
- Less Efficient: In addition, they are comparatively less efficient than explicit cursors (at least theoretically). The reason an implicit cursor is slightly slower is that it runs as a SQL statement, and Oracle SQL is ANSI-standard, which requires a single-row query to not only fetch the first row but also to perform a second fetch to check whether more than one row would be returned by that query (in which case a TOO_MANY_ROWS exception is raised by the PL/SQL engine). In the case of an explicit cursor there would be only one FETCH in such a situation. But this holds true only up to PL/SQL Release 2.2, as in PL/SQL Release 2.3 implicit cursors got optimized and now they run slightly faster than the corresponding explicit cursor. In addition, there are more chances that an implicit cursor would be re-used in the application, and if that happens then the parsing time can be saved, as the cursor might lie in a pre-parsed state in the shared memory. So implicit cursors may enhance the overall performance of the application in case you're using PL/SQL 2.3 or later releases.
- Vulnerable to Data Errors: Less programmatic control over implicit cursors makes them more vulnerable to data errors, as you can't really OPEN/FETCH/CLOSE them at will. For explicit cursors you may have custom validations in place and decide accordingly. Example: suppose you have a query returning only a single row and you are using an implicit cursor for it; what if the same query returns more than one row in the future? The implicit cursor will raise a TOO_MANY_ROWS exception, which you may or may not like depending upon your application requirements.
Static vs Dynamic Cursors
- A Static Cursor doesn't reflect data changes made to the DB once the ResultSet has been created, whereas a Dynamic Cursor reflects the changes as and when they happen.
- A Static Cursor is much more performant than a Dynamic Cursor as it doesn't require further interaction with the DB server.
- A static cursor supports both Relative and Absolute Positioning whereas a Dynamic Cursor supports only Relative Positioning.
- A Static Cursor can be used for Bookmarking purposes as the data returned is static whereas a Dynamic Cursor can't be used for the same.
Static Cursor: A Static Cursor is one whose ResultSet is fixed once it has been created: later data changes in the database are not reflected in it, and since the data (and hence its ordering) is fixed, it can support absolute positioning and Bookmarks while needing no further interaction with the DB server.
Dynamic Cursor: A Dynamic Cursor is one which reflects all the data changes made to the database as and when they happen. This may require continuous re-ordering of data in the ResultSet due to the database changes, and hence it's much more expensive than a Static Cursor. The possible re-ordering of records in the ResultSet doesn't allow a Dynamic Cursor to support absolute positioning, as it can't be sure of the absolute position of a record/row in the ResultSet. Hence a Dynamic Cursor supports only relative positioning. Furthermore, a Dynamic Cursor doesn't support Bookmarks, for the obvious reasons.
Sunday, November 23, 2008
AspectJ, Aspect-Oriented Programming, Spring AOP
Aspect, AOP, AspectJ, Spring AOP, AspectJ vs Spring AOP
What's an Aspect? What's Aspect-Oriented Programming (AOP)?
An Aspect can be understood as a crosscutting concern which cannot be covered by the usual grouping and abstraction mechanisms supported by the various programming methodologies. Why is that so? Because many concerns cut across multiple abstractions as defined by the underlying program, hence the name 'crosscutting concern'. Right? To understand it better, think of a simple example: in any typical Web Application (dealing with a DB source) there are numerous occasions where we need to write code for opening a connection (or fetching one from a pool if we use Connection Pooling), defining a transaction so that all the DML statements contained in it are fired in their entirety and the changes committed, or simply rolling back any changes made to the source in case even a single statement of the transaction fails. This transaction-processing stuff can be thought of as an aspect, and any developer using a typical programming language needs to code all the underlying stuff manually. Using AOP, this widely needed crosscutting concern (which a typical programming abstraction cannot capture directly) can be handled declaratively. This is just a sample example, and the idea is to give you a feel for how easy (and hence fast-paced and robust) application programming can be with this approach.
Logging is another stereotypical example of a crosscutting concern present in almost every application. Why should the developer be bothered about coding/maintaining this using all the Logging classes/interfaces when s/he can do away with it by using AOP in a declarative manner? As is the case with any other declarative code, this will also be translated back to actual programming code and compiled thereafter, but who cares if all this is done behind the scenes? And not to forget that translators/compilers don't make mistakes while humans may (or should I say 'will' :-)), so using a declarative approach not only makes life simpler but also improves the robustness of the entire application. Security is another very popular example of a crosscutting concern which can effectively be handled using an AOP implementation.
Another great benefit of using AOP for such crosscutting concerns becomes apparent when we need to change the underlying approach/considerations (say, for example, security considerations): it might require a lot of effort to identify all the scattered code fragments doing the task, and changing them to meet the new considerations/standards won't be easy either. Using AOP to handle such stuff will only require some declarative changes; for example, it can be as simple as adding an advice and the appropriate pointcuts. This will automatically cover all the instances where any of the pointcuts match, and in each such case the advice will guarantee the execution of either the added security considerations or the changed ones. Wow, so easy? Yes, it is.
AspectJ - what's this?
It's an aspect-oriented extension to the Java programming language. The syntax is quite similar to that of Java, and all valid Java programs are valid AspectJ programs, but not vice-versa, as AspectJ supports many programming constructs alien to Java which are used to define and use aspects in applications. AspectJ was originally developed by Gregor Kiczales and his team at Xerox PARC, who released the first version in 2001; the project later moved to the Eclipse Foundation, and it is now available both as a stand-alone extension (pluggable into many popular frameworks) and integrated into Eclipse. See below what the Eclipse Foundation says about AspectJ:
AspectJ is possibly the most generic and widely-used AOP extension, and it focuses primarily on simplicity and usability. A Java-like syntax and the availability of plug-ins for many popular frameworks make it even more popular with Java developers worldwide.
Some of the aspects which probably cannot be defined by a typical programming language directly (at least not declaratively), and which can easily be achieved using AspectJ, are listed below (a short sketch follows the list):
- Ability to add members (methods or fields) within existing classes
- Ability to define pointcuts when some method is called, some object is instantiated, etc.
- Ability to run a piece of code before/after a pointcut is matched
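For a flavour of the second and third points, here is a minimal logging aspect in AspectJ's annotation style (class and package names invented for the example); the pointcut matches every public method in a hypothetical service package, and the advice runs before each matched execution:

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.annotation.Pointcut;

@Aspect
public class LoggingAspect {
    // Pointcut: matched whenever any public method of any type in the
    // (hypothetical) com.example.service package is executed.
    @Pointcut("execution(public * com.example.service..*.*(..))")
    public void serviceMethods() { }

    // Advice: runs before every join point matched by the pointcut above.
    @Before("serviceMethods()")
    public void logEntry(JoinPoint joinPoint) {
        System.out.println("Entering: " + joinPoint.getSignature());
    }
}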
Spring AOP. How is Spring AOP different from AspectJ?
Spring AOP is just another AOP implementation, which comes with the Spring Framework bundle developed by SpringSource and released under the Apache License 2.0.
Spring AOP differs from AspectJ in complexity and consequently in capabilities. The Spring AOP implementation is less complex than the AspectJ implementation and hence less capable in comparison. The main reason to keep Spring AOP simpler is that one of the foundation principles on which the Spring Framework was designed and developed is 'Simplicity'. In addition, Spring believes that most users don't really need the truly complex features of AspectJ in their applications, and hence Spring AOP will probably be more than sufficient for most application developers. In case a select few want to use the missing features, they are most welcome to use AspectJ from within Spring; the Spring Framework has supported AspectJ since version 1.2 itself, and every subsequent release has made the integration even better. This facilitates developers having the best of both worlds. BTW, the Spring AOP implementation is far more capable than it may sound here (only because it's being compared with the best-of-breed and de-facto AOP implementation named AspectJ), and the Spring Framework itself uses Spring AOP for transaction management, security handling, remote access, and JMX.
Thursday, November 20, 2008
Using XSD to create XML Doc and applying an XSL on it
I was given an XSD (XML Schema) and was asked to do a couple of things: (i) generate an XML document out of the XSD, and (ii) generate (and test, of course) an XSL to check whether one of the XML elements (named 'filename') contained a specific string (say "abc") at a specific position or not. These tasks are of course not very complex in nature, but while doing them I got the feeling that I had unintentionally revisited most of the common stuff about Schemas, XML Docs, XSLs, and their application in one go, and that too so quickly. Hence I thought of putting it all into an article so that interested people may go through it to revise some common practical stuff about XML/XSL, especially when they are not interested in getting into too much theoretical detail.
Given XSD (XML Schema)
<?xml version="1.0" ?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<xsd:element name="file_transfer" type="file_transfer_type" />
<xsd:complexType name="file_transfer_type">
<xsd:sequence>
<xsd:element name="tag" type="xsd:string" minOccurs="0" maxOccurs="1" />
<xsd:element name="filename" type="xsd:string" minOccurs="0" maxOccurs="unbounded" />
</xsd:sequence>
</xsd:complexType>
</xsd:schema>
How to generate an XML document out of it? Well... you can manually generate an XML document in a few minutes as the Schema is not so complex, but why waste time when we have tools to do it? You have quite a few options, Altova XMLSpy being one of them. I used this tool. The generated XML was:
<?xml version="1.0" encoding="UTF-8"?>
<!--Sample XML file generated by XMLSPY v2004 rel. 4 U (http://www.xmlspy.com)-->
<file_transfer xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="C:\schema_file.xsd">
<tag>String</tag>
<filename>String</filename>
</file_transfer>
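By the way, if you'd like to verify programmatically that a generated document really conforms to the given Schema, a minimal sketch using the standard javax.xml.validation API could look like the following (the file names are my assumptions):

import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class ValidateXml {
    public static void main(String[] args) throws Exception {
        // Load the XSD shown above (file name assumed)
        SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = factory.newSchema(new File("schema_file.xsd"));

        // Validate the generated XML document against it
        Validator validator = schema.newValidator();
        validator.validate(new StreamSource(new File("file_transfer.xml")));
        System.out.println("XML is valid against the schema");
    }
}

If the document doesn't conform, validate() throws a SAXException describing the first violation it finds.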
Generating the XSL which tests whether 'filename' has "abc" or not
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<xsl:output omit-xml-declaration="yes"/>
<xsl:template match="/">
<xsl:param name="filename" select="file_transfer/filename"/>
<xsl:param name="substrfilename" select="substring($filename, 7, 3)" />
<xsl:if test="$substrfilename='abc' ">
<xsl:comment>Ignore</xsl:comment>
</xsl:if>
</xsl:template>
</xsl:stylesheet>
Applying XSL on XML Doc to test the result
Again you have plenty of options - you may use the Java XSLT processing APIs or an XML/XSL processing tool like XMLSpy. I used the latter, for obvious reasons. Find below the screenshots of the outputs for the negative case (when 'filename' doesn't contain the sub-string 'abc' at the specified location) and the positive case (when it does).
The XMLSpy output above clearly shows that when the value of the XML element named 'filename' has the sub-string 'abc' at index 7 (counted from 1), the output window shows an XML comment with the text 'Ignore', as specified in the XSL. When the value of 'filename' doesn't contain 'abc' there, the output window shows nothing, as the test condition specified in the XSL doesn't return 'true' and the control simply skips to the subsequent statement. I believe the XSL code is quite self-explanatory and easy to understand. However, that shouldn't stop you from discussing anything you feel like. You may bring something really interesting to the table which I might have missed.
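If you'd rather apply the stylesheet using the Java XSLT processing APIs mentioned above, a minimal sketch could look like this (the file names are my assumptions):

import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class ApplyXsl {
    public static void main(String[] args) throws Exception {
        TransformerFactory factory = TransformerFactory.newInstance();

        // Compile the stylesheet shown above (file name assumed)
        Transformer transformer = factory.newTransformer(new StreamSource("check_filename.xsl"));

        // Apply it to the generated XML and print the result to the console
        transformer.transform(new StreamSource("file_transfer.xml"), new StreamResult(System.out));
    }
}

Running it against an XML whose 'filename' contains 'abc' at position 7 should print the <!--Ignore--> comment; otherwise it prints nothing, exactly as in the XMLSpy runs.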
Sunday, November 16, 2008
Ease of Dev in Java 5: Generics, Autoboxing, Static Import, Enhanced for, Annotation, & Typesafe enum
I had to give a presentation on Java 5.0 a few weeks back in my company, and during that time I went through an interesting interview (taken sometime in 2003) of Joshua Bloch, who touched upon the six popular new features of Java 2 Platform, Standard Edition 5.0 (also known as J2SE 5.0): Generics, Autoboxing, Static Imports, Annotations, the Enhanced for loop, and Typesafe Enums. He talked about how these new features would be accepted by developers worldwide and how they would actually make application development in Java easier, safer, and more robust. Bloch is an architect at Sun Microsystems and has been involved in the design and implementation of several Core Java features, including the highly regarded Collections Framework and the java.math package. He authored the Jolt-award-winning book "Effective Java". His interview can be found here.
Before we move ahead let's discuss what the six major new features (aimed towards ease of development) introduced in Java 5.0 are all about.
- Generics: this feature ensures compile-time type safety for Collections and eliminates the need for explicit casts. It guarantees that code using Collections won't throw the infamous runtime exception 'ClassCastException', as such cases can be detected at compile time itself if the programmer uses Generics. A big relief, isn't it?
- Autoboxing/Unboxing: this feature makes automatic conversion possible between the primitive data types and their corresponding wrapper types. For example: an 'int' can now be assigned to a reference of type 'Integer' and vice-versa. The compiler automatically takes care of this.
- Static Import: remember using interfaces just to access static constants in Java programs? Interfaces are not meant for that; they should be used for defining types. Using them just for the sake of constants not only defeats the actual purpose of interfaces, but also makes the code less flexible: implementing an interface is a public contract, and even if you later decide not to use the constants defined in the interface, you still have to maintain the contract because clients might have used the interface as a data type for the implementing class in their code. Static import brings all the static members of a class/interface into scope, making them available by their simple names, so you can avoid implementing interfaces just to use constants (see the sketch after this list).
- Enhanced for-loop: this feature makes the for-loop more compact. Iterators no longer need to be obtained and checked against boundary conditions explicitly.
- Annotations/Metadata: this feature helps the programmer by letting the tools generate the obvious code just by supplying the corresponding annotation tags. It makes the programming more "declarative" in nature.
- Typesafe enums: most of the shortcomings of the old enum idiom, which previously required a lot of coding around its use to ensure safe usage, have been resolved in this new version of enums, which are completely typesafe and can additionally be used with the switch statement (again, see the sketch below).
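To make a few of these features concrete, here's a small sketch combining static import, a typesafe enum (with switch), and the enhanced for-loop (the class and names are mine, purely illustrative):

import static java.lang.Math.PI;   // static import: PI usable without the Math. prefix

public class Java5Demo {

    // A typesafe enum; each constant is a full-fledged object
    enum Shape { CIRCLE, SQUARE }

    static double area(Shape shape, double size) {
        switch (shape) {                          // enums now work directly in switch
            case CIRCLE: return PI * size * size; // static import in action
            case SQUARE: return size * size;
            default: throw new AssertionError(shape);
        }
    }

    public static void main(String[] args) {
        for (Shape s : Shape.values())            // enhanced for-loop over the enum values
            System.out.println(s + ": " + area(s, 2.0));
    }
}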
Some of the interesting questions taken from his interview are listed below:-
Question: How will Java 5.0 help make Java an easier and more effective language to work with?
Answer: Bloch summarizes the answer with two main points: Java 5.0 will help shift the responsibility of writing boilerplate code from the programmer to the compiler, and he believes the whole will be greater than the sum of its parts. Generics, Autoboxing, and the Enhanced for-loop are the features which shift quite a lot of coding responsibility from the programmer to the compiler. Humans are error prone, and hence shifting this responsibility not only speeds up development but also makes the code more reliable and secure.
Since all the features were designed keeping the others (and those already existing) in mind, the designers could exploit the positives of each feature during the design and implementation of the rest. This makes the overall benefit even greater than what these features could have contributed had they been designed in isolation. For example: Generics, Autoboxing, and the Enhanced for-loop make a fabulous combination when used together, and it is in such scenarios that we actually realize the power of these features. One example taken from his interview, which nicely shows how well these features can be combined to get cleaner, safer, and easier-to-write code, is as follows:
The example counts the frequency of words supplied at the command line. There are two versions: the former without any Java 5.0 features, and the latter beautifully using Generics, Autoboxing, and the Enhanced for-loop.
import java.util.*;

public class Freq {
    private static final Integer ONE = new Integer(1);

    public static void main(String args[]) {
        // Maps word (String) to frequency (Integer)
        Map m = new TreeMap();
        for (int i = 0; i < args.length; i++) {
            Integer freq = (Integer) m.get(args[i]);
            m.put(args[i], (freq == null ? ONE : new Integer(freq.intValue() + 1)));
        }
        System.out.println(m);
    }
}
As you can easily figure out, the above code counts frequencies using a TreeMap where the keys are the words and the corresponding values are their frequencies. At the first occurrence of a word the value is set to ONE, and it increases by 1 on every subsequent occurrence of the same word. Now see the same program using the three new features of Java 5.0:
import java.util.*;

public class Freq {
    public static void main(String args[]) {
        Map<String, Integer> m = new TreeMap<String, Integer>();
        for (String word : args) {
            Integer freq = m.get(word);
            m.put(word, (freq == null ? 1 : freq + 1));
        }
        System.out.println(m);
    }
}
Clearly it's far more readable and easier to understand, write, and maintain than the former version.
Question: Will the changes be hard for developers to adapt to?
Answer: Bloch says that it won't be tough for developers to adjust to the changes. Generics will probably be a little tricky in the beginning, as declarations now require more information to be supplied.
Considering the benefits of Generics, these adjustments don't really bother programmers. In fact, once used to the feature, they start realizing how tremendously the readability and cleanliness of the code have improved. No explicit casting required and no fear of the runtime ClassCastException now :-)
Question: How does the "enhanced for-loop" help the developers?
Answer: This feature allows programmers to stop worrying about managing the iterators obtained from Collections. The compiler does that automatically and ensures that the iterator visits each element of the Collection it was obtained from. Bloch says that having two extra keywords, "foreach" and "in", could have made this even more readable and understandable, but that might have compromised compatibility with earlier versions, as clients might have used these two words as identifiers in their programs.
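To make the contrast concrete, here's the same traversal written both ways (the list contents are illustrative):

import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class LoopDemo {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("alpha", "beta", "gamma");

        // Before Java 5: obtain and advance the iterator yourself
        for (Iterator it = words.iterator(); it.hasNext(); ) {
            System.out.println((String) it.next());
        }

        // Java 5 enhanced for-loop: the compiler manages the iterator for you
        for (String word : words) {
            System.out.println(word);
        }
    }
}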