Wednesday, October 14, 2009

Java API for accessing Bugzilla

It just happened one day that my manager came to me and said we should automate our build process to make our lives easier. We then started analyzing exactly what we wanted to automate among the zillion steps we perform as part of the build.

One of those steps is that we should be able to programmatically access Bugzilla and perform some operations on bugs: search for a list of bugs, verify their comments, and so on. It turns out Bugzilla supports only the XML-RPC and JSON-RPC protocols; since XML-RPC is the stable one, we went ahead and picked it.

Since this was my first time using XML-RPC, I did some research on the net and found a nice client library from Apache called the Apache XML-RPC client.

Just like any other API from Apache, it comes with loads of documentation and Javadocs. A big thanks to those geeks for maintaining the code.

Coming back to the problem at hand, all I wanted to do was log in to Bugzilla and perform some operations on a list of bugs. The following code snippet is simply copied and modified from Apache's XML-RPC client examples.


import java.net.MalformedURLException;
import java.net.URL;

import org.apache.xmlrpc.XmlRpcException;
import org.apache.xmlrpc.client.XmlRpcClient;
import org.apache.xmlrpc.client.XmlRpcClientConfigImpl;

public class Bugzilla {
    public static void main(String[] args) throws MalformedURLException, XmlRpcException {
        XmlRpcClientConfigImpl config = new XmlRpcClientConfigImpl();
        config.setServerURL(new URL("http://bug-tracker.yyyy.com/Bugzilla/xmlrpc.cgi"));

        XmlRpcClient client = new XmlRpcClient();
        client.setConfig(config);

        Object[] params = new Object[]{"xxx@yyy.com", "password"};

        Object result = client.execute("User.login", params);
        System.out.println("Result = " + result);
    }
}


When I ran this, it threw the following exception:


Exception in thread "main" java.lang.ClassCastException: java.lang.String
at org.apache.xmlrpc.parser.XmlRpcResponseParser.addResult(XmlRpcResponseParser.java:61)
at org.apache.xmlrpc.parser.RecursiveTypeParserImpl.endValueTag(RecursiveTypeParserImpl.java:78)
at org.apache.xmlrpc.parser.XmlRpcResponseParser.endElement(XmlRpcResponseParser.java:186)
at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.endElement(AbstractSAXParser.java:633)
at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.scanEndElement(XMLNSDocumentScannerImpl.java:719)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDispatcher.dispatch(XMLDocumentFragmentScannerImpl.java:1685)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:368)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:834)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:764)
at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:148)
at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1242)
at org.apache.xmlrpc.client.XmlRpcStreamTransport.readResponse(XmlRpcStreamTransport.java:186)
at org.apache.xmlrpc.client.XmlRpcStreamTransport.sendRequest(XmlRpcStreamTransport.java:156)
at org.apache.xmlrpc.client.XmlRpcHttpTransport.sendRequest(XmlRpcHttpTransport.java:115)
at org.apache.xmlrpc.client.XmlRpcSunHttpTransport.sendRequest(XmlRpcSunHttpTransport.java:69)
at org.apache.xmlrpc.client.XmlRpcClientWorker.execute(XmlRpcClientWorker.java:56)
at org.apache.xmlrpc.client.XmlRpcClient.execute(XmlRpcClient.java:167)
at org.apache.xmlrpc.client.XmlRpcClient.execute(XmlRpcClient.java:137)
at org.apache.xmlrpc.client.XmlRpcClient.execute(XmlRpcClient.java:126)
at Bugzilla.main(Bugzilla.java:21)

The error message is confusing and doesn't really point me to the root cause.

After an hour of searching, I found the mystery behind the error above. It turns out Bugzilla imposes certain rules on its XML-RPC interface:
  1. Every parameter passed to Bugzilla must be a named parameter.
  2. Bugzilla treats the parameters as a single 'struct', so we must pass them all as keys and values in a Map.
Following is the modified code:



import java.net.MalformedURLException;
import java.net.URL;
import java.util.HashMap;
import java.util.Map;

import org.apache.xmlrpc.XmlRpcException;
import org.apache.xmlrpc.client.XmlRpcClient;
import org.apache.xmlrpc.client.XmlRpcClientConfigImpl;

public class Bugzilla {
    public static void main(String[] args) throws MalformedURLException, XmlRpcException {
        XmlRpcClientConfigImpl config = new XmlRpcClientConfigImpl();
        config.setServerURL(new URL("http://bug-tracker.yyyy.com/Bugzilla/xmlrpc.cgi"));

        XmlRpcClient client = new XmlRpcClient();
        client.setConfig(config);

        Map<String, String> map = new HashMap<String, String>();
        map.put("login", "xxxx@yyyy.com");
        map.put("password", "password");

        Map result = (Map) client.execute("User.login", new Object[]{map});
        System.out.println("Result = " + result);
    }
}


The response from Bugzilla is again a Map, so I had to cast the result to access its elements.

Now comes the next hurdle. Bugzilla's XML-RPC APIs also assume that the client maintains a session with the server using cookies. The problem now is to figure out how to do that with the Apache XML-RPC client APIs. I'm going to write about it in my next blog post.
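The core mechanism is worth sketching now, though. Since the default XmlRpcSunHttpTransport is built on java.net.HttpURLConnection, one plausible approach is to install a JVM-wide CookieManager before making any calls; HttpURLConnection then stores and replays cookies automatically. The sketch below only demonstrates that cookie round-trip, against a throwaway local server (the server, the cookie name Bugzilla_login, and the response bodies are all made up for illustration, not Bugzilla's real behavior):

```java
import com.sun.net.httpserver.HttpServer;
import java.net.CookieHandler;
import java.net.CookieManager;
import java.net.CookiePolicy;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class CookieDemo {

    // Two GETs against a throwaway local server that issues a session cookie;
    // the second request should carry the cookie back automatically.
    static String[] roundTrip() throws Exception {
        // JVM-wide cookie store; plain HttpURLConnection (which the default
        // XmlRpcSunHttpTransport is built on) picks it up automatically.
        CookieHandler.setDefault(new CookieManager(null, CookiePolicy.ACCEPT_ALL));

        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", ex -> {
            // echo back whatever Cookie header we received, and set one
            String cookie = ex.getRequestHeaders().getFirst("Cookie");
            ex.getResponseHeaders().add("Set-Cookie", "Bugzilla_login=42");
            byte[] body = ("cookie=" + cookie).getBytes();
            ex.sendResponseHeaders(200, body.length);
            ex.getResponseBody().write(body);
            ex.close();
        });
        server.start();
        String base = "http://localhost:" + server.getAddress().getPort() + "/";
        try {
            return new String[]{ fetch(base), fetch(base) };
        } finally {
            server.stop(0);
        }
    }

    static String fetch(String url) throws Exception {
        HttpURLConnection c = (HttpURLConnection) new URL(url).openConnection();
        return new String(c.getInputStream().readAllBytes());
    }

    public static void main(String[] args) throws Exception {
        String[] r = roundTrip();
        System.out.println("first request saw  : " + r[0]);
        System.out.println("second request saw : " + r[1]);
    }
}
```

The first request carries no cookie; the second one sends Bugzilla_login=42 back without any extra code on our side.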

Friday, September 4, 2009

The semantics of Flushing in Ehcache

When I was going through the Ehcache user manual, I was a little confused about the difference between a 'flush' and a 'shutdown' in Ehcache. Of course, the names of these methods suggest their purpose clearly enough, but I had never really paid attention to their inner workings.

I always thought that flushing each and every cache in my CacheManager before exiting was equivalent to calling shutdown on it. I'm talking about caches with basic functionality (no bootstraps, no loaders, etc., so sending a shutdown hook to all those (empty) listeners doesn't matter). But I was proved wrong. Here's the reason...

The following code does a very simple task: it reads all the elements from one Ehcache, transfers them to a new cache, and tries to persist the items to the disk store before terminating.


import net.sf.ehcache.Element
import net.sf.ehcache.CacheManager
import net.sf.ehcache.Cache

def cachemgr = new CacheManager("D:/mPortal/workspace_new_cvs_structure/EhCacheDemo/config/change_listener_cache.xml")
def deltacache = cachemgr.getCache("deltaCache")

def deltaclone = new Cache("deltaCacheClone", 10000, null, true, cachemgr.getDiskStorePath(), true, 120, 120, true, 120, null)
cachemgr.addCache(deltaclone)

println "Migration about to begin"
println "Size of the original cache : ${deltacache.getSize()}"
println "Size of the clone : ${deltaclone.getSize()}"

deltacache.getKeys().each {
    ele = deltacache.get(it)
    deltaclone.put(new Element(ele.getKey(), ele.getValue()))
}

println "Size of the original cache after migration : ${deltacache.getSize()}"
println "Size of the clone after migration : ${deltaclone.getSize()}"

println "Migration successfully finished.."

deltacache.flush()
deltaclone.flush()


Note that I'm flushing all the caches I created/used in my program at the end.

To my surprise, whenever I ran this program again, the 'deltaclone' cache always initialized itself to zero elements. This puzzled me for quite some time and finally forced me to revisit the Ehcache source code to understand the behavior.

The Reason

What I found is that the 'flush' operation is not synchronous at all; it only signals the spool thread to flush when it next wakes up. In my case, the VM didn't wait for that thread to do its job; it killed it forcibly on exit.

How shutdown solves this problem

The shutdown method, however, is very responsible and gracefully waits until the thread finishes its execution. The following snippet explains this behavior:



// set the write index flag. Ignored if not persistent
flush();

// tell the spool thread to spool down. It will loop one last time if flush was called.
spoolAndExpiryThreadActive = false;

// interrupt the spoolAndExpiryThread if it is waiting to run again to get it to run now,
// then wait for it to write
spoolAndExpiryThread.interrupt();
if (spoolAndExpiryThread != null) {
    spoolAndExpiryThread.join();
}


This snippet is from DiskStore's dispose method. Thanks to the comments, the code is pretty much self-explanatory.
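The same interrupt-and-join handshake can be illustrated with plain threads, no Ehcache involved. This sketch (purely illustrative, not Ehcache code) shows why a fire-and-forget signal loses the write and why joining the writer thread guarantees it:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpoolDemo {

    // Returns what "exit right after flush" would have seen, and what a
    // proper interrupt-and-join (a la DiskStore.dispose) sees.
    static boolean[] run() throws InterruptedException {
        AtomicBoolean written = new AtomicBoolean(false);

        // Stand-in for the spool thread: it sleeps until its next wake-up,
        // then performs the "disk write".
        Thread spool = new Thread(() -> {
            try {
                Thread.sleep(10_000);        // waiting for the next spool cycle
            } catch (InterruptedException e) {
                // interrupted: loop one last time and write now
            }
            written.set(true);               // the actual write to disk
        });
        spool.setDaemon(true);               // a daemon dies when the VM exits
        spool.start();

        boolean afterFlush = written.get();  // flush() only signals; nothing written yet

        spool.interrupt();                   // wake the thread up now...
        spool.join();                        // ...and wait for it to finish writing
        boolean afterJoin = written.get();

        return new boolean[]{ afterFlush, afterJoin };
    }

    public static void main(String[] args) throws InterruptedException {
        boolean[] r = run();
        System.out.println("written right after flush ? " + r[0]);
        System.out.println("written after interrupt+join ? " + r[1]);
    }
}
```

If the VM exits right after the "flush", the daemon thread is killed mid-sleep and the write never happens; interrupt() plus join() is exactly what dispose does to avoid that.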


The fix

So the proper fix for my program is very simple: add a shutdown() call at the end.


import net.sf.ehcache.Element
import net.sf.ehcache.CacheManager
import net.sf.ehcache.Cache

def cachemgr = new CacheManager("D:/mPortal/workspace_new_cvs_structure/EhCacheDemo/config/change_listener_cache.xml")
def deltacache = cachemgr.getCache("deltaCache")

def deltaclone = new Cache("deltaCacheClone", 10000, null, true, cachemgr.getDiskStorePath(), true, 120, 120, true, 120, null)
cachemgr.addCache(deltaclone)

println "Migration about to begin"
println "Size of the original cache : ${deltacache.getSize()}"
println "Size of the clone : ${deltaclone.getSize()}"

deltacache.getKeys().each {
    ele = deltacache.get(it)
    deltaclone.put(new Element(ele.getKey(), ele.getValue()))
}

println "Size of the original cache after migration : ${deltacache.getSize()}"
println "Size of the clone after migration : ${deltaclone.getSize()}"

println "Migration successfully finished.."

deltacache.flush() // Redundant
deltaclone.flush() // Redundant

cachemgr.shutdown()


The additional flush operations are now redundant; the program works even if you remove those statements.

Wednesday, September 2, 2009

How to download the response, even if you get an HTTP 500

When we connect to an HTTP URL from Java code, we tend to check only the status code and proceed based on its value. We're usually interested in handling only the HTTP success code 200; any other code we simply report as a 'generic error'.

But there are times when you want to see exactly what the server returned for your request, for diagnostic purposes. For example, a monitoring server may want to know not just which error code the server returned, but also the details of what happened on the server side. I've written a simple Groovy script which uses the Commons HttpClient and core modules from Apache to do this task for me.



import org.apache.commons.httpclient.HttpClient
import org.apache.commons.httpclient.methods.GetMethod

client = new HttpClient()
method = new GetMethod("http://10.11.12.48:8004/monitor")
def statusCode = client.executeMethod(method)

println "Status code is : ${statusCode}"

// Read the response body.
byte[] responseBody = method.getResponseBody()

// Deal with the response.
// Use caution: ensure correct character encoding and that it is not binary data.
println new String(responseBody)

You should keep the following jars on the classpath to run this program successfully.
  1. commons-httpclient.jar
  2. commons-codec.jar
  3. commons-logging.jar

And of course, you can change the URL to whichever value you want to test out other error codes.
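The same trick can be done with nothing but the JDK, which is worth knowing because plain HttpURLConnection throws from getInputStream() on 4xx/5xx codes; the error body has to be read from getErrorStream() instead. A self-contained sketch (the /monitor endpoint here is a throwaway local stub standing in for the real monitoring URL):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class ErrorBodyDemo {

    // Reads the body whether the server answered 200 or 500. On error codes,
    // getInputStream() throws, so the body comes from getErrorStream().
    static String fetchBody(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        int status = conn.getResponseCode();
        InputStream in = (status >= 400) ? conn.getErrorStream() : conn.getInputStream();
        return status + ":" + new String(in.readAllBytes());
    }

    public static void main(String[] args) throws Exception {
        // Throwaway local endpoint that always fails with a 500 and a body.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/monitor", ex -> {
            byte[] body = "NullPointerException in handler".getBytes();
            ex.sendResponseHeaders(500, body.length);
            ex.getResponseBody().write(body);
            ex.close();
        });
        server.start();
        String url = "http://localhost:" + server.getAddress().getPort() + "/monitor";
        System.out.println(fetchBody(url));
        server.stop(0);
    }
}
```

Commons HttpClient hides this distinction for you (getResponseBody works regardless of status), which is one reason the Groovy script above is so short.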

Most wanted features in Ehcache

In our company, we finally decided to move forward and adopt Ehcache as our caching provider. The integration presented me with great challenges and forced me to learn quite a bit about Ehcache before we took the decision.

Ehcache, as an API, is well tested and perfectly capable of holding up under large loads. What's troubling me now is monitoring the caching activity that goes on under the hood. In most cases we don't have to worry about what's happening inside Ehcache, but there are times when we want to know very specific things, such as...

  • How far has the replication reached ?
  • Have all the servers (copies of Ehcache) in the cluster acknowledged the replication event successfully ?
  • How do we identify replication failures ?
Ehcache currently doesn't support event handling at this granular a level. Thanks to its extensible architecture, we can still write our own implementation classes to introduce this level of granularity. But, as you know, in the world of open source every piece of code you write is worthless if some super-smart guy has already written it before.

One other thing I got stuck on is maintaining coherency between the database and the cache.

The caching solution we're looking at requires that the database be the 'master' of all the caches: we must make sure the database has all the latest information, and at the same time the data must be available, up to date, to all the runtime components that take hits from users in real time.

  • At any given time, how do I know whether the data cached in Ehcache is consistent with the database ?
Has anyone encountered this kind of situation before ? How did you handle it ?

Monday, August 31, 2009

Syntax highlighting of blogger entries

When I was first introduced to blogging, I came across hundreds of blogs, and after drawing motivation from them I started my own. But after all this time, the one thing that bothered me was that my blog didn't look as cool and trendy as many other blogs out there. I finally found the answers...
  • Themes and Layouts.
  • Third party Scripts.

Themes and Layouts

Blogger allows you to customize the theme and layout of your blog using a plain (almost) HTML-based template. I always figured it would take me light years to come up with my own template. But luckily I don't have to put myself through such torment after all; there are thousands of templates out there waiting to be discovered. I found this site very useful, with really cool themes sorted into several categories. Now I have a limitless number of templates to choose from depending on my mood...

Third party Scripts

Since I can edit the template XML as I wish, I also went searching for a code syntax highlighter to decorate the code snippets in my blog entries. I had missed having one a lot, as it certainly makes a difference in presenting code. There is, however, an open source solution available to help us out here. Follow the guidelines in this blog and you'll have your personal code highlighter in no time.

These tools are definitely going to help me continue blogging with more ease and elegance.

Monday, June 29, 2009

Experimenting Terracotta

Recently, I've been looking into several caching solutions for our product. After a little research we concluded that we would use Ehcache for our caching needs. But given that we have a cluster of servers communicating with each other, we also needed to look for a clustering solution.

Luckily, Ehcache also supports clustering in its own way using either

  1. RMI
  2. JMS
  3. JGroups
All these methods work on the concept of replication: Ehcache keeps a copy on each cache node and replays every mutating cache operation on every node in the cluster. Even though this solves the clustering problem to some extent, it's inefficient in the sense that I have a copy of the cache everywhere, and I have to figure out how to keep things consistent while still scaling to caches of any size.

Alternatives to the approach mentioned above are also available; see here.

However, what I was looking for is a true clustered solution, where a single copy of the cache is visible to all the nodes in the cluster and, at the same time, I can scale the cache to whatever size I want at runtime.

Terracotta is the answer to this question.

The installation worked like a breeze, with absolutely no issues on Windows. But I had to download the generic tar.gz file instead of an installer jar to install it on Solaris.

I used the book "The Definitive Guide to Terracotta" as a guide for my quest, and I'd recommend everyone read it for a better understanding of how Terracotta works internally. You can read it online (a limited version, of course) at Google Books.

The book has a HelloClusteredWorld example which explains the basic workings of Terracotta using a simple Java program. It works in a straightforward way, so I won't repeat it here. Instead, I want to show another experiment that I conducted on my own, using the Ehcache integration module (the Ehcache TIM for Terracotta).

Following is the sample Groovy program that I used for this test.


import net.sf.ehcache.Element
import java.io.InputStreamReader
import net.sf.ehcache.CacheManager

url = getClass().getResource("ehcache-config.xml")
cacheMgr = new CacheManager(url)
cache = cacheMgr.getCache("userCache")
reader = new BufferedReader(new InputStreamReader(System.in))

println "Current cache size is : ${cache.getSize()}"

while (true) {
    print "What do you want to do ? "
    def choice = reader.readLine()
    switch (choice) {
        case "flush":
            println "Flushing the cache."
            cache.flush()   // flush the cache itself; CacheManager has no flush()
            break
        case "add":
            print "Enter the key element you want to add : "
            def key = reader.readLine()
            print "Enter the value element you want to add : "
            def val = reader.readLine()
            cache.put(new Element(key, val))
            break
        case "remove":
            print "Enter key of the element to remove : "
            cache.remove(reader.readLine())
            break   // was missing, causing fall-through into "rall"
        case "rall":
            println "Removing everything !"
            cache.removeAll()
            break
        case "size":
            println "Current cache size is ${cache.getSize()}"
            break
        default:
            println "I don't understand what you're saying ? "
            System.exit(0)
    }
}
The purpose of this program is simple: it initializes an Ehcache through its CacheManager and performs CRUD operations on it. Without clustering, this program works in a straightforward way, waiting for the user's input and performing the corresponding action.

Following is the Ehcache XML configuration to use:


<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:noNamespaceSchemaLocation="ehcache.xsd">

    <defaultCache
        maxElementsInMemory="10000"
        eternal="false"
        timeToIdleSeconds="120"
        timeToLiveSeconds="120"
        overflowToDisk="true"
        diskSpoolBufferSizeMB="30"
        maxElementsOnDisk="10000000"
        diskPersistent="false"
        diskExpiryThreadIntervalSeconds="120"
        memoryStoreEvictionPolicy="LRU"/>

    <cache name="userCache"
        maxElementsInMemory="10"
        eternal="true"
        overflowToDisk="true"
        diskSpoolBufferSizeMB="20"
        timeToIdleSeconds="300"
        timeToLiveSeconds="600"
        diskPersistent="false"
        memoryStoreEvictionPolicy="LFU">
    </cache>

</ehcache>


Now I tried to convert this program into a cluster-aware program using Terracotta, with the following tc-config.xml file:



<?xml version="1.0" encoding="UTF-8"?>
<!-- All content copyright Terracotta, Inc., unless otherwise indicated. All rights reserved. -->
<!--
    This is a Terracotta configuration file that has been pre-configured
    for use with DSO. All classes are included for instrumentation,
    and all instrumented methods are write locked.

    For more information, please see the product documentation.
-->
<tc:tc-config xmlns:tc="http://www.terracotta.org/config"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="http://www.terracotta.org/schema/terracotta-4.xsd">
    <servers>
        <!-- Tell DSO where the Terracotta server can be found. -->
        <server host="localhost">
            <data>%(user.home)/terracotta/server-data</data>
            <logs>%(user.home)/terracotta/server-logs</logs>
            <dso>
                <persistence>
                    <mode>permanent-store</mode>
                </persistence>
            </dso>
        </server>
    </servers>

    <!-- Tell DSO where to put the generated client logs -->
    <clients>
        <logs>%(user.home)/terracotta/client-logs</logs>
        <modules>
            <module name="tim-ehcache-1.4.1" version="1.3.2"/>
        </modules>
    </clients>

    <application>
        <dso>
            <roots>
                <root>
                    <field-name>SubscriberCreator.cacheMgr</field-name>
                </root>
                <root>
                    <field-name>SubscriberCreator.cache</field-name>
                </root>
            </roots>
            <!-- Start by including just the classes you expect to get added to the shared
                 graph. These typically include domain classes and shared data structures.
                 If you miss classes, Terracotta will throw NonPortableObjectExceptions
                 telling you more about what needs to be added. -->
            <instrumented-classes>
                <include>
                    <class-expression>SubscriberCreator</class-expression>
                </include>
            </instrumented-classes>
        </dso>
    </application>
</tc:tc-config>


But when I ran the program, I got this error:

Starting Terracotta client...
2009-07-01 12:23:44,758 INFO - Terracotta 3.0.1, as of 20090514-130552 (Revision 12704 by cruise@su1
0mo5 from 3.0)
2009-07-01 12:23:45,476 INFO - Configuration loaded from the file at 'd:\mPortal\Workspace_research\
HelloClusteredWorld\tc-config.xml'.
2009-07-01 12:23:45,711 INFO - Log file: 'C:\Documents and Settings\Admin\terracotta\client-logs\ter
racotta-client.log'.
2009-07-01 12:23:49,320 INFO - Connection successfully established to server at 127.0.0.1:9510
2009-07-01 12:23:49,570 WARN - The root expression 'SubscriberCreator.cacheMgr' meant for the class
'SubscriberCreator' has no effect, make sure that it is a valid expression and that it is spelled co
rrectly.
2009-07-01 12:23:49,570 WARN - The root expression 'SubscriberCreator.cache' meant for the class 'Su
bscriberCreator' has no effect, make sure that it is a valid expression and that it is spelled corre
ctly.
com.tc.exception.TCNonPortableObjectError:
*******************************************************************************
Attempt to share an instance of a non-portable class by assigning it to a root. This unshareable
class is a JVM- or host machine-specific resource. Please ensure that instances of this class
don't enter the shared object graph.

For more information on this issue, please visit our Troubleshooting Guide at:
http://terracotta.org/kit/troubleshooting

Thread : main
JVM ID : VM(22)
Non-portable root name: ALL_CACHE_MANAGERS
Unshareable class : java.util.concurrent.CopyOnWriteArrayList

Action to take:

1) Change your application code
* Ensure that no instances or subclass instances of java.util.concurrent.CopyOnWriteArrayList
are assigned to the DSO root: ALL_CACHE_MANAGERS


*******************************************************************************

at com.tc.object.ClientObjectManagerImpl.throwNonPortableException(ClientObjectManagerImpl.java:786)
at com.tc.object.ClientObjectManagerImpl.checkPortabilityOfRoot(ClientObjectManagerImpl.java:690)
at com.tc.object.ClientObjectManagerImpl.lookupOrCreateRoot(ClientObjectManagerImpl.java:656)
at com.tc.object.ClientObjectManagerImpl.lookupOrCreateRoot(ClientObjectManagerImpl.java:642)
at com.tc.object.bytecode.ManagerImpl.lookupOrCreateRoot(ManagerImpl.java:321)
at com.tc.object.bytecode.ManagerImpl.lookupOrCreateRoot(ManagerImpl.java:300)
at com.tc.object.bytecode.ManagerUtil.lookupOrCreateRoot(ManagerUtil.java:96)
at net.sf.ehcache.CacheManager.__tc_setALL_CACHE_MANAGERS(CacheManager.java)
at net.sf.ehcache.CacheManager.(CacheManager.java:61)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:164)
at SubscriberCreator.class$(SubscriberCreator.groovy)
at SubscriberCreator.$get$$class$net$sf$ehcache$CacheManager(SubscriberCreator.groovy)
at SubscriberCreator.run(SubscriberCreator.groovy:9)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:86)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:234)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1062)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:893)
at org.codehaus.groovy.runtime.InvokerHelper.invokePogoMethod(InvokerHelper.java:744)
at org.codehaus.groovy.runtime.InvokerHelper.invokeMethod(InvokerHelper.java:727)
at org.codehaus.groovy.runtime.InvokerHelper.runScript(InvokerHelper.java:383)
at org.codehaus.groovy.runtime.InvokerHelper$runScript.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:40)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:117)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:129)
at SubscriberCreator.main(SubscriberCreator.groovy)
Exception in thread "main" com.tc.exception.TCNonPortableObjectError: (the same error and stack trace are then printed a second time when rethrown from main)



Okay, so there are two problems in the trace I pasted above. The first is a warning from Terracotta saying that it is unable to find, in the SubscriberCreator class file, the member variables I declared in the XML file:

2009-07-01 12:23:49,570 WARN - The root expression 'SubscriberCreator.cacheMgr' meant for the class
'SubscriberCreator' has no effect, make sure that it is a valid expression and that it is spelled correctly.
2009-07-01 12:23:49,570 WARN - The root expression 'SubscriberCreator.cache' meant for the class 'Su
bscriberCreator' has no effect, make sure that it is a valid expression and that it is spelled correctly.

The reason for this warning is that the class file I used to launch the Terracotta client was generated from a Groovy program. When the Groovy compiler compiles a script, the variables you declare at the script level do not become member variables of the generated class; they are held internally as entries in the script's binding (effectively keys in a map). So there are no actual fields named cacheMgr and cache for Terracotta to find.

Instead of worrying about fixing this, I just converted the Groovy program into a Java program, and the problem vanished. Following is the equivalent Java program:


import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class SubscriberCreator {

    private URL url;
    private CacheManager cacheMgr;
    private Cache cache;

    public SubscriberCreator() {
        url = getClass().getResource("ehcache-config.xml");
        cacheMgr = new CacheManager(url);
        cache = cacheMgr.getCache("userCache");
    }

    public static void main(String[] args) throws IOException {
        SubscriberCreator creator = new SubscriberCreator();
        // one shared reader; creating a new one per prompt can swallow buffered input
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        System.out.println("Current cache size is : " + creator.cache.getSize());

        do {
            System.out.println("\nWhat do you want to do ? \n\t 1) Add \n\t 2) remove \n\t 3) flush \n\t 4) clear cache \n\t 5) get \n\t 6) size");
            System.out.print("Enter your choice : ");
            String choice = in.readLine();

            if (choice == null) {
                System.out.println(" Bye Bye ");
                System.exit(0);
            }

            int choiceint;
            try {
                choiceint = Integer.parseInt(choice);
            } catch (NumberFormatException e) {
                System.out.println("Bye bye ");
                break;
            }

            String key;
            String val;
            switch (choiceint) {
                case 1:
                    System.out.print("Enter the key element you want to add : ");
                    key = in.readLine();
                    System.out.print("Enter the value element you want to add : ");
                    val = in.readLine();
                    creator.cache.put(new Element(key, val));
                    break;
                case 2:
                    System.out.print("Enter the key element you want to remove : ");
                    key = in.readLine();
                    creator.cache.remove(key);
                    System.out.println("key '" + key + "' removed successfully !");
                    break;
                case 3:
                    System.out.println("Flushing the cache now !");
                    creator.cache.flush();
                    System.out.println("Flushed the cache successfully ");
                    break;
                case 4:
                    System.out.println("Clearing the cache !");
                    creator.cache.removeAll();
                    System.out.println("Cleared the cache successfully ");
                    break;
                case 5:
                    System.out.print("Enter the key element you want to retrieve : ");
                    key = in.readLine();
                    System.out.println("Element requested is : " + creator.cache.get(key));
                    break;
                case 6:
                    System.out.println("Cache size is : " + creator.cache.getSize());
                    break;
                default:
                    System.out.println("I don't understand your request..");
                    System.exit(0);
            }
        } while (true);
    }
}



Coming to the second half of the problem: it complains about a JDK class, java.util.concurrent.CopyOnWriteArrayList. It appears that Terracotta 3.1 and earlier versions cannot instrument all the classes in the java.util.concurrent package.

I later realized that, starting from ehcache 1.6 beta3, Ehcache migrated from a third-party concurrency library to the JDK's concurrent package. So I had to go back down the Ehcache release line and retry the same example with the ehcache 1.5 stable release. It works like a charm!


What did I learn from this exercise? Terracotta is well-tested software, but there's definitely a learning curve involved in understanding what to share, monitor, and instrument. Without that understanding, the whole concept looks unclear and is more likely to confuse you than do any good.

Here are the minimum requirements for Terracotta 3.1:

  • Java version : JDK 1.5+
  • Ehcache TIM version : 1.4.1 or 1.3; both work fine.
  • Ehcache core version : ehcache 1.5.x (1.6 is not yet supported; I hope the new version of Terracotta fixes these shortcomings).
  • Operating systems : Windows XP or higher, and Solaris (I tested only on these; I see no reason it wouldn't work on other operating systems).