ATG Repository Distributed Caches, Part 2

Alternative 1

In part 1 of this series I briefly described how ATG’s distributed cache invalidation works and the options it provides. I then described a serious production problem my company was encountering with the distributed cache mode in large clusters.

As I mentioned in part 1, ATG’s newest cache-mode option, distributedJMS, appeared to offer a good alternative to the use of TCP connections for distributing cache invalidation events. The main drawback of this approach is that, by default, it is based on ATG SQLJMS, which offers only polled, persistent destinations. If you are using ATG’s DPS module, the configuration for distributedJMS cache mode is already in place. Otherwise you can follow the configuration examples in the ATG Repository User’s Guide.
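Selecting this mode is just a cache-mode attribute on the item descriptor in the repository definition file; a minimal fragment (the item-descriptor name here is illustrative):

```
<!-- userRepository.xml (item-descriptor name is illustrative) -->
<gsa-template>
  <item-descriptor name="user" cache-mode="distributedJMS">
    <!-- tables, properties, etc. -->
  </item-descriptor>
</gsa-template>
```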

A very important property, gsaInvalidatorEnabled, of component /atg/dynamo/Configuration must be set to true for distributedJMS cache invalidation to work.
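In a configuration layer that looks like:

```
# /atg/dynamo/Configuration.properties (in your localconfig or module config layer)
gsaInvalidatorEnabled=true
```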

Figure: distributedJMS event distribution

Distribution of events via JMS is implemented with a PatchBay message source/sink pair, which gives us the opportunity to override the definitions and use a third-party JMS provider. The advantage of using a real JMS provider is that message distribution is event-driven rather than polled, and in-memory destinations may be used, avoiding disk/database I/O. The figure above depicts how JMS is used in event distribution. For each item descriptor defined with a cache mode of distributedJMS, ATG’s repository routes all invalidation events to a PatchBay message source defined by the class atg.adapter.gsa.invalidator.GSAInvalidatorService. The Nucleus component used by the repository may be set via the invalidatorService property on GSARepository. By default, this component is located at /atg/dynamo/service/GSAInvalidatorService, but you can place it anywhere you like.
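If you do relocate the component, point each repository at it through that property. A hypothetical repository component’s properties file might look like:

```
# MyRepository.properties (hypothetical repository component)
$class=atg.adapter.gsa.GSARepository
invalidatorService=/atg/dynamo/service/GSAInvalidatorService
```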

The actual cache invalidation takes place in the message sink, which is defined by the ATG class atg.adapter.gsa.invalidator.GSAInvalidationReceiver. This component receives messages of the following types, resolves the name of the supplied repository component, and issues an invalidation request for the appropriate item descriptor/repository item(s).

  • atg.adapter.gsa.invalidator.GSAInvalidationMessage – defines a single repository item that should be flushed from the cache.
  • atg.adapter.gsa.invalidator.MultiTypeInvalidationMessage – defines one or more repository items that should be flushed from the cache. All the items defined in this message must belong to the same repository.

The default PatchBay configuration for these components looks like the following:

<dynamo-message-system>
  <patchbay>
 <!-- DAS Messages -->
    <message-source>
       <nucleus-name>/atg/dynamo/service/GSAInvalidatorService</nucleus-name>
       <output-port>
         <port-name>GSAInvalidate</port-name>
         <output-destination>
            <provider-name>sqldms</provider-name>
            <destination-name>sqldms:/sqldms/DASTopic/GSAInvalidator</destination-name>
            <destination-type>Topic</destination-type>
         </output-destination>
       </output-port>
    </message-source>
    <message-sink>
       <nucleus-name>/atg/dynamo/service/GSAInvalidationReceiver</nucleus-name>
       <input-port>
         <port-name>GSAInvalidate</port-name>
         <input-destination>
           <provider-name>sqldms</provider-name>
           <destination-name>sqldms:/sqldms/DASTopic/GSAInvalidator</destination-name>
           <destination-type>Topic</destination-type>
         </input-destination>
       </input-port>
    </message-sink>
  </patchbay>
</dynamo-message-system>

OK, so my first thought was to modify this configuration to use JBoss as the JMS provider. I considered one of the fine stand-alone JMS providers like Fiorano or Sonic, and I think these would have worked just fine. We are currently still running on DAS but expect to move to JBoss over the next year, so using JBoss seemed like a natural fit. I promptly overrode the above configuration like this:

<dynamo-message-system>
  <patchbay>
    <provider>
      <provider-name>JBoss</provider-name>
      <topic-connection-factory-name>ConnectionFactory</topic-connection-factory-name>
      <queue-connection-factory-name>ConnectionFactory</queue-connection-factory-name>
      <xa-topic-connection-factory-name>XAConnectionFactory</xa-topic-connection-factory-name>
      <xa-queue-connection-factory-name>XAConnectionFactory</xa-queue-connection-factory-name>
      <supports-transactions>true</supports-transactions>
      <supports-xa-transactions>true</supports-xa-transactions>
      <username></username>
      <password></password>
      <client-id></client-id>
      <initial-context-factory>/my/utils/jms/J2EEInitialContextFactory</initial-context-factory>
    </provider>

    <message-source xml-combine="replace">
      <nucleus-name>/atg/dynamo/service/GSAInvalidatorService</nucleus-name>
      <output-port>
        <port-name>GSAInvalidate</port-name>
        <output-destination>
          <provider-name>JBoss</provider-name>
          <destination-name>/topic/GSAInvalidator</destination-name>
          <destination-type>Topic</destination-type>
        </output-destination>
      </output-port>
    </message-source>

    <message-sink xml-combine="replace">
      <nucleus-name>/atg/dynamo/service/GSAInvalidationReceiver</nucleus-name>
      <input-port>
        <port-name>GSAInvalidate</port-name>
        <input-destination>
          <provider-name>JBoss</provider-name>
          <destination-name>/topic/GSAInvalidator</destination-name>
          <destination-type>Topic</destination-type>
        </input-destination>
      </input-port>
    </message-sink>
  </patchbay>
</dynamo-message-system>

Notice that you have to define a component that obtains an initial context from the third-party JMS provider. ATG’s documentation covers this in detail, so I won’t go into it here.

After setting up this configuration with a properly configured JBoss server, I was distributing invalidation events and things were looking great. That’s always the moment a nasty problem arises, and this situation was no different.

I had tested this configuration but, of course, for our production environment we wanted to run a cluster of JBoss instances to provide high availability. The complication I ran into is that JBoss supports two different JMS providers:

  1. JBossMQ – offers a highly available singleton JMS service and is the out-of-the-box configuration for JBoss 4.2 and all earlier versions. This implementation supports Java 1.4.
  2. JBoss Messaging – offers a highly available distributed message service but requires Java 1.5+. It may be configured in JBoss 4.2 and will be the out-of-the-box configuration in the next JBoss release.

We currently run ATG 7.2 under Java 1.4, and I wanted to keep our JBoss servers at the same level if possible, so I decided to use JBoss 4.0.5 and JBossMQ. The first problem I encountered was that even though JBossMQ supports high availability, it does so only with the assistance of its clients. JBossMQ expects every client to register a JMS ExceptionListener that handles connection failures by reopening the connection and re-creating all JMS objects when a failure occurs. Clearly this wasn’t going to work for ATG PatchBay – I needed transparent failover.

My next approach was to use JBoss 4.2 with JBoss Messaging. This required the JBoss servers to run on Java 1.5, but I figured I could live with that until we moved to ATG 2007.1. Of course this didn’t work: the JBoss 4.2 client jars were compiled for Java 1.5, and all my ATG instances were running under 1.4. This was starting to look like more trouble than it was worth, but first I ran all the JBoss client jars through Retroweaver and deployed them under Java 1.4. This looked promising until I connected to the JBoss instance and pulled back an InitialContext: the stub that was returned required Java 1.5. I might have been able to work around this, but I gave up on JBoss 4.2.

Now, a reasonable person would have given up on JBoss at this point and perhaps purchased SonicMQ. Instead I set about writing a JMS mapping layer that would sit between PatchBay and JBossMQ and perform transparent failover. I used a decorator pattern to wrap every JBoss JMS class with one of my own that knew how to recreate itself in the event of a failover. This wasn’t difficult, but it involved a fair amount of coding.
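The decorator idea can be sketched as follows. This is a minimal illustration using hypothetical stand-in interfaces, not the actual layer: the real wrappers decorate the javax.jms types (Connection, Session, MessageProducer, and so on), and all names below are invented for the example.

```java
// Hypothetical stand-in for a wrapped JMS object; the real layer
// decorates javax.jms.Connection, Session, MessageProducer, etc.
interface Publisher {
    void publish(String message) throws Exception;
}

/** Decorator that transparently rebuilds its delegate after a failure. */
class FailoverPublisher implements Publisher {
    /** Knows how to reconnect and recreate the underlying JMS object. */
    interface Factory {
        Publisher create() throws Exception;
    }

    private final Factory factory;
    private Publisher delegate;

    FailoverPublisher(Factory factory) throws Exception {
        this.factory = factory;
        this.delegate = factory.create();
    }

    public void publish(String message) throws Exception {
        try {
            delegate.publish(message);
        } catch (Exception connectionFailed) {
            // The connection died: recreate the delegate and retry once,
            // so the caller (PatchBay, in the real layer) never notices.
            delegate = factory.create();
            delegate.publish(message);
        }
    }
}
```

The key point is that each wrapper holds enough context (its factory) to rebuild the object it decorates, so a connection failure is absorbed inside the layer rather than surfacing to PatchBay.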

I actually got this approach working, and it appeared to perform very well, but then I had another idea and set this option aside for the time being.

By the way, the JBossMQ transparent failover layer is not specific to PatchBay; if anyone has a need for it, I can probably arrange to give you the code.

That wraps up part 2 of this series. Stay tuned for my second alternative presented in part 3.
