As many of you have likely heard, the highly anticipated JBoss World 2011 keynote from May was capped with a live demo that sparked a lot of discussion in the user community. We encourage you to check out the recording of the keynote and demo available online (jump to 35:17 for the live demo) and see for yourself. Read on for a breakdown of the demo and what we believe contributed to its success. Ultimately, we think you'll agree that the "cool factors" were the demo's interactive element and the visual representation of tweets coming from participants' mobile devices. Instead of relying on an obscure back-end application, we were able to tie together the JBoss enterprise middleware technologies to address a mainstream consumer use case.
To kick off the demo, the audience was directed to tweet using the hashtags #JBW or #Infinispan from their mobile devices. A frenzy of tweeting quickly ensued, captured visually in two grids, named Grid-A and Grid-B, which became the basis for the visual demo.
As Twitter data started to populate the first grid (Grid-A), we fired up a second grid (Grid-B) consisting of 8 nodes. These nodes were configured with asynchronous distribution and two data owners, and for the demo they ran on very small, cheap plugtop computers. These plugtops - GuruPlugs - are constrained devices with 512 MB of RAM and a 1 GHz ARM processor.
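A configuration along these lines would produce the setup described above. The element names follow the Infinispan 5.x XML schema of that era; the exact file used in the demo wasn't published, so treat this as a best-effort reconstruction:

```xml
<infinispan>
   <namedCache name="gridB">
      <!-- Distribution mode with asynchronous replication -->
      <clustering mode="distribution">
         <async/>
         <!-- Each entry is kept on two nodes, so losing one node loses no data -->
         <hash numOwners="2"/>
      </clustering>
   </namedCache>
</infinispan>
```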
Cache listeners then fed an HTML5-based webapp that visualized the action within the grid - pushing events to web browsers, which rendered the "spinning spheres" using HTML5's canvas tag. The data was animated to illustrate the movement within a grid of Infinispan nodes.
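The listener-driven flow can be sketched in plain Java. The class below is a stand-in, not Infinispan code: in the real demo, Infinispan delivered these callbacks to methods annotated with @Listener and @CacheEntryCreated, and the callback pushed each event out to the browsers.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiConsumer;

// Stand-in for the demo's event flow: a cache that notifies registered
// listeners on every put, the way Infinispan fires events to methods
// annotated with @Listener/@CacheEntryCreated. In the demo the callback
// pushed each event to browsers, which animated it on an HTML5 canvas.
class ListeningCache<K, V> {
    private final Map<K, V> store = new HashMap<>();
    private final List<BiConsumer<K, V>> listeners = new ArrayList<>();

    void addListener(BiConsumer<K, V> listener) {
        listeners.add(listener);
    }

    void put(K key, V value) {
        store.put(key, value);
        // Fire an "entry created/modified" notification to every listener.
        for (BiConsumer<K, V> l : listeners) {
            l.accept(key, value);
        }
    }

    V get(K key) {
        return store.get(key);
    }
}
```

A listener registered this way would, in the demo, hand each event to the layer pushing updates over HTTP to the canvas-rendering page.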
These sub-iPhone devices were running a real data grid!
The purpose of this? To demonstrate the extremely low footprint and overhead Infinispan imposes on your hardware. A server running JBoss Application Server hosted the visualization webapp rendering the contents of Grid-B, letting people "see" the data in both grids.
We then fired up Drools to mine the contents of Grid-A and send them to Grid-B, applying rules to select the interesting tweets - namely, the ones carrying the hashtag #JBW. With this in place, we invited the audience to participate by tweeting with the hashtag #JBW, plus the hashtag of their favorite JBoss project - e.g., #Infinispan. People were allowed to vote for more than one project, and the most prolific tweeter would win a prize. This sparked a frenzy of tweeting, reflected in the two grid visualizations.
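Stripped of the Drools machinery, the selection step boils down to a filter over Grid-A. Here is a plain-Java sketch, with ordinary maps standing in for the two Infinispan caches; the demo's actual DRL rules weren't published, so the condition below is a guess at their intent:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Plain-Java sketch of the Drools selection step: copy into Grid-B only
// the Grid-A tweets that carry the #JBW hashtag. Maps stand in for the
// two Infinispan caches; keys are tweet ids, values are tweet text.
class TweetSelector {
    static Map<String, String> selectInteresting(Map<String, String> gridA) {
        Map<String, String> gridB = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : gridA.entrySet()) {
            // The rule's condition, inlined: keep tweets tagged #JBW.
            if (e.getValue().toLowerCase().contains("#jbw")) {
                gridB.put(e.getKey(), e.getValue());
            }
        }
        return gridB;
    }
}
```

In the real setup, Drools evaluated such conditions continuously as new tweets arrived, rather than in a single batch pass.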
Jay Balunas of RichFaces built a TwitterStream app with live updates of these tweets for various devices - iPhones, iPads, Android phones and tablets, and of course desktop web browsers - grabbing data off Grid-B. Christian Sadilek and Mike Brock from the Errai team built an application visualizing popular tags as a tag cloud, again off Grid-B, making use of Errai to push events to the browser.
After simulating an attempt by Mark Proctor, project lead for Drools, to cheat the system with a script, we showed how to recover the correct votes: clear Grid-B, update the Drools rules to discard the cheat tweets, and let a cleaned-up stream of tweets flow back into Grid-B.
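The recovery amounts to three steps: clear the derived grid, tighten the filter, and replay from the source grid. A minimal plain-Java sketch, with maps standing in for the grids and a hypothetical cheat-detection predicate in place of the updated Drools rules:

```java
import java.util.Map;
import java.util.function.Predicate;

class GridRecovery {
    // Clears gridB, then repopulates it from gridA using an updated filter.
    static void replay(Map<String, String> gridA, Map<String, String> gridB,
                       Predicate<String> keep) {
        gridB.clear(); // step 1: drop the polluted derived data
        for (Map.Entry<String, String> e : gridA.entrySet()) {
            if (keep.test(e.getValue())) {           // step 2: updated rule
                gridB.put(e.getKey(), e.getValue()); // step 3: clean stream flows back
            }
        }
    }
}
```

Because Grid-A still held the raw stream, nothing was lost by wiping Grid-B; the derived view was simply rebuilt under the stricter rules.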
All applications, including Drools and the visualizations, used a Java Persistence API (JPA) interface to store and load the tweets. This was powered by an early preview of Hibernate OGM, which aims to expose any NoSQL store as a JPA persistence store while still providing some level of consistency. As Hibernate OGM is not yet feature complete, it relied on Hibernate Search to provide query capabilities via a Lucene index, using Hibernate Search's Infinispan integration to distribute that index across the grid.
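Wiring JPA to Hibernate OGM happens in persistence.xml. A sketch along these lines would do it - the provider class is Hibernate OGM's real JPA provider and the Hibernate Search property reflects its Infinispan-backed Lucene directory of that era, but the demo's actual configuration wasn't published, so take the details as an assumption:

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
   <persistence-unit name="tweets" transaction-type="JTA">
      <!-- Hibernate OGM's JPA provider -->
      <provider>org.hibernate.ogm.jpa.HibernateOgmPersistence</provider>
      <properties>
         <!-- Queries answered by Hibernate Search, with its Lucene index
              distributed across the grid via the Infinispan directory -->
         <property name="hibernate.search.default.directory_provider"
                   value="infinispan"/>
      </properties>
   </persistence-unit>
</persistence>
```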
We then demonstrated failover by inviting the winner on stage to brutally unplug a plugtop of his choice from Grid-B - that plugtop subsequently became his prize. Importantly, the webapps running off the grid never risked losing data, and Drools continued to transfer data from Grid-A into Grid-B.
In the end, this fairly simple setup of embeddable components and cheap hardware let us build, together with the audience, a surprisingly complex application with excellent failover and scalability properties.
How did we pull this off? In short, through the power of the portfolio and the breadth of its technologies: application server, data grid, business rules, and RichFaces. Infinispan was critical to the demo's success, storing the mass of real-time data that Drools mined off the Twitter stream.
After the demo, we did hear of a large commercial application using Infinispan and Drools in precisely this manner - except that instead of Twitter, the large data stream consisted of flight seat prices, changing dynamically and constantly, eventually rendered to the web pages of various travel sites - which, of course, weren't running on plugtops. So, as you can see, the example isn't completely artificial and can be replicated in real environments.