Was Your Twitter Search Super Fast? Twitter turns to JBoss Netty to Power Its Search Performance
April 26, 2011
By JBoss Team
We love hearing about organizations doing innovative things with Red Hat technologies, which is why we were excited to see this blog post from Twitter's engineering team on how it used Netty, a JBoss Community project, to build Blender, a new Java server front-end for its real-time search engine.
Operating one of the most heavily trafficked search engines on the web (more than one billion queries per day, according to Twitter), Twitter set out to build a new search front-end that would handle its “ever-growing traffic, improve the end-user latency and availability of our service, and enable rapid development of new search features.” JBoss Netty, a client-server framework for building high-performance, scalable network applications, turned out to be the answer.
As the Twitter engineering team explains: Blender is a Thrift and HTTP service built on Netty, a highly scalable NIO client-server library written in Java that enables quick and easy development of a variety of protocol servers and clients. We chose Netty over some of its competitors, like Mina and Jetty, because it has a cleaner API, better documentation and, more importantly, because several other projects at Twitter are already using this framework. To make Netty work with Thrift, we wrote a simple Thrift codec that decodes the incoming Thrift request from Netty’s channel buffer when it is read from the socket, and encodes the outgoing Thrift response when it is written to the socket.
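Twitter's actual Thrift codec isn't shown in the post, but the core pattern of any such codec is framing: pulling one complete message out of a stream of bytes that may arrive in pieces. The sketch below illustrates that pattern with plain JDK types (the `FrameCodec` class and its method names are ours for illustration, not Netty or Thrift APIs).

```java
import java.nio.ByteBuffer;

// Minimal sketch of a length-prefixed frame codec, the same shape a
// Netty Thrift codec takes when extracting a complete message from the
// channel buffer. Not Twitter's code; names are illustrative.
public class FrameCodec {

    // Decode one frame if the buffer holds a complete [length][payload]
    // record; otherwise return null and leave the buffer untouched, as
    // a streaming decoder must when only part of a message has arrived.
    public static byte[] decode(ByteBuffer in) {
        if (in.remaining() < 4) return null;   // length prefix incomplete
        in.mark();
        int length = in.getInt();
        if (in.remaining() < length) {         // partial payload: wait for more bytes
            in.reset();
            return null;
        }
        byte[] payload = new byte[length];
        in.get(payload);
        return payload;
    }

    // The mirror image: prepend the payload length so the peer's
    // decoder knows where the frame ends.
    public static ByteBuffer encode(byte[] payload) {
        ByteBuffer out = ByteBuffer.allocate(4 + payload.length);
        out.putInt(payload.length);
        out.put(payload);
        out.flip();
        return out;
    }
}
```

In Netty, this decode-or-wait logic is what a frame decoder handler does before the decoded message is passed further down the pipeline.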
Netty defines a key abstraction, called a Channel, to encapsulate a connection to a network socket, providing an interface for a set of I/O operations like read, write, connect, and bind. All channel I/O operations are asynchronous: any I/O call returns immediately with a ChannelFuture instance that notifies the caller when the requested I/O operation has succeeded, failed, or been canceled.
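The future-based pattern described above can be illustrated with the JDK's own CompletableFuture, which behaves analogously to Netty's ChannelFuture (this is an analogy only; the `write` method below is ours, not a Netty API).

```java
import java.util.concurrent.CompletableFuture;

// Sketch of asynchronous I/O with futures: the call returns at once,
// and the outcome is delivered to the caller later. Illustrative only;
// AsyncWrite is not part of Netty.
public class AsyncWrite {

    // The "write" runs on another thread; the caller immediately gets a
    // future it can attach listeners to, just as with a ChannelFuture.
    public static CompletableFuture<Integer> write(byte[] data) {
        return CompletableFuture.supplyAsync(() -> data.length); // pretend all bytes were written
    }
}
```

A caller never blocks waiting for the socket: it attaches a listener, e.g. `write(data).whenComplete((n, err) -> { /* handle success or failure */ });`, which is what lets a small number of threads serve many concurrent connections.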
When a Netty server accepts a new connection, it creates a new channel pipeline to process it. A channel pipeline is nothing but a sequence of channel handlers that implement the business logic needed to process the request. In the next section, we show how Blender maps these pipelines to query-processing workflows.
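The pipeline idea above can be sketched with plain JDK types: a request flows through an ordered list of handlers, each performing one step of the processing. This mirrors the shape of a Netty ChannelPipeline, but the `Pipeline` class and its methods here are illustrative, not Netty's API.

```java
import java.util.List;
import java.util.function.UnaryOperator;

// Sketch of a channel-pipeline-like structure: handlers are applied in
// order to a message, the way events traverse the handlers installed
// on a Netty channel's pipeline. Illustrative names, not Netty types.
public class Pipeline {
    private final List<UnaryOperator<String>> handlers;

    public Pipeline(List<UnaryOperator<String>> handlers) {
        this.handlers = handlers;
    }

    // Pass the request through every handler in sequence; each handler
    // transforms the message and hands it to the next one.
    public String process(String request) {
        String msg = request;
        for (UnaryOperator<String> h : handlers) {
            msg = h.apply(msg);
        }
        return msg;
    }
}
```

For example, `new Pipeline(List.of(String::trim, String::toUpperCase)).process("  hi ")` runs both steps in order; in Blender's case, each pipeline stage would instead be one step of a query-processing workflow.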
Following the launch of Blender, our 95th percentile latencies were reduced by 3x from 800ms to 250ms and CPU load on our front-end servers was cut in half. We now have the capacity to serve 10x the number of requests per machine. This means we can support the same number of requests with fewer servers, reducing our front-end service costs.