Jun 23, 2011 (02:06 PM EDT)
2 Ways To Tackle Really Big Data
Read the Original Article at InformationWeek
IBM Netezza on Wednesday announced a High Capacity Appliance aimed at really, really big data. We're talking petabytes, typical for long-term archives maintained for regulatory or compliance reasons. Infobright, meanwhile, has upgraded its column-store database, which promises superfast querying of machine-generated data at more routine volumes of less than 40 terabytes. Beyond these specific products, both vendors have answers for the extremes of capacity and speed.
The IBM Netezza High Capacity Appliance is an alternative to the vendor's standard TwinFin product. It boasts four times the data density of the TwinFin thanks to higher-capacity hard drives. It also has about 35% less processing power per rack (to keep costs down and create room for more storage). The appliance stores 500 terabytes per rack, and you can put together as many as 20 racks to handle as much as 10 petabytes of user-addressable data.
Who needs to query that much data? Telcos operating in many countries (India being one example) are required to keep call data records (CDRs) for as long as 10 years so law enforcement agencies can request relevant information. Government intelligence agencies and financial services subject to retention requirements often keep that much data around as well.
Fast querying is generally not important when you're retrieving records to meet regulatory requirements. Thus, the EMC, IBM Netezza, and Teradata high-capacity appliances all favor storage over speed. For example, an identical query will run about 2.5 times faster on the Netezza TwinFin than on that vendor's high-capacity appliance. The TwinFin, however, can't match the low cost per terabyte of the IBM Netezza High Capacity Appliance, which works out to less than $2,500 per terabyte, according to Netezza (less than a quarter the cost of the TwinFin).
Plenty of companies need both high capacity and super-fast querying. The likes of EMC, IBM Netezza, and Teradata would likely suggest the combination of their high-capacity appliances and one of their high-performance appliances. Yes, at the opposite end of the speed-versus-scale spectrum, Teradata and EMC both have pure-solid-state appliances (Teradata's being the Extreme Performance Appliance and EMC's being the High Performance EMC Data Computing Appliance). These products have less capacity but about 10 times the speed of each vendor's standard appliance.
IBM Netezza announced Wednesday that it will get in on this act sometime next year with an Ultra Performance appliance employing a combination of flash memory and RAM (a contrast with the solid-state disk drives used by Teradata and EMC). Having a high-performance appliance and a high-capacity appliance gives you the best of both worlds, but it's also no small investment.
Infobright is not an appliance vendor, and its database is designed to run on symmetric multiprocessor (SMP) servers rather than the massively parallel processing (MPP) architecture used by EMC, IBM, Teradata, and others. The approach is all about fast query speed, but most deployments are in the 1-to-10 terabyte range and top out at about 40 terabytes (because you can't scale out as you can with MPP). By exploiting links to Hadoop, however, Infobright says it can help companies affordably address both high-speed and high-scale needs.
Infobright gets its speed from its column-oriented architecture, which enables it to query selected attributes without wading through all the non-relevant data in each row that row-oriented databases (like EMC Greenplum, IBM DB2, Netezza, Oracle, Teradata) have to churn through.
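To make the row-store/column-store distinction concrete, here is a minimal sketch in Python (purely illustrative, not Infobright's engine — the table, field names, and data are invented). It shows the same records laid out both ways: a query over one attribute in a column layout touches only that attribute's list, while a row layout drags every field of every record along.

```python
# Hypothetical CDR-like records, purely for illustration.
rows = [
    {"caller": "555-0101", "duration": 120, "region": "east"},
    {"caller": "555-0102", "duration": 45,  "region": "west"},
    {"caller": "555-0103", "duration": 300, "region": "east"},
]

# Row-oriented layout: summing durations still walks whole records,
# so caller and region data come along for the ride.
row_total = sum(r["duration"] for r in rows)

# Column-oriented layout: each attribute is stored contiguously.
columns = {
    "caller":   [r["caller"] for r in rows],
    "duration": [r["duration"] for r in rows],
    "region":   [r["region"] for r in rows],
}

# The same query now reads exactly one column and nothing else.
col_total = sum(columns["duration"])

assert row_total == col_total == 465
```

Both layouts give the same answer; the difference is how much irrelevant data has to move past the query engine to get it, which is where the column store's speed advantage comes from.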
Column-oriented databases also excel at compression, particularly when dealing with highly repetitive machine-generated data. Where row-oriented databases average 2X to 4X compression, Infobright says it routinely gets 10X compression and peaks at 40X compression. That makes big data smaller, saving money on processing power and storage.
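Why repetitive machine-generated data compresses so well is easy to see with run-length encoding, one of the simplest schemes a column store can apply to a sorted or naturally repetitive column. This is a toy sketch with invented data, not Infobright's compression pipeline:

```python
from itertools import groupby

# Machine-generated status column: long runs of identical values
# (hypothetical data for illustration).
statuses = ["OK"] * 9000 + ["ERROR"] * 100 + ["OK"] * 900

def rle(values):
    """Collapse consecutive repeats into (value, run_length) pairs."""
    return [(v, len(list(g))) for v, g in groupby(values)]

encoded = rle(statuses)
# 10,000 values collapse to 3 (value, run-length) pairs.
assert encoded == [("OK", 9000), ("ERROR", 100), ("OK", 900)]
```

Real column-store compression is far more sophisticated, but the principle is the same: the more repetitive the column, the higher the ratio, which is why machine-generated data beats the 2X-4X typical of row stores by such a wide margin.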
Infobright 4.0 adds two features expressly for machine-generated data. First, a DomainExpert feature lets companies store repeating patterns of data that don't change, such as email addresses, URLs, and IP addresses. These three examples are included in a Web analytics DomainExpert set developed by Infobright and included in 4.0, but companies can add their own patterns as well, whether related to call data records, financial trading, or geospatial information. The query engine then has the brains to ignore this static data and query only the changing data. That saves query time because irrelevant data doesn't have to be decompressed and interrogated.
The second new feature that speeds analysis is Rough Query for Data Mining, an approach whereby in-memory metadata about each column and row is queried first to eliminate all data that's not relevant to the query. Once the relevant information is identified, the query engine issues a select query against only that relevant data. It can speed queries by as much as 20 times, according to Infobright, over the conventional approach of issuing a long-running query against the entire data set.
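The metadata-first idea can be sketched in a few lines of Python. This is an illustrative simplification, not Infobright's implementation: the block structure, min/max summaries, and range query are all assumptions made for the example.

```python
# Hypothetical data blocks, each carrying cheap in-memory metadata
# (min/max) summarizing the values stored inside it.
blocks = [
    {"min": 1,   "max": 50,  "values": list(range(1, 51))},
    {"min": 51,  "max": 100, "values": list(range(51, 101))},
    {"min": 101, "max": 150, "values": list(range(101, 151))},
]

def rough_query(blocks, lo, hi):
    """Consult block metadata first; only scan blocks that can match."""
    hits = []
    for b in blocks:
        if b["max"] < lo or b["min"] > hi:
            continue  # metadata alone rules this block out; its data is never read
        hits.extend(v for v in b["values"] if lo <= v <= hi)
    return hits

result = rough_query(blocks, 120, 130)
```

In this toy run, the range 120-130 eliminates the first two blocks on metadata alone, so only one block's data is decompressed and scanned — the same principle that lets a rough query skip the bulk of a big table.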
Infobright says customers such as hedge funds investigating price histories, financial firms doing portfolio risk analysis, marketing organizations doing clickstream analysis, and so on all want the ultimate in fast querying. If those customers also have large-scale archiving needs, Hadoop, the fast-growing open source project, includes options for low-cost, queryable storage. Like many commercial vendors, Infobright has integrated its product with Hadoop, so data stored there--or subsets of data boiled down through Hadoop processing--could be brought back in Infobright for fast SQL-style analysis.
The downside of using a SQL database (like Infobright) and Hadoop in combination is that you can't use the same SQL-based queries and applications in the latter environment--an advantage IBM points out in highlighting the advantages of its High Capacity appliance. On the other hand, Hadoop deployments running on commodity hardware can cost as little as $250 per terabyte, according to Cloudera, which provides commercial service and support for Hadoop deployments. That's quite a savings over the less-than-$2,500 per terabyte touted by IBM.
So there you have the two approaches to handling machine-generated data. If you have vast archives, EMC, IBM Netezza, and Teradata all have purpose-built appliances that scale into the petabytes. You also could use Hadoop, which promises much lower cost, but you'll have to develop separate processes and applications for that environment. You'll also have to establish or outsource expertise on Hadoop deployment, management, and data processing.
For fast-query needs, EMC, IBM Netezza, and Teradata all have fast, standard appliances and faster, high-performance appliances (and companies including Kognitio and Oracle have similar configuration choices). Column-oriented database and appliance vendors including HP Vertica, Infobright, ParAccel, and Sybase have speed advantages inherent in their database architectures.
As always, your performance will vary depending on your queries, your data, data volumes, query volumes, number of users, and other factors. Do thorough tests with your own data and your toughest queries to determine which path to follow.