Organisations are deploying NoSQL databases and data processing frameworks such as CouchDB and Hadoop for their speed, scalability and flexibility, judging from a number of sessions at the NoSQL Now conference being held this week in San Jose, California.
“EMC is using a mixture of traditional databases and newfangled NoSQL data stores to analyse public perception of the company and its products,” explained Subramanian Kartik, a distinguished engineer at EMC.
“The process, called sentiment analysis, involves scanning hundreds of technology blogs, finding mentions of EMC and its products, and assessing whether the references are positive or negative based on the words in the text,” he said.
“To run the analysis, EMC gathers the full text of all the blog and Web pages mentioning EMC and feeds them into a version of MapReduce running on its Greenplum data analysis platform. Hadoop is used to weed out the Web markup code and non-essential words, which slims the data set considerably. The resulting word lists are then passed into SQL-based databases, where a more thorough quantitative analysis is done,” added Kartik.
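Kartik did not share EMC's code, but the clean-up stage he describes maps naturally onto a Hadoop Streaming job. The sketch below is purely illustrative, with an invented stop-word list and no claim to EMC's actual implementation: it strips Web markup and non-essential words from fetched page text and emits the surviving words for later aggregation.

```python
# A minimal sketch of the clean-up step described above, written as a Hadoop
# Streaming mapper: strip Web markup and common non-essential words from the
# fetched blog text and emit the words that remain.
import re
import sys

# Hypothetical stop-word list; a real job would use a much larger one.
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it"}
TAG_RE = re.compile(r"<[^>]+>")   # crude removal of HTML/XML tags
WORD_RE = re.compile(r"[a-z]+")

def clean(line):
    """Return the non-stop words left after stripping markup from one line."""
    text = TAG_RE.sub(" ", line).lower()
    return [w for w in WORD_RE.findall(text) if w not in STOP_WORDS]

if __name__ == "__main__":
    # Hadoop Streaming feeds raw page text on stdin; emit "word<TAB>1" pairs
    # that a reducer, or a later SQL load step, can aggregate.
    for line in sys.stdin:
        for word in clean(line):
            print(f"{word}\t1")
```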
“The NoSQL technologies are useful in summarising a huge data set, while SQL can then be used for a more detailed analysis,” Kartik said, adding that this hybrid approach can be applied to many other areas of analysis as well.
“There is all sorts of information out there, and at some point you will have to go through tokenising, parsing and natural language processing. The way to get to any meaningful quantitative measures of this data is to put it in an environment you know can manipulate it well, in a SQL environment,” Kartik said.
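To make the hybrid pattern concrete, here is a hedged illustration of the SQL side of the workflow, with SQLite standing in for a Greenplum database and an invented sentiment lexicon; the word counts produced by the Hadoop step are scored with an ordinary join and GROUP BY.

```python
# Illustrative only: load Hadoop's word counts per document into a SQL store
# and score sentiment with plain SQL. Table and column names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE word_counts (doc_id TEXT, word TEXT, n INTEGER);
    CREATE TABLE lexicon     (word TEXT, polarity INTEGER);  -- +1 positive, -1 negative
""")
conn.executemany("INSERT INTO word_counts VALUES (?, ?, ?)",
                 [("blog-1", "great", 3), ("blog-1", "slow", 1), ("blog-2", "broken", 2)])
conn.executemany("INSERT INTO lexicon VALUES (?, ?)",
                 [("great", 1), ("slow", -1), ("broken", -1)])

# Net sentiment per document: positive words minus negative words, weighted by count.
for doc_id, score in conn.execute("""
        SELECT w.doc_id, SUM(w.n * l.polarity) AS score
        FROM word_counts w JOIN lexicon l ON w.word = l.word
        GROUP BY w.doc_id ORDER BY score DESC"""):
    print(doc_id, score)
```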
For digital media company AOL, NoSQL products provide speed and volume that would not be possible using traditional relational databases.
“The company uses Hadoop and the CouchDB NoSQL database to run its ad targeting operations,” said Matt Ingenthron, manager of community relations for Couchbase.
AOL has developed a system that can pick out a set of targeted ads each time a user opens an AOL page, according to Ingenthron. “Which ads are chosen can be based on the data that AOL has on the user, along with algorithmic guesses about which ads would be of most interest to that user. The process must be executed within about 40 milliseconds,” he said.
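Ingenthron did not go into implementation detail, but the selection step he describes resembles a key-value profile lookup followed by rule matching. The following sketch is purely illustrative, with hypothetical profile and ad-rule structures; in production the profile lookup would hit the NoSQL store rather than an in-memory dictionary, which is where the sub-40-millisecond budget is spent.

```python
# A simplified sketch of the ad-selection step (all names and rules are
# hypothetical): fetch the user's profile by key, keep only the ads whose
# targeting rules match, and return the highest-bidding ones.
PROFILES = {                      # stand-in for a key-per-user NoSQL store
    "user:42": {"interests": {"storage", "cloud"}, "region": "us"},
}
ADS = [
    {"id": "ad-1", "bid": 0.8, "requires": {"storage"}, "regions": {"us"}},
    {"id": "ad-2", "bid": 1.2, "requires": {"gaming"},  "regions": {"us"}},
    {"id": "ad-3", "bid": 0.5, "requires": set(),       "regions": {"us", "eu"}},
]

def pick_ads(user_key, slots=2):
    profile = PROFILES.get(user_key, {"interests": set(), "region": None})
    eligible = [ad for ad in ADS
                if ad["requires"] <= profile["interests"]
                and profile["region"] in ad["regions"]]
    return sorted(eligible, key=lambda ad: ad["bid"], reverse=True)[:slots]

print([ad["id"] for ad in pick_ads("user:42")])   # ['ad-1', 'ad-3']
```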
The source data is voluminous: logs are kept of all users’ actions on every server, according to AOL. “They must be parsed and reassembled to build a profile of each user. The ad brokers also set a complex set of rules governing how much they will pay for an ad impression and which ads should be shown to which users,” Ingenthron said.
He added, “This activity generates 4 to 5 terabytes of data a day, and AOL has amassed 600 petabytes of operational data. The system maintains more than 650 billion keys, including one for every user as well as keys for other aspects of the data. The system must react to 600,000 events every second.”
“Much of this source data arrives in feeds from Web server logs and outside sources. The Hadoop Flume component is used to ingest the data, and the Hadoop cluster also runs a series of MapReduce jobs to parse the raw data into summaries,” explained Ingenthron.
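As a rough picture of that summarisation step, the reducer below collapses sorted per-event log lines into one profile line per user, in the style of a Hadoop Streaming job. The log layout is assumed for illustration and is not AOL's real schema.

```python
# A hedged sketch of a MapReduce summarisation pass: turn raw per-event log
# lines into one summary line per user. Field positions are assumptions.
import sys
from itertools import groupby

def parse(line):
    # Assumed log layout: timestamp<TAB>user_id<TAB>action
    _, user_id, action = line.rstrip("\n").split("\t")
    return user_id, action

if __name__ == "__main__":
    # Hadoop Streaming delivers mapper output sorted by key, so lines for the
    # same user arrive together and can be grouped without extra state.
    records = (parse(line) for line in sys.stdin)
    for user_id, events in groupby(records, key=lambda r: r[0]):
        summary = {}
        for _, action in events:
            summary[action] = summary.get(action, 0) + 1
        print(f"{user_id}\t{summary}")
```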
AOL also uses Couchbase’s CouchDB as a switching station of sorts for data arriving from the feeds, according to the company. “Because CouchDB can work with data without writing it to disk, it can be used to parse data quickly before sending it to the next step,” pointed out Ingenthron.
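The staging pattern Ingenthron describes can be sketched generically. The snippet below uses an in-process dictionary as a stand-in for the key-value store (a real deployment would go through the Couchbase client instead): raw feed records are held in memory under a key, parsed, and only then handed to the next stage.

```python
# A rough illustration of the "switching station" pattern; the dict is a
# stand-in for the fast key-value store, and the record format is invented.
import json

store = {}                                # stand-in for the in-memory bucket

def stage(event_id, raw_record):
    """Hold the raw feed record in the fast store, keyed by event id."""
    store[f"event:{event_id}"] = raw_record

def drain(forward):
    """Parse every staged record and pass the result to the next step."""
    for key in list(store):
        parsed = json.loads(store.pop(key))
        forward(key, parsed)

stage(1, '{"user": "user:42", "action": "click"}')
drain(lambda key, doc: print(key, doc["user"], doc["action"]))
```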
“We didn’t anticipate ad targeting to be a primary [market] for us. But Couchbase ended up filling a need for AOL and other ad companies,” Ingenthron said. The work is “technically complex and has a lot of challenges in processing data very quickly.”
Scientific and medical publishing house Elsevier was looking for greater flexibility when it procured an XML-based, non-relational database system from Mark Logic, according to Bradley Allen, vice president of Elsevier Labs.
“The scientific publishing world is moving from a static model to a more dynamic one,” Allen explained. For the past few centuries, the printed scientific paper, collected in journals, has served as the basic unit of knowledge, containing a description of the work, the authors and contributors, references and other core components of information. While scientific publishing is moving to digital, the paper remains the dominant medium for communicating results. “We’re still in the horse-and-carriage era,” Allen quipped.
“Over time, the scientific paper will be decomposed into individual elements, which can be used in multiple products. Individual paragraphs or even individual assertions can be annotated and indexed,” Allen predicted. “They can then be reassembled into new works and embedded in applications, such as programs that doctors can consult. They can also be mined for new information through the use of analytics.”
With this in mind, Elsevier is in the process of annotating the papers in its journals so they can be deployed in other applications and services. “An XML database was a natural fit for this work,” Allen explained. “New content types can easily be added into a database, and the format allows individual components to be easily reused in new composite applications and services,” he added.
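As an illustration only, annotating a paragraph of an XML-encoded article so it can be indexed and reused on its own might look something like the sketch below; the element and attribute names are invented, not Elsevier's schema.

```python
# A hypothetical sketch of paragraph-level annotation: give each paragraph its
# own identifier and a machine-readable annotation so a composite application
# can pull in just that assertion. All names here are illustrative.
import xml.etree.ElementTree as ET

article = ET.fromstring("""
<article id="doi:10.xxxx/example">
  <title>An Example Paper</title>
  <para>Compound X reduced symptom Y in the treated group.</para>
</article>
""")

for i, para in enumerate(article.findall("para"), start=1):
    para.set("id", f"{article.get('id')}#para-{i}")
    annotation = ET.SubElement(para, "annotation")
    annotation.set("type", "finding")
    annotation.set("subject", "Compound X")

print(ET.tostring(article, encoding="unicode"))
```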
Elsevier has introduced a number of new products with this approach. One is SciVal, a service for academic administrators that summarises the publishing activity within their institution, giving them a quantitative picture of the organisation’s academic strengths and weaknesses, he reported. Another is Science Direct, a full-text search engine for Elsevier’s journals.