Using Spark SQLContext, HiveContext & Spark DataFrames API with ElasticSearch, MongoDB & Cassandra

In this post we will show how to use the different SQL contexts for querying data in Spark. We will begin with Spark SQLContext and follow up with HiveContext. In addition, we will run queries against several NoSQL databases and analyze the advantages and disadvantages of using them, so without further ado, let's get started!

First of all, we need to create a Spark context, adding to its configuration the options for connecting to Cassandra:
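A minimal sketch of what that might look like, using the DataStax Spark-Cassandra connector (the connection host and the keyspace/table names below are placeholders):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.cassandra.CassandraSQLContext

// Spark configuration with the Cassandra connection option added
// ("127.0.0.1" is a placeholder for your own cluster address)
val conf = new SparkConf()
  .setAppName("SparkSQLCassandra")
  .setMaster("local[*]")
  .set("spark.cassandra.connection.host", "127.0.0.1")

val sc = new SparkContext(conf)

// CassandraSQLContext lets us run Spark SQL queries over Cassandra tables
// ("my_keyspace.my_table" is a placeholder)
val cc = new CassandraSQLContext(sc)
val rows = cc.sql("SELECT * FROM my_keyspace.my_table")
rows.show()
```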

Spark SQLContext allows us to connect to different data sources to read or write data, but it has a limitation: when the program ends or the Spark shell is closed, all the links to the data sources we have created are temporary and will not be available in the next session.
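For example, a table registered through SQLContext is visible only within the current session. Here is a minimal sketch, assuming a local people.json file (HiveContext, by contrast, persists table metadata in the Hive metastore, so it survives across sessions):

```scala
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.hive.HiveContext

val sqlContext = new SQLContext(sc)

// Temporary table: usable in this session only, gone once the
// shell or application exits ("people.json" is a placeholder file)
val people = sqlContext.read.json("people.json")
people.registerTempTable("people")
sqlContext.sql("SELECT name FROM people").show()

// A HiveContext stores table definitions in the Hive metastore,
// so they remain available in later sessions
val hiveContext = new HiveContext(sc)
hiveContext.sql(
  "CREATE TABLE IF NOT EXISTS people_persistent (name STRING, age INT)")
```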

Monitoring the Spanish 2015 General Elections

We’re just a couple of days away from the Spanish general elections, and Twitter is boiling up with campaign-related messages. People want to have a say in what goes on in their country, and they turn to Twitter to express their opinions and feelings.

Social networks are starting to play a very important role in political events in Spain, which is why candidates from different parties are actively seeking to get the most out of their presence on these kinds of platforms. They apply different strategies that allow them to connect with the people and, hopefully, gain their votes.

At Stratio we have been monitoring the campaign with our real-time data aggregation system, Stratio Sparkta, and with our visualization tool, Stratio Viewer. We use Apache Spark to process the data and MongoDB to store it.

MongoDB – Spark Connector Whitepaper

We recently worked with MongoDB and their developer team on an analysis of their Hadoop-based connector versus our native connector solution. The paper highlights how Stratio’s connector for Apache Spark implements the PrunedFilteredScan API instead of the TableScan API, which effectively allows it to avoid scanning the entire collection.
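To illustrate the difference, here is a simplified, hypothetical skeleton of the contract involved (not Stratio’s actual implementation): a TableScan relation must materialize every document, whereas a PrunedFilteredScan relation receives the required columns and the filters Spark can push down, and can translate them into a MongoDB query and projection:

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources.{BaseRelation, Filter, PrunedFilteredScan}
import org.apache.spark.sql.types.StructType

// Hypothetical, simplified relation sketching the PrunedFilteredScan contract
class MongoRelation(val sqlContext: SQLContext, val schema: StructType)
  extends BaseRelation with PrunedFilteredScan {

  // Spark hands us only the columns the query needs and the filters
  // it can push down, so the relation can build a MongoDB query plus
  // projection and avoid scanning the entire collection
  override def buildScan(requiredColumns: Array[String],
                         filters: Array[Filter]): RDD[Row] = {
    ??? // connector-specific scan logic elided
  }
}
```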

Our connector supports the Spark Catalyst optimizer for both rule-based and cost-based query optimization.

Stratio Release 1.2.0

  • Added HDFS as a persistence technology option.
  • Added HDFS Connector for Crossdata.
  • Retry button when a node fails to install.
  • UX refactor.
  • Now the Admin machine is the package repository for node installation (no 3rd party repositories needed).
  • Backend bugfixes.
  • Uninstall scripts for RedHat/CentOS.

Spark-MongoDB library

Now that the Data Sources API has been released, we wanted to take advantage of its new features and, for this reason, we have developed a Spark-MongoDB library. With this new connector we help the growing MongoDB community simplify interaction with this data source via Spark.

This library provides a mechanism for accessing MongoDB collections in a structured way from Spark SQL, accessible from the Python and Scala APIs. Since MongoDB is a leading open-source document database among NoSQL databases and is widely used in many projects [http://www.mongodb.com/leading-nosql-database], we find this connection, with all the operations permitted by Spark SQL, not only useful but necessary.
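As a quick taste, a session might look like the following sketch (the data source name, option keys, and the database/collection values are assumptions drawn from the library’s documentation; adjust them to your setup):

```scala
import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)

// Register a MongoDB collection as a temporary table through the
// Data Sources API (provider name and option keys are assumptions;
// check the library's README for your version)
sqlContext.sql(
  """CREATE TEMPORARY TABLE people
    |USING com.stratio.datasource.mongodb
    |OPTIONS (host 'localhost:27017', database 'test', collection 'people')
  """.stripMargin)

// Query the collection with plain SQL; filters and projections can be
// pushed down to MongoDB through PrunedFilteredScan
val adults = sqlContext.sql("SELECT name, age FROM people WHERE age >= 18")
adults.show()
```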