On 26 March 2012, James Cameron and his submersible, Deepsea Challenger, descended almost 11 km below sea level at 11.329903°N 142.199305°E, an infinitesimal point on the surface of the Earth’s vast oceans. Can you imagine how incredible it would be to have thousands of “Deepsea Challengers” reaching the bottom of our planet in parallel? What a map we would get!…
This will be the last installment in the “Continuous Delivery in depth” series. After the good and the bad, here comes the ugly. Ugly because of the sheer amount of change required: a pull request with 308 commits was merged, adding 2,932 lines whilst removing a whopping 10,112. That represented about a 75% reduction in lines of code, a clear win for maintainability.
To explore the topic further, on 27 April 2017 you have the opportunity to join the first JAM in Madrid: confirm your attendance!
We don’t usually like to boast, but this time we can’t hold back. As of 17 February 2017, a huge (if symbolic) milestone was reached: more than 1,000 automated releases performed by our Jenkins installation across its continuous delivery pipelines.
This is the first part of a story, a story about how important it is to have a reliable release and deployment process.
Anyone working in our sector has had to deal with deployments. I’d like to open this series with a handful of interesting questions that will help you decide whether you should change your deployment process.
This post is about an exciting journey that starts with a problem and ends with a solution. One of the top banks in Europe came to us with a request: they needed a better profiling system.
We came up with a methodology for clustering nodes in a graph database according to concrete parameters.
We started by developing a Proof of Concept (POC) to test an approximation of the bank’s profiling data, using the following technologies:
- Java / Scala, as programming languages.
- Apache Spark, to handle the given data sets.
- Neo4j, as the graph database.
The POC began as a 2-month long project in which we rapidly discovered a powerful solution to the bank’s needs.
We decided to use Neo4j, along with Cypher, Neo4j’s query language, because relationships are a core aspect of the bank’s data model: a graph database can manage highly connected data and complex queries. We were then able to cluster nodes thanks to GraphX, an Apache Spark API for graph and graph-parallel computation.…
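As a rough, Spark-free illustration of the underlying idea, here is how connected components — one of the graph-parallel operations GraphX provides, grouping nodes that are linked by a path of relationships — can be sketched in plain Python. The function name and toy edge list are ours, not the POC’s:

```python
from collections import defaultdict, deque

def connected_components(edges):
    """Group nodes into clusters: two nodes share a cluster
    if a path of relationships links them."""
    adjacency = defaultdict(set)
    for a, b in edges:
        adjacency[a].add(b)
        adjacency[b].add(a)
    seen, clusters = set(), []
    for start in adjacency:
        if start in seen:
            continue
        # Breadth-first walk collects every node reachable from `start`.
        component, queue = set(), deque([start])
        while queue:
            node = queue.popleft()
            if node in component:
                continue
            component.add(node)
            queue.extend(adjacency[node] - component)
        seen |= component
        clusters.append(component)
    return clusters

# Toy relationship graph: two disjoint groups of nodes.
edges = [("a", "b"), ("b", "c"), ("x", "y")]
print(connected_components(edges))  # [{'a', 'b', 'c'}, {'x', 'y'}]
```

GraphX runs essentially this computation in parallel across a Spark cluster, which is what makes it viable at the bank’s scale.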
A follow-up talk on this post will be given at the Spark Summit East in Boston in February. Find out more.
Amongst all the Big Data technology madness, security seems to be an afterthought at best. When people talk about Big Data technologies and security, they are usually referring to the integration of these technologies with Kerberos. This trend does seem to be changing for the better, however, and we now have a few security options for these technologies, such as TLS. Against this backdrop, we would like to take a look at the interaction between the most popular large-scale data processing technology, Apache Spark, and the most popular authentication framework, MIT’s Kerberos.
After the resounding success of the first article on recommender systems, Alvaro Santos is back with some further insight into creating a recommender system.
Coming soon: A follow-up Meetup in Madrid to go even further into this exciting topic. Stay tuned!
In the previous article of this series, we explained what a recommender system is, describing its main parts and providing some basic algorithms which are frequently used in these systems. We also explained how to code some functions to read JSON files and to map the data in MongoDB and ElasticSearch using Spark SQL and Spark connectors.
This second part will cover:
- Generating our Collaborative Filtering model.
- Pre-calculating product / user recommendations.
- Launching a small REST server to interact with the recommender.
- Querying the data store to retrieve content-based recommendations.
- Mixing the different types of recommendations to create a hybrid recommender.
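The series builds its collaborative filtering model with Spark; as a minimal, Spark-free sketch of the idea behind the first two bullets, here is a memory-based variant in Python. The ratings data and function names are illustrative only, not the article’s:

```python
import math

# Tiny user -> product rating matrix (hypothetical data).
ratings = {
    "alice": {"p1": 5.0, "p2": 3.0},
    "bob":   {"p1": 4.0, "p2": 2.5, "p3": 1.0},
    "carol": {"p3": 4.5, "p2": 1.0},
}

def cosine(u, v):
    """Cosine similarity over the products both users rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[p] * v[p] for p in common)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v)

def recommend(user, k=2):
    """Score products the user hasn't rated by the similarity-weighted
    ratings of other users (user-based collaborative filtering)."""
    scores = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], theirs)
        for product, rating in theirs.items():
            if product not in ratings[user]:
                scores[product] = scores.get(product, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice"))  # ['p3']
```

A production recommender would pre-calculate these scores for every user (as the second bullet describes) rather than computing them per request, and the REST server would simply look them up.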
The not-so-lean side
Remember issue #1 published in the summer? We are back with the next part in the series, wearing the hat of Pitfall Harry to look at some of the issues we have come across and how these have impacted our day-to-day work. We also include some tips for overcoming them.
First things first: Jenkins pipelines are an awesome improvement over basic Jenkins functionality, allowing us to easily build complex continuous delivery flows with great reusability and maintainability.
Having said this, pipelines are code. And code is written by human beings. Human beings make mistakes. Such errors are reflected as software defects and execution failures.
This post will take a look at some of the defects, pitfalls and limitations of our (amazing) Jenkins pipelines, and suggest some possible workarounds.
Nowadays, there are a lot of Big Data query engines available. Some companies struggle to choose which one to use. Benchmarks exist, but results can be contradictory and thus difficult to trust.
One Big Data query engine that is frequently mentioned is Presto. We wanted to find out more about its potential and decided to compare it with Crossdata, a data hub that extends the capabilities of Apache Spark, in a controlled environment. The most popular persistence layers in our projects are Apache Cassandra, MongoDB and HDFS+Parquet, but MongoDB is not supported by Presto, so the benchmark was carried out with Apache Cassandra and HDFS+Parquet only.
Crossdata provides additional features and optimizations to Spark’s SQLContext through its XDContext. It can be deployed as an Apache Spark library or in a client-server architecture in which the cluster of servers forms a P2P structure.
Imagine a rectangular grid of cells, in which each cell is either black (dead) or white (alive). And imagine that:
- Any live cell with two or three live neighbors survives to the next generation.
- Any live cell with four or more live neighbors dies from overpopulation.
- Any live cell with one or no live neighbors dies from isolation.
- Any dead cell with exactly three live neighbors comes to life.
These are the four simple rules of Conway’s Game of Life. You could hardly imagine a simpler set of rules to code on your computer, and you wouldn’t expect any interesting results at all, but…
Behold the wonders of its hidden might!…
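The rules really do take only a few lines to code. A minimal Python sketch (the `step` function and the blinker pattern below are our own illustration):

```python
from collections import Counter

def step(alive):
    """Apply Conway's four rules to a set of live (x, y) cells."""
    # Count how many live neighbors every candidate cell has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for x, y in alive
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly three live
    # neighbors (birth) or two live neighbors and is already alive
    # (survival); every other case dies, covering all four rules.
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in alive)
    }

# A "blinker": three cells in a row oscillate between a horizontal
# and a vertical line, returning to the start every two generations.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker))                   # {(1, 0), (1, 1), (1, 2)}
print(step(step(blinker)) == blinker)  # True
```

Representing the board as a sparse set of live coordinates, rather than a fixed grid, lets patterns grow without bounds — which is exactly where the hidden might shows up.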