Cloud & Big Data

Spark

Apache Spark is an open-source distributed computing framework. It was originally developed in 2009 at the University of California, Berkeley's AMPLab and later donated to the Apache Software Foundation. It is part of a larger ecosystem of tools, alongside Apache Hadoop and other open-source projects used in today's analytics community.

Apache Hadoop

Apache Hadoop is an open-source software library that supports the distributed processing of large data sets across computer clusters. The framework is designed to scale up from single servers to thousands of machines, each offering local computation and storage. At the same time, all Hadoop modules are designed with the basic assumption that hardware failures are common and should be handled automatically by the framework.
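The automatic handling of hardware failures can be illustrated with a small sketch: a task that fails on one simulated node is rescheduled on another. The function names and retry policy below are illustrative only, not Hadoop's actual API.

```python
def run_with_retry(task, nodes, max_attempts=3):
    """Sketch of framework-level fault tolerance: if a node fails,
    the task is transparently rescheduled on another node."""
    for attempt in range(max_attempts):
        node = nodes[attempt % len(nodes)]
        try:
            return node(task)
        except RuntimeError:
            continue  # this node failed; try the next one
    raise RuntimeError("task failed on all attempted nodes")

def flaky_node(task):
    # Simulated node that always crashes.
    raise RuntimeError("simulated disk failure")

def healthy_node(task):
    # Simulated node that runs the computation: sum a block of numbers.
    return sum(task)

result = run_with_retry([1, 2, 3, 4], [flaky_node, healthy_node])
print(result)  # 10 — the caller never sees the first node's failure
```

The point of the sketch is that the caller submits the task once; retrying on a different node is the framework's job, not the user's.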

The core of the Hadoop framework consists of a storage part, known as HDFS (the Hadoop Distributed File System), and a processing part, called MapReduce. Hadoop splits files into large blocks and distributes them across the nodes in a cluster. To process the data, Hadoop then ships the packaged code to the nodes, so that each node processes the blocks it stores locally, in parallel; the computation moves to the data rather than the data to the computation.
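The MapReduce model described above can be sketched in plain Python with the classic word-count example: each block of the file is mapped independently (in parallel on a real cluster), the intermediate pairs are shuffled by key, and a reducer aggregates the counts. This is a conceptual sketch of the programming model, not Hadoop's actual API.

```python
from collections import defaultdict
from itertools import chain

def map_phase(block):
    # Mapper: emit a (word, 1) pair for every word in this block of text.
    return [(word, 1) for word in block.split()]

def shuffle(pairs):
    # Group intermediate pairs by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: sum the counts collected for each word.
    return {word: sum(counts) for word, counts in groups.items()}

# Two "blocks" of a split file, each mapped independently.
blocks = ["big data big", "data cluster data"]
mapped = chain.from_iterable(map_phase(b) for b in blocks)
counts = reduce_phase(shuffle(mapped))
print(counts)  # {'big': 2, 'data': 3, 'cluster': 1}
```

Because each mapper only needs its own block, the map phase parallelizes trivially across the nodes that already hold the data.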

Apache Cassandra

Apache Cassandra is a free and open-source distributed database for managing large amounts of structured data across many commodity servers. It provides a highly available service with no single point of failure.

Cassandra offers high performance and robust support for clusters spanning multiple datacenters, with asynchronous masterless replication.
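One common way to realize masterless replication of this kind is a consistent-hash ring: each row key maps to several peer nodes, and since every node plays the same role, losing any one of them leaves the data reachable on its replicas. The sketch below illustrates the idea only; the class and parameter names are hypothetical and this is not Cassandra's actual internals.

```python
import hashlib
from bisect import bisect_right

class HashRing:
    """Sketch of a consistent-hash ring for masterless replica placement.
    Names and API are illustrative, not Cassandra's."""

    def __init__(self, nodes, replication_factor=2):
        self.rf = replication_factor
        # Place each node at a position on the ring determined by its hash.
        self.ring = sorted((self._hash(n), n) for n in nodes)

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def replicas(self, key):
        # Walk clockwise from the key's position; the next `rf` nodes
        # hold copies, so no single node is a special coordinator.
        positions = [h for h, _ in self.ring]
        idx = bisect_right(positions, self._hash(key))
        return [self.ring[(idx + i) % len(self.ring)][1]
                for i in range(self.rf)]

ring = HashRing(["node-a", "node-b", "node-c"], replication_factor=2)
print(ring.replicas("user:42"))  # two distinct peers hold this row
```

With every key replicated on several equal peers, a write can be acknowledged by any replica and propagated asynchronously to the others, which is the essence of the masterless design.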
