Thursday, March 23, 2017

Apache Spark Interview Questions

1. What is Spark?
Apache Spark is a fast, easy-to-use and flexible data-processing framework. It has an advanced DAG execution engine that supports acyclic data flow and in-memory computing. Spark can run on Hadoop, standalone or in the cloud, and is capable of accessing diverse data sources including HDFS, HBase, Cassandra and others.
2. Key features of Spark.
  • Integrates with Hadoop and with files stored in HDFS.
  • Spark has an interactive language shell, since it ships with an independent interpreter for Scala (the language in which Spark is written).
  • Spark consists of RDDs (Resilient Distributed Datasets), which can be cached across the computing nodes in a cluster.
  • Spark supports multiple analytic tools for interactive query analysis, real-time analysis and graph processing.
3. What is an RDD?
RDD is the acronym for Resilient Distributed Datasets – a fault-tolerant collection of operational elements that run in parallel. The partitioned data in an RDD is immutable and distributed. There are primarily two types of RDD (see the sketch below):
  1. Parallelized collections: created by distributing an existing collection from the driver program so its elements can be operated on in parallel.
  2. Hadoop datasets: created from files in HDFS or another storage system, where a function is applied to each file record.
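A minimal sketch in the Scala shell showing both ways of creating an RDD; sc is the SparkContext provided by the shell and the HDFS path is only illustrative:
// Parallelized collection: distribute an existing Scala collection
val parallelized = sc.parallelize(Seq(1, 2, 3, 4, 5))
// Hadoop dataset: one record per line of the file
val hadoopData = sc.textFile("hdfs:///data/input.txt")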
4. What are Partitions? 
As the name suggests, a partition is a smaller, logical division of data, similar to a ‘split’ in MapReduce. Partitioning is the process of deriving logical units of data in order to speed up processing. Every RDD in Spark is partitioned, as the snippet below illustrates.
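A small sketch of working with partitions in the Scala shell; the path and partition counts are illustrative:
// Ask Spark for at least 8 partitions when reading the file
val rdd = sc.textFile("hdfs:///data/input.txt", 8)
println(rdd.partitions.length)            // inspect how many partitions the RDD has
val repartitioned = rdd.repartition(16)   // redistribute the data across 16 partitions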
5. What kinds of operations does an RDD support?
  • Transformations.
  • Actions
6. What do you understand by Transformations in Spark?
Transformations are functions applied to an RDD that result in another RDD. They do not execute until an action occurs. map() and filter() are examples of transformations: the former applies the function passed to it to each element of the RDD and produces another RDD, while filter() creates a new RDD by selecting the elements of the current RDD that pass the function argument.
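A minimal sketch of lazy transformations in the Scala shell:
val numbers = sc.parallelize(1 to 10)
val doubled = numbers.map(_ * 2)           // transformation: nothing runs yet
val evens   = doubled.filter(_ % 4 == 0)   // another lazy transformation
// No job is actually executed until an action such as count() or collect() is called.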
7. What are Actions in Spark ?
An action helps in bringing the data back from an RDD to the local machine. An action’s execution is the result of all the previously created transformations. reduce() is an action that applies the function passed to it again and again until only one value is left. take(n) is an action that returns the first n elements of the RDD to the local node.
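A short sketch of both actions in the Scala shell:
val numbers = sc.parallelize(1 to 10)
val total = numbers.reduce(_ + _)   // action: folds the elements down to a single value (55)
val first = numbers.take(3)         // action: returns the first 3 elements to the driver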

8. Define the functions of Spark Core.
Serving as the base engine, Spark Core performs various important functions such as memory management, job monitoring, fault tolerance, job scheduling and interaction with storage systems.
9. What is RDD Lineage?
Spark does not replicate data in memory, so if any data is lost it is rebuilt using RDD lineage. RDD lineage is the process of reconstructing lost data partitions. The best part is that an RDD always remembers how it was built from other datasets.
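You can inspect an RDD’s lineage from the Scala shell with toDebugString; the input path below is illustrative:
val words  = sc.textFile("hdfs:///data/input.txt").flatMap(_.split(" "))
val counts = words.map((_, 1)).reduceByKey(_ + _)
println(counts.toDebugString)   // prints the chain of parent RDDs used to rebuild lost partitions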
10. What is Spark Driver?
The Spark driver is the program that runs on the master node of the machine and declares transformations and actions on data RDDs. In simple terms, the driver in Spark creates the SparkContext, connected to a given Spark master.
The driver also delivers the RDD graphs to the master, where the standalone cluster manager runs.
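A minimal sketch of a driver program creating its SparkContext; the application name and master URL are hypothetical:
import org.apache.spark.{SparkConf, SparkContext}
val conf = new SparkConf()
  .setAppName("MyDriverApp")                 // hypothetical application name
  .setMaster("spark://master-host:7077")     // hypothetical standalone master URL
val sc = new SparkContext(conf)              // the driver creates the SparkContext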
11. What is Hive on Spark?
Hive contains significant support for Apache Spark, wherein Hive execution can be configured to use Spark as its engine:
hive> set spark.home=/location/to/sparkHome;
hive> set hive.execution.engine=spark;
Hive on Spark supports Spark on Yarn mode by default.
12. Name commonly-used Spark Ecosystems.
  • Spark SQL (Shark) – for developers.
  • Spark Streaming for processing live data streams.
  • GraphX for generating and computing graphs.
  • MLlib (Machine Learning Algorithms).
  • SparkR to support R programming on the Spark engine.
13. What is Spark Streaming ?
Spark Streaming is an extension of the Spark API that allows stream processing of live data streams. Data from different sources such as Flume or HDFS is streamed in and finally processed out to file systems, live dashboards and databases. It resembles batch processing in that the input data stream is divided into micro-batches.
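A minimal word-count sketch with Spark Streaming in Scala; the socket source on localhost:9999 is only illustrative:
import org.apache.spark.streaming.{Seconds, StreamingContext}
val ssc    = new StreamingContext(sc, Seconds(10))     // 10-second micro-batches
val lines  = ssc.socketTextStream("localhost", 9999)   // illustrative live source
val counts = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
counts.print()
ssc.start()
ssc.awaitTermination()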
14. What is Spark SQL?
Spark SQL, formerly known as Shark, is a module introduced in Spark to work with structured data and perform structured data processing. Through this module, Spark executes relational SQL queries on data. The core of the component supports a different kind of RDD called a SchemaRDD, composed of row objects and a schema object defining the data type of each column in a row. It is similar to a table in a relational database.
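A minimal sketch of querying structured data from the Scala shell using the Spark 1.x-style SQLContext; the JSON path and table name are illustrative:
import org.apache.spark.sql.SQLContext
val sqlContext = new SQLContext(sc)
val people = sqlContext.read.json("hdfs:///data/people.json")   // illustrative input
people.registerTempTable("people")
val adults = sqlContext.sql("SELECT name, age FROM people WHERE age >= 18")
adults.show()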
15. What is a Parquet file?
Parquet is a columnar-format file supported by many other data-processing systems. Spark SQL performs both read and write operations with Parquet files and considers it to be one of the best big-data analytics formats so far.
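A short sketch of writing and reading Parquet with Spark SQL, reusing the sqlContext from the previous answer; the paths are illustrative:
val df = sqlContext.read.json("hdfs:///data/people.json")        // any DataFrame source
df.write.parquet("hdfs:///data/people.parquet")                  // write in columnar Parquet format
val fromParquet = sqlContext.read.parquet("hdfs:///data/people.parquet")
fromParquet.show()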
16. What file systems Spark support?
  • Hadoop Distributed File System (HDFS).
  • Local File system.
  • S3
17. What is Yarn?
Similar to Hadoop, Yarn is one of the key features in Spark, providing a central resource-management platform to deliver scalable operations across the cluster. Running Spark on Yarn requires a binary distribution of Spark that is built with Yarn support.
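As an illustration, a typical way to launch an application on Yarn is with spark-submit; the class and jar names below are hypothetical:
spark-submit --master yarn --deploy-mode cluster --class com.example.MyApp my-app.jar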
18. List the functions of Spark SQL.
Spark SQL is capable of:
  • Loading data from a variety of structured sources.
  • Querying data using SQL statements, both inside a Spark program and from external tools that connect to Spark SQL through standard database connectors (JDBC/ODBC), for instance business intelligence tools such as Tableau.
  • Providing rich integration between SQL and regular Python/Java/Scala code, including the ability to join RDDs and SQL tables, expose custom functions in SQL, and more.
19. What are benefits of Spark over MapReduce?
  • Due to in-memory processing, Spark executes workloads around 10-100x faster than Hadoop MapReduce, which relies on persistent storage for its data-processing tasks.
  • Unlike Hadoop, Spark provides in-built libraries to perform multiple kinds of tasks from the same core: batch processing, streaming, machine learning and interactive SQL queries. Hadoop itself only supports batch processing.
  • Hadoop is highly disk-dependent, whereas Spark promotes caching and in-memory data storage.
  • Spark can perform computations multiple times on the same dataset. This is called iterative computation, and there is no iterative computing implemented by Hadoop (see the caching sketch below).
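A minimal sketch of iterative computation over a cached dataset in Scala; the path and filters are only illustrative:
val data = sc.textFile("hdfs:///data/ratings.txt").cache()   // keep the dataset in memory after the first use
val pass1 = data.filter(_.contains("5")).count()             // first pass reads from disk and caches
val pass2 = data.filter(_.contains("1")).count()             // later passes reuse the in-memory data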
20. What is Spark Executor?
When the SparkContext connects to a cluster manager, it acquires executors on nodes in the cluster. Executors are Spark processes that run computations and store the data on the worker nodes. The final tasks from the SparkContext are transferred to the executors for execution.
21. Name types of Cluster Managers in Spark.
The Spark framework supports three major types of Cluster Managers:
  1. Standalone : a basic manager to set up a cluster.
  2. Apache Mesos : generalized/commonly-used cluster manager, also runs Hadoop MapReduce and other applications.
  3. Yarn : responsible for resource management in Hadoop
22. What do you understand by worker node?
Worker node refers to any node that can run the application code in a cluster.
23. Do you need to install Spark on all nodes of Yarn cluster while running Spark on Yarn?
No, because Spark runs on top of Yarn.
