Thursday, March 23, 2017

Apache Spark Interview Questions

1. What is Spark?
Apache Spark is a fast, easy-to-use and flexible data processing framework. It has an advanced execution engine supporting cyclic data  flow and in-memory computing. Spark can run on Hadoop, standalone or in the cloud and is capable of accessing diverse data sources including HDFS, HBase, Cassandra and others. 
2. Key features of Spark.
  • Allows integration with Hadoop and files stored in HDFS.
  • Spark has an interactive language shell, since it ships with an independent Scala (the language in which Spark is written) interpreter.
  • Spark is built around RDDs (Resilient Distributed Datasets), which can be cached across the computing nodes in a cluster.
  • Spark supports multiple analytic tools for interactive query analysis, real-time analysis and graph processing.
3. What is an RDD?
RDD is the acronym for Resilient Distributed Datasets – a fault-tolerant collection of operational elements that run in parallel. The partitioned data in an RDD is immutable and distributed. There are primarily two types of RDDs (both sketched below):
  1. Parallelized Collections : created by distributing an existing collection from the driver program in parallel across the cluster.
  2. Hadoop datasets : created from files in HDFS or another storage system, so that a function can be performed on each file record.
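As a minimal Scala sketch of both kinds, assuming a spark-shell session where the SparkContext is available as sc (the HDFS path is hypothetical):

val parallel = sc.parallelize(Seq(1, 2, 3, 4, 5))        // parallelized collection from a driver-side Seq
val logs     = sc.textFile("hdfs:///data/logs/*.log")    // Hadoop dataset: one record per line of the files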
4. What are Partitions? 
As the name suggests, a partition is a smaller, logical division of data, similar to a 'split' in MapReduce. Partitioning is the process of deriving logical units of data so that processing can be distributed and sped up. Every RDD in Spark is partitioned across the nodes of the cluster.
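A small sketch of how partitioning surfaces in the API (again assuming sc from a spark-shell session):

val rdd = sc.parallelize(1 to 1000, numSlices = 8)   // explicitly ask for 8 partitions
println(rdd.getNumPartitions)                        // prints 8
val wider = rdd.repartition(16)                      // reshuffles the data into 16 partitions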
5. What kind of operations does an RDD support?
  • Transformations
  • Actions
6. What do you understand by Transformations in Spark?
Transformations are functions applied to an RDD, resulting in another RDD. A transformation does not execute until an action occurs. map() and filter() are examples of transformations: map() applies the function passed to it to each element of the RDD and produces a new RDD, while filter() creates a new RDD by selecting only the elements of the current RDD that pass the function argument.
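A quick sketch of those two transformations, assuming sc from a spark-shell session:

val numbers = sc.parallelize(1 to 10)     // RDD[Int]
val doubled = numbers.map(_ * 2)          // transformation: produces a new RDD
val bigOnes = doubled.filter(_ > 10)      // transformation: keeps only elements that pass the predicate
// Nothing has executed yet; transformations are lazy until an action is invoked.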
7. What are Actions in Spark ?
An action brings data back from an RDD to the local machine (the driver). An action's execution triggers all previously defined transformations. reduce() is an action that applies the function passed to it repeatedly until only one value is left. take(n) brings only the first n values from the RDD to the local node, while collect() brings back all of them.
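Continuing the sketch above with a few common actions:

val total = bigOnes.reduce(_ + _)   // folds the elements down to a single value
val first = bigOnes.take(3)         // brings only the first 3 elements to the driver
val all   = bigOnes.collect()       // brings all elements of the RDD to the driver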

8. Define the functions of Spark Core.
Serving as the base engine, Spark Core performs various important functions such as memory management, job monitoring, fault tolerance, job scheduling and interaction with storage systems.
9. What is RDD Lineage?
Spark does not replicate data in memory, so if any data is lost, it is rebuilt using RDD lineage. RDD lineage is the record of how an RDD was derived from other datasets, and it is the process used to reconstruct lost data partitions. The best part is that an RDD always remembers how to rebuild itself from other datasets.
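You can inspect the lineage Spark keeps for an RDD with toDebugString (a sketch assuming sc from a spark-shell session and a hypothetical HDFS path):

val words  = sc.textFile("hdfs:///data/books.txt").flatMap(_.split(" "))
val counts = words.map((_, 1)).reduceByKey(_ + _)
println(counts.toDebugString)   // prints the chain of parent RDDs used to rebuild lost partitions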
10. What is Spark Driver?
Spark Driver is the program that runs on the master node and declares transformations and actions on RDDs of data. In simple terms, the driver in Spark creates the SparkContext, which connects to a given Spark Master.
The driver also delivers the RDD graphs to the Master, where the standalone cluster manager runs.
11. What is Hive on Spark?
Hive contains significant support for Apache Spark, wherein Hive execution is configured to use Spark as its engine:
hive> set spark.home=/location/to/sparkHome;
hive> set hive.execution.engine=spark;
Hive on Spark supports Spark on YARN mode by default.
12. Name commonly-used Spark Ecosystems.
  • Spark SQL (Shark) for structured data and SQL queries.
  • Spark Streaming for processing live data streams.
  • GraphX for generating and computing graphs.
  • MLlib (machine learning algorithms).
  • SparkR to promote R programming on the Spark engine.
13. What is Spark Streaming ?
Spark Streaming is an extension of the Spark API that allows stream processing of live data streams. Data from sources such as Flume or HDFS is ingested as a stream, processed, and finally pushed out to file systems, live dashboards and databases. It resembles batch processing in that the input stream is divided into small batches (micro-batches) that are then processed like batch jobs.
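A minimal word-count sketch of the DStream API; the socket source, local master and 5-second batch interval are just illustrative choices:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf().setAppName("StreamingWordCount").setMaster("local[2]")  // local mode for illustration
val ssc  = new StreamingContext(conf, Seconds(5))        // 5-second micro-batches

val lines  = ssc.socketTextStream("localhost", 9999)     // illustrative test source
val counts = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
counts.print()

ssc.start()
ssc.awaitTermination()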
14. What is Spark SQL?
Spark SQL, formerly known as Shark, is a module introduced in Spark to work with structured data and perform structured data processing. Through this module, Spark executes relational SQL queries on data. The core of the component supports a special kind of RDD called a SchemaRDD (a DataFrame in later releases), composed of Row objects and a schema object defining the data type of each column in the row. It is similar to a table in a relational database.
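A minimal sketch, assuming an existing SparkContext named sc and a hypothetical JSON file (in recent Spark versions the SchemaRDD shows up as a DataFrame):

import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)
val people = sqlContext.read.json("hdfs:///data/people.json")   // schema inferred from the data
people.registerTempTable("people")                              // expose it to SQL

val adults = sqlContext.sql("SELECT name, age FROM people WHERE age >= 18")
adults.show()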
15. What is a Parquet file?
Parquet is a columnar file format supported by many other data processing systems. Spark SQL performs both read and write operations on Parquet files and considers it one of the best formats for big data analytics so far.
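Reading and writing Parquet from Spark SQL is a one-liner each; continuing the sketch above (the paths are hypothetical):

people.write.parquet("hdfs:///data/people.parquet")              // columnar, compressed output
val reloaded = sqlContext.read.parquet("hdfs:///data/people.parquet")
reloaded.printSchema()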
16. What file systems does Spark support?
  • Hadoop Distributed File System (HDFS).
  • Local File system.
  • S3
17. What is Yarn?
Similar to Hadoop, YARN is one of the key features in Spark, providing a central resource management platform to deliver scalable operations across the cluster. Running Spark on YARN requires a binary distribution of Spark that is built with YARN support.
18. List the functions of Spark SQL.
Spark SQL is capable of:
  • Loading data from a variety of structured sources.
  • Querying data using SQL statements, both inside a Spark program and from external tools that connect to Spark SQL through standard database connectors (JDBC/ODBC). For instance, using business intelligence tools like Tableau.
  • Providing rich integration between SQL and regular Python/Java/Scala code, including the ability to join RDDs and SQL tables, expose custom functions in SQL, and more.
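As a sketch of the last point, registering a custom function and calling it from SQL (reusing the sqlContext and the people table from the earlier sketch):

sqlContext.udf.register("initials", (name: String) => name.split(" ").map(_.head).mkString)
sqlContext.sql("SELECT name, initials(name) AS init FROM people").show()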
19. What are benefits of Spark over MapReduce?
  • Due to in-memory processing, Spark executes workloads around 10-100x faster than Hadoop MapReduce, which makes use of persistent storage for all of its data processing tasks.
  • Unlike Hadoop, Spark provides built-in libraries to perform multiple kinds of tasks from the same core: batch processing, streaming, machine learning and interactive SQL queries. Hadoop, by contrast, only supports batch processing.
  • Hadoop is highly disk-dependent, whereas Spark promotes caching and in-memory data storage.
  • Spark is capable of performing computations multiple times on the same dataset. This is called iterative computation, and there is no iterative computing implemented in Hadoop MapReduce (see the caching sketch below).
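A small sketch of the caching that makes repeated passes over the same dataset cheap in Spark (sc from a spark-shell session, hypothetical path):

val data   = sc.textFile("hdfs:///data/app.log").cache()   // kept in memory after the first action
val total  = data.count()                                  // first pass reads HDFS and fills the cache
val errors = data.filter(_.contains("ERROR")).count()      // later passes are served from memory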
20. What is Spark Executor?
When the SparkContext connects to a cluster manager, it acquires executors on nodes in the cluster. Executors are Spark processes that run computations and store data on the worker nodes. The final tasks produced by the SparkContext are transferred to the executors for execution.
21. Name types of Cluster Managers in Spark.
The Spark framework supports three major types of Cluster Managers:
  1. Standalone : a basic manager to set up a cluster.
  2. Apache Mesos : generalized/commonly-used cluster manager, also runs Hadoop MapReduce and other applications.
  3. Yarn : responsible for resource management in Hadoop
22. What do you understand by worker node?
Worker node refers to any node that can run the application code in a cluster.
23. Do you need to install Spark on all nodes of Yarn cluster while running Spark on Yarn?
No, because Spark runs on top of YARN: the executors run inside YARN containers, so only the machine that submits the job needs a Spark installation.

Sunday, January 29, 2017

Handling data with changing schema on Hadoop

User data is often unpredictable; even when we can predict a change coming our way, we need to prepare for it: make changes in our environment to accept the incoming change, accommodate it and absorb it.

With that being a fact of life, our design should allow for those changes to happen. With rigidly structured data warehouses gone (well, mostly), Hadoop-based data stores are the norm these days, and will likely be the default in a future not so distant. By definition, Hadoop allows all kinds of data to be stored on the cluster and happily provides various tools to process such data.

However, with different types of data out there on the cluster, any given tool needs to "know" what that data is: how to read it, interpret it and process it. That would be relatively straightforward if the data schema were static, as in, a file comes in from source A with a certain format, and that's about it. You can define programs/structures to work with it.

What if the source system decides to alter the dataset schema: add a few columns, change some column ordering, perhaps a data type too? What then? Would our program reading that dataset on the Hadoop cluster still work as is? No, it won't, because it would also require changes to reflect the changes sent from the source.

Assume that you made those changes too. So now our program is version 2, able to handle the dataset sent by the source (v2 as well). What about the older data lying around on the Hadoop cluster (v1, with the old schema)? How is that dataset going to be used/usable? What if that data was used by Hive tables? Would you have two (or more) sets of tables?

What if that data was exposed to some analytical programs, through R or similar? I am sure we don't want to rewrite/retest whole solutions just because the source system decided to change something.

And this scenario is not purely hypothetical, it happens around us, more often than not, just that we deal with it in our own patchwork styles, or solutions varying on a case to case basis.

Here I propose a somewhat generic solution, one that might work for all, well, nearly all use cases.

Avro is a data serialization system in which the schema is built into the file itself. As a result, programs reading the data file don't have to bother with interpreting it through an external schema.
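As a small Scala sketch of that point with the plain Avro API (the file name users.avro is hypothetical): the reader supplies no schema at all, it is taken from the file header.

import java.io.File
import org.apache.avro.file.DataFileReader
import org.apache.avro.generic.{GenericDatumReader, GenericRecord}

// No schema supplied here: DataFileReader picks it up from the file header.
val reader = new DataFileReader[GenericRecord](
  new File("users.avro"), new GenericDatumReader[GenericRecord]())

println(reader.getSchema.toString(true))   // the schema embedded in the file, pretty-printed

while (reader.hasNext) println(reader.next())
reader.close()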

Avro was designed by Doug Cutting for similar reasons, as a tool that is language and schema independent and therefore unrelated tools/technologies can deal with data files.

That being said, Avro is not a silver bullet in itself, but it handles schema change fairly nicely.

Basically, you store the data on HDFS in Avro formatted files. For that, an Avro schema needs to be created, which is a fairly straightforward JSON-structured file indicating the record structure. Take a look at an example here - 

{"namespace": "myExample.avro",  "type": "record",  "name": "User",  "fields": [      {"name": "name", "type": "string"},      {"name": "age",  "type": ["int", "null"]},      {"name": "address", "type": ["string", "null"]}  ] }
create external table ex_users (
    name    String,
    age     int,
    address String)
stored as AVRO
location '/apps/data/hive/ex_users.tbl';




When files are saved as Avro formatted files, the schema information is built into them, in plain text, and is parseable by programs like Hive and others so they can interpret the rest of the file.

For this to work, the tool has to rely on Avro. Look again at the hive table definition above.

With the clause "stored as AVRO" we are basically telling Hive to match the columns defined in Hive with the columns in the Avro files at that HDFS location, and to use whatever columns map to the Hive definition.
  
Avro is compressible and splittable, and it stores data in a binary format, providing some additional advantages.

When this Avro formatted file is exposed to programs like Hive, they don't have to worry about changes in schema, since they rely on the schema information in the file header. If the Hive / R schema asks for 5 columns but the file only has 3 of them (with names, not order/position of columns, as the matching criteria), those matching columns are projected to the consuming tool, and the rest are ignored.

Similarly, if the tool asks for data that is not present in a certain data file (remember our example: the v2 file with an additional column as against v1), the column that is not present is reported as null.

Now, look at the following data file (new avro schema - after source system decided to change)

{"namespace": "myExample.avro", "type": "record", "name": "User", "fields": [     {"name": "name", "type": "string"},     {"name": "age",  "type": ["int", "null"]},       {"name": "salary",  "type": ["float", "null"]},     {"name": "address", "type": ["string", "null"]} ]}

And the corresponding hive table definition - 

create external table ex_users (
    name    String,
    age     int,
    address String,
    salary  float)
stored as AVRO
location '/apps/data/hive/ex_users.tbl';


For this table definition, it is not a problem to query and report on a data file in the old schema (v1) where the column salary is not present. Hive would simply know, from the Avro schema definition in that file's header, that the file doesn't have that column, and it would gladly report it as null.
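The same behaviour can be sketched outside Hive with the plain Avro API: the consumer supplies its expected (v2) reader schema while reading an old v1 file, and the missing salary column resolves to its default, null. (For a null default, Avro wants null listed first in the union, a small tweak to the v2 schema above; the file name is hypothetical.)

import java.io.File
import org.apache.avro.Schema
import org.apache.avro.file.DataFileReader
import org.apache.avro.generic.{GenericDatumReader, GenericRecord}

// The v2 "reader" schema: salary is new, so it carries a default value.
val readerSchema = new Schema.Parser().parse(
  """{"namespace": "myExample.avro", "type": "record", "name": "User",
    | "fields": [
    |   {"name": "name",    "type": "string"},
    |   {"name": "age",     "type": ["int", "null"]},
    |   {"name": "salary",  "type": ["null", "float"], "default": null},
    |   {"name": "address", "type": ["string", "null"]}
    | ]}""".stripMargin)

// The writer schema is left null: DataFileReader fills it in from the v1 file's header.
val datumReader = new GenericDatumReader[GenericRecord](null, readerSchema)
val fileReader  = new DataFileReader[GenericRecord](new File("users_v1.avro"), datumReader)

while (fileReader.hasNext) {
  val user = fileReader.next()
  println(s"${user.get("name")} -> salary = ${user.get("salary")}")   // salary prints null for v1 data
}
fileReader.close()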

On the other hand, the new structure has a different order of columns, which is also fine for the Hive/Avro combination, since the schema definition helps match up the columns and report the right content in the right columns.

Similar analogies can be derived for other data consumers who need to interpret the schemas for data files on hdfs.

In comparison to some other storage formats, we do lose some points on the performance scale, but for the use case, where schema is changing and we want to be able to handle the changes with little or no effort, this is a better fit than many others.

Tuesday, December 20, 2016

Hadoop - Small Files vs Big Files

Credits- https://blogs.msdn.microsoft.com/cindygross/2015/05/04/hadoop-likes-big-files/

One of the frequently overlooked yet essential best practices for Hadoop is to prefer fewer, bigger files over more, smaller files. How small is too small and how many is too many? How do you stitch together all those small Internet of Things files into files "big enough" for Hadoop to process efficiently?
The Problem
One performance best practice for Hadoop is to have fewer large files as opposed to large numbers of small files. A related best practice is to not partition “too much”. Part of the reason for not over-partitioning is that it generally leads to larger numbers of smaller files.
Too small is smaller than HDFS block size (chunk size), or realistically small is something less than several times larger than chunk size. A very, very rough rule of thumb is files should be at least 1GB each and no more than maybe around 10,000-ish files per table. These numbers, especially the maximum total number of files per table, vary depending on many factors. However, it gives you a reference point. The 1GB is based on multiples of the chunk size while the 2nd is honestly a bit of a guess based on a typical small cluster.
Why Is It Important?
One reason for this recommendation is that Hadoop's name node service keeps track of all the files and of where the internal chunks of the individual files are. The more files it has to track, the more memory it needs on the head node and the longer it takes to build a job execution plan. The number and size of files also affects how memory is used on each node.
Let’s say your chunk size is 256MB. That’s the maximum size of each piece of the file that Hadoop will store per node. So if you have 10 nodes and a single 1GB file it would be split into 4 chunks of 256MB each and stored on 4 of those nodes (I’m ignoring the replication factor for this discussion). If you have 1000 files that are 1MB each (still a total data size of ~1GB) then every one of those files is a separate chunk and 1000 chunks are spread across those 10 nodes. NOTE: In Azure and WASB this happens somewhat differently behind the scenes – the data isn’t physically chunked up when initially stored but rather chunked up at the time a job runs.
With the single 1GB file the name node has 5 things to keep track of – the logical file plus the 4 physical chunks and their associated physical locations. With 1000 smaller files the name node has to track the logical file plus 1000 physical chunks and their physical locations. That uses more memory and results in more work when the head node service uses the file location information to build out the plan for how it will split out any Hadoop job into tasks across the many nodes. When we’re talking about systems that often have TBs or PBs of data the difference between small and large files can add up quickly.
The other problem comes at the time that the data is read by a Hadoop job. When the job runs on each node it loads the files the task tracker identified for it to work with into memory on that local node (in WASB the chunking is done at this point). When there are more files to be read for the same amount of data it results in more work and slower execution time for each task within each job. Sometimes you will see hard errors when operating system limits are hit related to the number of open files. There is also more internal work involved in reading the larger number of files and combining the data.
Stitching
There are several options for stitching files together.
  • Combine the files as they land using the code that moves the files. This is the most performant and efficient method in most cases.
  • INSERT into new Hive tables (directories) which creates larger files under the covers. The output file size can be controlled with settings like hive.merge.smallfiles.avgsize and hive.merge.size.per.task.
  • Use a combiner in Pig to load the many small files into bigger splits.
  • Use the HDFS FileSystem concat API (http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileSystem.html#concat); see the sketch after this list.
  • Write custom stitching code and make it a JAR.
  • Enable the Hadoop Archive (HAR). This is not very efficient for this scenario but I am including it for completeness.
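As a sketch of the FileSystem concat route, in Scala against the Hadoop FileSystem API. The paths are hypothetical, and HDFS places restrictions on concat (the target must already exist and be non-empty, block sizes must match, and in some versions all files must sit in the same directory), so treat this as a starting point rather than a drop-in utility:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

val fs = FileSystem.get(new Configuration())

val target  = new Path("/apps/data/stitched/part-00000")            // existing, non-empty target file
val sources = fs.globStatus(new Path("/apps/data/incoming/part-*")) // the many small files
                .map(_.getPath)
                .filterNot(_ == target)

fs.concat(target, sources)   // moves the source files' blocks onto the target and removes the sources
println(s"Stitched ${sources.length} files into $target")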
There are several writeups out there that address the details of each of these methods so I won’t repeat them.
The key here is to work with fewer, larger files as much as possible in Hadoop. The exact steps to get there will vary depending on your specific scenario.

Tuesday, November 15, 2016

Eclipse - installing Scala plugin manually?

I have been playing around with Scala for some time, and was always using the Scala IDE (www.scala-ide.org) which is based on a relatively older version of Eclipse (Luna).

I recently discovered that you can install the Scala plug-in on a regular Eclipse installation.

Just add the following url as a new update site in your local eclipse installation and you'd be able to install the scala plugin just like that -


http://download.scala-ide.org/sdk/lithium/e44/scala211/stable/site

Saturday, July 23, 2016

Links to free big-data-sets



Many people who are starting their journey with big data and analytics find it hard to get their hands on the right kind of data to play or experiment with.

Most of the time, people have enthusiasm, they are learning the skill too, but they just don't have the right kind of dataset to apply their newly acquired skills.

Democratising data has been at the forefront of discussions for many data pioneers. Through their efforts and with some re-alignment of technology priorities, some government bodies have opened up their datasets to the public.

As a result, here is a set of links (reproduced) to some of the free sources.
  1. Data.gov http://data.gov The US Government pledged last year to make all government data available freely online. This site is the first stage and acts as a portal to all sorts of amazing information on everything from climate to crime. 
  2. US Census Bureau http://www.census.gov/data.html A wealth of information on the lives of US citizens covering population data, geographic data and education. 
  3. Socrata is another interesting place to explore government-related data, with some visualisation tools built-in. 
  4. European Union Open Data Portal http://open-data.europa.eu/en/data/ As the above, but based on data from European Union institutions. 
  5. Data.gov.uk http://data.gov.uk/ Data from the UK Government, including the British National Bibliography – metadata on all UK books and publications since 1950. 
  6. Canada Open Data is a pilot project with many government and geospatial datasets. 
  7. Datacatalogs.org offers open government data from US, EU, Canada, CKAN, and more. 
  8. The CIA World Factbook https://www.cia.gov/library/publications/the-world-factbook/ Information on history, population, economy, government, infrastructure and military of 267 countries. 
  9. Healthdata.gov https://www.healthdata.gov/ 125 years of US healthcare data including claim-level Medicare data, epidemiology and population statistics. 
  10. NHS Health and Social Care Information Centre http://www.hscic.gov.uk/home Health data sets from the UK National Health Service. 
  11. UNICEF offers statistics on the situation of women and children worldwide. 
  12. World Health Organization offers world hunger, health, and disease statistics. 
  13. Amazon Web Services public datasets http://aws.amazon.com/datasets Huge resource of public data, including the 1000 Genome Project, an attempt to build the most comprehensive database of human genetic information and NASA ’s database of satellite imagery of Earth. 
  14. Facebook Graph https://developers.facebook.com/docs/graph-api Although much of the information on users’ Facebook profile is private, a lot isn’t – Facebook provide the Graph API as a way of querying the huge amount of information that its users are happy to share with the world (or can’t hide because they haven’t worked out how the privacy settings work). 
  15. Face.com: A fascinating tool for facial recognition data. 
  16. UCLA makes some of the data from its courses public. 
  17. Data Market is a place to check out data related to economics, healthcare, food and agriculture, and the automotive industry. 
  18. Google Public data explorer includes data from world development indicators, OECD, and human development indicators, mostly related to economics data and the world. 
  19. Junar is a data scraping service that also includes data feeds. 
  20. Buzzdata is a social data sharing service that allows you to upload your own data and connect with others who are uploading their data. 
  21. Gapminder http://www.gapminder.org/data/ Compilation of data from sources including the World Health Organization and World Bank covering economic, medical and social statistics from around the world. 
  22. Google Trends http://www.google.com/trends/explore Statistics on search volume (as a proportion of total search) for any given term, since 2004. 
  23. Google Finance https://www.google.com/finance 40 years’ worth of stock market data, updated in real time. 
  24. Google Books Ngrams http://storage.googleapis.com/books/ngrams/books/datasetsv2.html Search and analyze the full text of any of the millions of books digitised as part of the Google Books project. 
  25. National Climatic Data Center http://www.ncdc.noaa.gov/data-access/quick-links#loc-clim Huge collection of environmental, meteorological and climate data sets from the US National Climatic Data Center. The world’s largest archive of weather data. 
  26. DBPedia http://wiki.dbpedia.org Wikipedia is comprised of millions of pieces of data, structured and unstructured on every subject under the sun. DBPedia is an ambitious project to catalogue and create a public, freely distributable database allowing anyone to analyze this data. 
  27. New York Times http://developer.nytimes.com/docs  Searchable, indexed archive of news articles going back to 1851. 
  28. Freebase http://www.freebase.com/ A community-compiled database of structured data about people, places and things, with over 45 million entries. 
  29. Million Song Data Set http://aws.amazon.com/datasets/6468931156960467 Metadata on over a million songs and pieces of music. Part of Amazon Web Services. 
  30. UCI Machine Learning Repository is a dataset specifically pre-processed for machine learning. 
  31. Financial Data Finder at OSU offers a large catalog of financial data sets. 
  32. Pew Research Center offers its raw data from its fascinating research into American life. 
  33. The BROAD Institute offers a number of cancer-related datasets. 

Credit to Forbes article at

http://www.forbes.com/sites/bernardmarr/2016/02/12/big-data-35-brilliant-and-free-data-sources-for-2016/#5b2a54cf6796

Friday, June 19, 2015

Teradata Data type abbreviation - described

Teradata data types (as reported in DBC.Columns.ColumnType) can be cryptic and not always easy to remember. Here's a ready reckoner - 

Abbreviation    Equivalent English :)
A1              ARRAY
AN              MULTI-DIMENSIONAL ARRAY
AT              TIME
BF              BYTE
BO              BLOB
BV              VARBYTE
CF              CHARACTER
CO              CLOB
CV              VARCHAR
D               DECIMAL
DA              DATE
DH              INTERVAL DAY TO HOUR
DM              INTERVAL DAY TO MINUTE
DS              INTERVAL DAY TO SECOND
DY              INTERVAL DAY
F               FLOAT
HM              INTERVAL HOUR TO MINUTE
HS              INTERVAL HOUR TO SECOND
HR              INTERVAL HOUR
I               INTEGER
I1              BYTEINT
I2              SMALLINT
I8              BIGINT
JN              JSON
MI              INTERVAL MINUTE
MO              INTERVAL MONTH
MS              INTERVAL MINUTE TO SECOND
N               NUMBER
PD              PERIOD(DATE)
PM              PERIOD(TIMESTAMP WITH TIME ZONE)
PS              PERIOD(TIMESTAMP)
PT              PERIOD(TIME)
PZ              PERIOD(TIME WITH TIME ZONE)
SC              INTERVAL SECOND
SZ              TIMESTAMP WITH TIME ZONE
TS              TIMESTAMP
TZ              TIME WITH TIME ZONE
UT              UDT Type
XM              XML
YM              INTERVAL YEAR TO MONTH
YR              INTERVAL YEAR
++              TD_ANYTYPE