Tuesday, December 20, 2016

Hadoop - Small Files vs Big Files

Credits: https://blogs.msdn.microsoft.com/cindygross/2015/05/04/hadoop-likes-big-files/

One of the frequently overlooked yet essential best practices for Hadoop is to prefer fewer, bigger files over more, smaller files. How small is too small and how many is too many? How do you stitch together all those small Internet of Things files into files "big enough" for Hadoop to process efficiently?
The Problem
One performance best practice for Hadoop is to have fewer large files as opposed to large numbers of small files. A related best practice is to not partition “too much”. Part of the reason for not over-partitioning is that it generally leads to larger numbers of smaller files.
Too small is anything smaller than the HDFS block size (chunk size); realistically, a file is still small unless it is several times larger than the chunk size. A very rough rule of thumb is that files should be at least 1GB each, with no more than somewhere around 10,000 files per table. These numbers, especially the maximum total number of files per table, vary depending on many factors, but they give you a reference point. The 1GB figure is based on multiples of the chunk size, while the second is honestly a bit of a guess based on a typical small cluster.
Why Is It Important?
One reason for this recommendation is that Hadoop’s name node service keeps track of all the files and of where the internal chunks of the individual files are. The more files it has to track, the more memory it needs on the head node and the longer it takes to build a job execution plan. The number and size of files also affect how memory is used on each node.
Let’s say your chunk size is 256MB. That’s the maximum size of each piece of the file that Hadoop will store per node. So if you have 10 nodes and a single 1GB file, it would be split into 4 chunks of 256MB each and stored on 4 of those nodes (I’m ignoring the replication factor for this discussion). If you have 1000 files that are 1MB each (still a total data size of ~1GB), then every one of those files is a separate chunk and 1000 chunks are spread across those 10 nodes. NOTE: In Azure and WASB this happens somewhat differently behind the scenes – the data isn’t physically chunked up when initially stored but rather chunked up at the time a job runs.
With the single 1GB file the name node has 5 things to keep track of – the logical file plus the 4 physical chunks and their associated physical locations. With 1000 smaller files the name node has to track the 1000 logical files plus their 1000 physical chunks and physical locations. That uses more memory and results in more work when the head node service uses the file location information to build out the plan for how it will split any Hadoop job into tasks across the many nodes. When we’re talking about systems that often hold TBs or PBs of data, the difference between small and large files adds up quickly.
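To make the arithmetic concrete, here is a small illustrative sketch in Scala (assuming a 256MB block size and ignoring replication, as above) of how the number of entries the name node has to track grows:

// Illustrative only: 256 MB blocks, replication ignored.
val blockSizeMB = 256.0

def blocksFor(fileSizeMB: Double): Long =
  math.max(1L, math.ceil(fileSizeMB / blockSizeMB).toLong)

// Single 1 GB file: 1 logical file + 4 physical chunks = 5 entries to track.
val oneBigFile = 1 + blocksFor(1024)

// 1,000 files of 1 MB each: 1,000 logical files, each one chunk = 2,000 entries.
val manySmallFiles = 1000 * (1 + blocksFor(1))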
The other problem comes at the time the data is read by a Hadoop job. When the job runs on each node, it loads the files the task tracker identified for it into memory on that local node (in WASB the chunking is done at this point). When there are more files to be read for the same amount of data, the result is more work and a slower execution time for each task within each job. Sometimes you will see hard errors when operating system limits on the number of open files are hit. There is also more internal work involved in reading the larger number of files and combining the data.
Stitching
There are several options for stitching files together.
  • Combine the files as they land using the code that moves the files. This is the most performant and efficient method in most cases.
  • INSERT into new Hive tables (directories) which creates larger files under the covers. The output file size can be controlled with settings like hive.merge.smallfiles.avgsize and hive.merge.size.per.task.
  • Use a combiner in Pig to load the many small files into bigger splits.
  • Use the HDFS FileSystem concat API: http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileSystem.html#concat (a small sketch follows this list).
  • Write custom stitching code and make it a JAR.
  • Enable the Hadoop Archive (HAR). This is not very efficient for this scenario but I am including it for completeness.
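As an illustration of the concat route, here is a minimal Scala sketch against the Hadoop FileSystem API. The paths are hypothetical, and HDFS places restrictions on concat (for example, the files generally need to live in the same directory), so treat it as a starting point rather than a drop-in tool.

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object ConcatSmallFiles {
  def main(args: Array[String]): Unit = {
    val fs = FileSystem.get(new Configuration())

    // Hypothetical layout: merge two small part files into the first one.
    val target  = new Path("/data/events/part-0000")
    val sources = Array(new Path("/data/events/part-0001"),
                        new Path("/data/events/part-0002"))

    // Moves the blocks of the source files onto the end of the target;
    // the source files no longer exist afterwards.
    fs.concat(target, sources)

    fs.close()
  }
}

The Hive route is similar in spirit: the hive.merge.* settings mentioned above tell Hive to run a merge step so that the INSERT produces fewer, larger files.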
There are several writeups out there that address the details of each of these methods so I won’t repeat them.
The key here is to work with fewer, larger files as much as possible in Hadoop. The exact steps to get there will vary depending on your specific scenario.

Wednesday, November 16, 2016

Eclipse - installing Scala plugin manually?

I have been playing around with Scala for some time, and was always using the Scala IDE (www.scala-ide.org), which is based on a somewhat older version of Eclipse (Luna).

I recently discovered that you can install the Scala plug-in on a regular Eclipse installation.

Just add the following URL as a new update site in your local Eclipse installation and you'll be able to install the Scala plugin just like that -


http://download.scala-ide.org/sdk/lithium/e44/scala211/stable/site

Sunday, July 24, 2016

Links to free big-data-sets



Many people who are starting their journey with big data and analytics find it hard to get their hands on the right kind of data to play or experiment with.

Most of the time people have the enthusiasm, and they are learning the skill too, but they just don't have the right kind of dataset to apply their newly acquired skills to.

Democratising data has been at the forefront of discussions for many data pioneers. Through their efforts and with some re-alignment of technology priorities, some government bodies have opened up their datasets to the public.

As a result, here is a set of links (reproduced) to some of the free sources.
  1. Data.gov http://data.gov The US Government pledged last year to make all government data available freely online. This site is the first stage and acts as a portal to all sorts of amazing information on everything from climate to crime. 
  2. US Census Bureau http://www.census.gov/data.html A wealth of information on the lives of US citizens covering population data, geographic data and education. 
  3. Socrata is another interesting place to explore government-related data, with some visualisation tools built-in. 
  4. European Union Open Data Portal http://open-data.europa.eu/en/data/ As the above, but based on data from European Union institutions. 
  5. Data.gov.uk http://data.gov.uk/ Data from the UK Government, including the British National Bibliography – metadata on all UK books and publications since 1950. 
  6. Canada Open Data is a pilot project with many government and geospatial datasets. 
  7. Datacatalogs.org offers open government data from US, EU, Canada, CKAN, and more. 
  8. The CIA World Factbook https://www.cia.gov/library/publications/the-world-factbook/ Information on history, population, economy, government, infrastructure and military of 267 countries.
  9. Healthdata.gov https://www.healthdata.gov/ 125 years of US healthcare data including claim-level Medicare data, epidemiology and population statistics. 
  10. NHS Health and Social Care Information Centre http://www.hscic.gov.uk/home Health data sets from the UK National Health Service. 
  11. UNICEF offers statistics on the situation of women and children worldwide. 
  12. World Health Organization offers world hunger, health, and disease statistics. 
  13. Amazon Web Services public datasets http://aws.amazon.com/datasets Huge resource of public data, including the 1000 Genome Project, an attempt to build the most comprehensive database of human genetic information, and NASA's database of satellite imagery of Earth.
  14. Facebook Graph https://developers.facebook.com/docs/graph-api Although much of the information on users’ Facebook profiles is private, a lot isn’t – Facebook provides the Graph API as a way of querying the huge amount of information that its users are happy to share with the world (or can’t hide because they haven’t worked out how the privacy settings work).
  15. Face.com: A fascinating tool for facial recognition data. 
  16. UCLA makes some of the data from its courses public. 
  17. Data Market is a place to check out data related to economics, healthcare, food and agriculture, and the automotive industry. 
  18. Google Public data explorer includes data from world development indicators, OECD, and human development indicators, mostly related to economics data and the world. 
  19. Junar is a data scraping service that also includes data feeds. 
  20. Buzzdata is a social data sharing service that allows you to upload your own data and connect with others who are uploading their data. 
  21. Gapminder http://www.gapminder.org/data/ Compilation of data from sources including the World Health Organization and World Bank covering economic, medical and social statistics from around the world. 
  22. Google Trends http://www.google.com/trends/explore Statistics on search volume (as a proportion of total search) for any given term, since 2004.
  23. Google Finance https://www.google.com/finance 40 years’ worth of stock market data, updated in real time. 
  24. Google Books Ngrams http://storage.googleapis.com/books/ngrams/books/datasetsv2.html Search and analyze the full text of any of the millions of books digitised as part of the Google Books project.
  25. National Climatic Data Center http://www.ncdc.noaa.gov/data-access/quick-links#loc-clim Huge collection of environmental, meteorological and climate data sets from the US National Climatic Data Center. The world’s largest archive of weather data. 
  26. DBPedia http://wiki.dbpedia.org Wikipedia is comprised of millions of pieces of data, structured and unstructured on every subject under the sun. DBPedia is an ambitious project to catalogue and create a public, freely distributable database allowing anyone to analyze this data. 
  27. New York Times http://developer.nytimes.com/docs  Searchable, indexed archive of news articles going back to 1851. 
  28. Freebase http://www.freebase.com/ A community-compiled database of structured data about people, places and things, with over 45 million entries. 
  29. Million Song Data Set http://aws.amazon.com/datasets/6468931156960467 Metadata on over a million songs and pieces of music. Part of Amazon Web Services. 
  30. UCI Machine Learning Repository is a dataset specifically pre-processed for machine learning. 
  31. Financial Data Finder at OSU offers a large catalog of financial data sets. 
  32. Pew Research Center offers its raw data from its fascinating research into American life. 
  33. The BROAD Institute offers a number of cancer-related datasets. 

Credit to Forbes article at

http://www.forbes.com/sites/bernardmarr/2016/02/12/big-data-35-brilliant-and-free-data-sources-for-2016/#5b2a54cf6796

Friday, June 19, 2015

Teradata Data type abbreviation - described

Teradata data types (as reported in DBC.Columns.ColumnType) can be cryptic and not always easy to remember.  Here's a ready reckoner -

Abbreviation - Equivalent English :)
A1 - ARRAY
AN - MULTI-DIMENSIONAL ARRAY
AT - TIME
BF - BYTE
BO - BLOB
BV - VARBYTE
CF - CHARACTER
CO - CLOB
CV - VARCHAR
D - DECIMAL
DA - DATE
DH - INTERVAL DAY TO HOUR
DM - INTERVAL DAY TO MINUTE
DS - INTERVAL DAY TO SECOND
DY - INTERVAL DAY
F - FLOAT
HM - INTERVAL HOUR TO MINUTE
HS - INTERVAL HOUR TO SECOND
HR - INTERVAL HOUR
I - INTEGER
I1 - BYTEINT
I2 - SMALLINT
I8 - BIGINT
JN - JSON
MI - INTERVAL MINUTE
MO - INTERVAL MONTH
MS - INTERVAL MINUTE TO SECOND
N - NUMBER
PD - PERIOD(DATE)
PM - PERIOD(TIMESTAMP WITH TIME ZONE)
PS - PERIOD(TIMESTAMP)
PT - PERIOD(TIME)
PZ - PERIOD(TIME WITH TIME ZONE)
SC - INTERVAL SECOND
SZ - TIMESTAMP WITH TIME ZONE
TS - TIMESTAMP
TZ - TIME WITH TIME ZONE
UT - UDT Type
XM - XML
YM - INTERVAL YEAR TO MONTH
YR - INTERVAL YEAR
++ - TD_ANYTYPE
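If you ever need to decode these codes programmatically (say, after pulling DBC.Columns over JDBC), a simple lookup is enough. Here is a minimal Scala sketch covering just a handful of the codes above; extend it with the full table as needed.

val columnTypeNames: Map[String, String] = Map(
  "CF" -> "CHARACTER",
  "CV" -> "VARCHAR",
  "D"  -> "DECIMAL",
  "DA" -> "DATE",
  "I"  -> "INTEGER",
  "I8" -> "BIGINT",
  "TS" -> "TIMESTAMP",
  "SZ" -> "TIMESTAMP WITH TIME ZONE"
)

// ColumnType is a fixed-width character column, so values may come back space-padded.
def describe(code: String): String =
  columnTypeNames.getOrElse(code.trim, s"Unknown type code: $code")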

Thursday, May 7, 2015

Hadoop Meetup on the sidelines of Strata Hadoop Conference - Part 2

Read part 1 of this here

Day 2 of the meetup was equally exciting, if not more so.  Lined up were talks from Qubit and Google, William Hill (a surprise for me - more on that later) and then PostCodeAnywhere, all very promising from the synopses.

Google & Qubit showcased what is essentially a stream processing engine with pluggable components, many of which can be written in different technologies and programming languages.

Of course, Google Cloud Dataflow is much more than just a stream processing engine; however, from a real-time data ingestion perspective, that feature is pretty significant.

A completely managed system, it works on the publish-subscribe (pub-sub) model.  As Reza put it, “pub-sub is not just a data delivery mechanism, it's used as a glue to hold the complete system together”.  Pluggable components are another differentiator for Google’s offering; in today’s demo they showcased Bigtable as one of the consumers at the end.

From my own knowledge of stream processing, which is not significant in any way, I could relate to many similarities with IBM’s InfoSphere Streams and some with Apache Kafka.  However, a question about comparisons with these systems went unanswered by Google (though in very good spirit; in a chat with the speaker Reza later on, it came across as more of a philosophical avoidance of the question than anything else).

The William Hill talk (by Peter Morgan, their head of engineering) was a genuine surprise, at least for me.  Perhaps out of my own ignorance, I hadn't realised that their systems are far more sophisticated and handle far more load than I would have imagined.  As an example, they process 160TB of data through their systems on a daily basis.

Their system manages many complexities across its main components, including the betting engine and the settlement engine, among others.

William Hill supports an open API as well, enabling app developers to pick up data elements and innovate.  However, for obvious reasons, only very limited data is opened up to the public domain.  Would that be a deterrent for app developers - not having enough data?  For example, if I wanted an app to report who is betting on a certain game, cross-referenced with geolocation data, I couldn't do that, since William Hill doesn't publish demographic data.  I personally feel alright with that; there is a possibility that many of those data elements could be used in ways that influence the betting system itself, becoming counter-productive.

I would imagine their IT systems to be among the most capable around, to be able to manage such data volumes with such speed and accuracy.  A commendable job.  I will probably write a separate post on their architecture once I get my hands on the presentation slides (in a couple of days, maybe).

The talk from PostCodeAnywhere was the most educational one for me personally.  I got to understand a bit about Markov models and chains, and how they can be used for machine learning.  Very interesting stuff there too.

Apache Spark is being seen more and more as the tool for performing analytics on the fly, especially on large volumes of data.  It would be very interesting to see how the analytical capabilities of R and Python compare with what Spark offers.

Speaking to another attendee today, it came out that people prefer to use R more and more for data massaging and cleansing; however, it's not seen as fit for the heavy lifting required for the real analytical and/or predictive pieces.  For those areas, people still prefer to use Python.


IBM’s bigR is a possible contender for the job; they talk about having optimised R for a Hadoop cluster and enabled it to work on top of HDFS.  However, bigR is not open source, and that could be its biggest challenge to adoption.

Wednesday, May 6, 2015

Hadoop Meetup on the sidelines of Strata Hadoop Conference - Part 1

Not being able to make it to the main conference (Strata Hadoop London 2015), the evening meet-ups were my consolation - a way of getting in touch with it as much as possible.

In my view, these conferences/events often help one get to know the recent developments in the space, mostly showcasing what's being done with a given technology, what's coming up (future developments, innovations) and people's experiences with the technology, both good (the famous savings use cases) and bad (challenges faced in achieving production readiness, if ever).

Last evening, on day 1 of the conference, I ended up attending one of the meet-ups.  It was particularly useful for me, for a couple of reasons.

There was a talk on the new execution engine for Hive, i.e. Hive running on Spark.  Always keen on the internal workings of a complex piece of software (or hardware, for that matter), I was very happy to be able to listen directly to a person responsible for much of the development on Hive.  I have an audio recording of the whole talk, though I am hopeful that the conference organisers will put up the video on their website anyway.

Phill's talk about his experiences getting Hadoop on its feet, and how they orchestrated Hadoop as a PaaS within BT (they seem to call it HaaS there), was insightful.  It showed two things to me - architects always have to "find the funding" for innovations and new tech to be brought into the organisation :) Also, security on Hadoop is "doable", as his use case proved.  There are reliable tools and solutions which can help achieve enterprise-level security for a Hadoop cluster.

Another interesting talk was Dato's.  Dato is a machine learning/modelling tool which claims to be considerably quicker than many others, allows the data to be consumed in place (like Hadoop) and supports HDFS integration.  I will be sure to follow up on Dato with the organisation.  For me, this is one of the key problems of the future: data is too big, and the modelling algorithm has to be able to consume the training data in place, since it's just not practical to move tera/petabytes of data to where the program is.  IBM BigR is doing something similar as well.


Finally, another interesting talk was from the Big Data Boards team.  They talked about how they are building cluster hardware for hosting small Hadoop clusters.  An interesting proposition: having your own Hadoop cluster running on a desk in a corner of your office, with no need to go to the likes of AWS for hosting the cluster.  They say that many universities and others are already using the clusters they have built for real-life experiments.  A very interesting space for me.

Saturday, April 4, 2015

Data Sets


Some of the publicly available datasets are listed here.  This is a continuously evolving page, and therefore might not always be 100% up to date.  For licensing information, please refer to each dataset's own licensing page.  I take no responsibility for the licensing/distribution of the datasets.

1. Amazon’s Ratings dataset


2. imdb movies dataset


Monday, February 23, 2015

Weighted trust graph for authentication

During the hackathon (discussed in an earlier post), I met Gary.  He had come in to play mentor but, since he couldn't devote enough time, ended up being a guest.

After engaging with Neo4j at a few meetups and understanding the database a bit, I was contemplating using graph databases for authentication.  It might have consequent applications in fraud analytics too, where graph databases are already used [1].

During the discussion of the idea, Gary suggested moulding it differently, possibly using trust networks/graphs, wherein nodes (entities, i.e. people, organisations, etc.) are related to each other through weighted, directed relationships.  The weight of such a relationship can be deduced in multiple ways, e.g. by periodic algorithms similar to search engine ranking algorithms, or by asking people for their trust level of others on a scale of 1 to x, where x is a hypothetical standard scale that can be used as a yardstick across the network for determining the level of trust.
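As a rough sketch of what such a structure might look like in code, here is a minimal Scala model.  The node identifiers, the multiplicative one-hop scoring and the assumption that the 1-to-x scores have been normalised into 0-to-1 weights are all illustrative choices, not a worked-out design.

final case class TrustEdge(from: String, to: String, weight: Double) // weight assumed in [0, 1]

class TrustGraph(edges: Seq[TrustEdge]) {
  private val byFrom: Map[String, Seq[TrustEdge]] = edges.groupBy(_.from)

  // Direct trust, if any, from one node to another.
  def directTrust(from: String, to: String): Option[Double] =
    byFrom.getOrElse(from, Seq.empty).find(_.to == to).map(_.weight)

  // Naive transitive trust over one intermediary: multiply edge weights and
  // take the strongest path. Real scoring (ranking-style algorithms, decay,
  // periodic recalculation) would be more involved.
  def trustViaOneHop(from: String, to: String): Option[Double] = {
    val hops = for {
      e1 <- byFrom.getOrElse(from, Seq.empty)
      e2 <- byFrom.getOrElse(e1.to, Seq.empty) if e2.to == to
    } yield e1.weight * e2.weight
    if (hops.isEmpty) None else Some(hops.max)
  }
}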

While researching some more, I found that similar research has been done in this space [2], though applications are few and far between.

It was also pointed out in the discussion that banks don't really have a huge interest in preventing this kind of crime.  The view was that, since banks already provision a certain amount in their balance sheets for these "potential" thefts, they don't really bother so much.

I believe that financial institutions need to attack these fraud crimes as a single unit, joining hands and leveraging the best of research and technology to minimise the crime.  The technology exists to provide up-to-the-moment information on these events; some more innovation and research is needed to bring the whole picture together into something that looks like a "solution".





[1] http://info.neotechnology.com/rs/neotechnology/images/Fraud%20Detection%20Using%20GraphDB%20-%202014.pdf?_ga=1.182367911.1656585956.1417700858
[2] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.212.9978&rep=rep1&type=pdf

Tuesday, February 17, 2015

Hackathon - Fintechathon

A hackathon is a hacking marathon wherein many people are invited to attack problems around a theme.

I recently attended a hackathon over the Valentine's weekend.  Organised by StartupBootCamp Fintech, it was attended by about 100 people.  Many ideas, many teams, and some partners, i.e. corporates with their own challenges.

I was initially teamless, but then found some others who were in my situation.  We formed a team around my favourite topic, data analytics.  We had two business development guys, Adam and Oksana, two Java programmers, Nelson and Nick, and a mobile app developer, Vlad.

Hackathon teams are usually formed around ideas: someone with an idea takes ownership and collects a team around it.  Things are focused from moment one, and the march forward is fairly disciplined and fast - that's why the name hackathon... keep hacking, for long, long days and nights.

We had it the other way round: all of us were teamless and were therefore put together as a team, with no idea to start with.  As a result, we spent the better part of Friday evening and Saturday zeroing in on the problem to attack.

Finally, we decided to go ahead with a data analytics piece.  I won't chalk out the details here, but it's something that the marketing guys always love: knowing when their customers are heading towards a life event and could therefore be offered some product.

By Sunday morning, we had lost two team members, one to a different idea and one to sleep. Vlad hadn't slept in 4 nights, so he kept sleeping much of Sunday.

As a result, we ended up as a team without anyone who could do any UI design, and therefore had only some backend API calls and some analytics pieces, with nothing to show off.

The result was that we couldn't show a working model in our pitch presentation and had to be content with slides alone, which tried to describe our idea to the judges.

Of course we lost, but it was still a very nicely spent weekend: I met some very nice people, made some contacts, and possibly found a future for the idea.