User data is often unpredictable; even when we can see a change coming our way, we need to prepare for it and adjust our environment to accept, accommodate and absorb it. With that being a fact of life, our designs should allow for those changes to happen. With structured data warehouses gone (well, mostly), Hadoop-based data stores are the norm these days and will be the default in the not-so-distant future. By design, Hadoop allows all kinds of data to be stored on the cluster and happily provides various tools to process that data.
However, with different types of data out there on the cluster, any given tool needs to "know" what that data is: how to read it, interpret it and process it. That would be relatively straightforward if the data schema were static; a file comes in from source A with a certain format, and that's about it. You can define programs/structures to work with it.
But what if the source system decides to alter the dataset schema: add a few columns, change some column ordering, perhaps a data type too? Would our program reading that dataset on the Hadoop cluster still work as is? No, it won't, because it would also need to change to reflect the changes sent from the source.
Assume you made those changes too. Now our program is version 2, able to handle the dataset sent by the source (v2 as well). But what about the older data lying around on the Hadoop cluster (v1, with the old schema)? How is that dataset going to be used, or even usable? If that data were exposed through hive tables, would you keep two (or more) sets of tables?
What if that data were exposed to analytical programs, through R or similar? Surely we don't want to rewrite and retest whole solutions just because the source system decided to change something.
And this scenario is not purely hypothetical; it happens around us more often than not. We just deal with it in our own patchwork styles, with solutions varying from case to case.
Here I propose a somewhat generic solution, one that might work
for all, well, nearly all use cases.
Avro is a data serialization system in which the schema is built into the file itself. As a result, programs reading the data file don't have to interpret it through an external schema.
Avro was designed by Doug Cutting for precisely such reasons: it is language independent, so otherwise unrelated tools/technologies can deal with the same data files.
That being said, Avro is not a silver bullet in itself, but it handles schema change fairly nicely.
Basically, you store the data on hdfs in Avro formatted files. For that, an Avro schema needs to be created, which is a fairly straightforward json file describing the record structure. Take a look at an example here -
{"namespace": "myExample.avro", "type": "record", "name": "User", "fields": [ {"name": "name", "type": "string"}, {"name": "age", "type": ["int", "null"]}, {"name": "address", "type": ["string", "null"]} ] }
When files are saved as Avro formatted files, the schema information is built into them, in plain text, and programs like hive can parse it to interpret the rest of the file.
For this to work, the consuming tool has to rely on Avro. Look at this hive table definition -

create external table ex_users (
  name    String,
  age     int,
  address String
)
stored as AVRO
location '/apps/data/hive/ex_users.tbl';

With the clause "stored as AVRO" we are basically telling hive to match the columns defined in hive with the columns in the Avro files at that hdfs location, and to use whichever columns map to the hive definition.
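To see what hive is actually relying on, here is another small sketch (again assuming fastavro, and reusing the file written above) that opens the data file and prints the writer schema embedded in its header; no external schema is supplied at read time.

# Sketch: read the Avro file without supplying any schema.
# The reader picks up the writer's schema from the file header,
# which is the same information hive's Avro support parses.
from fastavro import reader

with open("ex_users_v1.avro", "rb") as fo:
    avro_reader = reader(fo)
    print(avro_reader.writer_schema)   # the embedded v1 User schema
    for record in avro_reader:
        print(record)                  # e.g. {'name': 'Alice', 'age': 31, 'address': '12 Main St'}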
Avro files are compressible and splittable, and the data itself is stored in a compact binary format, which brings some additional benefits.
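For instance, continuing the hypothetical fastavro sketch from above (user_schema_v1 and records as defined there), switching on block compression is a one-argument change, and the file stays splittable because Avro compresses each data block independently.

# Sketch: same write as before, but with deflate block compression.
from fastavro import writer

with open("ex_users_v1_deflate.avro", "wb") as out:
    writer(out, user_schema_v1, records, codec="deflate")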
When such Avro formatted files are exposed to programs like hive, those programs don't have to worry about changes in the schema, since they rely on the schema information in the file header. If the hive / R schema asks for 5 columns but the file only has 3 of them, the matching columns (matched by name, not by order/position) are projected to the consuming tool, and the rest are ignored.
Similarly, if the tool asks for a column that is not present in a certain data file (remember our example: the v2 schema adds a column that the older v1 files don't have), the missing column is simply reported as null.
Now, look at the following new avro schema (after the source system decided to change the feed) -
{"namespace": "myExample.avro", "type": "record", "name": "User", "fields": [ {"name": "name", "type": "string"}, {"name": "age", "type": ["int", "null"]}, {"name": "salary", "type": ["float", "null"]}, {"name": "address", "type": ["string", "null"]} ]}
And the corresponding hive table definition -
create external table ex_users (
  name    String,
  age     int,
  address String,
  salary  float
)
stored as AVRO
location '/apps/data/hive/ex_users.tbl';
For this table definition, it is not a problem to query and report on a data file in the old schema (v1), where the column salary is not present. Hive simply knows from the avro schema in the file header that the file doesn't have that column, and it gladly reports it as null.
On the other hand, the new structure has a different column order, which is also fine for the hive/avro combination, since the schema definition helps match up the columns by name and report the right content in the right columns.
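To make the same point outside of hive, here is one more sketch (again fastavro, again illustrative only) that reads an old v1 file through the new v2 schema as the reader schema. One small adjustment for the plain Avro libraries: the new salary field is given a null default (with null listed first in its union) so that v1 files, which lack the field, resolve cleanly to null, matching the behaviour the hive table above gives you.

# Sketch: reading a v1 data file through the v2 (reader) schema.
# Fields are matched by name, not position; salary, absent from v1
# files, comes back as None (null) because of its default.
from fastavro import reader, parse_schema

user_schema_v2 = parse_schema({
    "namespace": "myExample.avro",
    "type": "record",
    "name": "User",
    "fields": [
        {"name": "name", "type": "string"},
        {"name": "age", "type": ["int", "null"]},
        # null first plus an explicit default so older files resolve to null
        {"name": "salary", "type": ["null", "float"], "default": None},
        {"name": "address", "type": ["string", "null"]},
    ],
})

with open("ex_users_v1.avro", "rb") as fo:
    for record in reader(fo, reader_schema=user_schema_v2):
        print(record)   # e.g. {'name': 'Alice', 'age': 31, 'salary': None, 'address': '12 Main St'}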
Similar reasoning applies to other data consumers that need to interpret the schemas of data files on hdfs.
In comparison to some other storage formats we do lose some points on the performance scale, but for this use case, where the schema keeps changing and we want to handle the changes with little or no effort, Avro is a better fit than many others.