Friday, June 19, 2015

Teradata data type abbreviations - described

Teradata data types (as reported in DBC.Columns.ColumnType) can be cryptic and not always easy to remember. Here's a ready reckoner -

Abbreviation   Equivalent English :)
A1             ARRAY
AN             MULTI-DIMENSIONAL ARRAY
AT             TIME
BF             BYTE
BO             BLOB
BV             VARBYTE
CF             CHARACTER
CO             CLOB
CV             VARCHAR
D              DECIMAL
DA             DATE
DH             INTERVAL DAY TO HOUR
DM             INTERVAL DAY TO MINUTE
DS             INTERVAL DAY TO SECOND
DY             INTERVAL DAY
F              FLOAT
HM             INTERVAL HOUR TO MINUTE
HS             INTERVAL HOUR TO SECOND
HR             INTERVAL HOUR
I              INTEGER
I1             BYTEINT
I2             SMALLINT
I8             BIGINT
JN             JSON
MI             INTERVAL MINUTE
MO             INTERVAL MONTH
MS             INTERVAL MINUTE TO SECOND
N              NUMBER
PD             PERIOD(DATE)
PM             PERIOD(TIMESTAMP WITH TIME ZONE)
PS             PERIOD(TIMESTAMP)
PT             PERIOD(TIME)
PZ             PERIOD(TIME WITH TIME ZONE)
SC             INTERVAL SECOND
SZ             TIMESTAMP WITH TIME ZONE
TS             TIMESTAMP
TZ             TIME WITH TIME ZONE
UT             UDT (user-defined type)
XM             XML
YM             INTERVAL YEAR TO MONTH
YR             INTERVAL YEAR
++             TD_ANYTYPE
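If you need this decoding in code, a small lookup table does the job. Here's a sketch in Python; the dictionary covers only a subset of the table above (extend it as needed), and it allows for the trailing spaces that DBC.Columns pads the codes with:

```python
# Decode Teradata's two-character type codes (from DBC.Columns.ColumnType)
# into readable names; a partial mapping based on the table above.
TYPE_CODES = {
    "AT": "TIME", "BF": "BYTE", "BO": "BLOB", "BV": "VARBYTE",
    "CF": "CHARACTER", "CO": "CLOB", "CV": "VARCHAR",
    "D": "DECIMAL", "DA": "DATE", "F": "FLOAT",
    "I": "INTEGER", "I1": "BYTEINT", "I2": "SMALLINT", "I8": "BIGINT",
    "TS": "TIMESTAMP", "SZ": "TIMESTAMP WITH TIME ZONE",
}

def decode(code: str) -> str:
    """Return the readable type name, falling back to the raw code."""
    return TYPE_CODES.get(code.strip(), code)

print(decode("CV "))  # the codes come back space-padded -> VARCHAR
```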

Thursday, May 7, 2015

Hadoop Meetup on the sidelines of Strata Hadoop Conference - Part 2

Read part 1 of this here

Day 2 of the meetup was equally exciting, if not better.  Lined up were talks from Qubit and Google, William Hill (a surprise for me - more on that later) and then PostcodeAnywhere, all very exciting going by the synopses.

Google and Qubit essentially showcased a stream processing engine with pluggable components, many of which can be written in different technologies and programming languages.

Of course, Google Cloud Dataflow is much more than just a stream processing engine; however, from a real-time data ingestion perspective, that feature is pretty significant.

A completely managed system, it works on the publish-subscribe (pub-sub) model.  As Reza put it, “pub-sub is not just a data delivery mechanism, it's used as a glue to hold the complete system together”.  Pluggable components are another differentiator for Google's offering; in today's demo they showcased Bigtable as one of the consumers at the end.
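The pub-sub model itself is simple to sketch. The Python toy below is my own illustration of the shape of the model, not anything from Google's implementation: publishers push messages to a topic, and the broker fans them out to every subscriber of that topic.

```python
# A minimal publish-subscribe sketch: subscribers register callbacks per
# topic and the broker fans each published message out to them.
class Broker:
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers.get(topic, []):
            callback(message)

broker = Broker()
received = []
broker.subscribe("clicks", received.append)
broker.publish("clicks", {"user": 1, "page": "/home"})
print(received)
```

The "glue" observation follows from this shape: producers and consumers never reference each other, only the topic, so components can be swapped in and out freely.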

From my own knowledge of stream processing, which is not significant in any way, I could relate to many similarities with IBM's InfoSphere Streams and some with Apache Kafka.  However, a question around comparisons with these systems remained unanswered by Google (though in very good spirit; in a chat with the speaker Reza later on, it came across as more of a philosophical question avoidance than anything else).

The William Hill talk (by Peter Morgan, their head of engineering) was a genuine surprise, at least for me.  Perhaps due to my own ignorance, I hadn't realized that their systems are far more sophisticated and load-bearing than I would have imagined.  As an example, they process 160TB of data through their systems on a daily basis.

Among the many complexities managed through their system are their main components, the betting engine and the settlement engine, among others.

William Hill supports an open API as well, enabling app developers to pick up data elements and innovate. However, for obvious reasons, very limited data is thrown open in the public domain.  Would not having enough data be a deterrent for app developers?  For example, if I wanted an app to report who's betting on a certain game, cross-referenced with geolocation data, I can't do that, since William Hill doesn't publish demographic data.  I personally feel alright with that; there are possibilities that many of those data elements could be used in ways to influence the betting system itself, becoming counter-productive.

I would imagine their IT systems to be among the most top-notch systems around, to be able to manage such data volumes with such speed and accuracy. Commendable job.  I will probably write exclusively on their architecture once I get my hands on the presentation slides (in a couple of days, maybe).

The talk from PostcodeAnywhere was the most educative for me personally.  I got to understand a bit about Markov models and chains, and how they can be used for machine learning.  Very interesting stuff there too.
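For reference, a first-order Markov chain simply says that the probability of the next state depends only on the current state. A minimal sketch in Python, with made-up transition probabilities (nothing from the talk itself):

```python
import random

# Hypothetical first-order Markov chain over two weather states;
# each row gives P(next state | current state).
transitions = {
    "sunny": [("sunny", 0.7), ("rainy", 0.3)],
    "rainy": [("sunny", 0.4), ("rainy", 0.6)],
}

def walk(start, steps, rng):
    """Generate a state sequence; each step looks only at the current state."""
    state, path = start, [start]
    for _ in range(steps):
        states, weights = zip(*transitions[state])
        state = rng.choices(states, weights=weights)[0]
        path.append(state)
    return path

print(walk("sunny", 5, random.Random(42)))
```

Training such a chain for machine learning is then just counting observed transitions in data and normalising each row into probabilities.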

Apache Spark is increasingly being seen as the tool to perform analytics on the fly, especially on large volumes of data.  It would be very interesting to see how R's and Python's analytical capabilities compare with what Spark offers.

Speaking to another attendee today, it came out that people prefer to use R more and more for data massaging and cleansing purposes; however, it's not seen as fit for the heavy lifting required for the real analytic and/or predictive pieces. For those areas, people still prefer to use Python.


IBM's BigR is a possible contender for the job; they talk about having optimised R for a Hadoop cluster and having enabled it to work on top of HDFS.  However, BigR is not open source, and that could be its biggest challenge in adoption.

Wednesday, May 6, 2015

Hadoop Meetup on the sidelines of Strata Hadoop Conference - Part 1

Not being able to make it to the main conference (Strata Hadoop London 2015), the evening meetups were my consolation, a way of getting in touch as much as possible.

In my view, these conferences/events often help one get to know the recent developments in the space, mostly showcasing what's being done with a given technology, what's coming up (future developments, innovations) and people's experiences with the technology, both good (the famous savings use cases) and bad (challenges faced in achieving production readiness, if ever).

Last evening, on day 1 of the conference, I ended up attending one of the meetups.  It was particularly useful for me, for a couple of reasons.

There was a talk on the new execution engine for Hive, i.e. Hive running on Spark. Always keen on the internal workings of a complex piece of software (or hardware, for that matter), I was very happy to be able to hear directly from the person responsible for much of the development on Hive.  I have an audio recording of the whole talk, though I am hopeful that the conference organizers will put up the video on their website anyway.

Phill's talk about his experiences getting Hadoop on its feet, and how they orchestrated Hadoop as a PaaS within BT (they seem to call it HaaS there), was insightful.  It showed two things to me: architects always have to "find the funding" for innovations and new tech to be brought into the organization :)  Also, security on Hadoop is "doable", as his use case proved.  There are reliable tools and solutions which can help achieve enterprise-level security for a Hadoop cluster.

Another interesting talk was Dato's.  Dato is a machine learning/modelling tool which claims to be considerably quicker than many others, allows the data to be consumed in place (as on Hadoop) and supports HDFS integration.  I will be sure to follow up on Dato with the organization.  For me, it addresses one of the key problems of the future: when there is too much data, the modelling algorithm has to be enabled to consume the training data in place, since it's just not practical to move tera/petabytes of data to where the program is.  IBM BigR is doing something similar as well.


Finally, another interesting talk was from the Big Data Boards team.  They talked about how they are building cluster hardware for hosting small Hadoop clusters.  An interesting proposition, to have your own Hadoop cluster running on a desk in a corner of your office, with no need to go over to the likes of AWS for hosting.  They say that many universities, among others, are already using the clusters they made for real-life experiments.  A very interesting space for me.

Saturday, April 4, 2015

Data Sets


Some of the publicly available datasets are listed here.  This is a continuously evolving page, and therefore might not always be 100% up to date.  For licensing information, please refer to each dataset's own licensing page.  I take no responsibility for the licensing/distribution of the datasets.

1. Amazon’s Ratings dataset


2. IMDb movies dataset


Monday, February 23, 2015

Weighted trust graph for authentication

During the hackathon (discussed in an earlier post), I met Gary.  He had come in to play mentor but, since he couldn't devote enough time, ended up being a guest.

After engaging with Neo4j at a few meetups and understanding the database a bit, I was contemplating using graph databases for authentication.  It might have consequent applications in fraud analytics too, where graph databases are already used [1].

During the discussion of the idea, Gary suggested moulding it differently, possibly using trust networks/graphs, wherein each node (an entity, i.e. a person, organisation, etc.) is related to the others through weighted, directed relationships.  The weight of a relationship can be deduced in multiple ways, e.g. by periodic algorithms similar to search-engine ranking algorithms, or by asking people their trust level of others on a scale of 1 to x, x being a hypothetical standard scale that can be used as a yardstick across the network for determining the level of trust.
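As a toy sketch of the weighted-graph idea, here's one in Python. All the names and weights below are hypothetical, and the two-hop product rule is just one simple way of propagating trust, not a proposal from the discussion:

```python
# A tiny weighted, directed trust graph (names and weights are made up).
# Edge weight = how much the source trusts the target, on a 0-1 scale.
trust = {
    ("alice", "bob"): 0.9,
    ("bob", "carol"): 0.8,
    ("alice", "dave"): 0.3,
}

def indirect_trust(source, target):
    """Infer trust along a two-hop chain as the product of edge weights,
    taking the strongest chain when several exist."""
    direct = trust.get((source, target))
    if direct is not None:
        return direct
    best = 0.0
    for (a, b), w1 in trust.items():
        if a == source:
            w2 = trust.get((b, target))
            if w2 is not None:
                best = max(best, w1 * w2)
    return best

print(indirect_trust("alice", "carol"))  # 0.9 * 0.8
```

An authentication decision could then be a threshold on this inferred trust, with the weights refreshed periodically by a ranking-style algorithm or by explicit user ratings.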

While researching some more, I found that similar research has been done in this space [2], though practical applications are few and far between.

It was also pointed out in the discussion that banks don't really have a huge interest in preventing this crime.  The view was that, since banks already provision for a certain amount in their balance sheets for these "potential" thefts, they don't really bother so much.

I believe that financial institutions need to attack these fraud crimes as a single unit, joining hands and leveraging the best of research and technology to minimise the crime.  The technology exists for providing up-to-the-moment information on these events; some more innovation and research is needed to bring together the whole picture and make it look like a "solution".





[1]http://info.neotechnology.com/rs/neotechnology/images/Fraud%20Detection%20Using%20GraphDB%20-%202014.pdf?_ga=1.182367911.1656585956.1417700858
[2]http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.212.9978&rep=rep1&type=pdf

Tuesday, February 17, 2015

Hackathon - Fintechathon

A hackathon is a hacking marathon wherein many people are invited to attack problems around a theme.

I recently attended a hackathon over the Valentine's weekend.  Organised by StartupBootCamp Fintech, it was attended by about 100 people. Many ideas, many teams, and some partners, i.e. corporates with their own challenges.

I was initially teamless, but then found some others who were in my situation.  We formed a team around my favourite topic, data analytics.  We had two business development guys, Adam and Oksana, two Java programmers, Nelson and Nick, and a mobile app developer, Vlad.

Hackathon teams are usually formed around ideas, wherein someone with an idea takes on the ownership and then collects a team around it.  Things are focussed from moment one, and the march forward is fairly disciplined and fast; that's why the name hackathon... keep hacking, for long, long days and nights.

We had it the other way round: all of us were teamless and therefore put together as a team. We had no idea to start with.  As a result, we spent the better part of Friday evening and Saturday zeroing in on the problem to attack.

Finally we decided to go ahead with a data analytics piece. I won't chalk out the details here, but it's something that marketing guys always love: knowing when their customers are approaching a life event and therefore could be offered some product.

By Sunday morning, we had lost two team members, one to a different idea and one to sleep. Vlad hadn't slept in four nights, so he slept through much of Sunday.

As a result, we ended up as a team without anyone who could do any UI design, and therefore had only some backend API calls and some analytics pieces, with nothing to show off.

The result was that we couldn't show any working model in our pitch and had to be content with a presentation only, which tried to describe our idea to the judges.

Of course we lost, but it was a very nicely spent weekend: I met some very nice people, made some contacts, and possibly found a future for the idea.

Sunday, February 1, 2015

Playing with R

Since early 2013, when I got to know of the possibility of learning R online through www.coursera.com, I have been a fan of this language.

My teachers in college would probably not like to hear this, but I was never a great statistics student; it was always a "passing" requirement during my studies. However, when I learned about R and did some experimental work with it, I found that stats is indeed a very interesting and powerful mathematical tool. And with R I found it fun too, which in my personal opinion is the root of any ongoing learning process.  If learning is not fun, it stops.

For me, laying my hands on RStudio was the best thing that happened around my statistics learning curve.  The tool makes the R learning curve so much easier; college could never do that.

As an example, in college we could never relate to real-life examples and uses of regression, curve fitting, etc.  However, now, using R, it all makes so much more sense, and with a lot of ease.
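As a tiny illustration of the kind of regression exercise that finally clicked for me, here is an ordinary least-squares line fit, sketched in plain Python with made-up data points (in R this would be a one-liner with lm()):

```python
# Ordinary least-squares fit of a straight line to a few made-up points.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]  # roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Slope = covariance of x and y divided by variance of x.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x
print(round(slope, 2), round(intercept, 2))  # close to the true slope of 2
```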

I can't put a finger on the exact reason, whether the years have added the capacity to understand the subject, or the very fact that today these kinds of tools exist which let you explore the area with such straightforward ease, but the fact remains: it has become so much easier, relatively, to get your hands on the power of statistics and analytics with tools like these.

That's probably one reason why I am getting more and more inclined towards data analytics. It's a very powerful and interesting area, with huge potential for shaping our future.

Monday, May 26, 2014

Can we roll back data deleted by the TRUNCATE, DROP, and DELETE commands?

Here we talk about the TRUNCATE, DROP and DELETE commands and whether they can be rolled back; if so, how and when.

Yesterday I was working on this and saw that if we use the TRUNCATE command inside a transaction, we can roll back the data. One more thing: if we perform the DELETE command outside a transaction, we can't roll back the data.

Let’s try this…

Suppose I have a table dbo.Employee with three columns: Id, Name, and Salary.
CREATE TABLE [dbo].Employee(
      Id [int] NULL,
      Name [varchar](50) NULL,
      Salary [Int] NULL
)
Insert the values into the dbo.Employee table:
INSERT INTO dbo.Employee VALUES(1,'Employee1',10000)
INSERT INTO dbo.Employee VALUES (2,'Employee2',20000)
INSERT INTO dbo.Employee VALUES (3,'Employee3',30000)
INSERT INTO dbo.Employee VALUES (4,'Employee4',10000)
INSERT INTO dbo.Employee VALUES (5,'Employee5',30000)
Now I have five rows in the table:

SELECT * FROM dbo.Employee
Now we perform the DELETE command inside a transaction, roll it back, and then run the SELECT command:
BEGIN TRAN
DELETE FROM dbo.Employee WHERE ID = 2
ROLLBACK

SELECT * FROM dbo.Employee
Output: running the SELECT command again, I get the same result as the previous SELECT statement (i.e. 5 rows); the DELETE was rolled back.
Now I perform the TRUNCATE command inside a transaction; let's see what happens.
BEGIN TRAN
TRUNCATE TABLE dbo.Employee
ROLLBACK
Running the SELECT command again, I get the same rows and the same result. If we do the same thing with the DROP command, that too gives the same result.
It means that if we use the DELETE, TRUNCATE or DROP command within a transaction, we can roll back the data, and if we perform any of the three commands outside a transaction, we can't roll back.

One more thing to add to this article: even when we can't roll back the data, we can restore it from the transaction log if we used the DELETE command. We can't restore the data if we used the TRUNCATE or DROP command. How and why, read in the next article.

In summary, we can roll back both DDL and DML commands if they are run inside a transaction, and we can only restore data for DML commands.
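The same transactional behaviour can be demonstrated outside SQL Server too. Here's a sketch using Python's built-in sqlite3 module (SQLite also supports transactional DDL, so even a DROP TABLE rolls back; the table and column names mirror the example above):

```python
import sqlite3

# Autocommit mode; we issue BEGIN/ROLLBACK ourselves so the
# transaction boundaries are explicit.
conn = sqlite3.connect(":memory:", isolation_level=None)
cur = conn.cursor()
cur.execute("CREATE TABLE Employee (Id INTEGER, Name TEXT, Salary INTEGER)")
cur.executemany(
    "INSERT INTO Employee VALUES (?, ?, ?)",
    [(i, f"Employee{i}", s)
     for i, s in enumerate((10000, 20000, 30000, 10000, 30000), 1)],
)

# DELETE inside a transaction, then roll back: the row survives.
cur.execute("BEGIN")
cur.execute("DELETE FROM Employee WHERE Id = 2")
cur.execute("ROLLBACK")
print(cur.execute("SELECT COUNT(*) FROM Employee").fetchone()[0])  # 5

# DROP inside a transaction, then roll back: the whole table survives.
cur.execute("BEGIN")
cur.execute("DROP TABLE Employee")
cur.execute("ROLLBACK")
print(cur.execute("SELECT COUNT(*) FROM Employee").fetchone()[0])  # 5
```

Run either statement without the surrounding BEGIN/ROLLBACK and the change commits immediately, which is exactly the outside-a-transaction case described above.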

Thursday, May 22, 2014

Difference between Char, Varchar, and nVarchar data type in SQL Server



In the real world everyone has different needs and requirements, so why use the same data type to store different types of values? What is the actual difference between the char, varchar and nvarchar data types?

Char DataType
The char datatype is used to store fixed-length, non-Unicode character data. Suppose we declare char(50): it allocates memory for 50 characters. If we then enter fewer characters than defined (e.g. 6), the remaining length is space-filled to the right, which means the other 44 characters of memory are wasted.

In a query, the stored values come back padded to the full declared length of 50 characters.
Use this data type when the column data is of fixed length, which tends to be the case for customer IDs and bank account IDs.
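To make the padding concrete, here's a sketch of char(50) semantics in Python. It illustrates the storage behaviour only, not SQL Server itself, and the sample value is made up:

```python
# char(50) semantics: the stored value is right-padded with spaces
# to the declared length, regardless of the actual data.
declared_length = 50
value = "Mumbai"  # 6 meaningful characters (hypothetical data)
stored = value.ljust(declared_length)

print(len(stored))               # always 50
print(len(stored) - len(value))  # 44 characters of wasted padding
```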

Nchar DataType
The nchar type is exactly like char, but holds characters in Unicode format rather than ANSI. The Unicode format has a larger character-set range than ANSI, and Unicode datatypes take exactly double the memory space in SQL Server. So use nchar when you want to store Unicode values.

varchar DataType

The varchar datatype is used to store alphanumeric values, just like the char datatype. If both are similar, then what is the difference between char and varchar? What was the need to create a new datatype? We will discuss this below. The maximum size of a varchar column is 8,000 characters. However, if you define the column with no size, that is, plain varchar, then the length will default to 1.


Difference between the char and Varchar datatype

Char is a fixed-length character datatype; varchar is a variable-length character datatype. If you define a varchar column to be 20 characters long but enter fewer characters, say 10, it will consume only 10 characters' worth of memory, not the full defined length.



Running a similar query, the DATALENGTH of each ZipCode value varies according to the data actually stored in the field.


So we should use char for fixed-length data fields and varchar for variable-length data fields.



Nvarchar DataType

The nvarchar datatype is the same as varchar; the only difference is that nvarchar stores Unicode characters, which allows you to store multiple languages in the database. The nvarchar datatype takes twice as much space per character, to store the extended set of characters required by other languages.

Difference between Varchar and Nvarchar DataType
Varchar(n)

1. Non-Unicode, variable-length character data type.

Example:
Declare @FirstName As Varchar(20) = 'Developer'
Select @FirstName

2. It takes 1 byte per character.

Example:
Declare @FirstName As Varchar(20) = 'Developer'
Select @FirstName AS FirstName,
DATALENGTH(@FirstName) As Length

Result:
FirstName   Length
Developer   9

3. Can store a maximum of 8,000 non-Unicode characters.

4. If the value of n is not specified at the time of variable declaration or column definition, it is taken as 1.

Example:
Declare @FirstName As Varchar = 'Developer'
SELECT @FirstName FirstName,
DATALENGTH(@FirstName) Length

Result:
FirstName   Length
D           1

5. If n is not specified while using the Cast/Convert functions, it is taken as 30.

Example:
Declare @FirstName Varchar(50) =
'Unicode character sets hold up to 65,536'

Select CAST(@FirstName As Varchar) FirstName,
DATALENGTH(CAST(@FirstName As Varchar)) Length

Result:
FirstName                        Length
Unicode character sets hold up   30

Nvarchar(n)

1. Unicode, variable-length character data type. Nvarchar can store both Unicode and non-Unicode (e.g. Chinese, Japanese, Korean) characters.

Example:
Declare @FirstName As Nvarchar(20) = 'Developer'
Select @FirstName

2. It takes 2 bytes per Unicode/non-Unicode character, exactly double that of varchar.

Example:
Declare @FirstName As Nvarchar(20) = 'Developer'
Select @FirstName AS FirstName,
DATALENGTH(@FirstName) AS Length

Result:
FirstName   Length
Developer   18

3. Can store a maximum of 4,000 Unicode/non-Unicode characters.

4. If the value of n is not specified at the time of variable declaration or column definition, it is taken as 1.

Example:
Declare @FirstName As Nvarchar = 'Developer'
SELECT @FirstName FirstName,
DATALENGTH(@FirstName) Length

Result:
FirstName   Length
D           2

5. If n is not specified while using the Cast/Convert functions, it is taken as 30.

Example:
Declare @FirstName Nvarchar(50) =
'Unicode character sets hold up to 65,536'

Select CAST(@FirstName As Nvarchar) FirstName,
DATALENGTH(CAST(@FirstName As Nvarchar)) Length

Result:
FirstName                        Length
Unicode character sets hold up   60
So if you are not storing other languages, it's better to use the varchar datatype instead of nvarchar.
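The byte counts above follow directly from the encodings: varchar stores one byte per character, while nvarchar is stored as UTF-16, which uses 2 bytes for every basic-plane character. A quick check of the arithmetic in Python (an illustration of the encoding maths, not of SQL Server internals):

```python
name = "Developer"

# varchar-style storage: one byte per character
print(len(name.encode("latin-1")))    # 9, matching DATALENGTH for varchar

# nvarchar-style storage: UTF-16, two bytes per basic-plane character
print(len(name.encode("utf-16-le")))  # 18, matching DATALENGTH for nvarchar
```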