Caution!!! BigData S.L.I.P.S.: five tips when using analytics



During my brief research on BigData, I've found five types of S.L.I.P.S. that a data scientist might encounter along the way: Statistics, Learning, Information, Psychology and Sources.

1) Statistics (Left Foot)

This is, without any doubt, the main and best-known technical aspect. The most common slip concerning statistics is mistaking correlation for causation. In other words, discovering correlations among variables doesn't necessarily imply a cause-effect relation. Mathematically speaking, correlation is a necessary but not sufficient condition for a cause-effect relationship.

(see also K. Borne: Statistical Truisms in the Age of BigData).
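
To see how easy it is to slip, here is a minimal sketch (my own illustration with NumPy, not taken from Borne's article): two series generated independently, with no cause-effect link whatsoever, can still show a strong correlation.

```python
# Minimal illustration: two independent random walks share no cause-effect
# link, yet their Pearson correlation often comes out large.
import numpy as np

rng = np.random.default_rng(seed=42)
n = 500

# Generated independently: by construction there is no causation here.
walk_a = np.cumsum(rng.normal(size=n))
walk_b = np.cumsum(rng.normal(size=n))

corr = np.corrcoef(walk_a, walk_b)[0, 1]
print(f"Correlation between two unrelated random walks: {corr:.2f}")
# Trending series frequently show a sizeable correlation purely by chance,
# which is why correlation alone never proves a cause-effect relation.
```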

2) Learning (Right Foot)

OK, let's assume that a cause-effect relationship exists: which model/algorithm should we choose to describe the relationship? There are many: ARMA, Kalman filters, neural networks, custom models, … which one fits best? A model that has been validated with the data available now might no longer be valid in the future. So, constantly monitor and measure the prediction error between the observed values and those estimated by the model, as sketched below.
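
As a minimal sketch of that monitoring idea (assuming a simple moving-average forecaster as a stand-in for whatever model is actually chosen; the function name and the simulated regime shift are purely illustrative):

```python
# Minimal sketch of "keep measuring the prediction error".
# The moving-average forecaster is just a stand-in for ARMA, a Kalman filter,
# a neural network, or any custom model.
import numpy as np

def moving_average_forecast(history, window=10):
    """Predict the next value as the mean of the last `window` observations."""
    return float(np.mean(history[-window:]))

rng = np.random.default_rng(0)
stream = np.concatenate([
    rng.normal(loc=0.0, scale=1.0, size=200),  # regime the model was validated on
    rng.normal(loc=3.0, scale=1.0, size=200),  # regime shift: old assumptions break
])

errors = []
for t in range(20, len(stream)):
    prediction = moving_average_forecast(stream[:t])
    errors.append(abs(stream[t] - prediction))
    if t % 50 == 0:
        # Rolling mean absolute error: if it drifts upward, the model
        # (or its assumptions) needs to be revisited.
        print(f"t={t:3d}  rolling MAE={np.mean(errors[-50:]):.2f}")
```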

Choosing a model implies making assumptions. In other words, never stop learning from the data and be open to breaking your assumptions; otherwise predictions and analyses will be slanted.

3) Information (Right Hand)

Which information is really meaningful? That's the first point to clarify before implementing a BigData initiative or any new BI tool for your business.

Another point is confusing information with data. According to information theory, and well-grounded common sense as well, data are facts, while information is an interpretation of facts based upon assumptions (see also the D.A.I. model).

(see also: D. Laney & M. Beyer: BigData Strategy Essentials for Business and IT).

4) Psychology (the Head… of course!)

Have you ever heard about echo-chamber effects and social influence? Well, what happens is that social media might amplify irrational behaviours, where individuals (me included) base their decisions, more or less consciously, not only on their knowledge or values but also on the actions of those who act before them.

In particular, when dealing with tricky, slippery tools such as BigData sentiments, it is better to consider carefully the relevance and impact of psychology and behaviour. The risk is gathering data that is intrinsically biased (see also My Issue with BigData Sentiments).

(see also:

D. Amerland: How Semantic Search is changing end-user behaviour

C. Sunstein: Echo Chambers: Bush v. Gore, Impeachment, and Beyond – Princeton University Press

e! Science News: Information technology amplifies irrational group behavior).

5) Sources (Left Hand)

Variety!!! That is one of the three Vs suggested by D. Laney: Volume, Velocity and Variety. Choosing the right model is not the only thing that matters for avoiding biased predictions and insights: what about the reliability of the data sources used for the analysis? If the data is biased, predictions and insights will be biased as well. In particular, any data series has a variance and a bias that cannot be eliminated.

How to mitigate such a risk? By gathering data from different sources and weighting them according to their reliability: the variance.
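
A minimal sketch of that weighting idea, assuming three hypothetical sources measuring the same quantity and using classical inverse-variance weights (the source names and noise levels are made up for illustration):

```python
# Minimal sketch: combine the same quantity measured by several sources,
# weighting each source by its reliability (here, the inverse of the variance
# of its estimate).
import numpy as np

rng = np.random.default_rng(1)
true_value = 10.0

# Three hypothetical sources with different noise levels (reliabilities).
sources = {
    "sensor_feed":   rng.normal(true_value, 0.5, size=200),  # reliable
    "public_api":    rng.normal(true_value, 2.0, size=200),  # noisier
    "scraped_sites": rng.normal(true_value, 5.0, size=200),  # least reliable
}

estimates = {name: float(np.mean(x)) for name, x in sources.items()}
variances = {name: float(np.var(x, ddof=1)) / len(x) for name, x in sources.items()}

# Inverse-variance weights: the noisier the source, the smaller its weight.
weights = {name: 1.0 / v for name, v in variances.items()}
total = sum(weights.values())
combined = sum(weights[name] * estimates[name] for name in sources) / total

for name in sources:
    print(f"{name:13s} estimate={estimates[name]:6.2f}  weight={weights[name] / total:.2f}")
print(f"combined estimate: {combined:.2f}")
```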

Moreover, as a BigData scientist, and as a consumer as well, never forget positive and negative SEO tactics. There is a social-digital jungle out there! (see Tripadvisor: a Case Study to Think Why BigData Variety matters).

Feelink – Feel & Think approach for doing life!

My Issue with BigData Sentiment Bubble: Sorry, Which Is the Variance of the Noise? (NON Verbal Communication)


Why is sentiment analysis so hard? How do you interpret the word "crush" in a tweet? Crush as in "being in love", or crush as in "I will crush you"? According to Albert Mehrabian's communication model and statistics, I would say that, on average, a sentimenter sees only 7% of a tweet's full message. Not such a big deal, is it?

Let's think about it by considering, as an example, the sentiment analysis case described in My Issues with Big Data: Sentiment: crush as in "being in love" (positive), or crush as in "I will crush you" (negative)?

What is a sentimenter? As a process, it is a tool that, from an input (tweets), produces an output like "the sentiment is positive" or "the sentiment is negative". Many sentimenters are even supposed to estimate how positive or negative the mood is: cool!
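
Just to fix ideas, here is a toy sketch of what a sentimenter's input/output might look like. The word lists and the scoring rule are purely illustrative (this is not any real sentiment library), and the ambiguous word "crush" already falls through the cracks:

```python
# Toy sketch of a "sentimenter": a tweet in, a polarity label and a rough score out.
# The word lists are purely illustrative; real tools use far richer models.

POSITIVE = {"love", "great", "happy", "cool"}
NEGATIVE = {"hate", "awful", "sad", "angry"}

def sentimenter(tweet: str) -> tuple:
    """Return (label, score) based only on the text, the 7% a tweet carries."""
    words = tweet.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive", score / len(words)
    if score < 0:
        return "negative", score / len(words)
    return "neutral", 0.0

# "crush" is in neither list: the text alone cannot tell "being in love"
# from "I will crush you" -- the missing 93% (tone, expression) could.
print(sentimenter("I love this, it is great"))
print(sentimenter("I have a crush on you"))
print(sentimenter("I will crush you"))
```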

Paraverbal and non-verbal communication

Anyhow, according to Albert Mehrabian, the information transmitted in a communication process is 7% verbal, 38% paraverbal (tone of voice) and the remaining 55% non-verbal (facial expressions, gestures, posture, …).

In a tweet, as well as in an SMS or e-mail, neither paraverbal nor non-verbal communication is transmitted. Therefore, from a single tweet it is possible to extract only 7% of the information available: the text (verbal communication).

So, what about paraverbal and non-verbal communication? During a real-life conversation they play a key role, since they account for 93% of the whole message. Moreover, since paraverbal and non-verbal messages are closely connected with emotions, they are exactly what we need: sentiments!

Emotions are also transmitted and expressed through words, such as "crush" in the example mentioned. However, within a communication process, the verbal and non-verbal parts are not always consistent. That's the case when we talk with a friend: he/she says that everything is OK, while we perceive, more or less consciously, something different from his/her tone or expressions. Thus we might ask: are you really sure that everything is OK? As a golden rule, also for everyday life, I would recommend using non-verbal signals as an opportunity to ask questions rather than to infer misleading answers (see also: A good picture for Acceptance: feel the divergences & think how to deal with).

For these reasons, non-verbal messages are a kind of noise that interferes with verbal communication. In a tweet, it is a noise that interferes with the text. Such noise is all the more disturbing the more the transmitter and the receiver are sensitive to non-verbal communication. It might be disturbing enough to completely change the meaning of the message received.

Statistics and Information Theory

From a statistical point of view, the noise might be significantly reduced by collecting more samples. On Twitter, a tweet is one sample, and each tweet carries 7% available information (the text) and 93% noise (the non-verbal communication), which is the unknown information.

From a prediction/estimation point of view, no noise means no errors.

Thus, thanks to BigData, if the sentimenter analyzes all the tweets, it should theoretically be possible to reduce the noise to zero and thus have no prediction error about sentiments… WRONG!!!

Even if the sentimenter is able to provide a result by analyzing all the BigData tweets (see Statistical Truisms in the Age of Big Data):

"the final error in our predictive models is likely to be irreducible beyond a certain threshold: this is the intrinsic sample variance".

The variance is an estimate of how different the samples are from each other. In the case of a communication process, that means how changeable emotions are over time. Just for fun, next time, try to talk to a friend while randomly changing your mood, happy, sad, angry, … and see what happens with him/her (just in case, before fighting, tell him/her that it is part of an experiment you've read about in this post).

On Twitter, the variance of the samples is an estimate of how differently emotions affect the use of certain words in a tweet, from person to person at a specific time. Or, similarly, considering one person, how differently emotions affect the use of words over time.
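
A minimal simulation of that point, with made-up numbers (the average mood, the intrinsic spread and the non-verbal noise level are all illustrative assumptions): adding more tweets shrinks the error on the average mood, but the spread of individual tweets, the intrinsic sample variance, stays put.

```python
# Minimal simulation (illustrative numbers only): averaging more tweets reduces
# the noise on the estimated average mood, but the spread of individual tweets
# -- the intrinsic sample variance -- does not shrink.
import numpy as np

rng = np.random.default_rng(7)

true_mean_mood = 0.3       # the "real" average sentiment we want to estimate
intrinsic_std = 0.5        # emotions genuinely differ from person to person and over time
nonverbal_noise_std = 1.5  # the part of the message that a tweet's text cannot carry

for n_tweets in (10, 100, 10_000, 1_000_000):
    moods = rng.normal(true_mean_mood, intrinsic_std, size=n_tweets)
    observed = moods + rng.normal(0.0, nonverbal_noise_std, size=n_tweets)
    mean_error = abs(observed.mean() - true_mean_mood)
    per_tweet_spread = observed.std(ddof=1)
    print(f"n={n_tweets:>9,d}  error on the average mood={mean_error:.3f}  "
          f"spread of single tweets={per_tweet_spread:.3f}")
# The error on the *average* mood keeps shrinking as more tweets are added, but
# the per-tweet spread does not: that is the irreducible floor any single-tweet
# sentiment prediction runs into.
```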

Like in a funnel (see picture), the sentimenter can eliminate noise and thus reduce the size of the tweet bubbles (the bigger the bubble, the higher the noise) down to a fixed limit that depends on the quality of the sample: its variance.

[Figure: Sentimenter_Twitter_Funnel]

So, I have a question for BigData sentimenters: what is the sample variance of tweets due to non-verbal communication? Once the sample variance is known, the prediction error of the best sentimenter ever is also given:

error of prediction (size of the sentiment bubble) = sample variance of tweets…

…with the assumption that both the samples and the algorithm used by the sentimenter are not slanted/biased. If this is not the case, the BigData sentiment bubble might be even larger and the prediction less reliable. Anyhow, that is another story, another issue for BigData sentimenters (coming soon, here on this blog. Stay tuned!).

Feelink – Feel & Think approach for doing life!