Saving IT Expenditures and Sourcing Business Intelligence and Analytics Tools: What Are the KPIs for BigData?


IT expenditures are forecast to grow due to the business speculation behind the buzzword BigData. Therefore, in order to avoid biased insights: how can we measure the quality of the information extracted? How is it possible to get more value from BigData while avoiding useless expenditures? Are the insights provided by business intelligence, analytics, and prediction tools really reliable?

A meaningful example of the speculation behind BigData is the case of sentimenters that analyze tweets and posts: are they a BigData bubble? (see also My Issue with BigData Sentiment Bubble: Sorry, Which Is the Variance of the Noise?)

[Figure: the BigData transformation process]

As a matter of fact, insights and predictions are the final results of a transformation process (see figure) in which input data is processed by an algorithm. An excellent and exhaustive explanation of the process behind any business intelligence tool is the pyramid of data science, where five stages are identified: 1) data gathering/selection, 2) data cleaning/integration/storage, 3) feature extraction, 4) knowledge extraction and 5) visualization (see The Pyramid of Data Science).
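To make the five stages concrete, here is a minimal, purely illustrative sketch of the pyramid as a chain of functions (all names and the toy data are my own invention, not taken from The Pyramid of Data Science):

```python
def gather(source):
    # 1) data gathering/selection: keep only usable records
    return [r for r in source if r is not None]

def clean(records):
    # 2) data cleaning/integration/storage: normalize and deduplicate
    return sorted(set(r.strip().lower() for r in records))

def extract_features(records):
    # 3) feature extraction: a simple bag-of-words count
    features = {}
    for r in records:
        for word in r.split():
            features[word] = features.get(word, 0) + 1
    return features

def extract_knowledge(features):
    # 4) knowledge extraction: the three most frequent terms
    return sorted(features.items(), key=lambda kv: kv[1], reverse=True)[:3]

def visualize(insights):
    # 5) visualization: a plain-text report
    for term, count in insights:
        print(f"{term}: {count}")

raw = ["BigData is hype ", "bigdata is hype", "BigData creates value", None]
visualize(extract_knowledge(extract_features(clean(gather(raw)))))
```

The point of the sketch is the dependency chain: the quality of stage 5 can never exceed the quality of what stage 1 supplied.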

Anyhow, as with the production of goods such as cars, clothes or a good meal in a restaurant, high-quality results are ensured by the quality of the raw materials as well as by the quality of the transformation process.

Thus, if the raw material for an analytics tool is the data, how can we assess the quality of the supply, the data? ACID compliance and pessimistic locking already define best practices for guaranteeing the quality of data in terms of data management, and thus for reducing maintenance costs and improving efficiency.

However, from a procurement point of view, how should we evaluate the quality of the supply within a procurement process for sourcing data? Similarly, how can we evaluate the quality of the process that transforms the supply (data) into insights and valuable information?

As with the production of goods, a well-defined procurement process is ensured through well-written specification documents in which requirements and key parameters/characteristics are clearly stated. Needless to say, a well-defined procurement process will ensure the quality of the supply, and the quality of the supply will ensure the quality of the final product/service. In this case, it will ensure the quality of the insights.

Undoubtedly, the huge amount of data available nowadays and new technological improvements are generating more business opportunities than in the past, both to improve process efficiency and to define new business models (see Firm Infrastructure Vs Catalyst to Change Business Models: the Double Side of IT (Change Management)).

Thus, clearly defining which KPIs to look for and negotiating the aspects that really matter will ensure the best IT analytics services, both in terms of opportunities to exploit, thanks to the insights provided, and in terms of costs saved.

As an example, if parameters for evaluating the quality of the data and of the analytics are not yet available, I would rather go to a restaurant where I know the quality of the suppliers than rely on reviews and advertisements based on questionable data suppliers: fake TripAdvisor users (see Tripadvisor: a case study to think why bigdata variety matters).

Being aware that ignoring BigData opportunities also means ignoring better restaurants, with delicious meals and low prices, the companies that best define a procurement process for data sourcing will enjoy the best meal in the BigData restaurant.

Feelink – Feel and think approach for doing life!


My Issue with BigData Sentiment Bubble: Sorry, Which Is the Variance of the Noise? (NON Verbal Communication)


Why is sentiment analysis so hard? How should we interpret the word “crush” in a tweet: crush as in “being in love”, or crush as in “I will crush you”? According to Albert Mehrabian’s communication model and statistics, I would say that on average a tweet offers a sentimenter an accuracy of 7%. Not such a big deal, is it?

Let’s think about it by considering, as an example, the sentiment analysis case described in My issues with Big Data: crush as in “being in love” (positive) or crush as in “I will crush you” (negative)?

What is a sentimenter? As a process, it is a tool that, from an input (tweets), produces an output like “the sentiment is positive” or “the sentiment is negative”. Many sentimenters are even supposed to estimate how positive or negative the mood is: cool!
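A toy sketch of such a process, assuming the simplest possible approach (a lexicon of hand-picked words — the word lists here are made up for illustration, not a real sentiment lexicon):

```python
# Illustrative word lists; a real sentimenter would use a much larger lexicon.
POSITIVE = {"love", "great", "cool", "delicious"}
NEGATIVE = {"hate", "awful", "crush", "noise"}

def sentiment(tweet):
    # count positive and negative words in the text (the 7% verbal channel)
    words = tweet.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this restaurant"))
print(sentiment("I have a crush on you"))
```

Note that the second tweet is classified as negative even though it means “being in love” — exactly the ambiguity this post is about: the text alone cannot disambiguate “crush”.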

Paraverbal and non-verbal communication

Anyhow, according to Albert Mehrabian, the information transmitted in a communication process is 7% verbal, 38% paraverbal (tone of voice), and the remaining 55% is non-verbal (facial expressions, gestures, posture, ...).

In a tweet, as well as in an SMS or e-mail, neither paraverbal nor non-verbal communication is transmitted. Therefore, from a single tweet it is possible to extract only 7% of the available information: the text (verbal communication).

So, what about paraverbal and non-verbal communication? During a real-life conversation they play a key role, since they account for 93% of the whole message. Moreover, since paraverbal and non-verbal messages are strictly connected with emotions, they are exactly what we need: sentiments!

Emotions are also transmitted and expressed through words, such as “crush” in the example mentioned. However, within a communication process, the verbal and the non-verbal are not always consistent. That is the case when we talk with a friend who says that everything is ok while we perceive, more or less consciously, something different from his/her tone or expressions. Thus we might ask: are you really sure that everything is ok? As a golden rule, also for everyday life, I would recommend using non-verbal signals as an opportunity to ask questions rather than to infer misleading answers (see also: A good picture for Acceptance: feel the divergences & think how to deal with).

For these reasons, non-verbal messages are a kind of noise that interferes with verbal communication. In a tweet, this is a noise that interferes with the text. The more sensitive the transmitter and the receiver are to non-verbal communication, the more disturbing the noise is. It might even be disturbing enough to completely change the meaning of the message received.

Statistic and Information Theory

From a statistical point of view, the noise can be significantly reduced by collecting more samples. In Twitter, a tweet is one sample, and each tweet carries 7% of the available information (the text) and 93% of noise (the non-verbal communication), which is the unknown information.
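A quick simulation of the “more samples reduce the noise” idea (all numbers here are invented for illustration: a hypothetical true mood on a -1..1 scale, with a noise standard deviation of 0.93 as a playful nod to the 93%):

```python
import random

random.seed(0)

TRUE_SENTIMENT = 0.3   # hypothetical true mood of the crowd
NOISE_STD = 0.93       # heavy noise: the unobserved non-verbal channel

def noisy_tweet_score():
    # each tweet carries the true signal plus a large random noise term
    return TRUE_SENTIMENT + random.gauss(0, NOISE_STD)

def estimate(n):
    # averaging n tweets shrinks the noise on the estimate like 1/sqrt(n)
    return sum(noisy_tweet_score() for _ in range(n)) / n

for n in (1, 100, 10_000):
    err = abs(estimate(n) - TRUE_SENTIMENT)
    print(f"n={n:>6}  error={err:.3f}")
```

Averaging over more tweets does push the estimate of the average mood toward the true value — which is precisely why the next step in the reasoning looks tempting, and why its flaw is interesting.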

From a prediction/estimation point of view, no noise means no errors.

Thus, thanks to BigData, if the sentimenter analyzes all the tweets, theoretically it is possible to reduce the noise to zero and thus have no prediction error about sentiments... WRONG!!!

Even if the sentimenter is able to provide a result by analyzing all the BigData tweets (see Statistical Truisms in the Age of Big Data Features):

“the final error in our predictive models is likely to be irreducible beyond a certain threshold: this is the intrinsic sample variance”.

The variance is an estimate of how much the samples differ from each other. In the case of a communication process, that means how changeable emotions are through time. Just for fun, next time try talking to a friend while randomly changing your mood (happy, sad, angry, ...) and see what happens with him/her (just in case, before fighting, tell him/her that it is part of an experiment you read about in this post).

In Twitter, the variance of the samples is an estimate of how differently emotions impact the use of certain words in a tweet from person to person at a specific time. Or, similarly, considering one person, how differently emotions impact his/her use of words through time.
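The irreducible-error claim can be checked numerically. In this sketch (same invented numbers as before: true mean 0.3, intrinsic standard deviation 0.93), even a perfect estimate of the average mood still mispredicts individual tweets, and the leftover error variance is exactly the intrinsic sample variance:

```python
import random
import statistics

random.seed(1)

true_mean = 0.3
intrinsic_std = 0.93  # per-tweet emotional variability (hypothetical)

# Simulate a large population of tweet sentiment scores.
tweets = [random.gauss(true_mean, intrinsic_std) for _ in range(100_000)]

# The best possible single prediction for every tweet is the population mean.
perfect_prediction = statistics.fmean(tweets)

# Residual error of that perfect prediction on each individual tweet.
residuals = [t - perfect_prediction for t in tweets]

print(f"residual variance  ≈ {statistics.pvariance(residuals):.3f}")
print(f"intrinsic variance = {intrinsic_std ** 2:.3f}")
```

No amount of extra data removes that residual variance: collecting more tweets sharpens the estimate of the mean, but the spread of individual tweets around it — the sample variance — stays.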

Like in a funnel (see picture), the sentimenter can eliminate the noise, and thus reduce the size of the tweet bubbles (the bigger the bubble, the higher the noise), down to a fixed limit that depends on the quality of the sample: its variance.

[Figure: the sentimenter as a funnel shrinking the tweet bubbles]

So, I have a question for BigData sentimenters: what is the sample variance of tweets due to non-verbal communication? Once the sample variance is known, the prediction error of the best sentimenter ever is also given:

prediction error (size of the sentiment bubble) = sample variance of tweets...

...with the assumption that both the samples and the algorithm used by the sentimenter are not slanted/biased. If this is not the case, the sentiment BigData bubble might be even larger and the prediction less reliable. Anyhow, that is another story, another issue for BigData sentimenters (coming soon, here in this blog. Stay tuned!).

Feelink – Feel & Think approach for doing life!