Caution!!! BigData S.L.I.P.S.: five tips when using analytics



During my brief research on BigData, I've found five types of S.L.I.P.S. that a data scientist might encounter along the way: Statistics, Learning, Information, Psychology and Sources.

1) Statistics (Left Foot)

Statistics is without any doubt the main and best-known technical aspect. The most common slip is confusing correlation with causation: discovering correlations among variables doesn't necessarily imply a cause-effect relation. Mathematically speaking, correlation is a necessary but not sufficient condition for a cause-effect relationship.

(see also K. Borne: Statistical Truisms in the Age of BigData).
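
As a quick illustration, here is a minimal sketch with synthetic data and invented variable names: two series that merely share an upward trend can show a very strong correlation without any cause-effect link between them.

```python
# A minimal sketch of a spurious correlation: two series that share an
# upward trend but are causally unrelated (all data here is synthetic).
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(100)

ice_cream_sales = 10 + 0.5 * t + rng.normal(0, 3, 100)  # trends upward
shark_attacks = 2 + 0.1 * t + rng.normal(0, 1, 100)     # also trends upward

r = np.corrcoef(ice_cream_sales, shark_attacks)[0, 1]
print(f"Pearson r = {r:.2f}")  # close to 1, yet neither causes the other
```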

2) Learning (Right Foot)

OK, let's assume that a cause-effect relationship exists: which model/algorithm should be chosen to describe it? There are many: ARMA, Kalman filters, neural networks, customized models… which one fits best? A model that has been validated with the data available now might not be valid anymore in the future. So, constantly monitor the prediction error by comparing the model's estimates with the observed values.

Choosing a model implies making assumptions. In other words, never stop learning from the data and be open to breaking your assumptions; otherwise predictions and analyses will be slanted.
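
A minimal sketch of such monitoring, assuming an arbitrary rolling window and alert threshold (both numbers are illustrative, not a recommendation):

```python
# A minimal sketch of monitoring a model over time: compare each new
# observation with the model's prediction and track a rolling error.
from collections import deque

WINDOW, THRESHOLD = 30, 5.0        # arbitrary assumptions for illustration
recent_errors = deque(maxlen=WINDOW)

def monitor(predicted: float, observed: float) -> None:
    recent_errors.append(abs(predicted - observed))
    mae = sum(recent_errors) / len(recent_errors)
    if mae > THRESHOLD:
        print(f"Rolling MAE {mae:.2f} above {THRESHOLD}: revalidate the model")

# Example: feed each (prediction, observation) pair as it arrives.
monitor(predicted=102.0, observed=95.5)
```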

3) Information (Right Hand)

Which information is really meaningful? That's the first point to clarify before implementing a BigData initiative or any new BI tool for your business.

Another point is confusing information with data. According to information theory, and to well-grounded common sense as well, data are facts, while information is an interpretation of facts based upon assumptions (see also the D.A.I. model).

(see also: D. Laney & M. Beyer: BigData Strategy Essentials for Business and IT).

4) Psychology (the Head… of course!)

Have you ever heard about echo-chamber effects and social influence? Well, what happens is that social media might amplify irrational behaviours, where individuals (me included) base their decisions, more or less consciously, not only on their own knowledge or values but also on the actions of those who act before them.

In particular, whenever dealing with tricky, slippery tools such as BigData sentiment analysis, it is better to consider carefully the relevance and impact of psychology and behaviour. The risk is gathering data that is intrinsically biased (see also My Issue with BigData Sentiments).

(see also:

D. Amerland: How Semantic Search is changing end-user behaviour

C. Sunstein: Echo Chambers: Bush v. Gore, Impeachment, and Beyond – Princeton University Press

e! Science News: Information technology amplifies irrational group behavior).

5) Sources (Left Hand)

Variety!!! That is one of the three V's suggested by D. Laney: Volume, Velocity and Variety. Choosing the right model is not the only thing that matters in order to avoid biased predictions and insights: what about the reliability of the sources of the data used for the analysis? If the data is biased, predictions and insights will be biased as well. In particular, any series of data has a variance and a bias that cannot be eliminated.

How to mitigate such a risk? By gathering data from different sources and weighting each of them according to its reliability, i.e. its variance, as in the sketch below.
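
A minimal sketch of this idea, assuming each source reports the same quantity along with an estimated variance (this is plain inverse-variance weighting; all the numbers are invented):

```python
# Combine the same measurement from several sources, weighting each source
# by the inverse of its variance: more reliable (lower-variance) sources
# count more. Values are made up.
sources = [
    # (estimate, variance)
    (10.2, 0.5),   # reliable source
    (11.0, 2.0),   # noisier source
    (9.0,  8.0),   # very noisy source
]

weights = [1.0 / var for _, var in sources]
combined = sum(w * est for (est, _), w in zip(sources, weights)) / sum(weights)
print(f"combined estimate = {combined:.2f}")  # dominated by the reliable source
```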

Moreover, as a BigData scientist, and as a consumer as well, never forget positive and negative SEO tactics. There is a social-digital jungle out there! (see Tripadvisor: a Case Study to Think Why BigData Variety Matters).

Feelink – Feel & Think approach for doing life!


Saving IT expenditures and sourcing Business Intelligence and Analytics tools: what are the KPIs for BigData?


IT expenditures are forecast to grow due to the business speculation behind the buzzword BigData. Therefore, in order to avoid biased insights: how to measure the quality of the information extracted? How is it possible to get more value from BigData while saving useless expenditures? Are the insights provided by business intelligence, analytics and prediction tools really reliable?

A meaningful example of the speculation behind BigData is the case of sentiment analyzers ("sentimenters") that process tweets and posts: are they a BigData bubble? (see also My Issue with BigData Sentiment Bubble: Sorry, Which Is the Variance of the Noise?)

[Figure: the BigData transformation process]

As a matter of fact, insights and predictions are the final results of a transformation process (see figure) where the input data is elaborated by an algorithm. A fantastic and exhaustive explanation of the process behind any business intelligence tool is the pyramid of data science, where five stages are identified: 1) data gathering/selection, 2) data cleaning/integration/storage, 3) feature extraction, 4) knowledge extraction and 5) visualization (see The Pyramid of Data Science).
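
Just to make the shape of the pyramid concrete, here is a toy sketch of the five stages chained together (all the data and stage bodies are placeholders, not a real pipeline):

```python
# A minimal sketch of the five pyramid stages as a chain of functions.
def gather():            # 1) data gathering/selection
    return [("2014-03-01", " 42 "), ("2014-03-02", None), ("2014-03-03", "45")]

def clean(rows):         # 2) data cleaning/integration/storage
    return [(d, float(v)) for d, v in rows if v is not None]

def extract_features(rows):       # 3) feature extraction
    values = [v for _, v in rows]
    return {"mean": sum(values) / len(values), "n": len(values)}

def extract_knowledge(features):  # 4) knowledge extraction
    return f"average reading {features['mean']:.1f} over {features['n']} days"

def visualize(insight):  # 5) visualization
    print(insight)

visualize(extract_knowledge(extract_features(clean(gather()))))
```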

Anyhow, as with the production of goods such as cars, clothes or a good meal in a restaurant, high-quality results are ensured by the quality of the raw materials as well as by the quality of the transformation process.

Thus, if the raw material for analytics is the data, how to assess the quality of the supply? ACID compliance and pessimistic locking already define best practices to guarantee the quality of data in terms of data management, and thus to reduce maintenance costs and improve efficiency.
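
For instance, a minimal sketch of pessimistic locking with PostgreSQL and psycopg2 (the connection string and table are hypothetical):

```python
import psycopg2

conn = psycopg2.connect("dbname=analytics user=etl")  # hypothetical DSN
try:
    with conn:  # commits on success, rolls back on exception
        with conn.cursor() as cur:
            # SELECT ... FOR UPDATE locks the row: concurrent writers must
            # wait, keeping this read-modify-write cycle consistent.
            cur.execute(
                "SELECT reading FROM sensor_data WHERE id = %s FOR UPDATE",
                (42,),
            )
            (reading,) = cur.fetchone()
            cur.execute(
                "UPDATE sensor_data SET reading = %s WHERE id = %s",
                (reading * 0.9, 42),
            )
finally:
    conn.close()
```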

However, from a procurement point of view, how to evaluate the quality of the supply within a procurement process for sourcing data? Similarly, how to evaluate the quality of the process that transforms the supply (the data) into insights and valuable information?

As with the production of goods, a well-defined procurement process is ensured through well-written specification documents where requirements and key parameters/characteristics are clearly stated. Needless to say, a well-defined procurement process will ensure the quality of the supply, and the quality of the supply will ensure the quality of the final product/service. In this case, it will ensure the quality of the insights.

Undoubtedly, the huge amount of data available nowadays and the new technological improvements are generating more business opportunities than in the past, both to improve process efficiency and to define new business models (see Firm Infrastructure Vs Catalyst to Change Business Models: the Double Side of IT (Change Management)).

Thus, clearly defining which KPIs to look for and negotiating the aspects that really matter will ensure the best IT analytics services, both in terms of opportunities to exploit, thanks to the insights provided, and in terms of cost savings. A few candidate KPIs are sketched below.
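
As a minimal sketch, here are three data-quality KPIs one could write into a sourcing specification: completeness, duplication and freshness. The field names, records and thresholds are hypothetical examples, not a standard.

```python
from datetime import date

records = [
    {"id": 1, "value": 10.0, "updated": date(2014, 3, 1)},
    {"id": 2, "value": None, "updated": date(2014, 3, 2)},
    {"id": 2, "value": 12.0, "updated": date(2014, 3, 2)},  # duplicate id
]

completeness = sum(r["value"] is not None for r in records) / len(records)
duplication = 1 - len({r["id"] for r in records}) / len(records)
freshness = max(r["updated"] for r in records)

print(f"completeness = {completeness:.0%}")  # share of non-null values
print(f"duplication  = {duplication:.0%}")   # share of repeated keys
print(f"most recent  = {freshness}")         # how fresh is the supply?
```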

As an example, if parameters for evaluating the quality of the data, and of the analytics as well, are not yet available, I would rather go to a restaurant where I know the quality of the suppliers than rely on reviews and advertisements based on questionable data suppliers: fake TripAdvisor users (see Tripadvisor: a Case Study to Think Why BigData Variety Matters).

Being aware that ignoring BigData opportunities also means ignoring better restaurants, with delicious meals and low prices, the companies that best define a procurement process for data sourcing will enjoy the best meal in the BigData restaurant.

Feelink – Feel and think approach for doing life!