Social Media Magic Quadrant (Part 2): Ability to Search Content


Social Network Magic Quadrant

The second ability of a social media platform that wants to lead is the ability to search and provide meaningful content to users.

(see also Social Media Magic Quadrant)

When we need a piece of information, where do we look for it on the web? As part of a marketing communication strategy, and from an SEO perspective, knowing who leads as a search engine is crucial in order to be visible and known as a brand.

(see also: Sentiment Analysis in Semantic Search: Find Out What Others are Thinking About Your Brand).

In other words: be visible as a brand where people most likely go to look for something.

According to @netmarketshare (July 2014), Google is leading without question with 68.75% of queries, followed by Baidu (China) with 18.03%. Other search engines such as Yahoo, Bing and Ask are merely striving to keep up.

Anyhow, how did Google gain its leadership as a search engine? What are the main abilities of a search content leader?

1. Giving people what they are looking for

Being able to collect the information that people want to know, information that is meaningful and useful. In one word, a search that is semantic: content + context.

Providing a result that is contextualized according to the user's network and interests is what can boost user engagement. This is what Yahoo might be attempting to achieve with the acquisition of Flurry (…unless it's only speculation).

2. What about developing a brand as a search engine? Google is not only leading as a search content provider. It was also named the best workplace of 2014, mainly because it gives its employees great challenges, a great atmosphere and a great work-life balance. There might be a correlation between the two facts, or it may simply be a coincidence. Anyhow, would you buy a product or a service from a company that is recognized as a bad employer?

What about SN strategies?

Considering how Google is investing in its social media platform (Google+), and according to the social media magic quadrant, my perception is that Google is attempting to switch from "SN Content Searcher" to "SN Leader". In other words, from a leadership in searching the content "of others" (platform) to searching the content "of my own (platform)".

Feelink – Feel & Think approach for doing life!


Social Media Magic Quadrant (Part 1): Ability to Generate Content


Social Network Magic Quadrant

Recalling the Social Media Magic Quadrant as a tool for assessing the strategic position of a social media website, there are two key aspects:

1) Ability to generate content

2) Ability to search and provide meaningful content for the single user (contextualized search)

Considering the ability to generate content, below is an interesting infographic by Dustin W. Stout that gives a measure of who is leading as a content generator.


Click the animation to open the full version (via PennyStocks.la).

Well, e-mail is still the most used communication tool, while WhatsApp has gained an outstanding leadership in instant messaging by intercepting and disrupting SMS on mobile phones. Something for all the mobile operators to consider as well.

Regarding microblogging, there is no doubt that Twitter is leading Tumblr in terms of the number of posts generated.

Moreover, as expected, there is no competition between Facebook and Google+ considering the number of likes.

Anyhow, how will a social media platform lead as a content generator?

Number of active users?

Ability to engage people? Or what?

Share your thoughts.

Feelink – Feel & Think approach for doing life!

Social Media Magic Quadrant: who will be successful? The key is a linkage with search engines


Social Network Magic Quadrant

Around the world many social networks are in place; many have just been born, and many others have just died.

Level of engagement, usability and letting final users decide which information to share are most likely what a SN should develop in order to be successful nowadays.

Anyhow, who among the big ones like Facebook, Twitter, LinkedIn, Google+,… will survive into the future?

What are the key factors of success in this race?

I guess that to lead as a social network one key ingredient is required: a strong linkage between the content (+context) and a well-known search engine.

That's because when people want to know something more about a topic, company or person, they usually type a keyword into a search engine, or into a social network as well. Therefore, having meaningful content that is delivered/promoted among the results of the search engine is what will lead a social media platform to success.

On the other side, a search engine has to defend its brand (because it has one!!!). A search engine shouldn't return results with bad content and/or from people with a bad reputation. Thus, the algorithm of a search engine should defend its brand by providing good content from trusted social networks.

Well, probably, among the big ones the strongest linkage between a social network and a search engine belongs to Google: Google+ (SN) and Google Search (search engine).

However, even though Google Search is the most used search engine, Google+ is not leading as a content generator and is thus unable to reach a critical mass of content. Promoting Google+ by engaging as many (trusted) users as possible is part of its strategy, as far as I can tell.

On the other side, judging by its mergers & acquisitions, the most popular SN, Facebook, doesn't seem to show an interest in promoting or investing in a search engine. Who will win? We'll see.

Meanwhile, other SNs have already shown how the linkage with a good search engine is strategic. Think about LinkedIn, which is widely used by hiring managers for screening candidates.

Recalling the fairy tale Snow White and the seven Dwarfs:

Mirror, mirror on the wall, who’s the fairest of Social Networks all?

(see also: Semantic search algorithm, behaviorism and the fairy tale Snow White and the Seven Dwarfs. Would SEO behave like Grumpy?)

Snow White and Semantic Search

Feelink – Feel & Think approach for doing life!

P.S.: By the way, will Gartner Inc. approve the Social Media Magic Quadrant? 😉

TV Audience and Tweets Flow: a great beauty or bigdata SLIP n.1 for marketing communication strategists (statistic)?


TV_Audience and Tweets: a big beauty or bigdata SLIP n.1 (statistic)?

After being awarded Best Foreign Language Film (Italy) at the 2014 Academy Awards, The Great Beauty, directed by Paolo Sorrentino, drew an outstanding audience last week when it was broadcast on Italian prime-time TV.

Comments and opinions about the movie aside (I would recommend seeing it), publishing trends and flows from social media is getting more frequent every day. A few days ago the Italian TV network that broadcast the movie posted a "statistic" (here) about the flow of tweets, with the purpose of explaining when the tweet peaks happened, as well as identifying the main influencers.

According to a third-party analysis, the tweet peaks happened at specific moments: 1) a meaningful line by Jep Gambardella, the protagonist; 2) when Sabrina Ferilli (a famous Italian actress) appeared in the movie in all her beauty; and 3) at the end of the movie.

Very interesting. However, looking carefully at the charts (see figure above) I have noticed two things:

  1. The tweet peaks happen concurrently with a temporary decline of the TV audience (share). Thus, a (negative) correlation between Twitter peaks and TV share exists.
  2. The tweet peaks and the audience downturns occur with perfect timing: one every 30 minutes.

Since commercial breaks during TV shows, and radio broadcasts as well, are defined in advance according to a specific TV time clock…

…well, I am wondering: is there also a cause-effect relationship between the commercial breaks during TV programs and the peaks registered on Twitter?

Who knows. An answer could be provided only by analyzing data and real facts carefully. For example, why not put chips in our homes that register and transmit when the refrigerator has been opened to grab something to eat, or even when a toilet has just been flushed? Other stimulating correlations might be found by gathering that kind of data.

Anyhow, finding correlations is quite easy: just observe what happens. Finding causal relationships is definitely much trickier (see also BigData S.L.I.P.S. n.1: statistic), since it requires deep knowledge of what is being analyzed, and it is quite easy to fall into wrong assumptions. In this case, the beauty of human behaviour.
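To make the distinction concrete, here is a minimal Python sketch with entirely made-up numbers: two synthetic series that spike and dip every 30 minutes are strongly negatively correlated, yet the correlation coefficient alone cannot tell whether the ad break is a hidden common cause, whether one series drives the other, or whether it is all a coincidence.

```python
import numpy as np

rng = np.random.default_rng(42)
minutes = np.arange(150)                      # 2.5 hours of broadcast, minute by minute

ad_break = (minutes % 30 == 0).astype(float)  # hypothetical break every 30 minutes
tv_share = 30 - 5 * ad_break + rng.normal(0, 0.5, minutes.size)  # share dips at each break
tweets = 200 + 400 * ad_break + rng.normal(0, 20, minutes.size)  # tweets spike at each break

r = np.corrcoef(tweets, tv_share)[0, 1]
print(f"Pearson correlation, tweets vs TV share: {r:.2f}")  # strongly negative

# The negative r is real, but here ad_break is the hidden common cause of both
# series: correlation is necessary, not sufficient, for a cause-effect claim.
```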

By the way, concerning the connection between tweets and TV shows, last year Twitter and BBC America established a partnership for advertising (see Mashable, Twitter Partners With BBC America to Promote Branded Videos).

Maybe it's just a coincidence… or maybe Twitter and BBC have information showing that when people go to the toilet it is just to post a tweet, and not because of a TV break 😉

Feelink – Feel & Think approach for doing life!

Caution!!! BigData S.L.I.P.S.: five tips when using analytics


BigData_SLIPS

During my brief research on BigData, I've found 5 types of S.L.I.P.S. that a data scientist might encounter along the way: Statistic, Learning, Information, Psychology and Sources.

1) Statistic (Left Foot)

Statistics is without any doubt the main and best-known technical aspect. The most common slip concerning statistics is mistaking correlation for causation. In other words, discovering correlations among variables doesn't necessarily imply a cause-effect relation. Mathematically speaking, correlation is a necessary but not sufficient condition for a cause-effect relationship.

(see also K. Borne: Statistical Truisms in the Age of BigData).

2) Learning (Right Foot)

OK, let's assume that a cause-effect relationship exists: which model/algorithm should we choose to describe the relationship? There are many: ARMA, Kalman filters, neural networks, customized models,… which one fits best? A model that has been validated with the data available now might not be valid anymore in the future. So, constantly monitor and measure the prediction error against the values estimated by the model.

Choosing a model implies making assumptions. In other words, never stop learning from the data and be open to breaking assumptions; otherwise predictions and analyses will be slanted.
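As a minimal illustration of that advice, here is a Python sketch (all numbers and thresholds are invented for the example): keep scoring the model on fresh observations and flag it for re-validation when the rolling prediction error drifts above the level measured at validation time.

```python
import numpy as np

def rolling_rmse(errors, window=30):
    """Root mean square of the most recent prediction errors."""
    recent = np.asarray(errors[-window:], dtype=float)
    return np.sqrt(np.mean(recent ** 2))

rng = np.random.default_rng(0)
baseline_rmse, tolerance = 1.0, 1.5  # illustrative values from the validation phase

errors = []
for t in range(200):
    # Simulated prediction errors: the underlying process drifts after t=120,
    # i.e. the assumptions behind the fitted model stop holding.
    sigma = 1.0 if t < 120 else 3.0
    errors.append(rng.normal(0, sigma))
    if len(errors) >= 30 and rolling_rmse(errors) > baseline_rmse * tolerance:
        print(f"t={t}: rolling RMSE {rolling_rmse(errors):.2f} exceeds "
              f"{baseline_rmse * tolerance:.2f}: re-validate the model")
        break
```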

3) Information (Right Hand)

Which information is really meaningful? That’s the first point to clarify before implementing a bigdata initiative or any new BI tool for your business.

Another point is mistaking information for data. According to information theory, and well-grounded common sense as well, data are facts, while information is an interpretation of facts based upon assumptions (see also the D.A.I. model).

(see also: D. Laney & M. Beyer: BigData Strategy Essentials for Business and IT).

4) Psychology (the Head… of course!)

Have you ever heard about echo-chamber effects and social influence? Well, what happens is that social media might amplify irrational behaviours, where individuals (me included) base their decisions, more or less consciously, not only on their knowledge or values but also on the actions of those who act before them.

In particular, whenever dealing with tricky, slippery tools such as bigdata sentiment, it is better to consider carefully the relevance and impact of psychology and behaviour. The risk is gathering data that is intrinsically biased (see also My Issue with BigData Sentiments).

(see also:

D. Amerland: How Semantic Search is changing end-user behaviour

C. Sunstein: Echo Chambers: Bush v. Gore, Impeachment, and Beyond – Princeton University Press

e! Science News: Information technology amplifies irrational group behavior).

5) Sources (Left Hand)

Variety!!! That is one of the three Vs suggested by D. Laney: Volume, Velocity and Variety. Choosing the right model is not the only way to avoid biased predictions and insights: what about the reliability of the sources of the data used for the analysis? If the data is biased, predictions and insights will be biased as well. In particular, any series of data has a variance and a bias that cannot be eliminated.

How to mitigate such a risk? By gathering data from different sources and weighting each source according to its reliability: its variance.
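One classic way to do such weighting, sketched below in Python with invented numbers, is inverse-variance weighting: estimates of the same quantity coming from several sources are combined so that the noisier sources count less.

```python
import numpy as np

estimates = np.array([4.2, 3.6, 5.1])  # the same KPI as reported by three sources
variances = np.array([0.1, 0.5, 2.0])  # estimated (un)reliability of each source

weights = 1.0 / variances              # inverse-variance weights...
weights /= weights.sum()               # ...normalized to sum to 1

combined = float(np.dot(weights, estimates))
print(f"weights: {np.round(weights, 3)}, combined estimate: {combined:.2f}")

# The noisiest third source barely moves the result. Note, however, that no
# weighting scheme can remove a bias shared by all the sources.
```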

Moreover, as a bigdata scientist, and as a consumer as well, never forget positive and negative SEO tactics. It's a social-digital jungle out there! (see Tripadvisor: a Case Study to Think Why BigData Variety matters).

Feelink – Feel & Think approach for doing life!

Semantic search algorithm, behaviorism and the fairy tale Snow White and the Seven Dwarfs. Would SEO behave like Grumpy?


How does semantic search work? What are the implications for SEO tactics and user/customer behavior?

Google search is not unlike the "Mirror, mirror on the wall, who's the fairest of them all?", where the question asked reveals (in the fairy tale) the Evil Queen's narcissistic obsession.

What a great metaphor to explain how semantic search works! (see Google Search and the Racial Bias).

I will take the assist from David Amerland to help me better understand the SEO world (something still unknown to me), as well as to remember childhood times with the fairy tale "Snow White and the Seven Dwarfs".

So, let's have a look at the characters of the famous fairy tale:

The mirror is the result of the search engine. According to what I've understood about semantic search, the mirror reflects back a result that is contextualized according to the user and his/her relationships across the social networks, as well as through the analysis of past behaviour.

Snow White is the most beautiful creature in the WEB forest. She publishes smart content and establishes trusted relationships in the social media, so that the mirror (the semantic search engine) reflects back a beautiful princess… according to the algorithm, I would say.

The evil queen is the bad guy, attempting to be seen as the most beautiful in the WEB forest while she is not. The evil queen struggles and suffers a lot for that, since the mirror always suggests Snow White as the best result… life in the digital jungle is not so easy for the evil queen!

The poisoned apple represents a trick, a negative SEO attack whose objective is either to game the search engine (the mirror) or to compromise the reputation of Snow White. Fake reviews, and negative or positive SEO tactics, are just examples of how an apple can be poisoned in order to digitally kill a competitor and game the search engine algorithm (see the case of TripAdvisor).

The seven dwarfs are the data scientists and SEO experts that are mining the WEB forest in order to get valuable and reliable information from the WEB. Usually they are well-intentioned and thus willing to protect the beauty of Snow White from negative SEO (the poisoned apple and the evil queen).

The charming Prince represents all the users, companies and individuals, who go deeper and deeper into the WEB forest in order to discover the truth. The mirror's result apart: who is really the fairest in the WEB forest? Encountering a few smart dwarfs might be useful for the charming Prince, both in the forest, to discover the beauty of Snow White, and on the WEB, to find great content and reputations according to personal impressions rather than relying only on algorithms.

…so, what is the moral of the fairy tale "Snow White and the Seven Dwarfs" applied to modern semantic search and SEO?

An interesting point is made by D. Amerland in his article "How semantic search is changing end-user behaviour". In particular:

The fact remains that the web is changing, search has changed and the way we operate as individuals, as well as marketers, has changed with it.

Since semantic search is powerful enough to influence the behaviour of the end-user (individuals, companies,…), the point is: what kind of algorithm is there behind the mirror on the wall? What are the criteria behind the result that identifies the fairest princess on the WEB?

A more interesting doubt: what happens if the criteria behind the search algorithm (the mirror) change, so that the fairest on the WEB would be Grumpy, one of the seven dwarfs? Would all the end-users and SEOs really want to become and behave like Grumpy?

seo_mirror_on_the_wall

Saving IT Expenditures and sourcing Business Intelligence and Analytics tools: which are the KPIs for BigData?


IT expenditures are forecast to grow due to the business speculation behind the buzzword BigData. Therefore, in order to avoid biased insights: how to measure the quality of the information extracted? How is it possible to get more value from BigData and avoid useless expenditures? Are the insights provided by business intelligence, analytics and prediction tools really reliable?

A meaningful example of the speculation behind BigData is the case of sentimenters that analyze tweets and posts: are they a BigData bubble? (see also My Issue with BigData Sentiment Bubble: Sorry, Which Is the Variance of the Noise?)

Bigdata_process

As a matter of fact, insights and predictions are the final results of a transformation process (see figure) where the input data is elaborated by an algorithm. A fantastic and exhaustive explanation of the process behind any business intelligence tool is the pyramid of data science, where five stages are identified: 1) data gathering/selection, 2) data cleaning/integration/storage, 3) feature extraction, 4) knowledge extraction and 5) visualization (see The Pyramid of Data Science).
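To make the shape of that transformation explicit, here is a skeletal Python sketch of the five stages; every function is a toy stub of my own invention, not a real library.

```python
def gather(sources):               # 1) data gathering / selection
    return [record for src in sources for record in src]

def clean(records):                # 2) data cleaning / integration / storage
    return [r for r in records if r is not None]

def extract_features(records):     # 3) feature extraction
    return [{"length": len(str(r))} for r in records]

def extract_knowledge(features):   # 4) knowledge extraction (here: a toy average)
    lengths = [f["length"] for f in features]
    return {"avg_length": sum(lengths) / len(lengths)} if lengths else {}

def visualize(insight):            # 5) visualization
    print(f"insight: {insight}")

# The quality of the final insight depends on every stage of the pipeline.
visualize(extract_knowledge(extract_features(clean(gather([["foo", None, "barbaz"]])))))
```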

Anyhow, as with the production of goods such as cars, clothes or a good meal in a restaurant, high-quality results are ensured by the quality of the raw materials as well as by the quality of the transformation process.

Thus, if the raw material for an analytics tool is the data, how to assess the quality of the supply, the data? ACID compliance and pessimistic locking already define best practices to guarantee the quality of data in terms of data management, and thus reduce maintenance costs and improve efficiency.

However, from a procurement point of view, how to evaluate the quality of the supply within a procurement process for sourcing data? Similarly, how to evaluate the quality of the process that transforms the supply (data) into insights and valuable information?

As with the production of goods, a well-defined procurement process is ensured through well-written specification documents where requirements and key parameters/characteristics are clearly stated. Needless to say, a well-defined procurement process will ensure the quality of the supply, and the quality of the supply will ensure the quality of the final product/service. In this case, it will ensure the quality of the insights.

Undoubtedly, the huge amount of data available nowadays and new technological improvements are generating more business opportunities than in the past, both to improve process efficiency and to define new business models (see Firm Infrastructure Vs Catalyst to Change Business Models: the Double Side of IT (Change Management)).

Thus, clearly defining which KPIs to look for and negotiating the aspects that really matter will ensure the best IT analytics services, both in terms of opportunities to exploit, thanks to the insights provided, and of costs saved.

As an example, if parameters for evaluating the quality of the data, and of the analytics as well, are not yet available, I would rather go to a restaurant where I know the quality of the suppliers than rely on reviews and advertisements based on questionable data suppliers: fake TripAdvisor users (see Tripadvisor: a case study to think why bigdata variety matters).

Being aware that ignoring bigdata opportunities also means ignoring better restaurants, with delicious meals and low prices, the companies that best define a procurement process for data sourcing will enjoy the best meal in the bigdata restaurant.

Feelink – Feel & Think approach for doing life!

My Issue with BigData Sentiment Bubble: Sorry, Which Is the Variance of the Noise? (NON Verbal Communication)


Why is sentiment analysis so hard? How should we interpret the word "crush" in a tweet? Crush as in "being in love", or crush as in "I will crush you"? According to Albert Mehrabian's communication model and statistics, I would say that on average a tweet gives a sentimenter an accuracy of 7%. Not such a big deal, is it?

Let's think about it by considering, as an example, the case of the sentiment analysis described in My issues with Big Data: Sentiment: crush as in "being in love" (positive), or crush as in "I will crush you" (negative)?

What is a sentimenter? As a process, it is a tool that from an input (tweets) produces an output like "the sentiment is positive" or "the sentiment is negative". Many sentimenters are even supposed to estimate how positive or negative the mood is: cool!
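To see why this is hard, here is a deliberately naive sentimenter sketch in Python. The word lists are invented; the point is only that a word like "crush" can legitimately sit on both lists, so the text alone is ambiguous.

```python
POSITIVE = {"love", "great", "cool", "crush"}   # "I have a crush on you"
NEGATIVE = {"hate", "awful", "crush"}           # "I will crush you"

def sentiment(tweet: str) -> str:
    """Score a tweet by counting words from fixed polarity lists."""
    words = tweet.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "ambiguous"

print(sentiment("I have a crush on you"))   # ambiguous: "crush" is on both lists
print(sentiment("I will crush you"))        # ambiguous, for the same reason
```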

Paraverbal and non-verbal communication

Anyhow, according to Albert Mehrabian, the information transmitted in a communication process is 7% verbal, 38% paraverbal (tone of voice) and the remaining 55% non-verbal (facial expressions, gestures, posture,…).

In a tweet, as in an SMS or e-mail, neither paraverbal nor non-verbal communication is transmitted. Therefore, from a single tweet it is possible to extract only 7% of the information available: the text (verbal communication).

So, what about paraverbal and non-verbal communication? During a real-life conversation they play a key role, since they account for 93% of the whole message. Moreover, since paraverbal and non-verbal messages are strictly connected with emotions, they are exactly what we need: sentiments!

Emotions are also transmitted and expressed through words, such as "crush" in the example mentioned. However, within a communication process, the verbal and the non-verbal are not always consistent. That's the case when we talk with a friend: he/she says that everything is ok, while we perceive, more or less consciously, something different from his/her tone or expressions. Thus we might ask: are you really sure that everything is ok? As a golden rule, also for everyday life, I would recommend using non-verbal signals as an opportunity to ask questions rather than to infer misleading answers (see also: A good picture for Acceptance: feel the divergences & think how to deal with).

For these reasons, non-verbal messages are a kind of noise that interferes with verbal communication. In a tweet, it is a noise that interferes with the text. Such noise is as disturbing as the transmitter and the receiver are sensitive to non-verbal communication. It can be disturbing enough to completely change the meaning of the message received.

Statistics and Information Theory

From a statistical point of view, noise can be significantly reduced by collecting more samples. On Twitter, a tweet is one sample, and each tweet carries 7% of available information (the text) and 93% of noise (the non-verbal communication), which is the unknown information.

From a prediction/estimation point of view, no noise means no errors.

Thus, thanks to BigData, if the sentimenter analyzes all the tweets, it is theoretically possible to reduce the noise to zero and thus have no prediction error about sentiments… WRONG!!!

Even if the sentimenter is able to provide a result by analyzing all of the BigData tweets (see Statistical Truisms in the Age of Big Data):

"the final error in our predictive models is likely to be irreducible beyond a certain threshold: this is the intrinsic sample variance".

The variance is an estimate of how much the samples differ from each other. In the case of a communication process, it measures how changeable emotions are through time. Just for fun, next time try talking to a friend while randomly changing your mood (happy, sad, angry,…) and see what happens with him/her (just in case, before fighting, tell him/her it is part of an experiment you've read about in this post).

On Twitter, the variance of the samples is an estimate of how differently emotions impact the use of certain words in a tweet, from person to person at a specific time. Or, similarly, considering one person, how differently emotions impact the use of words through time.

Like in a funnel (see picture), the sentimenter can eliminate the noise, and thus reduce the size of the tweet bubbles (the higher the bubble, the higher the noise), only down to a fixed limit that depends on the quality of the sample: its variance.

Sentimenter_Twitter_Funnel
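A small simulation, with purely illustrative numbers, shows the funnel at work: collecting more tweets improves the estimate of the average mood, but the error in predicting any single tweet never drops below the intrinsic standard deviation of the samples.

```python
import numpy as np

rng = np.random.default_rng(1)
true_mean, intrinsic_std = 0.2, 0.8   # assumed mood signal + person-to-person variance

for n in (100, 10_000, 1_000_000):    # "more BigData" each round
    tweets = rng.normal(true_mean, intrinsic_std, n)
    estimated_mean = tweets.mean()    # the *average* mood: improves with n
    per_tweet_rmse = np.sqrt(np.mean((tweets - estimated_mean) ** 2))
    print(f"n={n:>9}: error on the mean = {abs(estimated_mean - true_mean):.4f}, "
          f"per-tweet RMSE = {per_tweet_rmse:.3f}")

# The estimate of the mean converges, but the per-tweet RMSE stays around 0.8:
# the intrinsic sample variance is the floor of the funnel.
```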

So, I have a question for bigdata sentimenters: what is the sample variance of tweets due to non-verbal communication? Once the sample variance is known, the prediction error of the best sentimenter ever is also given:

error of prediction (size of the sentiment bubble) = sample variance of tweets…

…with the assumption that both the samples and the algorithm used by the sentimenter are not slanted/biased. If this is not the case, the sentiment bigdata bubble might be even larger and the prediction less reliable. Anyhow, that is another story, another issue for BigData sentimenters (coming soon, here in this blog. Stay tuned!).

Feelink – Feel & Think approach for doing life!

Tripadvisor: a case study to think why bigdata variety matters


tripaadvisor

The recent scandals about fake reviews have put the reliability of TripAdvisor under discussion (see The Guardian).

Such bad quality of service is not useful for consumers or entrepreneurs, nor, in the long run, for the reputation of TripAdvisor. So, where is the problem?

Clearly it's a question of the reliability of the sources of information; specifically, for TripAdvisor it is a question of assessing the reliability of the user who posts a new review. Nice and easy… like stating the obvious. However, thinking also of the practice of so-called Negative SEO, this is not only an issue for websites like TripAdvisor but also for all the companies that have to promote their brands on the social networks (whoever thinks they don't need to, raise your hand).

In order to fix the issue, TripAdvisor developed the Report Blackmail service, which tracks and eventually bans users who adopt Negative SEO tactics. For example, 100 fake users managed by a restaurateur, reporting cases of colitis and the runs in the reviews of the competitor around the corner. Such a solution tries to catch fake users when they have already carried out the "attack" and, if not working properly, it might ban honest users by mistake. It sounds reactive rather than proactive, doesn't it?

So, are there other approaches that can fix the problem of malicious reviews proactively? An idea could be to use new IT bigdata technologies and rethink the business model. How? (see also MIT Sloan Management Review: technology as a catalyst for change).

An approach could be to associate each TripAdvisor user with a unique ID, for example a TripAdvisor identity card, while restaurateurs and hotel managers have an ID card reader (RFID, infrared, etc.). Thus, once the consumer eats the meal and goes to pay, the restaurateur records the consumer ID, which univocally identifies the user, plus time and position. Finally, the user just has to fill in the form for his/her review, which can now be fully validated. Potentially, once users sign in to the TripAdvisor website, a list of pending reviews not yet filled in might also be provided, in order to facilitate the process and thus create the so-called "customer experience". Moreover, by tracking the date precisely, it is also possible to provide evaluations that are more meaningful for the customer, by giving less importance to aged reviews.

With the technology currently available, even a smartphone could actually be a card reader, since it might be equipped with an RFID or a magnetic stripe reader; by developing a specific app, the restaurateur could easily and quickly transmit a transaction with the TripAdvisor ID of the customer.

tipadvisor_new_business_model
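A toy sketch of the proposed validation rule might look as follows; the venue name, the IDs and the 24-hour matching window are all assumptions for illustration. A review counts as verified only if the venue has transmitted a matching (user, venue, time) transaction.

```python
from datetime import datetime, timedelta

transactions = [  # transmitted by the venue's card reader / app at payment time
    {"user_id": "TA-001", "venue": "Trattoria da Mario", "ts": datetime(2014, 3, 1, 20, 30)},
]

def is_verified(review, window_hours=24):
    """True if a transaction matches the review's user, venue and time window."""
    return any(
        t["user_id"] == review["user_id"]
        and t["venue"] == review["venue"]
        and abs(t["ts"] - review["ts"]) <= timedelta(hours=window_hours)
        for t in transactions
    )

review = {"user_id": "TA-001", "venue": "Trattoria da Mario",
          "ts": datetime(2014, 3, 2, 9, 0), "text": "Great carbonara!"}
print(is_verified(review))   # True: the review matches a recorded visit
```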

Apart from the proposed solution, this is an example that stresses the importance, when defining a bigdata strategy, of first identifying the information that is really meaningful (user, time, position), as well as of having a Variety of sources in order to validate the reliability of the data. In the case of TripAdvisor, it is crucial to correlate the data coming from the restaurateurs with the reviews of the customer/user couple (together!!!).

Thinking about the definition of BigData by Gartner:

"Bigdata is high-volume, high-velocity and high-variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight and decision making."

So, Variety is one of the three "Vs" (Volume, Velocity and Variety), and the Volume of data is only the part above sea level of the iceberg called BigData.

Do you think that variety matters? I think yes, it matters!

If you think so as well, and you have the opportunity to visit Italy, I would recommend (personal advice) enjoying meals in restaurants displaying logos such as the following, and relying on word of mouth, an evergreen.

Recensioni

They do not implement Variety as TripAdvisor does either, but the reviews are made by professionals and they do not carry social media and WEB 2.0 visibility risks. Of course, I would recommend finding other sources (use variety!).

Have a nice journey and enjoy the meal!

Feelink – Feel & Think approach for doing life!

Chaos vs. Determinism: why not both? From evolutionary theory to BIG Data challenge


How was the Universe created? Was it generated by chance, or was it created with a specific purpose?

Chaos vs. Determinism is one of the toughest issues for philosophers to address, and it has been debated since the age of ancient Greece.

Is the world nowadays governed by chaos or by determinism? Hard to say, but what I notice is that sometimes chaos and determinism together can create an outstanding synergy. When? Here are at least four examples: evolutionary theory, the New Product Design process, lateral & vertical thinking, and the challenge of Big Data with social media.

1.  Evolutionary theory

Darwin's evolutionary theory is undoubtedly the most meaningful example of how chaos and determinism can work very well together… otherwise we couldn't be here to discuss how this world works!

Since even Mother Nature cannot foresee what will happen in the future, how is it possible to survive? By continuously generating chaotic genetic mutations in the DNA and thus creating a large variety of species: simple and brilliant! The generation of new alternatives, through DNA mutations, happens even when the environment is not changing, because such a variety of species will more likely guarantee life on our planet Earth if a big change occurs.

Just think of what happened 65 million years ago with the extinction of the dinosaurs. The impact of a big asteroid radically changed the climate, and the T-Rex, together with his big friends, wasn't able to adapt to the new environmental conditions. What happened is that a new, more adaptable family of species escaped extinction: the mammals.

Mother Earth is not efficient, as human beings tend and like to be. She is effective; she likes redundancy and variety in order to let life carry on. How many times have financial advisors said: "Diversification! That is the way to mitigate the risk of market volatility and uncertainty"? Either they consciously know evolutionary theory, or they are survivors of natural selection.

2.  New Product Design

Another example is taken from the business world. Words like innovation, internationalization, diversification and mass-customization have not only inspired the famous "business lingo bingo" game for staying awake during work meetings; they also share the same objective: continuously creating new products. A company that doesn't invest in the development of new products, in order to meet customer needs that change through time or to reach/establish new markets, is doomed to die.

Anyway, a new product is the result of a process: New Product Design (NPD). Well, such a process is divided into many different stages. Briefly, at the beginning there is a brainstorming phase in which all the new ideas are collected in terms of needs, without considering whether a new idea makes sense or is feasible. For example, thinking about a new umbrella: "I want to use an umbrella like a parachute!" Why not? …OK, probably using an umbrella as a parachute is not practicable. So, how to organize and select all the ideas that came out of a chaotic brainstorming? A solution is the so-called KJ method, invented by Kawakita Jiro. It's a process that organizes, prioritizes and selects the needs that really matter in a structured way. Probably a parachute umbrella is needed too, who knows!

Once the needs have been classified, the NPD process systematically analyzes all the needs against the features required by the new product, through QFD (Quality Function Deployment) and the Pugh matrix (a sketch follows below). As a result, there are one or a couple of new solutions that are feasible and fit all the significant needs. Just in case, if it doesn't cost too much effort, additional needs like the "parachute umbrella" might also be added to the new product in order to be "different" in the market.
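For readers unfamiliar with the tool, here is a toy Pugh-matrix sketch in Python; the needs, concepts and scores are invented. Each concept is rated per need against a baseline concept (the datum) as better (+1), equal (0) or worse (-1), and the totals drive the selection.

```python
needs = ["keeps rain off", "portable", "works as parachute"]

scores = {                        # relative to the classic umbrella (the datum)
    "folding umbrella":   [0, +1, -1],
    "parachute umbrella": [-1, -1, +1],
}

for concept, s in scores.items():
    print(f"{concept:20s} total: {sum(s):+d}")

# The folding umbrella wins; the parachute idea survives the brainstorming
# (lateral thinking) but is filtered out by the structured selection (vertical).
```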

Now, considering the brainstorming as genetic mutations and the KJ/Pugh matrix as natural selection, isn't the NPD process like evolutionary theory applied to products?

3. Lateral vs. Vertical thinking

Considering again the example of the parachute umbrella: it came out during the brainstorming phase without any thought about whether it would be feasible, while during the NPD process it would most likely be eliminated due to many technical as well as reasonable limitations: is there someone who really needs a parachute umbrella?

This is the first distinction that Edward de Bono, the inventor of lateral thinking, suggests between lateral and vertical thinking. Respectively, one is productive while the other is selective. Not only that: Edward de Bono defines many other adjectives that characterize vertical and lateral thinking, as follows:

Lateral thinking: productive, stimulating, discontinuous, incoherent, does not use negations, open to intuitions, unspecific, less probable, an open/probabilistic process.

Vertical thinking: selective, analytical, continuous, coherent, uses negations, relevance-focused, specific, more probable, a closed/deterministic process.

According to the adjectives mentioned above, the aim of lateral thinking is to find new solutions/ideas in an incoherent and chaotic way, in order to see things from a different perspective. On the contrary, vertical thinking selects the intuitions in a structured way in order to develop a new coherent model. That's what happened to the father of quantum theory, Max Planck.

At the beginning, when he got the intuition to assume that the energy of particles can change only in discrete amounts, related to the so-called Planck constant, Max Planck was extremely skeptical, because such an assumption was not coherent with classical physics. Then many other brilliant minds, such as Bohr, Heisenberg, de Broglie, Einstein, Schrödinger, Pauli and others, demonstrated that Planck's assumption works for physical phenomena at microscopic scales. A new physical model was born thanks to a winning combination of lateral and vertical thinking: quantum mechanics.

More: see Lateral Thinking by Edward de Bono.

4. Big Data Challenge

The social media phenomenon is undoubtedly having significant impacts on the way people communicate and interact, as well as on the way businesses operate. Some decades ago the main trouble was how to gather the needed information, while nowadays it's the opposite: which information is really relevant? Big Data is going to address this issue, in order to organize, classify and select the relevant information that is generated almost randomly by billions of sources, me included, around the world. Why is the information generated randomly? Well, the Big Data issue is being addressed from the technical point of view, and many tangible results have been achieved. Think about mass-customized advertisements and NPD (New Product Design, see above).

However, Big Data is not only a question of technology. The human factor is also involved, since information technology and social media might amplify irrational group behavior by creating the so-called social object effect. Retweets call for likes, likes call for posts, and posts call for retweets again, in a spiral loop. In fact, as Tom Dickson showed: "It blends!"

Anyway, why might a social object stimulate irrational behavior? Prof. Vincent F. Hendricks from the University of Copenhagen underlines the fact that online discussions take place in a kind of echo chamber: "In group polarization, which is well-documented by social psychologists, an entire group may shift to a more radical viewpoint after a discussion even though the individual group members did not subscribe to this view prior to the discussion" (see Information technology amplifies irrational group behavior). That is because human behaviour is highly influenced by the group.

The influence of the group is one aspect. Then, when I discovered that a social object on Twitter or Facebook reaches its peak of influence only two hours after publication and then rapidly declines, I realized that the time factor might also force irrational behaviour. If you want to follow the peaks you must react quickly, and when a quick reaction is required the human brain relies on the amygdala, asking: fight or flight?

The amygdala is switched on whenever a dangerous or stressful situation occurs. Since it activates quick reactions, the amygdala saved humans (and other species, like rabbits!) from extinction thousands and thousands of years ago, when human beings were struggling against predators every day. Fortunately, a social object doesn't hurt like a saber-toothed tiger, so there is no risk of dying physically, possibly only digitally.

Anyway, in order to fully exploit the chaos generated by social media, dealing with the amygdala might be useful in order to navigate rather than drift in the digital sea. So feel, think, and then, just in case, post, tweet and like.

Chaos and Determinism: inseparable twin brothers of knowledge!

Feelink – Feel & Think approach for doing life!