EMC Journal



Effective Analytics | @BigDataExpo #BigData #IoT #M2M #DigitalTransformation

Measure the effectiveness of whatever decisions are ultimately made in order to continuously refine the analytic models

I was reading an interview with John Krafcik, CEO of Google’s Self-driving Car Project, in the August 8th issue of Bloomberg BusinessWeek. The article referenced a survey by AlixPartners which found that 73% of people wanted autonomous vehicles. But when people had the option of a steering wheel in the car, giving the driver optional full control, the acceptance rate jumped to 90%. This finding, that people are much more accepting of automation and new ideas when they have the option of control, is totally consistent with what we found with respect to how to deliver big data analytics.

The big data engagements we run for EMC focus on applying predictive and prescriptive analytics to deliver recommendations that help key decision makers become more effective at their jobs. For example: recommendations to teachers on how best to group their students by subject area, to mechanics on which parts to replace when performing maintenance on a wind turbine, to physicians on which medications and treatments will likely deliver the best results given a patient’s overall wellness, to appraisers to help them more accurately determine the value of a property, or to underwriters to help them determine which loans to accept at a reasonable level of risk.

But how does one ensure that the business stakeholders, the humans in the process, are accepting of the analytics and recommendations that are being delivered to them? Being right doesn’t necessarily make you persuasive.

Effective Recommendations Put Humans in Control
We learned through several engagements that, when delivering recommendations to business stakeholders or directly to customers involved in a process or decision, we had to provide three options to the humans in order to ensure their buy-in to the analytics. The three options we presented to the business stakeholders were:

  • They could accept the recommendation and we would measure how effective the outcome was versus the model, or
  • They could reject the recommendation and we would measure how effective the outcome was versus the model, or
  • They could change the recommendation and we would measure how effective the outcome was versus the model.

Note: in some situations, we also offered a [MORE] option that provided additional details (usually presented as interactive charts or tables) in support of the recommendation. After a while, however, we found that users seldom selected it.
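The accept/reject/change workflow above can be sketched in code. This is a minimal illustration, not EMC's actual implementation; the class and field names are hypothetical, and the outcome is simplified to a retained/not-retained flag:

```python
from dataclasses import dataclass, field
from enum import Enum

class Decision(Enum):
    ACCEPT = "accept"   # stakeholder made the recommended offer
    REJECT = "reject"   # stakeholder made no offer
    CHANGE = "change"   # stakeholder substituted their own offer

@dataclass
class RecommendationLog:
    """Records each recommendation, what the human decided, and the measured outcome."""
    entries: list = field(default_factory=list)

    def record(self, recommended_offer, decision, final_offer, outcome):
        # outcome: 1.0 if the customer was retained, 0.0 otherwise
        self.entries.append({
            "recommended": recommended_offer,
            "decision": decision,
            "final": final_offer,
            "outcome": outcome,
        })

    def effectiveness(self, decision):
        """Average measured outcome for one decision type (accept/reject/change)."""
        matched = [e["outcome"] for e in self.entries if e["decision"] is decision]
        return sum(matched) / len(matched) if matched else None

log = RecommendationLog()
log.record("10% off cable for 3 months", Decision.ACCEPT,
           "10% off cable for 3 months", outcome=1.0)
log.record("10% off cable for 3 months", Decision.CHANGE,
           "50% off internet for 6 months", outcome=1.0)
```

The key design point is that every path, including rejection, is logged against the original recommendation, so the model's predictions can later be scored against what actually happened.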

For example, the organization may be executing on a customer retention business initiative. The organization could be applying big data analytics to deliver retention offers (new services, lower prices, more features, etc.) to their “high value, at risk” customers based upon the likelihood of the customer’s attrition (Customer Attrition Score) and the customer’s potential lifetime value (Maximum Customer LTV Score). So when Jane Smith calls the call center about a billing issue, the model would look up Jane’s “Customer Attrition Score” and “Maximum Customer LTV Score” to recommend a specific retention offer to the customer service representative.

Let’s say that the data indicates that Jane has a high likelihood of attrition (based upon a change in her usage behaviors and social media sentiment) and that she has a very high “Maximum Customer LTV Score” (based upon both the number of additional services that could be sold to Jane, plus her strong social media following).  The prescriptive model may recommend the following retention offer:

[Offer Jane 10% off of her current cable service over the next 3 months]
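The score-to-offer lookup described above might be sketched as follows. The thresholds, score scale (0–100), and offer text are purely illustrative assumptions, not the output of a real prescriptive model:

```python
def recommend_retention_offer(attrition_score, max_ltv_score):
    """Map a customer's scores (assumed 0-100 scale) to a retention offer.

    High attrition risk + high potential lifetime value warrants the
    richest offer; thresholds here are illustrative only.
    """
    if attrition_score >= 70 and max_ltv_score >= 70:
        return "10% off current cable service for the next 3 months"
    if attrition_score >= 70:
        return "Free premium channel for 1 month"
    return None  # low attrition risk: no proactive retention offer

# Jane: high likelihood of attrition, very high Maximum Customer LTV Score
offer = recommend_retention_offer(attrition_score=85, max_ltv_score=92)
```

In practice the recommendation would come from a trained prescriptive model rather than hand-coded thresholds, but the contract is the same: scores in, a specific offer (or no offer) out, presented to the rep as a recommendation rather than a mandate.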

The call center representative has the options to:

  • Accept the recommendation and make that offer to Jane, or
  • Reject the recommendation and make no offer to Jane, or
  • Change the recommendation based upon the conversation that the Customer Service Rep is having with Jane.

Let’s say that the Customer Service Rep decides that the best offer for Jane (based upon the conversation the Customer Service Rep is having with Jane) is to:

[Offer Jane 50% off new high-speed Internet service over next 6 months]

The customer service rep may have learned from the conversation that Jane’s biggest usage problem was streaming her favorite shows during her weekend binge watching. With this additional insight in hand, plus the knowledge from the scores about Jane’s likelihood to attrite and her potential lifetime value, the customer service rep decided to change the recommendation to something more relevant to the problems Jane was having.

Test, Measure and Learn for Continuous Model Evolution
In all cases, we want to measure the effectiveness of whatever decisions are ultimately made in order to continuously refine the analytic models and scores.  By constantly measuring the effectiveness of the recommendations AND allowing the humans in the process the freedom to test different ideas, the models can learn from the humans.
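One simple way this learning loop can surface insight is to compare the measured retention rate when reps followed the model against the rate when they overrode it. The records and numbers below are made up for illustration:

```python
# Each record: (followed_model, retained), where followed_model is True when
# the rep accepted the recommendation, and retained is the measured outcome.
decisions = [
    (True, True), (True, False), (True, True),   # rep followed the model
    (False, True), (False, True),                # rep overrode the model
]

def retention_rate(records):
    """Fraction of customers retained within a set of decision records."""
    return sum(1 for _, retained in records if retained) / len(records)

model_followed = [r for r in decisions if r[0]]
human_overrode = [r for r in decisions if not r[0]]

# If reps consistently outperform the model when they override it, their
# overrides point to context the model is missing -- a signal to retrain.
```

A persistent gap in either direction is actionable: if the model wins, the overrides become coaching material; if the humans win, their changed offers become new training data.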

More importantly, there will always be some humans who produce better results than the models due to their experience, training, and intuition (or, in some cases, dumb luck). Humans may also be able to react and adjust to new information (coming from the interactions they are having with customers) faster than the models can be updated and re-run. In the end, involving humans as a key part of the analytics process ensures that the models don’t go stale and are constantly improving.

As the BusinessWeek article highlighted, humans will usually be more receptive to new ideas and new technologies if they feel they are still in control. If you want your decision makers to accept the recommendations of your analytics, then you had better give the humans an opportunity to provide feedback to the models. This is a clear win-win-win for everyone: the data scientists who build the analytic models, the business stakeholders who interact with the analytic results, and the customers for whom we are trying to provide a differentiated experience.

The post Effective Analytics Put the Humans In Control appeared first on InFocus.

More Stories By William Schmarzo

Bill Schmarzo, author of “Big Data: Understanding How Data Powers Big Business” and “Big Data MBA: Driving Business Strategies with Data Science”, is responsible for setting strategy and defining the Big Data service offerings for Hitachi Vantara as CTO, IoT and Analytics.

Previously, as a CTO within Dell EMC’s 2,000+ person consulting organization, he worked with organizations to identify where and how to start their big data journeys. He’s written white papers, is an avid blogger and is a frequent speaker on the use of Big Data and data science to power an organization’s key business initiatives. He is a University of San Francisco School of Management (SOM) Executive Fellow where he teaches the “Big Data MBA” course. Bill also recently completed a research paper on “Determining The Economic Value of Data”. Onalytica recently ranked Bill as the #4 Big Data Influencer worldwide.

Bill has over three decades of experience in data warehousing, BI and analytics. Bill authored the Vision Workshop methodology that links an organization’s strategic business initiatives with their supporting data and analytic requirements. Bill serves on the City of San Jose’s Technology Innovation Board, and on the faculties of The Data Warehouse Institute and Strata.

Previously, Bill was vice president of Analytics at Yahoo where he was responsible for the development of Yahoo’s Advertiser and Website analytics products, including the delivery of “actionable insights” through a holistic user experience. Before that, Bill oversaw the Analytic Applications business unit at Business Objects, including the development, marketing and sales of their industry-defining analytic applications.

Bill holds a Master of Business Administration from the University of Iowa and a Bachelor of Science degree in Mathematics, Computer Science and Business Administration from Coe College.