Wednesday, November 01, 2006

Webinar on Building Brand Loyalty

If you are interested in strengthening your brand and using data to develop brand loyalty, you might want to join this free webinar that Debbie DeGabrielle and I are doing. You can sign up on the Intelligent Results website.

The general topics and takeaways in this webinar will include:

  • Gathering customer insight from all your data sources, structured data as well as text
  • Tailoring marketing programs, service packages and products to respond to newfound customer insight
  • Building predictive models and strategies to proactively engage customers
  • Simulating and forecasting expected costs and benefits of potential offers and treatments before you engage your customers
  • "Getting ahead" of your customers, engaging them based on triggers in their recent behavior and events

Tuesday, October 31, 2006

Webinar: Reduce Attrition and Build Customer Loyalty with Analytics

Here's a link to stream the webinar we did today! I'm also happy to send you the slides that we used or the Federal Reserve Board Study that Debbie and I reference.

Intelligent Results makes the Wikipedia

I guess the new Internet definition of being "for real" is getting a spot on Wikipedia.


Visit the Intelligent Results reference


Monday, October 02, 2006

Think customers: The 1to1 Blog

Think customers: The 1to1 Blog: "WalMart Overextends
by John Gaffney"

Interesting post from 1to1. The ideas expressed in the comments below John's post also illustrate the struggle today's large mega brands face as they grow. Credit card issuers becoming banks, becoming stock brokers and retirement planners, becoming insurance agents and latte vendors. Of course large companies must grow, but the question is: can their brands? Or should they look to create new brands?

The art and science of brand marketing may truly be knowing when a brand is stretched beyond its ability to deliver on its promises, and balancing that against creating so many brands that the consumer can't develop a relationship with any of them. Here are a couple of examples. I don't want salads or burritos from McDonald's, but I'm happy to go to a McDonald's-owned restaurant for either if it inherits some of the value and convenience attributes of its parent. On the other side, as a loyal Marriott customer I'm completely confused about where I'm now supposed to stay when I go down-market. There seem to be three or four Marriott-owned "Courtyard"-type hotel chains now and I can't tell the difference. The effect is I don't frequent any of them.

The net-net is that brands can't stretch forever and that marketers need to listen to the "voice of the customer" to understand where issues are arising and when expectations are out of line. This same voice can then help to segment customers, identify business opportunities and illustrate the brand attributes necessary for new emerging offerings.

Tuesday, September 12, 2006

What Makes Predictive Analytics Work

What makes predictive analytics work? Good data, great algorithms, smart statisticians? Yeah sure, that stuff helps, but none of it makes predictive analytics work. In my experience the most critical thing in making analytics work is an operational business leader with the experience and vision to see them integrated into their business. I know this sounds like the old adage about CRM solutions not being about the software but about the people and processes. In fact it is, and maybe doubly so for predictive analytics and decision management, because the whole goal of an analytics solution is to tell operations (marketing, sales, customer service, collections, etc...) the right decision and get them to do the right thing.


Last week I got a call from a client. This experienced operational manager, Scott C., understands what makes analytics work and how to apply them to make his operations work better. Instead of waiting for his analytics group to suggest uses for predictive analytics, Scott keeps a constant eye out for decisions that he and his team make that could be improved (made more profitable, faster and more consistent) through predictive scores and decision rules.


Scott's call last week was about a situation with his current collections and recovery group. Like the settlement offer pricing and outsourcing decisions we've helped him with in the past, Scott is looking for an application that will tell his managers which accounts are right for a specialized treatment they've developed. The catch, of course, is that this treatment, while effective, is expensive. Our goal is to develop predictive models and strategies that optimize this new treatment's use given the bank's goals and constraints.


In this way Scott is leveraging analytics not to replace operations but to supercharge it. Scott knows that only his operational team could have come up with the new treatment, but that only analytics can prescribe its use for optimal impact. When analytics are embraced by operations in this way, new solutions are quickly developed for decisions that the analytics side of the business may never have known existed. What's more, because these new solutions are directly aligned with the operation's goals (bonuses), they are often better understood and more quickly adopted.


Tuesday, September 05, 2006

Intelligent Enterprise Magazine: Performance Notification Is Not Performance Management

Intelligent Enterprise Magazine: Performance Notification Is Not Performance Management

This is an interesting topic for those interested in Triggers. It begins to talk about the complexity required to make triggers, or event-based operational intelligence useful for solving real business problems. I posted a bit more on this in Customer Analytics and Decision Management: Triggers in a data driven world.

Friday, September 01, 2006

Scores only work if you use them

If you've been in the analytics business for a while you've had to deal with customers asking about the business value they are going to get from a new score or from scoring in general. The quick answer should be a question: "How are you going to use the score?" Only once the strategy is understood can we value the score. In other words, don't just look to scores and models to add value to the business; instead look to an entire strategy.



The case study below is based on work that Don Davey, Director of Collections and Recovery Solutions at Intelligent Results, has done and illustrates what I'm talking about. For the first half of the example Don uses a very simple strategy where the client increases 30 day collections by about 8% by differentiating actions across segmented populations instead of treating every account the same. The blue dots below provide more details about this business case.







  1. Historically, accounts like these have had a 30 day collections value of $1,145,008 and a 30 day liquidation rate of 0.9% when all worked together.


  2. The client splits the accounts into two nearly even groups based on the IR 30 day payment score. This score rank orders accounts based on their likelihood to make a payment. Therefore, low value accounts get low scores and high value accounts get higher scores.


  3. This separation between the segments is apparent when you look at the calculated 30 day collections ($188K vs. $1M), liquidation rates (0.2% vs. 1.7%) and unit yields ($1.63 vs. $14.64) for each segment.


  4. The strategy was to work the higher value group 50% more and thereby increase production from that group. It turns out that adding 50% effort to that segment yields an increase of about 10%, moving 30 day collections from the high value segment from $1.026M to $1.128M.


  5. The opposite treatment was performed on the low value segment and the result is a 10% reduction in their output, lowering collections from $119K to $107K.



Ultimately the increase in production from the high value segment greatly outweighs the decrease from the low value segment and in total the new strategy yields an overall increase of 8%.
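
If you want to check the arithmetic, here's a minimal Python sketch that reproduces the uplift calculation from the numbers above. The segment collections figures and the plus/minus 10% production impacts come straight from the case; everything else is just arithmetic.

```python
# Sketch of the uplift math from the case study above.
# Baseline 30 day collections per segment (figures from the example).
baseline = {"high_value": 1_026_000, "low_value": 119_000}

# Observed production impact of shifting effort:
# +50% effort on the high value segment -> roughly +10% collections,
# reduced effort on the low value segment -> roughly -10%.
impact = {"high_value": 0.10, "low_value": -0.10}

new_collections = {seg: amt * (1 + impact[seg]) for seg, amt in baseline.items()}

old_total = sum(baseline.values())         # ~$1,145,000
new_total = sum(new_collections.values())  # ~$1,235,000
lift = (new_total - old_total) / old_total

print(f"Old total: ${old_total:,.0f}")
print(f"New total: ${new_total:,.0f}")
print(f"Overall lift: {lift:.1%}")         # roughly 8%
```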



Hopefully this simple example illustrates how it's not just the score that creates business value, but the score combined with a solid strategy of differentiating actions. In reality you would seldom split your portfolio down the middle, because a bit more analysis can tell you where to draw the line and which accounts warrant what levels of calling effort. In the next posting we'll get into how to really juice your calling strategy by optimizing effort levels for multiple segments based on call and agent costs, capacity constraints, and the expected values of different effort levels for each account. Lastly, before you jump in and start slicing up your operations, please make certain that you've got the right infrastructure to plan, simulate, test, execute, track and refine data-driven strategies.




Thursday, August 31, 2006

Great Case Study on KeyBank

Here is a terrific article (KeyBank Case Study) on how KeyBank (KEY) is increasing profitability while improving customer satisfaction through the use of predictive models, behavioral event scoring and targeted strategies. Chip Clarke, senior vice president of strategic analytics, talks about both developing targeted segments based on structured and unstructured data and driving strategies and specific treatments for each segment.


To read the whole article see: KeyBank Case Study at 1to1 Magazine


Wednesday, August 30, 2006

Almost caught up to where we were in 2001

It's funny that for the most part Internet retailers and the rewards card issuers are just now catching up to where the vendors of Internet personalization, targeting, data-mining, etc... wanted them to be in 2001. For example, a friend of mine used his rewards card to purchase a router at a local Staples store. A couple days later he received this personalized email.



Now I'm not saying that others haven't been doing this; in fact, I know of several companies with fairly advanced campaign targeting systems in place. What I like is that this email is essentially trigger-based and personalized, and it comes from an everyday retailer.

Text Mining for Predictive Modeling

Last week a colleague of mine involved in a product evaluation of text mining tools for predictive modeling began asking questions about the "proper" criteria for selection and about the features necessary for a successful product. As the conversation turned into a request for a write-up, I remembered an article that I wrote a while ago for DM Review entitled The Next Wave in Customer Analytics, where I talked about what it takes to use text effectively for production-quality predictive models.


Thursday, August 24, 2006

A/B Testing with PREDIGY

A great platform to plan, perform, manage, track and report on A/B tests is critically important to our customers as they compete to make better, more profitable data-driven decisions. Often referred to as champion-challenger tests in the banking and risk management worlds, the steps, systems and best practices required to make this work can be daunting. At Intelligent Results we've done our best to unite them in a single platform, PREDIGY. I've listed several of the key features PREDIGY provides that you really should know about. This is certainly not an exhaustive list, but these little details make being data driven much, much easier.







  1. IR_ControlNumber - PREDIGY creates and maintains a variable called "IR_ControlNumber" for every dataset loaded into it. This random variable is created at both design-time and run-time, allowing it to be used as the basis for strategy splits. IR_ControlNumber is a single numeric value between 0 and 999 randomly assigned to every account. While each account's number is completely random, it is consistently generated for the same record, meaning that the same record will always get the same IR_ControlNumber. That makes the variable ideal for champion-challenger tests over a single campaign or for extended duration tests. (There's a small sketch of the idea after this list.)



  2. Segment Name and Strategy Code - Each end node or leaf of the strategy tree has both a Segment Name and a Strategy Code.

    a. The Segment Name is a unique identifier for that group of accounts. PREDIGY enforces that these Segment Names are unique which means that at scoring time each account will fall into one and only one segment and that the Segment Name for each account will be logged to the IR Report database. Examples below include "Low Value Control Segment" and "High Value Dialing Segment".

    b. The Strategy Code is a non-unique code or score that the strategy will assign to each record at scoring time. You can see below that two of the segments both use "BAU" as the Strategy Code. This means that downstream systems that take action off of this "BAU" code will not know whether the account came from the high or low value segment. Both the Segment Name and Strategy Code for a given account are always available for reporting out of IR Report. The sketch after this list shows how the control number, Segment Names and Strategy Codes fit together.
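
To make these two features a little more concrete, here's a hedged Python sketch of how a stable 0-999 control number and the Segment Name / Strategy Code pair can work together in a champion-challenger split. The hashing approach, the field names (account_id, payment_score), the 600 cut point and the "DIAL50"/"LETTER" codes are my own illustrative assumptions, not PREDIGY's implementation; the "BAU" code and the control and dialing segment names mirror the ones mentioned above.

```python
import hashlib


def control_number(account_id: str) -> int:
    """Illustrative stand-in for IR_ControlNumber: a number in 0-999.

    The same account always gets the same number, which is the property
    that keeps champion-challenger splits stable across scoring runs.
    """
    digest = hashlib.md5(account_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % 1000


def assign_segment(account: dict) -> tuple[str, str]:
    """Return (Segment Name, Strategy Code) for one account.

    Segment Names are unique per leaf; Strategy Codes may be shared,
    e.g. both control segments hand downstream systems the same "BAU" code.
    """
    ctrl = control_number(account["account_id"])
    high_value = account["payment_score"] >= 600  # hypothetical cut point

    if high_value:
        if ctrl < 100:  # 10% hold-out
            return ("High Value Control Segment", "BAU")
        return ("High Value Dialing Segment", "DIAL50")  # hypothetical code
    if ctrl < 100:
        return ("Low Value Control Segment", "BAU")
    return ("Low Value Letter Segment", "LETTER")  # hypothetical segment and code


print(assign_segment({"account_id": "ACCT-001", "payment_score": 712}))
```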








  3. IR Report – IR Report is designed to capture and report on operational PREDIGY scoring and decisioning applications. Basically, whenever accounts are scored on the IR Production Engine, a series of files is written that includes all of the input and output values for every record processed. This data is automatically loaded into the IR Report database. A loading program called IRReportDBLoader.exe manages this process, including schema generation, loading, and marking files that have already been loaded. Reports can then be run against this IR Report database using PREDIGY’s embedded Crystal Reports or any other query and reporting tools. In general, three types of reports can be created from IR Report: IT reports focused on the things IT cares about, like processing speed, errors, etc...; statistical reports monitoring input and output variable and score distributions to guard against drift and untimely model aging; and finally, strategy reports allowing for A/B testing and measurement of the effectiveness of strategies. For these strategy reports to be complete it is necessary to append outcome data to the database once that data becomes available. What will already be logged into IR Report is which Segment Name (unique leaf node) and Strategy Code (action that operations should have taken, potentially common to multiple segments) each account fell into on every scoring run. It’s also important to remember that any other account-level information provided or generated at scoring time can also be logged into the IR Report database, such as the account’s score on one or more models or strategies (regardless of whether or not they are used in the current strategy), balance, state, etc... A sketch of this logging-and-reporting idea follows this list.



  4. Formulas and Actions – The formulas and actions created at design time give users a simple yet powerful way to estimate and simulate how accounts will flow through their run-time application and what that will mean to their business. In the example below we have calculated several simple measures based on the historic performance of the accounts already loaded into PREDIGY. These include the number and percentage of accounts, the percentage of good and bad accounts, and the sum of payments from those accounts in a 30 day period. These formulas, and any others the user wants, are easily applied to each node in the decision tree and are updated instantly as the user changes their business rules, models, split points, etc... The Actions and their included formulas combine calculated measures from the existing records with additional information supplied by the business user, allowing you to create estimates, simulations and business cases directly in line with your decision tree. For example, when we calculate the cost of the "ACTION: Demand Letter" (actions are circled in purple), the formula includes a user-defined constant for each letter that will have to be sent for each account. Creating Formulas and Actions is very simple and both can be used throughout the decision tree for data analysis and predicting future business results. Formulas and Actions are purely informational and have no impact on the operational scoring process regardless of which actions or formulas you put on various end-nodes or leaves. To affect a downstream system's process you should use Strategy Codes as mentioned above. A small sketch of this kind of node-level estimate also follows this list.
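
Here's the sketch promised in item 3. It's not the actual IR Report schema (the table layout and column names here are invented for illustration), but it shows the basic idea: join the scoring log, which already carries the Segment Name and Strategy Code for every account on every run, to outcome data appended later, then aggregate by segment so champion and challenger can be compared.

```python
import pandas as pd

# Hypothetical extract of the scoring log: one row per account per scoring run.
scoring_log = pd.DataFrame({
    "account_id":    ["A1", "A2", "A3", "A4"],
    "segment_name":  ["High Value Dialing Segment", "High Value Control Segment",
                      "Low Value Letter Segment", "Low Value Control Segment"],
    "strategy_code": ["DIAL50", "BAU", "LETTER", "BAU"],
})

# Outcome data appended once 30 day performance is known.
outcomes = pd.DataFrame({
    "account_id":    ["A1", "A2", "A3", "A4"],
    "collected_30d": [450.0, 300.0, 20.0, 35.0],
})

# Strategy report: collections by segment, so test vs. control can be compared.
report = (scoring_log.merge(outcomes, on="account_id")
          .groupby(["segment_name", "strategy_code"])["collected_30d"]
          .agg(["count", "mean", "sum"]))
print(report)
```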
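
And here's the node-level estimate sketch promised in item 4: calculated measures from the records that land in a node, combined with a user-supplied constant (cost per demand letter) to produce a simple business case. The data and field names are illustrative, not PREDIGY output.

```python
import pandas as pd

# Accounts that fall into one leaf of the decision tree (illustrative data).
node = pd.DataFrame({
    "is_bad":         [0, 1, 0, 0, 1],
    "payments_30d":   [120.0, 0.0, 80.0, 45.0, 10.0],
    "letters_needed": [1, 2, 1, 1, 2],
})

# Calculated measures, analogous to formulas applied to a tree node.
n_accounts = len(node)
pct_bad = node["is_bad"].mean()
payments_30d = node["payments_30d"].sum()

# "ACTION: Demand Letter" cost, using a user-defined constant per letter.
COST_PER_LETTER = 0.85  # assumed constant supplied by the business user
action_cost = (node["letters_needed"] * COST_PER_LETTER).sum()

print(f"Accounts: {n_accounts}, % bad: {pct_bad:.0%}")
print(f"30 day payments: ${payments_30d:,.2f}, letter cost: ${action_cost:,.2f}")
print(f"Estimated net: ${payments_30d - action_cost:,.2f}")
```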





I'm sure there are things I've left out of this write-up, but it should give you a starting point for investigating how PREDIGY can manage a larger part of the Modeling, Decisioning, Scoring, Tracking, Reporting and Improving life-cycle. As you read this please feel free to comment, and because I've skimmed over several sections like deploying and configuring your application (IRX file), the IR Production Engine and its web services and batch options, IR Report, etc..., please don't hesitate to send me an email.

Triggers in a data driven world



To begin talking about and understanding triggers in a data-driven world, one needs to think of an entire strategy, including models, data variables, business rules and segment-based outcomes, as a trigger. In fact, what makes it a trigger is more how it is used than the type of data it processes. A trigger can, and often should, contain all the complexity (models, rules, data transformations, etc...) of a full-blown campaign. The key is that triggers are executed frequently: whenever data changes and/or there is an opportunity to take the right data-driven action. For a company like Intelligent Results, whose production engine can easily be called in real-time or batch, this isn't a big deal; however, for many organizations that use scores/models and decision trees today this is difficult, because they have to recode those models and decision rules into other operational systems to make them work.

Once you think of the entire decision tree with all its scores, variables and actions as a trigger, you can imagine that any type or types of data and analysis could go into the trigger. For example, a trigger might be statistically based, like a score with cut points, or might be based on the occurrence of a word or concept in a call center conversation. What's more, a trigger can involve multiple elements, like a cross-sell offer trigger which initiates an offer if the transaction value of the current shopping cart is >25% over the 6 month average, or the predictive model score, p(acceptance), for the optimal product is above 35%, or the customer record contains the concept of "moving or relocation." In that example, text, temporal roll-ups, transformations (feature math) and predictive models would all have to interact to fire a "trigger" and initiate the Action. (There's a small sketch of this kind of multi-element trigger right after the list below.)

Triggers are an awesome concept for Intelligent Results and we should never miss an opportunity to bring them into the conversation. They are hot in the industry right now and they rely on several concepts that we've long supported:





  • Execute Often - and we really mean score, evaluate and decision often. Triggers are often used in real-time or fast batches and we believe this is important because you want to be able to react as soon as the information changes or as soon as the customer gives you an opening.



  • Use Scores and Decisions - Triggers are more than scores and data. Triggers may require scores and data which we provide, but they also require decisions, cut-points, thresholds and actions. Only PREDIGY gives customers the ability to do all that in one system.



  • Use All Your Data - Powerful triggers can be based on all types of data, used independently and in combination. From unstructured text and sequence data to data comparisons (transformations in feature math) and statistical models, PREDIGY allows users to combine all types of data into simple and complex business rules and triggers.



  • Rapid Production - Triggers are more valuable when they can be quickly designed, simulated, implemented and executed. PREDIGY makes all that fast and reliable. The PREDIGY application factory concept integrates and streamlines each of these steps. And of course with the IR Production Engine your Triggers can be easily integrated, deployed and distributed across multiple operational systems and client interaction channels.



  • Track Everything - the PREDIGY platform does that.
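
Here's the cross-sell sketch I mentioned above: a minimal Python version of a trigger that combines a temporal roll-up, a model score and a text concept. The function name, the keyword-based concept check and the note format are assumptions for illustration, not how the IR Production Engine actually does it.

```python
import re

# Crude keyword proxy for a text-mining concept like "moving or relocation".
RELOCATION_TERMS = re.compile(r"\b(moving|relocat(?:e|ing|ion))\b", re.IGNORECASE)


def cross_sell_trigger(cart_value: float,
                       avg_cart_6m: float,
                       p_accept_best_offer: float,
                       recent_notes: list[str]) -> bool:
    """Fire the cross-sell offer if any of the three conditions from the post hold."""
    # 1. Current cart is more than 25% above the customer's 6 month average.
    cart_spike = avg_cart_6m > 0 and cart_value > 1.25 * avg_cart_6m
    # 2. The model's p(acceptance) for the optimal product is above 35%.
    likely_acceptor = p_accept_best_offer > 0.35
    # 3. Recent customer text mentions moving or relocation.
    relocating = any(RELOCATION_TERMS.search(note) for note in recent_notes)
    return cart_spike or likely_acceptor or relocating


# Cart is only slightly above average and the score is low, but the notes mention a move.
print(cross_sell_trigger(80.0, 70.0, 0.20, ["Customer asked about relocating to Denver"]))
```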




Below I've added 2 examples of Triggers that a banking call center customer might use.



The first is an example of a skills-based routing trigger used by a credit card company to route inbound emails about AOL billing issues to specialized agents. Without going into too much detail on why this is necessary (that's a whole other posting on discovering attrition drivers), you can see that if the email is about AOL charges then the customer is routed to a special queue. This is a very simple text-based trigger that could be executed by the IR Production Engine against emails, call center calls, web forums, etc... The IR Production Engine makes it possible for a bank to run this strategy in real-time or in batch.
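
A minimal sketch of that first trigger might look like the following. The queue names and the keyword pattern are placeholders I'm assuming for illustration; a real deployment would use richer text features than keyword matching.

```python
import re

# Fires when an email mentions AOL together with billing/charge language.
AOL_BILLING = re.compile(
    r"(?=.*\baol\b)(?=.*\b(bill|billing|charge|charged|charges)\b)",
    re.IGNORECASE | re.DOTALL,
)


def route_email(email_body: str) -> str:
    """Skills-based routing: send AOL billing emails to a specialized queue."""
    if AOL_BILLING.search(email_body):
        return "AOL_BILLING_SPECIALIST_QUEUE"  # placeholder queue name
    return "GENERAL_SERVICE_QUEUE"             # placeholder queue name


print(route_email("Why was I charged twice for AOL this month?"))  # specialist queue
print(route_email("Please update my mailing address."))            # general queue
```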



The second example is a bit more complex. It's designed to allow call center agents to consistently, profitably and quickly respond to customers' rate adjustment requests, or to proactively offer rate adjustments to particularly valuable segments. The first decision element looks at a customer's current interest rate and compares it to the rate they could get from industry competitors. If the delta is within 2 points, no action is taken. If the delta is 2 points or greater, then the value of the customer is assessed. If they are not a high spender then the account must be sent for additional review (in fact the entire rate management strategy behind this additional review could also be run in real-time if the bank wanted to, but that's a different posting). If the customer is a high spender then the call center agent is empowered to make an immediate rate adjustment. This simple, real-time, data-driven strategy improves customer satisfaction, regulatory compliance and profitability.
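
The second example reduces to a small decision tree. Here's a hedged sketch of the logic described above; the competitor-rate comparison and the high-spender test are stubbed out with assumed inputs and thresholds, not real bank policy.

```python
def rate_adjustment_decision(current_rate: float,
                             best_competitor_rate: float,
                             annual_spend: float,
                             high_spend_threshold: float = 20_000.0) -> str:
    """Return the treatment for a rate adjustment request.

    Logic from the post: if the customer's rate is within 2 points of the
    competition, take no action; otherwise high spenders get an immediate
    agent-empowered adjustment and everyone else goes to additional review.
    The spend threshold is an assumed placeholder.
    """
    delta = current_rate - best_competitor_rate
    if delta < 2.0:
        return "NO_ACTION"
    if annual_spend >= high_spend_threshold:
        return "IMMEDIATE_RATE_ADJUSTMENT"
    return "ADDITIONAL_REVIEW"


print(rate_adjustment_decision(18.9, 14.9, 32_000))  # IMMEDIATE_RATE_ADJUSTMENT
print(rate_adjustment_decision(15.9, 14.9, 32_000))  # NO_ACTION
print(rate_adjustment_decision(18.9, 14.9, 5_000))   # ADDITIONAL_REVIEW
```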

Wednesday, August 23, 2006