On Hold for 45 Minutes? It Might Be Your Secret Customer Score (#GotBitcoin?)
Retailers, wireless carriers and others crunch data to determine what shoppers are worth for the long term—and how well to treat them.
Two people call customer service at the same time to complain about the same thing. One waits a few seconds before a representative gets on the line. The other stays on hold. Why the difference?
There’s a good chance it has something to do with a rating known as a customer lifetime value, or CLV. That secret number is used by all manner of companies to measure the potential financial value of their customers.
Your score can determine the prices you pay, the products and ads you see and the perks you receive.
Credit-card companies use the scoring systems to decide what to offer customers who want to cancel their cards.
Wireless carriers route high-value callers immediately to their most skilled agents. At some airlines, a high score increases the odds of a seat upgrade.
“There’s no free lunch,” says Sunil Gupta, a marketing professor at Harvard Business School who has researched models for calculating lifetime value. “The more profitable you are, the better service you will get.”
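The models vary, but the textbook version researchers like Mr. Gupta have published is a discounted sum: each future period's expected margin, shrunk by the probability the customer sticks around and by the time value of money. A standard formulation (no particular company's), with margin m, per-period retention rate r and discount rate d:

```latex
% CLV as a discounted sum of expected future margins:
%   m = margin per period, r = per-period retention probability,
%   d = discount rate
\mathrm{CLV} \;=\; \sum_{t=1}^{\infty} m\,\frac{r^{t}}{(1+d)^{t}} \;=\; m\cdot\frac{r}{1+d-r}
```

By that math, a customer worth $100 a year in margin, with 80% retention and a 10% discount rate, carries a lifetime value of about $267.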
These days, companies are resorting to all sorts of data and scores to size up consumers and predict their behavior. Retailers use risk scores to try to limit merchandise returns and prevent e-commerce fraud. There are scores to measure the likelihood a person will become sick, cancel a subscription or bad-mouth a company.
Everyone with a bank account, cellphone or online shopping habit has at least one CLV score, more likely several.
And most people have no inkling they even exist, let alone how they are used, what goes into them or how accurate they are.
Unlike credit scores, CLVs aren’t available to consumers and aren’t monitored by any government agency.
“There needs to be a public conversation around the accuracy of the scores being used,” says Pam Dixon, executive director of the World Privacy Forum, a nonprofit digital-privacy research group. “You can essentially be accused of being cheap or a fraudster, and it may not even be true.”
To determine how the scores are compiled and how they are used, The Wall Street Journal interviewed data scientists who develop the models and employees of the software and analytics firms that help companies put them to use.
Most CLV score users contacted for this article declined to comment on how they score customers, citing competitive reasons. Many say the scores make them more comfortable offering costly services and products in the short term because they are confident they will pick up more business in the long term. Some say they aim to increase each customer’s lifetime value by encouraging repeat business.
In some respects, the scores are just a high-tech version of what shopkeepers have done for generations—make judgments on a customer’s value based on how they look or behave. As far back as 20 years ago, academics were publishing models to calculate the future value of customers.
Now there are hundreds of analytics firms that calculate customer lifetime value, each with its own approach. Some of them put a value on shoppers based simply on what they spend, while others use hundreds of data inputs, adding and deducting points for demographic information such as ZIP Codes or behavioral details such as the number of returns they make or when they shop.
“Not all customers deserve a company’s best efforts,” says Peter Fader, a marketing professor at the University of Pennsylvania’s Wharton School who helped popularize lifetime value scores. His scoring method is based on transaction history, which he says is all companies need to determine how customers will behave in the future. This year, he sold the firm he co-founded, Zodiac Inc., which performs such analysis, to Nike Inc.
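For readers who want to see what a transaction-history-only model looks like in practice, the open-source Python library lifetimes implements the same family of probabilistic models Mr. Fader helped develop (BG/NBD plus Gamma-Gamma). A minimal sketch on the library's bundled sample data; this is not Zodiac's or Nike's actual system:

```python
# Transaction-history-only CLV using the open-source "lifetimes" library,
# which implements the BG/NBD and Gamma-Gamma models associated with
# Fader and Hardie.  pip install lifetimes
from lifetimes import BetaGeoFitter, GammaGammaFitter
from lifetimes.datasets import load_cdnow_summary_data_with_monetary_value

df = load_cdnow_summary_data_with_monetary_value()  # frequency, recency, T, monetary_value

# BG/NBD: how many more purchases will each customer make?
bgf = BetaGeoFitter(penalizer_coef=0.001)
bgf.fit(df["frequency"], df["recency"], df["T"])

# Gamma-Gamma: how much is each future purchase worth?
# (fit only on customers with at least one repeat purchase)
repeaters = df[df["frequency"] > 0]
ggf = GammaGammaFitter(penalizer_coef=0.001)
ggf.fit(repeaters["frequency"], repeaters["monetary_value"])

# Combine into a 12-month discounted lifetime value per customer.
clv = ggf.customer_lifetime_value(
    bgf, df["frequency"], df["recency"], df["T"], df["monetary_value"],
    time=12, discount_rate=0.01,  # 12 months, ~1% monthly discount rate
)
print(clv.sort_values(ascending=False).head())
```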
The data that goes into a score can come from transaction records, website interactions, customer-service conversations, social-media profiles and third-party brokers such as Acxiom LLC and Alliance Data Systems Corp.’s Epsilon, which sell information on such things as the number of bedrooms in a house and the type of credit card someone carries. Each piece of data is weighted based on past patterns and its perceived predictive power.
Marital status is often factored in, with some companies assuming that singles are better customers, and others, the opposite. Age also is a common input, potentially penalizing older people because of their shorter projected lifespans.
Some companies deduct points from shoppers who exhibit costly behaviors. Banks sometimes take into account the calls people make to customer-service agents or the number of times they visit branches. Online retailers track shoppers who buy things only when they are deeply discounted. People expected to cost more than they spend can have a negative score.
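None of the companies will publish their point systems, but the mechanics described above (weighted inputs, deductions for costly behavior, and the possibility of a negative total) can be sketched in a few lines. Every feature name and weight here is hypothetical:

```python
# Hypothetical point-based CLV score of the kind described above.
# All feature names and weights are illustrative, not any firm's model.
WEIGHTS = {
    "annual_spend_dollars":        0.01,   # points per dollar spent
    "years_as_customer":           5.0,
    "returns_last_year":          -4.0,    # deduction for costly behavior
    "service_calls_last_year":    -2.0,
    "buys_only_on_deep_discount": -15.0,
}

def clv_score(profile: dict) -> float:
    """Weighted sum of inputs; can go negative for customers
    expected to cost more than they spend."""
    return sum(WEIGHTS[k] * float(v) for k, v in profile.items() if k in WEIGHTS)

bargain_hunter = {
    "annual_spend_dollars": 300, "years_as_customer": 1,
    "returns_last_year": 8, "service_calls_last_year": 6,
    "buys_only_on_deep_discount": 1,
}
print(clv_score(bargain_hunter))  # -51.0: a negative-value customer
```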
Computer systems sometimes tag customers as high-value or low-value, and marketing staffers or service agents then adjust their interactions based on that status. Vendors such as Zeta Global and Salesforce Inc. can automatically offer discounts and other incentives based on the scores.
Phone Service
At wireless carriers such as Verizon Communications Inc. and Sprint Corp., lifetime value can determine marketing offers and other perks. At some carriers, high-value customers who are at risk of switching to another carrier are prioritized and get routed to a top-rated call center.
Verizon and Sprint declined to provide specifics about how they assess customer value. “The predominant way we route calls is based on the reason for the call,” says a Sprint spokeswoman. She says customer lifetime value is “one of many ways we guide customer interactions.”
Zeta Global, whose clients include wireless carriers, generates scores using data points such as the number of times a customer has dialed a call center and whether that person has browsed a competitor’s website or searched certain keywords in the past few days. The firm says it has a database of more than 700 million people, with an average of over 2,500 pieces of data per person.
When a person’s “churn” score, which predicts his or her chances of switching to another carrier, exceeds a certain threshold, Zeta’s system flags that customer to a customer-service agent. The higher the customer’s lifetime value, the more likely that Zeta will recommend responding to the customer more quickly and offering free phones and other perks, says David Steinberg, Zeta’s chief executive. “Most of this comes down to how you’re marketed to and how you’re treated,” he says.
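Mr. Steinberg's description boils down to a threshold rule plus a value-based tier. A toy sketch of that logic, with made-up thresholds and tiers rather than Zeta's actual rules:

```python
# Sketch of the routing logic described above: flag likely defectors,
# and let lifetime value decide how generous the save-offer is.
# The threshold and dollar tiers are hypothetical.
CHURN_THRESHOLD = 0.7

def handle_customer(churn_score: float, lifetime_value: float) -> str:
    if churn_score <= CHURN_THRESHOLD:
        return "no action"                    # not at risk; standard queue
    if lifetime_value >= 2000:
        return "priority queue + free phone"  # top tier gets the big perk
    if lifetime_value >= 500:
        return "priority queue + discount"
    return "standard queue"                   # at risk, but not worth a perk

print(handle_customer(churn_score=0.85, lifetime_value=2400))
```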
Apparel
Apparel retailers often compare a shopper’s lifetime value with the cost of marketing to that person before deciding whether to woo him or her and how much money to spend doing so.
“What CLV does is allow us to see beyond the day-to-day to ensure we’re focused on the quality of the new customers we’re acquiring, not just the quantity,” says Ed Boyle, senior director of performance marketing at Bonobos, an apparel retailer acquired by Walmart Inc.
In a research paper last year, ASOS, an online retailer, said it scores shoppers on over 100 data inputs, including a customer’s age and location. Since ASOS doesn’t recoup delivery costs for returned items, “customers can easily have negative lifetime value,” the paper said. The company declined to comment on the paper.
Brad Birnbaum, chief executive of customer-service platform Kustomer Inc., says some of his e-commerce clients use scores, including CLV, to respond to email inquiries. “If you’ve got an angry shopper with a high lifetime value, you might want to bump up the priority,” he says.
Shoppers with higher scores, however, won’t necessarily get the best deals all the time, says Jerry Jao, chief executive of Retention Science, which has worked for companies such as Target Corp. and Procter & Gamble Co. Retailers sometimes withhold discounts to high-value customers until they are at risk of losing them. “Why waste a 25% offer when the person is going to buy anyway?” Mr. Jao says.
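Taken together, the retail logic reads like a simple policy: spend marketing money only where lifetime value justifies it, and hold the deep discount back unless the customer is actually at risk of leaving. A hypothetical sketch, not any retailer's real rule:

```python
# Hypothetical discount-targeting rule combining the two ideas above:
# woo a shopper only when CLV exceeds the cost of marketing to them,
# and reserve the 25% offer for customers at risk of defecting.
def marketing_action(clv: float, acquisition_cost: float, churn_risk: float) -> str:
    if clv < acquisition_cost:
        return "skip"               # not worth wooing at all
    if churn_risk > 0.6:
        return "send 25% offer"     # at risk: spend to retain
    return "no discount"            # likely to buy anyway

print(marketing_action(clv=900.0, acquisition_cost=120.0, churn_risk=0.2))
# -> "no discount": why waste a 25% offer on someone who will buy anyway?
```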
Cars
At auto dealerships, a high score can mean access to loaner cars, preferential service slots and special events, says Scot Eisenfelder, chief executive of Affinitiv Inc., which uses lifetime value to create marketing campaigns for dealerships. The scoring helps dealerships weed out costly customers. “This is what you call grinders—people who visit 16 stores to get the absolute lowest price,” he explains.
Mr. Eisenfelder says his firm develops scores by crunching data on things such as previous car purchases, whether a household has a teenager, where else a person has shopped and ZIP Codes, which can be used as a proxy for income. Someone who has a Neiman Marcus credit card is going to be more valuable for a car dealership than someone with a credit card from a discount chain, he says.
Air Travel
At airlines, CLV scores incorporate frequent-flier information and other data. A high score can increase a person’s chances of getting seat upgrades or better service, says Laks Srinivasan, co-chief operating officer of Opera Solutions LLC, which works with airlines, retailers, banks and other companies.
The firm’s scores can draw from more than 5,000 data “signals” per customer, Mr. Srinivasan says, translating them into recommendations for flight attendants, gate agents and other personnel. The company tracks, for example, the number of times a person calls to complain over the prior 90 days, which can affect the CLV.
An airline can compare how often a shopper complains with his or her lifetime value and customer experience score, which measures inconveniences such as the number of times stuck in a middle seat, flight delays and lost bags.
“A high-value customer who had a real service disruption and never calls to complain should be compensated more quickly than someone who is complaining and costing time and money,” Mr. Srinivasan says.
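That triage, weighing value against complaint volume and logged inconveniences, can be expressed as a short decision rule. The thresholds below are invented for illustration:

```python
# Sketch of the airline triage Mr. Srinivasan describes: weigh lifetime
# value against recent complaints and a "customer experience" tally of
# inconveniences (middle seats, delays, lost bags). All numbers are
# hypothetical.
def compensation_priority(clv: float, complaints_90d: int, bad_experiences: int) -> str:
    if clv > 5000 and bad_experiences >= 2 and complaints_90d == 0:
        return "compensate proactively"  # high value, disrupted, never complained
    if complaints_90d > 3:
        return "deprioritize"            # complaining is costing time and money
    return "standard handling"

print(compensation_priority(clv=8000, complaints_90d=0, bad_experiences=3))
```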
Credit Cards
To calculate lifetime value, credit-card companies analyze spending behavior, payment history and credit scores, among other things.
“Banks know what you buy, and where and when you buy it,” says Arpan Dasgupta, head of financial services and telecom practices at Fractal Analytics, which helps companies analyze customer data. “It’s powerful data that can be useful for CLV.”
The score can determine which customers receive credit-card offers and other incentives. When customers call to cancel at a card company such as American Express Co., their relationship with the issuer and past spending behavior are some of the criteria used to determine what perks will be offered to retain them.
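The cancellation-call playbook described above amounts to tiering retention perks by tenure and spend. A hypothetical sketch of such a rule; it is not American Express's actual criteria:

```python
# Hypothetical cancellation-call logic: tenure and spending feed the
# decision about which retention perk, if any, the agent may offer.
def retention_offer(years_with_issuer: int, annual_spend: float) -> str:
    if annual_spend > 20_000 and years_with_issuer >= 5:
        return "waive annual fee + bonus points"
    if annual_spend > 5_000:
        return "bonus points"
    return "accept cancellation"  # low projected value: let them go

print(retention_offer(years_with_issuer=7, annual_spend=32_000.0))
```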
Meanwhile…
The Secret Trust Scores Companies Use to Judge Us All
In the world of online transactions, trust scores are the new credit scores—but good luck finding out yours.
When you’re logging in to a Starbucks account, booking an Airbnb or making a reservation on OpenTable, loads of information about you is crunched instantly into a single score, then evaluated along with other personal data to determine if you’re a malicious bot or potentially risky human.
Often, that’s done by a service called Sift, which is used by startups and established companies alike, including Instacart and LinkedIn, to help guard against credit-card and other forms of fraud. More than 16,000 signals inform the “Sift score,” a rating of 1 to 100, used to flag devices, credit cards and accounts owned by any entities—human or otherwise—that a company might want to block. This score is like a credit score, but for overall trustworthiness, says a company spokeswoman.
One key difference: there’s no way to find out your Sift score.
Companies that use services like this often mention it in their privacy policies (Airbnb’s, for example), but how many of us realize our account behaviors are being shared with companies we’ve never heard of, in the name of security? How much of the information one company shares with these fraud-detection services is used by other clients of that service? And why can’t we access any of this data ourselves, to update, correct or delete it?
According to Sift and competitors such as SecureAuth, which has a similar scoring system, this practice complies with regulations such as the European Union’s General Data Protection Regulation, which mandates that companies don’t store data that can be used to identify real human beings unless those people give permission.
Unfortunately, GDPR, which went into effect a year ago, has rules that are often vaguely worded, says Lisa Hawke, vice president of security and compliance at the legal tech startup Everlaw. All of this will have to get sorted out in court, she adds.
Another concern for companies using fraud-detection software is just how stringent to be about flagging suspicious behavior. When the algorithms are not zealous enough, they let fraudsters through. And if they’re overzealous, they lock out legitimate customers. Sift and its competitors market themselves as being better and smarter discriminators between “good” and “bad” customers.
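The zealousness question is really about where to set a cutoff on the score. A toy illustration with invented scores shows the trade: raise the threshold and more fraud slips through; lower it and more legitimate customers get locked out:

```python
# Toy illustration of the threshold tradeoff described above.
# Scores and fraud labels are made up for the example.
customers = [  # (trust_score_1_to_100, actually_fraud)
    (12, True), (25, True), (38, False), (55, True), (61, False),
    (72, False), (88, False), (95, False),
]

def evaluate(block_below: int):
    blocked_good = sum(1 for s, fraud in customers if s < block_below and not fraud)
    missed_fraud = sum(1 for s, fraud in customers if s >= block_below and fraud)
    return blocked_good, missed_fraud

for threshold in (20, 40, 60):
    good, fraud = evaluate(threshold)
    print(f"block under {threshold}: {good} real customers locked out, "
          f"{fraud} fraudsters let through")
```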
Algorithms always have biases, and companies are often unaware of what those might be unless they’ve conducted an audit, something that’s not yet standard practice.
“Sift regularly evaluates the performance of our models and tries to minimize bias and variance in order to maximize accuracy,” says a Sift spokeswoman.
“While we don’t perform audits of our customers’ systems for bias, we enable the organizations that use our platform to have as much visibility as possible into the decision trees, models or data that were used to reach a decision,” says Stephen Cox, vice president and chief security architect at SecureAuth. “In some cases, we may not be fully aware of the means by which our services and products are being used within a customer’s environment,” he adds.
Digital Bouncers
When an account is rejected on the grounds of its Sift score, Patreon, a service for supporting artists and creators that uses Sift to screen transactions on its site, sends an automated email directing the applicant to the company’s trust and safety team. “It’s an important way for us to find out if there are any false positives from the Sift score and reinstate the account if it shouldn’t have been flagged as high risk,” says Jacqueline Hart, Patreon’s head of trust and safety.
There are many potential tells that a transaction is fishy. “The amazing thing to me is when someone fails to log in effectively, you know it’s a real person,” says Ms. Hart. The bots log in perfectly every time. Email addresses with a lot of numbers at the end and brand new accounts are also more likely to be fraudulent, as are logins coming from anonymity networks such as Tor.
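Ms. Hart’s tells translate naturally into signal checks. The rules below are illustrative stand-ins for what real systems derive from thousands of signals:

```python
# Hypothetical signal checks for the "tells" described above.
import re

def suspicion_signals(email: str, account_age_days: int,
                      failed_logins: int, from_tor: bool) -> list[str]:
    signals = []
    if re.search(r"\d{4,}@", email):   # many digits right before the @
        signals.append("numeric email")
    if account_age_days < 2:
        signals.append("brand-new account")
    if from_tor:
        signals.append("anonymity network")
    if failed_logins > 0:
        # A botched login suggests a real person; bots log in perfectly.
        signals.append("human-like: imperfect logins")
    return signals

print(suspicion_signals("xk93847261@example.com", 1, 0, True))
```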
These services also learn from every transaction across their entire system, and compare data from multiple clients. For instance, if an account or mobile device has been associated with fraud at, say, Instacart, that could mark it as risky for another company, say Wayfair—even if the credit card being used seems legitimate, says a Sift spokeswoman.
The risk score for any given customer, bot or hacker is constantly changing based on that user’s behavior, going up and down depending on their actions and any new information Sift gathers about them, she adds.
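That cross-client, event-driven behavior can be sketched as a shared reputation table that every client’s events update. The event names and score adjustments below are hypothetical, not Sift’s actual API:

```python
# Sketch of shared, event-driven device reputation as described above.
from collections import defaultdict

SCORE_ADJUSTMENTS = {
    "chargeback_reported":   -30,  # fraud signal at one client...
    "order_completed":        +2,
    "login_from_new_device":  -5,
}

class DeviceReputation:
    def __init__(self):
        self.scores = defaultdict(lambda: 50)  # neutral starting score

    def record(self, device_id: str, event: str):
        """Events from ANY client move the shared score for this device."""
        self.scores[device_id] += SCORE_ADJUSTMENTS.get(event, 0)

    def risky(self, device_id: str) -> bool:
        return self.scores[device_id] < 30

rep = DeviceReputation()
rep.record("device-42", "chargeback_reported")  # flagged at one retailer
print(rep.risky("device-42"))                   # True for every other client
```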
For Our Protection?
These trustworthiness scores make us unwitting parties to the central tension between privacy and security at the heart of Big Tech.
Sift judges whether or not you can be trusted, yet there’s no file with your name that it can produce upon request. That’s because it doesn’t need your name to analyze your behavior.
“Our customers will send us events like ‘account created,’ ‘profile photo uploaded,’ ‘someone sent a message,’ ‘review written,’ ‘an item was added to shopping cart,’” says Sift chief executive Jason Tan.
It’s technically possible to make user data difficult or impossible to link to a real person. Apple and others say they take steps to prevent such “de-anonymizing.” Sift doesn’t use those techniques. And an individual’s name can be among the characteristics its customers share with it in order to determine the riskiness of a transaction.
In the gap between who is taking responsibility for user data—Sift or its clients—there appears to be ample room for the kind of slip-ups that could run afoul of privacy laws. Without an audit of such a system it’s impossible to know. Companies live under increasing threat of prosecution, but as just-released research on biases in Facebook’s advertising algorithm suggests, even the most sophisticated operators don’t seem to be fully aware of how their systems are behaving.
That said, sharing data about potential bad actors is essential to many security systems. “I would argue that in our desire to protect privacy, we have to be careful, because are we going to make it impossible for the good guys to perform the necessary function of security?” says Anshu Sharma, co-founder of Clearedin, a startup that helps companies combat email phishing attacks.
The solution, he says, should be transparency. When a company rejects us as potential customers, it should explain why, even if it pulls back the curtain a little on how its security systems identified us as risky in the first place.
Mr. Cox says it’s up to SecureAuth’s clients, which include Starbucks and Xerox, to decide how to notify people who were flagged, and a spokeswoman said the same is true for Sift.
Companies use these scores to figure out whom—real people or potential bots—to subject to additional screening, such as a request to upload a form of ID.
Someone on a travel service buying tickets for other people might be a scammer, for instance. Or they might be a wealthy frequent flyer.
“Sometimes your best customers and your worst customers look the same,” says Ms. Hart. “You can have someone come in and say I want to pledge $10,000 and they’re either a fraudster or an amazing patron of the arts,” she adds.
Related Articles:
Equifax, FICO Team Up To Sell Your Financial Data To Banks (#GotBitcoin?)
Over-Inflated Credit Scores Leave Consumers / Investors At Risk In A Recession (#GotBitcoin?)
Lenders Share Their Underwriting Secrets With Credit Karma (#GotBitcoin?)
Your Questions And Comments Are Greatly Appreciated.
Monty H. & Carolyn A.