Over-Inflated Credit Scores Leave Consumers / Investors At Risk In A Recession (#GotBitcoin?)
Consumer credit scores have been artificially inflated over the past decade and are masking the real danger the riskiest borrowers pose to hundreds of billions of dollars of debt.
That’s the alarm bell being rung by analysts and economists at both Goldman Sachs Group Inc. and Moody’s Analytics, and supported by Federal Reserve research, who say the steady rise of credit scores as the economy expanded over the past decade has led to “grade inflation.”
This means debtors are riskier than their scores indicate because the metrics don’t account for the robust economy, skewing perception of borrowers’ ability to pay bills on time. When a slowdown comes, there could be a much bigger fallout than expected for lenders and investors. There are around 15 million more consumers with credit scores above 740 today than there were in 2006, and about 15 million fewer consumers with scores below 660, according to Moody’s.
“Borrowers with low credit scores in 2019 pose a much higher relative risk,” said Cris deRitis, deputy chief economist at Moody’s Analytics. “Because loss rates today are low and competition for high-score borrowers is fierce, lenders may be tempted to lower their credit standards without appreciating that the 660 credit-score borrower today may be relatively worse than a 660-score borrower in 2009.”
The problem is most acute for smaller, less sophisticated firms that lend to people with poor credit histories, deRitis said. Many of these types of lenders rely mainly on the data supplied by Fair Isaac Corp., the so-called FICO score, and are unable or choose not to include other measures — such as debt-to-income level, economic data or loan terms — into their models for measuring risk, he said.
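The difference between a score-only model and one that layers in extra underwriting factors can be sketched in a few lines. The following is purely illustrative: the thresholds, weights, and function names are invented for demonstration and do not reflect any lender's actual model.

```python
# Illustrative sketch only: a hypothetical score-only underwriting rule
# versus one that layers in factors such as debt-to-income (DTI) and
# loan term. All cutoffs here are invented for demonstration.

def score_only_risk(fico: int) -> str:
    """Naive model: approve or decline on the FICO score alone."""
    return "approve" if fico >= 660 else "decline"

def multi_factor_risk(fico: int, dti: float, loan_term_months: int) -> str:
    """Layered model: the same 660 score can be declined when
    debt-to-income or loan term signals additional risk."""
    if fico < 620:
        return "decline"
    if dti > 0.43:                      # high debt burden relative to income
        return "decline"
    if loan_term_months > 72 and fico < 700:
        return "decline"                # long terms demand stronger credit
    return "approve"

# The same borrower can look fine to one model and risky to the other.
borrower = {"fico": 660, "dti": 0.48, "loan_term_months": 84}
print(score_only_risk(borrower["fico"]))   # approve
print(multi_factor_risk(**borrower))       # decline
```

The point of the toy example is the one deRitis makes: a lender that sees only the 660 misses the risk that the layered model catches.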
Car loans, retail credit cards and personal loans handed out online are the most exposed to the inflated scores, according to deRitis. This kind of debt totals around $400 billion, with nearly $100 billion of it bundled into securities that have been sold to investors, data compiled by Bloomberg show.
What has analysts concerned is that cracks have already begun to show up in the form of a rising number of missed payments by borrowers with the highest risk, despite a decade of growth. And now with the economy showing signs of weakness, as seen with the recent inversion of the Treasury yield curve, those delinquencies could grow and lead to larger-than-expected losses for investors in riskier asset-backed securities.
“Every credit model that just relies on credit score now — and there’s a lot of them — is possibly understating the risk,” Goldman Sachs analyst Marty Young said in an interview. “There are a whole bunch of other variables, including the business cycle, that need to be taken into account.”
Fair Isaac Corp. created its FICO credit score product in 1989, and it’s still used by more than 90 percent of U.S. lenders to predict whether a would-be borrower is an acceptable risk. Most scores range from 300 to 850, with a higher score purporting to show that someone is more likely to pay back debts. A competitor, VantageScore, was created in 2006 by the three major credit bureaus: Experian, TransUnion and Equifax.
The concern that’s come up, Goldman and Moody’s say, is that lenders haven’t adjusted their underwriting standards as average credit scores have risen during one of the longest economic recoveries on record. So as cracks start to appear in the economy, someone whose credit score rose to 650 from 550 since the Great Recession may pay their bills more like they did 10 years earlier.
“Borrowers’ scores may have migrated up, but inherently their individual risk, and their attitude towards credit and ability to pay their bills, has stayed the same,” deRitis said. “You might have thought 700 was a good score, but now it’s just average.”
Big banks and large lenders have been savvy enough to recognize the problem and include many other factors besides credit scores in their underwriting; some smaller lenders likely do the same.
FICO acknowledges that the credit score alone may not be enough to make informed underwriting decisions, and other factors need to be considered.
“The relationship between FICO score and delinquency levels can and does shift over time,” said Ethan Dornhelm, vice president of scores and predictive analytics at FICO. “We recognize there’s a lot more context you can obtain beyond a consumer’s credit file. We do not think that score inflation is the issue, but the risk layering on underwriting factors outside of credit scores, such as DTI, loan terms, and even trends in macroeconomic cycles, for example.”
But according to Goldman’s Young, the change in scores helps explain why missed payments on auto loans have significantly risen in recent years despite low unemployment, increasing wages and a relatively strong economy.
In February, the Federal Reserve Bank of New York said the number of auto loans at least 90 days late exceeded 7 million at the end of last year, the highest total in the two decades that the data has been tracked. Meanwhile, the subprime segment of auto-loan asset-backed securities has seen 30-day delinquencies rise 81 percent since 2011, driven by looser underwriting due to rising competition between lenders, according to S&P Global Ratings.
Marketplace lending — loans handed out online — has been flashing signs of stress. Missed payments by consumers and writedowns for online loans bundled into bonds increased last year, according to PeerIQ, a New York-based provider of data and analytics for the consumer lending sector.
“We don’t see the purported improvement in underwriting just yet,” PeerIQ wrote in a recent report tracking marketplace lending.
Michelle Russell-Dowe, head of securitized credit at Schroder Investment Management, avoids the retail credit card sector. So-called private label credit cards — those issued by stores, rather than big banks — saw the highest number of missed payments in seven years in 2018, according to credit bureau Equifax.
She urges investors to do the difficult work necessary to figure out how each lender approaches underwriting and to determine whether they take other factors into consideration besides just scores.
“As an investor it’s incumbent on you to do that deep credit work, which means you have to know as much as possible about how things should pay off or default,” she said. “If you don’t think you’re being paid for the risk, you have no business investing in it.”
The Secret Trust Scores Companies Use to Judge Us All
In the world of online transactions, trust scores are the new credit scores—but good luck finding out yours.
When you’re logging in to a Starbucks account, booking an Airbnb or making a reservation on OpenTable, loads of information about you is crunched instantly into a single score, then evaluated along with other personal data to determine if you’re a malicious bot or potentially risky human.
Often, that’s done by a service called Sift, which is used by startups and established companies alike, including Instacart and LinkedIn, to help guard against credit-card and other forms of fraud. More than 16,000 signals inform the “Sift score,” a rating of 1 to 100, used to flag devices, credit cards and accounts owned by any entities—human or otherwise—that a company might want to block. This score is like a credit score, but for overall trustworthiness, says a company spokeswoman.
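How thousands of weak signals might collapse into a single 1-to-100 rating can be illustrated with a toy model. To be clear, Sift's actual algorithm is proprietary and far more sophisticated; the signal names and weights below are invented purely for illustration.

```python
# Hypothetical sketch of aggregating fraud signals into a 1-100 score.
# Signal names and weights are invented; higher = riskier in this toy model.

SIGNAL_WEIGHTS = {
    "login_from_tor": 30,
    "new_account": 15,
    "email_ends_in_digits": 10,
    "device_seen_in_prior_fraud": 40,
    "mismatched_billing_country": 20,
}

def trust_score(signals: set) -> int:
    """Sum the weights of the observed risk signals, clamped to 1-100."""
    raw = sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals)
    return max(1, min(100, raw))

print(trust_score({"new_account", "email_ends_in_digits"}))            # 25
print(trust_score({"device_seen_in_prior_fraud", "login_from_tor"}))   # 70
```

A real system would weight and combine its signals with machine-learned models rather than a fixed lookup table, but the shape of the output — one opaque number standing in for thousands of observations — is the same.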
One Key Difference: There’s No Way To Find Out Your Sift Score.
Companies that use services like this often mention it in their privacy policies — but how many of us realize our account behaviors are being shared with companies we’ve never heard of, in the name of security? How much of the information one company shares with these fraud-detection services is used by other clients of that service? And why can’t we access any of this data ourselves, to update, correct or delete it?
According to Sift and competitors such as SecureAuth, which has a similar scoring system, this practice complies with regulations such as the European Union’s General Data Protection Regulation, which mandates that companies don’t store data that can be used to identify real human beings unless they give permission.
Unfortunately GDPR, which went into effect a year ago, has rules that are often vaguely worded, says Lisa Hawke, vice president of security and compliance at the legal tech startup Everlaw. All of this will have to get sorted out in court, she adds.
Another concern for companies using fraud-detection software is just how stringent to be about flagging suspicious behavior. When the algorithms are not zealous enough, they let fraudsters through. And if they’re overzealous, they lock out legitimate customers. Sift and its competitors market themselves as being better and smarter discriminators between “good” and “bad” customers.
Algorithms always have biases, and companies are often unaware of what those might be unless they’ve conducted an audit, something that’s not yet standard practice.
“Sift regularly evaluates the performance of our models and tries to minimize bias and variance in order to maximize accuracy,” says a Sift spokeswoman.
“While we don’t perform audits of our customers’ systems for bias, we enable the organizations that use our platform to have as much visibility as possible into the decision trees, models or data that were used to reach a decision,” says Stephen Cox, vice president and chief security architect at SecureAuth. “In some cases, we may not be fully aware of the means by which our services and products are being used within a customer’s environment,” he adds.
When an account is rejected on the grounds of its Sift score, Patreon sends an automated email directing the applicant to the company’s trust and safety team. “It’s an important way for us to find out if there are any false positives from the Sift score and reinstate the account if it shouldn’t have been flagged as high risk,” says Jacqueline Hart, Patreon’s head of trust and safety.
There are many potential tells that a transaction is fishy. “The amazing thing to me is when someone fails to log in effectively, you know it’s a real person,” says Ms. Hart. The bots log in perfectly every time. Email addresses with a lot of numbers at the end and brand new accounts are also more likely to be fraudulent, as are logins coming from anonymity networks such as Tor.
These services also learn from every transaction across their entire system, and compare data from multiple clients. For instance, if an account or mobile device has been associated with fraud at, say, Instacart, that could mark it as risky for another company, say Wayfair—even if the credit card being used seems legitimate, says a Sift spokeswoman.
The risk score for any given customer, bot or hacker is constantly changing based on that user’s behavior, going up and down depending on their actions and any new information Sift gathers about them, she adds.
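A score that drifts up and down as behavior accumulates can be sketched as a running tally over an event stream. Again, this is a hypothetical illustration: the event names and per-event adjustments below are invented, not anything Sift has disclosed.

```python
# Toy sketch of a continuously updated risk score: each new behavioral
# event nudges the score, which stays clamped to 1-100. Event names and
# deltas are invented for illustration.

EVENT_DELTAS = {
    "failed_login": -5,        # imperfect logins suggest a real human
    "chargeback_reported": 40,
    "profile_photo_uploaded": -3,
    "rapid_account_creation": 25,
}

class RiskProfile:
    def __init__(self, score: int = 50):
        self.score = score     # start at a neutral midpoint

    def observe(self, event: str) -> int:
        """Apply the event's delta and clamp the score to 1-100."""
        self.score = max(1, min(100, self.score + EVENT_DELTAS.get(event, 0)))
        return self.score

profile = RiskProfile()
profile.observe("failed_login")         # 45: looks a bit more human
profile.observe("chargeback_reported")  # 85: a strong fraud signal
print(profile.score)
```

Each transaction both consumes the current score and feeds the next one, which is why, as the spokeswoman notes, the rating for any given user is never static.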
For Our Protection?
These trustworthiness scores make us unwitting parties to the central tension between privacy and security at the heart of Big Tech.
Sift judges whether or not you can be trusted, yet there’s no file with your name that it can produce upon request. That’s because it doesn’t need your name to analyze your behavior.
“Our customers will send us events like ‘account created,’ ‘profile photo uploaded,’ ‘someone sent a message,’ ‘review written,’ ‘an item was added to shopping cart,’” says Sift chief executive Jason Tan.
It’s technically possible to make user data difficult or impossible to link to a real person. Apple and others say they take steps to prevent such “de-anonymizing.” Sift doesn’t use those techniques. And an individual’s name can be among the characteristics its customers share with it in order to determine the riskiness of a transaction.
In the gap between who is taking responsibility for user data — Sift or its clients — there appears to be ample room for the kind of slip-ups that could run afoul of privacy laws. Without an audit of such a system it’s impossible to know. Companies live under increasing threat of prosecution, but as just-released research on biases in Facebook’s advertising algorithm suggests, even the most sophisticated operators don’t seem to be fully aware of how their systems are behaving.
That said, sharing data about potential bad actors is essential to many security systems. “I would argue that in our desire to protect privacy, we have to be careful, because are we going to make it impossible for the good guys to perform the necessary function of security?” says Anshu Sharma, co-founder of Clearedin, a startup that helps companies combat email phishing attacks.
The solution, he says, should be transparency. When a company rejects us as potential customers, it should explain why, even if it pulls back the curtain a little on how its security systems identified us as risky in the first place.
Mr. Cox says it’s up to SecureAuth’s clients, which include Starbucks and Xerox, to decide how to notify people who were flagged, and a spokeswoman said the same is true for Sift.
Companies use these scores to figure out who—people or potential bots—to subject to additional screening, such as a request to upload a form of ID.
Someone on a travel service buying tickets for other people might be a scammer, for instance. Or they might be a wealthy frequent flyer.
“Sometimes your best customers and your worst customers look the same,” says Jacqueline Hart, head of trust and safety at Patreon, a service for supporting artists and creators, which uses Sift to screen transactions on its site. “You can have someone come in and say I want to pledge $10,000 and they’re either a fraudster or an amazing patron of the arts,” she adds.
Your Questions And Comments Are Greatly Appreciated.
Monty H. & Carolyn A.