PODCASTS, 12 September 2024

Nexi Talks: Payment Fraud

Payment fraud never sleeps. It can feel as though you need to work eight days a week just to keep pace with the criminals who keep challenging us in new ways. That's why we created Nexi Talks: a new audio mini-series on fraud prevention that you can listen to anywhere.

In the series, you'll hear from ethical hackers, criminologists, journalists, former fraudsters, software engineers, data scientists, and many other experts.

Find out how artificial intelligence is being used by both the good guys and the bad guys in the battle for the upper hand. Learn to recognize and fend off social engineering scams, and understand how to protect your business and your customers from the growing risk of fraud.

Episode 2

In episode 2, we talk with Troels Steenstrup Jensen of KPMG Denmark, together with Alberto Danese and Sean Neary of the Nexi Group, about how advances in AI and machine learning are changing the fraud game, both for good and bad.

Listen now and learn more about:

  • The hype around generative AI and its impact on fraud prevention.
  • The ongoing need for human oversight of AI systems to fight fraud in real time.
  • The innovative ways fraudsters use AI to sharpen their tactics, including social engineering and personalized attacks.
  • The future of fraud prevention and the role of AI in protecting banks, merchants, and consumers.

Søren Winge: Welcome to our podcast, Nexi Talks, that will hopefully help you better understand and prevent deception during the current war on payment fraud. We'll be joined by some of the best minds in the business, so you can learn from those who know payment fraud best. My name is Søren Winge, and I'll be your host.

Now, if you missed episode one, please go back and listen. You'll get some great insights into the ways fraud is evolving. But today we will try to answer the question everyone is asking: how are the advancements in artificial intelligence and machine learning changing the fraud game, both for good and bad?

We’re asking these questions, not to machines, but to three very human experts today. So, I'm pleased to be joined by Troels Steenstrup Jensen, who’s Head of Machine Learning & Quantum Technologies at KPMG Denmark.

Troels Steenstrup Jensen: Thank you, Søren. It's great to be here.

Søren Winge: And Alberto Danese, who’s Head of Data Science at Nexi.

Alberto Danese: Hello everyone. It's good to be here.

Søren Winge: And finally, our usual guest, Sean Neary, Head of Fraud Risk Management Services at Nexi.

Sean Neary: Hello, Søren. Hi, everybody.

Søren Winge: Welcome to you all. Let's get into it. Now Troels, there's so much hype about generative AI and how this tool can revolutionize our daily work lives. But from a fraud perspective, when did the journey around AI really start?

Troels Steenstrup Jensen: AI has been around for a long time. It was actually defined all the way back around 1955, when John McCarthy described it as the science of making intelligent machines. So basically, making computers do intelligent things. It has, of course, evolved a lot since then. And let me talk about two different evolutions within AI.

So, there's the part that relies on what we call training data, on examples of what we want the computer to do. And there's a part that does not rely on training data, that works out of the box. Let's start with the one that does not rely on training data. If you ask your navigation system to take you from point A to point B, it finds you the shortest route. It does not need a whole history of how people might have driven from point A to point B. The other part, the one that relies on data, is the one that's relevant for fraud and actually also for generative AI. This is the one where you show the machine a lot of examples of what you're looking for.
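The navigation example can be sketched in a few lines: Dijkstra's shortest-path algorithm needs only the map itself, no history of past journeys. The road network and distances below are invented purely for illustration.

```python
import heapq

# A tiny made-up road map: each node maps to its neighbours and
# the distance to them. No "training data" is involved anywhere.
roads = {
    "A": {"B": 4, "C": 2},
    "B": {"D": 5},
    "C": {"B": 1, "D": 8},
    "D": {},
}

def shortest_distance(graph, start, goal):
    """Dijkstra's algorithm: expand the cheapest frontier node first."""
    queue = [(0, start)]
    seen = set()
    while queue:
        dist, node = heapq.heappop(queue)
        if node == goal:
            return dist
        if node in seen:
            continue
        seen.add(node)
        for neighbour, cost in graph[node].items():
            heapq.heappush(queue, (dist + cost, neighbour))
    return None  # goal unreachable

print(shortest_distance(roads, "A", "D"))  # → 8, via A→C→B→D
```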

So, in the case of fraud, you would show it a lot of transactions and say: this one was fraud, this one was a normal transaction. And then you basically get it to classify between these two types.
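That labeled-examples idea can be sketched as a toy classifier. This is only a sketch: real fraud models use thousands of features, not a single amount, and the data here is invented. But the principle of learning from transactions marked fraud or normal is the same.

```python
# Toy illustration (not any production system): learn to separate
# "fraud" from "normal" transactions from labeled examples, using
# a single invented feature: the transaction amount.

def train(labeled):
    """Compute the average amount per class from labeled examples."""
    sums, counts = {}, {}
    for amount, label in labeled:
        sums[label] = sums.get(label, 0.0) + amount
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def classify(amount, centroids):
    """Assign the class whose average amount is closest."""
    return min(centroids, key=lambda label: abs(amount - centroids[label]))

history = [(12.0, "normal"), (30.0, "normal"), (25.0, "normal"),
           (950.0, "fraud"), (1200.0, "fraud")]
centroids = train(history)

print(classify(18.0, centroids))   # → normal
print(classify(990.0, centroids))  # → fraud
```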

Søren Winge: So, how did the journey start at KPMG in terms of leveraging the strengths of AI?

Troels Steenstrup Jensen: So, the collaboration between KPMG and Nets, which is now part of Nexi, started back in 2016. That's exactly when Nets had decided to purchase a big data platform. Finally, it was possible to really connect all the transactions. We're talking billions and billions of transactions on, you can say, one piece of compute that could also train models. And by training models, we mean that we show the machine the historical transactions and get it to create an algorithm that can decide whether a given transaction was normal behavior or fraudulent behavior.

Also, at that time, the behavior of the fraudsters was changing. Sure, we could talk more about this, but we were seeing more advanced attacks. We were seeing volume attacks; we were seeing robot attacks. And it was becoming more and more challenging to write rules to prevent this fraud because these rules needed to be more and more specific, making them harder to write and harder to maintain.

So suddenly, the idea of having AI do automatic rule writing showed up, and that very quickly matured into: maybe it doesn't have to write a lot of rules. Maybe it just has to write, you can say, one very advanced rule. Maybe it should move away from binary rules altogether and instead create a score that gives the probability of a transaction being fraud or normal behavior, given all the information available to the algorithm.
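The shift from binary rules to a probability score can be sketched as a logistic combination of weak fraud signals. The signal names and weights below are invented for illustration; a production model would learn its weights from historical transactions rather than have them hand-picked.

```python
import math

# Invented weak fraud signals with made-up weights. Each signal on
# its own would make a poor binary rule; combined, they yield a score.
SIGNAL_WEIGHTS = {
    "new_merchant": 1.2,
    "foreign_country": 1.5,
    "night_time": 0.8,
    "amount_above_usual": 2.0,
}
BIAS = -4.0  # fraud is rare, so the baseline score is low

def fraud_score(signals):
    """Map the set of active signals to a probability between 0 and 1."""
    z = BIAS + sum(SIGNAL_WEIGHTS[s] for s in signals)
    return 1 / (1 + math.exp(-z))  # logistic function

print(round(fraud_score([]), 3))  # quiet transaction: near zero
print(round(fraud_score(["foreign_country", "amount_above_usual",
                         "night_time"]), 3))  # many signals: above 0.5
```

The point of the design is exactly what Troels describes: no single condition decides the outcome, and a threshold on the score can be tuned without rewriting any rules.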

Søren Winge: So, after about a decade of working with this in house, and given the accelerated developments of the last couple of years, are we at the point where we're ready to hand over the reins to AI? What do you say, Alberto?

Alberto Danese: Well, Søren, I think, we are not ready yet, to be honest. If you think about it, as humans, we have leveraged tools and machines for centuries. And every century, every decade, every year, we've seen improvements in such tools. No matter the hype around artificial intelligence or generative AI, at the end of the day, it's a tool, it's an extremely advanced tool, but it's been designed, it's been developed by humans. And I think the place for AI and for machine learning in fraud prevention, but also elsewhere, is that of a very advanced assistant that can help us improve what we do.

Søren Winge: So, how has that changed the role between us and the machines? What is our role today, maybe compared to before?

Alberto Danese: Looking at how fraud prevention actually works, I think in the past it used to be 100% based on fraud analyst expertise. At Nexi, I'm privileged to be working with analysts who have years of experience and have seen many events, many situations where fraud prevention has to be in place. And until a few years ago, it was 100% on them to write, to define anomaly rules, basically, in order to deny transactions that were very unlikely to be genuine.

Now, the situation has changed quite a bit, as Troels was saying, not just in the last year or two, but probably over the last decade, and the activity of fraud analysts has been integrated with algorithms, with AI.

And so, we still have the great expertise of our analysts, but we have been able to develop AI algorithms that complement and integrate the existing rules. We also have to consider that we are in a field with a strong time constraint, because as cardholders, as users of digital payments, we all know that when we want to make a purchase online or in a physical store, we expect the transaction to be authorized right away, in a few milliseconds.

And as fraud experts, as data scientists, we have a very limited time frame to operate and to evaluate if a transaction is genuine or not. And so, we really have to design not only effective systems but also very quick systems in order for our customers to have a positive experience.

Sean Neary: And I suppose to add to that, Alberto, you say it sort of has to co-exist with the traditional fraud analysts, as we call them.

And the extra part of that is adaptability to trends, right? I think there can be this misconception that these machine learning and AI models adapt to new trends instantaneously: a new trend kicks in today, and the model is detecting it tomorrow and for the week going forward. In fact, if you agree, that's not true, right?

And this is why you need these rules in place, to put in tactical, immediate changes when a new trend kicks in, which will then eventually make it into the model. But not at the reactive speed that I think some people often believe.

Alberto Danese: I agree 100%. I think AI may seem like magic, but there is a lot of work under the hood.

And as Troels was also saying, there is the need to train AI algorithms, and training takes time; deploying a new model also takes time. So, I think it's very important to be able to put quick solutions in place for some specific events and to take into consideration that releasing a new, updated AI model is something that doesn't happen overnight.

Søren Winge: So, clearly this is becoming an increasingly powerful tool, but also an important one for the fraudsters, who are leveraging the same opportunity. Clearly, over the last couple of years we've really seen a development. How do you see the fraud landscape developing from the fraudsters' point of view, Sean?

Sean Neary: Yeah, that is a very good question, and probably a perspective not many people originally thought about. Think of the old-school, long-term analysts and fraud fighters. If they remember, scams back in the day were done quite manually. The fraudsters were organized criminals, and there were probably only one or two major players in the fraud space.

It took a lot of organization, data gathering and preparation. It was a lot of investment on their side. They were very calculated in where they put their attacks, because they had to get a return on their investment; there was a lot of upfront cost in doing it. It was very manual.

And what we've seen is that, with AI or ML becoming a commodity, a publicly available piece of technology at low cost or even sometimes free, they're able to industrialize this and create efficiencies. They are a company in their own right. They do have call centers. They have rooms of people.

They are collaborating across the globe, and now, with the use of AI, a one-man band can act as if he were the army of 20 people we might have seen 10 years ago. And it's not just the scale of efficiencies being utilized for fraud that is scary. Think about language barriers: you will find that there are certain regions that just weren't constantly under attack.

Specifically, if you look at places like the Nordics, where the language is harder to grasp, harder to fake, with so many nuances and so many local ways of speaking depending on where you are in the country, it just didn't give fraudsters a return on investment. With the latest AI technology, such as ChatGPT, we're seeing a lot more fraud spread into these regions because of translation services that are very convincing.

And then you're finding, not just that, but they're also diversifying their channels of attack. They're now not just doing it via email: they're doing it through social media. They're able to do it through video and through voice. This, again, is a result of the accessible nature of the tools you can get for mimicking people's voices and mimicking people's faces with deepfakes.

So, we've seen a huge change, and it's changing the modus operandi that we're seeing between the customers, the fraudsters, and us as the banks and the financial institutions. And I know we're going to cover these in future episodes.

Søren Winge: Yeah, many of these types of attacks, which are becoming much more tailored, may also be discussed under the heading of social engineering: how these criminals are really becoming much better at doing these targeted attacks.

So, Troels, seeing from your point of view, how are you seeing this develop?

Troels Steenstrup Jensen: I think Sean covered some very nice points. A part of social engineering is, of course, to understand the person that is being targeted. And you can really use these new AI developments to understand much more closely who you're actually trying to target.

Let's say you have an open Facebook profile, or you have a LinkedIn profile. Then you can actually have AI go in there, analyze the content that's available, and tell you what type of target you have and what the weak spots for this target might be. So again, as Sean was saying, it really removes some of the investment needed on the fraudsters' side, because if you can suddenly have an AI analyze what the best attack vector is for this person, given his LinkedIn or Facebook profile, that makes it a lot easier to start that social engineering and make it successful.

There's also a type of fraud called CEO fraud, where the adversaries manage to hack their way into the account of a CEO or senior person within a company and then analyze the types of communication, the emails being sent back and forth. At some point they start an attack where they literally pretend to be the person they've taken over. They've put in the effort to learn how this person writes and what their normal working hours are; they've done their homework. So, the attack really looks like it's coming from the person they're impersonating. And of course, they make sure there's an urgency and an important deadline.

Søren Winge: What can we do to stop this? How can we, maybe, further leverage AI to protect banks, merchants, and consumers from this growing fraud?

Troels Steenstrup Jensen: You can dig into that vast knowledge repository that Alberto mentioned, from the fraud analysts who are working with this, to really pull out the areas where the current rules are not strong and work out how to turn that into an AI solution. We made a setup where all of these small pieces of evidence that something might be fraud were connected up and then aggregated through an ensemble model to, you can say, reinforce those signals.

So, it's a little bit like looking for the small crumbs here and there that, at the end of the day, say: no, this is not normal behavior. There have been some major updates of the model since then. One of the updates was to teach it some of what the current rules were doing. Because, as Sean was also saying, it's a very nice setup: if you need to respond to a very new type of fraud that the model is not detecting, you can put in a rule, and then, as you're updating the model, you would like the model to learn this behavior to a large extent, so you can clean up the number of rules you have and don't end up with an ever-growing rule base.

I think explainability is really the part that closes the gap between rules and scores. Rules are very easy to explain, because you typically write them with a certain scenario or a certain fraud pattern in mind.

So, it's embedded: once a rule fires, you know which type of fraud it triggered on. A score, on the other hand, can be high for a number of reasons. So, you really need the explainability in order to say why it came out as high. And that can really be used by the agents who review alerts on the fraud platform afterwards, so they know what to look for.
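For a linear scoring model, this kind of explainability falls out naturally: the score decomposes exactly into per-feature contributions (weight times value) that an analyst can rank. A minimal sketch, with made-up feature names and weights:

```python
# Hypothetical linear-model weights; a real system would learn these.
WEIGHTS = {"amount_vs_usual": 1.8, "new_device": 1.1, "odd_hour": 0.4}

def explain(features):
    """Return each feature's contribution to the score, biggest first."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    return sorted(contributions.items(), key=lambda kv: -kv[1])

# An alert: amount 2.5x the usual, from a new device, at an odd hour.
alert = {"amount_vs_usual": 2.5, "new_device": 1.0, "odd_hour": 1.0}
for name, contribution in explain(alert):
    print(f"{name}: {contribution:+.2f}")
```

Here the analyst would immediately see that the unusual amount dominates the score, which is exactly the "why is this high?" answer Troels describes. Non-linear models need dedicated attribution techniques, but the goal is the same.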

Sean Neary: I suppose to add to that, you're talking about general model governance as well. There are boundaries that we have to abide by when working in the environments we do, such as protected attributes, for example: certain things that you're not necessarily allowed to utilize when modeling. Whereas for fraudsters, the gloves are off: whatever data attributes they have, whatever they can utilize to make a better output, they can use.

So, there are some restrictions there, for sure. And as you say, explainability is very important for us to show that we are utilizing this technology in the correct way and not going in blindly. It is done in a controlled manner, and that again does constrain some of the algorithms you want to use, the types of approaches you want to take.

And Alberto, I'm sure you've probably seen the evolution of where model governance has gone and how strict it is. What do you see?

Alberto Danese: I think, you know, explainability is playing a crucial role for us. Being able to really understand why a model gave a high risk to a transaction, to an authorization is key also in debugging, let's say, in understanding if the model actually performed the way we expect.

It's very important, not just in production, when a model is live, but also in development. We can understand if there is something wrong in the development of the models because, at the end of the day, it's not magic.

Søren Winge: Maybe, Sean, you could elaborate a bit on how we've used this model internally and developed it over time, and also on how we try to use the opportunities in other areas.

Sean Neary: Yeah, definitely. And what I'm going to say, I'm going to address more from an industry perspective. We ask, is this the future? It's the now, and actually it's also the past. Alberto has already referenced how long it's been in use, and the same with Troels on the 10 years.

And that is true. We've been utilizing variations under the umbrella of AI, you know, machine learning, commercially in fraud for decades. I've been in this for nearly 18, 20 years, and the models have been around since I first started in this domain. So, it's more about how we've adapted and iterated with the new capabilities that this tool, AI in general, gives us. You heard about algorithms, algorithm types. I'd say there has been a vast, innovative movement in the types of algorithms utilized in fraud detection specifically. We started off heavily in the neural net world, the black box world, as everyone likes to call it, where it was hard to know exactly what was going on. Then we've moved more to open-source capabilities and more explainable elements, such as random forests, for example, and gradient-boosted trees on top of that. These have been readily available, off-the-shelf algorithms, designed and created openly. We're adopting that, and that's allowing us to get models out faster, explain how they work, and understand them. The cost of hardware has obviously also shrunk.

So, we can now have more powerful hardware to run more sophisticated algorithms. And we are bound by costs; businesses do have to manage their costs. We don't have an unlimited array of machines that we can run whenever we want, forever. We have to do it within a certain cost. But transaction monitoring for fraud has been there, and it's been around for a long time.

But what we've been iterating on more is things like voice recognition. That was the second thing to come in the UK, it has been around everywhere else for quite some time, and it started utilizing the natural language processing elements of, again, this technology to identify trust, which is important.

Is it Sean? Do I recognize him? Is it a voice I recognize from our previous calls? And then also identifying the risk. This comes from trained behaviors, as you were mentioning. We're then pivoting that across into the predictive side, the future: predicting the next transaction and then assessing it against our prediction.

We predicted Sean is going to buy some new trainers. Were they trainers? Were they not? Was it a cash withdrawal in a different country? We're moving forward with that, but then you've got to think about the operational side of things. You hear a lot about using this for efficiencies. I mentioned scale earlier, industrial-size scale. Because of that scale, keeping on top of the volume and the attacks we're getting would mean scaling our operations to handle all those cases, handle those customer calls, handle those interactions, and bring down the bot emails or fake websites being used against us. So, for us, the future is utilizing this now, again, in a more diverse way, similar to the fraudsters, across our entire ecosystem of fraud management.

What can we do in operations? How can we utilize it there for call management and our chat functionality? When we're trying to get money back and retrieve the money for the customers through the merchants, through the schemes, can we automate those things? That enables us to put more personnel, more people, into the front of the fight, running those analytical models, working with Alberto and the team and Troels and co. That, for me, is the future: automate where you can, and diversify the use of this technology across your channels, so you have an interconnected strategy and management to fight this.

Søren Winge: A lot of things are going on under the hood, and the data pools are growing in size. Of course, that increases the knowledge on which we base these decisions in a split second, or in a millisecond. But how do we manage all that data?

Alberto Danese: If we get down to the nitty gritty, we are dealing with more than ten million transactions per day in all the countries where we are present as Nexi and Nets.

So, it's a huge volume of transactions, of authorizations, that takes place on a regular day. And I'm not talking about Black Friday or the days when we have an even higher load. We have to be able to process this information quickly, as I mentioned before. So, when it comes to machine learning models, there are a lot of technicalities.

If I have to highlight just a few points: we have some information from the authorization itself. That's, by the way, an ISO standard, because it allows transactions to be made everywhere in the world, thanks to the international schemes. So, we have a lot of information, like the amount of the transaction, the merchant, and so on and so forth.

But we have to be able to integrate this information from the transaction itself, from the authorization itself, with historical behavioral data: on the card, for instance, on the merchant, on previous interactions of the card with that merchant. So really, the challenge from a machine learning engineering point of view is to do this very effectively: integrate the information in the authorization itself with, let's say, behavioral patterns. These then make up the data that is used as training data for a machine learning model and is then used in real time to score a transaction, because at the end of the day, we want to give a risk score for a transaction.
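One way to picture this enrichment step, purely as a sketch (the field names and the running-profile design are assumptions for illustration, not any real schema): keep a small behavioral profile per card and combine it with the live authorization to build the features a model would score.

```python
from collections import defaultdict

# Running behavioral profile per card: how many past transactions
# and their total amount. A real system would track far more.
history = defaultdict(lambda: {"count": 0, "total": 0.0})

def record(card, amount):
    """Update the card's profile after a completed transaction."""
    history[card]["count"] += 1
    history[card]["total"] += amount

def features_for(card, amount):
    """Combine the live authorization with the card's stored profile."""
    past = history[card]
    avg = past["total"] / past["count"] if past["count"] else amount
    return {
        "amount": amount,
        "avg_past_amount": avg,
        "amount_ratio": amount / avg if avg else 1.0,
        "is_first_transaction": past["count"] == 0,
    }

# Simulate some normal history, then an unusually large authorization.
for amount in (20.0, 25.0, 15.0):
    record("card-123", amount)

print(features_for("card-123", 600.0))  # ratio far above 1: suspicious
```

The design choice this illustrates is that the profile lookup is a cheap in-memory read, which is what makes the millisecond budget Alberto mentions achievable.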

So, I think this is the challenge: using real-time data but also incorporating historical behavior. I've mentioned the technological challenge multiple times, but there is another important aspect, actually a statistical one. In statistics, we consider every event that happens in two percent of cases or fewer to be a so-called rare event.

And when it comes to fraud, actually, we are way lower than that. I'll tell a funny story. I interview a lot of graduates, and I recently did a round of interviews, and I often ask some questions that are not part of the technical assessment. I asked them to give me an estimate of what they expect the fraud rate to be.

And obviously, I know the reality of things. Some people think that fraud is maybe around ten percent of all authorizations. I had a guy tell me 20%, and I was blown away, because it's incredibly far from reality. Now, I won't go into the exact numbers for obvious reasons, but look at the European PSD2, the Payment Services Directive, which covers everything related to payments, including fraud.

When they speak about fraud, they measure it, and they also provide some thresholds in basis points. A basis point is one case out of 10,000 transactions. So, we are talking about 0.01 or 0.0-something percent of transactions that are actually fraud. We're in a very challenging environment at the statistical level, too, because we have a few, let's say, attempted frauds in a world of genuine transactions. And this is really the second challenge that we face, besides the technological one.
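The basis-point arithmetic is worth making concrete, since it shows just how far off the interview guesses were:

```python
# One basis point is 1 case in 10,000 transactions, i.e. 0.01%.
transactions = 10_000
fraud_cases = 1

rate_percent = 100 * fraud_cases / transactions
print(rate_percent)  # → 0.01

# A "10% of transactions are fraud" guess overestimates a
# one-basis-point fraud rate by three orders of magnitude.
overestimate = round(0.10 / (fraud_cases / transactions))
print(overestimate)  # → 1000
```

This class imbalance is what makes fraud modeling statistically hard: a model that blindly labels everything "genuine" is already 99.99% accurate, so accuracy alone says nothing about how well fraud is caught.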

Søren Winge: Even though fraud is growing, it's growing from a very low starting point.

Alberto Danese: Exactly. I think our business would not be sustainable with a much higher level of fraud, to be honest.

Sean Neary: And you've got to think, when you say it's growing: so are the genuine transactions. The percentage is not changing.

Søren Winge: So, where do you think AI will take us from here? What is the next kind of frontier that we'll see?

Sean Neary: Where will AI take us from here? I think there are still limitations in AI. You heard Alberto say we're not ready to hand over the reins. I think AI is going to take us in a more scalable direction for what we're doing.

As I mentioned before, we'll be able to utilize it to do a lot more, operationally specifically. It's a tough one to answer right now, Søren, just due to the rate of change in the industry. But for me, we're doing a great job with what we have today. If I'm honest, if you apply it correctly and you apply time to it correctly, we're able to output fantastic results utilizing AI, specifically in the transaction monitoring space.

As I've said before, I think it's more about where we can apply it elsewhere in the ecosystem of fraud management. What other channels, or what stages in a payment, can we utilize it in? Or what specific use cases can we apply it to? Scams, for example. We haven't really touched on scams just yet here; we've spoken predominantly about general card transaction fraud.

Think about scams: that's a genuine person making a transaction. You can have all the history in the world in your model, and it can tell you the transaction is genuine, because it probably is; it's the customer clicking yes. And you have things such as trust signals, attributes that identify: did they authenticate?

With scams, yes, they have authenticated. Well, I think the future could be more of that in-depth behavioral understanding of a person's spending pattern, and Troels mentioned it: we've got more access to data now. We've got more insight into what a person looks like. Yes, we've got some challenging laws on data rights and data privacy rights.

But at the same time, I believe there is so much data out there that we can start applying it in areas where it was really hard to use a predictive model, and go more into a true behavioral model. And that's what we're seeing with the advancement of these ChatGPT-style models and deep learning algorithms being utilized in our organizations today.

So, I think that's more of the future: putting this technology to those use cases where it was probably deemed impossible to use it beneficially.

Søren Winge: Thanks for shedding some light on this, so to speak, and talking about how you see the future. Maybe, as a few closing remarks and takeaways: Alberto, what are the key takeaways from your point of view, in terms of AI and fraud?

Alberto Danese: I think that for people like us with a passion for data, for algorithms, for technology, we are living in amazing times. Technology, AI, is not only running, it's accelerating. We have huge advancements in ever smaller timeframes; every month, every week, actually, there are new advancements. And it's just great, because, as Sean was mentioning, we can scale the countermeasures that we put in place.

I think the key challenge, and also the key takeaway, is that we have a lot of opportunities, a lot of technology, a lot of hype around AI. We have to be great at understanding which parts of the AI advancements are actually useful for fraud prevention, because at the end of the day, what we care about is providing a safe, good experience for our customers. And I think AI can help us a lot with this.

Søren Winge: Sure. What about you, Troels? How do you see it?

Troels Steenstrup Jensen: Let me start with a small anecdote from when I first started working with fraud. I was told that the very first fraud prevention measure put in place, basically immediately after a new card scheme was launched many years ago, literally consisted of a matrix printer that would print out every transaction on a long piece of paper.

And then, at some point, an analyst would take a look at it and evaluate whether some of it looked fraudulent or not. I just thought that was an interesting historical perspective on how it started. And then, maybe continuing what Alberto was saying: there have been huge advancements in what's possible. I think it's simply such an interesting area to work in. It's an important one, where we're keeping cardholders safe, and we're using technology to do that. The technological landscape is continuously improving: compute is increasing, data availability, also across channels, is increasing, and the algorithms that can be employed are also getting better.

It's really fascinating to see that every year there's something new you can do while still keeping to those millisecond requirements, which are really hard requirements, because you as a cardholder want that transaction to go through quickly. So, I think it's simply an exciting area to work in, where you're constantly on the verge of what's actually possible to get up and running in production in order to reduce fraud even further.

Søren Winge: And Sean, maybe, you have a perspective in the end?

Sean Neary: Yeah, my key takeaway: I think the first thing is for everyone to recognize that they may hear about the fraudsters having these tools, but they must understand that we have them too. We do have the same access. We have the same skill.

We have the same investments that we're putting in. So, I think that's a clear takeaway, and we're continuing to do that; we adopt at a similar rate to the fraudsters. That's important for everyone to understand. But the second takeaway is that we must all recognize that AI and machine learning are not the silver bullet in this specific domain.

You've heard throughout this whole episode, there is still the need for us as humans to be involved in this. There is still that requirement for us to then work in tandem, alongside this technology, and take advantage of the powers that it gives us to fight against fraud. And as long as we work together, in harmony, we are going to win this war against fraud.

Troels Steenstrup Jensen: I think that's actually quite relevant, what you also said, Sean: we also have access to those tools, similar tools. But there are also some things that we have access to that the fraudsters do not. We have access to the full transaction history of the card, so we can actually say whether something is normal behavior for this card or not. That is information a fraudster will not generally have available. They might just have a card number; they don't necessarily know what normal behavior looks like for this card. That's really what gives us some of the weapons, or insights, that can be used against the fraudsters: you can really say, this is the normal behavior for this card.

And if the fraudster tries to commit fraud or something using some means that is not normal for that card, then we'll catch it and stop it.
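The behavioral check Troels describes, comparing a transaction against the card's own history, can be sketched in a deliberately simplified form. This is an illustrative assumption: a real engine scores many features (merchant, geography, device, timing), not just the amount, and does so within milliseconds.

```python
from statistics import mean, stdev

def is_anomalous(card_history: list[float], amount: float,
                 threshold: float = 3.0) -> bool:
    """Flag `amount` if it sits more than `threshold` standard
    deviations away from this card's historical mean spend."""
    if len(card_history) < 2:
        return False  # not enough history to judge normal behavior
    mu = mean(card_history)
    sigma = stdev(card_history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold

# Typical small purchases on this card:
history = [12.50, 40.00, 22.30, 35.10, 18.75]
print(is_anomalous(history, 25.00))   # False: within normal range
print(is_anomalous(history, 950.00))  # True: far outside normal range
```

The point of the sketch is the asymmetry Troels highlights: the fraudster holding only the card number has no `history` to compare against, while the processor does.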

Søren Winge: That's about it for today. In the next episode, we'll hear from a leading bank about how fraudsters are using tools and techniques, including AI, to perpetrate phishing, vishing and smishing scams, which generate millions for them, every year. In the meantime, for more information, visit nexigroup.com or connect with us on LinkedIn.

Thanks for listening, and we'll see you next time!

Episode 1

In the first episode, you'll hear Jerry Tylman of Fraud Red Team and Sean Neary of the Nexi Group discuss how the fraud prevention landscape is evolving.

The podcast and transcript are available in English.

Søren Winge: Welcome to this new podcast, Nexi Talks, where we will be doing a deep dive into fraud prevention. We have one aim: to help you understand and prevent deception as the war on payment fraud continues to heat up. We'll be joined by some of the best minds in the business, so you can learn from those who know payment fraud the best.

My name is Søren Winge, and I'll be your host.

Today, I'm joined by Jerry Tylman, Partner at Greenway Solutions and Founder of Fraud Red Team. His company mimics the tactics of fraudsters to highlight the risks to banks. Welcome to you, Jerry.

Jerry Tylman: Hi, Søren, very happy to be here today.

Søren Winge: I'm also joined by Sean Neary, Head of Fraud Risk Management at Nexi. Hi, Sean.

Sean Neary: Well, thanks, Søren. It's good to be here and I can't wait to jump into detail with you and Jerry on these subjects; specifically from a banking side: the challenges that we're facing on this increased agility from the fraudsters as a result of the increased availability of the technology, such as AI.

Søren Winge: Great to have you both with us. Right, let's get into it.

So, Jerry, how did we end up here today? How has fraud evolved, not least driven by AI?

Jerry Tylman: Fraud's been around for a long time, and it always follows the opportunity, and it adapts to the changing control environment. So, as banks introduce new products and services, you are always going to see fraud slightly behind that new introduction.

Søren Winge: Can you maybe elaborate a bit on that? How do you see the criminals follow these new opportunities?

Jerry Tylman: Generally, what happens in banking is: you roll out a new product, you see where the fraud comes from, and over time you adapt your controls to the fraud that you are seeing. So, as banks came out with credit cards, fraudsters figured out ways to steal those credit cards, or steal all the numbers on those credit cards, to be able to use them through electronic channels. When banks introduced online banking, fraudsters figured out ways to steal your user ID and your password, break into that account to commit what we call account takeover, and move that money to other bank accounts.

The fraudsters are always looking for that gap, either in the actual code itself or in the processes associated with it. And generally, they find those things and it takes quite a while for the banks to be able to catch up.

And in the interim, there's a lot of money to be made.

Søren Winge: So, is it, in a way, a flaw in terms of how we design these systems?

Jerry Tylman: It's not that there's an absence of thinking about any of the fraud attacks that are there. It's just that you can't think of everything that the fraudsters are going to be able to do.

So, at some point in time, you have to release that product. And then you have to see where the fraud manifests itself. One of the reasons that we created our service is to help banks accelerate finding those gaps and weaknesses in their products and channels. Hopefully we can find them faster than the fraudsters, and we can help banks close those gaps before customers lose money and are disrupted, and before the banks have to spend a lot in operational expense to deal with those defrauded customers.

Sean Neary: And that's interesting, right? So, Jerry, if you think about it: if we look back to how fraud was many years ago, when I started 20 years ago to where it is now, it's also a discussion point of how scalable it was back then to how it is now, right, and the rate of change of those attack vectors or MOs that we are seeing that your team are being brought in to do.

Because if you look back to when digital banking first, sort of, came out, there was lots of unknowns. Authentication wasn't that great. The tooling available to fraudsters didn't really exist. You found that it could be one specific gang that was then trying to work, but they were having to buy a specific list for one single bank at any one time, attack that bank for a certain period in a specific way, with very limited information they have.

So, that rate of change just wasn't there, right? And it gave banks the possibility to try and get on top of it. Is it fair to say, also, that because of the digital explosion and the availability of tools, opened up not just by AI but also by anonymous communication channels such as the dark web, scaling is now almost infinite for these fraudsters? They are able to try multiple attack vectors at any one time, to see if there are any flaws across a much broader aspect of the business.

Jerry Tylman: Yeah, a great example of this would be new accounts and identity verification. One of the problems that financial institutions deal with today is that these data breaches that have been happening for the last 15 years are so big that you can basically assume everybody's information is on a bad actor database somewhere in the world.

Søren Winge: So, Jerry, how do you see that the banks can adapt to this?

Jerry Tylman: I think of adaptation in two ways. One is how the banks have always done it, which is a reactive mode. And what you are doing there is you are looking at the true frauds that you get. And you are asking yourself, how did we miss this particular fraud? What changes do we need to make to our rules to be able to catch this the next time that we see it?

The difference between fraud detection and I would say cyber security has been: cyber security a long time ago, they adopted this sort of Red Teaming approach to proactively testing their controls. So, they are constantly probing and seeing, hey, how can I break into the interior of the bank and be able to exfiltrate data or something like that.

Whereas the approach in fraud has always been somewhat the opposite, which is we look at where we have losses, and we figure out how do we change our controls. And so, what we have been trying to do is say, let's flip that a little bit and let's be proactive, right? Some people will call it “offensive security”, where you are trying to beat your controls ahead of the bad guys and allow you to tweak those things before the losses manifest themselves.

And I would really say this: fraud follows a couple of things, right? One is that fraudsters are always going after our customers, because our customers seem to be the weakest link in the whole chain. They also go after any kind of change. Anytime you introduce a new channel, like a digital wallet, or banking over the phone, or banking online, and anytime a new control is introduced, they are going to test that control. Things we are seeing right now would be biometrics: fingerprints, voices, faces, and so on. And then you also have to keep in mind what your competitors are doing, because they might be pushing that change to you. So, you have to be aware of the entire banking ecosystem and what those competitors are doing, because fraud might be coming to you.

Sean Neary: The fraudsters, they are not a corporate organization, right? Some of them could just be a group of two people, some of them could be a group of 50 working across certain boundaries, but they don't have the restrictions on adaptability that we do in the banks.

So, how can the banks adapt to that change? And how fast can banks change? Because before, you had much more locked-down channels; there were very few attack vectors, like I was saying earlier on. So, you could control that, and they didn't come along as often.

I'm not sure if you have seen a similar thing in the US, but as we have seen across Europe: as soon as one hole closes, another opens up. But the bank itself has to get funding, and has to get the right competencies and team together to make that change. Quite often, by the time that change has been put in, at least from a back-end perspective, you are already behind the curve, and that's why I like this “offensive” approach to preventing fraud.

I see the industry quite often investing in fraud detection, which is a bit too far down the line given the speed and the rate of change that we are seeing today. And it's something that is truly driven by the boundaries you have when working in a tier one, tier two, or any financial institution. We can only work as fast as our businesses can make decisions and our technology can catch up, because we are not all running on top-end technology; we are bound by huge legacy platforms that have been there for a long time, maybe with different data structures and different connectivity types. Whereas the fraudsters will just go and buy a new service. They'll spin up a new AWS environment and run some applications off it, because they can, or their friends have just written a new algorithm to power the next smishing campaign.

Jerry Tylman: We like to think of problems in three buckets. There are the “known” problems, where I'm working on fixing something that I know is a problem right now. Then there are the “known unknown” problems, where I know I have a problem but I don't know how the fraudsters are beating me. And then there are the “unknown unknowns”: there may be some problem that I'm not aware of yet, and I have no idea what it is or how it's going to manifest itself. And so, a great example of rapidly fixing problems is in this “known unknown” category.

So, we have been approached several times by clients who are getting beat and haven't figured out how. In the United States, we have a person-to-person payment method called Zelle, which allows me to send money to you, up to maybe $5,000 at a time depending on the bank, and the money arrives instantly. Obviously, fraudsters love speed, so attacking Zelle transactions is something they like to do. One of the controls that the banks put in place was: before I could send a Zelle payment to you, I would have to enter a one-time passcode into the system. All makes sense, right? And one of the ways that the fraudsters have been stealing the one-time passcodes is through social engineering; they would essentially get the customer to give them the passcode.

In this particular situation, this fraud was happening at such a magnitude that there was no way that the bad guys were getting the customers to give away that many codes. And the customers weren't calling into the bank saying, “I gave the code to somebody”. So somehow, they were able to go into the system and redirect that one-time passcode instead of going to the legitimate customer, it was going to the bad guy. And so, they gave us that problem and they said, what's going on? How are they doing it? And so, our team started taking a look at it and within a couple of days, we figured out in the code, how this was actually happening. And we went back to the bank, we said, “it's in the code, they are doing this in the middle of the transaction. They are inserting their phone number, so the one-time passcode is going to them”.

And they took that to the development team. And the development team was like, “no, that can't be possible, there's no way they can do it”. So, we actually videoed our guys doing it and showed them exactly where in the code we were doing this insertion during the transaction. And they were like, “ah, yes, it's possible, we see where it's happening”. So, sometimes when the problem is big enough and thousands of customers are being impacted and millions of dollars are lost, then all of a sudden, you get all the resources you need to be able to fix something and it can happen within days, and we have seen this multiple times.

So, in the United States, in 2022-2023, our FBI estimated that over $10 billion was lost to scams. This is where customers gave the money to the bad guys because they were scammed. And that was based only on the reported number of incidents, so a lot of people think the real number was probably five times larger; call it $50 billion.

A $50 billion company in the United States would, I think, be in the Fortune 100. So, if Scam Inc is really $50 billion, we are dealing with entities that are, combined, essentially a Fortune 500 company. There's a tremendous amount of incentive to continue doing this, and that attracts a lot of very bright people in parts of the world where ripping off Americans isn't necessarily against the law. So, we are up against what I would say is a well-funded adversary that is technically adept. They are attracting great talent, they are a persistent threat, and we have to treat them that way. And if we start treating them that way, which is what the cyber community has been doing for the last 20 years, I think you'll see that we get more resources and more collaboration.

Søren Winge: So, Jerry you mentioned before, the example that one bank hired you and you devoted a lot of time and resources to identify an issue in their one-time password process towards their customers, where in fact criminals had found a way to redirect these codes and could exploit this bank.

I guess what will happen is that they will then – the criminals – move on to the next bank. Can you see that the banks could collaborate more closely to exchange insights around what is going on? I expect that the next bank would have the same or similar system that they could exploit in the same way.

Jerry Tylman: Yeah, that's something we are thinking about, because that “known unknown” at the one bank that came to us and said, we are getting beat, and this is how we are getting beat, is potentially an “unknown” at 50 other banks. So, do we go test 50 other banks to see if we can do this there? Or do we put a bulletin out and say, “hey, we found this problem at this financial institution. You should check this. It was a security flaw that resulted in millions of dollars being lost”. And so, within our network of testing customers, we are looking at: could we issue these bulletins and then run these tests simultaneously to see if that gap exists there?

So, that's one form of collaboration that we are looking into as part of our service. But I would say that collaboration is difficult because it requires lots of banks agreeing on how to share information and when to share information and the legality of sharing that information. So, it's not something that gets done quickly, right? And again, fraudsters don't have to create committees and figure out if it's legal. Fraudsters can go ahead and do something the minute they think that it's profitable. So, in instances where collaboration is taking place, it's been very successful. It just takes a long time to get there.

I would say that other things that have been going on in the industry for years would be things like consortium databases, where if you find a particular device, like a laptop or a phone, that's associated with fraud, you can put it onto a vendor's negative list. And if you are working with that vendor, you can check their negative list, which is built from all the customers that they have. But think of how well funded the bad guys are: if they lose a device, they just get a new device, and a new one, and a new one.

And what we have seen is that there are these, what they call, SIM farms, where you might have 500 iPhones or 500 Android phones in one room, all hooked up and all being used to send out smishing text messages or to put something out on WhatsApp or some other social media platform. So, what we're finding is that as soon as we make a change, like sharing data about that one bad device, the bad guys just figure out, “hey, here's a way to get around that, I'll just have 500 devices”.

So, what we really have here is a cat and mouse game where every move that the banks make to control the environment just creates a counter move on the part of the bad guys to figure out how do I pivot and get around that new control.
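The consortium negative list Jerry describes earlier in this exchange boils down to a shared block set that every member bank can write to and screen against. A minimal sketch, with invented device IDs and function names:

```python
# Shared negative list of device fingerprints tied to confirmed fraud.
# In practice this lives with a vendor and is built from all member
# banks' reports; here it is just an in-memory set for illustration.
negative_list: set[str] = set()

def report_fraud_device(device_id: str) -> None:
    """A member bank reports a device seen committing fraud."""
    negative_list.add(device_id)

def screen_device(device_id: str) -> str:
    """Check a session's device fingerprint before allowing a payment."""
    return "block" if device_id in negative_list else "allow"

report_fraud_device("device-4f9a")    # flagged by bank A
print(screen_device("device-4f9a"))   # bank B now blocks it: block
print(screen_device("device-77c2"))   # unknown device: allow
```

The limitation is exactly the one Jerry names: a negative list keyed on devices is cheap for a well-funded SIM farm to evade, since a burned device is simply replaced, which is what pushes detection toward behavioral signals instead.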

Sean Neary: Exactly, back to that point about their ability to scale and adapt now, based on that growth of technology. Again, 20 years ago it would have cost a fortune to acquire all those mobile phones, set up a racking system, and acquire contracts and mobile phone numbers to get it working. Now you can buy phones for cents on the dollar that are digitally enabled, with some software running them, right? As you say, they can spin one farm down and spin another up. And that's a result of that exponential growth and cost reduction in tech.

Jerry Tylman: And what they have also done, to make the life of that phone last a little longer, is they don't try to send 50,000 messages from it in one day. They might send one every 10 seconds; they just dial down what they send out. And instead of talking about an IRS refund, they might just send a message that says, “hello”. Then, if you respond to it rather than deleting it and reporting it as a junk text message, the fraudster starts engaging you, they start grooming you, and all of a sudden you are locked into the beginnings of a romance scam with that bad guy.

So, they adapt not just in the scale of devices, but also in the speed at which they send these things out. They throttle it down and they change the language, which makes it really difficult to detect that it's a bad guy using a phone trying to scam me.

Sean Neary: And this also comes down to that end user, right? Because we have spent a lot of this conversation talking about us as institutions fighting against this adversary. The one consistent element here is the customers: the cardholders, the end users, those of us on the end of that mobile phone. And I don't know about you, but there has been a huge change in the end consumer, again thanks to the availability of digital-age technology: the expectation of instantaneous gratification from shopping or buying. But you mentioned scams, and there's only so much you can do technically from a scam perspective when the person being scammed is a human; it really comes down to education.

Jerry Tylman: Yeah, it's a tricky situation. But scams are interesting. I love this topic because scams are this… I call it the intersection of psychology and technology, right? And people don't fall for scams because they are stupid. People fall for scams because they’re humans. And these psychological factors in play in scams are what make them so effective. These psychological factors are like curiosity and scarcity and authority, greed and urgency…

Sean Neary: And that winning right? Feeling like you are getting a good deal. You feel like you are winning.

Jerry Tylman: Yeah, exactly. That's greed, right? And so they come into play, and I've fallen for these, right? I had a situation where I got a scam text from the toll road company about a recent toll. And it said, “hey, make sure you pay the $12.47 before Friday. Otherwise, you are going to get a $50 late fee”. And what is that? That's authority: it looked like the text came from the toll road company. And it's urgency: pay before Friday, because otherwise you'll get a $50 late fee. And it was also convenience: the technology was just “click here” and I'll go to where I have to pay.

So, I didn't even have to get off the couch. I just had to just sit on the couch and pay the bill. And I went in there and I gave them all of my information except my social security number. And then I gave them my credit card information and I clicked enter and then literally two seconds later, I'm like, what did I just do?

Sean Neary: And it's crazy how you immediately knew. But in the moment, being a human, you wanted to quickly get it off your to do list. It's actually a regular item that you do. It was just coincidence, right? I had the same thing when trying to pay tax bills. It just happens to be a coincidence that I was waiting for communication to come back. And it's that immediate, fast, “get it off my to do list” rather than sit back, double check, really look at the originating –  

Jerry Tylman: That's what I did. And so that was just a human behavior tied to three psychological factors, right? That made it really good. And I looked at that again and I'm like, “that was pretty clever”. That was good. And that toll road scam, that's being done in every state in the United States right now. It's probably happening all over Europe.

Sean Neary: Oh, definitely.

Jerry Tylman: So, that's a pretty clever one. And wouldn't it have been better, maybe, from an education perspective, if that scam text message had actually been sent by a good guy? If I had clicked on that link, it would have said something like “you might've clicked on a phishing link, you better be more careful next time”. And what's interesting is that in corporate America, we do those tests with our employees every single day.

And there's this whole concept of friendly phishing, where we send our corporate employees these phishing messages to test them. It's a very effective way of testing them. It's classical conditioning, right? It's learning by doing. So, the first time they get one of these really clever scams combining authority, urgency, and convenience, they are not getting it from a bad guy; they are getting it from a good guy who's testing them.

And I think that's a paradigm shift that's going to be really, really hard for people inside financial institutions to think about: should I scam my customers as a way of educating them? It's going to be a difficult conversation, but eventually, I think we are going to get there, because quantitatively the evidence says the current methods are not working: the losses just continue to grow every year.

Søren Winge: Jerry, leveraging the same methods, if you will, that corporates use internally with friendly phishing, that could actually be a tool for the banks to use towards their customers, rather than the classical information campaigns, which apparently are not working to the extent that they hoped for.

Jerry Tylman: The reason we don't pay attention to the current educational messages, where you log onto a website and it says beware of scammers, is that you are not going to your bank to be educated about scams; you're going to your bank to pay a bill or to check the balance. You have a task. That's why you are there, right? And there's another psychological principle called selective attention, which essentially says that we filter out noise. So that message educating you about scams, beware of scams, right, is just noise, because I'm trying to complete my task. What we have to do is look back at what the effective ways of training people are, and use those. It's a little bit daunting to think about sending a scam message to your customer, but that's really the best way that they are going to learn.

Søren Winge: So, Sean, maybe you can explain: at Nets/Nexi, you are serving a number of banks across Europe in terms of fraud detection and fraud management. How are you leveraging the insights you might get around one bank, or around a certain situation you identify in one country, and sharing them for other banks to benefit from?

Sean Neary: Yeah, it's a good question. When you look at what's happening in a specific market or a specific country, there are many variables you have to consider that might not be the same in a different country. You have to know the ins and outs of your customers. And you have to layer, that's the other part: one system will not do it for you. It will not be able to meet all your needs, and if you try to put all your changes into that one system, you will see a very slow rate of change, with your capability to change limited by your backlog becoming huge.

So, what you have to do is layer it. You layer it with external research and data sharing between banks, different entities and general domains, so you take that information and bring it in. You then take actual data from your own systems, and you write rules, physical rules. People might say it's old school, but I don't see rules disappearing for a very long time. They are there to manage a strategy and a balance, and they are there for fast adaptability, because while you have AI and machine learning, which can be your second layer of defense, at least from a detection perspective, the rate of change is slower: you have to retrain the model. And you have to layer on top of that: what are your customer education strategies? What are your operational defenses in the call centers, where fraudsters try to phone up and fish information out of the bank themselves? What are your authentication strategies for the customer? How have you applied them within your 3-D Secure channels? Are you sharing data between the different aspects of the user journey, when they make a payment and when they move money, because those go through different systems? Are those systems connected, and if so, how? And how are you utilizing what we in the industry call “signals”, that is, identifiers of fraud?

Jerry Tylman: The one thing I would add, where I really think AI can help, is increasing the size of the dataset to include the other financial institution involved in the transaction. When you think about scammers, you have a lot of customers being scammed by, say, the same gang or the same person, but they are at 50 different banks. A lot of that money, though, is finding its way to one or two bank accounts on the other side.

And so, if you add visibility into both who's sending the money and who's receiving the money, then you might be able to do a better job of spotting the scam, because if 50 people are all sending $12.47, take my toll road example, all that money's going to some bank account over here.

You could then say: ah, everybody who just sent money to that bank account, there are 50 different accounts out there, this is a scam. So if you can somehow see both sides of that payment equation, you could instantly see that this is a scam playing out.

And so, it's interesting: most banks only have visibility into what their customer is doing and where they are sending money. Maybe if ten customers from their bank all sent to the same person, they should be able to spot that. But if you had information from the other side, and the receiving side was alerting all the sending banks to all these incoming transactions, you might have better visibility across the industry into what's going on with that particular scam.

So, the scale of being able to collect more data or have more insight is where AI is really going to be leveraged because then we are going to be able to spot things a lot faster.
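The beneficiary-side pattern Jerry describes, many unrelated senders paying one collecting account the same amount, can be sketched as a simple aggregation. The account names, amounts, and threshold here are invented for illustration:

```python
from collections import defaultdict

def find_scam_beneficiaries(payments: list[tuple[str, str, float]],
                            min_senders: int = 10) -> set[str]:
    """payments: (sender_account, beneficiary_account, amount).
    Flag beneficiaries that receive the same amount from many
    distinct senders, e.g. a mule account collecting a scam fee."""
    senders_by_target: dict[tuple[str, float], set[str]] = defaultdict(set)
    for sender, beneficiary, amount in payments:
        senders_by_target[(beneficiary, amount)].add(sender)
    return {beneficiary
            for (beneficiary, _amount), senders in senders_by_target.items()
            if len(senders) >= min_senders}

# 50 victims each sending the $12.47 "toll fee" to one mule account,
# plus one ordinary unrelated payment:
payments = [(f"victim-{i}", "mule-001", 12.47) for i in range(50)]
payments.append(("alice", "grocer-77", 45.20))
print(find_scam_beneficiaries(payments))  # {'mule-001'}
```

Each sending bank in isolation sees only a handful of these payments; only a dataset that spans both sides of the transaction makes the aggregation possible, which is Jerry's point about where the scale of AI pays off.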

Sean Neary: Yeah, I agree. And if you had pitched that to me maybe five years ago, I would have said: I don't have an unlimited budget to create such a huge dataset, maintain it, and run it. But luckily, we are seeing this technology become more of a commodity, readily available at a low cost for us to use in this space as well. And we are going to see that grow even further and even faster, I think, from what you are seeing in the market and its adoption.

Jerry Tylman: Because when you think about it today, your system might be able to detect that this is probably a scam. So, what do we do? We call the customer and say, “hey Jerry, did you mean to send money to the toll road company? Because we think it's a scam”. And I'm like, “yeah, yeah, I meant to send that, it's legit”.

But if you said, “Jerry, we have determined on the other end that you just sent money to a scammer”, that's a different conversation. And so, a lot of times what's happening is banks are actually picking up on the anomalous behavior, but when they talk to the customer, the customer is convinced that, yeah, this is legit.

And so, you are like, okay, it's your money, go ahead, right? But if you can see all of this, then it's a different conversation with the customer. So, you caught it. And maybe what you do in that situation is say: I'm not going to let you send money, because I know that's a scammer on the other side.

And you block the transaction, and you block the beneficiary, and just say: look, you are on our negative list now. Your strategies will adapt based on the richness of the dataset and your ability to drill into it using AI tools.

Søren Winge: So, I guess a key takeaway of today's conversation, Jerry and Sean, is that the more data we have and the more insights we can include, the more we increase our ability as fraud monitors, or fraud detectors, to identify and stop these types of scams quickly, and maybe also, in terms of our rule setting, to identify them the next time they happen.

So, getting this broad input of information, adding more pieces to the puzzle, so to speak, will enable both the banks and their providers to pick up on these things quickly. The banks will usually only be able to react to these, and the question is how quickly they can close the gap. How quickly can they react, so it doesn't continue on to another bank in the domain?

Jerry Tylman: Yeah. And for me personally, the big paradigm shift is not always being reactive, but adding that proactive category and trying to get ahead of this.

Søren Winge: Yeah, because feeding your machine with a lot of transaction data might enable even the fraud prevention part to react very quickly, maybe even in real time, leveraging AI to detect it at the very beginning, right?

So, a great conversation! It would be interesting to hear: what are the key takeaways that you feel we should call out, summing up our conversation today?

Jerry Tylman: Yeah, I would say: have a reactive capability, where you learn from what went wrong and where the losses were, but also add that proactive capability. So, don't always let the fraud come to you; constantly test all of these different layers, because layers add complexity and complexity leads to gaps, right?

And find out where those gaps are, because that's where the fraudsters are going to be focusing too. So, have a proactive capability that meshes well with your reactive capabilities. I think that does a really good job of spotting the weaknesses before the bad guys get there, and that will hopefully protect customers and data, and obviously reduce the amount of losses that financial institutions have to deal with.

Søren Winge: And I think this aspect of AI is also a very important lever to activate those layers we talked about earlier, right?

Anyway, this is something we'll address in the next episode, where we'll be joined by Troels Jensen, Director of NextGen Operations in KPMG Denmark, and Alberto Danese, who is part of the data science team at Nexi.

We're going to bust a few myths around AI in fraud and explore what it really means for you.

In the meantime, please visit nexigroup.com for more information on combating fraud. You can also connect with us on LinkedIn at Nexi Group. And of course you can also connect with our guests throughout the series.

The podcast is available on Apple Podcasts, Spotify, and indeed anywhere you usually get your podcasts. So, please like and subscribe and the next episode will be delivered straight to your device. Thanks for listening and join us again next time as we get to grips with the word on everybody’s lips: AI. 

Nexi