Yes, I would definitely put Arva in that category.
For now, we limit the amount of decisioning done by an LLM and keep as much of the business logic as we can concrete in code. The LLM is mostly used to extract information from documents, crawl websites, and identify specific fraud signals.
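A minimal sketch of that split, with all field and function names hypothetical: the LLM is confined to extraction, and the approve/review/reject decision stays in plain, auditable code.

```python
from dataclasses import dataclass


@dataclass
class ExtractedDoc:
    # Fields an LLM might pull out of, say, an incorporation document.
    business_name: str
    registration_number: str
    fraud_signals: list[str]


def decide(doc: ExtractedDoc) -> str:
    """Concrete business logic -- no LLM involved in the decision itself."""
    if doc.fraud_signals:
        return "manual_review"
    if not doc.registration_number:
        return "reject"
    return "approve"
```

The point of the split is that the decision function is deterministic and testable, so the LLM's only failure mode is bad extraction, which is easier to spot-check.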
So what you're saying is that you could've done this startup a decade ago without LLMs, using traditional NLP and ML techniques. Or even just with straight-up procedural code, OCR, and a rules engine, especially since, as you say, everything you're dealing with is highly structured.
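For what it's worth, the rules-engine half of that really is a decade-old pattern: ordered named predicates over OCR'd fields. A toy sketch (rule names and the sanctioned-country set are made up for illustration):

```python
# Toy KYC rules engine: each rule is a (name, predicate) pair run
# over a dict of fields extracted by OCR or manual entry.
RULES = [
    ("missing_registration", lambda app: not app.get("registration_number")),
    ("sanctioned_country", lambda app: app.get("country") in {"XX"}),
]


def screen(application: dict) -> list[str]:
    """Return the names of every rule the application trips."""
    return [name for name, pred in RULES if pred(application)]
```

An empty result means the application passes screening; anything else routes to review.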
I work at a bank and everything you mentioned was solved many, many years ago.
So the more interesting question is why fintechs are still using manual techniques despite having the capability to automate them.
Fintechs often still have humans review documents and websites, perform web due diligence, and so on. Efficacy at these validation steps has vastly improved with the assistance of LLMs.
Interesting to hear that your bank has already automated all low/medium-risk cases. From what we've seen, more traditional banks are far behind fintechs and more risk-averse about adopting new technologies. Nice to see that's not the case at all traditional banks.
> Efficacy has vastly improved at these validation steps with the assistance of LLMs.
Is that "efficacy" as the (customer-hostile) bank defines it, or is this a more holistic interpretation that also factors in false positives?
i.e. can you assert that things are better now for everyone, including the completely innocent people who often get caught-up in Kafkaesque KYC ("KKYC?") loops?
Yup exactly that! One of the benefits of what we're building is that fintechs/banks can now approve good customers quicker. So the innocent ones benefit greatly from Arva.
What about people who get denied because they were flagged by some automated screening? They end up stuck in limbo: because they were flagged, they can't even (for example) close out and withdraw any other accounts they have with the same institution, and they can't get any help because of the "we-can't-tell-you-how-to-evade-KYC" rules.
Decision-making is also more accurate: human analysts often deviate from procedure. That's also why banks/fintechs often spend so much on QA teams just to check how the human analysts have performed.
Transaction monitoring is different, that's post account opening.
When people are going through an onboarding flow it is their first account!