What does the team do?
The Applications Team bridges the gap between our machine learning at scale and our customers' needs. By combining a strong sense of what is right for the customer with knowledge of big data and distributed systems, we provide a seamless way for our customers to make decisions, surface insights, and navigate the ever-changing world of fraud fighting without needing to understand the complex technology underneath.
Running MapReduce over hundreds of Hadoop nodes, designing HBase tables for terabytes of data, and building real-time, high-scale, fault-tolerant distributed systems are all everyday parts of the job. We also need to always be thinking about our customers: analysts under constant assault by fraudsters who need all the help they can get. That means creating a beautifully designed web application that distills machine learning signals into visualizations for quick decision-making; building automation tools that hide a multi-service architecture handling an endless event stream behind an elegant user interface; and taking that firehose of events, making it searchable, and providing analytics tooling that makes the data understandable, actionable, and usable.
As the breadth of our customer base and the size of our data grows, and as new threats to everyone's online safety and security appear, we will continue to face greater product and technology challenges as we help make the internet a better place.
What products does the team own?
• Workflows is an automation platform that empowers fraud managers to adapt their business logic to changing fraud conditions without involving their own engineering team. Because our customers rely on Workflows in the critical path of their systems, it needs to provide low latency, high availability, and “exactly once” notification behavior. On the frontend, there is also the user experience challenge of creating an interface that works for a large variety of customers.
• Our customers spend hours every day looking at our console, so we want to make it both efficient and attractive. It allows customers to search over users and orders, and provides ML explainability visualizations to show why we find a user risky. It also provides a manual review queue application that allows multiple analysts to work efficiently through the same queues at once.
• Although not visible to customers, we also build and maintain all the REST APIs that power the console. This involves designing a coherent data model that can elegantly abstract away system internals into higher-level concepts. Searching and reporting capabilities also require that we maintain different data stores for different purposes (search cluster, relational databases, NoSQL databases) and keep them consistent.
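The “exactly once” notification behavior mentioned above usually comes down to idempotent delivery: if the same event arrives twice, only the first attempt should trigger a notification. A minimal, stdlib-only sketch of that idea (class and method names are illustrative, not Sift's actual code; a production version would keep the seen-ID set in a durable store such as HBase or PostgreSQL rather than in memory):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: effectively-once delivery by deduplicating
// on a unique event ID before performing the side effect.
public class NotificationDeduper {
    private final Map<String, Boolean> delivered = new ConcurrentHashMap<>();

    /** Returns true if the notification was sent, false if it was a duplicate. */
    public boolean deliverOnce(String eventId, Runnable send) {
        // putIfAbsent is atomic: only the first caller for a given ID wins,
        // even when multiple consumer threads see the same event.
        if (delivered.putIfAbsent(eventId, Boolean.TRUE) == null) {
            send.run();
            return true;
        }
        return false;
    }
}
```

The same dedup key also helps keep the different data stores behind the APIs consistent, since each store can safely re-apply a replayed event.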
What technologies does the team use?
• Elasticsearch to provide search capabilities for customers to explore their data in real time
• Hystrix to provide failover, ensuring one broken component doesn't become a system-wide problem
• Kafka to stream data between different services and ensure consistency
• We design tables and queries in HBase when our data scale and access patterns fit it
• We'll run MapReduce over our Hadoop cluster to analyze data and move it to where it needs to be
• PostgreSQL when a relational database is required, with partitioning and careful query design to make it scale
• Our Java-based servers use Dropwizard to provide REST APIs to our console
• Our console web application uses React, Backbone, D3, Sass, Jasmine, Webpack and other tools to create a responsive, dynamic user interface
• We are well-versed in the tools available in AWS, making tradeoffs between putting them to work and building things ourselves
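The failover idea behind Hystrix can be approximated in a few lines of plain Java: run the primary call, and return a degraded result if it fails, so one broken dependency doesn't take down the whole request. This is a simplified sketch, not Hystrix's actual API:

```java
import java.util.function.Supplier;

// Stdlib-only sketch of fallback-on-failure. A real circuit breaker
// like Hystrix also tracks failure rates and short-circuits calls
// while the dependency stays unhealthy.
public class Fallback {
    public static <T> T withFallback(Supplier<T> primary, Supplier<T> fallback) {
        try {
            return primary.get();
        } catch (RuntimeException e) {
            // Serve a cached or default value instead of propagating the error.
            return fallback.get();
        }
    }
}
```

For example, a console endpoint might serve a cached risk score when the live scoring service is briefly unreachable, rather than failing the whole page.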
What is the team like?
We like to think we're the funnest engineering team at Sift... actually, we're pretty sure we are. Our customers love our console, and we are always thinking of how to make our product better for them.
We have a very diverse set of skills: a full-time product manager, frontend-focused engineers, backend-focused engineers, and some who do both. We work closely with our product designers and stay in constant contact with the sales and support teams to understand customer needs, and we attend customer meetings ourselves. At the same time, we work alongside Sift's infrastructure-focused and ML-focused teams to ensure our system works harmoniously.
We plan against quarterly milestones, but tackle each project in two-week sprints. We're spread between San Francisco and Seattle, and try to do team social outings when we're all in one place. There's always a mix of whiteboard collaboration and headphones-on, heads-down coding. Building the right thing for customers is our bread and butter.
Arjun is interested in distributed systems, soccer, and travel. He really likes ice cream, maybe a bit too much. You can typically find Kaoru programming, snowboarding, mountain biking, playing Magic cards, or eating. He's certain he doesn't pronounce his name correctly. Megan enjoys organizing items and learning new things. Prior to Sift, Megan worked in customer support and product management. Noah loves third-wave coffee and mid-shelf bourbons, and can often be found dreaming of biking from one end of Europe to the other. Jacob joins Sift after finishing his PhD on lightweight specifications for parallel software at UC Berkeley. Before Berkeley, he applied machine learning to search ranking at Google. Nick enjoys being in places where he can't speak the language, and lived overseas for 10 years before returning to the US.
Dropwizard and Java
React, Backbone, Sass, Jasmine, Webpack, and Gulp
Tech Lead - Software Engineer
San Francisco • Full-time • Applications
Senior Software Engineer
San Francisco • Full-time • Applications