Seattle Office Holds First Public Event: Custom Machine Learning Models at Scale

Matt Green
April 21, 2017

On April 19, 2017, the newly opened Seattle Sift Science Research and Development office hosted its first public speaking event: Alex Paino, Keren Gu, and Jacob Burnim presented at an Applied Machine Learning at Scale Meetup. It was a “sold-out” event, with all 130 Meetup RSVPs claimed days in advance.

The abstracts from the event and the video are below.

If you are in the Seattle area and want to attend a future Sift Science talk, join the Applied Machine Learning at Scale Meetup group.

The Abstracts from the Event:

Talk 1, Jacob Burnim: Feature Engineering in Practice

Abstract: Sift Science protects thousands of different businesses from all kinds of fraud and abuse, from a stolen credit card used to buy an airline ticket or a digital game, to a fake apartment or job listing, a fraudulent money transfer, or abuse of a referral program. A key challenge in building a machine learning system that detects all of these diverse kinds of fraud and abuse is feature extraction: how to derive the most useful signals from all of the different kinds of raw data sent to Sift Science. In this talk, I will discuss Sift Science’s approach to feature extraction and how our feature extraction system fits in with the rest of our machine learning platform.
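The talk, not this post, covers how Sift Science’s feature extraction actually works; as a rough illustration of the idea of deriving signals from raw event data, here is a minimal sketch. The field names (e.g. "$create_order", "billing_address") and the derived features are illustrative assumptions, not Sift Science’s real schema.

```python
# Hypothetical sketch of deriving a few signals from a raw event payload.
# Field names and features are illustrative, not Sift Science's actual schema.

def extract_features(event: dict, user_history: list[dict]) -> dict:
    """Derive simple signals from one raw event plus the user's prior events."""
    features = {}

    # Mismatch between billing and shipping country is a classic fraud signal.
    billing = event.get("billing_address", {}).get("country")
    shipping = event.get("shipping_address", {}).get("country")
    features["billing_shipping_country_mismatch"] = int(
        billing is not None and shipping is not None and billing != shipping
    )

    # Velocity-style feature: orders placed by this user in the last hour.
    one_hour_ago = event["time"] - 3600
    features["orders_last_hour"] = sum(
        1 for e in user_history
        if e.get("type") == "$create_order" and e["time"] >= one_hour_ago
    )

    # Simple email heuristic: digits in the local part of the address.
    email = event.get("user_email", "")
    features["email_digit_count"] = sum(c.isdigit() for c in email.split("@")[0])

    return features
```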

Talk 2, Alex Paino: Life of a Machine Learning Experiment

Abstract: At Sift Science, we are constantly running machine learning experiments aimed at improving the accuracy of our many abuse prevention products. In this talk, we’ll describe the lifecycle of these experiments, from the inception of an idea through to the deployment of a modeling change. Specifically, we’ll discuss how we come up with ideas for experiments, conduct experiments offline with minimal bias, and analyze the results of these experiments in order to arrive at a launch/no-launch decision. These last two items are particularly gnarly problems for us, as they require us to faithfully simulate an online environment and to efficiently analyze an experiment over many thousands of models, respectively. As part of this talk, we will also cover the tools we have built that make it easy for everyone on the team to conduct valid, reproducible experiments.
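The abstract does not describe Sift Science’s experiment tooling in detail, so the following is only a sketch of the kind of offline comparison it alludes to: scoring held-out data with a control model and a candidate model for each customer, then summarizing the per-customer accuracy deltas. The `per_customer_data` layout and the models’ `score()` method are assumptions for illustration.

```python
# Illustrative offline model comparison across many per-customer models.
# Data layout and model interface are assumptions, not Sift Science's tooling.

from sklearn.metrics import roc_auc_score

def compare_models(per_customer_data, control_model, candidate_model):
    """per_customer_data maps customer_id -> (events, labels);
    each model exposes a score(events) method returning per-event scores."""
    deltas = {}
    for customer_id, (events, labels) in per_customer_data.items():
        control_auc = roc_auc_score(labels, control_model.score(events))
        candidate_auc = roc_auc_score(labels, candidate_model.score(events))
        deltas[customer_id] = candidate_auc - control_auc

    improved = sum(1 for d in deltas.values() if d > 0)
    mean_delta = sum(deltas.values()) / len(deltas)
    print(f"Candidate improved AUC for {improved}/{len(deltas)} customers; "
          f"mean delta = {mean_delta:.4f}")
    return deltas
```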

Talk 3, Keren Gu: Does Your Model Have a Safety Net?

Abstract: When building machine learning models for businesses, small shifts in a model’s score distribution can have a large revenue impact regardless of any change in accuracy, making the stability of our predictions as important as their accuracy. At Sift Science, we take model stability very seriously: hundreds of businesses rely on our predictions to make automated decisions with direct bottom-line impact. In this talk, we will take a look inside Safety Net, the system we built to monitor the stability of thousands of models for our customers.
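The abstract does not spell out how Safety Net works internally. As one common way to quantify a shift in a score distribution, here is a minimal Population Stability Index (PSI) sketch comparing a baseline sample of scores against a recent one; the bin count and alert threshold are rule-of-thumb assumptions, not Sift Science’s settings.

```python
# Minimal PSI sketch for monitoring drift in a model's score distribution.
# Bin count and threshold are common rules of thumb, not Sift Science's values.

import numpy as np

def population_stability_index(baseline_scores, recent_scores, bins=10):
    """PSI over equal-width bins of the [0, 1] score range."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    baseline_counts, _ = np.histogram(baseline_scores, bins=edges)
    recent_counts, _ = np.histogram(recent_scores, bins=edges)

    # Convert counts to proportions, with a small floor to avoid log(0).
    baseline_frac = np.maximum(baseline_counts / len(baseline_scores), 1e-6)
    recent_frac = np.maximum(recent_counts / len(recent_scores), 1e-6)

    return float(np.sum((recent_frac - baseline_frac) *
                        np.log(recent_frac / baseline_frac)))

# Example alerting rule: a PSI above ~0.25 would flag the score
# distribution as unstable and worth investigating.
```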

Video:
