

  • 08:00


  • 09:30



  • 09:45
    Michael Phelan

    Delivering Data Science: Better, Faster, Cheaper For a New Healthcare CPG With 1.2 Billion Customers

    Michael Phelan - Global MLOps and Data Science Leader - Johnson & Johnson


    The failure of AI and Data Science projects to deliver value for organisations is well documented. A quick online search turns up stark statistics from Gartner, Accenture and many other research groups: 85% of projects don’t deliver value, 71% of companies struggle to scale AI, and only 53% of AI research makes it into production. While MLOps is critical to the scalability of Data Science research, we present the beginning of an evolving holistic journey to deliver Data Science: Better, Faster, Cheaper for a new healthcare CPG with 1.2 billion customers.

    Dr. Michael Phelan is an award-winning, innovative and passionate full-stack AI and digital transformation leader with over 20 years’ experience delivering solutions and developing people and strategies across the healthcare, services, electronics and education sectors. He currently leads a new global MLOps & Data Science team for Johnson & Johnson Consumer Health.

  • 10:15
    Li Rong

    Who Are the MLOps in Yelp: From Prototype to Production

    Li Rong - Software Engineer - Yelp


    Machine Learning is a field that continues to grow, and it is used pretty much everywhere, from image recognition to fraud detection to item recommendations to protein structure prediction. What do these tasks have in common? They all require robust machine learning models that need to be trained, tested and served. In this session we will discuss having a dedicated ops team for this: MLOps. Who are the MLOps in Yelp? How do they differ from DevOps? And how can building a horizontal layer of support across teams, providing infrastructure for experimentation, testing, training, model serving, data storage and everything in between, make a huge difference to the efficiency and quality of life of Machine Learning teams within Yelp? We will dive into the details of how Yelp prototypes ML projects and brings them into production with the help of MLOps to power thousands of small businesses.

    Li is an engineer currently in the Core-ml team (Applied Machine Learning group). Li was training for a career in the dance industry until the pandemic hit, when Li went back to software engineering and specialized in Machine Learning and Artificial Intelligence. Before that, Li worked across the stack at various companies, including Goldman Sachs, Skyscanner and Transferwise, to name a few. Li was also a coding instructor for the non-profit "Code First Girls".

  • 10:45


  • 11:20
    Natalia Koupanou

    Faster Operationalisation of Machine Learning Models with a Feature Store

    Natalia Koupanou - Data Science Manager - Huge Inc


    At Huge, we invest in the development of third-party data products that help our clients grow their brands and businesses. This is possible because of our underlying MLOps infrastructure with a common feature store. In this talk, we will explore how a feature store can be a game changer for your data team to operationalise machine learning models faster and at scale. We will cover the challenges of MLOps, what’s a feature store good for and whether you should build or buy.
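    The core idea is simple enough to sketch: features are written once and any model can read them back at serving time. The class and field names below are a toy illustration under assumed semantics, not Huge's actual stack:

```python
from datetime import datetime, timezone

class FeatureStore:
    """Toy in-memory feature store: write a feature once, reuse it across models.
    Illustrative only -- production stores (e.g. Feast, Tecton) add point-in-time
    correctness, online/offline sync and backfills on top of this idea."""

    def __init__(self):
        # (entity_id, feature_name) -> list of (timestamp, value)
        self._features = {}

    def write(self, entity_id, feature_name, value, ts=None):
        ts = ts or datetime.now(timezone.utc)
        self._features.setdefault((entity_id, feature_name), []).append((ts, value))

    def read_latest(self, entity_id, feature_names):
        """Fetch the most recent value of each feature for online serving."""
        row = {}
        for name in feature_names:
            history = self._features.get((entity_id, name), [])
            row[name] = max(history)[1] if history else None
        return row

store = FeatureStore()
store.write("user-42", "orders_30d", 7)
store.write("user-42", "avg_basket_gbp", 31.5)
print(store.read_latest("user-42", ["orders_30d", "avg_basket_gbp"]))
# -> {'orders_30d': 7, 'avg_basket_gbp': 31.5}
```

    Because every model reads through the same interface, a feature engineered for one project becomes instantly reusable by the next, which is where the speed-up the talk describes comes from.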

    Natalia is an experienced Senior Manager and Lead Data Scientist with a demonstrated history of working in the financial services industry and building established delivery teams. Before starting a career in fintech, Natalia worked in data across a number of tech companies in various sub-sectors such as ecommerce (Zoro) and proptech (Zoopla). She is skilled in strategy, machine learning, data modelling and analytics. Natalia enjoys mentoring and blogs about her technical and personal experiences on Medium (medium.com/@nataliakoupanou).


  • 11:50
    Pavlos Mitsoulis-Ntompos

    APIs: They Matter More Than You Think in Machine Learning

    Pavlos Mitsoulis-Ntompos - MLOps Engineering Manager - King


    There’s been a great deal of work in ML research during the last decade. What we’ve seen in the last three years is a new movement orthogonal to ML: MLOps. MLOps is still in its infancy, but its premise is to create an ML infrastructure that promotes best practices and expedites ML projects. ML practitioners can be seen as chefs; they need the proper tooling to reveal their talent. However, the interface of this tooling is crucial to making it useful to ML practitioners. In this talk, I’ll provide my view on the importance of APIs and interfaces in Machine Learning.
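    As a small illustration of why interfaces matter, here is a minimal, hypothetical estimator protocol; any model that honours the same `fit`/`predict` contract is interchangeable behind one API (the names are assumptions for this sketch, not the speaker's design):

```python
from typing import Protocol, Sequence

class Estimator(Protocol):
    """A deliberately tiny training/serving interface. Tooling written
    against this protocol works with any conforming model."""
    def fit(self, X: Sequence, y: Sequence) -> "Estimator": ...
    def predict(self, X: Sequence) -> list: ...

class MeanBaseline:
    """Trivial conforming model: predicts the training-set mean for every input."""
    def fit(self, X, y):
        self.mean_ = sum(y) / len(y)
        return self

    def predict(self, X):
        return [self.mean_ for _ in X]

model = MeanBaseline().fit([[1], [2], [3]], [10.0, 20.0, 30.0])
print(model.predict([[4], [5]]))  # -> [20.0, 20.0]
```

    Swapping `MeanBaseline` for a gradient-boosted tree or a neural network changes nothing downstream, which is exactly the leverage a good interface buys an ML platform.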

    Pavlos Mitsoulis has 10 years of Machine Learning and Software Engineering experience. Currently, he is an Engineering Manager of ML Ops team at King (part of Activision Blizzard), leading King's central ML Platform. Additionally, he is the creator of Sagify, an open-source library that simplifies training, tuning, evaluating, and deploying ML models to SageMaker.

  • 12:20
    Casper da Costa-Luis

    Painless Cloud Experiments Without Leaving Your IDE

    Casper da Costa-Luis - Product Manager - Iterative


    Full lifecycle management of computing resources (including GPUs and auto-respawning spot instances) from several cloud vendors (AWS, Azure, GCP, K8s)... without needing to be a cloud expert. Move experiments seamlessly between a local laptop, a powerful cloud machine, and your CI/CD of choice.

    Iterative's open source, modular tools aim to deliver the best developer experience for ML teams. This talk will demonstrate how easy it is to use spot instances (with transparent data checkpoint/restore), switch between cloud providers (so no vendor lock-in), and keep costs down (with auto-cleanup of unused resources). This works whether you prefer to experiment in an IDE, a Jupyter Notebook, or even a raw terminal.

    Casper is a scientist at heart with a passion for maximising tooling ease of use (including simplifying installation, testing robustness, minimising breaking API changes, and maintaining impeccable documentation). He champions open source, and maintains projects receiving millions of daily downloads. In addition to 10+ years of experience in industry, he also has strong links to academia — with a PhD in Deep Learning for Medical Imaging (KCL), an MSc in Computing Science & BSc in Physics (both Imperial) — and has taught ML & programming at most universities in London.

  • 12:50


  • 14:00
    Andy McMahon

    Delivering an Enterprise-Scale MLOps Capability to Optimize Time to Value

    Andy McMahon - Head of MLOps - NatWest Group


    As businesses embrace machine learning across their organizations, manual workflows for building, training, and deploying ML models tend to become bottlenecks to innovation. MLOps is helping companies streamline their end-to-end ML lifecycle and boost productivity of data scientists while maintaining high model accuracy and enhancing security and compliance. In this session, we will share some of the capabilities we have built and deployed at NatWest Group in order to accelerate our development processes and deploy valuable solutions to production more robustly, reliably and repeatably.

    Andy is ML Engineering Lead in the Data Science and Innovation team at NatWest Group. He helps develop and implement best practices and capabilities for taking machine learning products to production across the bank. Andy has several years’ experience leading data science and ML teams and speaks and writes extensively on MLOps. He wrote the book “Machine Learning Engineering with Python”, published in 2021 by Packt, and has received several awards and accolades, including ‘Rising Star of the Year’ at the British Data Awards in 2022.


  • 14:30
    Christian Rehm

    How Wayfair Leverages Google Vertex AI Towards MLOps Excellence

    Christian Rehm - Senior Machine Learning Engineer - Wayfair


    At Wayfair, we use machine learning to create experiences that our customers love, from search, personalization and recommendations to delivery-time predictions. Supporting these high-impact use cases at scale requires robust and agile systems. Just as our recent growth necessitated a move from on-prem to public cloud services, it required us to rethink our ML engineering strategy and move from home-grown custom solutions to industry-proven, fully managed cloud-based solutions. As we had already decided to move our core systems to Google Cloud Platform, it was natural to follow suit for our ML platform and tooling, which led us to invest heavily in Vertex AI, Google's end-to-end ML platform. In this talk, I will present our experience onboarding Vertex AI, as well as the tooling and processes we have built to integrate it into our existing systems and support it.

    Christian is a seasoned Machine Learning Engineer with experience in ML research and engineering across the automotive, adtech and e-commerce industries. Most recently, he uses his broad experience to build processes and solutions that bring the latest cutting edge models from ML research to production at Wayfair.

  • 15:00
    Aerin Booth

    Genesis Cloud: Is Dall-E Ethical? The Real-World Impacts of Machine Learning

    Aerin Booth - Cloud Sustainability Advocate - Genesis Cloud


    A hammer is just a tool, but you could use it to make something or break something. Put yourself in the shoes of the scientists who discovered how to turn oil into plastic. Software Engineers and Developers are now at the forefront of how we turn electricity and algorithms into work that might even have a worse impact on the world than single use plastic. Machine Learning & Diffusion algorithms have changed the way we can create art and have given us new tools, like Dall-E, that can lower the barrier to entry so that anyone could change the world and tell stories in art... or it could cause more harm than good.

    But if you do have to build that new model, cloud computing allows us to be more efficient than ever, using intensive compute resources like GPUs that still produce carbon emissions (even if they are offset!). I'll help you understand how you can host things more efficiently, and how you could even send your batch jobs to providers like Genesis Cloud, who offer GPU resources in Iceland and Norway that are 100% powered by renewables!
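    The arithmetic behind such hosting decisions can be sketched in a few lines. All figures below are illustrative assumptions for the sketch, not Genesis Cloud numbers:

```python
def training_emissions_kg(gpu_count, hours, gpu_power_kw=0.3, grid_kg_co2_per_kwh=0.2):
    """Back-of-envelope CO2 estimate for a GPU training job.
    Defaults are rough illustrations: ~300 W drawn per GPU and a grid
    intensity of 0.2 kg CO2/kWh; a renewables-powered region pushes the
    grid factor toward zero."""
    energy_kwh = gpu_count * gpu_power_kw * hours
    return energy_kwh * grid_kg_co2_per_kwh

# 8 GPUs for 24 hours on an average grid vs. a near-renewable grid:
print(training_emissions_kg(8, 24))                           # ~11.5 kg CO2
print(training_emissions_kg(8, 24, grid_kg_co2_per_kwh=0.01)) # ~0.6 kg CO2
```

    The model ignores cooling overhead (PUE) and embodied hardware emissions, but it shows the point of the talk: for the same job, where you run it can dominate the footprint.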

    Aerin Booth is a Cloud Sustainability Advocate, AWS Community Builder, Genesis Cloud collaborator and the founder of Cloud Sustainably. They’re on a mission to get more developers talking about sustainability and green IT. Aerin started their career working with digital teams in the UK government before working their way up to become Head of Cloud at the UK Home Office. While at the Home Office they led the creation of its Cloud Centre of Excellence, training hundreds of staff, introducing automated governance tools and saving the department over £10m over three years through optimisation and reduced usage. To help make businesses more sustainable and ethical, they have recently set up a podcast focused on cloud sustainability, Public Cloud for Public Good.

  • 15:30



  • 16:00
    Massimo Belloni

    There is No Such Thing as MLOps

    Massimo Belloni - Data Science Manager - Bumble


    After the huge hype and investment in data science and machine learning over the last decade, the word "MLOps" is on the lips of every data executive and, therefore, every recruiter trying to sell it as the new shiny thing. But do we really know what it's actually about? Since the 90s, DevOps specialists have been a key part of every solid engineering team, taking care of all the messy and obscure aspects of delivering real things to real people; MLOps isn't much different, working on both the cultural and technical sides of delivering data products at scale. Why do we need a different name and skillset? In this talk, Massimo Belloni (Lead Machine Learning Engineer at Bumble) will give an overview of the current MLOps space, advice and best practices on building an engineering-driven Data Science team, and a lot of random opinions.

    Massimo Belloni is a machine learning engineer and data science manager currently working at Bumble (London) as Machine Learning Engineering Lead. He was previously Team Lead of the Data Engineering team at HousingAnywhere (Rotterdam, NL) and graduated in computer engineering from Politecnico di Milano (Italy). He has quite a broad and random set of interests within and outside the AI space, with a focus on consciousness, the weak vs strong AI debate, and football. In his spare time, he is also a burger and kebab evangelist.

  • 16:30
    Detlef Nauck

    Implementing a Company-Wide Framework for Responsible AI

    Detlef Nauck - Head of AI & Data Science Research - BT


    The BT Group Manifesto explains how our responsible tech principles aim to ensure that while our tech is commercially viable and profitable, it is always for good, accountable, fair and open. When we build AI solutions we follow a company-wide governance framework that helps us to build AI ethically and supports developers in following best practice. The framework is underpinned by BT’s AI research programme and I will present results from our work on fairness evaluation of machine learning models and model monitoring.
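    One common fairness check of the kind such a framework might run is demographic parity. The sketch below is a generic illustration of that metric, not BT's actual implementation:

```python
def demographic_parity_difference(y_pred, groups):
    """Difference in positive-prediction rates between demographic groups.
    y_pred: binary (0/1) model predictions; groups: group label per prediction.
    A value near 0 suggests parity; the acceptable threshold is a policy
    decision, not a universal constant."""
    by_group = {}
    for pred, group in zip(y_pred, groups):
        by_group.setdefault(group, []).append(pred)
    positive_rate = {g: sum(p) / len(p) for g, p in by_group.items()}
    return max(positive_rate.values()) - min(positive_rate.values())

preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
# Group "a" gets positives 75% of the time, group "b" only 25%:
print(demographic_parity_difference(preds, groups))  # -> 0.5
```

    Wired into model monitoring, a metric like this can flag drift in fairness over time, not just at sign-off.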

    Detlef Nauck is the Head of AI & Data Science Research for BT’s Applied Research Division located at Adastral Park, Ipswich, UK. Detlef has over 30 years of experience in AI and Machine Learning and leads a programme spanning the work of a large team of international researchers who develop capabilities underpinning modern AI systems. A key part of Detlef’s work is to establish best practices in Data Science and Machine Learning for conducting data analytics professionally and responsibly. Detlef has a keen interest in AI Ethics and Explainable AI to tackle bias and to increase transparency and accountability in AI. Detlef is a computer scientist by training and holds a PhD and a Postdoctoral Degree (Habilitation) in Machine Learning and Data Analytics. He is a Visiting Professor at Bournemouth University and has published 3 books, over 120 papers, and holds over 20 AI patents.

  • 17:00


  • 18:00

    END OF DAY 1


  • 08:00


  • 09:30



  • 09:40
    Ghida Ibrahim

    An Intro To AIOps: How To Scale IT Operations With AI

    Ghida Ibrahim - Lead Quantitative Engineer - Meta


    In this talk, the speaker covers how to leverage AI and quantitative techniques to manage and optimize IT operations at scale. In particular, they will explain how techniques like time series forecasting, operations research, and statistical and causal inference can be leveraged to optimize infrastructure investments and resource allocation, enable predictive maintenance, and allow building infrastructure that is more aware of the needs of products such as video, real-time calls and the metaverse.
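    As a minimal sketch of the first of those techniques, here is a least-squares trend extrapolation for capacity planning; it is a stand-in for the far richer forecasting the talk covers, and the demand numbers are invented:

```python
def linear_forecast(series, horizon):
    """Fit an ordinary least-squares trend line to a series and extrapolate
    `horizon` steps ahead -- the simplest useful capacity-planning forecast."""
    n = len(series)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(series) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, series)) / \
            sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    return [intercept + slope * (n + h) for h in range(horizon)]

# Weekly compute demand trending upward -> forecast the next two weeks
print(linear_forecast([100, 110, 120, 130], 2))  # -> [140.0, 150.0]
```

    Real infrastructure forecasting layers seasonality, uncertainty intervals and causal adjustments on top, but the shape of the problem (fit history, project forward, provision ahead of demand) is the same.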

    Ghida is a lead quantitative engineer at Meta (previously Facebook), where she designs and builds quantitative systems to help scale and optimize Meta's internal cloud and edge infrastructure, used to serve billions of people across Meta's family of apps and products. Prior to joining Meta, Ghida worked for 6+ years in the telco and media industries in multiple analytics and engineering roles, mainly focusing on optimizing large-scale distributed systems. She holds a PhD and a master’s (Diplôme d’Ingénieur) in computer engineering from Institut Polytechnique de Paris.

    Ghida also teaches a course on using AI for scaling IT operations at the university of Oxford. She is a TEDx speaker and an Expert of the World Economic Forum, providing expertise on the future of computing. In the past, Ghida prepared and delivered the first online course on data science in Arabic attracting 30k+ learners, and built an award-winning platform for connecting refugees to opportunities, among others.


  • 10:05
    Tom Steenbergen

    Prediction Archiving at Picnic Using Kafka and Snowpipe

    Tom Steenbergen - ML Platform Lead - Picnic Technologies


    Logging and storing predictions is crucial for developing and maintaining machine learning models. It provides an invaluable feedback loop for investigating where model performance can be improved. More importantly, it lets you go back and see which prediction was made by which model and features. In a time when more and more decisions are made by models, prediction archiving is a critical part of your machine learning platform. At Picnic we built an in-house solution to archive each and every prediction made, both by batch processes and by real-time services. In this talk we will present our solution, which uses Kafka and Snowpipe to ingest all predictions into Snowflake, our analytical data warehouse.
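    A minimal sketch of what one archived prediction might look like in such a pipeline. The field names are assumptions for illustration, not Picnic's actual schema, and the Kafka produce call itself is left out:

```python
import json
import uuid
from datetime import datetime, timezone

def archive_record(model_name, model_version, features, prediction):
    """Build the archive payload for a single prediction. In a pipeline like
    the one described, this JSON would be produced to a Kafka topic and
    auto-ingested into Snowflake by Snowpipe."""
    return {
        "prediction_id": str(uuid.uuid4()),
        "model_name": model_name,
        "model_version": model_version,   # lets you trace which model decided
        "features": features,             # exact inputs, for replay/debugging
        "prediction": prediction,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

record = archive_record("delivery-eta", "1.4.2", {"distance_km": 3.2}, 17.5)
payload = json.dumps(record)  # what a Kafka producer would send as the message value
```

    Storing the model version and the exact feature values alongside the prediction is what makes the "which model and features produced this decision?" question answerable months later.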

    Tom Steenbergen is the ML Platform Lead at Picnic. Together with his team members, they are responsible for supporting data scientists working across the machine learning lifecycle. They build and provide the infrastructure and tooling used by all the machine learning models that are running in production at Picnic. Previously at Picnic, Tom worked as a full-stack Data Scientist building machine learning models in a variety of domains, ranging from time-series forecasting to natural language processing.

  • 10:30
    Harpal Sahota

    The Journey of MLOps At MATCHESFASHION

    Harpal Sahota - Lead Data Scientist - MATCHESFASHION


    MLOps at MATCHESFASHION started to gain traction approximately two years ago. Fast-forward those two years and we’re developing our own bespoke framework to monitor data/model drift, reproducing exact datasets using CDC/delta, and we now have several models in production serving customers in real time. This is just scratching the surface of what we’re currently working on in the MLOps space. This talk will walk you through the conception of the framework, its uses, and how MLOps is practised within the company.

    Harpal started working as a Lead Data Scientist at MATCHESFASHION in July 2020. As a Lead, he was involved in building the Data Science team from scratch, educating stakeholders on the value of Data Science, and was a key contributor to the digital transformation of business operations utilising Machine Learning and Artificial Intelligence. He has a PhD in Computational Biology and 10 years’ experience in solving complex business problems with innovative data solutions.

  • 10:55


  • 11:25
    Dejan Golubovic

    Deploying and Managing a Machine Learning Platform with Kubeflow at CERN

    Dejan Golubovic - Software Engineer, MLOps - CERN


    CERN is a particle physics research organisation based in Geneva, Switzerland, whose mission is to advance human knowledge by understanding the nature of the universe. CERN operates the Large Hadron Collider (LHC), the world’s largest particle accelerator, measuring 27 km in circumference. The LHC accelerates beams of particles in opposite directions to almost the speed of light before making them collide. Collisions generate short-lived particles that are otherwise impossible to detect and observe in nature. Machine learning has been growing as a solution for challenges in different areas of development and operations at CERN. Areas where ML is being applied include particle classification using graph neural networks, 3D GANs for faster generation of simulation data, and reinforcement learning for beam calibration. In this talk, we will present a centralised machine learning service based on Kubeflow that has been deployed at CERN with the goal of improving overall resource usage and providing researchers with a better user experience. We will discuss the open-source technology stack for deployment and management of the service on our private cloud, with the option of expanding to public clouds, as well as its integration with other CERN services. We will present examples of ML in high-energy physics and talk about how Kubeflow facilitates workload scaling.

    Dejan Golubovic is a software engineer working in the CERN IT department. Dejan works on the CERN private cloud machine learning infrastructure with Kubernetes and Kubeflow. His primary responsibilities include the upkeep of the ML platform, collaboration with scientific users, and the development of Kubeflow features. Interests include containerised applications, Python programming, and applying technology to socially and scientifically impactful projects. Prior to joining CERN IT, Dejan worked at the CMS experiment where he developed deep learning models for particle reconstruction. Previously, he developed software for the automotive and retail industries. He holds a Master's degree from the University of Belgrade.

  • 11:50
    Ioannis Mesionis

    EasyJet's Holistic Data Science Operations Model - From Ideation to Scaled Product Delivery

    Ioannis Mesionis - Lead Data Scientist - EasyJet


    The data scientist profession has been dubbed "The Sexiest Job of the Twenty-First Century" by Harvard Business Review. While some businesses thrive by capitalizing on data's transformative power, many more succumb to the hype because they fail to manage and expand their data strategy. Enterprises have acknowledged the mismatch of stakeholders as the driving force behind poor performance and lack of impact; such issues can be traced directly back to process flaws and organizational structure, which result in long model delivery periods with little impact.

    The EasyJet Operating Model addresses the lack of understanding of end-to-end processes for implementing data science solutions, and bridges the gap between accelerating the data science discipline and the absence of some of the required stakeholders and governance that would allow a proactive rather than reactive approach. By fostering a collaborative environment between business and data science, EasyJet was able to take a rigorous approach to iterative data science initiatives. The EasyJet Data Science Ops Model comprises end-to-end documentation describing a timeline of procedures and actions to be carried out, allowing key stakeholders to influence data product development in order to generate incremental business value.

    While transitioning from a "lean" start-up to a mature data-driven organisation might take time, the EasyJet Data Science Ops Model accelerates the process and guides numerous teams through the transformative "marathon", generating momentum toward the goal of a strong data-driven company where data solutions can be produced more quickly and with greater scalability. Working with numerous stakeholder groups over time enables EasyJet to pursue its goal of becoming the world's most data-driven airline. The Ops Model reveals areas where the aim is being missed and provides an honest appraisal, allowing EasyJet to take solid steps to improve the system. Its efficiency rests on the involvement of all necessary stakeholders, while ensuring the necessary support and governance are in place to guarantee that it is suitable for its purpose, from both a functional and a non-functional perspective.

    Ioannis has worked as an ambitious data scientist and a trusted member of EasyJet's Data Science & Analytics community since 2019. In his current role as a Lead Data Scientist, he is on a mission to support EasyJet in reaching its ambition of becoming the world's leading data-driven airline. The famous Sherlock Holmes quote—"You see, but you do not observe"—was enough for him to end up holding an M.Sc in Data Science and over three years of experience in the field. Like Sherlock Holmes and John Watson, Ioannis teams up with EasyJet's Digital, Customer & Marketing departments to form an A-team that thinks beyond the obvious and delivers data-driven solutions under uncertainty through effective data products.

  • 12:15
    Neha Patel

    Model Serving at Scale @ ASOS.com

    Neha Patel - Machine Learning Engineer - ASOS


    With a catalogue containing nearly 70,000 items at any given time, our search and recommendations systems are critical to surfacing the right product to the right customer at the right time. We leverage machine learning models at scale to help our customers find their dream product.
    This talk gives an insight into our MLOps maturity journey. We’ll give an overview of Triton Inference Server for powering Search and Recommendations at scale, exploring how it works and the challenges of applying it across different projects.
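    For context, each model served by Triton carries a `config.pbtxt` in its model repository describing inputs, outputs and batching. The sketch below is illustrative; the model name, backend and dimensions are assumptions, not ASOS's setup:

```protobuf
# config.pbtxt -- minimal, illustrative Triton model configuration
name: "product_ranker"
platform: "onnxruntime_onnx"
max_batch_size: 64
input [
  { name: "features", data_type: TYPE_FP32, dims: [ 128 ] }
]
output [
  { name: "scores", data_type: TYPE_FP32, dims: [ 1 ] }
]
# Coalesce concurrent requests into batches for GPU efficiency
dynamic_batching { max_queue_delay_microseconds: 100 }
```

    Dynamic batching like this is one of the reasons a single Triton deployment can serve high-traffic search and recommendations workloads efficiently.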

    Neha is a Machine Learning Engineer at ASOS.com. She’s a member of the ASOS AI Search & Recommendations team, working on model training and serving at scale. She has a background in machine learning and software engineering, with over 11 years of experience across industries such as e-learning, medical transcription and e-commerce. Neha likes applying machine learning solutions to relevant business problems.

    The AI team at ASOS is a cross-functional team of Machine Learning Engineers and Scientists, Data Scientists and Product Managers, that uses Machine Learning to improve the customer experience, improve retail efficiency and drive growth. 

  • Rick Bruins

    Model Serving at Scale @ ASOS.com

    Rick Bruins - Machine Learning Engineer - ASOS


    With a catalogue containing nearly 70,000 items at any given time, our search and recommendations systems are critical to surfacing the right product to the right customer at the right time. We leverage machine learning models at scale to help our customers find their dream product.
    This talk gives an insight into our MLOps maturity journey. We’ll give an overview of Triton Inference Server for powering Search and Recommendations at scale, exploring how it works and the challenges of applying it across different projects.

    Rick is a Machine Learning Engineer at ASOS.com within ASOS AI Search & Recommendations team. He has a background in web development and over four years of experience in working in different industries like sportswear, e-commerce, and aviation. Rick likes designing and building creative solutions to translate ML models into production applications. 

    The AI team at ASOS is a cross-functional team of Machine Learning Engineers and Scientists, Data Scientists and Product Managers, that uses Machine Learning to improve the customer experience, improve retail efficiency and drive growth. 




  • 12:40
    Andrey Velichkevich

    Hiding Kubernetes Complexity for ML Engineers Using Kubeflow

    Andrey Velichkevich - Senior Software Engineer - Apple


    Kubeflow is an open source project to make ML workflows on Kubernetes simple, portable, and scalable. Kubeflow has various components for different stages of ML. For example, Katib for hyperparameter tuning and neural architecture search, Training Operator for distributed model training, KServe for model serving, Kubeflow Pipelines for ML pipelines and ML metadata. Kubeflow leverages Kubernetes infrastructure for each component to make it very easy to scale the ML environment from local development to production.
    Although one of Kubeflow's core goals is to provide a simple MLOps platform for ML Engineers and Data Scientists, it is still non-trivial to use Kubeflow components, especially for users who don’t have much Kubernetes experience. In this talk, we will walk through the major Kubeflow components and how users can utilize them to solve MLOps problems. We will also see how users can run hyperparameter tuning and distributed training using Kubeflow without knowing about Docker and Kubernetes.
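    As an example of the declarative interface Katib exposes, here is a minimal `Experiment` spec for random-search hyperparameter tuning. The names and ranges are illustrative, and the `trialTemplate` that defines the training container is omitted for brevity:

```yaml
# Illustrative Katib Experiment: tune a learning rate without writing
# any Kubernetes controllers yourself.
apiVersion: kubeflow.org/v1beta1
kind: Experiment
metadata:
  name: lr-search
spec:
  objective:
    type: maximize
    objectiveMetricName: accuracy
  algorithm:
    algorithmName: random
  maxTrialCount: 12
  parallelTrialCount: 3
  parameters:
    - name: lr
      parameterType: double
      feasibleSpace:
        min: "0.0001"
        max: "0.1"
```

    Katib's controller turns this spec into trial pods, tracks each trial's reported `accuracy`, and surfaces the best parameters, which is the "hiding Kubernetes complexity" the talk refers to.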

    Andrey Velichkevich is a Senior Software Engineer at Apple and a contributor to the Kubeflow open-source project. He is a co-chair of the AutoML and Training working groups, for which he hosts Kubeflow community meetings, organises community webinars and writes blog posts. Andrey helps the community drive the CI/CD infrastructure and contributes to the ML benchmark system for Kubeflow.

  • 13:05



  • 14:05
    Anna FitzMaurice

    Personalisation at Scale Across the BBC

    Anna FitzMaurice - Senior Architect - BBC


    In the BBC Datalab, our mission is to use responsible machine learning to help audience members find the BBC content that is most relevant to them. We currently work across BBC News, Sport, World Service, Sounds, and iPlayer to deliver high quality onward journeys using machine-driven recommendations. As we scale out across the BBC, an important part of this work is reducing the time, effort, and cost required to make the BBC relevant for each audience member. In this talk, I will discuss the tools and framework that BBC Datalab uses for the continuous training and deployment of onward journey recommenders, reflecting on challenges and lessons learned about MLOps in practice.

    Anna FitzMaurice is a Senior Architect at the BBC, where she is part of the Audience Data team. She previously worked on recommendations and personalisation as a Senior Data Scientist within the BBC's Datalab. Prior to joining the BBC, Anna conducted a postdoctoral research fellowship in Data Science at The Alan Turing Institute. She holds a PhD in Climate Modelling from Princeton University, and an MMath in Mathematics from the University of Oxford.

  • 14:30
    Christopher Yim

    Garbage In Garbage Out - Applying AI to Waste Management

    Christopher Yim - Machine Learning Ops Engineer - Recycleye


    Recycleye is a growing technology company using advanced machine learning, computer vision and robotics to revolutionise waste sorting on a global level. Crushed, shredded, overlapping and dirty: detecting plastic recyclables is not easy with computer vision. And that isn't taking into consideration that waste varies dramatically by geography, time of year, special-edition packaging and so on. Our MLOps engineer, Chris Yim, will talk you through some of the ways we bring order to this chaos.

    Chris Yim is MLOps Engineer at Recycleye. He graduated from Imperial College London with an MEng in Mechanical Engineering before taking engineering roles in the automotive industry. He then returned to Imperial, gaining an MSc in Computer Science before joining Recycleye. In his role at Recycleye, Chris develops custom ML models for the waste industry. Each recycling facility faces unique challenges when deploying ML models, requiring bespoke solution-oriented architecture to ensure the highest level of efficiency and accuracy in generating output. Applying the use of machine learning to waste and material detection and selection is at the core of what Recycleye does.

  • 14:55

    PANEL: The ROI of MLOps: Do the Pros Outweigh the Cons?

  • Yufei Ai


    Yufei Ai - Insight Manager of Data Science - John Lewis & Partners


    Yufei is Insight Manager of Data Science at the John Lewis Partnership. Since she moved from academic research to data science, her focus has been on building and implementing data models to solve business problems. She now leads a group of data scientists working closely with various business and technical roles to support data-related R&D work in domains such as search, recommendation engines, personalisation, product data enrichment, SEO optimisation and online fraud detection. Their work improves customers' online shopping experience and drives millions in sales every year through John Lewis's omnichannel offering. Her team is the pioneer of MLOps practice in the Partnership and helped shape the MLOps roadmap for the company.

  • Biswa Sengupta


    Biswa Sengupta - Managing Director - JP Morgan


    Dr. Sengupta has been a senior technical executive with broad-spectrum expertise leading various ventures, from Artificial Intelligence start-ups to the AI divisions of Fortune 500 companies (JP Morgan, AXA, Huawei and Zebra). He has hands-on experience leading teams with expertise in incubating commercially viable products using computer vision, natural language processing, reinforcement learning and robotics. He is currently a Managing Director at JP Morgan Chase & Co. with responsibilities across AI Products and Cloud Platform. Most recently, at Zebra Technologies, he spearheaded special projects (retail, warehouse and healthcare verticals) positioned at the intersection of sequential decision making and robotics (incl. collaborative and autonomous robotics). Biswa obtained his PhD in dynamical systems/optimisation/energy efficiency from the University of Cambridge, with further training in Bayesian statistics and differential geometry at University College London.

  • Anders Blankholm


    Anders Blankholm - Lead MLOps Engineer - Charlotte Tilbury Beauty


    Anders has spent the last 5 years designing AI tech stacks while heading up MLOps at Charlotte Tilbury Beauty and ASOS.com. He's an evangelist for inclusive culture and loves to talk about how we can ship science faster.

  • 15:30