Discover how to optimize the machine learning lifecycle & streamline your pipelines for better performance in production
CONFIRMED SPEAKERS INCLUDE
Clemence Burnichon
Director of Data Innovation
ITV
Clemence has been working in the data and AI space for over a decade. She has helped major retailers and commerce organisations like Sainsbury’s and Net-a-porter realise value by leveraging innovative and smart capabilities. More recently, she spent over three years at Depop, a social marketplace for Gen Z, where she built and led the data science and machine learning function and delivered impactful capabilities in the domains of personalisation, search, moderation and marketing. Clemence joined ITV in January 2021 and is currently the Director of Data Innovation, a newly formed function inside the Data and Insight Group. Her role covers all areas of data technology. She provides expertise and leadership to build and develop a team of data scientists, data engineers, platform engineers and machine learning specialists creating value across the whole of ITV.
Anna FitzMaurice
Senior Data Scientist
BBC
Personalisation at Scale across the BBC
In the BBC Datalab, our mission is to use responsible machine learning to help audience members find the BBC content that is most relevant to them. We currently work across BBC News, Sport, World Service, Sounds, and iPlayer to deliver high-quality onward journeys using machine-driven recommendations. As we scale out across the BBC, an important part of this work is reducing the time, effort, and cost required to make the BBC relevant for each audience member. In this talk, I will discuss the tools and framework that BBC Datalab uses for the continuous training and deployment of onward journey recommenders, reflecting on challenges and lessons learned about MLOps in practice.
Anna FitzMaurice is a Senior Data Scientist at the BBC. Previously, Anna was a postdoctoral research fellow on The Alan Turing Institute’s Women in Data Science and AI project. Her research at the Turing sat at the intersection of technology and society, taking a data-driven approach to investigating the systematic exclusion of women from tech and the impact this is having on the development of AI. As well as industry experience in data science, she holds a PhD from Princeton University in Atmospheric and Oceanic Sciences, with a focus on modelling ice-ocean interactions under future climate change scenarios, and an MMath in Mathematics (first class) from the University of Oxford.
Pavlos Mitsoulis-Ntompos
MLOps Engineering Manager
King
APIs: They Matter More Than You Think in Machine Learning
There has been a great deal of work in ML research during the last decade. What we've seen over the last three years is a new movement orthogonal to ML: MLOps. MLOps is still in its infancy, but its premise is to create ML infrastructure that promotes best practices and expedites ML projects. ML practitioners can be seen as chefs: they need the proper tooling to make the most of their talent. However, the interface of this tooling is crucial to making it useful to ML practitioners. In this talk, I’ll give my view on the importance of APIs and interfaces in Machine Learning.
Pavlos Mitsoulis has 10 years of Machine Learning and Software Engineering experience. Currently, he is the Engineering Manager of the MLOps team at King (part of Activision Blizzard), leading King's central ML Platform. Additionally, he is the creator of Sagify, an open-source library that simplifies training, tuning, evaluating, and deploying ML models to SageMaker.
Detlef Nauck
Head of AI & Data Science Research
BT
Implementing a Company-Wide Framework for Responsible AI
The BT Group Manifesto explains how our responsible tech principles aim to ensure that while our tech is commercially viable and profitable, it is always for good, accountable, fair and open. When we build AI solutions we follow a company-wide governance framework that helps us to build AI ethically and supports developers in following best practice. The framework is underpinned by BT’s AI research programme and I will present results from our work on fairness evaluation of machine learning models and model monitoring.
Detlef Nauck is the Head of AI & Data Science Research for BT’s Applied Research Division located at Adastral Park, Ipswich, UK. Detlef has over 30 years of experience in AI and Machine Learning and leads a programme spanning the work of a large team of international researchers who develop capabilities underpinning modern AI systems. A key part of Detlef’s work is to establish best practices in Data Science and Machine Learning for conducting data analytics professionally and responsibly. Detlef has a keen interest in AI Ethics and Explainable AI to tackle bias and to increase transparency and accountability in AI. Detlef is a computer scientist by training and holds a PhD and a Postdoctoral Degree (Habilitation) in Machine Learning and Data Analytics. He is a Visiting Professor at Bournemouth University and has published 3 books, over 120 papers, and holds over 20 AI patents.
Ioannis Mesionis
Lead Data Scientist
EasyJet
EasyJet's Holistic Data Science Operations Model - From Ideation to Scaled Product Delivery
The data scientist profession has been dubbed “The Sexiest Job of the Twenty-First Century” by Harvard Business Review. While some businesses thrive by capitalizing on data's transformative power, many more succumb to the hype as a result of a failure to manage and expand their data strategy. Enterprises have acknowledged stakeholder misalignment as the driving force behind poor performance and lack of impact. Such issues can be traced directly back to process flaws and organizational structure, which result in long model delivery periods with little impact.

The EasyJet Operating Model addresses the lack of understanding of end-to-end processes for implementing data science solutions. It bridges the gap between accelerating the data science discipline and the absence of some of the required stakeholders and governance that would allow a proactive rather than reactive approach. By fostering a collaborative environment between business and data science, EasyJet was able to take a rigorous approach to iterative data science initiatives. The EasyJet Data Science Ops Model comprises end-to-end documentation that describes a timeline of procedures and actions to be carried out, allowing key stakeholders to influence data product development in order to generate incremental business value.

While transitioning from a “lean” start-up to a mature data-driven organisation might take time, the EasyJet Data Science Ops Model accelerates the process and guides numerous teams through the transformative “marathon”, generating momentum toward the goal of developing a strong data-driven company where data solutions can be produced more quickly and with greater scalability. Working with numerous stakeholder groups over time enables EasyJet to pursue its goal of becoming the world's most data-driven airline. The EasyJet Ops Model reveals areas where the aim is being missed and provides an honest appraisal, allowing EasyJet to take solid steps to improve the system. Its efficiency is based on the involvement of all necessary stakeholders, while ensuring the necessary support and governance are in place to guarantee that it is suitable for its purpose, from both a functional and a non-functional perspective.
Ioannis has been an ambitious data scientist and a trusted member of EasyJet's Data Science & Analytics community since 2019. In his current role as a Lead Data Scientist, he is on a mission to support EasyJet in reaching its ambition of becoming the world's leading data-driven airline. The famous Sherlock Holmes quote, “You see, but you do not observe”, was enough for him to end up holding an M.Sc. in Data Science and counting over three years of experience in the field. Analogously to Sherlock Holmes and John Watson, Ioannis teams up with EasyJet's Digital, Customer & Marketing departments to form an A-team that thinks beyond the obvious and delivers data-driven solutions under uncertainty through effective data products.
Chris Sarakasidis
Lead Machine Learning Engineer - MLOps
ITV
Modern MLOps: Simplifying and automating ML pipelines using Databricks and Kubernetes in AWS
A challenging problem in modern MLOps is reducing technical debt. Since each environment has its own requirements, this problem often becomes extremely complex very quickly, so choosing the right toolset is of fundamental importance. Traditional DevOps has certain axioms for the SDLC, and MLOps attempts to transfer these axioms into an ML context. However, model products have different needs and requirements from software products. The main differences include the need for Continuous Training (CT), Continuous Monitoring (CM) and model version control (MVC). Databricks ML is an integrated machine learning environment that significantly reduces technical debt. It can be used as an end-to-end machine learning solution, or you can use parts of it according to your needs. In particular, MLflow, an open-source platform created by Databricks and integrated into Databricks ML, allows Data Scientists to apply MVC at scale. This enhances collaboration, automation and visibility, and fits nicely into any modern CI/CD/CT/CM pipeline. During this talk, we will discuss the significance of choosing the right MVC topology and how it affects the entire design. We will also explain how MLflow fits into ITV’s CI/CD/CT/CM pipelines and release strategies for model artefacts on Kubernetes.
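For readers less familiar with model version control in MLflow, the minimal sketch below shows roughly what logging and registering a model version looks like with the open-source MLflow API. It is a generic illustration, not ITV's actual pipeline; the tracking URI, experiment and model names are hypothetical placeholders.

```python
# Minimal illustration of model version control (MVC) with MLflow.
# Not ITV's pipeline: the tracking server URI and model name are placeholders,
# and registering a version requires a tracking server with a model registry backend.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

mlflow.set_tracking_uri("http://mlflow.example.internal:5000")  # hypothetical server
mlflow.set_experiment("churn-model")

X, y = make_classification(n_samples=500, n_features=10, random_state=42)

with mlflow.start_run():
    model = LogisticRegression(max_iter=200).fit(X, y)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Passing registered_model_name creates a new model version in the registry,
    # which a CI/CD/CT job could later promote and roll out to Kubernetes.
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="churn-model",
    )
```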
Christos Sarakasidis is an experienced Machine Learning Engineer with a keen interest in modern DevOps, software engineering and cloud computing. He previously worked in research, developing algorithms to solve problems in algebra and topology, and has helped major tech-driven organisations build robust ML solutions in the cloud. Christos joined ITV in February 2022 and is currently the Lead Machine Learning Engineer. His role includes the creation of automated ML solutions to enable data-driven decisions across various ITV departments.
Harpal Sahota
Lead Data Scientist
MATCHESFASHION
The journey of MLOps at MATCHESFASHION
MLOps at MATCHESFASHION started to gain traction approximately two years ago. Fast-forward those two years and we’re developing our own bespoke framework to monitor data/model drift and reproduce exact datasets using CDC/delta, and we now have several models in production serving customers in real time. This is only scratching the surface of what we’re currently working on in the MLOps space. This talk will walk you through the conception of the framework, its uses and how MLOps is used within the company.
Harpal started working as a Lead Data Scientist at MATCHESFASHION in July 2020. As a Lead, he was involved in building the Data Science team from scratch, educating stakeholders on the value of Data Science, and was a key contributor to the digital transformation of business operations utilising Machine Learning and Artificial Intelligence. He has a PhD in Computational Biology and 10 years' experience in solving complex business problems with innovative data solutions.
Ghida Ibrahim
Lead Quantitative Engineer
Meta
An Intro To AIOps: How To Scale IT Operations With AI
In this talk, the speaker covers how to leverage AI and quantitative techniques in order to manage and optimize IT operations at scale. In particular, they will explain how techniques like time series forecasting, operations research, and statistical and causal inference can be leveraged to optimize infrastructure investments and resource allocation, enable predictive maintenance, and allow building infrastructure that is more aware of the needs of products such as video, real-time calls and the metaverse.
Ghida is a lead quantitative engineer at Meta (previously Facebook), where she designs and builds quantitative systems to help scale and optimize Meta's internal cloud and edge infrastructure, used to serve billions of people across Meta's family of apps and products. Prior to joining Meta, Ghida worked for 6+ years in the telco and media industries in multiple analytics and engineering roles, mainly focusing on optimizing large-scale distributed systems. She holds a PhD and a master’s (Diplôme d’Ingénieur) in computer engineering from Institut Polytechnique de Paris.
Ghida also teaches a course on using AI for scaling IT operations at the University of Oxford. She is a TEDx speaker and an Expert of the World Economic Forum, providing expertise on the future of computing. In the past, Ghida prepared and delivered the first online course on data science in Arabic, attracting 30k+ learners, and built an award-winning platform for connecting refugees to opportunities, among other projects.
Dejan Golubovic
Software Engineer, MLOps
CERN
Deploying and Managing a Machine Learning Platform with Kubeflow at CERN
CERN is a particle physics research organisation based in Geneva, Switzerland, whose mission is to advance human knowledge by understanding the nature of the universe. CERN operates the Large Hadron Collider (LHC), the world’s largest particle accelerator, measuring 27 km in circumference. The LHC accelerates beams of particles in opposite directions to almost the speed of light before making them collide. Collisions generate short-lived particles that are otherwise impossible to detect and observe in nature. Machine learning has been growing as a solution for challenges in different areas of development and operations at CERN. Areas where ML is being applied include particle classification using graph neural networks, 3DGANs for faster generation of simulation data, and reinforcement learning for beam calibration. In this talk, we will present a centralised machine learning service based on Kubeflow that has been deployed at CERN with the goal of improving overall resource usage and providing researchers with a better user experience. We will discuss the open-source technology stack for deployment and management of the service on our private cloud, with the option of expanding to public clouds. We will talk about integration with other CERN services. We will present examples of ML in high-energy physics and talk about how Kubeflow facilitates workload scaling.
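For readers unfamiliar with Kubeflow, the sketch below shows roughly what a pipeline definition looks like with the open-source Kubeflow Pipelines (KFP v2) SDK. It is a generic, hypothetical example rather than CERN's actual workloads; the component logic and names are placeholders.

```python
# A generic Kubeflow Pipelines (KFP v2 SDK) example -- placeholder steps, not CERN's workloads.
from kfp import dsl, compiler


@dsl.component(base_image="python:3.11")
def preprocess(message: str) -> str:
    # Placeholder preprocessing step.
    return message.upper()


@dsl.component(base_image="python:3.11")
def train(data: str) -> str:
    # Placeholder training step; in practice this could launch a GPU-backed job.
    return f"model trained on: {data}"


@dsl.pipeline(name="demo-training-pipeline")
def training_pipeline(message: str = "collision events"):
    prep = preprocess(message=message)
    train(data=prep.output)


if __name__ == "__main__":
    # Compile to a YAML spec that can be uploaded to a Kubeflow Pipelines instance.
    compiler.Compiler().compile(training_pipeline, "pipeline.yaml")
```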
Dejan Golubovic is a software engineer working in the CERN IT department. Dejan works on the CERN private cloud machine learning infrastructure with Kubernetes and Kubeflow. His primary responsibilities include the upkeep of the ML platform, collaboration with scientific users, and the development of Kubeflow features. Interests include containerised applications, Python programming, and applying technology to socially and scientifically impactful projects. Prior to joining CERN IT, Dejan worked at the CMS experiment where he developed deep learning models for particle reconstruction. Previously, he developed software for the automotive and retail industries. He holds a Master's degree from the University of Belgrade.
Natalia Koupanou
Senior Lead Data Scientist
Huge Inc
Faster Operationalisation of Machine Learning Models with a Feature Store
At Huge Inc, we invest in third-party products that can provide enriched data and predictions to our clients. This is possible because of an underlying MLOps infrastructure with a common feature store. In this talk, we will explore how a feature store, like the one provided by Google's Vertex AI, can be a game changer for your data team to operationalise machine learning models faster. We will cover what a feature store is and its benefits, how it can be adopted, who can contribute to it and what you need to consider beforehand.
Natalia is an experienced Senior Manager and Lead Data Scientist with a demonstrated history of working in the financial services industry and building established delivery teams. Before starting a career in fintech, Natalia worked in data across a number of tech companies in various sub-sectors, such as ecommerce (Zoro) and proptech (Zoopla). She is skilled in strategy, machine learning, data modelling and analytics. Natalia enjoys mentoring and blogs about her technical and personal experiences on Medium (medium.com/@nataliakoupanou).
Andy McMahon
Machine Learning Engineering Lead
NatWest Group
Delivering an enterprise-scale MLOps capability to optimize time to value
As businesses embrace machine learning across their organizations, manual workflows for building, training, and deploying ML models tend to become bottlenecks to innovation. MLOps is helping companies streamline their end-to-end ML lifecycle and boost productivity of data scientists while maintaining high model accuracy and enhancing security and compliance. In this session, we will share some of the capabilities we have built and deployed at NatWest Group in order to accelerate our development processes and deploy valuable solutions to production more robustly, reliably and repeatably.
Andy is the ML Engineering Lead in the Data Science and Innovation team at NatWest Group. He helps develop and implement best practices and capabilities for taking machine learning products to production across the bank. Andy has several years’ experience leading data science and ML teams and speaks and writes extensively on MLOps. He wrote the book “Machine Learning Engineering with Python”, published in 2021 by Packt, and has received several awards and accolades, including ‘Rising Star of the Year’ at the British Data Awards in 2022.
Massimo Belloni
Data Science Manager
Bumble
There Is No Such Thing As MLOps
After the huge hype and investments in data science and machine learning over the last decade, the word "MLOps" is on the lips of every data executive and, therefore, every recruiter, trying to sell it as the new shiny thing. But do we really know what it's actually about? Since the 90s, DevOps specialists have been a key part of every solid engineering team, taking care of all the messy and obscure aspects of delivering real things to real people; MLOps isn't much different, working on both the cultural and technical sides of delivering data products at scale. Why do we need a different name and skillset? In this talk, Massimo Belloni (Lead Machine Learning Engineer at Bumble) will give an overview of the current MLOps space, advice and best practices on building an engineering-driven Data Science team, and a lot of random opinions.
Massimo Belloni is a machine learning engineer and data science manager currently working at Bumble (London) as Machine Learning Engineering Lead. He was previously Team Lead of the Data Engineering team at HousingAnywhere (Rotterdam, NL) and graduated in computer engineering at Politecnico di Milano (Italy). He has quite a broad and random set of interests within and outside the AI space, with a focus on consciousness, the weak vs strong AI debate, and football. In his spare time, he is also a burger and kebab evangelist.
Christopher Yim
Machine Learning Ops Engineer
Recycleye
Garbage In Garbage Out - Applying AI to Waste Management
Recycleye is a growing technology company using advanced machine learning, computer vision and robotics to revolutionise waste sorting on a global level. Crushed, shredded, overlapping and dirty – detecting plastic recyclables is not easy with computer vision. And that isn't taking into consideration that waste varies dramatically by geography, time of year, special-edition packaging and so on. Our MLOps expert and engineer, Chris Yim, will talk you through some of the ways we bring order to this chaos.
Chris Yim is MLOps Engineer at Recycleye. He graduated from Imperial College London with an MEng in Mechanical Engineering before taking engineering roles in the automotive industry. He then returned to Imperial, gaining an MSc in Computer Science before joining Recycleye. In his role at Recycleye, Chris develops custom ML models for the waste industry. Each recycling facility faces unique challenges when deploying ML models, requiring bespoke solution-oriented architecture to ensure the highest level of efficiency and accuracy in generating output. Applying the use of machine learning to waste and material detection and selection is at the core of what Recycleye does.
Michael Phelan
Global MLOps and Data Science Leader
Johnson & Johnson Consumer Health
Delivering Data Science: Better, Faster, Cheaper for a new healthcare CPG with 1.2 billion customers
The failure of AI and Data Science projects to deliver value for organisations is well documented. A quick online search will throw up stark statistics from Gartner, Accenture and many other research groups, reporting that 85% of projects don’t deliver value, 71% of companies struggle to scale AI and only 53% of AI research makes it into production. While MLOps is critical to the scalability of Data Science research, we present the beginning of an evolving, holistic journey to deliver Data Science: Better, Faster, Cheaper for a new healthcare CPG with 1.2 billion customers.
Dr. Michael Phelan is an award-winning, innovative, passionate full-stack AI and digital transformation leader with over 20 years’ experience delivering solutions, developing people and strategies across healthcare, service, electronics and educational sectors. He currently leads a new global MLOps & Data Science team for Johnson & Johnson Consumer Health.
Casper da Costa-Luis
Product Manager
Iterative
Painless cloud orchestration without leaving your IDE
Full lifecycle management of computing resources (including GPUs and auto-respawning spot instances) from several cloud vendors (AWS, Azure, GCP, K8s)... without needing to be a cloud expert. Move experiments seamlessly between a local laptop, a powerful cloud machine, and your CI/CD of choice.
Iterative's open source, modular tools aim to deliver the best developer experience for ML teams. This talk will demonstrate how easy it is to use spot instances (with transparent data checkpoint/restore), switch between cloud providers (so no vendor lock-in), and keep costs down (with auto-cleanup of unused resources). This works whether you prefer to experiment in an IDE, a Jupyter Notebook, or even a raw terminal.
Casper is a scientist at heart with a passion for maximising tooling ease of use (including simplifying installation, testing robustness, minimising breaking API changes, and maintaining impeccable documentation). He champions open source, and maintains projects receiving millions of daily downloads. In addition to 10+ years of experience in industry, he also has strong links to academia — with a PhD in Deep Learning for Medical Imaging (KCL), an MSc in Computing Science & BSc in Physics (both Imperial) — and has taught ML & programming at most universities in London.
Christian Rehm
Senior Machine Learning Engineer
Wayfair
How Wayfair Leverages Google Vertex AI Towards MLOps Excellence
At Wayfair, we use machine learning to create experiences that our customers love, from search, personalization and recommendations to delivery time predictions. Supporting these high-impact use cases at scale requires robust and agile systems. Just as our recent growth necessitated a move from on-prem to public cloud based services, it required us to re-think our ML engineering strategy and move from home-grown custom solutions to industry-proven, fully managed cloud based solutions. As we had already made the decision to move our core systems to Google Cloud Platform, it was natural to follow suit for our ML platform and tooling, which led us to invest heavily in Vertex AI, Google's end-to-end ML platform. In this talk, I will present our experience onboarding Vertex AI, as well as the tooling and processes we have built to integrate it into and support it within our existing systems.
Christian is a seasoned Machine Learning Engineer with experience in ML research and engineering across the automotive, adtech and e-commerce industries. Most recently, he has been using his broad experience to build processes and solutions that bring the latest cutting-edge models from ML research to production at Wayfair.
Aerin Booth
Cloud Sustainability Advocate
Genesis Cloud
Is Dall-E ethical? The Real-World Impacts of Machine Learning
A hammer is just a tool, but you could use it to make something or break something. Put yourself in the shoes of the scientists who discovered how to turn oil into plastic. Software engineers and developers are now at the forefront of how we turn electricity and algorithms into work that might have an even worse impact on the world than single-use plastic. Machine learning and diffusion algorithms have changed the way we can create art and have given us new tools, like Dall-E, that lower the barrier to entry so that anyone could change the world and tell stories in art... or cause more harm than good.
But if you do have to build that new model, cloud computing allows us to be more efficient than ever by using intensive compute resources like GPUs, which still produce carbon emissions (even if they are offset!). I'll help you understand how you can host things more efficiently, and how you could even send your batch jobs to providers like Genesis Cloud, which offer GPU resources in Iceland and Norway that are 100% powered by renewables!
Aerin Booth is a Cloud Sustainability Advocate, AWS Community Builder, Genesis Cloud collaborator and the founder of Cloud Sustainably. They’re on a mission to get more developers talking about sustainability and Green IT. Aerin started their career working with digital teams in the UK government before working their way up to become Head of Cloud at the UK Home Office. While at the Home Office, they led the creation of its Cloud Centre of Excellence, training hundreds of staff, introducing automated governance tools and saving the department over £10m over three years through optimisation and reduced usage. Aerin is on a mission to help make businesses more sustainable and ethical and, to do so, has recently set up Public Cloud for Public Good, a podcast focused on Cloud Sustainability.
Li Rong
Software Engineer
Yelp
Who are the MLOps in Yelp: from prototype to production
Machine Learning is a field that continues to grow; it is used pretty much everywhere, from image recognition to fraud detection to item recommendations to protein structure prediction. And what do these tasks have in common? To accomplish them, they all require robust machine learning models which need to be trained, tested and served. In this session we will discuss having a dedicated ops team for this: MLOps. Who are the MLOps in Yelp? How do they differ from DevOps? And how can building a horizontal layer of support across teams, and providing infrastructure for experimentation, testing, training, model serving, data storage and everything in between, make a huge difference to the efficiency and quality of life of Machine Learning teams within Yelp? We will dive into the details of how Yelp prototypes ML projects and brings them into production with the help of MLOps to power thousands of small businesses.
Li is an engineer currently in the Core ML team (Applied Machine Learning group). Li was training for a career in the dance industry until the pandemic hit, when Li went back to software engineering and specialized in Machine Learning and Artificial Intelligence. Before that, Li worked across the stack at various companies, including Goldman Sachs, Skyscanner and Transferwise, to name a few. Li was also a coding instructor for the non-profit "Code First Girls".
Tom Steenbergen
ML Platform Lead
Picnic Technologies
Prediction Archiving at Picnic Using Kafka and Snowpipe
Logging and storing predictions is crucial for developing and maintaining machine learning models. It provides an invaluable feedback loop for your models to investigate where model performance can be improved. But more importantly, it helps you to go back and see which prediction was made by which model and features. In a time when more and more decisions are made by models, prediction archiving is a critical part of your machine learning platform. At Picnic we built an in-house solution to archive each and every prediction made, both by batch processes and by real-time services. In this talk we will present our solution, which uses Kafka and Snowpipe to ingest all predictions into Snowflake, our analytical data warehouse.
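To make the general pattern concrete, here is a minimal, hypothetical sketch of the producing side: a service publishes each prediction, together with the model version and features used, to a Kafka topic from which Snowpipe could ingest into Snowflake. The topic name, broker address and record fields are illustrative assumptions, not Picnic's actual schema.

```python
# Hypothetical prediction-archiving producer: each prediction is published to Kafka,
# from where Snowpipe loads it into Snowflake. Topic, broker and fields are illustrative.
import json
import uuid
from datetime import datetime, timezone

from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="kafka.example.internal:9092",  # placeholder broker
    value_serializer=lambda record: json.dumps(record).encode("utf-8"),
)


def archive_prediction(model_name: str, model_version: str,
                       features: dict, prediction: float) -> None:
    """Publish one prediction record so it can later be traced back to its model and features."""
    record = {
        "prediction_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_name": model_name,
        "model_version": model_version,
        "features": features,
        "prediction": prediction,
    }
    producer.send("prediction-archive", value=record)


archive_prediction("delivery-eta", "2024-01-15", {"distance_km": 3.2, "items": 12}, 17.5)
producer.flush()  # make sure the record is actually delivered before exiting
```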
Tom Steenbergen is the ML Platform Lead at Picnic. Together with his team, he is responsible for supporting data scientists working across the machine learning lifecycle. They build and provide the infrastructure and tooling used by all the machine learning models running in production at Picnic. Previously at Picnic, Tom worked as a full-stack Data Scientist, building machine learning models in a variety of domains, ranging from time-series forecasting to natural language processing.

NEW THIS YEAR
- Solve shared problems with expert AI attendees during networking sessions & Q&As with speakers
- Tailor Your Experience by creating personalised schedules, keeping up to date on any news and connecting with others before and after the event in the Event App
- Receive access to post-event presentation recordings to make the most of the sessions during the event
- Join a Networking Reception at the end of Day 1 to further connect with the cross-industry attendees at the event
Topics we cover
Machine Learning
Troubleshooting
Automation
Anomaly Detection
Collaboration
Architectural Practices
Cloud Computing
End-To-End Delivery
Machine Learning Pipelines
Machine Learning Workflows
Effective Training
Data Efficiency
WHY ATTEND
Our events bring together the latest technology advancements as well as practical examples to apply AI to solve challenges in business and society. Our unique mix of academia and industry enables you to meet with AI pioneers at the forefront of research, as well as exploring real-world case studies to discover the business value of AI.
Extraordinary Speakers
Discover advances in MLOps from the world’s leading innovators. Learn from pioneering researchers in areas such as continuous delivery, pipeline efficiency, scaling model inference, team roles & managing ML model portfolios. Explore how these agile techniques can optimize ML workflows to minimize the costs of ML production, management & deployment.
Discover Emerging Trends
The summit will highlight cutting-edge trends in MLOps from using agile management techniques to end-to-end delivery and reducing operational complexity. Hear best practices for designing & managing the ML production process through the automation of lifecycles for training & testing data sets.
Expand Your Network
A unique opportunity to interact with industry leaders, data engineers, MLOps specialists & IT decision-makers leading the efficient ML production revolution. Learn from & connect with industry innovators sharing best practices to streamline your ML lifecycles.
Who Should Attend
- Cloud Engineers
- CTOs
- Data Engineers
- Data Scientists
- Developers
- DevOps Managers
- Heads of Digital Transformation
- IT Decision-Makers
- Software Engineers
- ML Engineers
Join the discussion
- Influential AI Pioneers
- Leading technologists & innovators
- Keynote presentations & panel sessions
- Roundtable discussions + speaker Q&A
- Access to filmed presentations & slides
- Unlimited networking & connections
- Discover technology shaping the future
Downloads
View the summit brochure and all the information you need to convince your boss that attending the summit will help future-proof your business.
CONFIRMED ATTENDEES INCLUDE
WHAT PEOPLE SAY ABOUT RE•WORK
Event Organiser
Katie Pollitt
Head of Events
Our events are all carefully created from scratch. The whole process from research to post-production is crafted by our team, so we are always available to assist with any queries! We look forward to meeting you at the event!
CONFIRMED PARTNERS
iterative
You may know us as the creators of DVC, CML, MLEM, and Iterative Studio.
Our mission is to deliver the best developer experience for machine learning teams by creating an ecosystem of open, modular machine learning tools. All of our tools are GitOps-based, aligning with software teams that use Git as the source of truth for all applications.
This GitOps approach aligns the ML model development lifecycle with that of your apps and services so you get faster time-to-market with transparent collaboration between your software development and ML teams.
Visit website
Valohai
Models are temporary; pipelines are forever. Valohai is the only MLOps platform that automates everything from data extraction to model deployment.
The Valohai platform makes machine learning in production easy. Data scientists and machine learning engineers can work together to build end-to-end machine learning pipelines that take in new data, train a model, and deploy to production automatically. Everything trained on Valohai is automatically stored and versioned, so every model is always reproducible, and work is never lost.
The platform is technology agnostic and ready for any cloud or on-premise setup, so even the strictest security requirements can be met.
Visit website
Genesis Cloud
Genesis Cloud is the Green GPU Accelerator Cloud.
We make Accelerated Cloud Computing more sustainable, efficient, affordable, scalable and accessible. We are committed to providing simple and cost effective solutions that enable developers to create, build and scale applications with ease and satisfaction in a highly competitive cloudscape. And we don’t stop there: Our performance is powered by nature. We provide solutions that increase efficiencies in a secure enterprise-grade data center facility powered exclusively by hydroelectric and geothermal resources.
Whether you’re creating machine learning models or conducting complex data analytics, Genesis Cloud provides the accelerators for any size application - without harmful emissions to the environment.
Visit website
MEDIA PARTNERS & PRESS
Unite.AI
Unite AI offers detailed analysis and news on the latest advancements in machine learning and AI technology.
Visit website
Datafloq
Datafloq is the one-stop-shop around Big Data. Our vision is Connecting Data and People and we aim to achieve that by spurring the understanding, acceptance and application of Big Data in order to drive innovation and economic growth.
Visit website
CIOReview
Published in Fremont, California, CIOReview (www.cioreview.com) is one of the leading print magazines in the US. It is the knowledge platform where C-suite executives deliberate on critical market challenges and current technological trends across industries. We are a unique magazine because all of our contributors are senior executives from the industry.
Visit website
CIO Applications
CIO Applications (www.cioapplications.com) magazine stands out with its unique approach to learning from industry leaders, offering professionals the most comprehensive collection of technology trends. CIO Applications is enabling businesses to move a step ahead and guiding them towards adopting the best in technology that can assist them in providing seamless and convenient solutions for enhanced customer experience.
Visit website
AI Time Journal
We explore how Artificial Intelligence and Exponential Technologies bring opportunities for people, organizations, and societies to increase their wealth and health.
Our audience is anyone who wants to improve in their career, their business, their investments; who wants to live a healthier, more productive, and fulfilling life; who wants to simplify and improve the education systems of their communities, or who simply wants to understand how Exponential Technologies are changing the world.
We publish articles, podcast interviews, and ebooks with insights from industry leaders and experts, and use cases of exponential technologies across multiple fields, including finance, healthcare, and education.
Visit website