Scaleway · Published about 1 month ago

ML Engineer

> 3 years of experience
Permanent contract (CDI)

Scaleway 🌐

Machine Learning Engineer - Large Language Models 🤖

About Scaleway

Founded in 1999, Scaleway is the cloud subsidiary of Iliad Group, a leading telecommunications provider in Europe. Our mission is to foster a more responsible digital industry by helping developers and businesses build, deploy, and scale applications on any infrastructure.

With offices in Paris and Lille, we continuously improve our cloud ecosystem, which we ourselves are the first to use.

Our 25,000+ customers choose us for our multi-AZ redundancy, seamless user experience, carbon-neutral data centers, and native multi-cloud management tools. Our products include fully managed solutions for bare metal, containerization, and serverless architectures, offering a responsible choice in cloud computing.

Join our dynamic team of nearly 600 people from diverse backgrounds in a stimulating and international environment that combines technical excellence, creativity, and knowledge sharing.

About the job

The newly established Inference team at Scaleway is on a mission to revolutionize how Machine Learning (ML) is deployed and scaled in the cloud. We are seeking a talented ML Engineer to join us in developing and deploying Large Language Model (LLM) endpoints on both dedicated instances and serverless environments. As we plan to broaden our offerings to include various types of ML models later this year, this role offers a unique opportunity to be at the forefront of ML technology and its application in the cloud.

Reporting to our Manager, Grégoire de Turckheim, you will play a crucial role in building and optimizing ML model deployments, ensuring high performance, scalability, and reliability.

Minimum qualifications

  • Proficient in Python and familiar with other programming languages such as Go.
  • Strong background in Machine Learning, including experience with LLMs, NLP, or other ML model types.
  • Experience with ML frameworks (e.g., TensorFlow, PyTorch) and understanding of MLOps principles.
  • Knowledge of deploying ML models in cloud environments, including serverless architectures.
  • Familiarity with container technologies (Docker, Kubernetes) and orchestration systems.
  • Understanding of REST and gRPC APIs for integrating ML models into applications.
  • Excellent command of English, both written and verbal.

Preferred qualifications

  • Good understanding of Linux system administration and cloud ecosystems.

Responsibilities

  • Optimize ML models for high performance and low latency in cloud environments.
  • Design, develop, and maintain scalable and efficient ML model deployments, focusing on LLMs initially and expanding to other models.
  • Collaborate with the Inference team to architect and implement serverless solutions for ML model hosting.
  • Ensure the reliability, availability, and security of ML model deployments.
  • Stay abreast of the latest ML technologies and cloud trends to continuously improve our offerings.

Technical Stack

  • Programming Languages: Python, Go
  • ML Frameworks: TensorFlow, PyTorch
  • Container Technologies: Kubernetes, Docker
  • Cloud and Serverless Technologies
  • Linux Systems
  • Data Storage: S3, PostgreSQL, Redis
  • Version Control: Git
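
To give a concrete flavour of the stack above, here is a minimal, illustrative sketch of an LLM text-generation endpoint in Python. It is not Scaleway's actual service code: FastAPI, the Hugging Face transformers pipeline, and the small GPT-2 model are assumptions chosen purely for brevity, and a real deployment would typically rely on a dedicated inference engine running on GPU-backed or serverless infrastructure.

    # Illustrative sketch only: FastAPI, transformers, and GPT-2 are assumptions,
    # not part of the job description.
    from fastapi import FastAPI
    from pydantic import BaseModel
    from transformers import pipeline

    app = FastAPI()

    # Load a small model once at startup; a production service would pin a
    # specific model revision and serve it from GPU-backed instances.
    generator = pipeline("text-generation", model="gpt2")

    class GenerationRequest(BaseModel):
        prompt: str
        max_new_tokens: int = 64

    @app.post("/v1/generate")
    def generate(req: GenerationRequest):
        # Run inference and return only the generated text.
        result = generator(req.prompt, max_new_tokens=req.max_new_tokens)
        return {"completion": result[0]["generated_text"]}

Served with a command such as "uvicorn app:app", the endpoint accepts a JSON body like {"prompt": "Hello"}; a gRPC variant would expose the same call through a protobuf-defined service instead of REST.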

Location

This position is based in our offices in Paris or Lille, France.

Recruitment Process

  • Screening call - 30 mins with the recruiter
  • Manager Interview - 45 mins
  • Technical Interviews or Home Assignment
  • Team Interview
  • HR Interview - 45 mins
  • Offer sent

Don't be shy!

If you don't think you tick all the boxes, don't hesitate to apply anyway. Don't limit yourself to a job description - you never know!

🌐 Scaleway | Scaleway Blog | Scaleway on Twitter

Skills

Data
Machine Learning
PyTorch
TensorFlow
PostgreSQL
Redis
Cloud
Serverless
Cloud Computing
Ops
Back-end
Tooling
Git
Other
Linux Systems
Project Management
Management