Our mission is to automate coding. The first step in our journey is to build the best tool for professional programmers, using a combination of inventive research, design, and engineering. Our organization is very flat, and our team is small and talent dense. We particularly like people who are truth-seeking, passionate, and creative. We enjoy spirited debate, crazy ideas, and shipping code.
The ML Infrastructure team builds large-scale compute, storage, and software infrastructure to support Cursor’s work building the world’s best agentic coding model. We’re looking for strong engineers who are interested in building high-performance infrastructure and the software to support it. This role works closely with ML researchers and engineers to enable their work through improvements to our training framework, systems reliability/performance, and developer experience.
Collaborate with ML researchers to improve the throughput and reliability of training
Work with OEMs, cloud service providers, and others to plan and build cutting-edge GPU infrastructure
Improve the density and scalability of compute environments to enable increasingly large RL workloads
Create software and systems to automate building, monitoring, and running GPU clusters
Build workload scheduling and data movement systems to support Cursor’s growing training footprint
A strong background in systems- and infrastructure-focused software engineering, particularly in Python, TypeScript, Rust, and Go
Experience with distributed storage and networking infrastructure, particularly on Linux systems across cloud and bare metal environments
Exposure to large-scale systems and their unique challenges, ideally across thousands of nodes with significant resource footprints
Production use of infrastructure-as-code and configuration management across both hosts and Kubernetes
Operational exposure to NVIDIA GPUs with InfiniBand or RoCE, particularly Blackwell- and Hopper-class hardware
Exposure to Ray, Slurm, or other common compute and runtime schedulers
AI-powered code editor built on frontier language models.