We are looking for an AI inference engineer to join our growing team. Our current stack includes Python, Rust, C++, PyTorch, Triton, CUDA, and Kubernetes. You will work on large-scale deployment of machine learning models for real-time inference.
Responsibilities
Develop APIs for AI inference that will be used by both internal and external customers
Benchmark and address bottlenecks throughout our inference stack
Improve the reliability and observability of our systems and respond to system outages
Explore novel research and implement LLM inference optimizations
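The benchmarking and observability responsibilities above usually start with measuring request latency and its tail. A minimal, stdlib-only sketch of that workflow (the `run_inference` function is a hypothetical stand-in for a real model call, simulated here with a short sleep):

```python
import random
import statistics
import time

def run_inference(batch):
    # Hypothetical stand-in for a model forward pass;
    # sleeps 1-5 ms to simulate variable inference latency.
    time.sleep(random.uniform(0.001, 0.005))
    return [f"token_{i}" for i in range(len(batch))]

def benchmark(n_requests=200, batch_size=8):
    # Time each request with a monotonic clock and record latency in ms.
    latencies = []
    for _ in range(n_requests):
        start = time.perf_counter()
        run_inference(["prompt"] * batch_size)
        latencies.append((time.perf_counter() - start) * 1000.0)
    latencies.sort()
    return {
        "p50_ms": statistics.median(latencies),
        # p99: last element at or below the 99th percentile rank.
        "p99_ms": latencies[int(0.99 * len(latencies)) - 1],
    }
```

In a real stack the percentiles would typically be exported to a metrics system rather than computed ad hoc, but the p50/p99 framing is the same.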
Qualifications
Experience with ML systems and deep learning frameworks (e.g. PyTorch, TensorFlow, ONNX)
Familiarity with common LLM architectures and inference optimization techniques (e.g., continuous batching, quantization)
Understanding of GPU architectures or experience with GPU kernel programming using CUDA
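To illustrate the quantization technique named above: a minimal, stdlib-only sketch of symmetric per-tensor int8 quantization (real inference stacks use optimized library kernels, e.g. in PyTorch, rather than Python loops):

```python
def quantize_int8(values):
    """Symmetric per-tensor int8 quantization: map floats to [-127, 127]."""
    # Scale so the largest magnitude maps to 127; guard against all-zero input.
    scale = max(abs(v) for v in values) / 127.0 or 1.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize_int8(q, scale):
    # Recover approximate float values from int8 codes.
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.0, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize_int8(q, scale)
```

The round-trip error per element is bounded by the scale (half a quantization step either side of rounding), which is the trade-off quantized inference makes for smaller memory footprint and faster integer math.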
About the company
AI-powered answer engine combining search and language models.