Research Engineer

Location: Nashville, TN

About Targun

Targun is an applied-AI company on a mission to eliminate repetitive, high-stakes operational bottlenecks. From logistics coordination to ERP automation, we design intelligent systems that turn fragmented data into decisive action—empowering teams to move faster, cut waste, and build at the pace of ambition.

We believe the next decade belongs to organizations that pair deep industry knowledge with advanced AI research. At Targun, developers and domain experts collaborate to push large-language-model capabilities beyond generic chat and into rigorous, real-world workflows. Our vision is simple: every worker armed with an AI partner that makes shipping freight, running a factory, or launching a new product as frictionless as sending a text.

The Role

As a Research Engineer you’ll build and scale the infrastructure that powers our Reinforcement Learning from Code Execution Feedback (RLCEF) pipeline. Your work will enable Targun’s large language models to learn by completing millions of real-world coding tasks—turning raw execution feedback into smarter, more reliable AI.

Your Mission

Build a robust, distributed platform that executes large volumes of code, analyzes the results, and stores the resulting feedback—laying the technical foundation for Targun’s next-generation fine-tuning efforts.

Responsibilities

  • Design and implement a container-based code-execution service with strong sandboxing guarantees.
  • Integrate static analyzers, parsers, and linters to capture rich signals from each run.
  • Develop fault-tolerant, high-throughput data pipelines for large-scale processing of execution results.
  • Operate distributed message queues (Kafka, Pub/Sub, etc.) and event-driven architectures.
  • Collaborate with ML engineers to surface execution feedback for RLCEF fine-tuning.
  • Instrument, monitor, and tune the platform for reliability and throughput.
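
To give a flavor of the sandboxed-execution work above, here is a minimal sketch of running an untrusted code snippet in an isolated child process with CPU and memory limits, returning the exit code and output as feedback signals. This is an illustrative toy assuming a POSIX host, not Targun's actual service; the `run_snippet` name and the specific limits are hypothetical.

```python
import os
import resource
import subprocess
import sys
import tempfile

def run_snippet(code: str, timeout_s: float = 5.0) -> dict:
    """Execute a Python snippet in a child process and return execution feedback."""

    def limit_resources():
        # Runs in the child just before exec: cap CPU seconds and
        # address space so a runaway snippet cannot hog the host (POSIX only).
        resource.setrlimit(resource.RLIMIT_CPU, (int(timeout_s) + 1,) * 2)
        resource.setrlimit(resource.RLIMIT_AS, (1024 * 1024 * 1024,) * 2)

    # Write the snippet to a temp file so the child sees an ordinary script.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name

    try:
        proc = subprocess.run(
            [sys.executable, path],
            capture_output=True,
            text=True,
            timeout=timeout_s,
            preexec_fn=limit_resources,
        )
        return {"exit_code": proc.returncode,
                "stdout": proc.stdout,
                "stderr": proc.stderr}
    except subprocess.TimeoutExpired:
        return {"exit_code": None, "stdout": "", "stderr": "timeout"}
    finally:
        os.unlink(path)

feedback = run_snippet("print(2 + 2)")
```

A production version would layer containers, seccomp filters, and network isolation on top; resource limits alone are only the first line of defense.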

Skills & Experience

  • Expertise in Go, C/C++, or Python (polyglot engineers welcome).
  • Hands-on experience building and operating distributed systems at scale.
  • Proficiency with data pipelines, message queues, and cloud storage (AWS, GCP, or Azure).
  • Strong testing discipline: unit, integration, and performance testing.
  • Bonus points for knowledge of container runtimes, Linux system programming, and CI/CD tooling.
  • Familiarity with DevOps staples: Git, Docker, Kubernetes, Terraform, Grafana/Prometheus.

Hiring Process

  • Introductory conversation with our team.
  • Technical deep-dive with the hiring manager.
  • Practical exercise (live pairing).
  • Final chat with our founder.

Got what it takes? We’d love to hear from you.