
Turing
About Us
Turing is one of the world’s fastest-growing AI companies, pushing the boundaries of AI-assisted software development. Our mission is to empower the next generation of AI systems to reason about and work with real-world software repositories. You’ll be working at the intersection of software engineering, open-source ecosystems, and frontier AI.
Project Overview
We’re building high-quality evaluation and training datasets to improve how Large Language Models (LLMs) interact with realistic software engineering tasks. A key focus of this project is curating verifiable software engineering challenges from public GitHub repository histories using a human-in-the-loop process.
Why This Role Is Unique
- Collaborate directly with AI researchers shaping the future of AI-powered software development.
- Work with high-impact open-source projects and evaluate how LLMs perform on real bugs, issues, and developer tasks.
- Influence the design of datasets that will be used to train and benchmark next-generation LLMs.
What Your Day-to-Day Looks Like
- Review and compare 3-4 model-generated code responses for each task using a structured ranking system.
- Evaluate code diffs for correctness, code quality, style, and efficiency.
- Provide clear, detailed rationales explaining the reasoning behind each ranking decision.
- Maintain high consistency and objectivity across evaluations.
- Collaborate with the team to identify edge cases and ambiguities in model behavior.
Required Skills
- 7+ years of professional software engineering experience, ideally at top-tier product companies (e.g., Stripe, Datadog, Snowflake, Dropbox, Canva, Shopify, Intuit, PayPal) or in research at IBM, GE, Honeywell, Schneider, etc.
- Strong fundamentals in software design, coding best practices, and debugging.
- Excellent ability to assess code quality, correctness, and maintainability.
- Proficient with code review processes and reading diffs in real-world repositories.
- Exceptional written communication skills to articulate evaluation rationale clearly.
- Prior experience with LLM-generated code or evaluation work is a plus.
Bonus Points
- Experience in LLM research, developer agents, or AI evaluation projects.
- Background in building or scaling developer tools or automation systems.
Engagement Details
- Commitment: 20 hours/week (partial PST overlap required)
- Type: Contractor (no medical or paid-leave benefits)
- Duration: 1 month (starting next week; potential extensions based on performance and fit)