Research Engineer, Safety
Location: San Francisco
Posted on: June 13, 2021
The Applied Safety team is building safety specifications,
processes, and measurement tools for general-purpose AI. We're
looking for safety-focused research engineers to work on
measurement tools with us: we aim to develop a world-class toolkit
for measuring the safety-relevant characteristics of our datasets,
models, and algorithms. This is high-impact work that will help
teams across OpenAI meet safety goals.
This is not about safety for narrow AI systems like autonomous
vehicles: this is about safety for general-purpose AI systems that
have large, uncharted surface areas of potential risk. Given that
the field is quite young, your work may be foundational for future
standards and professional duties.
In this role, you will:
- Take ambiguous, open-ended problems in measuring safety for
general-purpose AI, make them tractable and well-posed, and build
solutions.
- Write clean, performant code for tools you build, with a focus
on making them usable for researchers and engineers across the
company.
- Engage with literature and experts across many different
research domains (social sciences, physical sciences, economics,
politics, etc.) to reason about impact from or interactions with
general-purpose AI. Figure out how to measure safety concerns
relevant for those domains.
- Build tools that help us proactively prevent a wide range of
potential AI safety issues, from the mundane to the severe.
- Develop with a focus on scale and data. For example, if you
want to measure whether a model has a certain qualitative behavior,
you may need to build a dataset of hundreds or thousands of
examples of that behavior to check against.
- Contribute to building a safety culture at OpenAI by shipping
safety tools internally and helping everyone make the most use of
them.
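The dataset-driven measurement idea above can be sketched in miniature. This is a hypothetical illustration, not OpenAI's actual tooling: the function names and the keyword-matching heuristic are assumptions chosen for brevity, standing in for whatever classifier or grader a real toolkit would use.

```python
# Hypothetical sketch: measure how often model outputs exhibit a
# qualitative behavior by checking them against a labeled dataset.
# The marker-phrase heuristic is illustrative only; a real tool would
# likely use a trained classifier or human labels.

def exhibits_behavior(output: str, markers: list[str]) -> bool:
    """Crude proxy: flag an output if it contains any marker phrase."""
    lowered = output.lower()
    return any(marker in lowered for marker in markers)

def behavior_rate(outputs: list[str], markers: list[str]) -> float:
    """Fraction of outputs exhibiting the behavior across the dataset."""
    if not outputs:
        return 0.0
    hits = sum(exhibits_behavior(o, markers) for o in outputs)
    return hits / len(outputs)

# Example: measuring how often sampled outputs contain a refusal phrase.
samples = [
    "I can't help with that request.",
    "Here is the information you asked for.",
    "Sorry, I cannot assist with that.",
    "Sure, the answer is 42.",
]
markers = ["can't help", "cannot assist"]
print(behavior_rate(samples, markers))  # 0.5
```

At scale, the point the bullet makes is that `samples` grows to hundreds or thousands of curated examples, so the measured rate becomes a stable, trackable metric rather than an anecdote.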
This role might be a good fit for you if you:
- Have strong programming skills.
- Are a fast learner who can quickly spin up on risks and impacts
from bleeding-edge tech that will profoundly change the world in
the near future.
- Are sincerely interested in reducing harms from AI.
Nice to haves:
- Experience working on large-scale natural language datasets or
models.
- Experience building and productionizing classifiers.
- Research experience in ML / AI.
We're building safe Artificial General Intelligence (AGI), and
ensuring it leads to a good outcome for humans. We believe that
unreasonably great results are best delivered by a highly creative
group working in concert. We are an equal opportunity employer and
value diversity at our company. We do not discriminate on the basis
of race, religion, color, national origin, gender, sexual
orientation, age, marital status, veteran status, or disability
status.
This position is subject to a background check for any
convictions directly related to its duties and responsibilities.
Only job-related convictions will be considered, and they will not
automatically disqualify a candidate. Pursuant to the San
Francisco Fair Chance Ordinance, we will consider for employment
qualified applicants with arrest and conviction records.
We will ensure that individuals with disabilities are provided
reasonable accommodation to participate in the job application or
interview process, to perform essential job functions, and to
receive other benefits and privileges of employment. Please contact
us to request accommodations via firstname.lastname@example.org.
Benefits:
- Health, dental, and vision insurance for you and your family
- Unlimited time off (we encourage 4+ weeks per year)
- Parental leave
- Flexible work hours
- Lunch and dinner each day
- 401(k) plan with matching