Job title: AI Policy Advisor
Reports to: Head of AI Policy, Dr Jess Whittlestone
Salary: £60k to £75k, depending on skill set and experience.
Location and travel requirements: Flexible location within the UK, with the expectation of attending our central London office at least two days a week, including Mondays.
If you are interested in this position, but salary or location is a barrier to you applying, please get in touch with our team to discuss. We may have some flexibility in exceptional circumstances.
Application form: Apply Here
Deadline: 2 April 2023
For any questions about the role, please contact email@example.com
The Centre for Long-Term Resilience (CLTR) is a UK-based think tank with a mission to transform global resilience to extreme risks: high-impact threats with global reach. We do this by working with governments and other institutions to improve relevant governance, processes and decision-making, with a particular focus on the UK government.
We focus on three main areas where effective governance today could substantially mitigate both current and future threats: the safe development and use of AI; biosecurity and pandemic preparedness; and improving how governments manage these types of extreme risks. We work with experts and policymakers to develop concrete, actionable policy recommendations in these areas, and to advocate for those recommendations in senior policy communities.
You would be joining a small but ambitious team with experience across academia, government, non-profits, and the private sector. Over the last year, we have built a leadership team with expertise across our key policy areas, and are now beginning to expand the organisation to increase our capacity for impact.
Our AI policy team works to develop and advocate for policy recommendations which could mitigate extreme risks from AI development. We work primarily with the UK government, with an eye to how the UK can influence international norms, best practices, and regulations.
The extreme risks from AI we focus on include:
* Risks arising from the deployment of AI in high-stakes contexts, such as defence or critical infrastructure;
* Risks arising from the misuse of AI capabilities, such as to manipulate elections or develop novel weapons;
* Risks arising from the development of certain advanced AI capabilities, such as deceptive or power-seeking behaviour.
Some of our recent work includes:
* Helping the UK Ministry of Defence to develop safety and assurance processes around the use of AI in defence
* Advising the UK Foreign Office on policy proposals for mitigating misuse of large models
* Exploring the feasibility of compute monitoring in the UK government
* Running a session with the Office for AI on the regulatory implications of progress in foundation models
This will be our first AI hire beyond the leadership team, and so will significantly increase our capacity to develop and advocate for policy in this space. Over the next few years, we hope to grow a team of AI policy specialists, and this first hire will have the unique opportunity to help shape the development of that team.
We’re looking for an AI Policy Advisor who would work closely with our Head of AI Policy to develop and advocate for our AI policy recommendations over the coming years. This role would focus primarily on ensuring that our policy priorities and recommendations are as robust and evidence-based as possible, and secondarily on working with the Head of AI Policy to advocate for these recommendations in senior UK and international policy circles. This is a generalist role with scope to specialise over time depending on your interests and the needs of our growing team: for example, focusing more on government advocacy or specialising in a particular area of AI policy.
We want the best people and we don’t want biases holding us back. We strongly encourage people of every colour, orientation, age, gender, origin, and ability to apply. If you are passionate about CLTR’s mission and think you have what it takes to be successful in this role, even if you don’t check all the boxes, please apply. We’d appreciate the opportunity to consider your application. Our intention when hiring is to focus on talent, motivation and potential rather than experience.
Research community engagement and policy development would likely be about 60% of the role and would consist of:
* Working with the Head of AI Policy to develop and refine our process for policy development to ensure the recommendations we advocate for are as robust and evidence-based as possible.
* Working with the Head of AI Policy to identify priority areas for policy development based on strategic thinking.
* Regularly liaising with relevant research communities to better understand research developments that may bear on our policy priorities.
* Actively seeking to expand and diversify the range of experts we engage with as part of our work.
* Seeking input and feedback from a variety of experts to refine specific policy recommendations.
* Conducting or commissioning targeted research to resolve important uncertainties related to our policy recommendations.
* Considering and experimenting with different approaches to research community engagement, from one-to-one or small-group conversations and feedback on written documents to larger multi-stakeholder workshops or scenario exercises.
Government engagement and advocacy would likely be about 30% of the role and would consist of:
* Responding to ad-hoc requests from government stakeholders for research or policy analysis, where aligned with our strategic priorities.
* Writing advice notes tailored to specific decision-makers in response to requests or as part of a broader advocacy plan for specific policy recommendations.
* Helping to plan and lead meetings with senior government stakeholders.
* As relevant, helping to conduct public engagement work around our policy priorities, for example, by giving presentations or engaging with the media.
The remaining 10% of the role would involve working with our Head of AI Policy to develop and refine CLTR’s AI unit strategy.
We're looking for candidates with:
* Strong motivation in line with CLTR’s core mission.
* At least two years’ experience in a research, policy or advocacy setting directly related to AI governance, and a good understanding of the AI governance research landscape.
* The ability to think clearly and independently about how AI development might pose extreme risks, and the role of government in mitigating those risks.
* Strong ‘synthesis-style’ research skills: the ability to effectively draw together information from a variety of sources to reach nuanced conclusions and practical recommendations.
* Excellent interpersonal skills, including:
** The ability to build strong relationships with a diverse range of expert stakeholders;
** The ability to effectively lead and guide discussions in one-to-one and group sessions, homing in on important points and key areas of uncertainty or disagreement.
* Excellent written and verbal communication skills, including:
** The ability to clearly distil and summarise key information on complex topics;
** The ability to structure information and arguments in a clear and compelling way;
** The ability to understand what context an audience does and does not have, and how to tailor information accordingly.
* Strong prioritisation, project management and organisational skills, including:
** The ability to think about how to effectively break up and sequence projects;
** The ability to set appropriate goals and reliably achieve them, staying focused on priorities.
* The ability to balance deep thinking about our strategy and policy recommendations with the pragmatism needed to execute on them.
* Intellectual humility: willingness to consider a range of perspectives, recognise where you are uncertain, and change your mind in response to evidence.
* Proactivity in identifying opportunities for the team to better achieve its goals, whether big or small.
* Directness and openness in giving and receiving feedback.
* Comfort working within a fast-paced environment and with changing priorities at short notice if a time-critical project comes in.
* Ability to help drive a positive team culture in line with our values: integrity, people first, and targeted real-world impact.
* Excitement about helping to establish a relatively new unit whose strategy, priorities and plans are still evolving.
£60k to £75k, depending on experience. However, we would be open to considering a higher salary for a truly exceptional candidate. If you are interested in this position, but the salary is a barrier to applying, please get in touch with our team to discuss.
The salary comes with a generous benefits package which includes:
* 28 days annual leave, plus public holidays
* £3,000 annual learning and development budget
* £5,000 yearly bonus (before tax) for mental health and wellbeing, for you to spend at your discretion
* A pension contribution scheme (up to 7% employer-matched contribution)
* Private health insurance and group life insurance
* Generous parental leave
It also comes with a commitment from CLTR to care deeply about your well-being, career development and overall experience working with us, and to respect your preferred working patterns wherever possible.
Flexible location within the UK, with the expectation of attending our central London office at least twice a week, including Mondays. However, we may be open to exploring remote working arrangements in exceptional circumstances. If location is a barrier to you applying, please get in touch with our team to discuss.