FILE PHOTO: A keyboard is placed in front of a displayed OpenAI logo in this illustration taken February 21, 2023. REUTERS/Dado Ruvic/Illustration
STOCKHOLM (Reuters) – OpenAI’s head of trust and safety Dave Willner is leaving the company, he said in a LinkedIn post on Friday, citing the pressures of the job on his family life and saying he would be available for advisory work.
OpenAI did not immediately respond to questions about Willner’s exit.
Trust and safety departments have taken on a high-profile role at technology companies such as OpenAI, Twitter, Alphabet and Meta as they seek to limit the spread of hate speech, misinformation and other harmful content on their platforms.
At the same time, fears that AI will run out of control have risen.
Willner took over his role at OpenAI in February last year, after working at Airbnb and Facebook. He attributed his decision to quit to the growing demands of the job on his family life.
“Anyone with young children and a super intense job can relate to that tension, I think, and these past few months have really crystallised for me that I was going to have to prioritise one or the other,” he said in the post.
“I’ve moved teaching the kids to swim and ride their bikes to the top of my OKRs (objectives and key results) this summer.”
Microsoft-backed OpenAI, whose AI chatbot ChatGPT has taken the world by storm, has said it depends on its trust and safety team to build “the processes and capabilities to prevent misuse and abuse of AI technologies”.