A significant personnel change is afoot at OpenAI, the artificial intelligence juggernaut that has nearly single-handedly inserted the concept of generative AI into global public discourse with the launch of ChatGPT. Dave Willner, an industry veteran who was the startup’s head of trust and safety, announced in a post on LinkedIn last night that he has left the job and transitioned to an advisory role.

  • yip-bonk@kbin.social · 1 year ago
    One case in point was a very big dispute in 2009, played out in public, over how Facebook was handling accounts and posts from Holocaust deniers. Some employees and outside observers felt that Facebook had a duty to take a stand and ban those posts. Others believed that doing so was akin to censorship and sent the wrong message around free discourse.
    
    Willner was in the latter camp, believing that “hate speech” was not the same as “direct harm” and should therefore not be moderated in the same way. “I do not believe that Holocaust Denial, as an idea on it’s [sic] own, inherently represents a threat to the safety of others,” he wrote at the time. (For a blast from the TechCrunch past, see the full post on this here.)
    
    In retrospect, given how so much else has played out, it was a pretty short-sighted, naïve position.
    
    
    • Bipta@kbin.social · 1 year ago

      In 2009 that was pretty easy to believe, compared to now. But yes, one would hope the head of safety would have more foresight than the average bloke.