The Pentagon has its eye on the leading AI company, which this week softened its ban on military use.

  • @hoshikarakitaridia@sh.itjust.works
    64 points • 6 months ago

    And some people pointed it out even back then. There were signs that the employees were very loyal to Altman, but Altman didn’t address the board’s safety concerns. So stuff like this was just a matter of time.

    • deweydecibel
      31 points • 6 months ago

      People cited this as a point in Altman’s favor, too. “All the employees support him and want him back, he can’t be a bad guy!”

      Well, ya know what, I’m usually the last person to ever talk shit about the workers, but in this case, I feel like this isn’t a good thing. I sincerely doubt the employees who backed Altman had taken any of the ethics of the tool they’re creating into account. They’re all career-minded, they helped develop a tool that is going to make them a lot of money, and I guarantee the culture around that place is futurist as fuck. Altman’s removal put their future at risk. Of course they wanted him back.

      And frankly I don’t think you can spend years of your life building something like ChatGPT without having drunk the Kool-Aid yourself.

      The truth is OpenAI, as a body, set out to make a deeply destructive tool, and the incentives are far, far too strong and numerous. Capitalism is corrosive to ethics; ethics have to be enforced by a neutral regulatory body.

      • @SuckMyWang@lemmy.world
        3 points • 6 months ago

        The engineers are likely seeing this from an arms-race point of view. Possibly something like the development of the atomic bomb, where it’s a race between nations and the people at the leading edge can see things we cannot. While money and capitalistic factors are at play, foreseeing your own possible destruction or demise by falling behind China may be a motivating factor too.