• dRLY [none/use name]@hexbear.net
    27 days ago

    Just another example of why companies are tripping over themselves to force AI into everything without doing the real work of actually stopping this shit. It seems like it would need very direct rules in the code to just defer to a human tech in the event of not “knowing” the answer. Just like how human level 1 customer support will say it needs a higher-level person to resolve the situation correctly. All these bots are trained to focus on sounding correct above everything else, which will eventually cause much worse problems as greed and hype rule.
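    The "defer to a human" rule described above could be sketched as a simple confidence gate. Everything below (the threshold value, a model that returns a confidence score, the toy lookup) is a hypothetical illustration under the assumption that the bot exposes some calibrated confidence signal, which real chatbots generally do not:

```python
# Hypothetical sketch: gate a bot's reply on a confidence score and
# escalate to a human agent otherwise. All names are invented for
# illustration; this is not any vendor's actual API.

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff, would need tuning

def answer_or_escalate(question, model):
    # model is assumed to return (reply_text, confidence in [0, 1])
    reply, confidence = model(question)
    if confidence >= CONFIDENCE_THRESHOLD:
        return reply
    # Below threshold: hand off instead of guessing.
    return "Let me connect you with a human agent who can help."

# Toy stand-in "model" for demonstration only.
def toy_model(question):
    known = {"store hours": ("We are open 9-5.", 0.95)}
    return known.get(question, ("(guess)", 0.2))

print(answer_or_escalate("store hours", toy_model))    # high confidence: answers
print(answer_or_escalate("refund policy", toy_model))  # low confidence: escalates
```

    The hard part, as the reply below notes, is that current systems have no reliable confidence signal to gate on in the first place.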

    • christian [he/him, any]@hexbear.net
      27 days ago

      It seems like it would need very direct rules in the code to just defer to a human tech in the event of not “knowing” the answer.

      That would require a wholly different technology with some ability to interpret the things it’s saying and assess their validity. It’s a lot more cost efficient to have your AI spew bullshit and do damage control afterwards.

      • dRLY [none/use name]@hexbear.net
        26 days ago

        Well then, it seems it would be good motivation if more people found ways to force the bots to give deep discounts on things. You’d need to be clever in tricking the AI so it isn’t outright obvious in the prompts, to help buyers avoid the companies just accusing them of hacking.