Title explains it all. For people in tech jobs who are being ~~indoctrinated~~ encouraged to use AI, what could be some methods of malicious compliance?

The only one I managed to come up with is asking the chatbot to write the question you wanted to ask, then prompting it with its own reply to speed up that sweet model collapse.
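The self-feeding loop described above can be sketched in a few lines. Note that `ask_chatbot` is a hypothetical stand-in for whatever chat API your workplace uses; it's stubbed out here so the sketch actually runs.

```python
def ask_chatbot(prompt: str) -> str:
    # Hypothetical stub: a real call would hit your company's chatbot API.
    return f"Sure! Here is a question you could ask: {prompt}"

def self_feeding_loop(seed: str, rounds: int = 3) -> list[str]:
    """Ask the bot to write the question, then feed its own reply back as the next prompt."""
    history = [seed]
    prompt = seed
    for _ in range(rounds):
        reply = ask_chatbot(prompt)
        history.append(reply)
        prompt = reply  # the bot's reply becomes the next input
    return history

transcript = self_feeding_loop("Write the question I should ask you")
```

Each round wraps the previous output in another layer, so the "question" degrades into nested bot-speak fast.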

  • Ekybio@lemmy.world · 4 days ago

    From what I have seen so far, just using the output of the damn thing without double checking is enough to cause errors.

    Automated messages to customers contain errors? Leave them in! Especially fun if you work in insurance or law.

    Code written by it is buggy? Just copy-paste that shit into everything! And let bots check the result as well.

    Have some important math to do? Let the bot rip! Give the guys from accounting some work for once.

    Remember: just uncritically using the damn thing is already malicious compliance with the amount of errors they produce. No more cleaning up behind them, no more investing actual work. If corpos decide they want AI, let them choke on it.

  • hendrik@palaver.p3x.de · edited 4 days ago

    You could let it draft some overtime timesheets or expense claims for hallucinated business trips. Maybe a rap diss-track or rant about the boss / project. ChatGPT loves to go nuclear with these things. (Or maybe not so much if they monitor your input.)

    And why do you even ask us? Just let the AI come up with some (subtle) malicious ideas.

  • Jack@slrpnk.net · 4 days ago

    This would be very compute-intensive for the bot, and just a waste of energy.

    I just stopped using AI at work; these companies are entirely, unimaginably unprofitable. If we all stop using them for a few months, they will just implode. (Or maybe I am too hopeful?)

    • ApeHardware@lemmy.world (OP) · edited 4 days ago

      Already doing my part then. The only times I used it were at the beginning, when they were straight up tracking usage of the damn thing but not the inputs, so a few coworkers and I asked it random shit for a laugh. That was before we found out about the environmental impacts and model collapse. Right now that "AI adoption team" has been very quiet for several months, so we just do work as normal (it sucks, but at least we get paid).