

Yes, unless that service is the kind of thing you think you might pick up later.
For instance, you might use LinkedIn to find a job, but it’s still something you may need in the future: it’s unlikely you’ll hold that one job forever, and intermittently posting during your current job could actually help your future prospects.
By contrast, if you used a random site to create a fancier resume, yeah, that account can go straight into the digital wastebasket when you’re done with it. You can always make a new account if you need a new resume, and it probably won’t rely on your old account’s data to get the job done.
To be fair, I do believe their research was based on how convincing it was compared to other Reddit commenters, rather than, say, someone who actually does this work for a government propaganda arm, with the training and skill set to distribute propaganda effectively.
Their assessment of how “convincing” the comments were also seems to have been based on upvotes. If I know anything about how people use social media, and especially Reddit, upvotes are often given after only a partial read, with people scrolling past before finishing the comment. The bots may not have optimized for convincing people so much as for making the first part of a comment feel upvote-able, while the rest was largely ignored. I’d want to see more research on this, of course, since it seems like a major flaw in how they assessed outcomes.
This, of course, doesn’t change the fact that AI models are often much cheaper to run than the salaries of human beings.