For the sake of benefit of the doubt, it’s possible to simultaneously understand the thesis of the article, and to hold the opinion that AI doesn’t lead to higher-quality products. That would likely involve agreeing with the premise that laying off workers is a bad idea, but disagreeing (at least partially) with the reasoning why it’s a bad idea.
I get what you’re saying, but the problem is that AI seems to need way more hand holding and double checking before it can be considered ready for deployment.
I’ve used Copilot for Ansible/Terraform code and 40-50% of the time it’s just… wrong. It looks right, but it won’t actually function.
For easy, entry programs it’s fine, but I wouldn’t (and don’t) let it near complex projects.
I’ve seen similar issues with Ansible and Terraform. It’s much better with more traditional languages, though: it works great with core Go, Python, Java, Kotlin, etc. YMMV with some libraries as well. I think it’s mostly down to the amount of training data.
It’s not about writing easy entry-level programs; it’s about writing code robustly.
Writing test code where tests are isolated from each other, cover every edge case, and exercise every line of code is tedious but pays dividends. AI makes it far less tedious to write that test code and practice proper test-driven development.
A well-run dev team with enough senior people, one that manages the change properly, should increase in velocity if they’re already writing robust code, and increase in code quality if they’re not.
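To make the "isolated tests covering edge cases" point concrete, here is a minimal sketch in Python using pytest conventions. The `slugify` helper and all test names are hypothetical, invented purely for illustration; the point is that each test constructs its own input and asserts one behaviour, so tests can run and fail independently of one another.

```python
import re

def slugify(title: str) -> str:
    # Hypothetical helper under test: lowercase the input, collapse runs
    # of non-alphanumeric characters into single hyphens, trim edge hyphens.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Each test below is self-contained: no shared state, no ordering
# dependencies, and each covers a distinct edge case.

def test_basic_title():
    assert slugify("Hello World") == "hello-world"

def test_collapses_punctuation_runs():
    assert slugify("C++ & Go: a comparison!") == "c-go-a-comparison"

def test_empty_string_edge_case():
    assert slugify("") == ""

def test_whitespace_only_edge_case():
    assert slugify("   ") == ""
```

Tedious to write by hand at scale, but exactly the kind of boilerplate-heavy, pattern-following code an assistant tends to generate well, and cheap to review because each test is small and independent.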
AI makes it far less tedious to write out that test code […]
Completely disagree.
In my experience, LLMs constantly generate bad code that needs to be thoroughly checked, to the point that writing by hand is more practical.
We use Copilot literally every day and it’s extremely helpful; not a single developer at our company disagreed on the most recent adoption survey.
Maybe you’re trying to use it to do too much, or in the wrong way?
Read the article before commenting.
The literal entire thesis is that AI should maintain developer headcounts and just let them be more productive, not reduce headcount in favour of AI.
The irony is that you’re putting in less effort and critical thought into your comment than an AI would.