Using Copilot as a copilot, like generating boilerplate and then code-reviewing it, is still “babysitting” it. It’s still significantly less effort than just doing it yourself, though.
Until someone uses it for a little more than boilerplate, and the reviewer nods that bit through because it’s hard to review and not the kind of thing a human, or the person who “wrote” it, would normally get wrong.
Unless all the AI-generated code is explicitly marked as AI-generated, this approach will go wrong eventually.
Undoubtedly. Hell, even when you do mark it as such, this will happen, because bugs created by humans also get deployed.
Basically what you’re saying is that code review is not a guarantee against shipping bugs.
Agreed: using LLMs for code requires you to be an experienced dev who can understand what they puke out. And for those very specific and disciplined people, it’s a net positive.
However, in general, I agree it’s more risk than it’s worth.