• 0 Posts
  • 38 Comments
Joined 10 months ago · Cake day: June 26th, 2024

  • svtdragon@lemmy.world to Technology@lemmy.world · *Permanently Deleted* · +27/−2 · edited 18 days ago

    I just spent about a month using Claude 3.7 to write a new feature for a big OSS product. The change ended up being roughly 6k LOC, plus about 14k lines of tests, added to an existing codebase with an existing test framework for reference.

    For context I’m a principal-level dev with ~15 years experience.

    The key to making it work for me was treating it like a junior dev. That includes priming it (“accuracy is key here; we can’t swallow errors, we need to fail fast where anything could compromise it”) as well as making it explain itself, show architecture diagrams, and reason based on the results.
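
    Concretely, the priming can just live in the system prompt. A minimal sketch of what I mean, assuming the Anthropic Python SDK (the model string, prompt wording, and task here are illustrative, not the actual project):

    ```python
    # Sketch of "priming" via a system prompt (Anthropic Python SDK).
    # Model string, prompt text, and task are illustrative assumptions.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.messages.create(
        model="claude-3-7-sonnet-20250219",
        max_tokens=2048,
        system=(
            "You are pairing on a production codebase. Accuracy is key here: "
            "we can't swallow errors; we need to fail fast where anything "
            "could compromise correctness. Before writing code, explain your "
            "approach and how it fits the existing layered architecture."
        ),
        messages=[
            {"role": "user",
             "content": "Add the new sync feature to the service layer. Plan first."},
        ],
    )
    print(response.content[0].text)
    ```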

    After every change there’s always a pass of “okay but you’re violating the layered architecture here; let’s refactor that; now tell me what the difference is between these two functions, and shouldn’t we just make the one call the other instead of duplicating? This class is doing too much, we need to decompose this interface.” I also started a new session, set its context with the code it just wrote, and had it tell me about assumptions the code base was making, and what failure modes existed. That turned out to be pretty helpful too.
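
    The fresh-session audit works the same way: start a conversation with no prior context, paste in the new code, and interrogate it. Another sketch; the file path and prompt wording are hypothetical:

    ```python
    # Sketch of the fresh-session review pass: feed the just-written code to
    # a new conversation (no prior context) and ask for assumptions and
    # failure modes. File path and prompt wording are hypothetical.
    import anthropic

    client = anthropic.Anthropic()

    with open("src/sync/service.py") as f:  # hypothetical: the code it just wrote
        new_code = f.read()

    audit = client.messages.create(
        model="claude-3-7-sonnet-20250219",
        max_tokens=2048,
        messages=[{
            "role": "user",
            "content": (
                "Here is code recently added to our codebase:\n\n"
                + new_code
                + "\n\nWhat assumptions does it make about the rest of the "
                  "system, and what are the failure modes if each assumption "
                  "is wrong?"
            ),
        }],
    )
    print(audit.content[0].text)
    ```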

    In my experience it was actually kinda fun. I’d say it made me about twice as productive.

    I would not have said this a month ago. Up until this project, I only had stupid experiences with AI (Gemini, GPT).


  • As the primary author of my previous org’s GitHub Actions (GHA) workflows (not GH Enterprise, just the team tier), I found some feature gaps compared to org[n-2]'s Jenkins, but they were filled fairly quickly.

    I was initially skeptical, but it wasn’t more than a month or two before I was just glad to be off Jenkins. And now that I’m back at a big org with a big Jenkins footprint, I really miss GHA.

    Having everything be contextual in the same place is a huge value add for me.

  • According to some cursory research (read: Google), obstacle avoidance uses ML to identify objects, and uses those identities to predict their behavior. That stage leaves room for the same unpredictability, doesn’t it? Say you only have 51% confidence that a “thing” is a pedestrian walking a bike, 49% that it’s a bike on the move. The former has right of way and the latter doesn’t. Or even 70/30. 90/10.

    There’s some level at which you have to set the confidence threshold to choose a course of action, and you’ll be subject to some ML-derived unpredictability as confidence fluctuates around it… right?
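
    To make that concrete, here’s a toy sketch of the failure mode I’m describing; the class name, threshold, and actions are made up, not any vendor’s actual pipeline:

    ```python
    # Toy sketch: a hard confidence threshold turns classifier jitter into
    # flip-flopping behavior. Names, cutoff, and actions are hypothetical.
    from dataclasses import dataclass

    PEDESTRIAN_THRESHOLD = 0.5  # above this, treat the object as a pedestrian

    @dataclass
    class Detection:
        p_pedestrian_with_bike: float  # classifier confidence in [0, 1]

    def plan_action(det: Detection) -> str:
        # Right-of-way logic keyed to one thresholded identity: a 51/49
        # split and a 90/10 split produce the same hard decision, and
        # jitter around the cutoff flips the behavior frame to frame.
        if det.p_pedestrian_with_bike >= PEDESTRIAN_THRESHOLD:
            return "yield"    # pedestrian walking a bike: has right of way
        return "proceed"      # bike on the move: normal right-of-way rules

    # Successive frames with confidence fluctuating around the threshold:
    for p in (0.51, 0.49, 0.70, 0.30):
        print(f"confidence={p:.2f} -> {plan_action(Detection(p))}")
    ```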