• 0 Posts
  • 2.2K Comments
Joined 2 years ago
Cake day: June 16th, 2023

  • My understanding is that the train system and automotive sector are kind of opposite.

    For automotive, the government does the roads and private industry does the vehicles.

    Conversely, the rails are largely private industry excluding Amtrak, and Amtrak is mostly responsible for the trains, with its government-granted monopoly on passenger rail.

    It’s part of what really limits passenger rail: the companies that own the rails mostly care about freight to and from places like ports, and see negligible value in rail between population centers. Also, Amtrak has to suck it up if a line is busy (that wasn’t supposed to be the case, but cargo operators were allowed to make trains too long to fit on passing sidings, so they can’t get out of the way as they were legally required to).




  • Being able to just cut off access to the application means a customer has little choice.

    For a competitor to pass them, they first have to catch up. To catch up, the customer needs to be able to extract their data from the application to give the competition a chance. If a competitor gets close to catching up, they tend to get bought out. Lots of speed bumps to discourage competition. Also, to get funding, those competitors pretty much have to promise investors that they will also do “as a service”.

    For assets versus expenses, I see a pendulum, largely driven by how appreciation/depreciation pans out versus acquisition cost and loan interest rates, as well as uncertain startup versus steady business. I’m not sure software is giving customers enough choice in the matter to let that pendulum swing.


  • While “any” is a bit much, I do anticipate a rather dramatic decline.

    One is that there is a large chunk of programming jobs that I do think LLMs can displace. Think of those dumb, unimaginative mobile games that bleed a few dollars a week out of folks; I think LLMs have a good chance at cranking those out. If you’ve seen companies with utterly trivial yet somehow subtly unique internal applications, LLMs can probably crank out a lot of those too. There’s a lot of trivial stuff that has been done a million times before that still gets done by people.

    Another is that a lot of software teams have overhired anyway. Business folk think more developers means better results, so they hire their way toward success for as long as their funding permits. That isn’t how programming really works, but the explanation that fewer people can sometimes do more than more people is too counter-intuitive to crack through. AI offers a rationalization for a lot of those folks to finally arrive at the efficient conclusion.

    Finally, the software industry has largely converted transactional purchases into subscriptions. With a perpetual license, you needed to provide some value to give the customer who bought from you 5 years ago a reason to upgrade. With a subscription model, you just have to coast and keep the lights on for those customers, often with effective lock-in of the customer’s data to make it extra hard or impossible for them to jump to a competitor: even if a competitor could reverse-engineer your proprietary formats, the customer might not even be able to download their actual data files. So a company that achieved “good enough” on a subscription model might severely curtail investment, because it makes no difference to their bottom line whether they are delivering awesome new capability or the same old same old. Anticipate a lot of stagnation as they shuffle around things like design language to give a feeling of progress while things just kind of plateau.


  • I think AI is a component of the decline.

    For decades, companies have operated under the misunderstanding that more software developers equals more success, despite countless works explaining that’s not how it works. As a result, many of these companies employed an order of magnitude more developers than they probably should have and got worse results than they would have otherwise. But the fact that they got subpar results with 10x a sensible headcount just convinced them that they hadn’t hired enough; the idea that a smaller team produces better results made zero sense to them.

    So now the AI companies come along and offer a plausible rationalization to decrease team size. Even if the LLM hypothetically provides zero direct value, the reduced teams start yielding better results, because shrinking mitigates the usual problems: making sure everyone is utilized, making sure the cheap, unqualified offshored programmers are delivering value, communicating and planning, and reaching consensus among a set of people who might all have viable approaches but devolve into arguments over which way to go.

    AI gives them a rationalization to do what they should have done from the outset.