• 2 Posts
  • 635 Comments
Joined 2 years ago
Cake day: July 5th, 2023


  • Longer queries give better opportunities for error correction, like searching for synonyms and misspellings, or applying the right context clues.

    In this specific example, “is Angelina Jolie in Heat” gives better results than “Angelina Jolie heat,” because the words that make it a complete question are also the words that confirm the searcher is asking about the movie.

    Especially with negative results, like when you ask a question whose answer is no, the semantic links in the index can sometimes get the search engine to surface the specific mistaken assumption you’ve made.
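
    None of this is how a real engine’s pipeline actually works, but here’s a toy sketch in Python of the kind of error correction a longer query enables: snapping misspelled terms to a known vocabulary and expanding synonyms before anything gets scored. The vocabulary, synonym table, and cutoff are made up for illustration.

    ```python
    from difflib import get_close_matches

    # Toy vocabulary and synonym table -- purely illustrative, not a real index.
    VOCAB = {"angelina", "jolie", "heat", "movie", "film", "cast"}
    SYNONYMS = {"film": {"movie"}, "movie": {"film"}}

    def normalize_query(query: str) -> list[str]:
        """Correct likely misspellings and expand synonyms for each query term."""
        terms = []
        for word in query.lower().split():
            # Snap a probable misspelling to the closest known vocabulary entry.
            match = get_close_matches(word, VOCAB, n=1, cutoff=0.75)
            corrected = match[0] if match else word
            terms.append(corrected)
            # More words in the query means more chances to expand useful context.
            terms.extend(SYNONYMS.get(corrected, ()))
        return terms

    print(normalize_query("is angelena jolie in heat the movei"))
    # ['is', 'angelina', 'jolie', 'in', 'heat', 'the', 'movie', 'film']
    ```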


  • GamingChairModel@lemmy.world to Lemmy Shitpost@lemmy.world · In heat · 5 days ago

    “Why do people Google questions anyway?”

    Because it gives better responses.

    Google and all the other major search engines have built-in functionality to perform natural language processing on the user’s query and on the text in their indexes, to return results more precisely aligned with what the user is after, or to recommend related searches.

    If the functionality is there, why wouldn’t we use it?
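
    Not how Google does it internally, obviously, but here’s a peek at off-the-shelf query-side NLP using spaCy. This assumes the library and its small English model are installed, and the exact tags the small model assigns to this particular query aren’t guaranteed.

    ```python
    # pip install spacy && python -m spacy download en_core_web_sm
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("is Angelina Jolie in Heat")

    # Named entities give an engine hints: a PERSON next to a capitalized,
    # title-like token is a signal that "Heat" means the movie, not the weather.
    for ent in doc.ents:
        print(ent.text, ent.label_)

    # Part-of-speech tags (an auxiliary verb up front, for instance) hint that
    # the query is phrased as a question rather than a bag of keywords.
    print([(token.text, token.pos_) for token in doc])
    ```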


  • GamingChairModel@lemmy.world to Lemmy Shitpost@lemmy.world · In heat · 5 days ago

    Search engine algorithms are way better than in the 90s and early 2000s, when they did naive keyword search, completely unweighted by word order in the search string.

    So the tricks we learned, of paring a query down to the bare minimum for the most precise search behavior, no longer apply the same way. Now a search for two words will add weight to results that have the two words as a phrase, some weight to results with the two words close together in the same sentence, and still some weight to each individual word on its own.

    More importantly, when a single word has multiple meanings, the search engines all use the rest of the search as an indicator of which meaning the searcher intends. “Heat” is a really broad word with lots of meanings, and the rest of the search helps inform the algorithm of which one the user wants.
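
    To make that weighting concrete, here’s a deliberately dumb scorer (nothing like production ranking code): an exact phrase match counts most, all the words landing in one sentence counts next, and each individual word still counts a little. The example documents are made up.

    ```python
    import re

    def score_document(query: str, document: str) -> float:
        """Toy relevance score: phrase match > words in one sentence > individual words."""
        q_words = query.lower().split()
        doc = document.lower()
        score = 0.0

        # Strongest signal: the query appears verbatim as a phrase.
        if query.lower() in doc:
            score += 5.0

        # Medium signal: every query word shows up in the same sentence.
        for sentence in re.split(r"[.!?]", doc):
            if all(word in sentence for word in q_words):
                score += 2.0
                break

        # Weakest signal: each individual word (substring match, for simplicity).
        score += sum(1.0 for word in q_words if word in doc)
        return score

    docs = [
        "Fan wiki: Angelina Jolie Heat casting rumors, debunked.",
        "Angelina Jolie talked about the heat on the set of her new film.",
        "Angelina Jolie has a long filmography. The 1995 movie Heat starred Pacino and De Niro.",
    ]
    for d in docs:
        print(score_document("angelina jolie heat", d), d)
    # 10.0, 5.0, and 3.0 respectively
    ```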


  • Browsers are configured to pass off tel: links to a designated handler. In Safari on Mac, the default handler is FaceTime, or at least it was for a while.

    On a mobile phone, most browsers just open the phone/dialer app to handle tel: links. In Chrome, back when I had Google Voice, I had it configured to place Google Voice calls (including, for a time, through Google Hangouts as the interface for my Google Voice account).



  • “Which is such a high dollar count that this simply cannot be USD”

    So I haven’t used Windows on my own machines in about 20 years, but back when I built my own PCs, that seemed about right. So I looked up the price history, and I hadn’t realized that Microsoft reduced its license prices around Windows 8.

    I remember 20 years ago, Windows XP Home was $199 and Professional was $299 for a new license on a new computer. Vista and 7 were similarly priced.

    Since Windows 8, though, I just don’t understand their pricing or licensing terms.


  • I think back to the late 90’s investment in rolling out a shitload of telecom infrastructure, with a bunch of telecom companies building out lots and lots of fiber. And perhaps more important than the physical fiber, the poles and conduits and other physical infrastructure housing that fiber, so that it could be improved as each generation of tech was released.

    Then, in the early 2000’s, that industry crashed. Nobody could make their loan payments on the things they paid billions to build, and it wasn’t profitable to charge people for the use of those assets while paying interest on the money borrowed to build them, especially after the dot com crash where all the internet startups no longer had unlimited budgets to throw at them.

    So thousands of telecom companies went into bankruptcy and sold off their assets. Those fiber links and routes still existed, but nobody turned them on. Google quietly acquired a bunch of “dark fiber” in the 2000’s.

    When the cloud revolution happened in the late 2000’s and early 2010’s, the telecom infrastructure was ready for it. The companies that built that stuff weren’t still around, but the stuff they built finally became useful. Not at the prices paid for it, but when purchased in a fire sale, those assets could be profitable again.

    That might happen with AI. Early movers overinvest and fail, leaving what they’ve developed to be used by whoever survives. Maybe the tech never becomes worth what was paid for it, but once it’s built, whoever buys it cheap might be able to profit at that lower price, and it might prove useful at a more modest, realistic scope.


  • For example, as a coding assistant, a lot of people quite like them. But as a replacement for a human coder, they’re a disaster.

    New technology is best when it can meaningfully improve the productivity of a group of people so that the group can shrink. The technology doesn’t take any one identifiable job, but now an organization of 10 people, properly organized in a way conscious of that technology’s capabilities and limitations, can do what used to require 12.

    A forklift and a bunch of pallets can make a warehouse more efficient, when everyone who works in that warehouse knows how the forklift is best used, even when not everyone is a forklift operator themselves.

    Same with a white collar office where there’s less need for people physically scheduling things and taking messages, because everyone knows how to use an electronic calendar and email system for coordinating those things. There might still be need for pooled assistants and secretaries, but maybe not as many in any given office as before.

    So when we need an LLM to chip in and reduce the amount of time a group of programmers needs in order to put out a product, the manager of that team, and all the members of that team, need to have a good sense of what that LLM is good at and what it isn’t. Obviously autocomplete has been a productivity enhancer since long before LLMs were around, and extensions of that general concept may be helpful for the more tedious or repetitive tasks, but any team that uses it will need to use it with full knowledge of its limitations and of where it best supplements the humans’ own tasks.

    I have no doubt that some things will improve and people will find workflows that leverage the strengths while avoiding the weaknesses. But it remains to be seen whether it’ll be worth the sheer amount of cost spent so far.



  • I’m pretty sure every federal executive agency has been on Active Directory and Exchange for like 20+ years now. The courts migrated off of IBM Domino/Notes about 6 or 7 years ago, onto MS Exchange/Outlook.

    “What we used when I was there 20 years ago was vastly more secure because we rolled our own encryption”

    Uh, that’s now understood not to be best practice, because rolling your own crypto tends to be quite insecure; the usual advice now is to lean on a vetted library instead (see the sketch after this comment).

    Either way, Microsoft’s enterprise ecosystem is pretty much the default at all large organizations, and they have (for better or for worse) convinced almost everyone that the total cost of ownership is lower with MS-administered cloud stuff than with any kind of non-MS system for identity/user management, email, calendar, video chat, and instant messaging. Throwing in Word/Excel/PowerPoint is just icing on the cake.
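
    To be concrete about the “don’t roll your own” point above: this isn’t a claim about what that agency actually ran, just a minimal sketch of the usual alternative, reaching for a vetted library’s authenticated encryption and letting its defaults do the hard part. Assumes the third-party cryptography package is installed.

    ```python
    # pip install cryptography
    from cryptography.fernet import Fernet

    # Fernet is authenticated symmetric encryption with defaults picked by people
    # who do this for a living -- the opposite of a home-rolled cipher.
    key = Fernet.generate_key()      # in real life, keep this in a secrets manager
    f = Fernet(key)

    token = f.encrypt(b"routine interoffice memo")
    print(f.decrypt(token))          # b'routine interoffice memo'
    ```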


  • “actual image your camera sees” is a term that is hard to define with astrophotography, because it’s kinda hard to define with regular digital photography, too.

    The sensor collects raw data at each pixel: the light that makes it past that pixel’s color filter excites electrons on that particular pixel, and that signal gets processed on the image processing chip, where each pixel is assigned a color and values can get added together into larger effective pixels in the final image.

    So what does a camera “see”? It depends on how the lenses and filters in front of that sensor are set up, on how susceptible that sensor is to electrical noise, and on how long it’s configured to look for each frame. Many of these sensors are sensitive to a wide range of light wavelengths, so the filter determines whether any particular pixel sees red, blue, or green light. Some get configured to filter out all but ultraviolet or infrared wavelengths, at which point the camera can “see” what the human eye cannot.

    A long exposure can collect light over a long period of time to reveal even very faint sources, at least against a dark background.

    There are all sorts of mechanical tricks at that point. Image stabilization tries to keep the beams of focused light stabilized on the sensor, and may compensate for movement with some offsetting movement, so that the pixel is collecting light from the same direction over the course of its entire exposure. Or, some people want to rotate their camera along with the celestial subject, a star or a planet they’re trying to get a picture of, to compensate for the Earth’s rotation over the long exposure.

    And then there are computational tricks. Just as you might physically move the sensor or lens to compensate for motion, you can instead process the incoming sensor data with the knowledge that a particular subject’s light will hit multiple pixels over time, and add it together in software rather than at the sensor’s own charged pixels (a toy stacking sketch follows this comment).

    So astrophotography is just an extension of normal photography’s use of filtering out the wavelengths you don’t want, and processing the data that hits the sensor. It’s just that there needs to be a lot more thought and configuration of those filters and processing algorithms than the default that sits on a typical phone’s camera app and hardware.
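
    Here’s a toy version of that software trick, assuming NumPy: lots of short, noisy exposures of the same patch of sky get shifted back into alignment and averaged, so a faint source climbs out of the noise without any single long exposure. The drift, star position, and noise levels are all made up for the sketch.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def fake_frame(shift: int, size: int = 64) -> np.ndarray:
        """One simulated short exposure: a faint star drifting across a noisy sensor."""
        frame = rng.normal(0.0, 1.0, size=(size, size))   # sensor noise, sigma = 1
        frame[32, 20 + shift] += 2.0                       # faint star, barely above the noise
        return frame

    # Each frame catches the star on a slightly different pixel (Earth's rotation, drift, ...).
    frames = [fake_frame(shift) for shift in range(16)]

    # "Stacking": undo the known/estimated shift of each frame, then average them.
    aligned = [np.roll(f, -shift, axis=1) for shift, f in enumerate(frames)]
    stacked = np.mean(aligned, axis=0)

    # Averaging 16 frames knocks the noise down by roughly 4x while the star stays put.
    print("single-frame star/noise:", frames[0][32, 20] / frames[0].std())
    print("stacked star/noise:", stacked[32, 20] / stacked.std())
    ```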








  • “Therefore, I think they’d get out a microscope and oscilloscope and start trying to reverse-engineer it. Probably speed up the development of computer technology quite a bit, by giving them clues on what direction to go.”

    Knowing what something is doesn’t necessarily teach people how it was made. No matter how much you examine a sheet of printed paper, someone with no conception of a laser printer would not be able to derive that much information about how something could have produced such precise, sharp text on a page. They’d be stuck thinking about movable metal type dipped in ink, not lasers burning powdered toner onto a page.

    If you took a modern finFET chip from, say, the TSMC 5nm process node, and gave it to electrical engineers of 1995, they’d be really impressed with the physical three-dimensional structure of the transistors. They could probably envision how computers make it possible to design those chips. But they’d have had no conception of how to produce EUV light at the wavelengths necessary to make photolithography possible at those sizes. No amount of examination of the chip itself will reveal the secrets of how it was made: very bright lasers fired at an impossibly precise stream of liquid tin droplets, highly polished mirrors that focus that EUV radiation onto the silicon, masks that define the two-dimensional planar pattern, and then advanced techniques for lining up those two-dimensional features into a three-dimensional stack.

    It’s kinda like how we still don’t know exactly how Roman concrete or Damascus steel was made. We can make better concrete and steel today, but we haven’t been able to reverse-engineer how those materials were made in ancient times.


  • Do you have a source for AMD chips being especially energy efficient?

    I remember reviews of the HX 370 commenting on that. Problem is that chip was produced on TSMC’s N4P node, which doesn’t have an Apple comparator (M2 was on N5P and M3 was on N3B). The Ryzen 7 7840U was N4, one year behind that. It just shows that AMD can’t get on a TSMC node even within a year or two of Apple.

    Still, I haven’t seen anything really putting these chips through their paces and actually measuring real-world energy usage across a variety of benchmarks. And benchmarks themselves only correlate to specific ways that computers are used, and aren’t necessarily supported on all hardware or OSes, so it’s hard to get a real comparison.

    “SoCs are inherently more energy efficient”

    I agree, but that’s a separate issue from the instruction set. The AMD HX 370 is a SoC (well, technically a SiP, since the pieces are all packaged together but not actually printed on the same piece of silicon).

    And in terms of actual chip architectures, as you allude to, the design dictates how specific instructions are processed. That’s why the RISC-versus-CISC distinction is basically obsolete. These chip designers are making engineering choices about how much silicon area to devote to specific functions, based on their modeling of how that chip might be used: multithreading, different cores optimized for efficiency or performance, speculative execution, various specialized tasks like hardware-accelerated video or cryptography or AI, and so on, and then deciding how that all fits into the broader chip design.

    Ultimately, I’d think the main reason something like x86 would die off is licensing, not anything inherent to the instruction set architecture.