Those who know their way around programming are ‘less than impressed’ with the current state of LLM AI…
AI and the Internet are garbage. So far, it’s a glorified electronic dictionary.

Image recognition and generation have been very good. The rest is a copycat approach built on various collections of data. Digital DATA, just like US Postal mail: some of it is real and required, and the rest is totally irrelevant and wasteful.

The only difference is that the dictionary was based upon REAL information (for the most part, and even then there were discrepancies). Now we have a S___show of the Wild West, with incompetence and greed trying to make binary decisions based upon telemetry, historical digital values, and the actions of poor autonomous systems fed erroneous, uncertain, poorly factored inputs and values.
I get his point, and I have to agree with him…
Sigh…
The worst part is companies trying to figure out how they can replace employees with this stuff. The voice recognition in phone trees is bad enough.
Eric S. Raymond summed it up: the current “AI” (really an LLM) is NOT a “brilliant programmer” but “that intern who has memorized EVERY manual.” Useful at times, but not genuinely creative.
That description is spot on. LLMs are great for creating a skeleton that a creative developer can then complete and make fully functional.
They are also pretty good at understanding and describing the basics of existing code.
Developers spend a lot of time on both of these things. There is a lot of boilerplate code that gets written around the stuff that really adds value and solves problems.
LLMs will kill off the “Code Monkeys,” the code-by-numbers types who are not creative thinkers (and there are a lot of them like that). The creative thinkers will get to spend more time doing the creative stuff rather than the boilerplate-type code.
I am already seeing this pan out at work.
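(For the non-coders, a rough, hypothetical illustration of that “skeleton” idea, in Python. The argument parsing and file handling below are the kind of boilerplate an LLM will happily churn out; the process_records function is a made-up placeholder for the creative part a developer still has to write.)

```python
# Hypothetical LLM-generated boilerplate: a CLI skeleton where the
# interesting logic is still left to the human developer.
import argparse
import json
import sys


def process_records(records):
    """The creative part -- the LLM stubs it out, the developer writes it."""
    raise NotImplementedError("domain logic goes here")


def main():
    parser = argparse.ArgumentParser(description="Process a JSON file of records.")
    parser.add_argument("input", help="path to a JSON file containing a list of records")
    args = parser.parse_args()

    # Boilerplate: load the input file.
    with open(args.input) as fh:
        records = json.load(fh)

    # Hand off to the part that actually solves the problem.
    try:
        results = process_records(records)
    except NotImplementedError as exc:
        sys.exit(f"not implemented yet: {exc}")

    # Boilerplate: write the results out.
    json.dump(results, sys.stdout, indent=2)


if __name__ == "__main__":
    main()
```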
Way over this Latter Day Luddite’s head.
LLMs generate grammatically correct output but there is no guarantee the output is correct, and the models are programmed to provide a response as often as possible. Hence the news reports of ‘hallucinations’ adding bogus case references to legal filings.
The only way LLMs are actually useful is if they are constrained to answer based on the contents of a database of trusted valid data, a reference library if you will. The process is called Retrieval Augmented Generation (RAG) and you will be hearing more and more about it. The challenge is to gather, validate, and maintain the reference materials.
Yes, companies that implement LLMs need Librarians if they want any confidence that the bots/agents will give useful answers.
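(For the curious, here is a bare-bones sketch of what that RAG constraint looks like in code. It is a toy: the retriever is simple keyword overlap and ask_llm is a hypothetical stand-in for whatever model API you actually use; real systems do vector search over a curated, validated document store.)

```python
# Minimal sketch of the Retrieval Augmented Generation (RAG) pattern
# described above. The "library" is just a dict of vetted reference texts;
# ask_llm() is a hypothetical placeholder for a real model API.

def retrieve(question, library, top_n=3):
    """Rank vetted documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        library.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:top_n]]


def answer(question, library, ask_llm):
    """Constrain the model to answer only from the retrieved reference material."""
    context = "\n---\n".join(retrieve(question, library))
    prompt = (
        "Answer ONLY from the reference material below. "
        "If the answer is not there, say you don't know.\n\n"
        f"Reference material:\n{context}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)


# Usage (with a stub "model" for illustration):
# library = {"policy.txt": "Refunds are issued within 30 days of purchase."}
# print(answer("When are refunds issued?", library, lambda prompt: prompt[:80] + "..."))
```

The hard part, as noted above, is gathering, validating, and maintaining that reference library in the first place.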
Like I told my grandkids who were going on about AI, “Relax, it’s just all ones and zeros.”
Tuvela- Exactly!!!
Orvan- Yes, but ‘which’ manual???
Earl- Thanks!
WSF- You’ll get it figured out…
Rick- Exactly! Unless that RAG is complete AND valid, you get GIGO… and even then, maybe…
Flugel- Yes it is!
SF writer, the late Michael F. Flynn, called AIs “Artificial Stupids” in his Firestar quadrilogy. Seems about right.
My blog traffic is up by a factor of about 15 in the last month as the competing LLMs vacuum up every word on the internet.
This too will pass.
Eyrie- No disagreement here.
ERJ- That it will. Sooner or later. The only ‘issue’ is how big the crater is going to be!
I only half-ass understand AI, have never delved into it, and have never knowingly used it, but I feel sure it has used me. With the new VA “myhealthevet” app, a picture was required to get your dotgov sign-on. I feel that soon enough, when I walk into a VA health care clinic/hospital, facial recognition will check me in. I don’t think that is cool. I suppose many younger vets are okay with it, and that is cool for them, just not for me. My gut tells me this big brother shit is creeping in at an alarming and unchecked pace, and like I said, I don’t even know the half of it.