The AI Nothingburger gets more nothingy. But a plot twist may be coming (Revelation 13)

The fundamental failure of Artificial Intelligence (AI) is that adding more inputs that look intelligent but aren’t just makes the signal-to-noise ratio worse. But it might not simply plateau at nothing or hit a glass ceiling. An unexpected plot twist may have been foreseen by John the Evangelist almost two millennia ago.

LLMs (Large Language Models) don’t help you write. They help you generate more of the mediocre writing you need to learn not to write if you actually want to be a writer.

This applies to art, too. Design. Creative endeavours. And code is the same story.

This is Sabine Hossenfelder’s latest video on the subject, published on YouTube last night. Not surprisingly, she’s quite critical.

So I followed up on Hossenfelder in the wee hours of this morning. Here is the conclusion in the paper she is referring to:

“To summarize, we find that usage of a generative AI code suggestion tool increases software developer productivity by 26.08% (SE: 10.3%). This estimate is based on observing, partly over years, the output of almost five thousand software developers at three different companies as part of their regular job, which strongly supports its external validity.”

READ UP: The Effects of Generative AI on High Skilled Work: Evidence from Three Field Experiments with Software Developers

I worked in IT in the late eighties and nineties (until I left Norway for missions in 1999) but have kept my general skills somewhat up to date in the years since.

Back in 1988 I also translated a rather speculative book from English to Norwegian: Computers and the Beast of Revelation. (Norwegian: Computere og Dyret i Åpenbaringen, Hermon forlag.)

READ UP: Computers and the Beast of Revelation: End Times Doom and Gloom from 1985

Buy the book from Amazon: Computers and the Beast of Revelation

The book was originally written back in 1985 and was all hype and lies back then, worthy of a science fiction writer. Not quite prophecy, as it claimed the Beast to be a sentient computer already operating out of the EU capital of Brussels. In the eighties. AI was but a thought experiment, albeit the very idea that got John Hopfield and Geoffrey Hinton the Nobel Prize in Physics this year.

READ UP: The Royal Swedish Academy of Sciences has decided to award the Nobel Prize in Physics 2024

I don’t think I’m that narrative-driven. Not anymore. Age may have something to do with that.

Actually, I don’t care one way or another. Narrative or not, if AI works, it works; if it doesn’t, I’ll stop using it. But the question isn’t that simple. It’s not a black-and-white issue.

Most of the studies cited in the paper Hossenfelder shared on YouTube yesterday are more than two years old. So obviously, if our reaction is based on a model from 2019, it’s going to be quite biased towards older LLMs.

But here is another study, and it talks about how AI is writing insecure code, not necessarily code that doesn’t work.

And this would go for any output, not only code!

READ UP: PubMed Central: A systematic literature review on the impact of AI models on the security of code generation

“It reviews what security flaws of well-known vulnerabilities (e.g., the MITRE CWE Top 25 Most Dangerous Software Weaknesses) are commonly hidden in AI-generated code. It also reviews works that discuss how vulnerabilities in AI-generated code can be exploited to compromise security and lists the attempts to improve the security of such AI-generated code.”
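To make that concrete, here is a minimal sketch of the kind of flaw the review catalogues: CWE-89 (SQL injection), one of the MITRE CWE Top 25. The example is my own illustration in Python, not taken from the paper; generators frequently produce the first pattern when the second is what’s needed:

```python
import sqlite3

# Typical of insecure generated code: user input interpolated
# straight into the SQL string (CWE-89, SQL injection).
def find_user_insecure(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()  # "' OR '1'='1" dumps every row

# The safe pattern that is often missing: a parameterized query,
# where the database driver handles escaping the input.
def find_user_safe(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()
```

The insecure version works fine in a demo, which is exactly why it slips through: the vulnerability only shows up when someone feeds it hostile input.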

Here’s another similar post:

READ UP: Humans do it better: GitClear analyzes 153M lines of code, finds risks of AI

“Recall that “churn” is the percentage of code that was pushed to the repo, then subsequently reverted, removed, or updated within 2 weeks. This was a relatively infrequent outcome when developers authored all their own code: only 3-4% of code was churned between 2020 and 2022 every year. By contrast, in 2023, the numbers grew to an average of 5.5%.

“The data strongly correlates “using Copilot” with “mistake code” being pushed to the repository more frequently. If Copilot’s prevalence was 0% in 2021, 5-10% in 2022, and 30% in 2023 (as per GitHub and O’Reilly), the Pearson correlation coefficient between these variables is 0.98.

“The more churn becomes commonplace, the greater the risk of mistakes being deployed to production. If the current pattern continues into 2024, more than 7% of all code changes will be reverted within two weeks, double the rate of 2021.”
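Out of curiosity I reproduced the arithmetic behind that correlation claim. The sketch below uses hypothetical yearly figures read off the quote (the midpoint of the 5-10% range for 2022, and churn values within the stated bands); GitClear’s exact inputs aren’t published in the excerpt, so this illustrates the calculation, not their data:

```python
from statistics import correlation  # Pearson's r; Python 3.10+

# Hypothetical figures for 2021, 2022, 2023, read off the GitClear quote.
copilot_prevalence = [0.0, 7.5, 30.0]  # %, midpoint of "5-10%" for 2022
code_churn = [3.0, 4.0, 5.5]           # %, within the "3-4%" band pre-2023

r = correlation(copilot_prevalence, code_churn)
print(f"Pearson r = {r:.2f}")  # ~0.99 with these guesses; GitClear quotes 0.98
```

Worth noting: with only three data points, almost any two series that both rise will correlate strongly, so the 0.98 is suggestive rather than conclusive.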

Discussing with myself, I had a Eureka moment. Great point: beyond code security, how much AI-generated code is breaking the applications it ends up in?

We’re already seeing that the AIs are consuming the very product they’re creating: output is sourced less from human work and ever more from second- and third-degree AI. Almost nothing is really original anymore. Removing this secondary “product” also seems harder to achieve than anticipated. We’re simply not able to keep up with the speed of AI. The percentage of “copy/pasted code” is now increasing faster than “updated,” “deleted,” or “moved” code.

“In this regard, the composition of AI-generated code is similar to a short-term developer that doesn’t thoughtfully integrate their work into the broader project,” said GitClear founder Bill Harding.

READ UP: New study on coding behavior raises questions about impact of AI on software development

But it’s not black and white…

Honestly, people outside of tech and IT software development roles are really just waiting for the big “yay or nay.” We don’t have that yet, and it’s still very early days.

LLMs are changing overnight, and so is the quality of their token output.

Like me, most people aren’t pro- or anti-AI when it comes to IT; they just want the edge. For now, AI can be helpful, but it takes a tight binding of wisdom and testing.
I think the most important aspect is code security. Perhaps code will become more powerful through AI; we have time to wait. There’s simply no reason to take a strong stance at this time.

However, using AI to help generate content and truth claims about Christianity should be monitored. I’m connected to a group that is already investigating the possibility of using AI to generate a Christian encyclopaedia of “everything true”, a “Truthpedia”, but lacking a business model we’ve yet to find the necessary funding to go forward. We will, however, move forward.


For the time being…

One criticism I read on a discussion group was this:

“LLMs are a parlour trick made to impress investors. (Don’t worry, the next thing is coming soon. Any day now we will have GPT-5… We’re at 4o now, not 4.0.)

“At first, AI was helpful with things like small code snippets and common tasks. But ask it to do novel, complex tasks, and your mileage might vary substantially.

“Kind of like how some meetings could have been an email, some prompts could have been a Google/Stack Overflow search. I can’t tell you how many times I have prompted AI over and over again trying to get a better answer only to get frustrated and just google the damn thing already.

“But AI is yet to mature. It will gain powers when sentience enters the equation.”


Finally, a pivot. The day before yesterday I wrote a very short piece on AI related to the second Beast of Revelation 13.

READ UP: The second Beast may be an artificial intelligence

We’ve yet to determine what AI might mean, but we’re heading either for a plateau where AI stops producing anything really new, or for a real breakthrough that may take control away from humanity entirely.

So, behold. This may be what John the Evangelist dreamt on Patmos almost two millennia ago, when he had no ability to understand and put into words what he really observed:

“15 The second beast was given power to give breath to the image of the first beast, so that the image could speak and cause all who refused to worship the image to be killed. 16 It also forced all people, great and small, rich and poor, free and slave, to receive a mark on their right hands or on their foreheads.” Revelation 13:15-16 NIV

ALSO READ:
>> AI-Generated Code is Causing Outages and Security Issues in Businesses
>> When AI-produced code goes bad

Front page image generated by AI/@dantesadig