

From the appendix:
TOTAL, COMPLETE, AND ABSOLUTE QUANTUM TOTAL ULTIMATE BEYOND INFINITY QUANTUM SUPREME LEGAL AND FINANCIAL NUCLEAR ACCOUNTABILITY
This week the WikiMedia Foundation tried to gather support for adding LLM summaries to the top of every Wikipedia article. The proposal was overwhelmingly rejected by the community, but the WMF hasn't gotten the message, saying that the project has been "paused". It sounds like they plan to push it through regardless.
The actual pathfinding algorithm (which is surely just A* search or similar) works just fine; the problem is the LLM which uses it.
I like how all of the currently running attempts have been equipped with automatic navigation assistance, i.e. a pathfinding algorithm from the 60s. And that's the only part of the whole thing that actually works.
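For anyone curious what that 60s-era navigation assistance amounts to: A* search over a grid. This is a minimal sketch under my own assumptions (4-connected grid, unit step costs, Manhattan heuristic), not the code any of the actual harnesses use:

```python
import heapq

def astar(start, goal, walls, width, height):
    """Minimal A* on a 4-connected grid with unit step costs.

    start, goal: (x, y) tuples; walls: set of blocked (x, y) cells.
    Returns the shortest path as a list of cells, or None if unreachable.
    """
    def h(p):
        # Manhattan distance: admissible for 4-connected unit-cost moves.
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start)]  # (f = g + h, g, node)
    best_g = {start: 0}
    came_from = {}
    while frontier:
        _, g, cur = heapq.heappop(frontier)
        if cur == goal:
            # Walk the came_from links back to start.
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        if g > best_g[cur]:
            continue  # stale queue entry; a cheaper route was found later
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= nx < width and 0 <= ny < height and nxt not in walls:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
    return None  # frontier exhausted: goal unreachable
```

With an admissible heuristic like this one, the first time the goal is popped the path is optimal, which is why the algorithm part "just works" regardless of what sits on top of it.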
levels of glazing previously unheard of
The multiple authors thing is certainly a joke; it's a reference to the (widely accepted among scholars) theory that the Torah was compiled from multiple sources with different authors.
I'm not sure what you mean by your last sentence. All of the actual improvements to omega were invented by humans; computers have still not made a contribution to this.
Yes - on the theoretical side, they do have an actual improvement, which is a non-asymptotic reduction in the number of multiplications required for the product of two 4x4 matrices over an arbitrary noncommutative ring. You are correct that the implied improvement to omega is moot since theoretical algorithms have long since reduced the exponent beyond that of Strassen's algorithm.
From a practical side, almost all applications use some version of the naive O(n^3) algorithm, since the asymptotically better ones tend to be slower in practice. However, occasionally Strassen's algorithm has been implemented and used - it is still reasonably simple after all. There is possibly some practical value to the 48-multiplications result then, in that it could replace uses of Strassen's algorithm.
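For reference, here is the 2x2 base case of Strassen's algorithm that the recursion is built on: seven multiplications instead of eight, using only ring addition, subtraction, and multiplication (no commutativity), which is why the same identities apply to block matrices. A quick sketch in Python (my own variable naming):

```python
def strassen_2x2(A, B):
    """Strassen's 7-multiplication product of two 2x2 matrices.

    A and B are ((a, b), (c, d)) tuples; entries may come from any
    ring - no commutativity of multiplication is assumed.
    """
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    # The seven Strassen products.
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    # Recombine into the four entries of A @ B.
    return ((p5 + p4 - p2 + p6, p1 + p2),
            (p3 + p4, p1 + p5 - p3 - p7))
```

Applied recursively to 2x2 block matrices, this gives the O(n^log2(7)) bound; the 48-multiplication 4x4 result would slot in the same way, with 48 < 49 = 7^2 multiplications per two levels of recursion.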
I think this theorem is worthless for practical purposes. They essentially define the "AI vs learning" problem in such general terms that I'm not clear on whether it's well-defined. In any case it is not a serious CS paper. I also really don't believe that NP-hardness is the right tool to measure the difficulty of machine learning problems.
As technology advanced, humans grew accustomed to relying on the machines.
honestly the only important difference between them is that emacs's default keybindings can and will give you a repetitive stress injury (ask me how i know…)
Apparently MIT is teaching a vibe coding class:
How will this yearās class differ from last yearās? There will be some major changes this year:
- Units down from 18 to 15, to reflect reduced load
- Grading that emphasizes mastery over volume
- More emphasis on design creativity (and less on ethics)
- Not just permission but encouragement to use LLMs
- A framework for exploiting LLMs in code generation
i might try writing such a post!
When people compile compilers do they actually specialize a compiler to itself (as in definition 3 in the paper) as one of the steps? That's super interesting if so, I had no idea. My only knowledge of bootstrapping compilers is simple sequences of compilers that work on increasing fragments of the language, culminating with the final optimizing compiler being able to compile itself (just once).
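I don't have the paper's definition 3 in front of me, but the general flavor of "specialization" can be sketched with a toy example: specializing an interpreter to one fixed program (the first Futamura projection) does the program-dependent dispatch once, up front, yielding something compiler-output-like. Everything below (the tiny interpreter and the `specialize` function) is hypothetical and purely illustrative:

```python
def interpret(program, x):
    """Toy interpreter: a program is a list of ('add', k) / ('mul', k)
    steps applied in order to the input x."""
    for op, k in program:
        if op == 'add':
            x += k
        elif op == 'mul':
            x *= k
    return x

def specialize(program):
    """A crude 'mix' for this interpreter: resolve the instruction
    dispatch once, returning a closure that only does the arithmetic."""
    steps = []
    for op, k in program:
        if op == 'add':
            steps.append(lambda x, k=k: x + k)   # dispatch resolved here
        elif op == 'mul':
            steps.append(lambda x, k=k: x * k)
    def compiled(x):
        for step in steps:
            x = step(x)
        return x
    return compiled

prog = [('add', 3), ('mul', 2)]
f = specialize(prog)  # a "compiled" version of prog
assert f(5) == interpret(prog, 5)  # same behavior, dispatch done once
```

Specializing the specializer to itself would then be the self-application step the question is about, but whether real compiler bootstraps ever do that literally is exactly what I'm asking.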
I've been using Anki; it works great but requires you to supply the discipline and willingness to learn yourself, which might not be possible for kids.
Writing "My Immortal" in 2006, when nothing quite like it had ever been written before, is a (possibly unintentional) stroke of genius. Writing "My Immortal" after it's already been written is worthless.
are we really clutching our pearls because someone named themselves after a demon
ok but what does this mean for Batman vs Lex Luthor
I did yes :)
You're totally misunderstanding the context of that statement. The problem of classifying an image as a certain animal is related to the problem of generating a synthetic picture of a certain animal. But classifying an image as a certain animal is totally unrelated to generating a natural-language description of "information about how to distinguish different species". Moreover, we know empirically that these LLM-generated descriptions are highly unreliable.