March 24, 2026

ReHacked vol. 363: Billion-Parameter Theories, Chuck Norris Dead and more

Support the ReHacked newsletter with a one-time donation. Thank you very much!

"A good friend is like a four-leaf clover, hard to find and lucky to have." --Irish Proverb

Billion-Parameter Theories – Sean Linehan #ai

AI-generated TL;DR below.

Abstract

The article argues that many of humanity’s hardest problems are genuinely complex rather than merely complicated, so they resist the small, elegant theories that worked in classical science. Modern AI models, especially large neural networks, offer a new “medium of theory” capable of representing these complex systems, with mechanistic interpretability emerging as a potential true science of complexity.

Main idea

The main idea is that for genuinely complex systems (like economies, climates, and bodies), the most compressed workable theories may inherently require billions of parameters. Instead of expecting short equations, we should treat large trained models as operational theories of specific systems and study their architectures and internals to uncover more compact, general principles of complexity.

Key points

  • Classical science succeeded with terse theories (like F=ma and E=mc^2) because it mainly tackled complicated systems that were decomposable and human-comprehensible.
  • Complex systems (poverty, climate change, addiction, markets, ecosystems, immune systems) involve dynamic interactions, feedback loops, and reflexivity that make them resistant to decomposition and simple laws.
  • Institutions like the Santa Fe Institute identified common patterns in complex systems (power laws, self-organized criticality, phase transitions) but mostly produced descriptive, not prescriptive, tools for intervention.
  • Historically, practice has often preceded theory (blacksmiths before metallurgy, cathedral-building before structural engineering, breeding before genetics), and the author claims modern AI is a similar “practice first” phase for complexity.
  • Earlier complexity models (e.g., SFI’s artificial markets and genetic algorithms) failed to become broadly operable partly because the needed theories were too large for human-scale tools and memory.
  • The core claim: for many complex systems, the most compressed possible theory may still be enormous—billions of parameters—so only computers and large models can hold and use them.
  • Large language models are framed as compressed theories of human language use and its underlying cognition and culture: lossy but useful models that enable prediction and counterfactual simulation.
  • David Deutsch’s criterion that good explanations must be compact, general, and hard to vary seems to conflict with billion-parameter models, which look large and parochial.
  • The author resolves this tension by distinguishing between:
    • The architecture (compact, general structure like transformers), and
    • The weights (large, system-specific parameters learned from data).
  • Model architectures can be written on a few pages yet can learn language, protein folding, or weather, suggesting that this architecture-level description may have the kind of “reach” Deutsch demands.
  • The “physics of complexity” may therefore be about describing what structures (architectures) can learn arbitrary complex systems, not about writing closed-form laws of those systems themselves.
  • Work like Karpathy’s nanoGPT is framed as a search for the minimal architecture that still has this universal learning capability, stripping away everything non-essential.
  • Mechanistic interpretability is cast as a new methodology for complexity science: by dissecting trained networks (via ablation, activation analysis, circuit tracing), we treat models as specimens to study internal representations of complex phenomena (a minimal ablation sketch follows this list).
  • Theory can then be extracted from the compression: we first train a large model to capture behavior, then mine its internal structure for more compact truths about the underlying system.
  • If this framing is correct, many “intractable” problems (chronic disease, addiction, poverty, climate) were simply too complex for human-only theoretical media, but now become approachable via large models.
  • This shifts epistemology: instead of fully explicit causal mechanisms, we rely more on rich models to simulate interventions and get probabilistic distributions of outcomes—a different but fitting kind of knowledge for complex systems.
  • The article concludes that while models of specific complex systems will likely stay very large, the universal structure that can learn all of them might be surprisingly small, re-framing what we mean by a good theory of reality.
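The mechanistic-interpretability methodology mentioned above can be made concrete with a toy example. Below is a minimal, purely illustrative Python/PyTorch sketch of the simplest such probe, an ablation study: it zeroes one hidden unit at a time in a small stand-in network (random weights, not a real trained model) and ranks units by how much the output shifts. The model, probe inputs, and scoring are assumptions for illustration, not anything taken from the article.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # A tiny stand-in "trained network" with random weights (illustrative only).
    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
    x = torch.randn(32, 8)        # a batch of probe inputs
    baseline = model(x).detach()  # outputs with nothing ablated

    def ablate_unit(unit):
        # Forward hook that zeroes one hidden unit's activation.
        def hook(module, inputs, output):
            output = output.clone()
            output[:, unit] = 0.0
            return output
        return hook

    # Ablate each hidden unit in turn and record how far the output moves.
    effects = []
    for unit in range(16):
        handle = model[1].register_forward_hook(ablate_unit(unit))
        ablated = model(x).detach()
        handle.remove()
        effects.append((baseline - ablated).abs().mean().item())

    # Units whose removal changes the output most are candidate circuit components.
    for unit, effect in sorted(enumerate(effects), key=lambda t: -t[1])[:3]:
        print(f"hidden unit {unit}: mean output shift {effect:.4f}")

Real interpretability work applies the same loop to attention heads, MLP neurons, or whole layers of a trained transformer, and checks the effect on specific behaviors rather than on raw outputs.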

Make a donation - support Ukraine. Sincere thanks. Together to victory!


Like what you read? Subscribe now (if you haven’t yet), share it, and leave a comment. Any form of feedback is very important. Thank you very much!

An RSS feed is available if you don’t want to clutter your inbox.

You can also support the ReHacked newsletter with a one-time donation.

Thank you for being a part of the community. Together, let's continue fostering a culture of knowledge-sharing and making a positive difference in the digital landscape.


The IBM scientist who rewrote the rules of information just won computing’s highest prize #computers #history #quantumcomputing

Digital security, as Bennett and Brassard wrote, held “even against an opponent with superior technology and unlimited computing power.” BB84 attracted little notice at first. The internet was emerging simultaneously, and the mathematical systems securing it seemed, for the moment, sufficient.

That changed in 1994, when mathematician Peter Shor, then at Bell Labs, showed that a quantum computer could crack the mathematical locks protecting most internet communications. Suddenly the method Bennett and Brassard had developed, by then used experimentally over distances of up to 1,200 kilometers between a satellite and Earth, according to Britannica, looked urgent.
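As a refresher on what BB84 actually does, here is a small, purely illustrative Python simulation of its basis-sifting step. Classical random numbers stand in for photon polarizations, and the sketch omits the parts that make the real protocol secure in practice (eavesdropper detection via error-rate checks, error correction, and privacy amplification).

    import random

    random.seed(42)
    N = 32  # number of photons Alice sends

    # Alice picks a random bit and a random basis ('+' rectilinear, 'x' diagonal) per photon.
    alice_bits = [random.randint(0, 1) for _ in range(N)]
    alice_bases = [random.choice("+x") for _ in range(N)]

    # Bob measures each photon in his own randomly chosen basis.
    bob_bases = [random.choice("+x") for _ in range(N)]
    # If the bases match, Bob reads Alice's bit; if not, his result is random.
    bob_bits = [
        bit if ab == bb else random.randint(0, 1)
        for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)
    ]

    # Over a public channel they compare bases (never bits) and keep only the matches.
    alice_key = [bit for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
    bob_key = [bit for bit, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]

    print("sifted key length:", len(alice_key))
    print("keys agree:", alice_key == bob_key)  # True when no one eavesdropped

An eavesdropper who measures photons in transit is forced to guess bases too, which corrupts roughly a quarter of the sifted bits; comparing a sample of the key exposes the intrusion, which is where the “unlimited computing power” guarantee comes from.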

The first working demonstration had come years earlier. In 1989, according to IBM, Bennett built the first quantum cryptography machine in his office at IBM, a two-meter-long device assembled from mirrors, polarizers and photon detectors, with software written by Brassard and his students. Four years after that came a paper introducing quantum teleportation: not the science-fiction kind, but the transfer of a quantum state from one location to another using entanglement, a phenomenon in which measuring one particle instantly affects another regardless of the distance between them.

Still keeping an office at IBM, where Landauer recruited him more than 50 years ago, Bennett is the seventh IBM-affiliated researcher to receive the Turing Award.

Jay Gambetta, Director of IBM Research and an IBM Fellow, said the legacy of that early work runs directly into what the company’s quantum teams are building now.


Blocking the Internet Archive Won’t Stop AI, But It Will Erase the Web’s Historical Record | EFF #copyrights #internet #ai

Archiving and Search Are Legal

Making material searchable is a well-established fair use. Courts have long recognized it’s often impossible to build a searchable index without making copies of the underlying material. That’s why when Google copied entire books in order to make a searchable database, courts rightly recognized it as a clear fair use. The copying served a transformative purpose: enabling discovery, research, and new insights about creative works. 

The Internet Archive operates on the same principle. Just as physical libraries preserve newspapers for future readers, the Archive preserves the web’s historical record. Researchers and journalists rely on it every day. According to Archive staff, Wikipedia alone links to more than 2.6 million news articles preserved at the Archive, spanning 249 languages. And that’s only one example. Countless bloggers, researchers, and reporters depend on the Archive as a stable, authoritative record of what was published online.

The same legal principles that protect search engines must also protect archives and libraries. Even if courts place limits on AI training, the law protecting search and web archiving is already well established.

The Internet Archive has preserved the web’s historical record for nearly thirty years. If major publishers begin blocking that mission, future researchers may find that huge portions of that historical record have simply vanished. There are real disputes over AI training that must be resolved in courts. But sacrificing the public record to fight those battles would be a profound, and possibly irreversible, mistake. 


Chuck Norris Dead: 'Walker Texas Ranger' Action Icon Was 86 #inmemoriam

“To the world, he was a martial artist, actor, and a symbol of strength. To us, he was a devoted husband, a loving father and grandfather, an incredible brother, and the heart of our family,” the statement continued. “He lived his life with faith, purpose, and an unwavering commitment to the people he loved. Through his work, discipline, and kindness, he inspired millions around the world and left a lasting impact on so many lives.”

As an action star, Norris had a degree of credibility that most others could not match. Not only did he appear opposite the legendary Bruce Lee in the 1972 film “The Way of the Dragon” (aka “Return of the Dragon”), but he was a genuine martial arts champion who was a black belt in judo, 3rd degree black belt in Brazilian Jiu-Jitsu, 5th degree black belt in Karate, 8th degree black belt in Taekwondo, 9th degree black belt in Tang Soo Do and 10th degree black belt in Chun Kuk Do.


Daring Fireball: ‘Your Frustration Is the Product’ #internet

The web is the only medium the world has ever seen where its highest-profile decision makers are people who despise the medium and are trying to drive people away from it. As Bose notes, “A lot of websites actively interfere the reader from accessing them by pestering them with their ‘apps’ these days. I don’t know where this fascination with getting everyone to download your app comes from.” It comes from people who literally do not understand, and do not enjoy, the web, but yet find themselves running large websites.

The people making these decisions for these websites are like ocean liner captains who are trying to hit icebergs.


Google details new 24-hour process to sideload unverified Android apps - Ars Technica #software

Google is planning big changes for Android in 2026 aimed at combating malware across the entire device ecosystem. Starting in September, Google will begin restricting application sideloading with its developer verification program, but not everyone is on board. Android Ecosystem President Sameer Samat tells Ars that the company has been listening to feedback, and the result is the newly unveiled advanced flow, which will allow power users to skip app verification.

With its new limits on sideloading, Android phones will only install apps that come from verified developers. To verify, devs releasing apps outside of Google Play will have to provide identification, upload a copy of their signing keys, and pay a $25 fee. It all seems rather onerous for people who just want to make apps without Google’s intervention.


If you would like to propose any interesting article for the next ReHacked issue, just hit reply or “Leave a comment” link below. It’s a nice way to start a discussion.

Thanks for reading this digest, and remember: we can make it better together. Just leave your opinion or suggestions by pressing the button above, or simply hit reply in your e-mail. And don’t forget - sharing is caring ;) Have a great week!

Dainius
