ReHacked Newsletter logo

ReHacked Newsletter

Subscribe
Archives
June 9, 2025

ReHacked vol. 324: Menstrual tracking app data is a ‘gold mine’ for advertisers that risks women’s safety; ‘EchoLeak’, the First Zero-Click AI Vulnerability and more

Support ReHacked newsletter with one time donation. Thank you very much!

Menstrual tracking app data is a ‘gold mine’ for advertisers that risks women’s safety – report | University of Cambridge #privacy #health #humanrights

Smartphone apps that track menstrual cycles are a “gold mine” for consumer profiling, collecting information on everything from exercise, diet and medication to sexual preferences, hormone levels and contraception use.

This is according to a new report from the University of Cambridge’s Minderoo Centre for Technology and Democracy, which argues that the financial worth of this data is “vastly underestimated” by users who supply profit-driven companies with highly intimate details in a market lacking in regulation.

The report’s authors caution that cycle tracking app (CTA) data in the wrong hands could result in risks to job prospects, workplace monitoring, health insurance discrimination and cyberstalking – and limit access to abortion.


Make a donation - support Ukraine. Щира подяка. Разом до перемоги!


Get your discount at Hostinger and Support ReHacked


Don’t forget to share if you like what you read here, subscribe (if not yet) and leave a comment. Any form of your feedback is very important to me. Thanks!

Subscribe now

RSS feed available if you don’t want to clutter your inbox.


I'm excited to offer you an opportunity to support my work as the sole contributor to ReHacked. Your contribution will play a crucial role in covering server expenses. Rest assured, my commitment to keeping the primary content accessible to everyone remains unwavering.

As the sole contributor, your support is truly invaluable. Feel free to become a paid subscriber, and remember, you have the flexibility to cancel or switch to the "Free" option at any time.

Thank you for being an essential part of our community. Together, let's continue fostering a culture of knowledge-sharing and making a positive difference in the digital landscape.


For All That Is Good About Humankind, Ban Smartphones #society #longread

We Built Loneliness Machines and Called Them Smart

An outright ban on smartphones would be, to say the least, heavy handed — and likely unconstitutional in both the United States and in Canada, depending on how it was enacted. But let’s think through the proposition, beginning with the premise that smartphone use is a collective problem, not a personal one. It represents a pickle we need to get out of together. After all, an individual’s ability to unplug is shaped by social norms and expectations. It’s almost impossible to put down your smartphone if no one else will.

That collective dimension is already acknowledged in schools, where cell phones are increasingly banned. Officials cite a growing body of evidence that show the devices are bad for kids. Even some tech bigwigs are sending their children to “anti-tech” schools. But scaling that up to the rest of us is tough work, especially when you’re talking about taking on an industry worth hundreds of billions of dollars each year, and still growing.


Thesis Commons | How To Build Conscious Machines #ai #ebook

How to build a conscious machine? For that matter, what is consciousness? Why is my world made of qualia like the colour red or the smell of coffee? Are these fundamental building blocks of reality, or can I break them down into something more basic? If so, that suggests qualia are like an abstraction layer in a computer. A simplification. Some say simplicity is the key to intelligence. Systems which prefer simpler models need fewer resources to adapt. They ``generalise'' better. Yet simplicity is a property of form. Generalisation is of function. Any correlation between them depends on interpretation. In theory there could be no correlation and yet in practice, there is. Why? Software depends on the hardware that interprets it. It is made of abstraction layers, each interpreted by the layer below. I argue hardware is just another layer. As software is interpreted by hardware, hardware is by physics. There is no way to know where the stack ends. Hence I formalise an infinite stack of layers to describe all possible worlds. Each layer embodies policies that constrain possible worlds. A task is the worlds in which it is completed. Adaptive systems are abstraction layers are polycomputers, and a policy simultaneously completes more than one task. When the environment changes state, a subset of tasks are completed. This is the cosmic ought from which goal-directed behaviour emerges (e.g. natural selection). ``Simp-maxing'' systems prefer simpler policies, and ``w-maxing'' systems choose weaker constraints on possible worlds. I show w-maxing maximises generalisation, proving an upper bound on intelligence. I show all policies can take equally simple forms. Simp-maxing shouldn't work. To explain why it does, I invoke the Bekenstein bound. It means layers can use only finite subsets of all possible forms. Processes that favour generalisation (e.g. natural selection) will then make weak constraints take simple forms. I perform experiments. W-maxing generalises at 110-500% the rate of simp-maxing. I formalise how systems delegate adaptation down their stacks. I show w-maxing will simp-max if control is infinitely delegated. Biological systems are more adaptable than artificial because they delegate adaptation further down. They are bioelectric polycomputers. As they scale from cells to organs, they go from simple attraction and repulsion to rich tapestries of valence. These tapestries classify objects and properties that cause valence, which I call causal-identities. I propose the psychophysical principle of causality arguing qualia are tapestries of valence. A vast orchestra of cells play a symphony of valence, classifying and judging. A system can learn 1ST, 2ND and higher order tapestries for itself. Phenomenal ``what it is like'' consciousness begins at 1ST-order-self. Conscious access for communication begins at 2ND-order-selves, making philosophical zombies impossible. This links intelligence and consciousness. So why do we have the qualia we do? A stable environment is a layer where systems can w-max without simp-maxing. Stacks can then grow tall and complex. This may shed light on the origins of life and the Fermi paradox. Diverse intelligences could be everywhere, but we cannot perceive them because they do not meet preconditions for a causal-identity afforded by our stack. I conclude by integrating all this to explain how to build a conscious machine, and a problem I call The Temporal Gap.


Saab achieves AI milestone with Gripen E #technology #ai

During the flights, the Gripen E gave control to Centaur which successfully autonomously executed complex manoeuvres in a Beyond Visual Range (BVR) combat environment and cued the pilot to fire.

“This is an important achievement for Saab, demonstrating our qualitative edge in sophisticated technologies by making AI deliver in the air. The swift integration and successful flight testing of Helsing’s AI in a Gripen E exemplifies the accelerated capability gain you can get from our fighter. We are excited to continue developing and refining how this and other AI agents can be used, while once again showing how our fighters will outperform faster than the opponent can evolve,” said Peter Nilsson, head of Advanced Programmes, from Saab’s Aeronautics Business Area.


The first big AI disaster is yet to happen | sean goedecke #ai

The first big AI disaster will probably involve an AI agent. Any other use of AI has to involve a human-in-the-loop - the AI can provide information or suggestions, but a human has to actually take the actions. AI agents can thus go truly off the rails in a way that a human can’t. If I had to bet, I’d guess that some kind of AI-powered Robodebt might be the most plausible case: some government or corporate entity wires up an AI agent to a debt-recovery, healthcare or landlord system, and the agent goes off the rails and hassles, denies coverage, or evicts a bunch of people.

As we move towards robotic AI, the chances of a kinetic disaster go up as well. Early prototypes of general-purpose robots have a LLM driving a smaller, non-language model that actually moves the servos. That’s an AI agent, and like all AI agents it can fail in surprising and dangerous ways.


How much EU is in DNS4EU? :: Techlog #security #privacy

TL;DR

So final score, depending on how you count:

EU:NON-EU 1:3


Sam Altman's Lies About ChatGPT Are Growing Bolder #ai

In a Tuesday blog post, Altman cited internal figures for how much energy and water a single ChatGPT query uses. The OpenAI CEO claimed a single prompt requires around 0.34 Wh, equivalent to what “a high-efficiency lightbulb would use in a couple of minutes.” For cooling these data centers used to process AI queries, Altman suggested a student asking ChatGPT to do their essay for them requires “0.000085 gallons of water, roughly one-fifteenth of a teaspoon.”

Altman did not offer any evidence for these claims and failed to mention where his data comes from. Gizmodo reached out to OpenAI for comment, but we did not hear back. If we took the AI monger at his word, we only need to do some simple math to check how much water that actually is. OpenAI has claimed that as of December 2025, ChatGPT has 300 million weekly active users generating 1 billion messages per day. Based on the company’s and Altman’s own metrics, that would mean the chatbot uses 85,000 gallons of water per day, or a little more than 31 million gallons per year. ChatGPT is hosted on Microsoft data centers, which use quite a lot of water already. The tech giant has plans for “closed-loop” centers that don’t use extra water for cooling, but these projects won’t be piloted for at least another year.


What if the Big Bang wasn’t the beginning? Our research suggests it may have taken place inside a black hole | University of Portsmouth #nature #science #physics #longread

Today’s standard cosmological model, based on the Big Bang and cosmic inflation (the idea that the early universe rapidly blew up in size), has been remarkably successful in explaining the structure and evolution of the universe. But it comes at a price: it leaves some of the most fundamental questions unanswered.

For one, the Big Bang model begins with a singularity – a point of infinite density where the laws of physics break down. This is not just a technical glitch; it’s a deep theoretical problem that suggests we don’t really understand the beginning at all.

To explain the universe’s large-scale structure, physicists introduced a brief phase of rapid expansion into the early universe called cosmic inflation, powered by an unknown field with strange properties. Later, to explain the accelerating expansion observed today, they added another “mysterious” component: dark energy.

<...> new model tackles these questions from a different angle – by looking inward instead of outward. Instead of starting with an expanding universe and trying to trace back how it began, we consider what happens when an overly dense collection of matter collapses under gravity.

This is a familiar process: stars collapse into black holes, which are among the most well-understood objects in physics. But what happens inside a black hole, beyond the event horizon from which nothing can escape, remains a mystery.

In 1965, the British physicist Roger Penrose proved that under very general conditions, gravitational collapse must lead to a singularity. This result, extended by the late British physicist Stephen Hawking and others, underpins the idea that singularities – like the one at the Big Bang – are unavoidable.

The idea helped win Penrose a share of the 2020 Nobel prize in physics and inspired Hawking’s global bestseller A Brief History of Time: From the Big Bang to Black Holes. But there’s a caveat. These “singularity theorems” rely on “classical physics” which describes ordinary macroscopic objects. If we include the effects of quantum mechanics, which rules the tiny microcosmos of atoms and particles, as we must at extreme densities, the story may change.


Breaking down ‘EchoLeak’, the First Zero-Click AI Vulnerability Enabling Data Exfiltration from Microsoft 365 Copilot #security #ai

Executive Summary

  • Aim Labs has identified a critical zero-click AI vulnerability, dubbed “EchoLeak”, in Microsoft 365 (M365) Copilot and has disclosed several attack chains that allow an exploit of this vulnerability to Microsoft's MSRC team.
  • This attack chain showcases a new exploitation technique we have termed "LLM Scope Violation" that may have additional manifestations in other RAG-based chatbots and AI agents. This represents a major research discovery advancement in how threat actors can attack AI agents - by leveraging internal model mechanics.
  • The chains allow attackers to automatically exfiltrate sensitive and proprietary information from M365 Copilot context, without the user's awareness, or relying on any specific victim behavior.
  • The result is achieved despite M365 Copilot's interface being open only to organization employees.
  • To successfully perform an attack, an adversary simply needs to send an email to the victim without any restriction on the sender's email.
  • As a zero-click AI vulnerability, EchoLeak opens up extensive opportunities for data exfiltration and extortion attacks for motivated threat actors. In an ever evolving agentic world, it showcases the potential risks that are inherent in the design of agents and chatbots.
  • Aim Labs continues in its research activities to identify novel types of vulnerabilities associated with AI deployment and to develop guardrails that mitigate against such novel vulnerabilities.
  • Aim Labs is not aware of any customers being impacted to date.

TL;DR

Aim Security discovered “EchoLeak”, a vulnerability that exploits design flaws typical of RAG Copilots, allowing attackers to automatically exfiltrate any data from M365 Copilot’s context, without relying on specific user behavior. The primary chain is composed of three distinct vulnerabilities, but Aim Labs has identified additional vulnerabilities in its research process that may also enable an exploit.


The Beach Boys’ Brian Wilson Dies at 82 | Pitchfork #promemoria

Brian Wilson, the co-founder and primary songwriter of the Beach Boys, has died, his family announced. While an official cause of death was not disclosed, the beloved musical auteur, who helped pioneer the studio-as-instrument, influencing generations of musicians in pop and beyond, was revealed, in early 2024, to be living with a neurocognitive disorder akin to dementia. Wilson’s family also did not disclose the musician’s date or location of death. Wilson was 82 years old.


John Graham-Cumming's blog: Low-background Steel: content without AI contamination #ai #internet #information

Low-background Steel (and lead) is a type of metal uncontaminated by radioactive isotopes from nuclear testing. That steel and lead is usually recovered from ships that sunk before the Trinity Test in 1945. The site is about uncontaminated content that I'm terming "Low-background Steel". The idea is to point to sources of text, images and video that were created prior to the explosion of AI-generated content that occurred in 2022.


Talabat, Botim, and Careem expand beyond food and rides - Rest of World #privacy

The race for super-apps is intensifying in the Middle East.

Unlike Western markets, where Google, Apple, and Meta maintain separate app ecosystems with strict integration limits, in countries like the United Arab Emirates and Saudi Arabia, tech giants are following in the footsteps of China’s WeChat.

Dubai-based Careem, which started as a ride-hailing company, has evolved into a comprehensive app handling transportation, food delivery, grocery shopping, payments, and home cleaning. Another local app, Talabat, has expanded beyond food delivery into groceries, health and beauty, and dine-out deals. Communications and fintech app Botim now offers international remittances and bill payments alongside messaging features.


Filipino workers face abuse in Taiwan’s booming chip industry - Rest of World #economy

  • Filipino workers in Taiwan’s booming semiconductor industry face long hours, low pay, and second-class treatment.
  • Workers describe overnight shifts of up to 16 hours, verbal abuse, and threats of deportation as they produce the high-end chips that end up in iPhones, Teslas, and data centers.
  • Authorities say they are trying to better educate migrant tech workers about their rights.


If you would like to propose any interesting article for the next ReHacked issue, just hit reply or push this sexy “Leave a comment” (if not subscribed yet) button below. It’s a nice way to start a discussion.

Leave a comment

Thanks for reading this digest and remember: we can make it better together, just leave your opinion or suggestions after pressing this button above or simply hit the reply in your e-mail and don’t forget - sharing is caring ;) Have a great week!

Dainius

Don't miss what's next. Subscribe to ReHacked Newsletter:
Start the conversation:
https://mastodon.so…
Powered by Buttondown, the easiest way to start and grow your newsletter.