The last summary you need about the 400-page thesis the U.S. Department of Defense didn’t want you to read
There’s been a growing buzz in the Bitcoin space around Softwar — the 400-page MIT thesis by Major Jason Lowery that, according to the author, was swiftly pulled from circulation after its initial release and placed under security and policy review by the U.S. Department of Defense (DoD) due to its potential national security implications.
For those unfamiliar, Lowery is a U.S. Space Force officer and National Defense Fellow at MIT who, in 2023, released Softwar as part of his graduate work. But unlike most analyses that treat Bitcoin as a monetary technology within an economic framework, Lowery takes a dramatically different route.
He draws on concepts from military theory, political science, evolutionary biology, metacognition, and even nuclear deterrence strategy to develop his own theoretical framework — one he then uses to analyse the implications of Bitcoin from a completely new vantage point.
While the thesis ultimately builds toward an analysis of Bitcoin and proof-of-work as a revolutionary tool for non-lethal physical power projection in the cyber age, it’s fundamentally a thesis about how humans operate — how belief systems form, how trust is exploited, and how different substrates for power (physical vs. abstract) give rise to very different emerging orders, each with their own internal logic, risks, and consequences.
This is the first of a five-part article series where I break down the thesis and explore its implications:
- Part 1 (this piece) and Part 2 cover Chapters 1–4, where Lowery introduces the foundations of his Power Projection Theory. He does so by examining evolutionary history and human civilisation — applying his novel framework across multiple domains to reveal the recurring logic of how organisms, societies, and empires survive, cooperate, and collapse.
- Part 3 and Part 4 will cover Chapter 5, where Lowery applies Power Projection Theory to cyberspace. Here, he weaves together the insights from Chapters 1–4 and brings them into the realm of computer science and software engineering, culminating in a focused examination of Bitcoin — and more specifically, proof-of-work — as a revolutionary tool to connect cyberspace to the physical world.
- Part 5 will be my own analysis: what I believe Lowery gets right, where his framework falls short, and why it matters. (Release May 14th)
Parts 1 to 4 are primarily focused on summarising what I believe are the most essential takeaways from Softwar. The goal is to present Lowery’s thesis on its own terms while reserving my own reflections for Part 5. That said, while I’ve done my best to stay true to the spirit and logic of the work, I do at times take some liberties in how I frame and interpret certain aspects — so any misreadings or misrepresentations are entirely my own. All quotes and diagrams are taken directly from the thesis, unless otherwise stated.
If you’re even remotely interested in history, civilisation, warfare, or Bitcoin — or if you’re gearing up for a Twitter debate or planning to tackle the full 400-page thesis yourself at some point — consider this a solid mental warm-up. This article series will give you a strong foundation for engaging with Softwar and the conversations it’s sparking.
Chapter 1: Introduction
Chapter 1 sets the historical and philosophical groundwork for the thesis. Lowery begins by highlighting a recurring mistake in human history: the assumption that the next war will resemble the last. This theme is embodied in the story of General Billy Mitchell — now considered the father of the U.S. Air Force — who was dismissed for insisting after WWI that air power would dominate future conflicts. His warnings were ignored by a military establishment still stuck in old paradigms.
Lowery argues that this failure to adapt is not unique. Time and again, civilisations become complacent during peace, failing to recognise the evolving arenas in which future conflicts will play out. The form of war changes, but its function remains constant. Whoever recognises and adapts to the new form of power projection first gains a critical advantage.
“Some have argued that expecting a domestic society to see the functionality of emerging power projection technology (i.e. weapons technology) is like expecting a golden retriever to understand the functionality of a wolf collar.” (Softwar, p. 22)
Lowery will revisit the concept of “domestication” many times throughout the thesis, exploring it not only as a power projection tactic humans have used to control other species, but also as a tool we’ve increasingly applied to ourselves, often unknowingly.
Lowery frames physical power as the “base layer” of dispute resolution — a raw, trustless mechanism that remains when law fails. While legal systems are energy-efficient and rely on mutual trust, they are also vulnerable to corruption and breakdown. War, on the other hand, is energy-intensive, indiscriminate, and impossible to fake. In times of societal stress or institutional collapse — when trust completely erodes — disputes revert to this more primal method of resolution.
Lowery further challenges the reader to recognise that Bitcoin — like most technologies — is ultimately a metaphor embodied in software. Just because its creator framed it as a monetary technology doesn’t mean it must be understood solely through that lens. In fact, limiting our analysis to monetary theory may obscure its deeper significance.
Throughout the thesis, he will build the case that Bitcoin — and more fundamentally, proof-of-work — should be understood as a non-lethal, physics-based method of projecting power (more about this later).
Chapter 2: Grounded Theory as Method
In Chapter 2, Lowery outlines the structure of the thesis and introduces the methodological foundation for his analysis: grounded theory. Unlike traditional top-down approaches that begin with a fixed hypothesis or framework, grounded theory is a bottom-up method that begins with open-ended observation. The goal is to allow theories to emerge organically from the data, rather than forcing the data to fit within preexisting models.
Lowery explains that instead of applying established academic or theoretical lenses to Bitcoin, he will develop a completely new theory. His method follows three steps:
- Step 1: Collect observations across disciplines — history, military strategy, nature, and technology — without applying a predetermined interpretive filter.
- Step 2: Derive a new theory based on the patterns and relationships revealed in that data.
- Step 3: Use the new theory to analyse Bitcoin from a fresh and independent perspective.
This is a fundamentally different approach than most academic analyses, which typically begin within a preexisting discipline to either validate or invalidate a specific hypothesis. By contrast, Lowery’s framework for analysis is designed to emerge from the data — not be imposed on it. Once this new theory is constructed, it will enable the formulation of fresh hypotheses that can then be tested, both conceptually and empirically, going forward.
Chapter 3: Lowery’s Power Projection Theory
Chapter 3 introduces Lowery’s novel theory — Power Projection Theory — which becomes the theoretical lens for the remainder of the thesis. At its core, the theory proposes that from the level of single cells to complex human societies, survival and prosperity are governed by an organism’s ability to project power. In every layer of life, the dynamic boils down to whether an entity can defend itself, secure resources, and deter or overcome attacks.
This chapter focuses on Power Projection Theory in nature. Lowery emphasises that ownership — in the natural world — has always been determined by an organism’s capacity to project power. A wolf showing its teeth is a clear example of this principle in action.
However, Lowery doesn’t begin his analysis with modern animals or human society. Instead, he takes us back nearly 4 billion years, tracing the origins of power projection all the way to sub-cellular life. From single-celled organisms to multicellular structures, the earliest power projection tactic wasn’t claws or sharp teeth — it was the development of pressurised membranes. These primitive biological “bubbles” allowed early life forms to displace surrounding mass and contain resources.
“The emergent behavior of life is something remarkable. By projecting lots of physical power to capture and secure access to resources, life is miraculously able to turn the inexorable chaos of the Universe into something more structured.” (Softwar, p. 66)
So in essence, if the Universe naturally trends toward ever-increasing entropy — toward greater chaos and dispersion — then life is the temporary defiance of that trend. It is the force that gathers, encloses, and organises through creativity, to hold entropy at bay for the brief miracle of a lifetime.
“What specifically is the function of life? This is impossible to know. Perhaps it is simply to countervail the entropy of the Universe.” (Softwar, p. 91)
Primordial Economics
After introducing the foundational idea of Power Projection Theory, Lowery lays out a conceptual framework he calls Primordial Economics. This framework becomes the bedrock for much of the analysis that follows. To truly grasp what Lowery is building toward, it’s essential to understand this model — it defines the basic dynamics that govern survival and power relationships in both nature and human systems.
Let’s break it down step by step.
BA (Benefit of Attack) and CA (Cost of Attack)
At its core, every potential attack in nature comes with a cost and a potential reward.
- BA refers to the Benefit of Attack — what an aggressor stands to gain by initiating conflict (such as food, territory, or mates). As an organism’s Resource Abundance (RA) grows, so does its BA.
- CA is the Cost of Attack — the energy, risk of injury, or retaliation that must be endured to attempt the attack.
By dividing the Benefit of Attack (BA) by the Cost of Attack (CA), we arrive at what Lowery calls BCRA.
- BCRA is the Benefit-to-Cost Ratio of Attack. This ratio serves as a simple metric: the higher your BCRA, the more attractive you become as a target to predators or attackers.
From this lens, survival is a probability game. Take a lion, for example — it must hunt to survive, but to do so efficiently, it will instinctively seek out prey with the highest possible BCRA. In other words, it looks for targets that offer the greatest potential reward for the least amount of risk. This strategic behavior minimises energy expenditure and danger while maximising the likelihood of success.
The same logic applies when you shop at a supermarket. You instinctively look for the product with the highest Benefit of Attack (BA) — nutritional value, taste, utility — at the lowest possible cost to your wallet. It’s the same fundamental calculation: maximise return, minimise risk.
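To make the arithmetic concrete, here is a minimal Python sketch of the BCRA calculation. This is my own illustration, not from the thesis; the prey profiles and every number are invented purely for demonstration:

```python
# Toy illustration of Lowery's BCRA metric (BA / CA). All values invented.

def bcra(benefit_of_attack: float, cost_of_attack: float) -> float:
    """Benefit-to-Cost Ratio of Attack: higher means a more attractive target."""
    return benefit_of_attack / cost_of_attack

prey = {
    "healthy antelope": {"ba": 100.0, "ca": 80.0},  # same meat, hard to catch
    "injured antelope": {"ba": 100.0, "ca": 20.0},  # same meat, easy to catch
    "warthog":          {"ba": 60.0,  "ca": 50.0},
}

# A predator maximising expected return instinctively targets the highest BCRA.
target = max(prey, key=lambda name: bcra(prey[name]["ba"], prey[name]["ca"]))
for name, p in prey.items():
    print(f"{name}: BCRA = {bcra(p['ba'], p['ca']):.2f}")
print(f"Most attractive target: {target}")  # -> injured antelope
```

Same reward, a quarter of the cost: the injured antelope’s BCRA of 5.0 makes the lion’s choice a no-brainer.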
To visualise this concept, Lowery introduces what he calls “Bowtie Notation”. By representing the Benefit of Attack (BA) as a green bubble and the Cost of Attack (CA) as a red bubble — both attached to a central point — he creates a visual that resembles a bowtie. The size and proportion of each bubble help illustrate the relative appeal or risk of an attack.
In the image below, we see a comparison between a high BCRA organism and a low BCRA organism. The high BCRA organism has a much larger green bubble (Benefit of Attack) relative to its red bubble (Cost of Attack), making it an attractive target. In contrast, the low BCRA organism has a smaller benefit and/or a higher cost, making it less appealing to attack.
Now, put yourself in the shoes — or paws — of a hungry lion. Which one would you go after? The high BCRA organism might be, for example, an injured antelope — its cost to attack is significantly reduced, while the nutritional benefit remains the same. From the lion’s perspective, it’s a no-brainer: less risk, same reward.
But Lowery’s framework doesn’t stop there.
To complete the picture, he introduces the concept of the Hazardous BCRA Level in the Environment and Prosperity Margin (PM).
- Hazardous BCRA Level in the Environment is a constantly shifting threshold that represents the point at which an organism becomes so appealing to attack that it’s almost certain to be devoured.
- The Prosperity Margin (PM) is simply the distance between an organism’s own BCRA and the Hazardous BCRA Level in the environment. The greater the margin, the safer it is; the narrower it gets, the closer it is to becoming prey.
The Hazardous BCRA Level isn’t fixed; it evolves over time based on the dynamics of the surrounding environment.
Think of it like this: in a herd of antelope, it’s not enough to just be fast in some arbitrary sense — you need to be faster than the slowest one. That slowest antelope, with the highest BCRA, is the one most likely to get devoured.
An organism’s BCRA can therefore be thought of as an absolute measure, whereas the Hazardous BCRA Level is a relative measure, determined by how that organism’s BCRA compares to others in its environment. If your BCRA drifts too close to the hazardous level in your environment, you become an easy target.
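As a toy illustration of how relative this threshold is, consider the sketch below. One loud assumption on my part: Lowery doesn’t give a formula for the Hazardous BCRA Level, so I model it here as the highest BCRA present in the local herd (the “slowest antelope”):

```python
# Toy sketch of the Hazardous BCRA Level and Prosperity Margin (PM).
# Assumption (mine, not spelled out in the thesis): the hazardous level is set
# by the most attackable organism in the local environment, i.e. the highest
# BCRA present. All numbers are invented.

herd_bcra = {
    "antelope A": 1.2,
    "antelope B": 1.5,
    "antelope C": 2.8,  # injured: the likely prey
}

hazardous_level = max(herd_bcra.values())

for name, own_bcra in herd_bcra.items():
    prosperity_margin = hazardous_level - own_bcra
    status = "likely prey" if prosperity_margin == 0 else "safe for now"
    print(f"{name}: BCRA={own_bcra:.1f}, PM={prosperity_margin:.1f} ({status})")
```

Note that antelope A and B are safe only because antelope C exists; remove C, and the threshold shifts.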
To complete the framework, Lowery introduces the concept of CCCH environment.
- CCCH stands for Congested, Contested, Competitive, and Hostile, and represents the natural condition of environments where resources are limited and threats are constant (such as our planet). Lowery uses this to highlight that, because entropy is always increasing in the Universe and organisms are continually adapting, the Hazardous BCRA Level in the Environment is steadily falling — meaning survival requires constant improvement just to maintain the same level of safety.
This final piece helps clarify the core objective of any organism: to increase its Resource Abundance (RA) — which will naturally raise its Benefit of Attack (BA) — while simultaneously keeping its BCRA as low as possible.
Put simply, an organism wants to be rich, but not become prey. This, Lowery calls the “Survivor’s Dilemma”.
Three Power Projecting Strategies
To achieve the aforementioned objective — growing Resource Abundance (thus BA) while keeping BCRA low — Lowery outlines three possible survival strategies an organism can pursue:
- Grow resource abundance (thus BA) faster than CA. This expands wealth and access to resources, but also sends BCRA climbing without bound, making the organism an ever more attractive target.
- Grow resource abundance (thus BA) and CA at the same rate. This maintains a stable BCRA, but since the Hazardous BCRA Level in the Environment is constantly falling, the threshold risks overtaking the organism’s BCRA over time, eventually making it an easy target.
- Grow CA faster than BA. This is the only long-term viable strategy. By making attacks increasingly costly relative to potential benefit, the organism reduces its BCRA over time.
Lowery points out that option 3 is the only sustainable strategy, as it reduces an organism’s BCRA over time. However, this still offers no guarantee of survival, since the Hazardous BCRA Level is a moving target, determined by how quickly all other organisms raise or lower their own BCRA.
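Here is a minimal simulation of the three strategies. All growth rates, starting values, and the falling hazard threshold are my own invented parameters, used only to show the qualitative behaviour Lowery describes:

```python
# Toy simulation of the three survival strategies. BCRA = BA / CA.
# Assumption: the hazardous threshold creeps downward each step as the
# CCCH environment grows more hostile. All parameters invented.

def simulate(ba_growth: float, ca_growth: float, steps: int = 20) -> str:
    ba, ca, hazard = 100.0, 100.0, 2.0
    for step in range(1, steps + 1):
        ba *= ba_growth      # resource abundance (and thus BA) compounds
        ca *= ca_growth      # cost of attack compounds
        hazard *= 0.95       # assumed: hazardous BCRA level keeps falling
        if ba / ca >= hazard:
            return f"devoured at step {step} (BCRA {ba/ca:.2f} >= hazard {hazard:.2f})"
    return f"survived all {steps} steps (BCRA {ba/ca:.2f} < hazard {hazard:.2f})"

print("Option 1 (BA faster): ", simulate(1.10, 1.02))  # BCRA climbs -> devoured
print("Option 2 (same rate): ", simulate(1.05, 1.05))  # stable BCRA -> overtaken
print("Option 3 (CA faster): ", simulate(1.02, 1.10))  # BCRA falls -> survives
```

Only the third strategy keeps the organism’s BCRA falling faster than the environment’s threshold, which is precisely the qualitative point Lowery is making.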
Cooperation
So far, we’ve looked at BCRA from the perspective of a single organism and what it must do individually to survive. But there’s another powerful way to gain a step-function increase in one’s CA — one that appears across all levels of nature: cooperation.
Lowery once again takes us back billions of years to the age of single-celled organisms, describing how cooperation first emerged — not through conscious design, but as an unconscious evolutionary phenomenon. He identifies two primary forms of early cooperation: colonisation and clustering.
- Colonisation occurs when limited space forces different organisms to occupy the same physical environment. While acting in their own self-interest, these organisms unintentionally form colonies that, over time, generate mutually reinforcing benefits at a collective level.
- Clustering, on the other hand, refers to organisms being physically grouped or “stuck” together — initially by chance or environmental pressure — but eventually “discovering” that collective behavior enhances survival.
Without delving into why an organism chooses — or is forced into — cooperation, the Primordial Economics framework allows us to clearly understand the effect: the BCRA of individual organisms is effectively merged, forming a larger, collective entity with its own BCRA. This is demonstrated below.
While combining, say, three individual organisms with the same BCRA into a cooperative entity might result in the same ratio (since BCRA is a proportion), both the BA and CA increase in absolute terms. This matters because, in practice, it’s the relative BCRA within an environment that determines vulnerability.
In the visualisation above, entities 2, 9, and 12 clearly stand out as easy targets. Meanwhile, although entities 3 and 11 appear to have similar BCRA values, entity 3 has a significantly higher CA in absolute terms — meaning it’s more costly to attack. And that difference alone can be enough to move it out of the hazard zone.
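A quick sketch of this bowtie-merging idea (my own illustration, with invented numbers): cooperation sums the members’ BA and CA, so the ratio can stay identical while the absolute cost of attacking the group multiplies.

```python
# Toy sketch of cooperation as bowtie-merging: the group's BA and CA are the
# sums of its members'. The ratio may stay the same, but the absolute cost
# of attack jumps. All numbers invented.

def merge(members):
    ba = sum(m[0] for m in members)
    ca = sum(m[1] for m in members)
    return ba, ca, ba / ca

solo = (10.0, 5.0)                          # one organism: BCRA = 2.0
group_ba, group_ca, group_bcra = merge([solo, solo, solo])

print(f"solo : BA=10, CA=5,  BCRA={10 / 5:.1f}")
print(f"group: BA={group_ba:.0f}, CA={group_ca:.0f}, BCRA={group_bcra:.1f}")
# Same BCRA of 2.0, but an attacker now faces triple the absolute cost,
# which can be enough to push the group out of the hazard zone.
```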
Cooperation has, for billions of years, proven to be an extraordinary strategy for organisms — whether of the same species or not — to increase their resource abundance while keeping their BCRA as low as possible. Once single-celled organisms began evolving into multicellular life forms, it became a matter of adapt or die for those that remained solitary. That’s how significant the evolutionary pressure toward cooperation was.
“[Those who cooperate] enjoys a step-function increase in CA, a substantial reduction in their individual BCRA, and an increase in their prosperity margin at virtually no individual cost to themselves. In many ways, cooperation is a survivor’s life hack.” (Softwar, p. 93)
But while the benefits of cooperation are easy to appreciate from a satellite perspective, cooperation also brings internal challenges of its own. As cooperation scales and more organisms depend on one another for survival (such as in a pack), the question of internal resource control and ownership grows increasingly complex. Thus, maintaining internal stability becomes just as crucial as defending against external threats.
“To cooperate at a large scale, pack animals must learn how to negotiate between their individual needs and the needs of the pack as a whole. Navigating this becomes especially tricky when it comes to feeding and breeding. Compromises must be made between the individual’s needs and the pack’s needs regarding resource control and ownership. Packs must adopt heuristics for determining the state of ownership and chain of custody of the pack’s collective resources between and among pack members.” (Softwar, p. 93–94)
Pecking Order Heuristics
To understand how a given group — such as a pack of wolves — manages internal order and resource distribution, Lowery turns to heuristics: the simple rules or decision-making shortcuts that determine pecking orders and dominance hierarchies within cooperative systems.
He emphasises that the dominance structures we observe in nature today are not random — they are the ones that have proven most effective over billions of years of evolution. Their continued presence is, in Lowery’s view, self-evident proof of their survival utility.
He further points out that virtually any imaginable pecking order that doesn’t exist in nature has very likely been tested at some point through evolutionary trial — and failed. If it had offered a survival advantage, we would see it in practice. Its absence is strong evidence that it simply wasn’t effective in the long run.
Lowery again invokes the Survivor’s Dilemma, stressing that any organisation — such as a pack of wolves — must establish a pecking order that ensures the pursuit of Option 3: growing their Cost of Attack (CA) faster than their Benefit of Attack (BA). If they fail to do this as an organisation, their BCRA will either rise — or be overtaken by the Hazardous BCRA Level — and they will, inevitably, be devoured and lost to history.
To demonstrate this, Lowery presents two simplified pecking order heuristics. The first he calls “Feed and Breed the Powerful First”, and the second, “First Come, First Served”. The latter represents an organisation built around what Lowery appears to deem an “arbitrary sense of fairness,” while the former reflects a more straightforward power-based logic. Lowery’s point is that while the fairness-based model might feel morally superior, it will fail if it results in a growing BCRA.
Here, I want to briefly note something I’ll return to in Part 5 of this article series. While I understand Lowery is using simplified models to make his point, what ultimately matters is the resulting BCRA, not the moral framing of the heuristic. It’s not entirely clear to me that Lowery acknowledges the possibility that a “First Come, First Served” model, while appearing on the surface to neglect CA, might actually lower BCRA through other mechanisms — perhaps by promoting an organisational culture in which individuals are encouraged to act beyond their own self-interest in service of the group — potentially increasing resilience and cohesion in ways that are more effective than a purely power-based approach. Sapiens, as he’ll explore in much greater detail later, are undoubtedly the most successful species in terms of power projection relative to other species — by Lowery’s own framework. Yet, somewhat surprisingly, he doesn’t seem to make this connection explicitly.
Domestication
Continuing on, Lowery reinforces his core argument — that a pecking order oriented toward prioritising Cost of Attack (CA) is superior to any alternative — by stating: “The dominant species on the planet is the one with pets.”
What Lowery is essentially trying to show moving forward is what happens when a pecking order is artificially altered — no longer following the ethos of “might is right” (or strategy option 3), where individuals earn their place through the ability to project power, but instead being organised around some alternative heuristic, such as selecting for “peacefulness” or “compliance”.
To illustrate this, Lowery draws on a dataset with abundant evidence: the domestication of animals by humans. Within his own framework, domestication is effectively a predatory intervention — an evolutionary experiment in which humans deliberately reshape the natural pecking order of another species. By selectively breeding animals to be more docile, controllable, and dependent, we’ve systematically increased their BCRA, making them more useful to us — but also less capable of resisting domination.
The domestication of animals by humans is, in essence, the process of systematically lowering their CA while increasing their BA — thereby raising their BCRA and making them easy targets for sapiens.
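In code terms, domestication is just a generational loop that shrinks CA while inflating BA. The per-generation rates below are my own invented parameters, chosen only to show the direction of the effect:

```python
# Toy model of domestication in BCRA terms. Each generation, breeding for
# docility lowers the animals' capacity to resist (CA) while feeding raises
# their value as a resource (BA), so BCRA climbs. All rates invented.

ba, ca = 50.0, 100.0          # a wild herd: costly to attack, modest reward
for generation in range(1, 6):
    ca *= 0.70                # select for docility: resistance bred out
    ba *= 1.10                # feed for yield: more meat, milk, eggs
    print(f"gen {generation}: BA={ba:6.1f}, CA={ca:6.1f}, BCRA={ba / ca:.2f}")
# BCRA rises from 0.5 to nearly 5: ever easier prey for its domesticator.
```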
The point Lowery is trying to make is that, from the perspective of the wild animals, the artificial reordering of their pecking order has had tremendous consequences for their species. Their natural hierarchies have been dismantled, and their ability to project power systematically bred out — to the point that many now walk willingly into the slaughter machine, unable to comprehend what’s happening, let alone resist or project any form of physical power to prevent it.
“If you entrap a herd of aurochs and then feed and breed the obese and docile ones, you get a herd of cows. If you entrap a litter of boar and then feed and breed the obese and docile ones, you get a litter of pigs. If you entrap a flock of junglefowl and then feed and breed the obese and docile ones, you get a flock of chickens.” (Softwar, p. 99)
From the oppressor’s perspective — the human perspective — this dynamic has led to a significant increase in our own Resource Abundance (RA), but it has come at the direct expense of the animals’ autonomy.
It’s hardly a stretch to describe the human domestication of wild animals as predatory. In fact, it’s so predatory that we’ve used selective breeding to systematically interfere with — and ultimately eliminate — an animal’s ability to resist or protest its subjugation. Without placing a moral judgment on this process, the provocative question Lowery raises at the end of this chapter is whether the same thing could happen — or is already happening — to humans themselves.
In other words, are we in the process of self-domesticating, in ways largely invisible to us?
As will be explored in the next chapter, Lowery points out that human societies have increasingly adopted a cultural attitude that looks down on physical power projection, often treating it as “primitive” or “beneath us” — something associated with lesser intelligence.
Lowery’s question is whether this attitude, like the suppressed CA in domesticated animals, might actually be a symptom of our own subtle oppression — whether imposed by ourselves through culture, or by those in power within our internal social hierarchies.
The beauty of antlers
Lowery ends the chapter with a glimpse of what’s to come — a reflection on the peculiar design of antlers: the elaborate, seemingly impractical structures worn by deer. Why are they shaped in such an ornate and unwieldy way?
His explanation is the following: antlers serve as a form of non-lethal power projection within the species, while still retaining their lethal potential against external threats.
When two individuals clash over dominance, their antlers interlock and tangle, forcing a physical contest that allows them to measure resolve — without resorting to fatal violence. It’s a mechanism that preserves the group while still reinforcing hierarchy. However, when facing outside predators, those same antlers can still be wielded lethally with full force.
Lowery closes with a provocative idea: humans may be missing their antlers. That is, we lack a built-in, non-lethal way to physically project power and establish internal pecking orders. What Lowery is truly getting at is not a call for violence, but a warning about our lack of a neutral, incorruptible process for determining the best ideas and most effective strategies.
Instead, our pecking orders tend to be abstract, indirect, and social in nature — built on trust, reputation, credentials, popularity, or perceived virtue. As a result, power is often assigned not based on proven merit or objective reliability, but on signals that are easily manipulated by favoritism, social bias, or hidden influence.
This leaves human societies uniquely vulnerable to internal corruption, and the majority of individuals exposed to forms of exploitation that are subtle, persistent, and difficult to detect — not unlike a domesticated chicken, unaware that it is being shaped to serve a system it doesn’t understand.
Chapter 4: Power Projection in Human Society
Before diving into whether Lowery’s concerns about internal corruption and vulnerability hold true within human systems, he takes a step back to examine metacognition, which is thinking about how humans think.
Abstract Reality vs Objective Physical Reality
At the start of the chapter, Lowery draws a foundational — if somewhat simplified — distinction between sapiens and other animals. This distinction becomes the starting point for building a framework to understand human metacognition, which he will later use to analyse societal behavior through the lens of his Power Projection Theory.
Lowery points out that, unlike most (if not all) other animals, humans operate in two distinct realities at the same time.
“Because of their [sapiens] ability to think abstractly and find imaginary patterns, sapiens operate in two different realities simultaneously: one in front of their eyes and one behind them” (Softwar, p. 122)
- Objective Physical Reality is the first one. It refers to the physical domain of energy, matter, space, and time that is in front of our eyes. Lowery defines it as the reality that leaves a trace in the physical domain — meaning it can be measured, interacted with, and validated through physical sensory inputs (more soon). It is governed by the unchanging laws of nature, and exists independently of belief or perception as a shared objective reality.
- Subjective Abstract Reality is the second. It refers to the non-physical domain of ideas, beliefs, symbols, and narratives that exist purely behind our eyes. It leaves no trace in the physical domain. Abstract reality can exist within a single sapiens’ mind or be shared collectively among many sapiens, forming what Lowery refers to as shared abstract realities.
Lowery states the following:
“Humans are so skilled at using their habitually overenergized brains to perform bi-directional and dual-use abstract thinking that it happens automatically without being conscious of it. It appears to be extraordinarily difficult for humans to turn off this behavior unless the brain becomes physically damaged or chemically impaired” (Softwar, p. 123)
What Lowery is effectively saying is that it’s impossible for sapiens to “turn off” their abstract reality-rendering machine and perceive objective physical reality in its raw, unfiltered form. In other words, we are conditioned over-thinkers by nature — wired to overlay meaning, symbolism, story, and interpretation onto everything we see. For better or worse, this mental layering is inescapable. Lowery demonstrates this idea by presenting the image below.
The two images — one on top and one beneath — contain the exact same lines, simply arranged in different ways. The upper image holds little to no symbolic meaning for us, but the lower one carries clear significance. Why? Because it forms recognisable letters that we have assigned abstract meaning to through a shared abstract reality. It’s thus impossible for a sapiens fluent in English, such as anyone reading this article, not to perceive the symbolic meaning of the letters. We can’t unsee it.
“Ironically, this implies humans can’t do what other animals can do effortlessly: experience objective physical reality for what it is, without skewing sensory inputs through a neocortical lens of abstract biases. Whereas most non-human species can’t perceive symbols and abstract meaning in the first place, sapiens can’t not perceive symbolic patterns and abstract meaning once a given pattern has been committed to memory.” (Softwar, p. 123)
This also underlines an important distinction and a recurring theme throughout the thesis: abstract reality and objective physical reality are two separate things. Words, symbols, and stories do not exist in objective physical reality — they only appear to exist because those who subscribe to the shared abstract reality of the English language have agreed on what they mean. To mistake them for something that exists in objective physical reality is simply wrong. Lowery uses the term ‘hypostatisation’ to describe this mental error: the act of mistaking an abstract idea for a physically real thing. It’s a form of what he also calls “large-scale consensual hallucination.”
This doesn’t mean abstract reality is useless — far from it. It can coordinate behavior, structure civilisations, and even guide us toward truth. But it must be understood for what it is: a shared imagined reality that exists only in the minds of sapiens, not in the objective physical world.
To understand this more clearly, imagine sending the image from earlier to an alien civilisation with no cultural overlap. Even if they were biologically 100% identical to us, they would see the lines but not the meaning — because the meaning isn’t embedded in the physical structure of the letters; it exists purely in the shared abstract reality of our minds. Even if their written language happened to look visually similar to ours (leaving the same trace in the physical domain), it would be pure coincidence, and what we call an “O” would likely mean “A” — or something else entirely — to them. What we call red would be their blue. We can see this on our own planet: the word “gift” in English means a present, but in German, “Gift” means poison.
The physical sensory inputs could be identical, but the symbolic interpretation is entirely dependent on belief.
Determining what is real through cross-examination
With the concepts of abstract and objective physical reality in mind, a natural question arises: If sapiens simultaneously operate within two separate realities, how do they determine what is objectively true?
Lowery offers a framework he calls cross-examination to answer this. At its core, all forms of intelligence — human or otherwise — boil down to pattern recognition (think of how IQ tests function).
On one side, we have imaginary pattern generation (abstract reality) — loaded with the stories, symbols, and beliefs we’ve accumulated. On the other, we have physical sensory inputs (objective physical reality) — sight, sound, touch, smell, and taste.
The brain then cross-examines these two sources against each other to assess whether what we imagine corresponds to anything tangible in the physical world. If the imaginary pattern aligns with physical sensory evidence, we register it as “objectively true”. If not, we don’t.
For example, if we see something round and orange in the dark, our brain might generate an imaginary pattern: “orange”. But let’s say vision alone isn’t enough to confirm it. To validate the pattern, we might reach out and touch the object, using physical power (our muscles) to manually generate a physical sensory input to confirm whether what we imagined aligns with the objective physical world.
While Lowery doesn’t state it explicitly, the same logic applies to scientific inquiry. A scientist begins by imagining a hypothesis — an abstract construct — and then seeks to confirm or falsify it through experiments grounded in objective physical reality.
The same mechanism applies to dreamlike or confusing experiences. If you’ve ever found yourself unsure whether something is real — like waking up from a vivid dream — you may have pinched yourself to generate a physical sensory input. That act is the brain’s way of manually producing an objective physical pattern to cross-examine against the imaginary one, in order to determine what’s true.
At least, that’s the idea. The brain is supposed to validate imaginary patterns against physical sensory inputs. But the problem, as Lowery will point out throughout this chapter, is that we have a tendency to frequently generate false positives — perceiving something as physically real simply because it prompted an imaginary pattern, even when there’s no physical sensory evidence to support it. Sometimes this works in our favor. It’s often better to be safe than sorry — to mistake a pile of branches for a snake one time too many, than to miss the real snake just once.
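For the programmatically inclined, here is one way to caricature the cross-examination loop. This is my own toy framing, not anything from the thesis: an imagined pattern registers as “objectively true” only when enough senses corroborate it, and a low threshold models the brain’s better-safe-than-sorry bias toward false positives.

```python
# Toy caricature of Lowery's cross-examination idea (my framing, invented).
# An imagined pattern is accepted as "objectively true" only if enough
# independent physical sensory inputs corroborate it.

def cross_examine(imagined: str, sensory_inputs: dict, threshold: int = 1) -> bool:
    """Count how many senses corroborate the imagined pattern."""
    corroborations = sum(
        1 for percept in sensory_inputs.values() if percept == imagined
    )
    return corroborations >= threshold

# Dusk on the savanna: a coiled shape ahead. Only sight reports "snake".
senses = {"sight": "snake", "sound": None, "touch": None}

print(cross_examine("snake", senses, threshold=1))  # True: one sense suffices
print(cross_examine("snake", senses, threshold=2))  # False: reach out and touch
# A threshold of 1 flinches at piles of branches, but never misses a real snake.
```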
But as Lowery hints, the real danger arises when we begin to form and act upon entire abstract realities — belief systems — that lack any mechanism for physical validation. In these cases, there’s no way to “pinch” the idea, no test to anchor it in objective physical reality. And when these untestable beliefs begin to guide individual or collective behavior on a worldwide scale, the consequences can quietly compound in the background — unnoticed until it’s too late.
But before turning to the dangers of abstract reality, Lowery first outlines the many ways it benefits sapiens — especially in how it allows them to increase their resource abundance while keeping their BCRA low through large-scale cooperation.
Abstract Thinking means Cooperation on Steroids
Abstract thinking offers enormous advantages when it comes to the stated objective of any organism: to increase resource abundance while keeping BCRA low.
Within Lowery’s broader framework, we’ve already seen that cooperation is one of the most powerful ways to achieve this. However, for cooperation to work at scale, organisms must overcome a key challenge: the internal order of resource control and ownership. In other words, who gets what — and why — must be resolved in a way that doesn’t lead to internal collapse.
This is where abstract thinking becomes a critical evolutionary tool. Sapiens can leverage it in countless ways: advanced pattern recognition, symbolism, complex language, long-term planning, and storytelling — all of which enable coordination among individuals who might not be physically related or even directly connected.
Lowery illustrates this through a comparison with hunting caribou. On a one-to-one basis, the CA for a human without tools is much higher than the BA — the caribou is faster, stronger, and more resilient. Sapiens wouldn’t stand a chance.
But through abstract thought, humans can imagine and construct tools — like spears — that raise their CA relative to the caribou. Even more strategically, they can use collaborative planning and mental modelling to manipulate the terrain — for example, herding the caribou into a canyon with no escape, and positioning themselves on the cliffs above, armed and coordinated. In doing so, they temporarily lower the caribou’s CA, flipping the power dynamics to their advantage.
This is an extreme evolutionary edge that abstract thinking provides: not brute strength, but the ability to mentally simulate, coordinate, and act collectively in ways that restructure the battlefield itself.
From a Primordial Economics perspective — and using Lowery’s “Bowtie Notation” visualisation — hunting by sapiens can be seen as a strategic effort to lower the CA of the prey, thereby increasing its BCRA and making the pursuit of high-value targets more viable.
It’s hard to overstate the benefits of abstract thinking when framed this way — but, as Lowery points out, it comes with an Achilles’ heel: empathy, and with it a whole suite of cognitive baggage.
Abstract Thinking and Empathy
When sapiens use their abstract thinking skills to plan an event — such as a hunt or any strategic interaction — they are essentially simulating the perspective of another being, anticipating its next move in order to act preemptively. This cognitive ability to model another’s intentions is a profound evolutionary advantage.
Lowery makes a reference to Yoda here, noting that part of what makes him so difficult to fight is that he already knows your next move. For a caribou, humans are like Yoda — it’s beyond their comprehension how we can anticipate their actions with such accuracy. This gives sapiens immense power. But it also introduces a burden.
Because to imagine another’s next move is, by definition, to place oneself in their position. And once sapiens do that, they can’t help but also feel their pain, their fear, and the emotional guilt of what they are planning.
This is the “double-edged sword” of abstract thinking: it enables unimaginable power, but it also introduces empathy. And with empathy comes guilt. With guilt, ego. And with ego, the possibility of self-deception, as well as a natural disinclination toward direct violence.
At first glance, this might not seem like a trade-off at all — it might even appear as a sign of “moral progress”. But Lowery is quick to point out that the consequence — which is the formation of abstract power — can be a very dangerous phenomenon.
This, and its complications, is what we’ll delve into in Part 2.