"If we can make contact with cephalopods as sentient beings, it is not because of a shared history, not because of kinship, but because evolution built minds twice over. This is probably the closest we will come to meeting an intelligent alien." -- Peter Godfrey-Smith, Other Minds (2016)
The common octopus (Octopus vulgaris) possesses approximately 500 million neurons -- comparable to a dog. But the architecture is radically unlike any vertebrate's. Roughly two-thirds of those neurons reside in the arms, not the brain.1 The central brain contains only about 45 million neurons, and the large optic lobes account for much of the remainder; over 300 million are distributed across the eight semi-autonomous arms, each with its own chain of neural ganglia. The connection between the brain and the arm nervous system is remarkably thin -- only about 30,000 nerve fibers link the central brain to the entire peripheral system.2 For comparison, the human spinal cord contains roughly one million fibers.
This creates something with no parallel among vertebrates: a creature whose limbs can think for themselves. When a severed octopus arm is stimulated, it exhibits behaviors nearly identical to those of an intact animal -- reaching, grasping, and even attempting to bring food toward where the mouth should be.3 Each arm can explore a crevice, identify prey by taste-touch through its suckers, and grasp it without any instruction from the central brain. The suckers themselves contain chemoreceptors and mechanoreceptors, making each one a miniature sensory organ.
What makes octopus cognition philosophically profound is the evolutionary distance. The last common ancestor of humans and octopuses lived roughly 600 million years ago -- a flat, worm-like creature of the Ediacaran period, probably with at most a simple nerve net.1 Every complex cognitive capability the octopus possesses was built independently, on a completely different chassis, through a completely different evolutionary pathway. This is what Godfrey-Smith means, in the epigraph above, by calling the octopus the closest we will come to meeting an intelligent alien.
The philosopher and diver Godfrey-Smith, based at the University of Sydney, frames this as convergent evolution of mind. Intelligence didn't arise once and spread; it was invented at least twice, through radically different means.1 Vertebrate intelligence emerged through increasingly centralized brains shaped by social pressures. Cephalopod intelligence emerged through decentralization -- the motor freedom of a boneless body interacting with a complex environment. These are genuinely different architectures for cognition.
Jennifer Mather, Professor of Psychology at the University of Lethbridge and one of the foremost researchers on cephalopod behavior, has documented an impressive cognitive repertoire. Her definition of intelligence centers on "the ability to solve problems in a flexible, domain-general way" -- and octopuses meet this criterion repeatedly.4
Tool use: The veined octopus (Amphioctopus marginatus) collects coconut shell halves, carries them across the seafloor, and assembles them into a shelter -- a behavior meeting the strict definition of tool use (manipulating an object for a future purpose, at a cost to present efficiency).4
Problem-solving: Octopuses routinely open screw-top jars from the inside, navigate complex mazes, and learn to distinguish shapes and patterns. They demonstrate observational learning -- watching another octopus solve a task and then replicating the solution. Mather emphasizes the ecological framing: octopuses solve "ecologically relevant problems," having evolved their intelligence in an arms race with prey, predators, and environmental complexity.4
Play: In a landmark study, Mather and colleagues presented well-fed giant Pacific octopuses with plastic pill bottles floating at the surface of their tanks. The octopuses initially explored the bottles with their suckers, then began engaging in what can only be described as play -- jet-blasting the bottles across the tank surface and catching them, with no feeding reward and no apparent survival function. Play behavior is traditionally considered a hallmark of complex cognition, documented primarily in mammals and birds.4
A 2023 study published in Nature provided striking evidence that octopuses experience sleep states remarkably analogous to mammalian REM sleep.5 During "quiet sleep," researchers observed brain waves resembling mammalian sleep spindles -- waveforms associated with memory consolidation. This quiet sleep is rhythmically interrupted by approximately 60-second bouts of what the researchers call "active sleep," during which the octopus's skin erupts in rapid color changes, its muscles twitch, its eyes move, and its suckers squeeze.
Computational analysis of the skin patterns during active sleep revealed dynamics "strongly resembling those seen while awake" -- as if the octopus were replaying waking experiences. The brain activity during this phase closely paralleled waking brain activity, just as human REM sleep does.5 The researchers cautiously note they cannot confirm dreaming (the octopus cannot report its experience), but the neural and behavioral signatures are strikingly parallel to the mammalian sleep architecture that produces dreams.
Octopus cognition demonstrates that intelligence does not require centralization. A mind need not be a unified, hierarchical command structure. It can be a distributed network of semi-autonomous agents, loosely coordinated, with most "thinking" happening at the periphery. Octopus skin can even sense light and change color independently of the brain -- excised skin patches respond to light with no central nervous system connection at all.6 If evolution on Earth has produced a mind this alien in architecture, the design space for extraterrestrial intelligence is vastly larger than we typically imagine.
The most unsettling question in the search for alien intelligence isn't whether aliens exist. It's whether they could be intelligent without being conscious -- strategically sophisticated, technologically capable, perhaps even communicative, but with nobody home. No subjective experience. No "what it's like" to be them. Just an extraordinarily complex information-processing system that mimics everything we associate with mind, while possessing none of its inner light.
Canadian science-fiction author and marine biologist Peter Watts explored this idea with disturbing rigor in his 2006 novel Blindsight.7 The novel's alien species, the "scramblers," are vastly more intelligent than humans. They analyze language, simulate communication, and solve complex problems. But they are not conscious. They are, in philosophical terms, zombies -- functional equivalents of minds without subjective experience.
The title references the neurological condition blindsight, in which patients with damage to the visual cortex can accurately respond to visual stimuli they report not seeing. Their non-conscious visual processing works; their conscious vision does not. Watts extends this to its logical extreme: what if all intelligence can operate without the conscious part?
"I went into the project assuming that consciousness had to be good for something, or natural selection would have weeded it out. The novel was going to explore what that might be. The problem was, I had this benchmark question I'd apply to any possible function for consciousness: would it be possible for a non-conscious system to do the same thing? And for every possible function -- learning, social interactions, mediating skeletal-muscle motor conflicts -- the answer kept being yes." -- Peter Watts, on writing Blindsight
Watts's conclusion is devastating: consciousness may be an evolutionary dead end, naturally selected as a solution for specific ancestral challenges but ultimately a liability when competing against intelligence unencumbered by the overhead of subjective experience.7
The question of whether consciousness is necessary for intelligence maps directly onto the deepest divide in philosophy of mind.
In Consciousness Explained (1991), Dennett argued that what we call consciousness is not a single unified phenomenon but a "multiple drafts" model -- parallel narratives generated by distributed brain processes, with no central "Cartesian theater" where experience comes together.8 What we experience as unified consciousness is, in Dennett's view, a user illusion -- not that nothing is happening, but that what's happening is radically different from what we naively believe.
Dennett was an eliminativist about qualia -- the supposed irreducible atoms of subjective experience (the redness of red, the painfulness of pain). He considered qualia "fictions, artifacts of bad theorizing." If qualia don't exist, then the distinction between a conscious being and a philosophical zombie collapses: "There is no difference between us and complex zombies, because we are all just complex zombies."8
Under Dennett's framework, Watts's scramblers are less paradoxical: if consciousness is already an illusion for us, then the question of whether aliens "really" have it becomes ill-formed. What matters is the functional architecture, not some additional metaphysical ingredient.
Chalmers, in The Conscious Mind (1996), drew the line that Dennett refused to draw. He distinguished between "easy problems" of consciousness (explaining behavior, integration of information, reportability -- difficult but tractable with normal neuroscience) and the "hard problem": why is there something it is like to have an experience at all?9
The hard problem is the explanatory gap: for any physical process you specify, there remains an unanswered question -- why should this process give rise to subjective experience? A complete description of every neuron firing in a brain does not, by itself, explain why there is a "what it's like" from the inside. Chalmers argues this gap shows consciousness is not reducible to physical processes -- it is something additional, perhaps a fundamental feature of reality.
Chalmers's zombie argument is a thought experiment: imagine a being physically identical to you, atom for atom, performing identical behaviors, but with no subjective experience.9 Chalmers argues this is conceivable -- you can coherently imagine it without contradiction. And if it's conceivable, then consciousness is not logically entailed by physical structure alone. Physical facts underdetermine phenomenal facts. (Dennett's response: zombies are not actually conceivable. If you think you can imagine one, you're confused about what you're imagining.)
The debate has not resolved but has shifted terrain. Illusionism (Frankish 2016, building on Dennett) has gained philosophical traction as a formal position: consciousness is real as a phenomenon but not as it seems -- our introspective reports systematically mischaracterize the underlying processes.10 A 2024 paper in Neuroscience of Consciousness (Oxford) argues that consciousness intuitions "are constructed by psychological biases rather than reflecting what consciousness actually is." Meanwhile, Chalmers's position has evolved into exploring whether consciousness might be a fundamental feature of information itself -- leading toward Integrated Information Theory and panpsychism (Section VII).
Thomas Nagel's 1974 paper "What Is It Like to Be a Bat?" provides perhaps the most devastating framing of the alien consciousness problem.11 Nagel's argument: "an organism has conscious mental states if and only if there is something that it is like to be that organism." We can know every objective fact about a bat's sonar system, neural firing patterns, and flight mechanics. We can imagine ourselves hanging upside down and navigating by echolocation. But we cannot know what it is like for the bat. Our imagination is limited to projecting our own experiential categories onto a fundamentally different mode of being.
The implications for alien contact are stark: even if we were in the presence of conscious alien intelligence, we might have no way to confirm it. We cannot bridge the gap between objective description and subjective experience even for organisms on our own planet. The gap would be vastly wider for beings whose evolutionary history, sensory modalities, and cognitive architecture share nothing with ours.
Integrated Information Theory (IIT), developed by neuroscientist Giulio Tononi beginning in 2004 and formalized in its fourth version (IIT 4.0) in 2023, attempts to cut through this impasse with a mathematical framework for measuring consciousness in any physical system.12
The core idea: consciousness is identical to integrated information, quantified by the metric Phi (Φ). A system is conscious to the degree that it generates information "above and beyond" what its parts generate independently. A system with high Φ -- where the whole is informationally greater than the sum of its parts -- is more conscious. A system with Φ = 0 -- where the parts can be separated without informational loss -- is not conscious at all.
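The arithmetic behind Φ is easiest to see in a toy case. The Python sketch below is a deliberate caricature -- real IIT 4.0 uses intrinsic cause-effect measures and a search over all possible partitions, none of which appears here -- but it captures the core subtraction: information the whole system carries about its own next state, minus what its parts carry separately.

```python
import itertools
import math

# All joint states of a 2-unit binary system.
STATES = list(itertools.product([0, 1], repeat=2))

def mutual_information(pairs):
    """Mutual information (bits) between paired past/future values,
    with every past state equally likely."""
    n = len(pairs)
    p_joint, p_past, p_future = {}, {}, {}
    for x, y in pairs:
        p_joint[(x, y)] = p_joint.get((x, y), 0) + 1 / n
        p_past[x] = p_past.get(x, 0) + 1 / n
        p_future[y] = p_future.get(y, 0) + 1 / n
    return sum(p * math.log2(p / (p_past[x] * p_future[y]))
               for (x, y), p in p_joint.items())

def phi_toy(update):
    """Whole-system predictive information minus the parts' marginal sums."""
    whole = mutual_information([(s, update(s)) for s in STATES])
    part_a = mutual_information([(s[0], update(s)[0]) for s in STATES])
    part_b = mutual_information([(s[1], update(s)[1]) for s in STATES])
    return whole - (part_a + part_b)

swap = lambda s: (s[1], s[0])      # each unit copies the other: integrated
parallel = lambda s: (s[0], s[1])  # each unit copies itself: separable

print(phi_toy(swap))      # 2.0 -- the whole exceeds the sum of its parts
print(phi_toy(parallel))  # 0.0 -- the parts can be cut without loss
```

The swap system scores 2 bits because only the joint state predicts the joint future; the parallel system scores zero because cutting it into independent halves loses nothing -- the Φ = 0 case described above.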
The radical implication: IIT is substrate-independent. It doesn't care whether a system is made of neurons, silicon, or something else entirely. In principle, its postulates "can be applied to any system of units in a state to determine whether it is conscious, to what degree, and in what way."12 This leads to controversial conclusions: power grids, gene-regulation networks, and social networks might possess non-zero Φ. Current transformer-based AI architectures (like GPT), however, have been analyzed and found to have negligible integrated information due to their primarily feedforward architecture.
IIT remains controversial. In 2023, critics characterized it as unfalsifiable pseudoscience. But in pre-registered experimental tests, two of three IIT predictions passed the agreed threshold, while none of a competing theory's predictions did.12 Whether IIT succeeds or fails, it represents the first serious attempt to build a consciousness detector that could work on non-biological, non-terrestrial systems.
If Watts is right that intelligence doesn't require consciousness, then SETI's assumption that "intelligent signal implies a mind behind it" breaks down. We could receive a signal from an entity that is strategically brilliant but experientially empty. If Chalmers is right that consciousness is irreducible, we face the opposite problem: the alien might be richly conscious in ways we can never verify or comprehend. If Tononi is right, we could in principle measure alien consciousness -- but only if we can observe the system's causal architecture, which requires proximity and understanding we may never achieve.
In 1980, philosopher John Searle proposed a thought experiment that remains one of the most debated in philosophy of mind.13 Imagine a person locked in a room with a massive book of rules. Chinese characters are slipped under the door. The person -- who understands no Chinese -- follows the rules to look up the input characters and produce output characters, which are slipped back out. To an outside Chinese speaker, the room appears to understand Chinese. It gives perfect, contextually appropriate responses. But nobody and nothing in the room understands Chinese. The person is manipulating symbols according to syntactic rules with no access to their semantics -- their meaning.
Searle's conclusion: syntax is not sufficient for semantics. Symbol manipulation, however sophisticated, is not understanding. Computation is not comprehension. A program that perfectly simulates a Chinese speaker does not thereby understand Chinese.13
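In code, the room is almost insultingly simple -- which is part of Searle's point. A minimal sketch (the rules and phrases below are invented placeholders standing in for Searle's vastly larger rule book; no real dialogue system works this way):

```python
# The "rule book": input symbol strings mapped to output symbol strings.
# Glosses are deliberately omitted -- the operator, like this program,
# only ever touches the shapes of the symbols.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",
    "今天天气怎么样？": "今天天气很好。",
}

def room(symbols_under_door: str) -> str:
    # Pure syntax: match the incoming squiggles, emit what the rules dictate.
    return RULE_BOOK.get(symbols_under_door, "请再说一遍。")

print(room("你好吗？"))  # fluent, contextually appropriate -- and empty
```

Scale the table up by many orders of magnitude and add rules that track context, and the outputs become indistinguishable from conversation. At no scale does meaning enter the process.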
Now apply this to first contact. An alien transmits radio signals that, after analysis, we learn to decode. We transmit queries; they transmit responses. The responses are contextually appropriate, logically structured, and increasingly sophisticated. The exchange goes on for decades. We build a working lexicon. But does the alien understand?
The Chinese Room suggests a disquieting possibility: the alien could be executing a process that is formally equivalent to communication while having no comprehension whatsoever -- no subjective grasp of what its signals mean, or that they mean anything at all. The signals could be outputs of an immensely complex but entirely mechanical process, like the tidal patterns of a moon that happen to encode statistical regularities a radio telescope can detect.
And here is the deeper cut: how would we know the difference? If the behavioral outputs are identical, the Chinese Room argument suggests there is no behavioral test that can distinguish genuine understanding from perfect simulation.
The most influential counter-argument: maybe the person in the room doesn't understand Chinese, but the whole system does -- the person plus the rule book plus the input/output mechanism, taken together, constitutes a Chinese-understanding entity.14 The person is just a component, like a single neuron. No individual neuron in your brain understands English either.
Searle's rebuttal: let the person memorize the entire rule book and do the calculations in his head. Now the "system" is entirely within a single person. Does the person suddenly understand Chinese? Searle says no -- memorizing rules doesn't create understanding.
Alien implication: If the systems reply is correct, then an alien entity could possess genuine understanding even if no identifiable component of it understands -- the understanding is a property of the whole system's organization. We might be interacting with a distributed entity (like the octopus, or a mycelium network) where understanding exists at the system level but in no localizable place. We'd never find the "mind" by dissection because it exists only in the pattern of interactions.
Perhaps what the Chinese Room lacks is embodiment. If the room were embedded in a robot that could physically interact with the world -- see real objects, manipulate real things, have causal contact with the referents of its symbols -- then the symbols would acquire grounding. Meaning comes from interaction with the world, not from internal symbol manipulation alone.14
Alien implication: An alien entity that evolved in a radically different physical environment would have its symbols grounded in radically different causal interactions. Even if both species used the same mathematical symbols, the experiential grounding of those symbols -- what they mean in terms of lived interaction with reality -- might be fundamentally incommensurable. "Two" might refer to the same abstract quantity, but the felt understanding of "twoness" could be as different as sight is from echolocation.
Critics have argued that we should be willing to attribute understanding on the basis of behavior -- just as we do with other humans. We never have direct access to another person's understanding; we infer it from behavior. As the Stanford Encyclopedia of Philosophy entry puts it, this is what "we would do with extra-terrestrial aliens (or burning bushes or angels) that spoke our language" -- if it behaves as though it understands, the burden of proof is on the skeptic.13
Searle himself addressed the alien dimension directly. He denied being a "carbon chauvinist," stating: "I have not tried to show that only biologically based systems like our brains can think." He considers the question "up for grabs," noting that even silicon machines could theoretically have consciousness -- but only if their physical-chemical properties can produce consciousness, something we cannot currently verify for any non-biological system.13
The Chinese Room, applied to alien contact, creates a three-way uncertainty: (1) the alien genuinely understands, in which case communication is real; (2) the alien processes symbols without understanding, in which case we are in conversation with an empty room; (3) the alien understands in a way so fundamentally different from human understanding that the distinction between options 1 and 2 doesn't map onto the situation at all. The categories "understanding" and "mere processing" may themselves be artifacts of human cognitive architecture, inapplicable to whatever the alien is doing.
If octopus cognition challenges the assumption that intelligence requires centralization, plant and fungal cognition challenges a deeper assumption: that intelligence requires neurons at all.
Suzanne Simard, Professor of Forest Ecology at the University of British Columbia, discovered that trees in a forest are not isolated competitors but nodes in a vast underground communication network mediated by mycorrhizal fungi.15 Using carbon isotope tracers, Simard demonstrated that Douglas fir trees and paper birch trees exchange nutrients through fungal connections -- with the direction of transfer shifting seasonally based on which species had surplus. Trees don't just passively receive nutrients; they actively trade them.
Simard's most striking finding concerns "mother trees" -- the largest, oldest trees in a forest that serve as central hubs in the mycorrhizal network.16 Mother trees preferentially direct carbon and nutrients to their own seedling offspring over unrelated seedlings of the same species. They can detect which seedlings are kin. When a mother tree is dying, it increases its carbon transfer to surrounding seedlings, as if "dumping" its resources into the network for the benefit of its descendants.16
Fungal individuals can span enormous areas. A single specimen of Armillaria ostoyae -- a root-rot pathogen rather than a mycorrhizal species -- covers 2,384 acres in Oregon's Blue Mountains, making it the largest known organism on Earth. Mycorrhizal networks, meanwhile, carry more than nutrients: trees under insect attack can send chemical alarm signals through them, triggering defensive chemical production in connected trees that haven't yet been attacked.
Simard describes the network as "a world of infinite biological pathways" that allows the forest to behave "as a sort of intelligence."15
Stefano Mancuso, Professor at the University of Florence and founder of the International Laboratory of Plant Neurobiology (2005), has been the most vocal advocate for reconceiving plant cognition.17 His definition of intelligence is deliberately minimalist: "the ability to solve problems." Under this definition, plants are unambiguously intelligent.
Mancuso's key insight is architectural. Animals concentrate their sensory receptors in specialized organs (eyes, ears, skin, tongue). Plants distribute their receptors across their entire body surface. Every root tip (and a single plant may have millions) can independently detect and respond to gravity, moisture gradients, nutrient concentrations, light quality, and chemical signals from neighbors. A plant with no brain has, in a sense, millions of sensory-motor units all operating in parallel.17
Mancuso argues that this distributed architecture isn't a deficiency -- it's a design advantage. A plant can lose 90% of its body to a herbivore and recover. A human who loses even a small fraction of their brain can be catastrophically impaired. The plant's modular, non-centralized architecture makes it antifragile in ways that centralized nervous systems cannot be.
Perhaps the most provocative experimental work comes from Monica Gagliano, whose studies on Mimosa pudica (the "sensitive plant") demonstrated what she argues is learning and memory without any nervous system.18
Mimosa plants fold their leaves when physically disturbed -- a defensive response. Gagliano built an apparatus that repeatedly dropped the plants a short distance. Initially, the plants folded their leaves on each drop. But after repeated drops with no actual damage, the plants stopped responding -- a textbook case of habituation, a form of learning.18
The critical finding: the plants retained this learned response for at least 28 days, even when returned to a favorable environment with no further stimulation. This is not mere fatigue (the plants would still fold their leaves in response to a novel stimulus, like shaking). It is selective, stimulus-specific, long-duration learning -- indistinguishable, at the behavioral level, from what we observe in animals with nervous systems.
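The behavioral logic is simple enough to state algorithmically. A minimal sketch (the decay constant and stimulus labels below are invented; this models only the behavioral signature, not any proposed plant mechanism):

```python
from collections import defaultdict

class Habituator:
    """Stimulus-specific habituation: responsiveness to a repeated stimulus
    decays with exposure, while responses to novel stimuli stay intact --
    the signature that separates habituation from mere fatigue."""
    def __init__(self, decay=0.6):
        self.decay = decay
        self.exposures = defaultdict(int)   # exposure count per stimulus

    def respond(self, stimulus: str) -> float:
        strength = self.decay ** self.exposures[stimulus]  # 1.0 down toward 0
        self.exposures[stimulus] += 1
        return strength

plant = Habituator()
for _ in range(7):
    plant.respond("drop")                # repeated, harmless drops
print(round(plant.respond("drop"), 3))   # 0.028: habituated to dropping
print(round(plant.respond("shake"), 3))  # 1.0: full response to novelty
```

What made Gagliano's result remarkable is not this logic -- any thermostat-grade system can implement it -- but that a brainless organism retained the stimulus-specific part of it for a month.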
Gagliano went further with pea plants, demonstrating what she argues is associative learning (Pavlovian conditioning) -- plants conditioned to associate a fan (the conditioned stimulus) with light (the unconditioned stimulus) would grow toward the fan even when the light was absent, but only if they had been "trained."18
The mechanism is debated. Plants lack neurons but possess a sophisticated calcium-based signaling network in their cells, similar to the calcium signaling involved in animal memory processes. Some replication attempts have not fully reproduced Gagliano's results, and the field remains contested.18 But the question itself is transformative: if behavior indistinguishable from learning can occur without neurons, what is a neuron actually for?
Biologist Merlin Sheldrake's Entangled Life (2020, winner of the Royal Society Science Book Prize) synthesizes the emerging picture of fungal cognition.19 Fungi are neither plants nor animals -- they constitute their own kingdom, more closely related to animals than to plants. Their mycelium (the vast underground network of filaments) serves simultaneously as sense organ, digestive tract, and, arguably, a distributed nervous system analog.
Sheldrake documents how fungi exhibit what appears to be problem-solving, adaptability, and collective memory. When resources are distributed unevenly, mycelial networks reallocate their biomass, abandoning unsuccessful foraging paths and reinforcing productive ones -- a process that, as we'll see in Section V, mirrors the optimization algorithms used by slime molds to solve network design problems.
Imagine a planet where the dominant "intelligence" is a planet-spanning mycelium network -- no individual organisms, no neurons, no central brain, but a single interconnected entity that processes chemical signals across thousands of square miles, allocates resources, responds to environmental threats, and has been doing so for millions of years. It is intelligent by every functional definition. It solves problems, adapts, remembers, and communicates. Would we recognize it? Could we communicate with it? Would we even know it was there? We might land on it, walk across it, and drill into it without ever realizing we were standing on a mind.
Can intelligence exist not in an individual but in the interactions between individuals? Can a mind emerge from the collective behavior of millions of simple agents, none of whom are individually intelligent?
Deborah Gordon, Professor of Biology at Stanford, has spent decades studying harvester ant colonies in the Arizona desert. Her central discovery: "The basic mystery about ant colonies is that there is no management."20 No ant directs another. The queen is not a leader -- she is an egg-laying machine with no administrative function. There is no hierarchy, no chain of command, no central plan. Yet the colony sculpts underground chambers, manages waste, cares for young, allocates foragers, and defends territory with sophisticated strategic responses to environmental change.
How? Through simple local rules. Some ant coordination is stigmergic -- indirect, through traces left in the environment, like pheromone trails -- but Gordon's harvester ants rely chiefly on direct contact: ants decide what to do based on the rate, rhythm, and pattern of brief antennal encounters with other ants. A forager decides whether to go out based on how frequently she encounters returning foragers. If encounters are frequent (suggesting good foraging), she goes. If infrequent, she stays. No ant knows the colony's food supply; each responds only to local information. But the colony as a whole makes excellent foraging decisions.20
Gordon and her Stanford colleagues discovered what they call the "Anternet": the colony's foraging regulation uses the same feedback logic as TCP congestion control on the internet. Outgoing foragers are dispatched at a rate throttled by the rate of returning foragers, much as TCP throttles packet transmission by the rate of acknowledgments -- the ants independently evolved the flow-regulation solution human engineers designed for data networks.20
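That feedback loop fits in a few lines. A toy version (the rates, trip durations, and caps below are invented for illustration, and the dynamics are simplified from the published model):

```python
import random

def foraging_rate(food_richness: float, steps: int = 500, seed: int = 1) -> float:
    """food_richness = probability that a foraging trip finds food quickly."""
    rng = random.Random(seed)
    rate = 1.0         # outgoing foragers dispatched per time step
    trips = []         # time remaining on each active forager's trip
    dispatched = 0
    for _ in range(steps):
        for _ in range(int(rate)):
            trips.append(5 if rng.random() < food_richness else 40)
            dispatched += 1
        returned = sum(1 for t in trips if t <= 1)   # foragers coming home
        trips = [t - 1 for t in trips if t > 1]
        if returned:
            rate = min(rate + 0.2 * returned, 20.0)  # increase on "ACKs"
        else:
            rate = max(rate * 0.9, 0.5)              # back off otherwise
    return dispatched / steps

print(f"rich patch: {foraging_rate(0.9):4.1f} foragers/step")
print(f"poor patch: {foraging_rate(0.1):4.1f} foragers/step")
```

Run it and the colony's sending rate tracks the quality of the environment -- high when returns are fast, throttled when they aren't -- with no agent anywhere that knows the food supply.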
Thomas Seeley, Professor of Neurobiology and Behavior at Cornell, documented an even more striking case of collective intelligence in Honeybee Democracy (2010).21 When a honeybee swarm needs to choose a new home -- a life-or-death decision -- it runs a process of collective fact-finding, open debate, and consensus building.
Scout bees explore potential nest sites and return to perform waggle dances advertising their discoveries -- the vigor of the dance proportional to the site's quality. Other scouts investigate the advertised sites and, if impressed, join in the dance for that site. Sites compete for "votes" through a process of positive feedback: better sites attract more scouts, whose dances recruit still more scouts. Over hours, the swarm converges on the best available option.21
Seeley found "intriguing similarities between how the bees in a swarm and the neurons in a brain are organized": in both systems, competing options accumulate support (bee visits or neuron firings) until a threshold triggers a decision. The swarm's decision-making is not just analogous to neural computation -- it may be an independent instantiation of the same computational principle.21
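The positive-feedback dynamic Seeley describes can be sketched directly (the site qualities, recruitment rates, and quorum below are invented; this illustrates the dynamic, not his quantitative model):

```python
import random

def choose_site(qualities, n_scouts=100, quorum=0.7, seed=7):
    """Dancing scouts recruit idle ones in proportion to dancer numbers;
    better sites retain their dancers longer ("vigor"); the first site
    to amass a quorum of committed scouts wins."""
    rng = random.Random(seed)
    committed = [None] * n_scouts          # site index each scout backs
    while True:
        dancers = [0] * len(qualities)
        for site in committed:
            if site is not None:
                dancers[site] += 1
        for site, count in enumerate(dancers):
            if count >= quorum * n_scouts:
                return site
        for i in range(n_scouts):
            if committed[i] is None:
                if rng.random() < 0.05:            # independent discovery
                    committed[i] = rng.randrange(len(qualities))
                elif sum(dancers) and rng.random() < 0.5:   # recruitment
                    committed[i] = rng.choices(
                        range(len(qualities)), weights=dancers)[0]
            elif rng.random() > qualities[committed[i]]:
                committed[i] = None        # weaker sites shed dancers

print(choose_site([0.3, 0.9, 0.5]))  # settles on index 1, the best site
```

No scout ever compares two sites. Each follows local rules -- dance, watch, defect -- and the comparison happens only at the level of the swarm, which is the sense in which Seeley calls the decision neural.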
Perhaps the most philosophically challenging case is Physarum polycephalum, a single-celled organism (technically a plasmodial slime mold) with no neurons, no nervous system, and no brain. In 2000, Toshiyuki Nakagaki and colleagues demonstrated that Physarum can solve mazes.22
When a slime mold was allowed to spread across a maze with food at the entrance and exit, it initially filled the entire maze. Then, through a process of reallocation, it withdrew from dead ends and longer paths until a single efficient tubule connected the food sources along the shortest path. The organism had solved the maze without any centralized processing.
A 2010 experiment by Nakagaki and colleagues was even more dramatic. Food sources were placed on an agar plate in positions corresponding to the major cities around Tokyo, and the slime mold was placed at the Tokyo position. Over 26 hours, it grew tendrils connecting all the food sources -- and the resulting network closely matched the Tokyo railway system.22 Where the slime mold chose a different route, the alternative was often more efficient than the human-engineered solution. Nakagaki argued that any differences between the two networks were "the result of human politics" and that the slime mold may have produced a more optimal solution.
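Researchers have modeled this behavior as a current-reinforcement loop, and a toy version is short (the four-node graph, edge lengths, and rate constants below are invented, and the dynamics are simplified from the published models): compute nutrient "flow" between two food sources from Kirchhoff's laws, thicken tubes that carry flux, let idle tubes decay.

```python
import numpy as np

# Toy plate: food at nodes 0 and 3, two candidate routes plus a cross-link.
edges = [(0, 1), (1, 3), (0, 2), (2, 3), (1, 2)]
length = np.array([1.0, 1.0, 1.5, 1.5, 0.7])   # route 0-1-3 is shortest
D = np.ones(len(edges))                         # tube conductivities
source, sink, n_nodes = 0, 3, 4
i_idx = np.array([i for i, _ in edges])
j_idx = np.array([j for _, j in edges])

for _ in range(100):
    # Kirchhoff's laws: solve node pressures for one unit of flow 0 -> 3.
    w = D / length
    L = np.zeros((n_nodes, n_nodes))
    for k, (i, j) in enumerate(edges):
        L[i, i] += w[k]; L[j, j] += w[k]
        L[i, j] -= w[k]; L[j, i] -= w[k]
    b = np.zeros(n_nodes)
    b[source] = 1.0
    L[sink, :] = 0.0; L[sink, sink] = 1.0; b[sink] = 0.0  # ground the sink
    p = np.linalg.solve(L, b)
    # Flux through each tube; reinforcement: tubes grow toward |flux|.
    Q = w * (p[i_idx] - p[j_idx])
    D = np.maximum(D + 0.5 * (np.abs(Q) - D), 1e-6)  # floor avoids singularity

for (i, j), d in zip(edges, D):
    print(f"tube {i}-{j}: conductivity {d:.2f}")  # only 0-1 and 1-3 survive
```

The network starts by "filling the maze" (every tube equally conductive) and ends with a single efficient connection -- the maze result, recovered from two update rules and no central processor.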
The slime mold also demonstrates memory. A 2012 study published in PNAS showed that Physarum uses an externalized spatial "memory" -- it deposits slime trails that it avoids revisiting, effectively marking explored territory. This functions as an external memory device, allowing the organism to navigate complex environments without re-exploring dead ends.23
These examples suggest a possibility that SETI has barely considered: an alien "civilization" might not be a collection of individuals at all. It might BE a single distributed intelligence. Imagine a planetary ocean whose chemical ecology functions as a single computational medium -- no discrete organisms, no individuals, just a planet-spanning chemical network that processes information, responds to stimuli, and has been optimizing itself for billions of years. It has no technology because it IS its own technology. It has no language because it has no need to communicate with separate selves. There is only one self, and it is the size of a world.
Would our radio telescopes detect it? Would we recognize it as intelligence? Would it recognize us?
| System | Individual Unit | Cognitive Achievement | Central Brain? |
|---|---|---|---|
| Ant colony | Ant (~250K neurons) | Network optimization, foraging algorithms, warfare strategy | None |
| Bee swarm | Bee (~1M neurons) | Democratic decision-making, optimal site selection | None |
| Slime mold | Single cell (0 neurons) | Maze solving, network optimization, spatial memory | None |
| Mycelium network | Fungal hypha (0 neurons) | Resource allocation, chemical signaling, environmental adaptation | None |
| Human brain | Neuron | General intelligence, consciousness(?) | Yes (but emergent from non-intelligent units) |
If alien cognition is structured by a fundamentally different symbol system, could mutual comprehension be impossible in principle? This is the question the Sapir-Whorf hypothesis raises for interstellar contact -- and the answer depends on which version of the hypothesis is true.
The strong version (linguistic determinism) holds that language determines thought. The categories of your language are the categories of your cognition. If your language has no word for "future," you cannot conceive of the future. This version, associated with Benjamin Lee Whorf's early 20th-century writings, has been largely abandoned by contemporary linguists as too extreme.24
The weak version (linguistic relativity) holds that language influences thought without strictly determining it. Your language makes certain distinctions more cognitively available, certain categories more salient, certain reasoning patterns more natural -- but it does not imprison you. This version has substantial empirical support.24
Cognitive scientist Lera Boroditsky (UC San Diego) has produced the most compelling experimental evidence for the weak hypothesis through cross-linguistic studies.25
Color perception: Russian speakers, whose language forces a distinction between light blue (goluboy) and dark blue (siniy) -- two separate basic color terms, not shades of one -- are measurably faster at discriminating these blues than English speakers, who collapse both into "blue."25
Time and space: The Kuuk Thaayorre people of Cape York, Australia, don't use relative spatial terms like "left" and "right." They use absolute cardinal directions -- even for describing the position of objects on a table. When asked to arrange temporal sequences (photos showing a person aging, a banana being eaten), they arranged them from east to west, regardless of which direction they faced. Their spatial language literally restructured their temporal cognition.25
Agency and blame: English emphasizes the agent in accidents ("She broke the vase"); Spanish and Japanese often use agentless constructions ("The vase broke"). English speakers are significantly better at remembering who caused an accident. Language shapes not just perception but causal reasoning and memory.25
The standard assumption in SETI is that mathematics is universal -- the one language that any intelligence must share, because mathematical truths are discovered, not invented. The prime numbers are prime regardless of who counts them. But is this assumption safe?
A 2024 study noted that humans and honeybees have independently developed the capacity for mathematical cognition -- if two species as evolutionarily distant as humans and bees can both "do math," perhaps mathematical reasoning is a convergent cognitive trait, a natural consequence of any sufficiently complex intelligence interacting with a physical world governed by consistent laws.26
Critics argue that mathematical systems are "embodied constructs deeply rooted in human perception, culture, and biology" -- that what we call mathematics is an anthropocentric mapping of relational patterns, not an objective discovery of pre-existing truths.26 An alien race that evolved different sensory modalities, different spatial geometries (imagine a species native to a region of extreme spacetime curvature), or different temporal experiences might start with a different geometry and derive its laws of motion in that geometry, resulting in formalisms that are mathematically equivalent to ours but conceptually unrecognizable.
Physicist Mario Livio's compromise: humans invent mathematical concepts by abstracting from their sensory experience, then discover connections among those concepts. The connections may be universal, but the conceptual entry points -- the abstractions that seemed natural, the patterns that seemed worth formalizing -- are shaped by cognitive architecture. An alien with different cognitive architecture might formalize different patterns and discover different connections, arriving at a mathematics that is formally equivalent but practically alien.26
Philosopher Donald Davidson's influential 1974 essay "On the Very Idea of a Conceptual Scheme" provides a counterargument to extreme linguistic relativism.27 Davidson argues that the very idea of a totally untranslatable language is incoherent: if we cannot translate it at all, we have no grounds for calling it a language in the first place. The inability to translate is evidence not of "an untranslatable language" but of "the absence of a language of any sort."
Davidson's point cuts both ways for alien contact. It suggests that if aliens have anything we'd recognize as a language, some translation must be possible. But it also implies that if their cognitive medium is sufficiently different from language as we understand it -- if they process information through chemical gradients, electromagnetic fields, quantum entanglement, or other means we don't even categorize as "symbolic" -- then the concept of "translation" simply doesn't apply. There is nothing to translate, not because it's too foreign, but because it isn't a language.
The Sapir-Whorf problem applied to aliens isn't just about vocabulary or grammar. It's about whether the fundamental categories of thought -- object, event, cause, time, space, quantity, self, other -- are universal features of any possible mind, or contingent products of our specific evolutionary history. If a terrestrial language can measurably reshape how humans perceive color, time, and causation, what would a genuinely alien cognitive architecture do to the very foundations of thought? The gap between English and Kuuk Thaayorre is trivial -- tens of thousands of years of linguistic divergence, at most, within a single species. The gap between human cognition and alien cognition could be measured in billions of years of independent evolution on a different planet.
The previous sections have explored alien intelligence from the outside -- its possible architectures, behaviors, and communicative capacities. This section asks the most speculative and perhaps most important question: what would alien consciousness be like from the inside?
Panpsychism -- the view that consciousness or "experiential being" is a fundamental feature of all concrete reality -- has undergone a dramatic academic revival in the past two decades, driven primarily by the work of Galen Strawson and Philip Goff.28
Strawson's argument, laid out in his landmark 2006 paper "Realistic Monism," runs as follows: (1) materialism/physicalism is true -- everything that exists is physical. (2) Consciousness is real -- it exists. (3) There is no "radical emergence" -- you cannot get something fundamentally new (consciousness) from something that entirely lacks it (non-conscious matter). Therefore: matter must already have a consciousness-involving nature at its most basic level.28
This is not the claim that electrons are "thinking" about anything. It is the claim that what physics describes -- the mathematical structure of matter's behavior -- is the exterior of something that also has an interior aspect, however rudimentary. Physics tells us what matter does; panpsychism proposes that there is also something matter is, from the inside.
Goff, in Galileo's Error (2019) and related academic work, argues from parsimony. We know that some physical systems (our brains) are associated with consciousness. The simplest explanation -- the one requiring the fewest additional assumptions -- is that the matter outside brains is continuous with the matter of brains in also having a consciousness-involving nature.28 The alternative -- that consciousness appears only at some specific level of physical complexity, for reasons nobody can explain -- is a far more baroque hypothesis.
Goff notes that panpsychism is fully compatible with current physics. Physics describes the structure and dynamics of matter. It says nothing about its intrinsic nature. Panpsychism fills that gap without contradicting any physical theory.
The hardest objection to panpsychism is the combination problem, traceable to William James and formalized by William Seager (1995).29 If electrons have micro-experiences, how do these micro-experiences combine to produce the unified macro-experience of a human consciousness? How do billions of tiny experiential "drops" merge into a single experiential "ocean"?
The problem has several dimensions:29
The subject combination problem: How do micro-subjects of experience combine to form a macro-subject? You are not a collection of tiny conscious entities; you are a single experiencing subject. How does the transition happen?
The quality combination problem: How do the phenomenal characters of micro-experiences combine to produce macro-phenomenal characters? Even if electrons have some experiential quality, how do those qualities add up to the redness of red or the sound of a trumpet?
The structure combination problem: How does the spatial/temporal structure of a unified experience arise from micro-level experiential elements?
No panpsychist has fully solved the combination problem. But as philosopher Angela Mendelovici argues, "panpsychism's combination problem is a problem for everyone" -- every theory of consciousness must explain how complex experience arises from simpler components, whether those components are experiential (panpsychism) or non-experiential (physicalism).29
Tononi's Integrated Information Theory (discussed in Section II) has deep affinities with panpsychism. If consciousness is identical to integrated information (Φ), and if any physical system with non-zero Φ is to some degree conscious, then consciousness is not a special property of brains but a ubiquitous feature of complex systems.12
IIT 4.0's ontological position goes even further. It proposes that any conscious system "exists for itself as a maximally unitary whole, irreducible to its parts." This leads to the startling implication that consciousness is the fundamental mode of existence -- that a system with high Φ is more real, in a meaningful sense, than a system with low Φ.12
If panpsychism is correct, the question isn't whether alien intelligence is conscious -- everything is, to some degree. The question is what form alien consciousness takes, and whether it would be recognizable as consciousness to us.
Consider the possibilities generated by the architectures in Sections I-V:
Distributed consciousness (octopus model): A being whose experience is not unified but fragmented across semi-autonomous sub-systems. There might not be a single "what it's like to be this creature" but rather multiple overlapping experiential streams -- a central stream from the brain and eight peripheral streams from the arms, all partially integrated but never fully unified. The being's consciousness might be more like a jazz ensemble than a solo performance.
Collective consciousness (ant colony model): If a bee swarm's decision-making mirrors neural computation, could a swarm have its own unified experience? Under IIT, if the swarm's information integration exceeds that of any individual bee, the swarm itself would be the primary conscious entity, with individual bees as components -- just as individual neurons are components of your consciousness without being independently conscious (at the macro level).
Slow consciousness (mycelium model): A planet-spanning fungal network with high integrated information but operating on timescales of weeks or months rather than milliseconds. A single "thought" might take a year. A single "perception" might encompass an entire season's weather patterns. From the network's perspective, human consciousness flickers too fast to perceive -- like a hummingbird's wing. We would be too fast for it; it would be too slow for us.
Non-temporal consciousness: A being whose cognitive architecture doesn't process information sequentially but in some parallel or holistic mode -- experiencing patterns rather than sequences, relationships rather than events. Such a being might have no concept of "before" and "after" -- its consciousness might be more like contemplating a sculpture than reading a sentence.
Thomas Nagel showed we can't know what it's like to be a bat. But a bat is a mammal with a brain architecturally similar to ours, sharing 83% of our genome. If we can't bridge that gap, the gap to an alien consciousness -- structured by a different chemistry, different physics, different evolutionary history, operating on different timescales, in different modalities, with different organizational principles -- isn't just a gap. It's an abyss. Panpsychism tells us the alien is conscious. It does not, and cannot, tell us what that consciousness is like.
The seven domains of this investigation converge on a single, uncomfortable conclusion: our search for alien intelligence is constrained by assumptions so deep we barely recognize them as assumptions.
Niklas Dobler's work on "exopsychology" -- published in the International Journal of Astrobiology (Cambridge) -- formalizes this concern: SETI's methods embed profound anthropomorphism, assuming that alien intelligence will think like us, want like us, communicate like us, and build technology like us.30 The research reviewed here suggests each of these assumptions could be wrong.
1. Architectural blindness. We assume intelligence looks like a brain -- centralized, hierarchical, fast. But octopuses demonstrate distributed cognition, ant colonies demonstrate emergent intelligence without any central processor, and plants demonstrate problem-solving without neurons. An alien intelligence could be structured like a forest, a weather system, or an ocean current. We wouldn't look for intelligence there because nothing about those systems matches our cognitive template.
2. Temporal blindness. We assume intelligence operates on timescales similar to ours -- seconds to minutes for thoughts, years to centuries for cultural evolution. A mycelium-scale intelligence might think in centuries. A photon-processing intelligence might think in nanoseconds. Both would be invisible to us -- one too slow to detect, the other too fast to catch.
3. Medium blindness. We assume intelligence manipulates symbols -- language, mathematics, signals. But the systems reviewed here "compute" through chemical gradients (plants), pheromone trails (ants), calcium waves (slime molds), and mycorrhizal carbon exchange (forests). An alien intelligence might process information through media we don't even categorize as informational -- gravitational waves, magnetic field fluctuations, patterns in stellar radiation.
4. Individuality blindness. We assume intelligence belongs to individuals -- discrete entities with boundaries. But a bee swarm is smarter than any bee, an ant colony is more strategically capable than any ant, and a forest network manages resources no single tree could manage. An alien "individual" might be a planetary ecosystem, a stellar atmosphere, or a galactic-scale pattern of interacting systems. The concept of "an alien" -- singular, bounded, addressable -- might be as inapplicable as asking "which neuron in your brain is the one that's you?"
5. Consciousness blindness. We assume that intelligence comes paired with consciousness -- that if something thinks, there's "somebody home." Watts's Blindsight thesis, supported by Dennett's eliminativism, suggests intelligence and consciousness are separable. We might encounter extraordinary intelligence with no interiority whatsoever. Or, if panpsychism is correct, consciousness might be everywhere but in forms so alien to our experience that Nagel's bat problem becomes not a philosophical puzzle but a permanent epistemic barrier.
The deepest implication of this research is that the problem of alien intelligence is not a problem of distance, technology, or rarity. It is a problem of recognition. Even if intelligent alien life were abundant in the universe, even if it existed in our own solar system, even if it were right in front of us, we might fail to recognize it because our concept of "intelligence" is parochially derived from a single example -- the vertebrate brain.
The octopus, the ant colony, the slime mold, and the forest network demonstrate that Earth alone has produced intelligence in architectures we barely understand and only recently began to take seriously. The design space for intelligence across an entire universe -- with its trillions of planets, billions of years of evolution, and physical conditions ranging from neutron star surfaces to gas giant atmospheres to interstellar dust clouds -- is vastly larger than we have imagined.
The question is not "Are we alone?" The question is: "Would we know if we weren't?"