
From Filesystems to the Web to Value-Embedded Collective Memory

Jonathan Saragossi
Collective Memory
jonathan@collectivememory.ai

Abstract

We identify three paradigms of information organization, each defined by its atomic unit, its linking mechanism, and its method of determining relevance. The filesystem (1960s–) organizes files in hierarchical directories, linked by paths, with relevance determined by the user’s own schema. The web (1990s–) organizes pages in a flat network, linked by hyperlinks, with relevance determined by external appraisal (PageRank, algorithmic curation). We propose that a third paradigm — collective memory — is now emerging: information organized as memories, geotemporally grounded media artifacts linked by value bonds (economic stakes) and semantic similarities (vector embeddings), with relevance determined by intrinsic valuation through attention markets and AI-augmented retrieval. Unlike the web, where value is assigned externally by platform algorithms, the memory paradigm embeds value directly into the content layer through a bonding-curve staking mechanism. We argue this architecture is structurally more resistant to manipulation, eliminates the need for a trusted centralized appraiser, and — through per-query attention augmentation — enables genuine epistemic pluralism over a shared dataset. We ground these claims in Wittgenstein’s account of meaning-as-use and Nietzsche’s perspectivism, and situate them within the longer tradition of decentralized information architecture running from Vannevar Bush’s memex through Ted Nelson’s Xanadu. We conclude by addressing unresolved tensions: plutocratic bias, cold-start problems, temporal decay, and the political economy of attention commodification.

Keywords: information architecture, attention economy, collective memory, decentralized curation, bonding curves, vector search, retrieval-augmented generation, epistemic pluralism, perspectivism, Wittgenstein

1. Introduction

Every era of computing has been defined not by its hardware but by how it answers three foundational questions about information:
  • What is the unit? What constitutes a single, addressable piece of information?
  • What connects units? How are individual pieces related to one another?
  • Who decides what matters? How is relevance determined, and by whom?
The answers to these questions define the epistemic character of an information system — what can be known, by whom, and under what conditions. They are not merely technical specifications. They are, in a precise sense, decisions about the nature of knowledge: what counts as a fact, what counts as a connection, and what counts as importance.

We argue that computing has passed through two paradigms and is entering a third. The filesystem, formalized in Multics and Unix in the 1960s and 1970s, established the first answers: the unit is a file, files are connected by hierarchical paths, and relevance is determined by the user’s own organizational schema. The web, proposed by Berners-Lee in 1989 and realized through HTTP and HTML, established the second: the unit is a page, pages are connected by hyperlinks, and relevance is determined by external appraisal — most consequentially, by Google’s PageRank algorithm and its successors.

We propose that a third paradigm is now emerging, one in which the unit is a memory: a geotemporally grounded, media-rich artifact. Memories are connected not by hierarchical containment or authorial hyperlinks, but by two novel mechanisms operating simultaneously — economic bonds (financial stakes) and semantic bonds (vector embeddings). Relevance is determined not by an external algorithm but by the aggregate economic commitment of the community, encoded directly within the content itself.

This paper does three things. First, it characterizes the three paradigms in detail, identifying the structural properties that distinguish each and explaining why the transition from web to memory represents a genuine paradigm shift rather than an incremental improvement. Second, it develops the philosophical case for the memory paradigm, drawing on Wittgenstein’s account of meaning and Nietzsche’s perspectivism to argue that the memory architecture is the first information system that takes both seriously as engineering constraints. Third, it identifies the unresolved problems and open questions that the memory paradigm inherits and introduces.

We note at the outset that this paper is not simply an academic exercise. It describes an architecture currently being implemented in the Collective Memory system, and its author is the system’s designer. We have attempted to distinguish carefully between empirical claims about how the system works, theoretical claims about the significance of that architecture, and normative claims about why it matters. Readers should weigh the last category with appropriate skepticism.

2. Three Paradigms of Information Organization

2.1 Before the Paradigms: The Memex and Its Unrealized Promise

The intellectual prehistory of information architecture begins not with filesystems but with Vannevar Bush’s 1945 essay “As We May Think,” in which he proposed the memex: a hypothetical device in which an individual stores all their books, records, and communications, accessible with exceeding speed and flexibility. The memex was to be organized not hierarchically but through associative trails — links between documents that mimicked the associative leaps of human cognition. Bush’s vision anticipated both the web’s linking mechanism and the memory paradigm’s emphasis on personal, grounded information artifacts. What neither Bush nor the web ultimately realized was the coupling of value to content — the idea that an artifact might carry within itself a signal of its importance to a community. This is the specific contribution of the memory paradigm.

Ted Nelson’s Xanadu project (1960–) pushed further, proposing bidirectional links, transclusion, and micropayment systems. The micropayment element — the idea that reading content should entail an economic transaction with its creator — is structurally adjacent to the bonding-curve staking of the memory paradigm, though the mechanisms differ fundamentally. Nelson’s vision remained largely unrealized; we argue that the technical conditions for realizing something like it now exist.

2.2 The Filesystem: Sovereignty and Enclosure

The filesystem, formalized in Multics (1965) and Unix (1971), introduced the foundational computational metaphor: information is a file, files live in directories, and directories nest hierarchically. The path /home/user/photos/berlin/2024/protest.jpg is both an address and a description — it encodes the owner, the subject, the approximate time, and the kind of content. The filesystem is, in this sense, a theory of categories made operational.

Relevance in a filesystem is determined by the user’s own organizational schema. There is no external authority on what matters. Search is local: grep, find, filename matching. The filesystem is epistemically sovereign — each user’s directory structure is a private ontology, a personal theory of what categories exist and how they relate. This sovereignty is both the paradigm’s strength and its fundamental limitation.

The limitation is structural: no file knows about any other file unless someone explicitly creates that knowledge. A photograph of a civil rights demonstration in one archive has no connection to a photograph of the same event in another. There is no shared semantic layer. Knowledge is not merely private — it is, in a precise sense, imprisoned in the ontology of whoever created the directory structure.

The philosophical model implicit in the filesystem is the picture of meaning Wittgenstein advanced in the Tractatus Logico-Philosophicus (1921): the idea that meaning is a fixed relationship between a linguistic expression and a fact in the world, a relationship that can be established by the individual speaker. The directory structure is, in this sense, a private language — a naming system whose logic is fully transparent only to its author.

2.3 The Web: Public Linking and the Problem of the External Appraiser

The web, proposed by Berners-Lee (1989) and realized through HTTP and HTML, replaced hierarchical paths with hyperlinks — associative, non-hierarchical references from any page to any other page. The atomic unit became the page, and the linking mechanism became the hyperlink. This was a profound structural shift. For the first time, information units could reference each other across organizational and institutional boundaries.

But the web immediately introduced a new problem: with billions of pages and no hierarchy, who decides what matters? The filesystem’s answer — the user decides — does not scale to a global information commons. Some mechanism for surfacing relevance across the entire graph was needed.

The answer was the external appraiser. Google’s PageRank (Brin and Page, 1998) treated hyperlinks as votes: a page linked to by many pages, especially by pages that are themselves highly linked, is more relevant. This was an elegant solution, but it introduced a structural dependency with far-reaching consequences. Relevance was no longer a property of the content itself, nor of the user’s own judgment, but of the graph of references around the content, as computed by a centralized actor with its own interests and incentives.

The consequences of this architecture have been extensively documented and are now well understood. We summarize them here because understanding them precisely is necessary for understanding what the memory paradigm proposes to solve — and what it does not.
  • Manipulation: The gap between the appraisal mechanism (links, keywords, engagement metrics) and genuine value creates an attack surface. Search engine optimization, link farms, content mills, and bot-driven engagement inflation all exploit this gap. The cost of manipulation is low relative to the benefit, because the signal (links, clicks) is cheap to fake.
  • Centralization of epistemic authority: A small number of platforms — Google, Facebook, Twitter/X, TikTok — become the de facto arbiters of what information reaches whom. Their ranking algorithms are the practical epistemology of the internet for billions of people, yet their internal workings are opaque, proprietary, and subject to change without notice or explanation.
  • Homogenization: Algorithmic optimization for engagement tends to converge on a narrow band of content types. Pariser’s (2011) filter bubble analysis documented this tendency; subsequent research has both confirmed and complicated the picture. The important structural point is that homogenization is an architectural tendency, not a correctable bug.
  • Decoupling of value from content: A page’s discoverability — its effective value to the information ecosystem — lives not in the page itself but in Google’s index, Facebook’s social graph, or Twitter’s recommendation engine. Remove the platform, and the content’s epistemic standing evaporates. This is a fragility of the first order.
The philosophical model implicit in the web’s appraisal architecture is what Nietzsche might call the pretense of a God’s eye view — the claim of a perspective from nowhere, a ranking that is not one ranking among many but The Ranking. We return to this point in Section 4.

2.4 The Memory Paradigm: Intrinsic Value and Semantic Bonds

We propose that a third paradigm is emerging, one in which three structural innovations combine to address the core limitations of both previous paradigms while introducing new ones.

First, the unit is a memory: not a file (which is format-defined) or a page (which is link-defined and typically document-structured), but a geotemporally grounded media artifact. A memory has an inherent where and when — it is indexical, pointing to a specific moment and location in the world. It is not merely a document about something; it is a record of a particular instance of the world. It also carries AI-generated semantic metadata (description, tags) and a vector embedding that positions it in a continuous high-dimensional semantic space.

Second, links are value bonds and semantic similarities. Memories are connected not by explicit authorial links or hierarchical containment, but by two mechanisms operating simultaneously. Economic bonds: when a user stakes tokens on a memory, they create a value link — a financial claim that connects their identity and resources to that memory. The staking graph is a value network analogous to the web’s link network, but with real economic commitment behind each edge. Semantic bonds: vector embeddings create implicit connections between memories based on content similarity. These connections are not authored by anyone — they emerge from the geometry of meaning in a high-dimensional space.

Third, relevance is intrinsic. A memory’s value is not determined by an external algorithm but by the aggregate economic commitment of the community, encoded directly in the memory’s own state. The staked amount — the total attention tokens committed to a memory — is a property of the memory itself. Remove every search engine, every recommendation algorithm, every platform: the memory still carries its value. This portability of value is, we argue, the most significant architectural innovation of the memory paradigm.

3. The Architecture of Intrinsic Value

3.1 Bonding Curves as Epistemic Incentive Structures

In the memory paradigm, value is not assigned to content by an external authority. It is discovered through a market mechanism: a bonding curve that governs the price of staking attention on a memory. The price of staking follows a sublinear power function:

P(v) = P₀ + α · v^β

where P₀ is the base price, α is the curve coefficient, v is the total value locked (principal plus revenue reserves), and β < 1 is a sublinear exponent (empirically set to approximately 0.6 in current implementations).

The sublinearity of β < 1 is epistemically crucial, and its importance extends beyond pricing mechanics. It means that early stakes are cheap and late stakes are expensive. This creates a discovery incentive: the economic reward for identifying a valuable memory before the community has recognized its value is structurally greater than the reward for confirming an already-recognized judgment. The bonding curve is not merely a pricing mechanism — it is an incentive structure that rewards original judgment over conformity.

Compare this with PageRank’s incentive structure. PageRank rewards linking to pages that are already authoritative — a page linked to by high-PageRank pages passes more value. This creates a conservative, self-reinforcing dynamic: the already-authoritative becomes more authoritative, and the unknown remains unknown. The bonding curve inverts this: the already-staked is expensive to stake further, and the unstaked is cheap. The former incentivizes conformity; the latter incentivizes exploration and contrarianism.

The costliness of staking parallels Zahavi’s (1975) handicap principle in evolutionary biology and Spence’s (1973) signaling theory in economics, and the parallel is not incidental. The epistemic power of the staking signal derives precisely from its costliness: a signal that is cheap to produce is easily faked; a signal that requires committing actual capital provides genuine information about the staker’s beliefs.
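
To make the pricing dynamics concrete, the following is a minimal sketch of the curve and of the cost of adding stake at different points along it. The parameter values P0 and ALPHA are illustrative assumptions; the paper specifies only the functional form and β ≈ 0.6.

```python
# Minimal sketch of the sublinear bonding curve described above.
# P0 and ALPHA are illustrative assumptions; only the form P(v) = P0 + alpha * v**beta
# and beta ~ 0.6 come from the text.

P0 = 1.0      # base price
ALPHA = 0.05  # curve coefficient
BETA = 0.6    # sublinear exponent (< 1)

def marginal_price(v: float) -> float:
    """Price of the next unit of stake when total value locked is v."""
    return P0 + ALPHA * v ** BETA

def cost_to_stake(v: float, delta: float) -> float:
    """Cost of adding `delta` stake on top of existing value v,
    i.e. the integral of marginal_price from v to v + delta."""
    def antiderivative(x: float) -> float:
        return P0 * x + ALPHA / (1 + BETA) * x ** (1 + BETA)
    return antiderivative(v + delta) - antiderivative(v)

# Early stakes are cheap, late stakes are expensive:
print(cost_to_stake(v=0, delta=100))       # ~150 at the discovery stage
print(cost_to_stake(v=10_000, delta=100))  # ~1,360 after heavy prior staking
```

Under these illustrative parameters, the same 100 units of stake cost roughly an order of magnitude more once a memory is already heavily staked, which is the discovery incentive described above.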

3.2 Value Portability and the Absence of the Appraiser

In the web paradigm, content and valuation are architecturally separated. A web page exists at a URL; its relevance exists in Google’s index. The page does not know its own PageRank. If Google disappears, the content survives but its discoverability — its epistemic standing within the information ecosystem — evaporates entirely. In the memory paradigm, the valuation is part of the content’s own state. The staked amount, the principal reserve, the revenue reserve, and the ownership structure are all properties of the memory artifact itself, stored alongside the media URL and AI-generated metadata. No intermediary is needed to determine a memory’s worth — any search system can read the staked amount and use it as a relevance signal. The value is portable. This architectural difference has profound implications for resilience and censorship resistance. A regime that wishes to suppress a document in the web paradigm can achieve this by pressuring Google to deindex it — the content survives but becomes effectively invisible. In the memory paradigm, the memory carries its community’s valuation with it, and this valuation is a distributed economic fact that cannot be simply zeroed out by deindexing. The suppression of memory becomes genuinely harder.
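
As a concrete illustration of value travelling with the artifact, the sketch below bundles the economic state into the memory object itself. The field names are hypothetical, chosen for exposition rather than taken from the system’s actual schema.

```python
# Illustrative sketch of a memory artifact whose valuation travels with it.
# Field names are hypothetical, chosen for exposition; they are not the
# system's actual schema.
from dataclasses import dataclass, field

@dataclass
class Memory:
    media_url: str                  # the underlying media artifact
    latitude: float                 # geotemporal grounding: where
    longitude: float
    captured_at: str                # geotemporal grounding: when (ISO 8601)
    description: str                # AI-generated semantic metadata
    tags: list[str]
    embedding: list[float]          # position in the semantic vector space
    staked_amount: float = 0.0      # total attention tokens committed
    principal_reserve: float = 0.0  # redeemable principal
    revenue_reserve: float = 0.0    # accrued fees and revenue
    stakers: dict[str, float] = field(default_factory=dict)  # ownership structure

def relevance_signal(memory: Memory) -> float:
    """Any search system can read the valuation directly off the artifact;
    no external index or appraiser is required."""
    return memory.staked_amount
```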

3.3 Manipulation Resistance: A Formal Analysis

Web-era manipulation exploits the architectural separation of content from value. SEO manipulates the signals (links, keywords, anchor text) that external appraisers use without changing the content itself. Social media manipulation inflates engagement metrics that feed into recommendation algorithms. In both cases, the cost of the manipulation is low relative to the potential benefit, because the signal is cheap to produce.

In the memory paradigm, manipulation requires genuine economic commitment. We identify four structural properties that make this self-limiting:
  • Cost escalation: the sublinear bonding curve means that each additional unit of stake costs more than the last. Inflating a memory’s value from a low to a high level requires capital investment that grows superlinearly with the desired level of inflation.
  • Capital lock-up: staked tokens are committed to the memory’s reserve. The attacker’s capital is trapped until they redeem, and redemption at an artificially inflated price returns less than invested if no other participants validate the stake by staking further.
  • Creator fee leakage: a percentage of every stake accrues to the memory’s creator as an irrecoverable fee. Manipulating a memory one did not create imposes a continuous cost proportional to the size of the manipulation.
  • Transparent stake distribution: other participants can observe the staking pattern. A memory with a single massive stake but no subsequent community interest is a legible signal of manipulation, not genuine value. AI-augmented search can weight staker count and stake distribution alongside raw staked amounts, making concentrated manipulation more easily identifiable.
| Signal Type | Mechanism | Cost to Fake | Epistemic Quality |
|---|---|---|---|
| Hyperlink (web) | Authorial reference | Low (SEO, link farms) | Weak — easily gamed |
| Engagement metric (web) | Click / dwell / share | Very low (bots) | Very weak — behavioral, not deliberate |
| Economic stake (memory) | Token lock-up on content | High (capital at risk) | Strong — costly, deliberate, auditable |
| Semantic embedding (memory) | Vector geometry | None — structural, not gamed | Structural — encodes meaning, not intent |
We do not claim that the memory paradigm is manipulation-proof. Coordinated attacks by well-capitalized actors remain possible. What we claim is that the cost of achieving a given level of apparent relevance is substantially higher than in the web paradigm, and that the manipulation itself leaves an auditable economic trace.
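
To give the cost-escalation and fee-leakage properties a rough numerical shape, the sketch below reuses the curve from the earlier sketch. The parameter values and the 5% creator fee are assumptions for exposition; the paper states only the curve’s form and that a fee accrues to the creator.

```python
# Rough illustration of the cost-escalation and fee-leakage properties above.
# Parameter values and the 5% creator fee are assumptions for exposition.

P0, ALPHA, BETA = 1.0, 0.05, 0.6
CREATOR_FEE = 0.05

def capital_required(v_from: float, v_to: float) -> float:
    """Integral of P(v) = P0 + ALPHA * v**BETA from v_from to v_to."""
    def antiderivative(x: float) -> float:
        return P0 * x + ALPHA / (1 + BETA) * x ** (1 + BETA)
    return antiderivative(v_to) - antiderivative(v_from)

def attack_cost(v_from: float, v_to: float) -> tuple[float, float]:
    """Capital an attacker must lock up to inflate total value locked from
    v_from to v_to, and the share irrecoverably lost to creator fees."""
    capital = capital_required(v_from, v_to)
    return capital, capital * CREATOR_FEE

print(attack_cost(100, 1_000))    # modest inflation
print(attack_cost(100, 100_000))  # capital grows superlinearly (exponent 1 + BETA)
```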

4. Epistemic Pluralism Through Attention Augmentation

4.1 The Problem of the Singular Ranking

Traditional search engines make an implicit epistemological commitment: that for any query, there exists a single correct ranking of results. Google’s results for “climate change” are the same whether you are a glaciologist, a fossil fuel lobbyist, a smallholder farmer in Bangladesh, or a politician seeking reelection. The algorithm presents one ordering as the answer — a claim to objectivity that is, on reflection, a philosophical position of considerable audacity.

This position has come under sustained critique from the philosophy of science (Longino, 1990; Haraway, 1988) and from critical algorithm studies (Noble, 2018; Benjamin, 2019). The core objection is not that objectivity is impossible or undesirable, but that the pretense of a view from nowhere conceals the particular perspective (institutional, commercial, cultural) from which the algorithm was designed and the data from which it learned. A ranking that presents itself as objective is more dangerous than one that presents itself as a perspective, because the former forecloses the question of whose interests it serves.

4.2 Wittgenstein: Meaning as Use and the Language Game of Relevance

Wittgenstein’s later philosophy, particularly the Philosophical Investigations (1953), provides a rigorous framework for understanding why singular rankings are epistemically impoverished. His central claim is that the meaning of an expression is not a fixed relationship between that expression and a feature of the world, but is constituted by the practices — the language games — in which it is embedded. “The meaning of a word is its use in the language” (PI §43). Applied to information systems, the relevance of a memory is not a fixed property of that memory or of its relationship to a query. It is constituted by the use context: who is searching, what they intend to do with the results, and what community of practice they belong to. A photograph of a street market in Lagos has different relevance for a food anthropologist, an urban economist, an architect studying informal settlements, and someone researching a travel itinerary. These are not merely different preferences over the same ranking — they are different language games in which the same content participates differently.

Wittgenstein’s concept of family resemblance (PI §67) is also directly instantiated in vector-based semantic search. When the memory paradigm finds memories similar to a query, the similarity is not defined by a single shared property (as in classical categorization) but by a network of overlapping and criss-crossing features — exactly the “complicated network of similarities overlapping and criss-crossing: sometimes overall similarities, sometimes similarities of detail” that Wittgenstein describes. The high-dimensional embedding space is, in a precise mathematical sense, a space of family resemblances: proximity in the space reflects not identity of a single feature but overlap across many features simultaneously.

4.3 Nietzsche: Perspectivism as Engineering Constraint

Nietzsche’s perspectivism — the view that there are no facts, only interpretations, and that every apprehension of the world is from a particular perspective with particular interests — is frequently dismissed as relativism or nihilism. In the context of information architecture, it is neither. It is a precise and actionable design principle.
“There is only a perspective seeing, only a perspective ‘knowing’; and the more affects we allow to speak about one thing, the more eyes, different eyes, we can use to observe one thing, the more complete will our ‘concept’ of this thing, our ‘objectivity,’ be.” — Nietzsche, On the Genealogy of Morals, III.12
Nietzsche’s claim is not that all perspectives are equally valid, but that richer knowledge comes from holding multiple perspectives simultaneously rather than collapsing them into a single authoritative view. Genuine objectivity, on this account, is not the elimination of perspective but the accumulation of many perspectives. The web paradigm, with its single authoritative ranking, implicitly adopts what Nietzsche would call the dogmatist’s error: the belief that there is a single correct view, and that the task of epistemology is to identify and adopt it. The memory paradigm proposes something structurally different: a shared dataset over which multiple perspectives — each backed by economic commitment — can be held simultaneously and surfaced differently to different participants.

4.4 The Augmentation Mechanism: Perspectivism Implemented as Protocol

The memory paradigm implements Nietzsche’s perspectivism as a concrete technical protocol through what we call attention augmentation. Each search result carries multiple value layers:
| Value Layer | Definition | Epistemic Role |
|---|---|---|
| Public Value | Total ATTN staked by all users | The community’s collective appraisal |
| Local Value | Relevance weight for this specific query | The query context |
| Augmented Value | Personal adjustment from the searcher’s own staking history | The individual’s interpreted perspective |
| Emergent Value | Synthesis of all three via attention market resolution | The arbitraged, pluralistic result |
The emergent value is not a weighted average of the other three. It is the output of an attention market that resolves the tension between collective valuation (what the community has decided), query relevance (what this specific search is about), and individual perspective (what this particular searcher has committed to believing is important). The resolution is economic, transparent, and auditable — not opaque, algorithmic, and proprietary.

The consequence is that two users searching for the same term can see genuinely different orderings of the same memories, not because an algorithm is manipulating them for engagement (as in filter bubbles), but because their own economic commitments — their own acts of interpretation — weight the results differently. This difference is not hidden from the user; it is the point.

This is Nietzsche’s perspectivism implemented as a protocol: the dataset is shared, the memories are the same, but the view through the dataset is shaped by the viewer’s own expressed commitments. And because those commitments are economic rather than merely behavioral (clicks, dwell time), they are deliberate, costly, and therefore more likely to reflect genuine conviction rather than passive consumption patterns.
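
The sketch below shows only the perspective dependence of the ordering, not the resolution mechanism itself: the paper describes emergent value as the output of an attention market, whereas the score used here is a deliberately simple placeholder combining assumed public values, query relevance, and the searcher’s own stakes.

```python
# Illustrative only: the dataset is shared, but each searcher's staking history
# re-weights the ordering. The scoring rule is a placeholder, not the
# attention-market resolution the paper describes.

memories = [
    {"id": "m1", "public_value": 500.0, "local_relevance": 0.82},
    {"id": "m2", "public_value": 120.0, "local_relevance": 0.91},
    {"id": "m3", "public_value": 300.0, "local_relevance": 0.77},
]

def ordering(user_stakes: dict[str, float]) -> list[str]:
    """Order the shared memories for one searcher, given their own stakes."""
    def score(m: dict) -> float:
        augmented = user_stakes.get(m["id"], 0.0)  # the searcher's own commitments
        return m["local_relevance"] * (m["public_value"] + augmented)
    return [m["id"] for m in sorted(memories, key=score, reverse=True)]

print(ordering({"m3": 400.0}))  # ['m3', 'm1', 'm2'] — heavy personal stake on m3
print(ordering({"m2": 50.0}))   # ['m1', 'm3', 'm2'] — same dataset, different view
```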

4.5 Diversity as Structural Property, Not Policy Intervention

In the web paradigm, diversity of viewpoint is a moderation challenge — something platforms must engineer against their own algorithmic tendencies toward homogenization, typically through explicit editorial policy or regulatory pressure. Diversity is not what the architecture wants; it is what external pressure occasionally achieves. In the memory paradigm, diversity is a structural property. The bonding curve rewards contrarian staking: because early stakes are cheap, there is an economic incentive to find and commit to undervalued memories — content that the majority has overlooked. This structurally promotes epistemic diversity: the system pays you to disagree with the consensus, if your disagreement proves prescient. Creator incentives reward diverse content: the creator fee means that observers in underserved regions, citizen journalists, and practitioners in niche domains have a direct financial incentive to capture and share memories. The more unique and underrepresented the content, the higher its potential to attract contrarian staking.

5. The Semantic Layer: AI as Infrastructure, Not Authority

5.1 The Role Distinction

In the web paradigm, AI (in the form of recommendation algorithms, ranking systems, and increasingly large language models) functions as an epistemic authority: it decides what you see, in what order, and with what framing. The algorithm’s judgment substitutes for the user’s. Even when the system is personalized, the personalization is an algorithmic intervention, not an expression of the user’s own commitments. In the memory paradigm, AI plays a fundamentally different role. It functions as semantic infrastructure: it creates the vector space in which human-driven value curation operates, without itself determining which content is valuable. The pipeline is:
  1. A memory is uploaded — a human act of observation and documentation
  2. A multimodal AI model analyzes the visual content and generates a textual description and semantic tags — AI enrichment
  3. The text is embedded into a high-dimensional vector space — AI infrastructure
  4. The vector is stored alongside the memory’s metadata in a distributed index — semantic indexing
  5. Users stake tokens on memories — human acts of valuation and interpretation
Steps two through four are AI-driven but value-neutral. The embedding model does not judge the memory’s worth — it positions it in a semantic space where similarity can be measured. The valuation is entirely human-driven. This separation is crucial to the paradigm’s epistemic claims.
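
The following sketch traces the AI-driven steps of this pipeline. The model calls are placeholders, since the paper does not name the multimodal or embedding models used; the point is only that enrichment and indexing are value-neutral, while the staked amount starts at zero and is set solely by human staking.

```python
# Schematic sketch of the pipeline's AI-driven steps (2–4). Model calls are
# placeholders for exposition, not the system's actual implementation.

def describe_media(media_url: str) -> tuple[str, list[str]]:
    """Placeholder for a multimodal model producing a description and tags."""
    return "street market at dusk", ["market", "street", "evening"]

def embed(text: str) -> list[float]:
    """Placeholder for a text-embedding model; real embeddings are high-dimensional."""
    return [0.12, -0.40, 0.33]

index: list[dict] = []  # stands in for the distributed vector index

def ingest(media_url: str, lat: float, lon: float, captured_at: str) -> dict:
    description, tags = describe_media(media_url)        # step 2: AI enrichment
    vector = embed(description + " " + " ".join(tags))   # step 3: AI infrastructure
    memory = {
        "media_url": media_url, "lat": lat, "lon": lon, "captured_at": captured_at,
        "description": description, "tags": tags, "embedding": vector,
        "staked_amount": 0.0,  # step 5 (valuation) remains entirely human-driven
    }
    index.append(memory)                                  # step 4: semantic indexing
    return memory

ingest("https://example.com/lagos-market.jpg", 6.45, 3.39, "2024-05-17T18:02:00Z")
```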

5.2 Retrieval-Augmented Generation as Transparent Synthesis

When a user searches, the system performs Retrieval-Augmented Generation (Lewis et al., 2020): the query is expanded and embedded, nearest neighbors are retrieved from the vector index, results are enriched with economic data from the distributed store, results are sorted by staked amount and semantic similarity, and a natural language summary is generated from the top results. The AI-generated summary is grounded in — and cites — the specific memories that informed it. This is not generative AI producing content from its training data; it is generative AI synthesizing from a curated, value-weighted, community-validated dataset. The memories used field in the response creates an auditable chain: the user can verify which memories the AI drew upon, inspect those memories, check their staking history, and form their own judgment about the reliability of the synthesis. This is a fundamentally more transparent epistemology than web search, where the relationship between a query and its results is mediated by a ranking function that is opaque, proprietary, and contested. In the memory paradigm, the AI is a synthesis tool, not a ranking authority. It explains what the community has valued; it does not determine what the community should value.
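
A schematic version of this retrieval flow is sketched below, operating over memories shaped like those produced by the ingestion sketch above. The embedding call, the ranking rule, and the summary step are placeholders for exposition; in particular, the combination of staked amount and similarity is illustrative, and the summary stands in for LLM synthesis grounded in the retrieved memories.

```python
# Schematic sketch of the retrieval-augmented search flow described above.
import math

def embed(text: str) -> list[float]:
    return [0.2, 0.1, -0.3]  # placeholder query embedding

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, index: list[dict], top_k: int = 5) -> dict:
    qvec = embed(query)                                          # expand and embed the query
    scored = [(cosine(qvec, m["embedding"]), m) for m in index]  # nearest neighbours
    # Placeholder ranking: combine economic and semantic signals.
    ranked = sorted(scored,
                    key=lambda sm: (1.0 + sm[1]["staked_amount"]) * max(sm[0], 0.0),
                    reverse=True)
    top = [m for _, m in ranked[:top_k]]
    summary = " ".join(m["description"] for m in top)            # stands in for LLM synthesis
    return {"summary": summary,
            "memories_used": [m["media_url"] for m in top]}      # auditable grounding
```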

5.3 The Limits of AI Enrichment

The system’s semantic infrastructure is only as reliable as the AI models that generate it. If a multimodal model mischaracterizes a photograph — misidentifies a location, misreads the emotional tenor of a scene, fails to recognize culturally specific content — the memory will be poorly positioned in the semantic space and may be unfindable by relevant queries. This is not a marginal concern: AI models inherit the biases of their training data, and those biases systematically disadvantage content from underrepresented communities. The current implementation has no human-in-the-loop correction for AI-generated metadata at the individual memory level. This is an unresolved problem. We return to it in Section 7.

6. Related Work

The memory paradigm inherits from and responds to several bodies of prior work that it is important to acknowledge.

The semantic web project (Berners-Lee et al., 2001) proposed to extend the web with machine-readable metadata that would make the relationships between documents explicit and computable. The vision was similar to the memory paradigm’s use of semantic embeddings, but the mechanism was different: the semantic web relied on explicit ontological markup by human authors, while the memory paradigm uses implicit geometric positioning by AI embedding models. The semantic web largely failed to achieve adoption because the cost of producing explicit metadata was prohibitive. The memory paradigm sidesteps this by generating semantic metadata automatically.

Knowledge graph approaches (Vrandecic and Krotzsch, 2014) create explicit structured representations of entities and their relationships. These are powerful for well-defined domains but are expensive to construct and maintain, and they presuppose agreement on ontological categories that may be genuinely contested. The memory paradigm makes no such presupposition: meaning is distributed across the geometry of the embedding space, not encoded in explicit categorical structures.

Blockchain-based content systems (Filecoin, Arweave, and others) address the portability problem — ensuring that content persists independently of any single platform. The memory paradigm incorporates similar portability concerns but adds the value-embedding mechanism, which these systems do not provide. Content stored on Arweave is permanent, but its epistemic standing within the information ecosystem still depends on external indexers.

Prediction markets (Hanson, 2008; Gnosis, 2015) use economic mechanisms to aggregate distributed judgments about uncertain events. The bonding-curve staking of the memory paradigm is structurally related to prediction markets: in both cases, agents commit capital based on their beliefs, and the aggregate price signal carries information beyond what any single agent knows. The key difference is that prediction markets resolve against a ground truth, while the memory paradigm does not: the staking market never resolves, because there is no single correct measure of a memory’s value.

Finally, the attention economy literature (Goldhaber, 1997; Davenport and Beck, 2001; Wu, 2016) provides the economic framing for what the memory paradigm is attempting to measure and allocate. The insight that attention is the scarce resource in an information-rich world is foundational to the paradigm’s design. What the paradigm adds is a mechanism for making attention commitments explicit, costly, and auditable — converting the implicit attention that any act of reading or viewing represents into an explicit economic signal.

7. Limitations and Open Questions

We do not claim the memory paradigm is without problems. Several remain unresolved, and we believe intellectual honesty requires naming them clearly.

7.1 Plutocratic Bias

If relevance is weighted by staked amount, wealthy participants have disproportionate influence over what the community sees. The sublinear bonding curve mitigates this — it creates diminishing returns on capital, since doubling one’s expenditure buys less than double the stake — but does not eliminate it. A sufficiently capitalized actor can still achieve outsized influence.

This is not merely a technical problem but a political one. Markets in epistemic authority have historically favored the powerful; the attention economy of the web has produced a world in which a small number of wealthy actors (and, worse, a small number of profitable algorithms) determine the information environment of billions. A system that partially democratizes this authority while retaining a structural advantage for capital is an improvement, but not a solution.

Quadratic voting mechanisms (Lalley and Weyl, 2018) have been proposed as a way of balancing economic signal strength against plutocratic capture: the influence of a stake scales as the square root of its size, rather than linearly. Whether this or similar mechanisms can be integrated with the bonding curve architecture without destroying the discovery incentive is an open research question.
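
The contrast between linear and square-root influence scaling can be shown with a small worked example. This is not part of the current design; the figures below are illustrative stakes chosen to make the effect visible.

```python
# Illustration of square-root influence scaling (Lalley and Weyl, 2018) versus
# linear weighting. Not part of the current design; stakes are illustrative.
import math

stakes = {"whale": 10_000.0, "small_1": 100.0, "small_2": 100.0, "small_3": 100.0}

def influence_shares(weights: dict[str, float]) -> dict[str, float]:
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

linear = influence_shares(stakes)
quadratic = influence_shares({name: math.sqrt(s) for name, s in stakes.items()})

print(round(linear["whale"], 2))     # 0.97 — near-total capture under linear weighting
print(round(quadratic["whale"], 2))  # 0.77 — reduced, but still dominant
```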

7.2 The Cold-Start Problem

New memories with zero stakes are invisible to value-weighted search. The system relies on semantic similarity as a fallback, but this creates a dead zone for content that is genuinely novel — memories that are unlike anything in the existing corpus. The discovery incentive (cheap early stakes) and creator rewards partially address this, but only for content that users actually encounter. Content that is never encountered cannot attract early stakes. This problem is structurally related to the cold-start problem in recommendation systems, and the solutions are similarly unsatisfying: editorial curation, random sampling, new-content boosts. Each of these reintroduces a degree of centralization that the memory paradigm is designed to avoid.

7.3 Temporal Decay

The bonding curve has no time component. A memory staked heavily several years ago retains its value indefinitely, even if the information is outdated or the consensus about its importance has shifted. This creates a form of epistemic path dependency: the information landscape of the memory paradigm reflects the judgments of the past, with no mechanism for revision. Whether and how temporal decay should be introduced is a genuine design dilemma. A memory paradigm without decay is a permanent record, which has archival value and resistance to revisionism. A memory paradigm with decay is more responsive to changing consensus, but introduces an implicit pressure toward recency that may disadvantage historically significant but temporally distant content.
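
For concreteness, one possible shape such a decay could take is sketched below. This is purely illustrative: the current design deliberately has no time component, and the half-life value is an arbitrary assumption rather than a proposal.

```python
# One possible shape a temporal-decay weighting could take. Purely illustrative;
# the current design has no time component and the half-life is an assumption.

def decayed_weight(staked_amount: float, age_days: float,
                   half_life_days: float = 365.0) -> float:
    """Exponentially discount old stakes: the weight halves every half_life_days."""
    return staked_amount * 0.5 ** (age_days / half_life_days)

print(decayed_weight(1_000.0, age_days=0))    # 1000.0 — fresh stake, full weight
print(decayed_weight(1_000.0, age_days=730))  # 250.0 — two half-lives later
```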

7.4 AI Enrichment Fidelity and Bias

The system relies on AI models to generate the semantic metadata that positions memories in the embedding space. These models inherit the biases of their training data. Research has consistently documented that commercial vision models perform worse on images of people with darker skin, that geographic coverage of training data is heavily skewed toward the Global North, and that culturally specific knowledge is often misrepresented or absent. The consequence for the memory paradigm is that content from underrepresented communities may be systematically mislabeled, poorly embedded, and therefore unfindable by relevant queries — not because the community has not valued it, but because the AI infrastructure has failed to position it correctly in the semantic space. This is a form of structural discrimination that cannot be addressed by economic incentives alone.

7.5 Political Economy of Attention Commodification

A Marxian critique would observe that the memory paradigm financializes yet another dimension of human experience — the act of attending to and valuing what one has witnessed. The creation of memories becomes a form of labor, and the staking of attention becomes a form of capital accumulation. The system creates a new market in attention and, with it, new forms of exploitation: content creators compete for staking, platforms take fees, and the infrastructure operators extract value from every transaction.

A Habermasian critique would argue that economic signaling is a systematically distorted form of communication — that the colonization of the lifeworld by systems logic (in this case, by market mechanisms) impoverishes rather than enriches collective understanding. Genuine epistemic pluralism, on a Habermasian account, requires undistorted communicative action, not market-mediated judgment.

We acknowledge the force of both critiques. Our partial response is that the alternative currently on offer — opaque algorithmic curation by profit-driven platforms, with no mechanism for individual economic participation — is not a form of undistorted communication either. The choice is not between market mechanisms and Habermasian discourse; it is between transparent, participatory market mechanisms and opaque, extractive algorithmic ones. This is a genuine improvement, even if it is not a final solution.

8. Conclusion

The filesystem gave us private organization — a sovereign, local system in which each user’s information was their own, organized by their own ontology, inaccessible to others. The web gave us public linking — a global network in which any document could reference any other, mediated by external appraisers who accrued enormous power from their position as arbiters of relevance. The memory paradigm proposes a third possibility: public valuation — a system in which the worth of information is not decreed by a central authority but discovered by a market, embedded in the content itself, and refracted through the perspectives of its participants.

The key architectural innovation is the coupling of value to content. In the web paradigm, content and value are separated: a document exists at a URL, and its relevance exists in Google’s index. In the memory paradigm, value is a property of the content itself, carried in its own state, portable across any infrastructure that can read it. This coupling eliminates the structural dependency on a trusted centralized appraiser that has defined and distorted the information landscape since the 1990s.

The philosophical framing we have developed is not decorative. Wittgenstein’s account of meaning as use implies that relevance is not a fixed property of content but a function of the practices in which content is embedded — and this has a direct consequence for information system design: a system that returns the same ranking to all users for all purposes is not achieving objectivity; it is imposing a particular form of life as universal. Nietzsche’s perspectivism implies that richer understanding comes from the accumulation of many perspectives, not from the suppression of all but one — and this too has a direct consequence: a system that holds multiple economic commitments simultaneously, surfacing different views to different participants, is epistemically richer than one that collapses them into a single authoritative ranking.

We have also been honest about what the memory paradigm does not solve. It reduces but does not eliminate plutocratic bias. It creates economic incentives for discovery but cannot solve cold-start problems for genuinely novel content. It provides no mechanism for temporal decay. It inherits the biases of its AI infrastructure. And it commodifies attention in ways that invite serious political-economic critique.

These are not reasons to dismiss the paradigm. They are the research agenda that the paradigm generates. The transition from filesystem to web was not completed in a day; the problems introduced by the web — centralization, manipulation, homogenization — took decades to fully manifest and remain unresolved. The transition from web to memory will take at least as long. What we have attempted here is to characterize the transition clearly enough that the research agenda becomes visible. The evolution from files to pages to memories is not merely a change in technology. It is a change in epistemology — in what it means for information to have value, and who gets to decide.

References

Benjamin, R. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Cambridge: Polity Press.
Berners-Lee, T. (1989). Information Management: A Proposal. CERN Internal Memorandum.
Berners-Lee, T., Hendler, J., and Lassila, O. (2001). The Semantic Web. Scientific American, 284(5), 34–43.
Brin, S., and Page, L. (1998). The anatomy of a large-scale hypertextual web search engine. Computer Networks and ISDN Systems, 30(1–7), 107–117.
Bush, V. (1945). As We May Think. The Atlantic Monthly, 176(1), 101–108.
Davenport, T. H., and Beck, J. C. (2001). The Attention Economy: Understanding the New Currency of Business. Boston: Harvard Business School Press.
Goldhaber, M. H. (1997). The Attention Economy and the Net. First Monday, 2(4).
Hanson, R. (2008). Futarchy: Vote Values, But Bet Beliefs. Working paper, George Mason University.
Haraway, D. (1988). Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective. Feminist Studies, 14(3), 575–599.
Lalley, S., and Weyl, E. G. (2018). Quadratic Voting: How Mechanism Design Can Radicalize Democracy. AEA Papers and Proceedings, 108, 33–37.
Lewis, P., Perez, E., Piktus, A., et al. (2020). Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. Advances in Neural Information Processing Systems, 33.
Longino, H. (1990). Science as Social Knowledge: Values and Objectivity in Scientific Inquiry. Princeton: Princeton University Press.
Nelson, T. H. (1974). Computer Lib / Dream Machines. Self-published.
Nietzsche, F. (1887/1989). On the Genealogy of Morals. Trans. W. Kaufmann. New York: Vintage Books.
Nietzsche, F. (1901/1968). The Will to Power. Trans. W. Kaufmann and R. J. Hollingdale. New York: Vintage Books.
Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press.
Pariser, E. (2011). The Filter Bubble: What the Internet Is Hiding from You. New York: Penguin Press.
Spence, M. (1973). Job Market Signaling. The Quarterly Journal of Economics, 87(3), 355–374.
Vrandecic, D., and Krotzsch, M. (2014). Wikidata: A Free Collaborative Knowledge Base. Communications of the ACM, 57(10), 78–85.
Wittgenstein, L. (1921/1961). Tractatus Logico-Philosophicus. Trans. D. F. Pears and B. F. McGuinness. London: Routledge.
Wittgenstein, L. (1953). Philosophical Investigations. Trans. G. E. M. Anscombe. Oxford: Blackwell.
Wu, T. (2016). The Attention Merchants: The Epic Scramble to Get Inside Our Heads. New York: Knopf.
Zahavi, A. (1975). Mate selection — a selection for a handicap. Journal of Theoretical Biology, 53(1), 205–214.
Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs.