Social Networks and Custody Mismatch
technology

web of trust, decentralized social, custody mismatch, social graph, social network
Leon Acosta · Mar 11, 2026 · 4 min read

why the social graph should be infrastructure, and what it costs when it isn't

every platform you use to publish, communicate, or store data operates under a structural condition you rarely see named directly: custody mismatch. the entity holding your data is not the entity with a social obligation to preserve it; it unilaterally controls who sees your data, how long it persists, and who captures the value it generates

a relay operator, a cloud provider, an instance admin: each decides how long your content lives, which audiences reach it, and what it's worth to advertisers and model trainers. they set these terms unilaterally. the asymmetry is total: you produce the data, they hold the leverage

censorship, algorithmic feed control, shadow banning, data monetization without consent: these are all expressions of custody mismatch

align custody with social obligation and all three (preservation, visibility, and value capture) return to the person the data belongs to. that alignment is what this article is about

centralization and the limits of trust at scale

centralized platforms are efficient by design. a single entity controls storage, decides what gets surfaced and to whom, and captures the value the data generates. that consolidation is not a side effect — it is the business model. the more data flows through one point, the more leverage that point accumulates over everyone connected to it

the problem is that this model scales the infrastructure while leaving the people behind. Robin Dunbar's research identified a hard cognitive constraint on meaningful social relationships: roughly 150. this is the boundary within which trust, reciprocity, and accountability actually function. Watts and Strogatz showed that networks built from dense local clusters joined by a few long-range ties combine high clustering with short paths: the small-world property human social networks exhibit. efficiency and trust peak together within human-scale groups, and degrade together beyond them
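
the small-world effect is easy to check directly. the sketch below (plain Python, illustrative only) builds a Watts-Strogatz ring lattice and rewires a small fraction of edges: average path length collapses while clustering stays high

```python
import random

def watts_strogatz(n, k, beta, seed=42):
    """Ring lattice of n nodes, each tied to its k nearest neighbors,
    with each edge rewired to a random target with probability beta."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    for i in range(n):
        for j in list(adj[i]):
            if j > i and rng.random() < beta:
                new = rng.randrange(n)
                if new != i and new not in adj[i]:  # else keep original edge
                    adj[i].discard(j); adj[j].discard(i)
                    adj[i].add(new); adj[new].add(i)
    return adj

def avg_path_length(adj):
    """Mean shortest-path length over reachable pairs (BFS from each node)."""
    total, pairs = 0, 0
    for src in adj:
        dist, frontier = {src: 0}, [src]
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        nxt.append(v)
            frontier = nxt
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def clustering(adj):
    """Mean local clustering: fraction of a node's neighbor pairs
    that are themselves connected."""
    cs = []
    for u, nbrs in adj.items():
        nbrs = list(nbrs)
        if len(nbrs) < 2:
            cs.append(0.0); continue
        links = sum(1 for a in range(len(nbrs)) for b in range(a + 1, len(nbrs))
                    if nbrs[b] in adj[nbrs[a]])
        cs.append(2 * links / (len(nbrs) * (len(nbrs) - 1)))
    return sum(cs) / len(cs)

lattice = watts_strogatz(150, 6, 0.0)       # pure ring lattice
small_world = watts_strogatz(150, 6, 0.1)   # a few long-range rewired links
# rewiring shortens paths while most local clustering survives
```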

a platform with a billion users has no social incentive to act in your interest. it holds your data under legal and contractual obligations enforced after the fact — not under the relational pressure that makes custody meaningful. the Cambridge Analytica scandal made this visible: at platform scale, data stewardship and social obligation are structurally incompatible

custody requires social obligation. social obligation requires human-scale relationships

distributed architecture and the custody problem

distributed protocols address the concentration of control. Nostr separates identity from infrastructure. Mastodon federates across instances. Bluesky pursues portability. each improves on centralized platforms in genuine ways — but the custody relationship persists

relay operators still decide what to store and which content to surface. instance admins still set retention policies and can suspend access. the architecture distributes; the leverage remains with operators

distributed systems also face the scalability trilemma: as networks grow beyond human-scale clusters, consensus slows, coordination complexity rises, and trust degrades. the trilemma forces a trade-off between decentralization, security, and scalability — optimizing two degrades the third. and without a social filter on propagation, open gossip networks amplify noise as efficiently as signal

distribution relocates storage. it does not resolve who controls preservation, visibility, and value capture

what if the social graph were the infrastructure?

human communities transmit information through reciprocity. your close circle remembers what matters to you, shares it with people likely to care, and extracts no rent from that exchange. what if a storage protocol worked the same way?

what if the people storing your data were the people who already have social reason to — your actual network, under the same reciprocity that governs any healthy relationship? what if visibility propagated through trust rather than through an algorithm optimizing for engagement? what if reach was determined by who vouches for you, not by who pays for distribution?

the answer points toward a design where peers store each other's data as a function of relationship, not commercial transaction — and where the social graph is self-regulating infrastructure rather than a surface requiring constant moderation
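
a minimal sketch of that design, with hypothetical names throughout: a peer keeps an event only when a mutual relationship with its author exists, so custody tracks the social graph rather than a commercial contract

```python
from dataclasses import dataclass, field

@dataclass
class Peer:
    name: str
    trusted: set = field(default_factory=set)   # mutual relationships
    store: dict = field(default_factory=dict)   # author -> events held

    def befriend(self, other):
        # relationships are reciprocal: both sides record the tie
        self.trusted.add(other.name)
        other.trusted.add(self.name)

    def receive(self, author, event):
        # custody follows relationship: store only for trusted authors
        if author not in self.trusted:
            return False
        self.store.setdefault(author, []).append(event)
        return True

def publish(author, event, peers):
    # offer the event to everyone; only socially obligated peers keep it
    return [p.name for p in peers if p is not author and p.receive(author.name, event)]

alice, bob, carol = Peer("alice"), Peer("bob"), Peer("carol")
alice.befriend(bob)
holders = publish(alice, "note-1", [alice, bob, carol])
# only bob holds alice's note; carol has no social reason to
```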

why the web of trust could work as infrastructure

the web of trust is not a feature layered on top of a network. it is the only governance mechanism that scales without central authority

legal contracts require courts. platform rules require moderators. cryptographic proofs require validators. trust requires none of these — it is enforced by the relational consequences of defection. a peer who fails their obligations loses standing in the network. the incentive structure is the social structure

Dunbar's ~150 limit is the natural boundary. beyond it, you cannot meaningfully evaluate behavior. a protocol built on this boundary doesn't fight human cognitive limits — it builds on them. spam and manipulation have to infiltrate real relationships before they can spread, which is a fundamentally harder attack than flooding an open relay
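
the same idea as code: content is accepted only if a trust path to its author exists within a bounded number of hops. the graph shape and hop limit are illustrative assumptions, not a protocol specification

```python
from collections import deque

def trust_path_exists(graph, src, dst, max_hops=3):
    """BFS over mutual-trust edges: is dst reachable from src
    within max_hops? A sketch of trust-gated propagation."""
    if src == dst:
        return True
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for nbr in graph.get(node, ()):
            if nbr == dst:
                return True
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, hops + 1))
    return False

# hypothetical graph: reader <-> friend <-> author; spammer unconnected
graph = {
    "reader": {"friend"},
    "friend": {"reader", "author"},
    "author": {"friend"},
    "spammer": set(),
}
accept = trust_path_exists(graph, "reader", "author")   # reachable via friend
reject = trust_path_exists(graph, "reader", "spammer")  # no trust path
```

spam has to earn a real relationship before it can propagate; flooding an open relay gets it nothing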

challenges and risk mitigations

social graph capture. a trust-based system inherits the vulnerabilities of the social layer. if a trust graph has been infiltrated — by a state actor, a coordinated attack, or gradual capture — the protocol inherits that compromise. the mitigation is layered: privacy-preserving request routing, bootstrapping mechanisms for newcomers, and redundancy across multiple independent trust paths
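
redundancy across independent trust paths can be approximated greedily: find a path, burn its interior nodes, repeat. a sketch under that assumption (it yields a lower bound; exact counting needs max-flow)

```python
from collections import deque

def bfs_path(graph, src, dst, banned):
    """Shortest src->dst path avoiding banned interior nodes."""
    prev, frontier, seen = {}, deque([src]), {src} | banned
    while frontier:
        node = frontier.popleft()
        for nbr in graph.get(node, ()):
            if nbr == dst:
                path = [dst, node]
                while path[-1] != src:
                    path.append(prev[path[-1]])
                return path[::-1]
            if nbr not in seen:
                seen.add(nbr)
                prev[nbr] = node
                frontier.append(nbr)
    return None

def disjoint_trust_paths(graph, src, dst):
    """Greedy count of node-disjoint paths between two peers:
    a lower bound on how many independent routes survive capture."""
    g = {u: set(v) for u, v in graph.items()}
    count = 0
    if dst in g.get(src, set()):     # a direct tie is one path by itself
        count += 1
        g[src].discard(dst)
        g.get(dst, set()).discard(src)
    banned = set()
    while True:
        path = bfs_path(g, src, dst, banned)
        if path is None:
            return count
        banned |= set(path[1:-1])    # burn the interior nodes
        count += 1

# hypothetical diamond: a reaches d through b or through c
graph = {"a": {"b", "c"}, "b": {"a", "d"}, "c": {"a", "d"}, "d": {"b", "c"}}
# compromising b alone cannot cut a off from d
```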

cold start. a new user with no network has no storage peers, no visibility, no reach. the mitigation is transition: new users begin relay-dependent and migrate toward peer-based custody gradually as their trust graph matures. existing infrastructure is the onramp
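
one hypothetical way to encode that transition: the share of replicas kept on relays shrinks as the trust graph matures. the target of five peer replicas is an invented parameter, not a protocol constant

```python
def storage_split(trust_peers, target=5):
    """Hypothetical policy: split replicas between relays and trust
    peers as a function of how many mutual relationships exist.
    target is the peer count considered fully peer-backed."""
    peer_share = min(trust_peers, target) / target
    return {"relay": round(1 - peer_share, 2), "peers": round(peer_share, 2)}

storage_split(0)  # newcomer: fully relay-dependent
storage_split(2)  # partial migration as the trust graph grows
storage_split(5)  # mature trust graph: custody fully with peers
```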

sovereignty language cuts both ways. individual custody and state sovereignty use the same vocabulary but serve different interests. any system framed around sovereignty needs to make the distinction explicit — custody belongs to the person, not the jurisdiction

value capture remains open. reciprocal storage addresses preservation. trust-weighted propagation addresses visibility. who profits from the value your data generates is not resolved by social obligation alone. it remains the harder problem

further reading