Why digital long-form discussions belong on computers, not phones

This topic could also be titled:

  • why is everyone so angry online today?
  • why does everyone misunderstand each other?
  • why has everyone become overly sensitive online?

The answer lies in how our brains actually process information: the Magical Number Seven, Plus or Minus Two, your brain’s bitrate, and primal human instincts… and it is the reason why Eve will not come to your phone any time soon, and will remain computer-only.

This topic is based on years of research, talking to people, and reading lots, and I mean lots, of scientific papers, all to establish something that I believe is intrinsically known, often studied in depth, but seldom understood.

This may read long, but it’s the shortest (most compressed) version of my thoughts on the matter… and… you will find this very sentence hilarious as you read on.
No AI was used to write this.


In 1956, the American psychologist and one of the founders of cognitive psychology, George Armitage Miller, published a paper in the journal Psychological Review titled “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information” [1], in which he made the claim that human short-term memory has a fundamental, unalterable constraint: it cannot hold more than seven chunks of information, give or take two. This has been replicated countless times, across different types of information, cultures, backgrounds, and levels of intelligence, and is considered factual beyond reasonable doubt in the scientific community. Whether subjects are asked to remember digits, letters, words, sounds, chess positions, or anything else, the pattern holds: their performance degrades significantly when the number of items exceeds this range.

It is a fundamental limitation of the human brain, one that we all share, whether you’re Albert Einstein, Magnus Carlsen or Bob Smith from accounting.

While you’re limited in the number of chunks your memory can hold, the size of each chunk is flexible and depends heavily on how often you exercise your memory on any specific subject.


  1. Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81-97 ↩︎

Let’s take the example of chess. When you first start learning how to play, you use what feels like 100% of your brain power just remembering how the pieces move, along with a few basic patterns. But the more you play, the easier it becomes to reason about complex moves you would never have thought of before. Surely, then, the top players must be extreme geniuses, with crazy memorization skills that we, the masses, have no access to, right? NO.

Magnus Carlsen has the same number of memory slots as a beginner chess player; he has just internalized patterns, which free up brain space and let him use those slots more effectively. Magnus never has to think about where his knight is: he looks at the board, one slot in his brain fills up with “French Defense structure”, and his thinking can focus entirely on that.

You, a novice player looking at the same board he does, have an exponentially harder time following his thinking, and so you consider him a chess genius, a prodigy. But you’re playing a different game. If you’ve never played chess before, or have played it rarely, your 7 ± 2 slots are filled entirely by the 32 pieces on the board. Your RAM, so to speak, is full; information must constantly swap in and out, computing your next move takes your full focus, and so you miss critical details.

If you’ve ever watched a game between two grandmasters, you will notice that an unexpected move (not a blunder, but a “clever” move) often throws both players off: they start getting agitated, their eyes shift back and forth, they drink more water, or they make that slight gesture where they touch their face in an odd way. This is because the clever move has filled a memory slot they didn’t expect to get filled. It threw them both off their game, and gave them a handicap.


This of course goes far beyond chess [1]. I consider myself a skilled programmer, but what does that mean?

When you first start learning how to code, and if you haven’t exercised that part of your brain recently, you find yourself stuck in the most minute of details: syntax. If you’re a beginner, the simple act of typing what appear to be meaningless words such as int, let, bool, char, along with those strange curly brackets and square brackets you only remember seeing in high school maths class when you were learning algebra, is too much. It requires your entire brain power; but as you code more and more, your brain literally stops seeing them, and suddenly you start thinking in blocks.

if condition {
    // do something
} else {
    // do something else
}

And the more familiar you get with those, the more those blocks expand; as you get better, you suddenly start thinking in patterns.

For instance, if you’ve coded for even a short amount of time, it takes no memory slots to think about how to check whether a number is divisible by two: n % 2 (or n & 1, though that one may not be familiar to programmers who don’t think in bits often). The more experienced you get, the more you can do without thinking. An experienced developer can write quicksort from memory using perhaps one slot. A very good developer can fit entire architectures in their head.
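
To make that concrete, here’s a minimal sketch in Rust (my choice of language for the examples in this post, since the prose doesn’t pin one down) of the two chunks just mentioned: the parity check that costs an experienced programmer zero slots, and a from-memory quicksort that fits in roughly one. The Lomuto-partition variant below is just one of several ways to write it.

fn is_even(n: i64) -> bool {
    n % 2 == 0 // or n & 1 == 0, if you think in bits
}

// A "one slot" quicksort, written the way you'd recall it from memory:
// pick a pivot, partition, recurse on both halves.
fn quicksort<T: Ord + Copy>(a: &mut [T]) {
    if a.len() <= 1 {
        return;
    }
    let pivot = a[a.len() - 1];
    let mut store = 0;
    for i in 0..a.len() - 1 {
        if a[i] <= pivot {
            a.swap(i, store);
            store += 1;
        }
    }
    a.swap(store, a.len() - 1); // pivot lands in its final position
    let (left, right) = a.split_at_mut(store);
    quicksort(left);
    quicksort(&mut right[1..]); // skip the pivot itself
}

fn main() {
    println!("{}", is_even(42)); // true
    let mut scores = vec![9, 3, 7, 1, 8, 3];
    quicksort(&mut scores);
    println!("{:?}", scores); // [1, 3, 3, 7, 8, 9]
}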

Programmers take this concept so seriously that we have invented tools whose sole purpose is saving time by allowing us to make chunks bigger: syntax highlighting, autocomplete, language servers, debug adapters, time-travel debugging, even tools like bacon in Rust, all because we want to take away any extra load.

Extra information

More recent research has refined this estimate downward significantly, to 3-4 chunks[2].


But what does this have to do with why everyone’s so angry online, why instant messaging is terrible, and why smartphones are the cause?

Everything


  1. Oberauer, K., et al. (2018). How does chunking help working memory? Journal of Experimental Psychology: Learning, Memory, and Cognition, 44(4), 567-593. ↩︎

  2. Cowan, N. (2010). The magical mystery four: How is working memory capacity limited, and why? Current Directions in Psychological Science, 19(1), 51-57. ↩︎


In the early 20th century, German psychologists Max Wertheimer, Kurt Koffka, and Wolfgang Köhler discovered something fundamental: our brains automatically group information according to predictable patterns[1]. Those groupings happen automatically, at a subconscious, pre-cognitive level.

When you look at a face, you don’t see two eyes, a nose, a mouth, and ears as separate objects; rather, your brain chunks the entire thing into a “face”. Studies have shown that this isn’t learned behavior: babies start recognizing faces within hours of birth[2]. Those principles are hardwired into our brains, and they exist to save working memory slots.

When you walk into a room and see people standing close together and talking, Gestalt proximity chunks this as “a group having a conversation” (one slot) rather than “person A” + “person B” + “person C” + “spatial relationship AB” + “spatial relationship BC” + “spatial relationship AC” + “vocal patterns indicating interaction” (seven slots).

This chunking creates a fundamental problem in communication: we assume that others chunk information the same way we do[3]. When you’re an expert in a domain, as we talked about before, your chunks are large and abstract, so when you explain something to a novice you’re operating with completely different mental representations. You say “just use a hash map” and occupy one slot in your working memory, while the novice hears five separate concepts they must juggle: “hash”, “map”, “data structure”, “key-value pairs”, “O(1) lookup”.
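
To make the asymmetry concrete, here’s a tiny, hypothetical Rust sketch (the names and values are invented for illustration): the expert’s “just use a hash map” is one line and one chunk, while the comments spell out the five separate concepts the novice has to assemble before that line means anything.

use std::collections::HashMap;

fn main() {
    // Expert's view: "just use a hash map" -- one chunk.
    let mut ages: HashMap<String, u32> = HashMap::new();

    // Novice's view: five concepts to juggle at once.
    // 1. "hash"            -- keys get run through a hash function
    // 2. "map"             -- each key points to exactly one value
    // 3. "data structure"  -- it lives in memory and has to be created first
    // 4. "key-value pairs" -- you insert a key together with a value
    ages.insert("alice".to_string(), 34);
    // 5. "O(1) lookup"     -- retrieval doesn't scan every entry
    println!("{:?}", ages.get("alice")); // Some(34)
}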

This asymmetry becomes catastrophic in text-based communication, where we lack the real-time feedback loops that let us detect confusion. In face-to-face conversation, a furrowed brow or a pause signals misalignment, triggering automatic correction[4]. These signals occupy minimal working memory because they’re processed via ancient neural pathways designed for social cohesion[5].

If we remove those signals, something happens: each person constructs their own mental model of the conversation using their own chunking patterns, with no mechanism to verify alignment[6]. You write “this is a simple fix,” chunking the solution as one item in your working memory. The reader parses this as “the author thinks I’m stupid” + “this is actually complex” + “I’m being gaslit” + “there’s missing information” + “I need to defend myself” - suddenly they’ve filled all their working memory slots with defensive cognition rather than the actual technical content.

This cascading failure mode becomes especially pronounced in asynchronous text discussions. When someone encounters a problem, they’ve already burned cognitive cycles attempting a solution. By the time they write their question, their working memory contains: “my attempted solution” + “where it failed” + “what I tried next” + “why I think it’s not working” + “the urgency of the deadline” + “embarrassment about not knowing” + “the specific error message.”

That’s already seven slots filled before they even begin composing their question. The actual root problem, the one occupying slot zero in their thinking when they started, has been swapped out of working memory entirely.

They’re now asking about implementation details of their solution attempt, not the original problem. The person trying to help loads their working memory with the question as presented, chunks it according to their own expertise, and provides an answer to what was asked. Both parties are now operating in completely different problem spaces, each convinced the other isn’t understanding them, each getting progressively more frustrated as their working memory fills with meta-cognitive load about the communication failure itself rather than the technical content. The helper’s slots fill with: “incomplete information” + “why won’t they just tell me what they’re actually doing” + “this smells like they’re solving the wrong problem” + “how do I ask without sounding condescending” + “I’ve explained this three times.” And this brings me to XY problems.


  1. Wertheimer, M. (1923). Laws of organization in perceptual forms. Psychologische Forschung, 4(1), 301-350. ↩︎

  2. Johnson, M. H., et al. (1991). Newborns’ preferential tracking of face-like stimuli and its subsequent decline. Cognition, 40(1-2), 1-19. ↩︎

  3. Keysar, B., et al. (2003). Limits on theory of mind use in adults. Cognition, 89(1), 25-41. ↩︎

  4. Clark, H. H., & Brennan, S. E. (1991). Grounding in communication. Perspectives on socially shared cognition, 13(1991), 127-149. ↩︎

  5. Frith, C. D., & Frith, U. (2012). Mechanisms of social cognition. Annual review of psychology, 63, 287-313. ↩︎

  6. Krauss, R. M., & Fussell, S. R. (1996). Social psychological models of interpersonal communication. Social psychology: Handbook of basic principles, 655-701. ↩︎

This communication breakdown has a name in the technical community: the XY Problem[1]. Someone has problem X. They try to solve it with Y. Y doesn’t work. So they ask “how do I make Y work?” You answer their question about Y. But Y was never going to solve X. You both did everything right and the conversation still failed completely.

This is what chunking mismatch looks like in the wild. The novice is drowning in details: “my file path” + “this string function” + “regex pattern that should work” + “this one weird character breaking everything” + “deadline in 3 hours” + “why am I so bad at this” + “the exact error message.” Seven slots, completely full. The expert reads this and one single slot lights up in their brain: “oh, path handling.” They’ve played this game a thousand times. They know cross-platform paths, they know the edge cases, they know the gotchas. It’s all compressed into one chunk.

The expert tries to help: “Don’t use regex for this, just use Path.join()” and watches in confusion as the novice gets defensive. But look at what just happened in the novice’s head: “my approach is wrong” (slot fills with defensiveness) + “they’re not answering my question” (another slot) + “I need the regex to match THIS specific thing” (another slot) + “did I explain it badly?” (another slot). The expert gave the right answer. The novice asked the right question from their perspective.
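
For the curious, here’s roughly what the expert’s single chunk expands to, sketched in Rust (the original exchange is language-agnostic; “Path.join()” maps to std::path::Path::join here, and the novice’s string-mangling attempt is a stand-in I invented, since the standard library has no regex):

use std::path::Path;

fn main() {
    // The novice's mental model: a path is "just a string", so separators,
    // escapes, and that one weird character all have to be handled by hand.
    let dir = "reports\\2024"; // came in from a Windows machine
    let file = "summary.txt";
    let fragile = format!("{}/{}", dir, file);
    println!("{}", fragile); // reports\2024/summary.txt -- mixed separators, the bug report

    // The expert's one chunk: let the platform's path type do the thinking.
    let robust = Path::new("reports").join("2024").join("summary.txt");
    println!("{}", robust.display());
}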

The expert is talking in compressed abstractions. “Path manipulation” is a single atomic concept to them, something they don’t even think about anymore. The novice is still assembling that concept from scratch, piece by piece, character by character, still figuring out what a path even is at a fundamental level. When the expert says “just use the standard library” they’re not being condescending, they’re literally speaking a different language. One where years of pain have been compressed into efficient patterns. The novice can’t hear that language yet. Their chunks are too small.

The expert was once exactly here, struggling with the same regex, learning paths the hard way, discovering libraries after doing it manually twenty times. But their brain compressed all that. Now they can’t uncompress it fast enough to meet the novice where they actually are.

This memory slot problem gets exponentially worse when you factor in what happens when those slots overflow. Because here’s the thing: your working memory isn’t just limited in capacity - it’s also temporary storage with an expiration timer.


  1. Pohl, E. S. (2013). The XY Problem. Retrieved from http://xyproblem.info/ ↩︎

When your working memory reaches capacity, your brain doesn’t just stop processing information. Instead, it engages in what cognitive scientists call “memory consolidation”, moving information from working memory into long-term storage[1][2]. This process isn’t instantaneous or free. Your working memory operates at roughly 50 milliseconds access time - essentially real-time for conscious thought[3]. Long-term memory retrieval, however, takes 200 milliseconds or more[4][5]. That’s a 4× latency penalty minimum - and it compounds with every swap.

Think about the last time you cooked something new. You’re standing in your kitchen, recipe pulled up on your phone or printed on paper. You read: “dice the onions, mince three cloves of garlic, julienne the bell peppers.” You walk to your cutting board. By the time you pick up the knife, you’ve forgotten whether it was two cloves or three. You walk back. Check. Three cloves. Return to the cutting board. Start chopping. Wait, was it diced or minced onions? Back to the recipe.

Each trip back to that recipe is a huge latency penalty[6][7]. While the instructions are in your working memory, you can access them nearly instantly, within 50 milliseconds. But the moment you look away, start chopping, fill those memory slots with “knife angle” + “finger position” + “cutting rhythm” + “don’t cut yourself” + “is this the right size?” + “how much have I cut?” + “what’s burning on the stove?”, those recipe instructions get pushed out. They don’t vanish, they just move into your brain’s “long term storage”. But now, when you need them again, retrieval takes 200 milliseconds or more[8][9].

This is why experienced cooks glance at a recipe once and execute it perfectly. They’ve chunked “dice onions, mince garlic, julienne peppers” into one concept: mirepoix prep. One slot. Beginners are juggling six separate instructions, all competing for those limited slots[10][11]. When the slots overflow, the swapping starts. And each swap burns energy. Your brain is literally consuming glucose every time it consolidates memory[12][13].

This is why you’re exhausted after assembling IKEA furniture even though you barely moved. You read: “attach piece H4 to piece G7 using screw type M.” You find H4. Your memory fills: H4 in hand, need G7, where is G7, was it screw M or screw N? You check the instructions again. Everything you just had - where H4 was, which hand it’s in - gets swapped out. You look back at the furniture. Why did you pick up H4 again? The instruction swaps back in, 200 milliseconds later. Right, H4 to G7. You find G7. Memory fills again: G7 position, H4 somewhere, which hole, which way does it face, wait did I grab the right screw[14][15]. In the programming world we call this a cache miss: you’re managing a cache system with a 4× miss penalty.
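
As a purely illustrative model (the 50 ms and 200 ms figures come from the papers cited above; the hit rates below are invented), here’s a quick Rust sketch of how that 4× miss penalty compounds as interruptions push things out of your “cache” and the hit rate drops:

fn main() {
    let hit_ms = 50.0;   // information still in working memory
    let miss_ms = 200.0; // information swapped out to long-term storage: 4x slower

    // Hypothetical hit rates: focused, mildly interrupted, constantly interrupted.
    for &hit_rate in &[0.95_f64, 0.70, 0.40] {
        let avg = hit_rate * hit_ms + (1.0 - hit_rate) * miss_ms;
        println!("hit rate {:.0}% -> average access {:.0} ms", hit_rate * 100.0, avg);
    }
}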

Now put this in text messages. You’re discussing weekend plans. Your friend texts: “Want to go hiking Saturday morning? Great weather.” You start typing. Phone buzzes. Different friend: “Did you see the email about Monday’s meeting?” Your memory shifts. Meeting, Monday, check email, calendar conflict. You open email. Another buzz. First friend: “Or Sunday if that’s better?” You’ve lost it. What were you saying about Saturday?[16][17].

Which brings me to Information Density and Forced Smaller Chunks, and the little rectangle in your pocket: the mobile screen.


  1. Squire, L. R., & Alvarez, P. (1995). Retrograde amnesia and memory consolidation: a neurobiological perspective. Current Opinion in Neurobiology, 5(2), 169-177. ↩︎

  2. McGaugh, J. L. (2000). Memory–a century of consolidation. Science, 287(5451), 248-251. ↩︎

  3. Baddeley, A. (2012). Working memory: theories, models, and controversies. Annual Review of Psychology, 63, 1-29. ↩︎

  4. Sternberg, S. (1966). High-speed scanning in human memory. Science, 153(3736), 652-654. ↩︎

  5. Wickelgren, W. A. (1977). Speed-accuracy tradeoff and information processing dynamics. Acta Psychologica, 41(1), 67-85. ↩︎

  6. Jonides, J., et al. (2008). The mind and brain of short-term memory. Annual Review of Psychology, 59, 193-224. ↩︎

  7. D’Esposito, M., & Postle, B. R. (2015). The cognitive neuroscience of working memory. Annual Review of Psychology, 66, 115-142. ↩︎

  8. Wickelgren, W. A. (1977). Speed-accuracy tradeoff and information processing dynamics. Acta Psychologica, 41(1), 67-85. ↩︎

  9. Anderson, J. R. (1974). Retrieval of propositional information from long-term memory. Cognitive Psychology, 6(4), 451-474. ↩︎

  10. Ericsson, K. A., & Kintsch, W. (1995). Long-term working memory. Psychological Review, 102(2), 211-245. ↩︎

  11. Chase, W. G., & Simon, H. A. (1973). Perception in chess. Cognitive Psychology, 4(1), 55-81. ↩︎

  12. Raichle, M. E., & Gusnard, D. A. (2002). Appraising the brain’s energy budget. Proceedings of the National Academy of Sciences, 99(16), 10237-10239. ↩︎

  13. Sokoloff, L. (1999). Energetics of functional activation in neural tissues. Neurochemical Research, 24(2), 321-329. ↩︎

  14. Sweller, J., et al. (2011). Cognitive load theory. Psychology of Learning and Motivation, 55, 37-76. ↩︎

  15. Barrouillet, P., et al. (2007). Time and cognitive load in working memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33(3), 570-585. ↩︎

  16. Oberauer, K., et al. (2012). What limits working memory capacity? Psychological Bulletin, 138(2), 240-258. ↩︎

  17. Cowan, N. (2017). The many faces of working memory and short-term storage. Psychonomic Bulletin & Review, 24(4), 1158-1170. ↩︎

Your monitor, that glowing rectangle you’re (hopefully – more on that later) reading this on, has a specification called pixels per inch. It’s a measurement of how densely packed the tiny dots of light are that make up everything you see. Most desktop monitors sit around 90-110 PPI. Your laptop? Maybe 130-140 PPI. Retina displays and high-end monitors push 200+ PPI. But here’s what matters: it’s not the density itself that affects your cognition; it’s what that density allows you to see at once[1]. A 27-inch monitor at 1440p resolution can display roughly 3.7 million pixels simultaneously. That’s 3.7 million tiny decisions your brain has already processed and chunked before you’re even consciously aware of looking at the screen[2]. Your visual cortex, that massive chunk of brain real estate at the back of your skull, evolved over millions of years to process vast amounts of spatial information in parallel[3].
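
If you want to sanity-check those numbers, PPI is just the diagonal pixel count divided by the diagonal size in inches, and the pixel budget is width × height. A small Rust sketch (the 27-inch 1440p monitor comes from the paragraph above; the phone panel is a hypothetical example I picked for contrast):

fn ppi(width_px: f64, height_px: f64, diagonal_in: f64) -> f64 {
    (width_px.powi(2) + height_px.powi(2)).sqrt() / diagonal_in
}

fn main() {
    // 27-inch monitor at 1440p (2560 x 1440).
    let (w, h, d) = (2560.0, 1440.0, 27.0);
    println!("pixels shown at once: {:.1} million", w * h / 1.0e6); // ~3.7 million
    println!("density: {:.0} PPI", ppi(w, h, d));                   // ~109 PPI

    // A ~6-inch flagship phone (hypothetical 2532 x 1170 panel).
    let (pw, ph, pd) = (2532.0, 1170.0, 6.1);
    println!("phone density: {:.0} PPI", ppi(pw, ph, pd));          // ~460 PPI
}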

Let’s go back in time. If you’ve ever read a newspaper, you know that as it’s spread across your kitchen table while you’re eating breakfast, you’re not reading it like a book, word by word; rather, the people who designed the newspaper laid it out so that your peripheral vision scans headlines, notes images, and builds a stable map of where everything is. You know instinctively, as soon as you pick up the newspaper, that the sports section, which will hopefully tell you that your favorite team won the Champions League, is on page 4, and that the economics section you always ignore is on the back. This spatial memory is nearly free cognitively; it doesn’t consume your precious working memory slots[4].

Now here’s where it gets interesting: multiple monitors. Studies have consistently shown that two monitors increase productivity by 20-30% over a single monitor[5][6]. Three monitors? Even better, up to a point. Why? Because you’re reducing those catastrophic memory swaps we talked about. When you’re writing a document, your working memory fills with: current sentence structure, the point you’re making, where this paragraph fits in the overall argument, the rhythm of your prose, what you’re going to say next. If you need to check a source, and that source is on the same screen, hidden behind your document, you’ve got to minimize one window, find the other, read it, remember what you read, minimize it, bring back your document, and remember where you were[7]. That’s a memory swap.

But put that source on a second monitor? Your eyes glance right. Zero memory swaps. The document stays in memory because it stays in view. Your visual cortex handles the spatial transition automatically, burning almost no cognitive load[8]. Three monitors let you have your document center, your sources left, and your outline or notes right.

But there’s a curve. Thirty monitors? Your eyes are physically scanning across meters of space, your neck is turning, your body is moving. Now you’re burning attention on finding information rather than processing it[9]. Studies of control rooms (such as air traffic control, or emergency dispatch centers) show that there’s an optimal layout, and it’s not “maximum screens”[10]. Too many displays and operators start experiencing what’s called “cognitive tunneling”, where they fixate on one screen and miss critical information on others because the mental cost of maintaining awareness of all those spatial locations exceeds the benefit[11]. Your working memory slots fill up with “where is the thing I need?” instead of “what does this information mean?” The sweet spot for most people seems to be two to four monitors, arranged within natural eye movement range.

And then there’s the FUCKING PHONE SCREEN.

Let me take a moment to compose myself… A modern smartphone has a gorgeous display: beautiful, a marvel of engineering, often over 500 PPI, so dense that your eyes literally can’t distinguish individual pixels. Everything about it is sharp and bright, and its pixels can reproduce colors so accurately that they’re almost indistinguishable from the real world.

But… it’s also about 6 inches diagonal, some even smaller. At typical reading distance, that’s about 15 degrees of visual angle[12]. Your effective visual field spans about 120 degrees horizontally. Your phone occupies one-eighth of that. You’re peering through a keyhole at the entire world of information[13]. If you’re reading this post on a phone, you see maybe 200-300 words at once, depending on your font size. On a desktop monitor? 900-1100 words, easy. Remember those memory swaps? On a phone, everything requires scrolling. You want to reference something three paragraphs up? Scroll up, read it, scroll back down, try to find where you were. Two memory swaps minimum, probably more because you lost your place[14]. Want to compare two ideas I presented? Forget it. You’re doing mental gymnastics, burning working memory slots on “what was that thing Danny said about chess?” while simultaneously trying to process new information. Your working memory is spending more time on navigation than on thinking[15].
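
For the 15-degree claim, the standard formula for visual angle is 2·atan(size / (2·distance)). A quick Rust sketch (the phone width, monitor width, and viewing distances below are assumptions I picked for illustration, not measurements from the post):

fn visual_angle_deg(size_cm: f64, distance_cm: f64) -> f64 {
    2.0 * (size_cm / (2.0 * distance_cm)).atan().to_degrees()
}

fn main() {
    // A ~7 cm wide phone held at ~30 cm, versus a ~60 cm wide monitor at ~70 cm.
    println!("phone:   {:.0} degrees wide", visual_angle_deg(7.0, 30.0));  // ~13 degrees
    println!("monitor: {:.0} degrees wide", visual_angle_deg(60.0, 70.0)); // ~46 degrees
}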

And then there’s Bitrate… yeah, sorry, another programmer-sounding concept…

Hey, hey, focus, focus, I know this is getting boring, who cares about bitrate? That’s programmer talk… “When is Danny finally going to get to the point? Jesus”… I get it… Please stay with me, I’m getting there.


  1. Larson, K., & Picard, R. W. (2005). The aesthetics of reading. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, 1-10. ↩︎

  2. Ware, C. (2012). Information visualization: perception for design. Elsevier. ↩︎

  3. Wandell, B. A., et al. (2007). Visual field maps in human cortex. Neuron, 56(2), 366-383. ↩︎

  4. Ekstrom, A. D., & Ranganath, C. (2018). Space, time, and episodic memory: the hippocampus is all over the cognitive map. Hippocampus, 28(9), 680-687. ↩︎

  5. Colvin, J., et al. (2004). Productivity and multi-screen computer displays. Rocky Mountain Communication Review, 2(1), 31-53. ↩︎

  6. Owens, J. M., et al. (2012). Investigation of the effects of display size on work performance in information-rich workspaces. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 56(1), 1338-1342. ↩︎

  7. Roda, C., et al. (2003). Attention aware systems: Theories, applications, and research agenda. Computers in Human Behavior, 19(5), 557-587. ↩︎

  8. Robertson, G., et al. (2005). Large display research overview. CHI’05 Extended Abstracts on Human Factors in Computing Systems, 1-11. ↩︎

  9. Grudin, J. (2001). Partitioning digital worlds: focal and peripheral awareness in multiple monitor use. CHI '01 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 458-465. ↩︎

  10. Endsley, M. R. (1995). Toward a theory of situation awareness in dynamic systems. Human Factors, 37(1), 32-64. ↩︎

  11. Wickens, C. D., & Alexander, A. L. (2009). Attentional tunneling and task management in synthetic vision displays. The International Journal of Aviation Psychology, 19(2), 182-199. ↩︎

  12. Howarth, P. A., & Hodder, S. G. (2008). Characteristics of cell phone displays and vision-related symptoms in young people. Ophthalmic and Physiological Optics, 28(4), 346-354. ↩︎

  13. Maniar, N., et al. (2004). Examining the effects of screen size on video-based learning. Journal of Educational Multimedia and Hypermedia, 13(4), 375-396. ↩︎

  14. Hou, J., et al. (2017). Information quantity and information quality: How the comprehensiveness and accuracy of search results affect users’ search behavior and opinion formation. Journal of the Association for Information Science and Technology, 68(11), 2662-2676. ↩︎

  15. Trawinski, P. R., & MacKenzie, I. S. (2006). Comparing two input methods for keypads on mobile devices. Proceedings of the 4th Nordic Conference on Human-Computer Interaction, 287-290. ↩︎

Your brain processes information at roughly 120 bits per second[1]. That’s your bitrate. That’s the speed at which you can consciously process new information, make decisions, and update your mental model of the world. Speaking? We talk at about 39 bits per second[2][3]. Reading silently? Faster, maybe 60-80 bits per second if you’re good at it[4]. Notice something? Your brain can barely keep up with one stream of information at conversational speed. When someone’s talking to you, you’re operating near your maximum cognitive throughput. Now add in trying to formulate a response, monitor their body language, remember what you wanted to say three sentences ago, and stay aware of your surroundings. You’ve exceeded capacity. This is why arguments escalate; you’re no longer processing what they’re saying, you’re just waiting for them to stop so you can speak[5]. Your working memory is full of your rebuttal, not their words.

This is why professional debate is incredibly hard, by the way.

But here’s where it gets interesting for computers versus phones: Overview + Detail. When you’re looking at a desktop screen, you get both simultaneously[6]. You see the forest AND the trees. You can see the current paragraph in detail while your peripheral vision holds the document structure, where you are, how much is left, what sections are coming. Your brain builds what cognitive scientists call a “spatial scaffold”[7]. It’s a mental map that requires almost zero working memory because it’s offloaded to your visual cortex. Mobile forces you into detail-only mode.

You’re reading one paragraph at a time through that keyhole, blind to everything else. Studies show this isn’t just annoying, it’s cognitively expensive[8].

On a phone you’re using more brain power to do less!

Desktop has something magical that mobile will never have: the hover state[9]. Move your mouse over a link? You get a preview. Hover over a button? Tooltip appears. Float over a menu item? Submenu materializes. This is what Peter Pirolli and Stuart Card call “information scent” – tiny hints that tell you what will happen before you commit[10]. You can explore interfaces consequence-free. Your prefrontal cortex – that expensive, energy-hungry part of your brain that does all your executive planning – doesn’t have to spin up and simulate outcomes[11]. On mobile? Every tap is a commitment. Touch is binary: pressed or not pressed. No hover state means every interaction requires mental simulation: “If I tap this, what will happen?” You’re burning working memory slots running a mental model of the UI before you even interact with it. And if you’re wrong? Now you have to undo, which breaks your flow state entirely, dumps your working memory, and forces you to rebuild your mental context from scratch[12].

Right-click. That beautiful, underappreciated gesture that desktop users take for granted[13]. Context menus that appear exactly where you need them, packed with secondary functions that don’t clutter your primary interface. You can switch modes without moving your hand, access advanced features without hunting through menus, and build muscle memory for complex operations. Mobile’s answer? Long-press. Which is just… slow right-click with a discoverability problem (nothing tells you it exists), a timing problem (how long is “long”?), and a conflict problem (long-press versus scroll versus select)[14]. Most mobile apps just give up and hide secondary functions in hamburger menus, which means more tapping, more navigating, more cognitive load, more memory swaps. You’re back in that IKEA instruction manual, constantly context-switching between “what am I doing” and “where is the thing that lets me do it.”

And keyboard shortcuts[15]. Semantic compression at its finest. One keystroke equals a complex operation. Ctrl+C, Ctrl+V – you don’t even think about it anymore. It’s muscle memory, processed by your cerebellum, completely bypassing your working memory[16]. And they’re composable: Ctrl+Shift+Arrow selects a word, Ctrl+X cuts it, Ctrl+V pastes it… move a word in three keystrokes…

If this were a different post I would spend the next few paragraphs taking the opportunity to talk about why vim motions are superior and why you should all learn to daw, ci(, and vape, but this is not a programmer focused post.

On mobile every operation is tap-navigate-tap-navigate. You can’t build fluency because there’s no muscle memory possible when every action requires visual search and conscious decision-making. You’re trapped in what psychologists call the “cognitive phase” of skill acquisition, never graduating to the “autonomous phase” where actions become automatic and free up your working memory for actual thinking[17].

Wait, that’s almost how a JPEG works… Fuck, I have to go back into technical mode, sorry.


  1. Zimmermann, E., et al. (2013). Spatial position information accumulates steadily over time. Journal of Neuroscience, 33(47), 18396-18401. ↩︎

  2. Ferrer-i-Cancho, R., & Elvevåg, B. (2010). Random texts do not exhibit the real Zipf’s law-like rank distribution. PloS one, 5(3), e9411. ↩︎

  3. Nishimura, T., et al. (2022). Different velocities of information transfer in different language domains. Journal of Language Evolution, 7(1), 42-58. ↩︎

  4. Rayner, K. (1998). Eye movements in reading and information processing: 20 years of research. Psychological Bulletin, 124(3), 372-422. ↩︎

  5. Bodie, G. D., et al. (2012). The temporal stability and situational contingency of active-empathic listening. Western Journal of Communication, 76(5), 471-495. ↩︎

  6. Cockburn, A., et al. (2009). Review of overview+ detail, zooming, and focus+ context interfaces. ACM Computing Surveys, 41(1), 1-31. ↩︎

  7. McNamara, T. P., et al. (2008). Egocentric and allocentric spatial memory in adults with autism spectrum disorder. Cognitive Psychology, 56(2), 129-169. ↩︎

  8. Sweller, J. (2010). Element interactivity and intrinsic, extraneous, and germane cognitive load. Educational Psychology Review, 22(2), 123-138. ↩︎

  9. Norman, D. A. (2013). The design of everyday things: Revised and expanded edition. Basic books. ↩︎

  10. Pirolli, P., & Card, S. (1999). Information foraging. Psychological Review, 106(4), 643-675. ↩︎

  11. Miller, E. K., & Cohen, J. D. (2001). An integrative theory of prefrontal cortex function. Annual Review of Neuroscience, 24(1), 167-202. ↩︎

  12. Altmann, E. M., & Trafton, J. G. (2002). Memory for goals: An activation-based model. Cognitive Science, 26(1), 39-83. ↩︎

  13. Ahlström, D., et al. (2010). Improving menu interaction for cluttered tabletop displays. Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces, 121-124. ↩︎

  14. Boring, S., et al. (2012). Touch projector: mobile interaction through video. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2287-2296. ↩︎

  15. Lane, D. M., et al. (2005). Hidden costs of graphical user interfaces: Failure to make the transition from menus and icon toolbars to keyboard shortcuts. International Journal of Human-Computer Interaction, 18(2), 133-144. ↩︎

  16. Ericsson, K. A., & Kintsch, W. (1995). Long-term working memory. Psychological Review, 102(2), 211-245. ↩︎

  17. Fitts, P. M., & Posner, M. I. (1967). Human performance. Brooks/Cole. ↩︎

You know what a JPEG is. That image format that’s everywhere, that makes your photos small enough to text to your mom without eating your data plan. But do you know how it works? Here’s the thing: JPEG is brilliant specifically because it throws away information[1]. Not random information; it throws away the information humans can’t perceive anyway. Your eye is terrible at detecting subtle changes in color but great at seeing edges and brightness shifts[2]. JPEG exploits this. It preserves what you notice, discards what you don’t. A 10MB photo becomes 1MB, and you genuinely cannot tell the difference. That’s lossy compression that sometimes works perfectly, but only because it’s making assumptions about the receiver. It assumes you have human eyes with human limitations.
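
If you’re curious what “throw away what you can’t perceive” looks like mechanically, here’s a deliberately simplified Rust sketch of JPEG’s quantization step, the lossy part (real JPEG first converts to YCbCr and runs a DCT on 8×8 blocks; the coefficient row and quantization steps below are invented for illustration):

fn main() {
    // Pretend this is one row of DCT coefficients for an 8x8 block:
    // left = coarse, visible structure; right = fine detail you barely see.
    let coeffs: [i32; 8] = [620, 140, 35, 12, 6, 3, 2, 1];
    // Quantization steps grow toward the fine-detail end, where eyes are least sensitive.
    let quant: [i32; 8] = [16, 24, 40, 64, 96, 128, 160, 200];

    // The lossy step: divide and round. Small high-frequency values become 0
    // and are simply never stored.
    let stored: Vec<i32> = coeffs
        .iter()
        .zip(&quant)
        .map(|(c, q)| (*c as f64 / *q as f64).round() as i32)
        .collect();
    // Decoding multiplies back; whatever was rounded away is gone for good.
    let decoded: Vec<i32> = stored.iter().zip(&quant).map(|(s, q)| s * q).collect();

    println!("original: {:?}", coeffs);  // [620, 140, 35, 12, 6, 3, 2, 1]
    println!("stored:   {:?}", stored);  // [39, 6, 1, 0, 0, 0, 0, 0] -- compresses well
    println!("decoded:  {:?}", decoded); // [624, 144, 40, 0, 0, 0, 0, 0] -- close where it matters
}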

Now here’s the uncomfortable truth: all human communication is lossy compression[3][4]. When you have a thought and you try to express it in words, you’re compressing. You’re taking this massive internal representation and squeezing it through the narrow bottleneck of language. You’re throwing away nuance, discarding emotional context, stripping out the web of associations that gave that thought its full meaning[5]. And just like JPEG, you’re making assumptions. You’re assuming the person reading your words shares enough of your context, your chunk library, your cultural background, that they can decompress your message back into something resembling your original thought. When someone shares your context, this works beautifully.

When they don’t? You’re on Twitter having a screaming match with someone because, to them, peanut butter cannot be peanut butter if it’s too chunky, and you’re a fascist, a racist, and a communist for implying it can.

Think about the last time you tried to explain something you care deeply about to someone who knows nothing about it. Maybe it’s your hobby, your job, your favorite book. You’re talking, words are coming out, and you can see in their eyes that they’re not getting it[6]. You’re speaking different languages, you’re compressing your thoughts using a codec trained on thousands of hours of experience in this domain. They’re trying to decompress using a codec that lacks all those libraries, all those chunks. It’s like trying to play an H.265 video on a VHS player. The information is there, but the decoder can’t make sense of it[7].

Language itself is a lossy codec that evolved to work between humans with similar contexts[8]. When you and I both understand what “a hash map” means, I can use those two words and transmit a complex concept efficiently. But that only works because we both have that chunk pre-loaded. To someone without that chunk, those same two words carry almost zero information, or worse, they carry wrong information, filling up working memory slots with confused guesses[9]. And ironically, even if we could somehow transmit information perfectly, at infinite speed, with zero latency, it wouldn’t fix the fundamental problem. Because the compression loss doesn’t happen in the transmission, rather it happens at the encoding and decoding stages. You and I can speak the same language, use the same words, and still be having completely different conversations because we’re running incompatible decoders.


  1. Wallace, G. K. (1992). The JPEG still picture compression standard. IEEE Transactions on Consumer Electronics, 38(1), xviii-xxxiv. ↩︎

  2. Gonzalez, R. C., & Woods, R. E. (2018). Digital image processing (4th ed.). Pearson. ↩︎

  3. Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27(3), 379-423. ↩︎

  4. Sperber, D., & Wilson, D. (1986). Relevance: Communication and cognition. Harvard University Press. ↩︎

  5. Pinker, S. (2007). The stuff of thought: Language as a window into human nature. Penguin. ↩︎

  6. Keysar, B. (1994). The illusory transparency of intention: Linguistic perspective taking in text. Cognitive Psychology, 26(2), 165-208. ↩︎

  7. Healey, P. G., et al. (2018). Divergence in dialogue. PloS one, 13(10), e0205935. ↩︎

  8. Tomasello, M. (2008). Origins of human communication. MIT press. ↩︎

  9. Clark, H. H. (1996). Using language. Cambridge University Press. ↩︎

Now put this on a phone screen. Suddenly you’re not just dealing with codec incompatibility, you’re dealing with forced over-compression[1]. Your thoughts have to fit in a text box. Character limits, typing speed on glass, the sheer cognitive load of pecking out words with your thumbs, all of it forces you to compress tighter, throw away more information, strip out more nuance. You can’t include the careful hedging, the elaboration, the “what I mean by that is…” that would help the receiver decode your message correctly. You’re compressing an already-lossy codec even further[2]. And what gets thrown away first? The metadata. The tone indicators. The contextual signals. The very information that would help someone understand how to interpret your words. Face-to-face communication is high-bandwidth: you get words, prosody, facial expressions, gestures, timing, context[3]. Text communication on mobile is the lowest-bandwidth codec humans have ever invented for interpersonal communication. You’re transmitting pure semantic content through the narrowest possible pipe, hoping the receiver can reconstruct your intent from essentially nothing.

And to make matters worse, you’re doing all of that on a public platform where hundreds of people with contrasting beliefs, ideas, and frames of reference interact with your words, misinterpret them, and interject.

Online text communication using a smartphone maximizes every source of loss simultaneously. You’ve got forced compression from character limits and typing constraints. Stripped metadata, fragmented context because algorithmic feeds scatter conversations across time and space. Unknown decoders, strangers with completely different chunk libraries trying to parse your words[4]. The smallest possible chunks because mobile screens show you nothing. And the highest latency because it’s asynchronous communication without proper threading or spatial context. Every single factor we’ve talked about compounds. You’re trying to have a high-fidelity conversation through a tin can telephone while blindfolded, using a language you only half-share, about topics you chunk completely differently. And then we wonder why everyone’s so angry, why everyone misunderstands each other, why the discourse is so toxic.

Wait… isn’t this exactly what Newspeak was about in 1984? Uh huh 🙂


  1. Vandergriff, I. (2013). Emotive communication online: A contextual analysis of computer-mediated communication (CMC) cues. Journal of Pragmatics, 51, 1-12. ↩︎

  2. Thurlow, C., & Brown, A. (2003). Generation txt? The sociolinguistics of young people’s text-messaging. Discourse Analysis Online, 1(1), 30. ↩︎

  3. Mehrabian, A., & Ferris, S. R. (1967). Inference of attitudes from nonverbal communication in two channels. Journal of Consulting Psychology, 31(3), 248-252. ↩︎

  4. Walther, J. B. (2011). Theories of computer-mediated communication and interpersonal relations. The SAGE handbook of interpersonal communication, 4, 443-479. ↩︎

In 1949, George Orwell, then known for the political satire Animal Farm, which directly mocked Soviet communism, published 1984 [1]. In it, the totalitarian superstate of Oceania doesn’t just control what people can say; it systematically engineers the language itself through something called Newspeak[2]. The goal is brutally simple: make it literally impossible to think forbidden thoughts. You can’t contemplate rebellion if you don’t have words for freedom, autonomy, or resistance. You can’t articulate dissent if your vocabulary has been compressed down to a handful of approved chunks[3]. The Party understood something fundamental: if you can’t think complex thoughts, you can’t organize complex resistance. Language control equals thought control. Eliminate the words, eliminate the concepts, eliminate the ability to even recognize that you’re being oppressed. Your working memory can’t hold what it doesn’t have chunks for.

And here’s the thing that’ll keep you up at night: they weren’t wrong about how it works[4]. When you constrain language, you genuinely constrain thought. Not because language is thought exactly, but because language provides the scaffolding for abstract reasoning[5]. Try doing complex mathematics without mathematical notation. Try reasoning about data structures without the vocabulary of computer science. Try planning revolution without words for justice, solidarity, collective action. The chunks aren’t there. Your working memory has nothing to load[6]. Newspeak’s entire idea was to reduce vocabulary, which in turn reduced chunk size, working memory capacity, and the ability to hold complex thoughts in your mind simultaneously…

But wait wait wait wait— hold the fuck on. This isn’t just some dystopian fiction thing, this is EXACTLY what the Nazis did[7].


  1. Orwell, G. (1949). Nineteen Eighty-Four. Secker & Warburg. ↩︎

  2. Deutscher, G. (2010). Through the language glass: Why the world looks different in other languages. Metropolitan Books. ↩︎

  3. Whorf, B. L. (1956). Language, thought, and reality: Selected writings of Benjamin Lee Whorf. MIT Press. ↩︎

  4. Sapir, E. (1929). The status of linguistics as a science. Language, 5(4), 207-214. ↩︎

  5. Carruthers, P. (2002). The cognitive functions of language. Behavioral and Brain Sciences, 25(6), 657-674. ↩︎

  6. Vygotsky, L. S. (1962). Thought and language. MIT Press. ↩︎

  7. Klemperer, V. (2000). The language of the Third Reich: LTI, lingua tertii imperii: A philologist’s notebook. Continuum. ↩︎

Victor Klemperer, a philologist who lived through it, documented in excruciating detail how the Reich systematically poisoned the German language. They compressed complex moral concepts into black-and-white propaganda chunks. They created new compound words that encoded their ideology directly into the vocabulary. They made certain ways of thinking literally unsayable; the Stasi did it too[1]. East Germany’s secret police monitored language, flagged dangerous chunks, compressed permitted expression down to party-approved patterns. And of course Stalin pioneered this in the USSR[2]. Mandatory self-criticism sessions, required use of specific ideological vocabulary, systematic destruction of pre-revolutionary language patterns, all designed to make dissent cognitively expensive, to fill up your working memory slots with approved thoughts so there’s no room left for dangerous ones.

And NOW, right now, today, in 2025, we’re doing it to ourselves[3]. We’ve built the infrastructure of totalitarian thought control and packaged it as convenience. Your phone tracks every word you type, every site you visit, every chunk you load into your working memory[4]. The algorithms optimize for engagement, which means they optimize for emotion, which means they optimize for that compressed, reactive, working-memory-overloaded state where you can’t think carefully about what you’re reading[5]. They’re not building Newspeak directly, they’re building the conditions where Newspeak becomes inevitable: platforms that force compression, interfaces that prevent overview, attention economics that reward bite-sized reactivity over careful reasoning[6]. We don’t even need a totalitarian government to do this anymore. We’re paying corporations to limit our cognitive capacity[7].

And unlike Oceania, unlike the Third Reich, the Stasi, or the KGB, we can’t even point to the bad guy and organize resistance because we’re doing it voluntarily[8]. We’re building our own Panopticon and calling it “staying connected.” The surveillance isn’t just watching us anymore, it’s shaping how we think by controlling the medium through which we think[9].

This is why privacy matters, not just as some abstract civil liberty, but as a prerequisite for complex thought[10]. When you know you’re being watched, when you know your words are being logged, indexed, analyzed, fed into content moderation algorithms and engagement optimization engines, you self-censor. Not just what you say, but how you think[11]. You compress your thoughts preemptively. You avoid loading controversial chunks into working memory because you know they’ll leave traces. You stick to approved patterns, safe expressions, conventional wisdom. Your brain learns that complex, nuanced, potentially misunderstood thinking is expensive[12]. Privacy isn’t about having something to hide. Privacy is about having the cognitive space to think thoughts that don’t fit neatly into pre-approved chunks. Privacy is about maintaining the working memory capacity to load dangerous ideas, uncomfortable truths, complex analyses that can’t be compressed into a tweet[13].

So… We’re Fucked?
No. You see, there’s one thing that I’ve purposefully done my best to avoid mentioning until this point: Dunbar’s number. Dunbar’s what? Dunbar’s number.


  1. Koehler, J. O. (2000). Stasi: The untold story of the East German secret police. Westview Press. ↩︎

  2. Figes, O. (2007). The Whisperers: Private life in Stalin’s Russia. Metropolitan Books. ↩︎

  3. Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs. ↩︎

  4. Schneier, B. (2015). Data and Goliath: The hidden battles to collect your data and control your world. W. W. Norton & Company. ↩︎

  5. Wu, T. (2016). The attention merchants: The epic scramble to get inside our heads. Knopf. ↩︎

  6. Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press. ↩︎

  7. Cohen, J. E. (2019). Between truth and power: The legal constructions of informational capitalism. Oxford University Press. ↩︎

  8. Turkle, S. (2011). Alone together: Why we expect more from technology and less from each other. Basic Books. ↩︎

  9. Lyon, D. (2018). The culture of surveillance: Watching as a way of life. Polity Press. ↩︎

  10. Solove, D. J. (2011). Nothing to hide: The false tradeoff between privacy and security. Yale University Press. ↩︎

  11. Richards, N. M. (2015). Intellectual privacy: Rethinking civil liberties in the digital age. Oxford University Press. ↩︎

  12. Macnish, K. (2017). The ethics of surveillance: An introduction. Routledge. ↩︎

  13. Cohen, J. E. (2012). Configuring the networked self: Law, code, and the play of everyday practice. Yale University Press. ↩︎

In 1992, British anthropologist Robin Dunbar published research suggesting that humans can maintain approximately 150 stable social relationships[1]. Not acquaintances, not people you’ve met, but actual relationships where you know who they are, how they relate to you, what their deal is, what inside jokes you share, what conversational shortcuts work between you. This isn’t arbitrary; it’s a cognitive limit based on the processing capacity of the human neocortex[2]. Your brain literally doesn’t have enough working memory and long-term storage to maintain the metadata required for more relationships than that. Beyond 150, people become strangers wearing familiar faces[3]. You can’t remember their context, can’t predict their reactions, can’t decode their communication properly because you don’t have the shared chunk library that real relationships require[4].

And here’s the beautiful thing: for most of human history, we never had to deal with more than that. Your tribe, your village, your community… maybe 150 people, everyone knew everyone, everyone shared massive amounts of context[5]. When someone said “remember that thing last winter?” everyone knew exactly what thing. Shared experiences, shared language, shared chunks. Communication was high-bandwidth because the decoders were compatible[6]. You weren’t compressing for strangers, you were compressing for people who shared your entire context database, to borrow another technical concept…


  1. Dunbar, R. I. (1992). Neocortex size as a constraint on group size in primates. Journal of Human Evolution, 22(6), 469-493. ↩︎

  2. Dunbar, R. I. (1998). The social brain hypothesis. Evolutionary Anthropology, 6(5), 178-190. ↩︎

  3. Zhou, W. X., et al. (2005). Discrete hierarchical organization of social group sizes. Proceedings of the Royal Society B: Biological Sciences, 272(1561), 439-444. ↩︎

  4. Roberts, S. G., et al. (2009). Exploring variation in active network size: Constraints and ego characteristics. Social Networks, 31(2), 138-146. ↩︎

  5. Goncalo, J. A., et al. (2010). Creativity in groups. Research on Managing Groups and Teams, 13, 215-239. ↩︎

  6. Tomasello, M., et al. (2005). Understanding and sharing intentions: The origins of cultural cognition. Behavioral and Brain Sciences, 28(5), 675-691. ↩︎

In the 1980s and early 1990s (and then later in a slightly different context with phpBB), before the web as you know it today ate everything, there were these magical things called Bulletin Board Systems, BBSes[1]. You’d dial into them with your computer and a modem, and you’d connect to someone’s actual computer in their actual house, running in their actual basement[2]. These weren’t websites; they were small communities. Most BBSes had maybe 20-100 regular users[3]. You knew everyone. You knew their handle, their posting style, which forums they frequented, what they cared about. You built relationships[4].

And the conversations on BBSes were good, like genuinely good in a way that modern social media can’t touch[5]. Threaded discussions that maintained context, conversations that could stretch over days or weeks without losing coherence, people actually reading what came before and building on it[6]. You’d post something, someone would reply, someone else would build on that, and everyone was working from shared context because everyone was in the same small community[7]. You didn’t have to explain everything from first principles every single time because people knew you. They knew your chunk library. They knew how you thought[8].

They operated below Dunbar’s number[9]. Most active communities on a BBS were 50-150 people, right in that sweet spot where your brain can actually maintain proper relationship models with everyone[10]. You weren’t broadcasting to thousands of strangers, you were talking to Sysdrone, Bob and Naginagi, people you knew, people whose communication patterns you’d internalized[11]. Your brain wasn’t spending precious working memory slots on “who is this person?” and “can I trust them?” and “what’s their agenda?”; that was all handled automatically by your existing relationship model[12].

The asynchronous nature helped too, but not in the way modern platforms do it[13]. On a BBS, you’d post something, go to bed, come back tomorrow and there’d be thoughtful replies. Not because people were slow, but because there was no algorithmic pressure to respond instantly[14]. People could take time to think, to decompress your message properly, to formulate a response that wasn’t just a kneejerk reaction[15]. The conversations had depth because people had time to load the proper chunks, access their long-term memory, craft replies that actually engaged with what was being discussed[16].

And threading, oh my god, threading[17]. On a BBS, conversations were threaded. You could follow a discussion from beginning to end, see how ideas developed, reference earlier points without losing everyone[18].

Modern platforms have destroyed this. Reddit thinks it has threading; it doesn’t, it has a poor man’s version of it. Twitter and Facebook just vomit everything into an algorithmic feed where context is scattered[19], and even platforms that attempt to replicate it (such as modern forum software) are at best a poor man’s interpretation of the majesty of threading on a BBS. On a BBS, the entire conversation was right there, spatially organized, and your visual cortex could build a map of it. You didn’t have to hold the whole discussion in working memory because it was externalized on the screen[20].

Domain-specific boards meant you had overlap in chunk libraries[21]. If you were on a programming BBS, everyone spoke programmer. If you were on a ham radio BBS, everyone understood RF propagation and antenna theory. You weren’t explaining basic concepts to randos every single time[22]. The shared expertise meant you could communicate at a higher level of abstraction, using fewer words to convey more information, because everyone had the same mental libraries loaded[23]. This is exactly how Magnus Carlsen talks to other grandmasters about chess: they’re all operating with the same massive compressed chunks[24].

I admit there’s probably some nostalgia, and some rose-tinted glasses, but damn it, we had accidentally built communication systems that worked with human cognitive constraints instead of against them[25].

Small communities below Dunbar’s number, persistent threading that supported working memory, asynchronous communication without urgency, shared context that enabled high compression ratios, no algorithms optimizing for engagement at the expense of understanding[26].

And then something beautiful started happening: these small communities began federating[27]. FidoNet, UseNet, they created ways for BBSes to share messages between communities while maintaining their local identity[28]. You could have a conversation that spanned multiple communities, but it was still structured, still threaded, still comprehensible[29]. This was the dream, right? Scale without sacrificing the human elements that made communication work[30].

But then… then we fucked up… We created a monster by trying to make everything better. We created smartphones, we destroyed decentralization in the name of ease, and we took everything that worked about BBSes and systematically eliminated it in the name of “scale” and “ease”[31].


  1. Driscoll, K. (2014). Social media’s dial-up ancestor: The bulletin board system. IEEE Spectrum, 51(11), 54-60. ↩︎

  2. Rheingold, H. (1993). The virtual community: Homesteading on the electronic frontier. MIT Press. ↩︎

  3. Christensen, W. (1989). The origins of networked communications. BYTE Magazine, 14(10), 423-424. ↩︎

  4. Baym, N. K. (1995). The emergence of community in computer-mediated communication. CyberSociety: Computer-Mediated Communication and Community, 138-163. ↩︎

  5. Myers, D. (1987). “Anonymity is part of the magic”: Individual manipulation of computer-mediated communication contexts. Qualitative Sociology, 10(3), 251-266. ↩︎

  6. Smith, M. A., & Kollock, P. (1999). Communities in cyberspace. Psychology Press. ↩︎

  7. Wellman, B., & Gulia, M. (1999). Virtual communities as communities. Communities in Cyberspace, 167-194. ↩︎

  8. Donath, J. S. (1999). Identity and deception in the virtual community. Communities in Cyberspace, 29-59. ↩︎

  9. Dunbar, R. I. (2012). Social cognition on the Internet: testing constraints on social network size. Philosophical Transactions of the Royal Society B: Biological Sciences, 367(1599), 2192-2201. ↩︎

  10. Wellman, B., et al. (1996). Computer networks as social networks: Collaborative work, telework, and virtual community. Annual Review of Sociology, 22(1), 213-238. ↩︎

  11. Reid, E. (1991). Electropolis: Communication and community on Internet Relay Chat. University of Melbourne. ↩︎

  12. Parks, M. R., & Floyd, K. (1996). Making friends in cyberspace. Journal of Computer-Mediated Communication, 1(4), JCMC144. ↩︎

  13. Haythornthwaite, C., & Wellman, B. (1998). Work, friendship, and media use for information exchange in a networked organization. Journal of the American Society for Information Science, 49(12), 1101-1114. ↩︎

  14. Walther, J. B. (1992). Interpersonal effects in computer-mediated interaction: A relational perspective. Communication Research, 19(1), 52-90. ↩︎

  15. Spears, R., & Lea, M. (1994). Panacea or panopticon? The hidden power in computer-mediated communication. Communication Research, 21(4), 427-459. ↩︎

  16. Herring, S. C. (1999). Interactional coherence in CMC. Journal of Computer-Mediated Communication, 4(4), JCMC442. ↩︎

  17. Golder, S. A., & Donath, J. (2004). Social roles in electronic communities. Internet Research Annual, 1, 55-90. ↩︎

  18. Erickson, T. (1999). Persistent conversation: An introduction. Journal of Computer-Mediated Communication, 4(4), JCMC441. ↩︎

  19. Marwick, A. E., & boyd, d. (2011). I tweet honestly, I tweet passionately: Twitter users, context collapse, and the imagined audience. New Media & Society, 13(1), 114-133. ↩︎

  20. Sack, W. (2000). Conversation map: An interface for very large-scale conversations. Journal of Management Information Systems, 17(3), 73-92. ↩︎

  21. Burnett, G. (2000). Information exchange in virtual communities: a typology. Information Research, 5(4), 5-4. ↩︎

  22. Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge University Press. ↩︎

  23. Brown, J. S., & Duguid, P. (2000). The social life of information. Harvard Business School Press. ↩︎

  24. Chi, M. T., et al. (1982). Categorization and representation of physics problems by experts and novices. Cognitive Science, 6(2), 121-152. ↩︎

  25. Oldenburg, R. (1999). The great good place: Cafes, coffee shops, bookstores, bars, hair salons, and other hangouts at the heart of a community. Da Capo Press. ↩︎

  26. Gillespie, T. (2014). The relevance of algorithms. Media Technologies: Essays on Communication, Materiality, and Society, 167, 167-194. ↩︎

  27. O’Mahony, S., & Ferraro, F. (2007). The emergence of governance in an open source community. Academy of Management Journal, 50(5), 1079-1106. ↩︎

  28. Pfaffenberger, B. (1996). “If I want it, it’s OK”: Usenet and the (outer) limits of free speech. The Information Society, 12(4), 365-386. ↩︎

  29. Kollock, P., & Smith, M. (1996). Managing the virtual commons. Computer-Mediated Communication: Linguistic, Social, and Cross-Cultural Perspectives, 109-128. ↩︎

  30. Lessig, L. (1999). Code and other laws of cyberspace. Basic Books. ↩︎

  31. Lanier, J. (2018). Ten arguments for deleting your social media accounts right now. Henry Holt and Co. ↩︎

So… What happened? Silicon Valley and VCs, that’s what.

BBSes died because they stayed small[1]. You couldn’t scale a BBS past a few hundred users without the intimacy collapsing, without Dunbar’s number asserting itself like gravity. You couldn’t throw venture capital at a BBS and expect it to 100× overnight because the thing that made it work was its size constraint[2]. VCs looked at these thriving communities and saw… nothing. No path to a billion users. No way to extract billions in value. No exponential growth curve. BBSes were profitable in the sense that they sustained themselves, but they weren’t scalable in the Silicon Valley sense, which means they were worthless[3]. The sysdrones running them were doing it for love, for community, for the pure joy of creating spaces where people could actually talk to each other. You can’t take that public. You can’t sell that to A16Z.

So what did we get instead? Platforms explicitly designed to violate every principle that makes human communication work[4]. Facebook, Twitter, Reddit at scale, TikTok, Instagram, all of them require hundreds of millions of users to justify their valuations. And the second you scale past Dunbar’s number, shared context becomes mathematically impossible[5]. You’re not talking to people anymore, you’re broadcasting to an undifferentiated mass of strangers who don’t share your chunks, don’t understand your context, don’t have any relationship model of you loaded in their working memory. Every interaction starts from zero, so you’re shouting into the void and hoping someone decodes your message correctly. They won’t.

And here’s where it gets truly sinister: the platforms know this[6]. They’ve done the research. They have teams of cognitive scientists, behavioral psychologists, attention engineers, all working to optimize one metric: engagement. And you know what drives engagement better than anything else? Misunderstanding. Anger. Outrage[7]. When you see something that triggers your defensive cognition, that fills your working memory with “this is WRONG” + “I must correct this” + “how dare they” + “everyone needs to see how wrong this is,” you engage. You comment. You quote-tweet. You share it with your followers while adding your rebuttal. The algorithm sees that spike in engagement and says “oh, this is good content” and shows it to more people[8]. More people get angry. More engagement. More ad impressions. More money.
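
If you want to see how blunt that incentive is, here’s a caricature in code – toy weights, hypothetical fields, not any platform’s real ranking system – of what “optimize for engagement” selects for when nothing in the objective measures understanding.

```python
# A caricature, not any platform's real ranking code: when the only
# objective is predicted engagement, content that triggers outrage wins
# the feed by construction.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_comments: float   # angry replies count as "engagement" too
    predicted_shares: float
    predicted_dwell_seconds: float

def engagement_score(item: Item) -> float:
    # Toy weights; the point is that nothing here measures understanding.
    return (3.0 * item.predicted_comments
            + 2.0 * item.predicted_shares
            + 0.1 * item.predicted_dwell_seconds)

feed = [
    Item("Nuanced 2,000-word explainer", 4, 2, 300),
    Item("Deliberately enraging hot take", 120, 60, 45),
]
feed.sort(key=engagement_score, reverse=True)
print([item.title for item in feed])
# ['Deliberately enraging hot take', 'Nuanced 2,000-word explainer']
```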

The business model is fundamentally, irreparably incompatible with functional human communication[9]. Algorithmic feeds don’t show you conversations in order because chronological feeds would let you build context, would let you understand the development of ideas, would reduce confusion and therefore reduce engagement[10]. Mobile-first design forces you into those tiny screens with maximum context loss because more sessions means more opportunities to serve ads. More interruptions means more memory swaps means more cognitive load means less careful thinking means more emotional reactivity means more engagement[11]. Every single design decision is optimized to fill your working memory slots with anger and urgency rather than understanding and nuance.

We’ve accidentally created an ecosystem where only dysfunctional communication platforms can survive[12]. Platforms that maximize misunderstanding get labeled as “high engagement.” Platforms that minimize shared context get praised for “user discovery” and “network effects.” Platforms that fragment your attention into bite-sized dopamine hits get rewarded with higher valuations because fragmented attention means more ad inventory[13]. Platforms that scale past any functional community size hit their growth targets and make their investors rich. The market has spoken, and it has selected against human cognitive function. Well, the highly controlled, highly cronyistic, highly anti-libertarian market has spoken.

“Where’s your growth story? How do you get to a billion users? What’s your path to monetization at scale?” The questions don’t even make sense in a human context, because the value of a platform that actually prioritizes communication is precisely its refusal to scale.

Healthy communication is a market failure in the venture capital model[14]. The platforms that survive are those that successfully exploit human cognitive limitations for profit.

We’ve built a system where only platforms optimized against human cognition can afford to exist. The market selected for dysfunction because dysfunction is profitable and understanding is not.

And then… to make matters worse, AI happened…
Cue the hate – Danny, you’re such an idiot, don’t you see how many problems AI solves, can’t you see how much better my life is because of this, can’t you see blah blah blah. No, I cannot, and I’ll prove it.


  1. Hauben, M., & Hauben, R. (1997). Netizens: On the history and impact of Usenet and the Internet. Wiley-IEEE Computer Society Press. ↩︎

  2. Winner, L. (1980). Do artifacts have politics? Daedalus, 109(1), 121-136. ↩︎

  3. Zittrain, J. (2008). The future of the internet–and how to stop it. Yale University Press. ↩︎

  4. O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown. ↩︎

  5. Marwick, A. E., & boyd, d. (2011). I tweet honestly, I tweet passionately: Twitter users, context collapse, and the imagined audience. New Media & Society, 13(1), 114-133. ↩︎

  6. Tufekci, Z. (2018). YouTube, the great radicalizer. The New York Times, March 10, 2018. ↩︎

  7. Brady, W. J., et al. (2017). Emotion shapes the diffusion of moralized content in social networks. Proceedings of the National Academy of Sciences, 114(28), 7313-7318. ↩︎

  8. Vosoughi, S., et al. (2018). The spread of true and false news online. Science, 359(6380), 1146-1151. ↩︎

  9. Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs. ↩︎

  10. Bucher, T. (2012). Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media & Society, 14(7), 1164-1180. ↩︎

  11. Kushlev, K., et al. (2016). Checking email less frequently reduces stress. Computers in Human Behavior, 43, 220-228. ↩︎

  12. Gillespie, T. (2018). Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press. ↩︎

  13. Wu, T. (2016). The attention merchants: The epic scramble to get inside our heads. Knopf. ↩︎

  14. Ries, E. (2011). The lean startup: How today’s entrepreneurs use continuous innovation to create radically successful businesses. Crown Business. ↩︎

So here’s where it gets truly beautiful in that horrifying dystopian way that makes you laugh because the alternative is screaming into the void. After venture capital spent two decades systematically destroying functional communication platforms in favor of engagement-maximizing hellscapes, after they created a world where everyone’s angry and nobody understands each other and context has been compressed out of existence… they looked at this burning dumpster fire they’d created and thought: “You know what would make this better? Another layer of lossy compression.”[1]

Enter Large Language Models. ChatGPT, Claude, Gemini, whatever the flavor of the month is by the time you’re reading this. And don’t get me wrong, the technology is genuinely impressive from an engineering standpoint. The amount of work that Andrey Markov did in 1906, before Turing even created his famous Turing Machine… that was truly stunning…

Wait…

did I say 1906, and Andrey Markov?

WHAT?

Yeah, you heard that right. 1906. Andrey Markov, Russian mathematician, published his work on stochastic processes that would later form the mathematical foundation for everything from weather prediction to… well… ChatGPT[2]. The math underlying Large Language Models isn’t new, it’s over a century old. We just threw obscene amounts of compute and data at it until something that looked like intelligence emerged[3].

And here’s the thing that’ll make you laugh-cry: LLMs are literally just probability machines. They don’t understand anything. They’re playing an incredibly sophisticated game of “what word typically comes next?” based on patterns in their training data[4]. They don’t have working memory (but they do chunk information, in a way), they don’t build mental models. They pattern-match at a scale humans can’t comprehend, which creates this eerie illusion of understanding when really they’re just exceptionally good at statistical next-token prediction[5].
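
If “probability machine” sounds abstract, here’s the whole idea in miniature – a first-order Markov chain over a ten-word corpus. It’s a deliberately tiny stand-in for what LLMs do with billions of parameters, but the core move is the same: sample the next token from a learned distribution, no understanding required.

```python
# A toy "what word typically comes next?" machine: a first-order Markov
# chain over a tiny corpus. LLMs are vastly more sophisticated, but the
# core move is the same: pick the next token from a learned probability
# distribution, with no understanding involved.
import random
from collections import defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start: str, length: int = 8) -> str:
    word, output = start, [start]
    for _ in range(length):
        options = transitions.get(word)
        if not options:                    # dead end: no observed successor
            break
        word = random.choice(options)      # sample proportionally to frequency
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the mat the cat ate"
```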

But let’s set aside the technical details for a moment and look at what’s actually happening here. VCs spent twenty years systematically breaking human communication in pursuit of scale and engagement metrics[6]. They destroyed the small communities that worked, the threaded conversations that maintained context, the spatial interfaces that supported human working memory. They forced everyone onto mobile screens that guarantee cognitive overload. They built algorithmic feeds that maximize misunderstanding because misunderstanding drives engagement. And now, NOW, after creating this absolute disaster zone where nobody can talk to each other without screaming… they’re funding AI companies to “solve” the communication breakdown they themselves engineered[7]. It’s the perfect scam. Create the problem, sell the solution, profit from both sides. It’s like burning down all the houses and then selling fire extinguishers at a markup.

Here’s how it works in practice. You’ve got platforms that are fundamentally broken because they scaled past Dunbar’s number and destroyed shared context.

Communication has become impossible because everyone’s broadcasting to strangers while their working memory is maxed out from mobile interfaces and algorithmic chaos. So what’s the VC pitch? “We’ll use AI to moderate content at scale!” Which sounds great until you realize what that means: you’re adding another layer of lossy compression on top of an already catastrophically lossy system[8]. An LLM trained on god-knows-what is now deciding what you’re allowed to say, how you’re allowed to say it, compressing your already-compressed thoughts through yet another codec that strips out even more nuance, even more context, even more of the metadata that makes communication work[9].

Or how about this gem: “We’ll use AI to summarize conversations so you don’t have to read everything!” Translation: we built platforms where conversations are so fragmented and context-free that nobody can follow them anymore, so instead of fixing that, we’ll compress them further[10]. You know what happens when you summarize an already-lossy conversation?

You get a JPEG of a JPEG of a JPEG. Sorry, I know… too technical at this point… I meant to say a “deep fried meme”, ok?


  1. Bender, E. M., et al. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610-623. ↩︎

  2. Markov, A. A. (1906). Rasprostranenie zakona bol’shih chisel na velichiny, zavisyaschie drug ot druga. Izvestiya Fiziko-matematicheskogo obschestva pri Kazanskom universitete, 15(94), 135-156. ↩︎

  3. Brown, T., et al. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877-1901. ↩︎

  4. Bender, E. M., & Koller, A. (2020). Climbing towards NLU: On meaning, form, and understanding in the age of data. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 5185-5198. ↩︎

  5. Marcus, G., & Davis, E. (2020). GPT-3, Bloviator: OpenAI’s language generator has no idea what it’s talking about. MIT Technology Review. ↩︎

  6. Srnicek, N. (2017). Platform capitalism. John Wiley & Sons. ↩︎

  7. Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press. ↩︎

  8. Gillespie, T. (2020). Content moderation, AI, and the question of scale. Big Data & Society, 7(2), 2053951720943234. ↩︎

  9. Roberts, S. T. (2019). Behind the screen: Content moderation in the shadows of social media. Yale University Press. ↩︎

  10. Bubeck, S., et al. (2023). Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712. ↩︎

Each compression throws away more information, and eventually you’re left with a pixelated mess that bears no resemblance to the original[1]. But hey, at least you saved thirty seconds of reading time, which you can now spend doom-scrolling through more algorithmically-optimized rage bait[2].
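
If you want to watch the analogy happen, here’s a small sketch of generation loss. It assumes Pillow is installed and that “original.jpg” is a placeholder for any image you have lying around; each lossy re-encode throws away more detail, which is exactly what stacking summaries on top of an already-compressed conversation does.

```python
# Generation loss in miniature (assumes Pillow is installed; "original.jpg"
# is a placeholder filename). Each lossy re-encode discards more detail,
# the same way each summary of a summary discards more of the conversation.
from PIL import Image

img = Image.open("original.jpg").convert("RGB")
for generation in range(10):
    img.save("recompressed.jpg", "JPEG", quality=30)  # lossy re-encode
    img = Image.open("recompressed.jpg").convert("RGB")
# Ten generations later, "recompressed.jpg" is well on its way to
# deep-fried-meme territory.
```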

The recommendation algorithms are even better. “We’ll use AI to show you content you’ll love!” But remember what we established: engagement correlates with misunderstanding, with anger, with emotional reactivity that fills your working memory and prevents careful thought[3]. So when an AI optimizes for engagement, it’s explicitly optimizing for dysfunction. It’s finding the content most likely to overload your cognitive capacity, most likely to trigger defensive reactions, most likely to fill those precious few working memory slots with rage instead of understanding[4]. The algorithm isn’t broken, it’s working exactly as designed, it’s just designed to break you[5].

Then there’s the chatbots, oh god, the chatbots. “Use AI to answer customer service questions!” Sounds efficient, except what you’re really doing is automating away the last remaining synchronous, high-bandwidth communication channel where humans could actually resolve complex issues[6]. Now you’re talking to a probability machine that has zero working memory, zero context about your actual problem, and is trying to pattern-match your issue to a script[7]. It can’t understand that you’ve already tried the five things it’s about to suggest. It can’t chunk your problem the way a human who actually shares your context could. It just regurgitates the most statistically likely response based on keywords[8]. And when it inevitably fails because it’s a language model trying to solve a communication problem, you’re left even more frustrated, your working memory even more full of anger, and no closer to a solution[9].

Which is why company earnings are at an all-time high, but customer satisfaction is at an all-time low[10].

And here’s the truly sinister part: every single one of these AI “solutions” requires the problem to exist at scale[11]. Content moderation through AI only makes sense when you have millions of users and can’t possibly have humans moderating. Algorithmic recommendations only work when you’ve destroyed chronological feeds. Summarization only matters when you’ve fragmented conversations so badly that nobody can follow them. Chatbots only save money when you’ve scaled so far past functional community size that human customer service becomes prohibitively expensive[12]. AI isn’t solving the dysfunction; it’s monetizing it. It’s a perpetual motion machine: break communication for scale, sell AI tools to “manage” the breakdown, use the revenue to scale even further, which creates even more dysfunction, which requires even more AI, which generates even more revenue[13].

The VCs have built a system where they profit twice: once from creating platforms that don’t work, and again from selling tools that don’t fix them[14]. And the beautiful part? Users pay for both. You pay with your attention, your data, your cognitive capacity on the platforms. Then you pay again (or your company pays) for the AI tools that claim to make those platforms usable[15]. It’s double-dipping on dysfunction. They’ve industrialized the creation and monetization of human miscommunication, and they’re calling it innovation[16].

You know what’s really fucked up? Nobody actually wants any of this[17]. Users don’t wake up thinking “I really wish ChatGPT would mediate my conversations with my friends.” Nobody’s begging for AI to summarize threads and strip out even more context. When platforms roll out algorithmic feeds, users revolt and demand chronological timelines back[18]. When Facebook or Twitter or Instagram kills chronological feeds, there’s massive backlash, people write browser extensions to get it back, they create entire movements around “Make Twitter Chronological Again”[19]. People don’t want content moderation at scale through AI; they want smaller communities where they actually know each other and can self-moderate through social pressure and shared norms, the way BBSes did, the way every functional human community has done for millennia[20].

The actual human needs, the things we established work: communities below Dunbar’s number, chronological feeds, proper threading, spatial interfaces that support working memory, shared context, high-bandwidth communication, all of it gets ignored because it doesn’t scale and therefore doesn’t justify investment[21]. You can’t take a functional BBS public. You can’t sell a community that refuses to grow past 150 active members to a SPAC. You can’t generate unicorn valuations from people actually understanding each other. So instead we get these zombie products that nobody wants, solving problems nobody has, funded by investors who are betting against functional human communication because dysfunction is where the money is[22].

Well… fuck you too… now what?


  1. Roemmele, M., & Gordon, A. S. (2018). Automated assistance for creative writing with an RNN language model. Proceedings of the 23rd International Conference on Intelligent User Interfaces Companion, 1-2. ↩︎

  2. Lorenz-Spreen, P., et al. (2019). Accelerating dynamics of collective attention. Nature Communications, 10(1), 1-9. ↩︎

  3. Brady, W. J., et al. (2020). An ideological asymmetry in the diffusion of moralized content on social media among political leaders. Journal of Experimental Psychology: General, 149(10), 2802-2813. ↩︎

  4. Bail, C. A., et al. (2018). Exposure to opposing views on social media can increase political polarization. Proceedings of the National Academy of Sciences, 115(37), 9216-9221. ↩︎

  5. Tufekci, Z. (2015). Algorithmic harms beyond Facebook and Google: Emergent challenges of computational agency. Colorado Technology Law Journal, 13, 203-218. ↩︎

  6. Luger, E., & Sellen, A. (2016). “Like having a really bad PA”: The gulf between user expectation and experience of conversational agents. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 5286-5297. ↩︎

  7. Følstad, A., & Brandtzæg, P. B. (2017). Chatbots and the new world of HCI. Interactions, 24(4), 38-42. ↩︎

  8. Corti, K., & Gillespie, A. (2016). Co-constructing intersubjectivity with artificial conversational agents: People are more likely to initiate repairs of misunderstandings with agents represented as human. Computers in Human Behavior, 58, 431-442. ↩︎

  9. Gnewuch, U., et al. (2017). Faster is not always better: Understanding the effect of dynamic response delays in human-chatbot interaction. Proceedings of the 25th European Conference on Information Systems, 1-16. ↩︎

  10. Fornell, C., et al. (2023). The American Customer Satisfaction Index at 30 years: A retrospective review and emerging opportunities. Journal of the Academy of Marketing Science, 51(5), 1065-1089. ↩︎

  11. Sadowski, J. (2020). Too smart: How digital capitalism is extracting data, controlling our lives, and taking over the world. MIT Press. ↩︎

  12. Gray, M. L., & Suri, S. (2019). Ghost work: How to stop Silicon Valley from building a new global underclass. Eamon Dolan Books. ↩︎

  13. McQuillan, D. (2022). Resisting AI: An anti-fascist approach to artificial intelligence. Bristol University Press. ↩︎

  14. Morozov, E. (2013). To save everything, click here: The folly of technological solutionism. PublicAffairs. ↩︎

  15. Zuboff, S. (2015). Big other: surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30(1), 75-89. ↩︎

  16. Winner, L. (1993). Upon opening the black box and finding it empty: Social constructivism and the philosophy of technology. Science, Technology, & Human Values, 18(3), 362-378. ↩︎

  17. Turkle, S. (2015). Reclaiming conversation: The power of talk in a digital age. Penguin. ↩︎

  18. Bucher, T. (2017). The algorithmic imaginary: Exploring the ordinary affects of Facebook algorithms. Information, Communication & Society, 20(1), 30-44. ↩︎

  19. Eslami, M., et al. (2015). “I always assumed that I wasn’t really that close to [her]”: Reasoning about invisible algorithms in news feeds. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 153-162. ↩︎

  20. Matias, J. N. (2019). The civic labor of volunteer moderators online. Social Media + Society, 5(2), 2056305119836778. ↩︎

  21. Scholz, T. (2016). Platform cooperativism: Challenging the corporate sharing economy. Rosa Luxemburg Stiftung. ↩︎

  22. Tarnoff, B. (2022). Internet for the people: The fight for our digital future. Verso Books. ↩︎

Now we talk about Eve. Yes, sorry, this was an ad. You are reading an ad. A well-researched ad, but an ad nonetheless :slight_smile:

Here’s the thing that’ll blow your mind after everything we’ve just covered – cognitive constraints, information theory, the totalitarian parallels, the economic incentives that broke everything, the AI grift masquerading as innovation – there’s actually a solution. To all of this. Every single problem I just spent 10,000 words describing.

And it’s not complicated. It’s not revolutionary. It’s not some brilliant Silicon Valley disruption that requires billions in funding and a team of PhDs. The solution is to go back to what actually worked[1]. Remember those BBSes? Those small communities where people actually knew each other? Those threaded conversations that maintained context? Those spatial interfaces that didn’t overload your working memory? That entire paradigm we abandoned because it wasn’t “scalable”? Yeah… What if we didn’t scale?

This is where Eve comes in. Private, end-to-end encrypted, invite-only, closed community networks. Not a platform trying to be everything to everyone. Not a growth machine optimized for engagement metrics. Just small, bounded communities where people actually share context, where Dunbar’s number is respected as a design constraint rather than a limitation to overcome.

What if – and bear with me here because this is going to sound ABSOLUTELY UNHINGED – what if we built a platform that actively refuses to do all the things that make money in silicon valley? What if we said “fuck your growth metrics” and “fuck your engagement optimization” and “fuck your path to a billion users” and instead built something that just lets people talk to each other like humans? What if we made communities that are private, properly end-to-end encrypted so the platform itself can’t even read your messages (let alone sell them to advertisers or train AI on them), invite-only so you actually know who you’re talking to, and – here’s the revolutionary part – small. Like, deliberately, architecturally, fundamentally small. Communities that stop accepting new members when they hit Dunbar’s number because we’re not trying to scale, we’re trying to work.

I know… mind blown… but… Enter Eve.

It’s basically BBSes but with modern infrastructure. Proper threading that maintains context. Interfaces designed for actual human working memory instead of engagement metrics. No algorithmic feeds because why would you want an algorithm deciding what conversations you see? No AI moderation because you know everyone in your community and can just… talk to them like adults. No growth targets because the whole point is staying small enough that shared context is possible. No surveillance because everything’s E2E encrypted and we’re not in the data harvesting business. It respects chunking limits by keeping conversations focused. It respects information density by actually working on desktop screens with proper spatial organization instead of forcing you to peer through that phone-sized keyhole. It respects the fundamental reality that human communication requires shared context and you can’t have shared context with ten million strangers.
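
To be clear about what “Dunbar’s number as a design constraint” means in practice, here’s a deliberately tiny sketch – hypothetical, not Eve’s actual code – where the cap is part of the community’s data model rather than a growth knob someone can quietly raise later.

```python
# Hypothetical sketch, not Eve's actual implementation: a community that
# refuses new members once it reaches Dunbar's number. The cap lives in
# the data model, not in a policy document.
DUNBAR_LIMIT = 150

class CommunityFullError(Exception):
    """Raised when an invite would push the community past its cap."""

class Community:
    def __init__(self, name: str, limit: int = DUNBAR_LIMIT):
        self.name = name
        self.limit = limit
        self.members: set[str] = set()

    def invite(self, member_id: str) -> None:
        if len(self.members) >= self.limit:
            # Not an error to route around: the refusal is the feature.
            raise CommunityFullError(
                f"{self.name} is full ({self.limit} members); start a new community."
            )
        self.members.add(member_id)
```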

Except wait… we can do more than that. We can build on top of BBSes; we have the technology to build additional things, not just forums, but more… Every community should have tools that work better for them, so why not let them build those tools themselves?

And here’s the beautiful part: it’s economically viable specifically because it doesn’t scale. We’re not paying for data centers to handle billions of users. We don’t need them; Eve is peer-to-peer. We’re not burning cash to acquire users we’ll monetize later. We’re not building AI infrastructure to moderate content at scale because there is no scale. Small communities are cheap to run, like hilariously cheap compared to what Facebook spends per user. You know what else small communities are? Sustainable. You’re not trying to reach a $50 billion valuation. You can build them outside the VC trap entirely because you’re not promising exponential growth. You can make decisions based on what makes communication work instead of what makes investors happy.

I know what you’re thinking: “Danny, what happens when Eve gets successful? What happens when you get VC offers? What stops this from becoming the next Twitter?”

Here’s the thing: Eve is architecturally designed to resist that. Communities are owned by the users. I don’t know what you do in your communities, and I don’t want to know. You own your data. We can’t sell what we don’t have access to. And if I ever sold out? You’d just fork it, because the code is open source.

The whole point is the architecture prevents the enshittification. We’re not asking you to trust us forever. We’re asking you to verify the math.
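
“Verify the math” isn’t a figure of speech. Here’s a minimal sketch of the end-to-end principle using PyNaCl as a stand-in library for illustration (this is not Eve’s actual protocol or code): the message is encrypted on the sender’s machine to the recipient’s public key, so whatever relays or stores it only ever sees ciphertext.

```python
# Minimal sketch of end-to-end encryption using PyNaCl (assumed installed).
# An illustration of the principle, not Eve's actual protocol.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts on her own machine, to Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at the usual board at 9")

# Whatever relays or stores this sees only ciphertext.
print(ciphertext.hex()[:40], "...")

# Only Bob, holding his private key, can decrypt it.
receiving_box = Box(bob_key, alice_key.public_key)
print(receiving_box.decrypt(ciphertext))  # b'meet at the usual board at 9'
```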


  1. Norman, D. A. (2013). The design of everyday things: Revised and expanded edition. Basic books. ↩︎


Great post Danny! I completely agree with most of this. But I also believe in seeking balance. There’s rarely a correct absolutist approach to anything. However, it is definitely clear that the harms of mobile are leading us down the path your essay and supporting research show.

With that said, here’s a Key & Peele clip that summarizes your post perfectly.

I might add more specific comments in future replies to highlight the value of algorithmic discoverability, AI, and mobile.

Really awesome article. Can we publish this for the Society of Problem Solvers?