CJ Eller

Classical Guitar by Training, Cloud Engineer by Accident

It feels good to type words for this blog again. Not doing so for months left me uneasy in a way I didn't expect. I'd like to interrogate that feeling.

There's a particular concept from psychologist Karl Groos that could help here. David Graeber describes it well in Bullshit Jobs:

[I]nfants express extraordinary happiness when they first figure out they can cause predictable effects in the world, pretty much regardless of what that effect is or whether it could be construed as having any benefit to them [...] Groos coined the phrase “the pleasure at being the cause,” suggesting that it is the basis for play, which he saw as the exercise of powers simply for the sake of exercising them.

Groos' findings — which have since been confirmed by a century of experimental evidence — suggested maybe there was something simpler behind what Nietzsche called the “will to power.” Children come to understand that they exist, that “they” are the thing which just caused something to happen — the proof of which is the fact that they can make it happen again. Crucially, too, this realization is, from the very beginning, marked with a species of delight that remains the fundamental background of all subsequent human experience.

The pleasure at being the cause might start with knocking over a milk bottle, but as we age, that species of delight is drawn out from other activities. Could blogging be one of those things? You write something and then it appears on the internet for all to read. You are the thing which just caused that to happen. You can make it happen again if you'd like. Or not. The choice is yours. That choice helps you come to realize how you can “exist” on the web. The pleasure at being the cause of a blog.

Groos also found that bad things happen when you take away one's ability to derive pleasure from being the cause of something. Remove the milk bottle the infant was throwing around and there are bound to be tears and tantrums. Could this be why I felt blue these past blogless months? I had removed something that gave me agency in the world by the sheer fact that I could write sentences and have them show up for others to see. They didn't exist before, but they do now. The milk bottle wasn't on the floor before, but it is now. How wonderful.

In his book 1493, Charles C. Mann observes how pre-colonial Andean peoples grew potatoes from seed and let cross-pollination create a wide variety:

Andean peoples cultivated different varieties at different altitude ranges. Most people in a village planted a few basic types, but everyone also planted others to have a variety of tastes, each in its little irregular patch of wacho, wild potatoes at the margins. The result was chaotic diversity. Potatoes in one village at one altitude could look wildly unlike those a few miles away in another village at another altitude.

Using a seed potato (tuber), however, produces different results:

When farmers plant pieces of tuber, rather than seeds, the resultant sprouts are clones; in developed countries, entire landscapes are covered with potatoes that are almost genetically identical. By contrast, a Peruvian-American research team found that families in a mountain valley in central Peru grew an average of 10.6 traditional varieties—landraces, as they are called, each with its own name [...] The International Potato Center in Peru has sampled and preserved more than 3,700. The range of potatoes in a single Andean field, [Karl] Zimmerer observed, “exceeds the diversity of nine-tenths of the potato crop of the entire United States.”

The way potatoes are planted makes me think about the web. To speak about the web is to speak of something that is, for the most part, structurally homogeneous — millions of servers handling similar protocols down the stack. That homogeneity is important. If millions of computers follow the same protocols, those millions of computers can communicate with each other. Things become possible when people communicate with each other. By way of metaphor, the web is mostly made from the same tuber, clones of the same potatoes.

But I use “mostly” for a reason. On the web there is still a chaotic diversity wrought from the seeds of disparate protocols and utilities. Not just HTTP/S but Gemini, Gopher, Finger, ActivityPub, and countless others. There are wild patches of the web, wild potatoes at the margins, each with their unique taste.
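Part of the charm of these margins is how small the seeds are. Gemini, for instance, fits its entire request format into a single line: the URL you want, terminated by CRLF, sent over TLS to port 1965, answered by a one-line header. As a rough sketch (per my reading of the Gemini specification; the function names are my own):

```python
def build_request(host: str, path: str = "/") -> bytes:
    # A complete Gemini request is just the absolute URL plus CRLF,
    # sent over a TLS connection to port 1965.
    request = (f"gemini://{host}{path}" + "\r\n").encode("utf-8")
    if len(request) > 1024:
        # The spec caps requests at 1024 bytes.
        raise ValueError("Gemini requests are capped at 1024 bytes")
    return request

def parse_header(line: bytes) -> tuple[int, str]:
    # The response header is "<STATUS> <META>\r\n" — e.g. "20 text/gemini"
    # means success, with the body's MIME type as the meta field.
    status, _, meta = line.rstrip(b"\r\n").partition(b" ")
    return int(status), meta.decode("utf-8")
```

A whole protocol in a dozen lines of parsing — a seed you could plant in an afternoon, which is exactly why these wild patches keep sprouting.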

This variety isn't merely ornamental either; it's crucial for the survival of an open and decentralized web. Homogenization creates dependencies, and dependencies can spell disaster if something breaks in the chain. One reason the Great Famine of the 19th century was so disastrous was that the potatoes were genetically identical, planted via tubers. It makes one wonder how homogenization can and has caused problems for the web. How can that be remediated? More seeds wouldn't hurt.

I'm reminded of this blog post from Darius Kazemi that urges for such a cross-pollination of protocols:

[S]oftware developers should not be afraid to mix, match, and layer protocols. There is no rule that says you can't do this, yet I've noticed people picking one protocol and sticking to it out of some kind of loyalty. When really, as a developer, your loyalty should be to your values of an open, decentralized internet, whatever those are.

Loyalty to your values of an open, decentralized internet, whatever those are. What are they? Ecologist Murray Bookchin has a concept called “the ecology of freedom” that David Graeber & David Wengrow explore in The Dawn of Everything. Their description feels relevant here:

The ecology of freedom describes the proclivity of human societies to move (freely) in and out of farming; to farm without fully becoming farmers; raise crops and animals without surrendering too much of one's existence to the logistical rigours of agriculture; and retain a food web sufficiently broad as to prevent cultivation from becoming a matter of life and death. It is just this sort of ecological flexibility that tends to be excluded from conventional narratives of world history, which present the planting of a single seed as a point of no return.

To move in and out but not be tied down, to embody agency in one's creation and adoption of social structures. Perhaps an ecology of freedom is crucial for an open, decentralized internet, a place where the seeds of chaotic diversity can be cultivated.

Why do we stop at a digital garden as a metaphor? What could it mean for a garden to enter digital experience beyond being a way to describe organizing Turing machines in a slightly different way than we normally do?

Let's enter such an inquiry from another place. Composer Pauline Oliveros, in her lecture “Quantum Improvisation: The Cybernetic Presence” (source), talked about whether machine intelligence could be trained to perform improvised music:

Music and especially improvised music is not a game of chess – Improvisation especially free improvisation could definitely represent another challenge to machine intelligence. It won't be the silicon linearity of intensive calculation that makes improvisation wonderful. It is the non linear carbon chaos, the unpredictable turns of chance permutation, the meatiness, the warmth, the simple, profound, humanity of beings that brings presence and wonder to music.

Freely improvised music is an open system, not closed to a particular set of rules like chess. Gardens can be organized linearly with rows of plants, but beyond our own technique something happens. The seeds take on a relationship with weather and soil and plants and atmosphere and nutrients and animals and microorganisms and God knows what else. A garden is an improvisation of ecology that harkens to the non linear carbon chaos of Oliveros.

So then questions arise for me. Digital gardens in their current idiom are metaphoric window dressing for incomplete thoughts and wiki entries. Nothing wrong with that. The growth of a digital garden comes from the author(s) alone — they add to one entry or delete parts from another. It is a closed system because the computers it is hosted on, Turing machines, are closed systems which take input from a single user (or several, depending on the context). What would happen if a digital garden had more of a relationship with the world, like a garden? What could that look like?

James Bridle's fantastic book Ways of Being explores this question in great detail and profundity. One example Bridle mentions is a random number generator called ERNIE (Electronic Random Number Indicator Equipment) built in the late 1950s by the UK government for a lottery. Instead of being a closed system, it had a relationship with the outside world — well, with neon tubes that also had a relationship with the outside world:

ERNIE was one of the first machines to be able to produce true random numbers, but in order to do so it had to reach outside itself. Rather than simply doing math, it was connected to a series of neon tubes—gas-filled glass rods, similar to those used for neon lighting. The flow of the gas in the tubes was subject to all kinds of interference outside the machine’s control: passing radio waves, atmospheric conditions, fluctuations in the electrical power grid, and even particles from outer space. By measuring the noise in the tubes—the change in electrical flux within the neon gas, caused by this interference—ERNIE could produce numbers that were truly random: mathematically verifiable, but completely unpredictable.

ERNIE is an early example of a machine taking on non-linear carbon chaos by being in relationship with the outside world. Bridle emphasizes how crucial this is for giving computers the ability to operate with true randomness:

Given the way we have constructed them, computers are not capable, operating alone, of true randomness. To exercise this crucial faculty, they must be connected to such diverse sources of uncertainty as fluctuations in the atmosphere, decaying minerals, shifting globules of heated wax and the quantum dance of the universe itself.
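You can feel this distinction from inside a program. A seeded pseudorandom generator is “simply doing math” — the same seed yields the same sequence forever — while `os.urandom` reaches outside the process to the operating system's entropy pool, which is seeded by environmental noise (device and interrupt timing, and on some machines dedicated hardware noise sources — a distant cousin of ERNIE's neon tubes). A minimal sketch:

```python
import os
import random

# A seeded PRNG is deterministic arithmetic: same seed, same sequence.
prng_a = random.Random(42)
prng_b = random.Random(42)
assert [prng_a.random() for _ in range(5)] == [prng_b.random() for _ in range(5)]

# os.urandom draws on the kernel's entropy pool, which mixes in noise
# from outside the program's control. Two 16-byte draws won't repeat
# (except with vanishingly small probability).
draw_one = os.urandom(16)
draw_two = os.urandom(16)
assert draw_one != draw_two
```

The PRNG never has to touch the world; the entropy pool cannot help but touch it.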

This is like a seed entering the soil of a garden — there are so many diverse sources of uncertainty that make it grow (or not grow) the way it does. What if a digital garden could be the same way? You could write something one day and find that it has grown into something different the next, influenced by an AI that is in turn influenced by solar panels, water saturation, and the pH levels of the soil in a garden nearby.

Such a possibility feels like science fiction, but I think it's a future worth striving for. Why stop at metaphor? Bridle makes the case so well that I will end with it:

In order to be full and useful participants in the world, computers need to have relations with it. They need to touch and be in touch with the world. This stands in stark opposition to the way we build most of them today: systems of inscrutable, inhuman logic, comprehensible only partly by a narrow cadre of highly trained, and highly privileged engineers, and based on systems of extraction, manufacture, and use that damage the planet in multiple ways, from large-scale mineral mining, through the heat and greenhouse gases produced by server farms, to vast fields of electronic waste.

But the use of randomness, both the processes it invokes, and the radical equality it makes possible, suggests it doesn’t have to be this way. We can reimagine our technologies—and our political systems—in ways which are less extractive, more generative, and ultimately more just.

What happens when a computer hibernates? Does it hibernate like an animal? Open your computer and you are back where you left it. Is there any difference? Perhaps a little less battery if it's a laptop, but not much else. Hibernation only in name. Bummer.

What if a computer could actually hibernate, turning itself off for a long period of time only to turn back on some time later? What would that look like? What would happen after it turned back on? What if it were to adopt seasonality?

Such hypothetical questions whirl in my head after reading a particular stretch of David Graeber & David Wengrow's The Dawn of Everything. Therein they describe how seasonality in human social and political life took many forms in prehistory as well as among particular indigenous peoples. One example they mention is 20th century anthropologist Franz Boas' research on the Kwakiutl of Canada's Northwest Coast:

Here, Boas discovered, it was winter – not summer – that was the time when society crystallized into its most hierarchical forms, and spectacularly so [...] Yet these aristocratic courts broke apart for the summer work of the fishing season, reverting to smaller clan formations – still ranked, but with entirely different and much less formal structures. In this case, people actually adopted different names in summer and winter – literally becoming someone else, depending on the time of year.

Sure, a holiday can nudge me to be more generous or gregarious, but is that the same as shifting an entire social system and your identity? Probably not. Even if we are “stuck”, as Graeber & Wengrow put it, unable to adopt such seasonal shifting due to our 21st century day-to-day, perhaps we can find something on the web. But does seasonality exist even there? I adopt an online persona, usually on a platform, usually on many. Does that platform change its structure, its social system, seasonally? New features or designs? Superficial window dressing. What I mean is something like this: a forum is run by one admin during the summer, but delegates admin control to five members of the forum chosen by lot during the winter. It could already exist out there, but I wonder why there isn't more of this experimentation. Even on a personal level, what would it look like to take on a new identity depending on the season? Perhaps one uses social media during the summer but only email during the winter? I'm not sure.
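The forum thought experiment is small enough to sketch. Everything below is invented for illustration — the member names, the founder, and the crude season boundary — but it shows how little code a seasonal constitution would actually take:

```python
import random
from datetime import date

# Hypothetical roster — any forum's member list would do.
MEMBERS = ["ada", "bea", "cal", "dev", "eli", "fay", "gus", "ivy"]
FOUNDER = "ada"

def is_winter(today: date) -> bool:
    # Crude northern-hemisphere seasons: November through March.
    return today.month in (11, 12, 1, 2, 3)

def admins_for(today: date, rng: random.Random) -> list[str]:
    if is_winter(today):
        # Winter: dissolve the hierarchy — five members chosen by lot.
        return rng.sample(MEMBERS, 5)
    # Summer: the single founding admin runs the forum.
    return [FOUNDER]
```

The interesting part isn't the code, of course — it's that the lottery runs twice a year, making the forum one of Graeber & Wengrow's laboratories of social possibility on a timer.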

But why even take on such an effort? For Graeber & Wengrow, seasonality in identity and social life creates what they call “laboratories of social possibility” where we can “step outside the boundaries of any given structure and reflect; to both make and unmake the political worlds we live in.” That strikes me as being alive, of changing with the seasons as they change the world around us, of being more with the world, of not being stuck.

Eric Stein's Compost Epistemology is a lovely collection of passages that interrogates the idea of the digital garbage heap that I've expressed previously here:

Do we risk creating such a nightmare for ourselves with a perpetually expanding web of hyperlinked notes? How do we prevent such a garbage heap from accumulating in the first place?

In particular, there is a passage from Donna Haraway's Staying with the Trouble: Making Kin in the Chthulucene that struck me:

The Communities of Compost worked and played hard to understand how to inherit the layers upon layers of living and dying that infuse every place and every corridor. Unlike inhabitants in many other utopian movements, stories, or literatures in the history of the earth, the Children of Compost knew they could not deceive themselves that they could start from scratch. Precisely the opposite insight moved them; they asked and responded to the question of how to live in the ruins that were still inhabited, with ghosts and with the living too.

How do we prevent a digital garbage heap from accumulating? From Haraway's perspective (as I gather it), this question is bound up in utopian thinking, as if we could start from scratch in some digital (or digital-free) Eden. It cannot be done. Haraway instead brings out questions of inheritance rather than prevention, questions that ask how we live within these ruins rather than how we get rid of them. Such questions strike me as more fruitful than the previous framing. (thanks for helping me “embrace the heap” Eric)

I am moved by this image — ruins inhabited with ghosts and the living. Sounds like the web.

The memory of learning to type on a keyboard is fuzzy.

I don't even recall how it feels not to use a keyboard. It shows my age more than anything else, but I think most computer users hit an inflection point where typing becomes second nature. All that has to be done is think of what to say to a friend, what to program in a script, or what to announce to a public. Our hands know where to go.

Does that come at a cost? The artist Lynda Barry once said that in the digital age, don't forget to use your digits. There is no doubt you use your hands, but can you lose awareness of them?

This is something I think about all the time when playing guitar. I've been playing for a while, and sometimes when you play for a while it gets to the point where you don't think as much about your hands (and that doesn't make me any better of a player, rest assured). Whenever I learn a new piece on guitar, however, I am reintroduced to my hands all over again. A piece of sheet music needs to be translated not only into musical notes but into my hands as well. Which right-hand finger will I use to pluck this note? Which left-hand finger will I use to fret it? Those questions reintroduce me to what my hands are doing. That awareness can fade as I memorize a piece, but I try to make it a habit to review technique for pieces. Are my hands subject to unnecessary strain? If so, what can I do to alleviate the tension so I can play better? I feel more aware of what my hands (and body, for that matter) are doing to use this musical technology.

I've been trying to think along these lines to make myself more aware of my hands when using a keyboard. One approach could be procuring a new keyboard, but that's like buying a new guitar to do what I already do by learning new pieces. I wonder how software, like a piece of music, can help develop such mindfulness — digital tools reminding you of your digits. One such tool I've been working with lately has been Vim (NeoVim, to be precise). It requires me to rethink how to edit text (especially without using the damn trackpad so much). In that process, hand awareness takes center stage — like learning how to type all over again.

Why such an insistence on being aware of your hands when you type? That might be the real question to explore. It's as if there's a part of me that doesn't want to accept that the computer, to take a phrase from Marshall McLuhan, is a medium which has different sense ratios at play than a musical instrument like a guitar. Why does the difference in the sense of touch have to make something better or worse? Perhaps it doesn't, but it feels as though having less touch would change the way of perceiving & understanding the world...in a way I wouldn't prefer?

Still something I need to think through more.

I don't know about you, but I've bounced off of personal knowledge management tools like crazy. Wikis? Digital gardens? Zettelkasten systems? Nothing sticks.

A recent piece brought this to the forefront for me: “Personal Knowledge Management is Bullshit” from Justin Murphy (source). There's a passage I'd like to focus on:

The most important thing about writing is discovering novel and non-trivial truths, and determining which of your truths is most important—then imposing order, hierarchy, and linearity—through judgment, decisiveness, and will. To produce meaningful work, and then forget about it, so you can move on to another and hopefully greater act of linear will.

A perpetually expanding web of hyperlinked notes is not impressive but oppressive. It’s not useful, and it’s not illuminating.

The idea of the ever expanding knowledge graph as oppressive flies in the face of the current climate that holds the bi-directional link as the lingua franca for many popular software tools & systems. It reminds me of a short story from Jorge Luis Borges called “Funes, His Memory.” In the story, a man acquires near superhuman memory after a horse riding accident. Borges describes this memory in lush detail. Here are a couple of examples (translation by Andrew Hurley):

With one quick look, you and I perceive three wineglasses on a table; Funes perceived every grape that had been pressed into the wine and all the stalks and tendrils of its vineyard. He knew the forms of the clouds in the southern sky on the morning of April 30, 1882, and he could compare them in his memory with the veins in the marbled binding of a book he had seen only once, or with the feathers of spray lifted by an oar on the Río Negro on the eve of the Battle of Quebracho.


Not only was it difficult for him to see that the generic symbol “dog” took in all the dissimilar individuals of all shapes and sizes, it irritated him that the “dog” of three-fourteen in the afternoon, seen in profile, should be indicated by the same noun as the dog of three-fifteen, seen frontally.

But towards the end of the story, the narrator is suspicious of Funes' prodigious mnemonics:

He had effortlessly learned English, French, Portuguese, Latin. I suspect, nevertheless, that he was not very good at thinking. To think is to ignore (or forget) differences, to generalize, to abstract. In the teeming world of Ireneo Funes there was nothing but particulars [...]

Funes' memory is a never ending accumulation of specifics, never allowing for those specifics to be removed or for generalities to take their place. One new thing after another after another after another. Funes even goes so far as to tell the narrator, “My memory, sir, is like a garbage heap.”

Without the ability to generalize and abstract away his memories, Funes is left with a garbage heap that keeps piling up. “Funes, His Memory” is a story not of a gifted individual but a cursed one, trapped in an endless web of memories with no way out. A nightmare.

Do we risk creating such a nightmare for ourselves with a perpetually expanding web of hyperlinked notes? How do we prevent such a garbage heap from accumulating in the first place?

Is the internet more terrifying than we imagine?

There's a bit from Jorge Luis Borges' short story “The Book of Sand” that makes me wonder this. The plot revolves around a book with no beginning and no end. Today it's interpreted as a premonition of the endlessness of the internet. Cool, but here's the thing: the narrator looks upon the book's infinite pages with sheer horror. Here's the bit in question:

Summer was drawing to a close, and I realized that the book was monstrous. It was cold consolation to think that I, who looked upon it with my ten flesh-and-bone fingers, was no less monstrous than the book. I felt it was a nightmare thing, an obscene thing, and that it defiled and corrupted reality.

I considered fire, but I feared that the burning of an infinite book might be similarly infinite, and suffocate the planet in smoke.

The book so overwhelms the narrator that he drops it off at a nearby library, even going as far as to avoid walking by the same library ever again.

This reaction is strange to think of as someone who accesses their own “Book of Sand” on a daily basis. However, I wonder if it's my reaction that is strange. Perhaps I am denying the metaphysical horror of the internet, of being lost inside a sphere whose center is everywhere and circumference is nowhere, of being lost inside a nightmare thing that defiles and corrupts reality.

I've been playing Elden Ring for about 70 hours now. One fascinating thing has stood out so far. It doesn't have an expository narrative like many other games. You're launched into a world with locales & characters that aren't outright explained. But the more you play, the more a rich world comes to life.

This form of narrative has been unlike any other game I've experienced. It's been jarring for some and revelatory to others. I happen to fall in the latter category, but I've been ignorant of the reason. Why do I like this? Why is a hands-off narrative so compelling?

While playing Elden Ring I read Anil Seth's Being You: A New Science of Consciousness. There's a part in the book that brings up 20th century art historian Ernst Gombrich's concept of the beholder's share (though first brought up by fellow art historian Alois Riegl). Seth describes it as “that part of perceptual experience that is contributed by the perceiver and which is not to be found in the artwork – or the world – itself.” In other words, it's the observer who creatively “completes” the artwork. Impressionist paintings are an example of the beholder's share in action. Seth explains that,

Impressionist landscapes attempt to remove the artist from the act of painting [...] To do this, the artist must develop and deploy a sophisticated understanding of how the subjective, phenomenological aspects of vision come about. Each work can be understood as an exercise in reverse engineering the human visual system, from sensory input all the way to a coherent subjective experience. The paintings become experiments into predictive perception and into the nature of the conscious experiences that these processes give rise to.

To quote Gombrich: “When we say the blots and brushstrokes of the Impressionist canvas 'suddenly comes to life,' we mean we have been led to project a landscape into these dabs of pigment.”

Could Elden Ring be seen as an example of the beholder's share? An attempt to remove the developer from the act of direct storytelling by giving the player the raw materials to weave together a story through her own experience of the game? In a way I think so, and it could be a part of why it resonates so much with people. Giving narrative power back to the beholder.

The project of having my personal site show specific content depending on the sun's location? A bit of a wash. Technical difficulties make it overkill for any longevity. Static HTML it is. The sun's location was an oblique way of showing elapsed time, perhaps more symbolic than anything else.

The background color of my site is as close as I can get to the color of the top of my guitar. The German spruce has changed in color, warmer in comparison to when I first played it in 2017. Time brings out more color in the wood as well as the sound. I had to go in knowing that it'd take a couple of years before the guitar's true sound came out. This isn't to say that the guitar initially sounded horrible; it just needs time to become drier and more resonant.

Just give it time. Imagine saying the same about software! As much as I try to compare software to the analog organic (digital gardens, etc.), there seems to be a disconnect when it comes to this quality of emergent properties. Well, maybe such emergent properties in the digital come in the guise of bugs, glitches, errors. Let me rephrase: positive, creative emergent properties. When the wood ages, it affects the look & sound of the guitar for the better. When software ages, what is changed for the better? Am I not seeing it? Are there examples of software doing this?

Imagine a world in which letting a software tool age augments its utility like time does to the tone of a guitar.