CJ Eller

Community Manager @ Write.as — Classical guitar by training, Software by accident

One thing that I love about PaaS services is how quickly you can create a proof of concept. Recently I found out about Canarytokens, a free & easy honeypot tool. In particular, there's a type of token that triggers an alert when your website is cloned. How could I test this out without much hassle? PaaS.

The test website will be made with Glitch. All I have to do is spin up a static site from one of their templates.

In a matter of seconds I have a site I can put the token in. Next I create a Canarytoken using the auto-generated URL Glitch gave my site.

Once the token is created, I receive some JavaScript to add to the site. It's just a matter of pasting the code into my site's JavaScript file using Glitch's in-browser editor. (Roughly speaking, the snippet checks which domain the page is being served from and phones home to Canarytokens when it isn't the one I registered.)

Now that my site is all set up, I'll play the role of the attacker. I want to clone the site to trick people into clicking stuff they shouldn't. Anyways, I'll go into my terminal and pull the site down recursively using wget.

wget -r https://geode-living-jackrabbit.glitch.me/

With the site in a local directory, I want to deploy it as a fake website to activate the token. There's an easy way to do this. Netlify has a wonderful service called Netlify Drop. All you have to do is drag in the folder with your site's HTML, CSS, & JavaScript and Netlify deploys the site for you. (I'm surprised I only heard about it now — it's been around since 2018)

So now I just drag the folder of the Canarytoken site into Netlify Drop.

As soon as the site is live I receive an email alert that the Canarytoken was triggered!

What's interesting here is that the source IP address belongs to an AWS EC2 instance. This gives us a glimpse into both Netlify and the attacker. Netlify probably uses an automated process to put my site into an available EC2 instance running some version of Linux. All of which happens in the background as I drag and drop a folder into Netlify Drop.

If the Cloud wasn't an abstraction already, PaaS is an abstraction of an abstraction. They not only hide the computer from you but the cloud as well. This can be a good thing. Platforms like Glitch and Netlify manage cloud infrastructure so you can focus on building your app rather than configuring the EC2 instance that hosts your app. While this is already common knowledge, I find it fascinating that this knowledge arose from using Canarytokens.

I also can't help but think of how PaaS makes it easy for phishing sites to be deployed. PaaS also gives an attacker yet another means to obfuscate their whereabouts. We only know that the attacker deployed her site using an EC2 instance from an AWS data center in Northern Virginia. That doesn't mean she's there. It also doesn't mean she used AWS directly — enter PaaS middlemen like Glitch and Netlify.

And yet these platforms advise against using their services for such purposes — it's laid out in the terms of service. Does that stop people though? I'm curious how Glitch and Netlify try to curb these phishing sites. For one, sites made on each platform without an account expire after a period of time. That's just one control that's simple but effective against drive-by phishing attempts. What about phishing from those with accounts? What then?

These questions come from the democratizing power of PaaS platforms like Glitch and Netlify. What does security look like when anyone can host a web app by dragging and dropping a folder?

I've recently stumbled across squid. No, not the animal, thank goodness. The other kind of squid.

Squid is a server I can run in the background on a machine. Like a web server? Sort of. Squid acts as a proxy to my web browser. This means that any time I make a request to a website, it has to go through the Squid server first. In that moment, Squid can use rules I make to decide whether the request goes through. If the website is on my blocklist, for example, the request is denied.
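Those rules live in Squid's configuration file. A minimal blocklist sketch (assuming the stock config at /etc/squid/squid.conf; the ACL name is my own):

```
# Match twitter.com and any of its subdomains
acl blocked_sites dstdomain .twitter.com

# Deny matching requests. This line has to come before any
# "http_access allow" rules, since Squid evaluates them top to bottom.
http_access deny blocked_sites
```

After editing the config, `squid -k reconfigure` reloads the rules without restarting the server.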

So what does that look like? Say I have Twitter on my block list and I try to go to twitter.com on Chrome. This is what I'd encounter:

If I didn't know Squid was running, I'd think Twitter was down or something. What does ERR_TUNNEL_CONNECTION_FAILED mean anyway? There's a blog post for that:

Just to let you know, the ERR_TUNNEL_CONNECTION_FAILED error in Chrome occurs when Chrome cannot create a tunnel connected with the targeted website host. Simply put, Chrome cannot connect to the internet. One of the main causes behind this error is the use of a proxy to connect to the internet. At times, browsing data and Cookies saved in Chrome may also cause the ERR_TUNNEL_CONNECTION_FAILED error to show up. Whatever may the reason be, the methods to fix this error are fairly simple.

Thanks blog post. So either my browsing data and Cookies are conspiring against Twitter alone, or Squid is working as promised. While Chrome is sparse, trying Twitter on Firefox gives me more information:

That's exactly what I'm looking for. The proxy server, Squid, is refusing connections to Twitter (because that's how I set it up).

How strange that these browsers take different approaches when dealing with proxy servers. When I set up Squid with Firefox, the configuration was integrated into the browser's settings. Chrome, on the other hand, kicked me off to change my computer's proxy settings. I'm not sure whether that means Firefox treats proxies as first-class citizens in the browser while Chrome brushes them off to the OS. What does that say about each browser's priorities? A question for another time.

Anyways, let's peel back a layer and see what Twitter looks like on Squid's logs. This might give us more information on what happens when Squid blocks a website. Below is from /var/log/squid/access.log. It shows my laptop (192.168.50.39) trying to access Twitter:

1604799814.830      0 192.168.50.39 TCP_DENIED/403 3982 CONNECT twitter.com:443 - HIER_NONE/- text/html

The interesting thing here is the HTTP status code — 403. That usually means that the client (my laptop) is not granted access to a website (twitter.com) for some reason. Here we are given a rather vague reason — TCP_DENIED. Given that we know Squid is running, it makes sense. Squid is denying the TCP connection needed for our browser to connect to Twitter.
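Those space-separated fields follow Squid's native access.log format, so the line can be pulled apart with a short script. A sketch (the field order is Squid's default; the sample line is the one above):

```python
# Parse one line of Squid's native access.log format.
line = ("1604799814.830      0 192.168.50.39 TCP_DENIED/403 3982 "
        "CONNECT twitter.com:443 - HIER_NONE/- text/html")

fields = line.split()
entry = {
    "timestamp": float(fields[0]),   # seconds.milliseconds since the epoch
    "elapsed_ms": int(fields[1]),    # time Squid spent on the request
    "client": fields[2],             # my laptop's IP
    "result": fields[3],             # Squid result code / HTTP status
    "bytes": int(fields[4]),         # bytes sent back to the client
    "method": fields[5],             # CONNECT, GET, ...
    "url": fields[6],                # host:port for CONNECT requests
    "user": fields[7],               # ident/auth user ("-" if none)
    "hierarchy": fields[8],          # how Squid forwarded it (none — denied)
    "type": fields[9],               # content type of the response
}

result_code, status = entry["result"].split("/")
print(result_code, status)  # TCP_DENIED 403
```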

But how do I know that Squid is doing this? Well, we have to peel back yet another layer. This time I'll sniff things on the packet level to see what happens when I attempt to access Twitter. A little tcpdump will do the trick.

tcpdump -i any -w twitter.pcap

This is the packet capture in Wireshark. My laptop is 192.168.50.39 and the Squid server is 192.168.50.102:

My inkling about Squid denying a TCP connection is bogus. When you look at the packet capture, a TCP connection occurs within the first three packets! It just happens between my laptop and the Squid server rather than with a web server directly. This is because, shocker, Squid is a proxy server. My laptop doesn't connect to websites directly. Squid does so on my behalf.

Once the three-way handshake occurs, we get this interesting HTTP request method I've never seen before — CONNECT. Mozilla has a handy definition for it:

The client asks an HTTP Proxy server to tunnel the TCP connection to the desired destination. The server then proceeds to make the connection on behalf of the client. Once the connection has been established by the server, the Proxy server continues to proxy the TCP stream to and from the client.

In this case, my laptop (client) asks Squid (proxy server) to tunnel the TCP connection to Twitter (desired destination). I wonder if this is similar to the tunneling that occurs with VPNs? Instead of tunneling, though, we see the 403 HTTP response we analyzed from the Squid logs.

So what does TCP_DENIED mean given this extra context? What we know is that the TCP connection is already established between my laptop and Squid. So what's being denied? Well, my laptop is asking Squid to tunnel the established TCP connection to Twitter. What does Squid do? Deny that request. Otherwise, if Twitter weren't on my block list, I would've gotten a 200 HTTP response and been able to access the site. It would look like business as usual.
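Put together, the denied exchange looks roughly like this on the wire (a sketch; the actual headers will differ):

```
laptop -> Squid:   CONNECT twitter.com:443 HTTP/1.1
                   Host: twitter.com:443

Squid -> laptop:   HTTP/1.1 403 Forbidden
```

For a site not on the block list, Squid would instead answer `HTTP/1.1 200 Connection established` and start relaying bytes between my laptop and the web server.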

And that's the potentially malicious side of proxies. As we saw, a proxy server acts as the intermediary between you and the web. There's a lot that can happen between you making a request and the proxy server tunneling the TCP connection to your requested site. I can only imagine tools that mimic Freedom and similar services, assuring users that they're only site blockers but have seedier machinations underneath. They're out there.

Having a proxy can be completely mundane, like with my Squid experiment. On the other hand, proxies can act as an attack vector. They're called man-in-the-middle attacks for a reason. This experiment barely scratches the surface of understanding what proxies do behind the scenes. I'd be curious to do a similar analysis to see how HTTP & TCP traffic differs with a nefarious proxy versus a benign one like Squid.

Recently I created a script to help me play a wargame called Bandit. The rules of Bandit are simple — log into a level with SSH and find the password for logging into the next level. This can mean everything from searching for a hidden file to cracking a 4-digit code with a script.

This script doesn't help me solve any of Bandit's levels. Rather, I made the script to help me play Bandit — it automates the process of logging into levels and storing level passwords. These were things that tripped me up when initially playing the game. Typos and password mismanagement stalled my progress more times than I'd like to remember. Frustration brought up an interesting question...

Is it the game's fault for this?

Bandit doesn't keep track of your passwords. That's the player's job by design. Think about when you play a card game. You have to follow the rules and enforce them yourself. Some might call this laborious. Most wouldn't call it broken. That's the player's job by design.

But this is where it gets fun. As a player, you have the freedom to automate some of those rules and progressions so you can focus on other aspects of the game. Not to completely automate the game, like digital solitaire does, but to take the parts that are redundant and make them easier to deal with. This is where the Bandit script comes in. It doesn't solve the game for you; it just makes logging on and storing passwords easier so you can focus on how to solve the levels.
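To give a sense of it, here's a simplified sketch of the two pieces: storing a found password and building the login command. (The passwords/ directory layout and the use of sshpass are assumptions for illustration, not the script verbatim; Bandit's host and port come from the game's instructions.)

```python
from pathlib import Path

# Bandit listens on a non-standard SSH port
HOST, PORT = "bandit.labs.overthewire.org", "2220"

def save_password(level: int, password: str, password_dir: str = "passwords") -> None:
    """Store the password found for a level, one file per level."""
    Path(password_dir).mkdir(exist_ok=True)
    (Path(password_dir) / f"level{level}").write_text(password + "\n")

def login_command(level: int, password_dir: str = "passwords") -> list[str]:
    """Build the sshpass/ssh command to log into a given Bandit level."""
    pass_file = Path(password_dir) / f"level{level}"
    return [
        "sshpass", "-f", str(pass_file),
        "ssh", "-p", PORT, f"bandit{level}@{HOST}",
    ]
```

Handing the result to subprocess.run() then drops you straight into the level's shell — no typos, no hunting for where you scribbled the last password.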

The script is in no way a complicated project, but I think it echoes a point I keep running into time and time again — contribution happens at the seam of things (which reminds me of this awesome newsletter). The question is how to keep (generously) adding threads where seams are needed. How do you find the seams? How do you add your threads to the seam? How do you maintain seams? These questions are what makes contributing to projects so exciting.

It doesn't take much. Just start with a thread.

The command line mesmerizes me.

I never thought it'd come to that. That single blinking cursor intimidated me more than lines of code ever could. How did the command line grab a hold of me? Let's just say it might have involved some magic.

Before my essay I'd like to refer to a piece that helped me better articulate my fascination (as good writing often does) — Zach Mandeville & Angelica Blevins' rhapsodic introduction to the command line:

Beneath the visual surface of your computer is an old and powerful magic, a silent but quick stream of energy that the computer draws from for power. This magic is hidden but always present, like the sacred well held in the base of a cathedral.

This hidden place has many names: Shell, Terminal, Bash, Zsh, The Command Line. All of these names are correct, but incomplete; accurate to a part, but unable to describe the whole. Like all magical things, there are aspects of the command line always beyond our articulation.

Too, like so many magical things, the secular world will always try to deform and defang it. Modern tech culture will describe the command line as an obscure productivity tool; something you learn only to impress other tech folk (a meaningless activity) or to become a “power user” (a meaningless phrase). Conventional tech wisdom will tell you that the command line is an intimidating, obscure, imposing place — impossible to learn and dangerous to use. This is only an attempt to hide its true nature: the command line is a place made entirely of our first occult technology, the word.

The command line is pure language, and to exist in it is to practice all the reality-shifting and world manifesting power of metaphor and dialogue. This is a place of empowerment, tangible creativity, and mystic bewilderment. While it can be dangerous, it’s also exceedingly helpful if you know how to listen.

Source: The Map is the Territory

Zach & Angelica get at the intimacy I find in the command line. You type in words and the computer responds in kind. Such a tight feedback loop begins an improvisatory jam session, each musician responding to what the other does. It starts to feel nothing like programming.

And that gets at the other aspect of Zach & Angelica's introduction — the command line's magic of pure language. You never think about how magical the command line is until you realize the things you can do with a couple words — start a server, connect to another computer, scan for open connections in your network, write to a file. The fluff of HTML, CSS, & JavaScript is removed. Publishing a blog post from the command line is more like an incantation than pressing a “Publish” button ever could be.

When paired with the ability to navigate, the command line turns the computer into a mysterious labyrinth. It's no surprise that early, text-based RPGs resemble the command line more than modern-day RPGs do. The first dungeon crawler was the command line. What's more, this is a dungeon you not only explore but construct yourself, adding new alleyways along with more knowledge of the magic latent within the machine. And that labyrinth can be interconnected with other labyrinths, until you get to a tangled, Borgesian beast — something like the Internet.

All of this isn't to gamify the experience of the command line. The more you use it, the more this talk starts to feel less like metaphor and more like reality. That is why I love the earnestness of Zach & Angelica's writing. The command line is a place of empowerment, tangible creativity, and mystic bewilderment. It reminds me of one of Arthur C. Clarke's Three Laws — any sufficiently advanced technology is indistinguishable from magic.

The command line will continue to enthrall me for that reason.

Have you ever abandoned a habit that you cultivated for a long time?

For me it's music. I played guitar for years, went to school for it, played in bands, and saw myself taking up the life of a professional musician and music teacher. Just when I entered a PhD program for classical guitar, the path soured for me. That decision turned me away from playing the guitar as often as I did.

Now I find myself on the other end of the spectrum. Interacting on the web is all well and good, but I yearn for an analog interface to balance out the digital. Guitar gives me that reprieve to express myself without having to hit “Publish.” You pluck the string and the sound decays — how novel compared to words that seem to stick long after you publish them. (what would a blog post that decayed like a plucked note look like?)

There are some cases where the sound sticks. I recorded a bit of a piece I've been learning from Baroque lutenist/composer Silvius Leopold Weiss, a contemporary of J.S. Bach. Allegedly the two jammed frequently, improvising fugues together — the prospect of which is beyond me.

When you talk with people, overlap is bound to happen. Patterns emerge that connect disparate conversations together. How do you keep track of those connections in a cogent, meaningful way?

It could start with where the conversation happens.

I've been experimenting with using Are.na as a chat room. Conversation happens through text blocks exchanged in a channel over a span of time. After the conversation is finished, I use a script that imports the channel from Are.na into a blog post like this.
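The script itself doesn't have to be much: fetch the channel's JSON from Are.na's v2 API and stitch its text blocks into a post. A sketch of the conversion step (field names follow the v2 API, where a channel's contents is a list of blocks, each with a class and, for text blocks, a content field; simplified from what I actually run):

```python
def channel_to_post(channel: dict) -> str:
    """Turn an Are.na channel's JSON into a blog post body.

    Keeps only the text blocks, in the order they appear,
    separated by horizontal rules.
    """
    parts = []
    for block in channel.get("contents", []):
        if block.get("class") == "Text":
            parts.append(block["content"].strip())
    return "\n\n---\n\n".join(parts)
```

Fetching the channel is a single GET against api.are.na/v2/channels/ followed by a JSON parse, so the whole pipeline from chat to blog post stays a few dozen lines.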

After three conversations so far, I've noticed that they keep drifting to the topic of tinkering on the web — whether messing with a theme or writing a lightweight web app to interact with your blog. How could I collect each of these instances of tinkering? Are.na's superpower is that a block can be connected to multiple channels. So all I have to do is create a separate channel dedicated to tinkering on the web. Now, if I notice a moment in a past Are.na conversation that highlights the topic, I add it to the topic's channel. The text block then sits in the middle of a Venn diagram — both part of a chat log and part of a curated selection from my conversations.

This simple gesture expands the meaning I can gather from conversation online. Not that the point of conversation is to wring it for value. The practice is valuable in and of itself, but my default position is often that of the sieve. Morsels of memory remain embedded in my mind, but not for long — I soon forget them. And perhaps that is baked in so that I can continue to converse with people about the same things, lest I end up like Borges' Funes. However, I wonder if there is something to extracting a piece of conversation and using it like a theme upon which infinite variations can exist, as a first-class citizen on the web rather than a passing remark. The web already functions like this in many ways, but I think there's always room for purposeful experimentation.

That adage of history rhyming, not repeating — does it work on a smaller, local level? You always hear the phrase when talking about spans of centuries. What about a couple weeks, months, or years? One of my favorite parts about publishing writing on the web is seeing how your thoughts rhyme with others'. It becomes a fruitful game of noticing rhymes and nurturing the environment for new rhymes to occur.

I wrote recently about making a single-page site for my mother this Mother's Day. It got me thinking about a website more like a DM than a public media object. Brendan Schlagel saw this & noted how it reminded him of a blog post he wrote a couple years ago, “Adding Hidden Layers to Websites via Secret Subdomains” (source). Reading it was a joy, especially to see where our ideas “rhymed.” I particularly gravitated to Brendan's idea of a subdomain dedicated to a person:

yourname.brendanschlagel.com — could be either parlor trick or highly useful and innovative networking device! I could, after meeting someone interesting, quickly put together a one-pager with a personalized curated list of articles or other resources, links to more of my work, and other fun things. I could even have a template for this, making it super simple to make a new one for favorite new people I meet.

This got me thinking about how relationships with other people online could be curated differently. Brendan, for instance, has an ongoing blogchain with Tom Critchlow. What if that also lived on Brendan's site as tom.brendanschlagel.com? Maybe that site could also give context as to how the blogchain came about.

The “yourname” subdomain could also serve as a public log for conversation with a particular person. For example, I had a great chat with David Blue for a community project. What if that conversation and others could continue to live on its own subdomain? I played around with this on a simple Next.js Glitch app: david.cjeller.site. This form could be a great place for countless blog chats to live.

What's great about this “yourname” subdomain idea is the flexibility of execution — from networking and blogchains to anything else you could imagine. It reminds me again how online relationships can be filtered through conventions of social media platforms. Brendan's idea brings out a more bespoke web, a weird & wonderful web, a homegrown web. Something more than a video montage of the photos a friend & I are both tagged in on Facebook.

Adding Hypothesis to your blog brings the benefit of in-line annotation to your posts — contextual highlights & comments. That power of contextualization can also extend to the texts you quote within your blog.

The usual courtesy of quotation in blogs is adding a link to the source. When a reader clicks on that, they're taken right to the beginning of the source. Any context from your post is erased. What if the link could retain some of your thoughts around the quotation as they relate to your post? This is where linking the quotation to a Hypothesis annotation could prove useful. Take the block quote below as an example:

i think theseus would have enjoyed the world wide web in 1997; the adventure and excitement that it fostered. its labyrinthine shape full of passages, turns, tunnels, and the unknown. websites often eschewed the formal navigation systems we have come to rely on and expect in favour of more open-ended or casual solutions. moving around a site—much like moving around the web in general—was a journey that embraced the forking, wandering nature of hypertext and allowed the user—with varying degrees of agency—to choose their own path through cyberspace, the hero of their own self-authored epic.


When you click Go to text, it takes you to a Hypothesis link — the highlighted quotation with a previous annotation of mine. The annotation can be as detailed as you want it to be. You could even just have the annotation link back to your blog post (or the spot where you quote the passage). What's more, the Hypothesis link allows a reader to go beyond the quoted passage and read your other annotations of the text. Perhaps you wonder what an author thinks of other parts of the text she quotes in a post — now you can find out. These additional annotations could lead to other blog posts she's written that quote the text.

Interesting possibilities can come from extending annotations not only to your blog but to the texts your blog quotes — adding context to a post through intentional hyperlinks. I am reminded of a great point from Toby Shorin about finding a balance with the affordances the web can offer text:

This all suggests that a compromise must be struck between the coherence of a text and the new opportunities for knowledge work afforded by the fundamental capabilities of the medium: the internet’s connectivity, the screen’s frame rate.


Correspondence chess can mean many things — playing chess on a forum, through postcards, across email. Wikipedia even notes that “less common methods” include the use of homing pigeons. What these disparate means of correspondence share is that they map onto a single game — chess. As Glenn Adamson explains in Fewer, Better Things,

[Chess] is a pure abstraction, in black and white. You can make a chessboard out of any material you want, cheap or expensive, pencil on paper or ivory on wood, but it doesn't really change the game. That is why you can play by postcard. Chess itself is intangible, so it can go anywhere.


I find myself more interested in figuring out what other means of correspondence could be used to play chess than in the playing of chess itself. That bleeds into my fascination with personal publishing on the web. Putting meaningful words behind a link that someone can access is an abstract enterprise. From common to bespoke, there are so many ways to publish. It becomes a game in and of itself — tinkering around with the printing press rather than using it to print something.

But I find the relationship between publishing and tinkering to be a mutual one. The more you publish, the more you tinker with your means of publishing. The more you tinker, the more you publish about your means of tinkering (among other things). Both feed into each other quite well if you let them. This blog is a testament to that. I hope I can keep reminding myself to do both — to not just fuss with the means of correspondence but to also play chess.

Instead of getting a Mother's Day card this year, I find myself hacking together a single web page to commemorate my mother.

While circumstance has brought this about, there's an odd feeling of wondering why I hadn't thought of doing this before. Maybe I predominantly think of publishing on the web as a public affair. But now I am crafting a simple webpage that only one other person will see. Not by accident, but by design.

And yet I find this pattern elsewhere in my digital life — writing “blog” posts for only one person (via anonymous posts). Sometimes these posts are detailed help for a Write.as user that includes code blocks. Other times they are personal messages that won't fit in a Twitter DM.

We talk about direct messages as being a crucial part of social media, but I wonder if the same could be said of websites & blog posts. I wonder if blurring that line between public & private more explicitly can make writing HTML & blogging carry the same intimacy as writing a letter.