emacsen

Hello everyone. My name is Serge, and I am a recovering Howard Stern fan.

I feel like I'm at an AA meeting when I say it, but it feels oddly good to get it off my chest. Howard Stern has been a significant figure for roughly half of my life, and only in the last year or so have I really begun to understand the influence of the show on me, the kind of person Howard Stern really is, and how that plays into my own past trauma and pathology. This has allowed me to understand myself better and heal in new and significant ways. But I'm getting ahead of myself- let me start at the beginning.

I had a pretty lonely childhood. I didn't have siblings. I went to “special school” (ie “Special Ed”) and my parents did a number on me emotionally. Both my parents learned early on that they could use fear of abandonment as a tool to control me, so they both cultivated it. At age eight, my father threatened that he would, quote, “Come to my school and tell the other students all of my secrets”. As an adult now, I don't know what kind of shameful secrets my father would have told my third grade classmates, but at the time the idea was scary and filled me with shame.

This type of threat was commonplace in my home. Mirroring it, when I was seventeen, my mother said that she hoped my girlfriend would quote “Find out what a terrible person I really was” and leave me.

I grew up being told that no one would want to be my friend and that friends I did make would “learn who I was” and wouldn't want to be around me anymore.

At around age 16, I first heard Howard Stern on the radio. I now know it was during “The News”, a segment on the show when Howard's assistant Robin would read out the day's news and Howard would comment.

“Adopted kids will kill their parents,” I heard the deep, manly voice on the radio exclaim.

“Not all adopted children, Howard,” I heard the woman on the radio respond, chuckling.

“Yes Robin, all of them. All adopted children will kill their parents.”

It was so extreme as to be absurd. It was trolling, years before I would know the word.

In an interview I saw on the news, Howard made a similarly absurd statement about cybersex in Prodigy chat rooms, explaining to the interviewer that in his new book he had transcribed his experiences with online sex chatrooms, but that if the interviewer asked his wife, Howard would tell her that this was all fiction, quote, “a bit for the radio”– but that we, the audience, knew the real truth.

As an Andy Kaufman fan, I saw it as an extension of that style of comedy- of breaking the fourth wall and provoking a response.

I was in love.

I stayed in love as I continued to listen and heard radio bits such as when Howard received low ratings in Dallas and, in retaliation, decided to “punish his audience” with bad music, or when, after supposedly “saving a suicidal man”, Howard planned a press conference on the air, scripting each word and even the responses of others around him. We, the audience, were in on every moment, and when Howard then held his press conference live on the air, we heard it go off exactly as planned, including a seemingly impromptu conversation between Howard and Robin about whether or not he should be called a hero. I was infatuated.

Howard played a huge role in my life from then on. I would wake up at 5:45 in order not to miss the show. In college I programmed my computer to tune the radio to the local Stern station, record the show onto my hard drive and then encode it into an mp3 that I would sync to my portable mp3 player. This was in 1998, three years before the launch of the iPod and six years before the term “podcasting”.

I kept this setup throughout college and beyond. I listened to the Howard Stern show almost every day. On the day he left terrestrial radio for satellite, I listened live at work and had to excuse myself to the bathroom not to let my coworkers see me cry.

I moved with him to Sirius radio and, just as Howard had promised, the show was better than ever. Not only did Howard seem invigorated, but the program director, Tim Sabean, had transformed the Howard Stern show from a single show into an entire galaxy of programming, ranging from The Wrap Up Show, a show by Stern show staffers about the day's show, and the Intern Show, a show about the Stern show from the perspective of the interns, to wackier offerings like the Riley Martin Show, a show about space aliens hosted by an occasional guest who claimed to have been abducted by aliens and brought back to earth to share a message of both peace and warning, and to sell crudely drawn pictures that would be used as tickets to get their owner onto an intergalactic space ship.

Howard had created a larger version of what his show had been to me- a way of feeling like I was part of a group- of being on the inside, and now there was more content than even I could consume.

For a lonely kid who didn't feel safe at home or have many friends, the Stern Show felt like more than home, it felt like family. And that's no wonder, as I was listening to the show nearly every day, absorbing each morning's discussion of Stern's and other staffers' personal lives, show bits, celebrity interviews, wack pack segments, and the news. I listened to the show far more than I spoke with my actual family, and in turn I felt like I'd learned life lessons from the show, especially early Stern themes such as remaining faithful to one's spouse, not being hypocritical, and speaking truth to power, even if it made you less popular.

I didn't always agree with everything the show did. I didn't care for the strippers and I didn't like that Howard made fun of people who were different or disadvantaged such as black people, homosexuals or transgender people, but I explained away those problematic bits as being either a product of the times, satire, or an artifact of Howard's age and generation. Like a favorite uncle, I didn't have to agree with everything he said to love him.

One day my boss asked me how many hours a week I listened to Howard Stern. I did the math and calculated that I listened for around twenty to twenty-five hours a week. He said “...That's got to affect your mind.”

He was right. It did.

The show felt /good/. It felt fun and even loving in a way.

I saw myself in Howard- a geeky Jew who was misunderstood and rebelled against the system. I didn't think Howard would necessarily like me if he met me but I could at least have someone to look up to.

When longtime show writer Jackie Martling left and Artie Lange came in, it felt strange in the way that I imagine it feels when a mother brings home a new boyfriend. But in time I came to accept him into the family as well- a lovable lug from New Jersey, the same state I was from.

Similarly, I felt odd when Star Trek's George Takei became the show's guest announcer and sit-in guest, talking in candid detail about his past and present sexual activity, his sexual schedule (Sunday is sex day) and where and how he preferred to masturbate. It was odd, but I knew more about this actor's sexual life than I did about my neighbors or even many of my friends.

Once Artie left the show in 2009, the tenor of the show changed. It felt emptier and slower. I thought that just as the show had gone through a lull after Jackie left, that the show would bounce back once again once it found its rhythm and pacing, or possibly a new cast member to fill the role.

But that didn't happen. Instead, the show just became slower and more repetitive, more structured and less spontaneous. Staff drama had always been a staple of the show, but now it felt more over-hyped and manufactured.

Some time around 2015 or 2016, I realized I didn't care any more and I stopped listening to the show. It wasn't a conscious decision at first. I would go days without listening and then listen to a show. At some point I just stopped listening and didn't really start up again.

I still listened to old shows. I enjoyed “Classic Stern”. I found old shows online and replayed them.

Every once in a while I would find a copy of a new show someone had posted online and give it a listen. Sadly the luster on the show had worn off and I retreated back to old clips from classic moments on the show.

But something changed in the last year.

After the pandemic hit, I was alone. I found myself isolated from my fiancee and my pets by a national border. I was alone in my apartment for about fifteen months. During this time, I also had a very bad ending to something that spanned friendship, collaboration, and a planned business partnership. The word relationship is often used to mean a romantic relationship, but this relationship felt as close as one could be without any romantic or sexual feelings.

The relationship ended because I came to realize that the person I was collaborating with was a narcissist. As I began to ask for solid commitments in exchange for my free work and began to assert boundaries, the interactions became more toxic until I had to end the relationship altogether.

This was sadly not my first time at the rodeo. My childhood has left me with a lot of scars and codependency, or what psychologist Ross Rosenberg calls “self-love deficit”. Simply put, I am always afraid of being rejected, and especially susceptible to people who are affectionate. This makes me especially vulnerable to people with Cluster-B personality types, including Borderline and Narcissistic Personality Disorder.

But back to Stern...

I'd been interested in the experiences of other listeners and also looking for updates about Artie Lange, who was no longer part of the show. Through that journey, I began to hear about horror stories such as when long time Stern show engineer Scott Salem asked Howard if he could raise money to try to cure his wife's cancer and in response, Stern had him first demoted, then fired.

More than that, other former staffers began coming out to talk about their experiences working on the show, including draconian rules such as not being allowed to greet Howard in the hall, no communication with, or even about, ex-staffers, having their jobs kept in limbo for extended periods, as well as a pattern of severe underpayment accompanied by insults or humiliation at the notion of leaving. While on-the-air humiliation was a staple of the show, I'd always assumed it was countered with a positive working environment off the air, but the pattern of mistreatment was consistent and complete- Howard mistreated his staff, and always had, since the early days.

As I learned more about the show's inner workings, my feelings turned from disappointment to anger. How could Howard treat his staff so badly? How could he humiliate them both on and off the air? And how could I have been either blind to it or make excuses for it all of these years?

All of this has led me to “Quite Frankly: A Howard Stern podcast”, hosted by two former Stern fans, Jim and Samantha. These two hosts, sometimes joined by guests, take apart old Stern shows and analyze them critically. They deconstruct the show's contents and show Howard's truth distortions, lies and manipulation. They also bring on expert guests to analyze the show through lenses such as psychology and Narcissistic Personality Disorder, and to explain how we as an audience can identify these traits.

I've listened to nearly a dozen of their shows, which now number nearly a hundred, and while I don't always agree with the hosts or their presentation, I understand it. Both Jim and Samantha seem angry, genuinely angry at Howard and at the show, which is understandable...

Howard garnered not just listeners, but superfans, people like myself who would listen to the entire show, start to finish, buy not only his books but those of his staffers as well, and continue to do so for years, even decades. It makes sense that when such fans realize they've been suckered, they don't simply turn the show off- they feel angry and betrayed, and that is exactly how it is with the two hosts. They clearly know their Stern history as only superfans do, and yet there's dismissiveness and derision in their voices when talking about the man himself.

Listening to this podcast feels like therapy, or at least catharsis, not only for the love I had for Howard Stern and his show, but also for the ways that Howard used his show and his image to pull the wool over our eyes.

Through it, I feel myself healing from the experience of a twenty-year relationship that came at a vital time in my life and shaped who I am and how I saw the world. I see not only the patterns in Howard, but also the patterns in past relationships, of a former colleague who still uses their fame and persona to garner a fan base, and even, at times, ways that I see myself reflected in Howard's behavior.

I asked my therapist once why we talk about the past in therapy. He said that while we can't actually go back in time, we can use therapy to find a new way to relate to past events and bring an awareness and understanding that we didn't have the first time. Listening to Quite Frankly feels like therapy, and I feel like I'm gaining a deeper awareness of myself than I ever have. The old Stern show bits still have a place in my past and my heart. Like my dysfunctional childhood, I'll always love them, but now I cringe at the way Howard bullies and humiliates his staff, manipulates the callers and railroads his guests. This mix of affection and cringe is good. It's healthy. It lets me know that I'm healing and learning. It's all anyone can do.

If you're an ex-Stern fan, I hope you are joining me on this journey of understanding. If you still love Stern, then the words I've said probably have no impact. And if you're someone who never liked Stern, I hope that this has given you insight into why I did and the way that he and his show gave me a sense of belonging when I needed it most.

Now, at 42, I'm moving on and making my own family and community.

Here are some links that have opened my eyes:

Timeline of the Stern Show Decline

An article in the New York Post about the quality of the Howard Stern Show

What Happened to Howard Stern?

Artie Lange goes on Opie and Anthony and talks about what happened between him and Howard

Jackie, Billy West and Stuttering John talk about their experiences on the Stern Show

Quite Frankly: A Howard Stern Podcast

I've tried to learn a second language before, specifically Esperanto. I outlined why I wanted (and continue to want) to learn Esperanto in a previous post. This post collects some thoughts I'm having as I go through the process.


Jitsi Meet is a Free Software audio and video conferencing platform that allows for people around the world to participate in a video conference without proprietary software like Zoom or Google Meet.

Jitsi has an add-on program called Jigasi that allows for call-ins (and even call-outs). Unfortunately, while Jitsi Meet is well documented, Jigasi has less documentation. In this guide, I will demonstrate how to set up Jitsi Meet and Jigasi using the Twilio phone platform. I'll be going step by step, but if you want to just read the final code, I've called it “jitsi-twilio-example” and it's available at:

This post will try to cover the basics of the various components, but I am not an expert on any of them- I just managed to get everything working after a lot of trial and error.

Connecting to the Phone Network

Jitsi is great for computer-based meetings. It even has iOS and Android apps, but occasionally we need to support phone dial-in attendees. Jitsi uses a media transport called WebRTC, while VOIP software most commonly uses a protocol called SIP.

This means we need to bridge not only the technical protocols but also the very different ways that these two protocols see the world.

Traditionally, making a voice-enabled application would involve setting up a PBX. PBX stands for Private Branch Exchange, which is another way to say that a PBX system works like a small phone company.

In the past, PBX systems were proprietary and expensive, but Asterisk changed all that. Asterisk and other SIP FLOSS servers can run on relatively small installations, but still require a good deal of specialized knowledge to use. In addition you will still need a “trunk provider” to connect your installation to the phone network.

Twilio is a phone provider that makes it easy for programmers to build phone applications by simply putting up a web server. It requires no proprietary software on the client end, and offers easy sign-up and competitive prices.

The largest downside of Twilio is that because of its specialized API, there is a bit of vendor lock-in, unlike using a plain SIP trunk provider and connecting it to a program like Asterisk or FreeSwitch. On the upside, the Twilio API is very simple and its tools make debugging applications a breeze.

Since we only want one or two numbers and an easy installation, we're going to go ahead and use Twilio for this application.

Another Web Server?

Twilio has an event-driven API. When a telephony event occurs, Twilio triggers an event on its end. One option for events is to hit a specified HTTP endpoint. We can run our own web server and direct Twilio on what to do next.

For this particular application, I'm going to use the popular Python Flask web framework, because it's simple and because Twilio offers an SDK that makes using it very easy, but you could use any web server you like.

Installing Jitsi Meet

If you already have Jitsi Meet installed, you can feel free to skip this section.

Jitsi Meet itself is fairly well documented. To make deployment easier, I've been using the official Jitsi Meet Docker image. The installation manual for the Docker install is available here.

While not strictly necessary, since you will need to run additional services anyway, I'm using an SSL reverse proxy that integrates Let's Encrypt called docker-compose-letsencrypt-nginx-proxy-companion.

If you want to do the same, you will need to set your LETSENCRYPT_DOMAIN and LETSENCRYPT_EMAIL in your Jitsi .env, but don't set ENABLE_LETSENCRYPT. In addition, you will need to set DISABLE_HTTPS.

It should be mentioned that SSL is mandatory for WebRTC on the browser level, so using some SSL configuration is necessary, whether it's through a proxy or Jitsi itself.

You'll also need to change your docker-compose.yml file. Add VIRTUAL_HOST=${LETSENCRYPT_DOMAIN} and LETSENCRYPT_HOST=${LETSENCRYPT_DOMAIN} to the environment section of your web service. You'll also need to add the proxy network (which defaults to webproxy) to web's networks. Just add webproxy: there, and in the networks section add:

    webproxy:
      external:
        name: webproxy

If you've used this proxy companion or jwilder/nginx-proxy before, this will all look familiar.

Once that's working to your satisfaction, let's move on to the next step.

Dial-in Number

The next step is to sign up for Twilio and get a phone number. This is the number that people will use when they dial into the phone conference.

Because this phone connection connects to the standard phone system, you will need to pay for this, but the prices are relatively inexpensive. In my experience, my Twilio costs were about $3 a month for light/moderate usage.

As an aside, it should be mentioned that Twilio also offers their own WebRTC-based videoconferencing system. If we only cared about pricing, then it would be a safe win to use their system, but we are using Jitsi because we also care about Software Freedom.

Twilio's SIP

In addition to the number, you will also need to set up a SIP domain. Twilio offers a number of SIP offerings and navigating the system is a little confusing. I found this article on SIP phones from Twilio very informative.

You will need a SIP domain that represents your organization, but since you can also have multiple SIP domains, that is up to you. Similarly, you can choose a username independently of anything else, though this blog post from Twilio suggests using an E.164 format phone number for the username.

You'll also need to set parameters around network address based logins and other settings. The Twilio documentation mentions being able to create this configuration through a RESTful Interface, but since this is a one-off, I think using the GUI is easiest.

At this point I'll assume both your Twilio and SIP configuration are working and that you're able to register.

Web Server Setup and CORS

Setting up a web server is outside the scope of this tutorial. I'm going to assume you've already set up a web service before and/or have an understanding of HTTP methods (GET, POST, etc.), as well as the basics of XML and JSON encoding.

You'll need to stand up a web server somewhere. Since you already have a Jitsi instance, standing up another service should be fairly easy for you. If you want to use Docker, look at the Dockerfile included in the project for an example of setting up a Docker instance.

You may also find ngrok to be a nice tool to use during testing, but this is optional.

For Twilio's purposes, we just need a web server running somewhere it can talk to, but we'll also be setting up a couple of HTTP endpoints for Jitsi Meet clients, and because of that, we'll need to set up Cross-Origin Resource Sharing or CORS on the web server. In my example we're going to configure the server to return the header:

Access-Control-Allow-Origin: *

You may want to restrict the response to only your Jitsi domain, but I haven't bothered in my example.
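One way to do this in Flask is the flask-cors extension, which the full example later in this guide also uses. Here's a minimal sketch; the restricted origin shown in the comment is hypothetical:

    from flask import Flask
    from flask_cors import CORS

    app = Flask(__name__)
    CORS(app)   # by default this sends Access-Control-Allow-Origin: * on every route

    # To restrict to just your Jitsi domain instead:
    # CORS(app, origins=["https://meet.example.com"])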

You will also need to configure SSL access to your web server- not for Twilio, but later on when we configure Jitsi Meet. You can certainly wait on this step until you're ready to test the Jitsi Meet components, but it's good to know that you'll need this eventually.

For the remainder of this tutorial, I'm going to assume your web server is up and running and at https://example.com.

Configuring Twilio to Answer a Call

/In this section we'll talk about Twilio and using its event-driven system to answer calls. If you're already familiar with Twilio, you can safely skip this section./

In the old days, setting up a PBX was expensive and required both proprietary hardware and software. That all changed when Asterisk came out. With Asterisk, you could run your own PBX, either physically or virtually, with Free Software. You just had to learn a little telephony nomenclature and you could set up your own virtual PBX in an afternoon. Today, several options exist for running your own PBX, including FreeSWITCH, OpenSIPS and FreePBX (which sits on top of Asterisk). All of these systems are wonderful, but even in the best case, require some understanding of telephony and creation of a dialplan. A dialplan is nothing more than a set of instructions that are carried out when a telephony event occurs. For example your office might want the CEO's phone to ring in their office, but also ring their administrative assistant, or you might want everyone in the office to have their own extension, but only certain extensions have direct dial in numbers that people can call externally.

Twilio abstracts the dialplan idea into a series of events that you configure it to respond to. You can choose how to respond to those events, but in our case, we will use webhooks, which are nothing more than simple HTTP endpoints.

Our first example will be to configure our phone number to say a greeting and then hang up. We could easily configure this with a static file, even on the Twilio website, but by testing it on our web server, we're also ensuring that our web server is configured properly.

Twilio provides an SDK that abstracts its domain-specific XML, TwiML, and makes it easy to use. You don't need the SDK, of course- you could do it all yourself manually.

I'm going to name the HTTP endpoint “answer”, since that is the event that we'll respond to. I'll also be setting up some basic Flask things. If you've worked with Flask or most other web frameworks before, nothing here will be especially new.

    from flask import Flask
    from twilio.twiml.voice_response import VoiceResponse

    app = Flask(__name__)

    @app.route("/answer")
    def answer():
        """Respond to incoming phone calls with a greeting"""
        resp = VoiceResponse()
        resp.say("Hello and welcome to the conferencing system")
        return str(resp)

    if __name__ == "__main__":
        app.run(host='0.0.0.0', debug=True)

A majority of that program is just setting up the web server, but we can see just how easy it is to set up.

If you look at the result of hitting that endpoint, you will see something that looks like

    <?xml version="1.0" encoding="UTF-8"?>
    <Response>
      <Say>Hello and welcome to the conferencing system</Say>
    </Response>

I've formatted the output, but you can see that the result is a small XML document. We could just store that as a static file, but we're going to need to make our site more interactive later.

Checking our SIP Configuration

If you've already configured and tested your SIP endpoint, this step is unnecessary.

With our telephone number and web server configured, let's turn our attention back to our SIP configuration. If you haven't done that already, go to Programmable SIP Domains and add a new domain for yourself.

Then go ahead and add a user for that domain. As mentioned earlier, one practice is to name the user the same as the phone number, but that's entirely optional.

What's not optional is creating the SIP domain and a user for that domain, and setting the IP address ranges that will be allowed to connect to the endpoint. This will be your Jigasi server's IP, but I also recommend testing the SIP endpoint with a SIP softphone such as Linphone or Zoiper, so you'll want to add the IP address of the computer you'll be testing from as well.

If you haven't used Twilio's SIP before, one small gotcha that I encountered is that the SIP domain is not always the same as the server, so I had to add us1 to the sip domain, such as myuser@mydomain.sip.us1.twilio.com.

Just be sure that your SIP phone can connect to the endpoint successfully. We'll be configuring our number to ring the SIP phone next, so it's a good time to ensure that this part is working before we move on.

Configuring our number to call our SIP Endpoint

We have our web server and our SIP endpoint both working, so now it's time to connect them together.

Since we're now dealing with a bunch of configuration, I'm going to use dotenv to make it easy for me to store configuration separately from the application. In production, I'm using Docker, so I'll be storing my configuration there instead, but this is a nice bridge between the two. We'll then use environ to retrieve our configuration.

Let's store our SIP user, including the domain, as SIP_URI.
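For example, the relevant line of the .env file might look like this (the user and domain here are made up):

    SIP_URI=myuser@mydomain.sip.us1.twilio.com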

Then when someone calls our number, we'll have it call the SIP endpoint. When that happens, your softphone should ring and you'll be able to talk to yourself.

    from flask import Flask
    from twilio.twiml.voice_response import VoiceResponse, Dial
    from dotenv import load_dotenv
    from os import environ

    load_dotenv()
    SIP_URI = environ['SIP_URI']

    app = Flask(__name__)

    @app.route("/answer")
    def answer():
        """Call the SIP endpoint"""
        resp = VoiceResponse()
        dial = Dial()
        dial.sip(f"sip:{SIP_URI}")
        resp.append(dial)
        return str(resp)

    if __name__ == "__main__":
        app.run(host='0.0.0.0', debug=True)

Not a lot of change here, but now when we call our phone number, it should call our SIP user, which is connected to the softphone.

If this all works, you're cooking with gas and it's time to move on to configuring Jigasi itself!

Configuring Jigasi

In this example, I'm going to be using the Docker installation of Jitsi. In this configuration, a lot of the details have been abstracted away and only need to be set inside the .env file your Jitsi installation uses.

If you're not using the Docker installation, you'll need to make the changes in the config files themselves.

Here's the relevant part of the Jitsi .env

    #
    # Basic Jigasi configuration options (needed for SIP gateway support)
    #

    # SIP URI for incoming / outgoing calls
    JIGASI_SIP_URI=SIP_USER

    # Password for the specified SIP account as a clear text
    JIGASI_SIP_PASSWORD=MY_SIP_PASSWORD

    # SIP server (use the SIP account domain if in doubt)
    JIGASI_SIP_SERVER=MY_SIP_DOMAIN

    # SIP server port
    JIGASI_SIP_PORT=5060

    # SIP server transport
    JIGASI_SIP_TRANSPORT=UDP

JIGASI_SIP_URI should be the same as the SIP_URI we set for our Flask application, JIGASI_SIP_PASSWORD is the password, and JIGASI_SIP_SERVER should be the SIP Domain, including the us1 part.

Once you do this, you'll need to recreate the Jitsi and Jigasi config files. If you're using the Docker images, the .env file specifies a CONFIG variable which stores the location of the configuration directory.

You'll need to erase that directory and recreate it with:

    mkdir -p CONFIG_DIR/{web/letsencrypt,transcripts,prosody/config,prosody/prosody-plugins-custom,jicofo,jvb,jigasi,jibri}

Substituting CONFIG_DIR with the location in CONFIG.

Also you'll need to be sure that from now on, you reference both the docker-compose.yml as well as the jigasi.yml files, such as:

    docker-compose -f docker-compose.yml -f jigasi.yml up -d

Once you make these changes and restart the services, Jigasi should register as a SIP endpoint (just like the softphone) and be able to receive calls. The problem is that it doesn't know which conference to send the calls to by default.

We can give Jitsi a default conference room for it to use by setting it in CONFIG/web/config.js as org.jitsi.jigasi.DEFAULT_JVB_ROOM_NAME but I think a better way is to modify our Python script to specify the room there.

Technically, what we need to do is specify the room name inside a SIP header when we make the SIP INVITE. That header is X-Room-Name by default, and we can specify the room name there.

Twilio lets us set SIP headers on the URI, so all we need to do is specify X-Room-Name on the dial.sip line like so:

    dial.sip(f"sip:{SIP_URI}?X-Room-Name=MyDefaultRoomHere")

Now a call to our number will be directed to the MyDefaultRoomHere room!

Technically we could stop here. If we always know that we want calls to come into this one room, we don't need to take any further action.

But we probably want features like PIN numbers and other things, so let's go ahead and add that!

Mapping PINs to Rooms

Jitsi Meet has the concept of rooms. Rooms have a unique identifier, which we can think of as an access token into the room. We need to map those room names to digits that we can easily type into a phone.

Then when a caller calls in, we need to ask them for a PIN and map that back to a room name, which we then use to tell our Python program where to send them.

This is a bit of a chicken and egg problem, because we need both parts to fully test this, but I'm going to implement the PIN<–>Room Name mapping first.

We technically could do this entirely in memory, but then if we shut the program down we'd lose all the previous mappings, so we need to serialize this to disk. We could use a full fledged database, but on my system I only get a few visitors a day on my Jitsi instance and generate maybe one or two new rooms a week, so a full fledged database seems like overkill. Instead I'm opting for a very simple solution in the form of the Python library tinydb, which works like a dictionary but loads the data each time it's called, which means that while not guaranteed, it's certainly thread safe enough for this use.
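If you haven't used tinydb before, here's a minimal sketch of how it behaves; the file name and the values are just examples:

    from tinydb import TinyDB, Query

    db = TinyDB("db.json")                              # backed by a small JSON file on disk
    db.insert({"id": 123456, "conference": "myroom"})   # store a mapping
    db.search(Query().id == 123456)                     # -> [{'id': 123456, 'conference': 'myroom'}]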

Jitsi Meet makes these calls on the client side, from the web interface, and this is why we must address the Cross-Origin Resource Sharing issue. Since we're not generally dealing with any large resource, we'll just put a blanket policy allowing anyone. In production you may wish to restrict this to your Jitsi Meet URL.

The official Jitsi Meet server has a conference-to-PIN mapping service at https://jitsi-api.jitsi.net/conferenceMapper. This URL takes one of two parameters through a GET request, either conference or id. The conference is the full conference name, that is, the room name @ the instance. The id is what I'm calling the PIN. The result is a JSON document.

Some tutorials suggest using an auto-incrementing ID, but I think this is a mistake: even though a PIN doesn't tell you which room you'll get, sequential IDs make it likely that someone could guess the next valid PIN, so instead I'll be using a random number.

    from flask import Flask, jsonify, request
    from flask_cors import CORS
    from tinydb import TinyDB, Query
    from secrets import randbelow
    ...

    PIN_DIGITS = 6
    DB_FILE = environ.get("DB")
    db = TinyDB(DB_FILE)
    ...

    app = Flask(__name__)
    cors = CORS(app)
    ...

    @app.route('/conferenceMapper')
    def conference_mapper():
        pin, conference = request.args.get('id'), request.args.get('conference')
        if not pin and not conference:
            return jsonify({"message": "No conference or id provided",
                            "conference": False,
                            "id": False})
        elif pin:
            # PINs are stored as integers, so convert the incoming id before searching
            result = db.search(Query().id == int(pin))
            if result:
                conference = result[0]['conference']
                return jsonify({"message": "Successfully retrieved conference mapping",
                                "id": int(pin),
                                "conference": conference})
            else:
                return jsonify({"message": "No conference mapping was found",
                                "id": int(pin),
                                "conference": False})
        else:
            # The conference has been specified- make a new PIN
            max_int = pow(10, PIN_DIGITS)
            while True:
                pin = randbelow(max_int)
                result = db.search(Query().id == pin)
                if not result:
                    db.insert({"id": pin, "conference": conference})
                    return jsonify({"message": "Successfully retrieved conference mapping",
                                    "id": pin,
                                    "conference": conference})

That will give us back what Jitsi Meet expects.
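To make that concrete, the document returned looks roughly like this- the PIN and room name are made up, and limebrass (more on that in a moment) stands in for my instance:

    {
        "message": "Successfully retrieved conference mapping",
        "id": 123456,
        "conference": "myroom@limebrass.example.com"
    }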

If you're wondering what limebrass is, it's the name I gave to my conferencing system. It doesn't mean anything other than it's a unique name.

Now we must tell Jitsi Meet to use this new mapping. That's done by editing the CONFIG/web/config.js file and adding dialInConfCodeUrl to the large JavaScript object, before the makeJsonParserHappy entry, such as:

    dialInConfCodeUrl: 'https://example.com/conferenceMapper',

Now that this is done, we need to turn our attention back to Twilio for a moment and how we will connect the PIN we've just made to the phone system.

Luckily for us, Twilio makes this very easy with a Gather directive that can be used to collect digits. Our process will be to ask the caller to enter in their PIN, then if the conference exists, they'll be connected into it. If not then they'll be given another chance to enter their PIN. And if they can't do it three times, they'll be asked to call back.

Twilio's Gather directive works a bit like an HTML form in that it has an action parameter that it POSTs the result to.

If we didn't care about letting someone try to enter their pin a second or third time, we could use one single endpoint for both the answer and the gather, but since we do want to allow this, we'll need two endpoints.

First, let's change our /answer code to announce the conferencing system, then redirect the caller to the gather request.

    @app.route("/answer")
    def answer():
        """Announce the conferencing system"""
        resp = VoiceResponse()
        resp.say("Welcome to the conferencing system!")
        resp.redirect("/gather?tries=0")
        return str(resp)

You may have noticed that I added a query parameter tries to the URL. That's so we can count the number of attempts that have been made and hang up when there have been too many.

Now let's work on the gather code.

    # Note: Gather needs to be imported from twilio.twiml.voice_response,
    # alongside VoiceResponse and Dial.
    @app.route("/gather", methods=["GET", "POST"])
    def gather():
        """Gather the PIN number"""
        if request.method == "GET":
            tries = int(request.args.get("tries", 0))
            resp = VoiceResponse()
            gather = Gather(num_digits=PIN_DIGITS, action=f"/gather?tries={tries}")
            gather.say("Please enter your conference number, followed by the pound sign.")
            resp.append(gather)
            # If no response, end the call
            resp.say("I didn't get a conference pin. Please call back once you have it!")
            return str(resp)
        else:
            # This is the POST method, and should only be called once a
            # gather is made
            tries = int(request.args.get("tries", 1))
            pin = int(request.form.get("Digits") or 0)
            resp = VoiceResponse()
            if not pin:
                resp.say("I didn't get a conference pin. Please call back once you have it!")
                return str(resp)
            # Look up the PIN
            result = db.search(Query().id == pin)
            if not result:
                tries += 1
                if tries >= 2:
                    resp.say("Too many incorrect pin attempts. Please call back once you have it!")
                    return str(resp)
                # Let the caller try again, carrying the attempt count along
                resp.redirect(f"/gather?tries={tries}", method="GET")
                return str(resp)
            # Success! Redirect the caller to the correct conference!
            conference = result[0]["conference"]
            dial = Dial()
            dial.sip(f"sip:{SIP_URI}?X-Room-Name={conference}")
            resp.append(dial)
            return str(resp)

Phew! Our little Python program is getting bigger, but it's all relatively straightforward code.

You may notice that I'm playing a little fast and loose with error handling here. That's because this application will only be interacted with by other known applications. If an exception occurs, it's due to a bug somewhere, rather than us wanting to try to correct for it. This is also why I don't feel very strongly about disabling the Debug mode, though if I ran this for any significant installations, I would turn it off.

At this point, a user who knows a conference pin can dial in. But how will they know the number to dial into? That's the next section!

Setting Call-In Number

Now that we have the pin sorted out, let's make it easy for someone to find the call-in number(s). Jitsi Meet makes it easy to find out by having a configurable url that returns a JSON document with a list of phone numbers. This could be a static file, but let's just include it in our web application.

    @app.route("/dialInNumbers")
    def dial_in_numbers():
        """Return our available phone numbers"""
        return jsonify({
            "message": "Phone numbers available.",
            "numbers": PHONE_NUMBERS,
            "numbersEnabled": True})

In this code, we use our environment to set the phone numbers we want to use. The format used is a JSON object. Showing an example is probably easier than explaining it:

    {"US": ["+1.555.555.1212"]}

You can see we have a mapping of country codes and a list of numbers. The formatting of the numbers is entirely up to you.
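For completeness, here's one way the PHONE_NUMBERS value used above could be loaded; the environment variable name is my own choice:

    import json
    from os import environ

    # Parse a JSON object such as {"US": ["+1.555.555.1212"]} from the environment
    PHONE_NUMBERS = json.loads(environ.get("PHONE_NUMBERS", "{}"))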

Setting Call-Out

At this point we can do everything a standard call-in phone conference can do, but we can also optionally allow for call-outs, which is to say that we can initiate a phone call from inside a conference.

This can be useful if you need to contact someone directly and don't want to go through the dance of having them call in. But because this allows initiating outbound calls, it's advised that it only be enabled on Jitsi installations that have authentication turned on!

With that warning out of the way, let's make a new endpoint!

    @app.route('/callOut')
    def call_out():
        """Make an outgoing call"""
        caller_id = request.args['callerId']
        to = request.args['To']
        to_formatted = to.split('@')[0].split(':')[1]
        resp = VoiceResponse()
        resp.dial(to_formatted, caller_id=caller_id, answer_on_bridge=True)
        return str(resp)

As you can see, it takes in two arguments, To and callerId. To contains a full SIP address, so what we need to do is strip that out so it looks like a phone number in E.164 format, ie a + symbol, then the country code and phone number.
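As a quick illustration (the SIP address below is made up), the stripping in callOut turns the full SIP URI back into an E.164 number:

    to = "sip:+15555551212@example.sip.twilio.com"
    to_formatted = to.split('@')[0].split(':')[1]
    # to_formatted is now "+15555551212"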

Twilio's policy is that the callerId must be a number we have associated with our account, either one that we bought from them or one we have verified. We'll supply that manually, but we could also be clever here and look at other factors in deciding which caller ID to supply. For example, we might have numbers in different countries and want to use the appropriate number for the country we're dialing out to. As long as the number is either through Twilio or verified with them, we can do that. In this case, though, I've simply supplied the callerId as an argument to the script in Twilio's SIP domain configuration.
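Here's a rough sketch of what that cleverness could look like- this is not part of my example code, and COUNTRY_CALLER_IDS and the numbers in it are entirely hypothetical:

    # Map country calling-code prefixes to caller IDs we own or have verified
    COUNTRY_CALLER_IDS = {
        "+44": "+442055551212",   # hypothetical UK number
        "+1": "+15555551212",     # hypothetical North American number
    }

    def pick_caller_id(to_number, default="+15555551212"):
        """Choose a caller ID whose country code matches the destination number."""
        for prefix, caller_id in COUNTRY_CALLER_IDS.items():
            if to_number.startswith(prefix):
                return caller_id
        return default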

The final bit of setup, then, is to go to your SIP domains (ie https://www.twilio.com/console/voice/sip/endpoints?), click on your SIP domain, then put the URL (ie https://example.com/callOut?callerId=+15555551212) in the “A CALL COMES IN” field.

That may seem a little confusing at first, but we need to think about it from the perspective of the SIP endpoint. It is what's getting a call, which is why it's considered “inbound” for it.

Final thoughts

And that's it! A fully functional system for both calling in and calling out with Twilio and Jitsi! All we had to do was write a tiny amount of glue code and voilà, we have a powerful connection between our phone system and a conference system! If you don't use it much (like me) then the price for this is going to be fairly inexpensive, and we didn't have to set up a PBX server like Asterisk- just a little web server!

There's certainly a lot more to do here if you want to turn this little toy into a “real application”. You'll want to change the voices in your call-in to something pleasant, you'll want to set up real logging, and probably a more substantial database than what we have, but this should be a good launching point for a beginner.

Enjoy!

In 2004, I was sitting in my living room watching one of my favorite programs- Frontline– a PBS show doing investigative journalism on a variety of topics.

This particular episode, “The Persuaders”, was about how advertising was not only changing the way we buy but the way we think. As part of this discussion, they examined cults, ranging from the Hare Krishnas to “cult brands” like Apple.

As an example, they showed a person talking about Linux and how some people were “part of the tribe”. While they never showed the person's name, I recognized them and that moment struck me. It was like I was seeing myself in the mirror- I had spoken that way in the past, but seeing it in front of me, I realized I never wanted to look or think like that in the future.

Being part of a community is important. I've spoken in the past about how being part of the Free Software movement literally saved my life. Being part of this movement also included breaking free of terms like “Intellectual Property”, which have an impact on the way we think about these topics. It's also meant that I've discouraged the use of proprietary formats such as Microsoft Office, when the OpenDocument Format is both available and standardized.

At the same time, I've also seen people inadvertently use Free Software to divide or shame people. If someone uses the “wrong word” (for example Open Source instead of Free Software, or Linux instead of GNU/Linux), or admits to using proprietary software, they may get an earful from someone in the Free Software community.

I've thought a lot about why this is, and my conclusion is that there are three reasons why Free Software advocates become this intense about terms and phrasing. The first is that as we've learned about the ways that our society has indoctrinated us into thinking about these topics (including the idea that copyright is tantamount to property), we're motivated to help others break free of the mind-control that we were under. We want to liberate them the same way we ourselves were liberated.

The second reason is less altruistic but, I think, sadly just as true: these verbal signals are part of being “part of the tribe”, as the “Linux user” on Frontline put it. When I got my bachelor's in Psychology, I learned about the idea of cognitive dissonance and how it makes us love the things we suffer for. I believe that some of our strong reactions are part of this unconscious desire to “bring people into the fold”, doing the same kind of thing that was done to us.

Lastly and possibly most importantly, while such people are highly disruptive and hurtful, in reality, they represent a small minority of the community.

When I started the Libre Lounge podcast with my friend Chris Webber, one of my goals was to widen the umbrella and embrace more people into Free Software with open arms. We want to bring new people to Free Software and help them see that we are a warm and caring community.

In doing that, we've talked about a variety of topics, worked hard to bring on guests with varying backgrounds, connected larger cultural movements to our own, and generally tried to retain the sense of fun and playfulness that we think is so important in maintaining a healthy community.

Occasionally disputes arise around terminology and in those moments, I'm reminded of the old joke by Emo Philips:

Once I saw this guy on a bridge about to jump. I said, “Don't do it!” He said, “Nobody loves me.” I said, “God loves you. Do you believe in God?”

He said, “Yes.” I said, “Are you a Christian or a Jew?” He said, “A Christian.” I said, “Me, too! Protestant or Catholic?” He said, “Protestant.” I said, “Me, too! What franchise?” He said, “Baptist.” I said, “Me, too! Northern Baptist or Southern Baptist?” He said, “Northern Baptist.” I said, “Me, too! Northern Conservative Baptist or Northern Liberal Baptist?”

He said, “Northern Conservative Baptist.” I said, “Me, too! Northern Conservative Baptist Great Lakes Region, or Northern Conservative Baptist Eastern Region?” He said, “Northern Conservative Baptist Great Lakes Region.” I said, “Me, too! Northern Conservative Baptist Great Lakes Region Council of 1879, or Northern Conservative Baptist Great Lakes Region Council of 1912?” He said, “Northern Conservative Baptist Great Lakes Region Council of 1912.” I said, “Die, heretic!” And I pushed him over.

When we argue about who is “more pure”, or when we tell people that they're bad or evil because they don't use the same software stack or the same terminology we do- then we've lost the point of Free Software, which is to spread Freedom.

Be the person who welcomes, not the one who shuns.

EDIT March 26th: I've edited this post for clarity, since my original point was lost in some of the details. I've also provided more citations.

Introduction

Just over a year ago, Chris Webber gave a talk at CopyleftConf about how the AGPL is incompatible with a style of computing.

If you want to read the slides, they're at: https://dustycloud.org/misc/boundaries-on-network-copyleft.pdf

Sadly there hasn't been much discussion about it since, so I'm going to throw my hat into this rodeo- or some metaphor to that effect.

Before we wrestle with bulls, let's talk about the goal of the AGPL and why it's important in the Free Software ecosystem.

As most people reading this probably already know, the GNU GPL is a license that says that if you have a program, you're entitled to use it, copy it and modify it, and that if you distribute it to others, you must do so under the same terms under which you received it. It's “Share and Share Alike.”

But what does this mean when we have applications that run remotely, such as web applications where executing the program means executing code on someone else's computer? The AGPL states that if you take an AGPL program and make it available to others, you have the same obligation to offer them the source, whether you release the program as a binary or make it accessible for execution over a network.

This is a good thing in my opinion. Running a program in a networked way to get around the GPL is an anti-social thing to do.

With that out of the way, let's dive in.

A simple program

Let's first begin with the idea of a program where state is captured inside execution, rather than in variables. If you know what a closure is, then you can skim or skip this part.

If you don't know what a closure is, you might be wondering what the heck I'm talking about, but it's really not that hard to imagine. Let's take an example from Chris's own work.

Chris wrote their code in Scheme. I think the use of a Lisp can lead people to come to the conclusion that this is somehow a Lisp related issue, so I'm going to write my code in Python in order to show that the issue is universal.

Chris proposes that some programs may contain private data but at the same time be stateless. This was hard for me to wrap my head around at first, but we can write a program like this fairly easily:

    def make_greeter(greeter_name):
        return lambda guest_name: print(f"Hi {guest_name}, I'm {greeter_name}!")

With this, we can construct a greeter named Alice

    alice = make_greeter("Alice")
    alice("Bob)

And we'd get back “Hi Bob, I'm Alice!”. What's important here is that the alice function doesn't maintain state. The “Aliceness” is constructed at the time the function is defined.

The data in this case is actually the “Bob” string and not the “Alice” string. The “Alice” string is part of the alice function's executable code.

It's a nifty trick, but it has some deeper implications.

Turning our program into a service

Imagine that instead of being generated on the Python shell, there was some external database, and instead of just being a name, the function also contained private information.

Let's rewrite our program with that in mind. We'll create a database of people and their favorite colors.

    db = {
        'alice': 'red',
        'bob': 'blue'}

    def make_person(name, color):
        return lambda guest_name: print(f"Hi {guest_name}, I'm {name} and I like {color}")

    people = [make_person(*record) for record in db.items()]

Remember, our secrets aren't contained within our database- they're contained within the functions themselves. While this example is trivial, we're starting to see how this could become interesting.

Let's up the ante a bit by turning this into a network application.

    from flask import Flask, abort, request
    app = Flask(__name__)

    db = {
        'Alice': 'red',
        'Bob': 'blue'}

    def make_person(name, color):
        return lambda guest_name: f"Hi {guest_name}, I'm {name} and I like {color}.\n"

    people = {name: make_person(name, color) \
              for (name, color) in db.items()}

    @app.route('/<person>')
    def show_greeting(person):
        guest = request.args.get('guest')
        # Return the greeting if we know this person, otherwise 404
        if person in people:
            return people[person](guest)
        abort(404)

And run it:

    serge@laptop:~$ curl http://localhost:5000/Alice?guest=Bob
    Hi Bob, I'm Alice and I like red.

Nifty, but not especially different from the previous example, except as it applies to the AGPL.

We can take this example in one of two directions, both of which I believe break the AGPL.

The first is that we might imagine the database contains some other secrets, but that we're encoding these secrets as code. Let's imagine that we have a service that lets doctors, and other services that we explicitly permit, have access to health-related data about us.

As privacy-oriented developers, we may want to self-host this application. I certainly feel better about running my own services, especially where sensitive/private data is concerned.

As far as the standard GPL is concerned, this is no problem. My private version of my application that only runs on my computer is entirely mine. But the AGPL is different- the network accessibility of the service places the program under the same distribution terms as we would have if we were to distribute the program.

Configuration as Code

How realistic is this scenario of using code for configuration? It's far more common than you might originally think. As Chris's talk points out, it's extremely common in Lisp to use this method- but it's not limited to Lisp by any means. Several popular Python web frameworks use a config.py file, and PHP developers use config.php.

This is because while the licenses do not pertain to running environments, these configuration systems turn the configuration “data” into executable code. That is distinct from, for example, pulling data from a YAML or config.ini file, because a config.py file is being interpreted as code and becoming part of the program itself.

This is largely a non-issue because in a vast majority of cases there is a distinction between the types of static variables placed inside a configuration file and the dynamic code that's inside the program files, but this doesn't have to be the case. It's possible to write configuration that contains executable code, and if that executable code modifies the behavior of the application itself, then it is indistinguishable from program code.
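As a contrived sketch of what I mean (the file and function names here are my own invention), imagine a config.py whose “configuration” is itself a function that rewrites the application's output:

    # config.py -- nominally "configuration", but it contains executable program logic
    GREETING = "Hello"                  # plain data: clearly just configuration

    def transform_response(text):
        """Configuration as code: rewrites every response the application produces."""
        return text.upper() + "!!!"

    # app.py -- the application imports its config and calls into it
    import config

    def render(text):
        return config.transform_response(f"{config.GREETING}, {text}")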

Does this mean you can't write a Python application that uses config.py or a PHP program that uses config.php under the AGPL? In most cases, simply storing a variable statically in one file or another makes no difference, but as configuration grows in complexity to include functionality, that line begins to blur. While I'm not a lawyer, I believe that if your configuration is sufficiently complex that it is indistinguishable from code, then without relicensing the configuration files you will need to publish it as code under the AGPL.

Obviously this is not the intent of the AGPL, and this specific scenario is easily remedied by separating out and separately licensing the config files, but this is a conscious action that the developer must take.

Plugins

Let's take on a more complex version of this problem: What happens when applications are not simply monolithic, stand-alone things, but when they include components that are external in some way?

Chris, in a Reddit reply to this post, mentions browsers- so let's use that as an example. If you're reading this, you most likely are doing so in a web browser. You're also likely to have one or more plugins. Plugins are application logic that extends the functionality of your application in some way. The plugins may be under a variety of licenses- anything from extremely permissive to entirely proprietary.

If your browser is under the GPL, the waters become very murky as it relates to the licensing requirements of plugins. Wordpress, the popular CMS and blogging platform, has stated that Wordpress plugins should (or possibly must) be released under the GPL. That is because a plugin is not a stand-alone work. A plugin depends on the Wordpress application framework, and thus plugins are derived from (or, as GPLv3 calls it, “based on”) the original program.

For GNU GPL applications, this is a bit of an oddity, as while Wordpress may require plugins to be under the GPL, they cannot compel users running proprietary plugins to provide source code to them. With the AGPL, a network user of the program has the same rights as a person downloading the program.

This is a lot to take in, but we're not quite done yet. In Spritely Goblins, the system Chris is developing, there is no distinction between a local program execution and one that runs on the network. While some developers may be used to thinking about remote procedure calls and remote APIs, the Goblins model makes this distinction largely invisible to the user and even the developer- program logic may be run locally, on a nearby server owned by the same person, or halfway around the world by someone they've never met.

Goblins, by design, erases the distinction for a programmer between code that runs internally and code that runs externally- between code that is and isn't being run at arm's length.

Under the GPL, this is no problem- network services are at arm's length and thus there's no problem with integrating your GPLed internal code with some external proprietary service. But under the AGPL, network services are explicitly included.

A brief review

...That was a lot to cover, so let's review briefly.

  • Some programs are going to be Free Software, but contain “proprietary parts” because they need to for privacy reasons.

  • Plugins that are written for an AGPLed system must be AGPLed, even if they operate across the network

  • Therefore we have an impedance mismatch between the intent of the AGPL (to protect Software Freedom) and personal privacy, which is amplified on a system that makes no distinction between local and network code

In the land of tomorrow...

Now that this is covered, let's get weird...

Spritely Goblins has the potential to do more than just provide remote procedure calls for remote applications- it's designed so that it could also take object code and safely execute it locally.

This may seem strange at first, but a longer-term goal of Spritely appears to be to take in-memory object code and ship it to another machine where it can be safely executed. I say “appears” here because I don't see mention of this in the Spritely docs, but it is something Chris and I have discussed privately.

In terms of functionality, this is extremely powerful, but it gets complicated when we talk about source code requirements. As people who have done work in the field of Reproducible Builds know, making software reproducible is not trivial, and if instead of shipping object code we had to ship source code around, this would place a large burden on the recipient system, which would need not only to build the source but possibly also to replicate the remote environment.

Even if we were able to replicate the remote build environment for every single program we might encounter, requiring us to build software just to use it is a high barrier to entry. We in the Free Software world most often distribute programs as binaries because we know what a burden it would be to require every program to be compiled.

And even when building a program is possible in principle, it might be practically infeasible. We are seeing the beginnings of artificial intelligence systems that build models, or sometimes build software themselves. A model, or a piece of software built by an artificial intelligence, may be replicable, but replicating it is impractical by virtue of its sheer size.

In a system like Spritely Goblins, the peer-to-peer network design allows us to safely integrate other people's programs into our own by using the OCAP security model. With security addressed, and the ability to run code either remotely or locally from anywhere, the possibilities for computing start to seem infinite- but if we had to build every single program we encountered, it would be a major wet blanket.

Where does this leave us?

I care deeply about software and user freedom. Heck, I do a podcast about it with Chris. I've mentioned on multiple episodes that Free Software has saved my life. It's a part of me and important.

The goal of the AGPL is noble, and I agree with it, but it's clearly not compatible with the type of programming that is coming down the pike.

So what do we do?

Chris's suggestion is that the GPL is sufficient, but I don't agree.

Instead, I think that we need to capture the spirit of the AGPL in a new license, or a new revision of the AGPL, that can accommodate this new model.

Let the discussion begin!

Listening to the news about the Democratic party can be disheartening at best. This week a story in the New York Times came out discussing how DNC leadership is willing to disenfranchise up to half the party in order to prevent Bernie Sanders from getting the nomination.

They claim that this is in order to solidify a win. They claim that it's the swing voters that they're courting and that those voters would never vote for Sanders. It's policies, they claim, or occasionally they'll claim it's those mid-westerners and their anti-Semitism, usually while engaging in anti-Semitic tropes themselves.

Meanwhile, On the Media put out a story this weekend about the disenfranchised progressive voter- just how many progressives are turned away from voting, or vote for a third party rather than vote for a moderate.

On its face, these two situations don't reconcile. The Democratic Party must want to win, mustn't it? Instead of courting Republicans who might somehow be persuaded to vote for a Democrat (despite Trump's 80% approval rating amongst Republicans), why wouldn't they work to energize the voter base- to register more underprivileged, undercounted, underrepresented people and energize the youth?

Why wouldn't the DNC want to show the country that Trump is wrong in his “Do nothing Democrats” taunt, that the Democratic Party does have a grand vision as a counter to the grand vision of Republicans?

The answer is simpler than it seems... The DNC's fear-mongering about Sanders not being a viable candidate is not aimed at Republicans or at the moderates amongst its ranks, but at the DNC leadership itself.

We see this reflected not just in political circles, but in the corporate “liberal media”, where Sanders is consistently painted in a negative light, even on self-described liberal news outlets.

The fact is that the critique many Republicans have made about the hypocrisy of the Democratic party is real. There is a “Limousine Liberal” class with a vested interest in the status quo, one that decries Trump's “Make America Great Again” slogan but pines for the days of the Clinton era, when public programs were cut but, since corporate growth was high, only poor and brown people noticed.

Sanders makes the DNC uncomfortable because he forces the Democratic party to come face to face with the reality that it doesn't exist for poor people, brown people or the youth, but rather to keep things simmering just far enough below the surface to keep the lid from popping off.

With Trump in office, the lid has popped off and now the DNC leadership is scrambling to figure out how to keep control of the narrative. They've invented a make-believe voter, a Joe or Jane Republican who watches Fox News but will be persuaded to vote for a “moderate” Democrat.

It's time for the DNC leadership to get honest with itself and the American people. The Democratic coalition is breaking apart at the seams. The party is split between two very different ideas: one where we dream of the 90s, and the other where we live in the present and offer the people a comprehensive plan to enact sweeping changes that will save our children, help heal our environment and repair our decaying infrastructure.

I've lived through the 90s and I don't want another Bill Clinton. I want another Franklin Roosevelt.

Datashards is finally getting traction in the world and so it's time to reflect on where we are and where the project is going.

Datashards is a project that offers a new storage primitive for secure data storage and transmission. With Datashards, data at rest is encrypted and also protected against data-shape attacks. Datashards is designed to work either online or offline, and it even lets you store your data on someone else's machine, even if you don't trust them.

Datashards has the opportunity to be an entirely transformational technology in terms of being able to safely store and transmit data.

We've already proven the concept works and that we can implement it in multiple languages: we have Fixed Datashards (previously Immutable Datashards) implemented in Racket and Python, and Updatable Datashards (previously Mutable Datashards) in Racket.

In the next few months, we'll be working to get Updatable Datashards implemented in Python.

We're also working with a talented and dedicated software developer to get a Javascript implementation of Datashards (both Fixed and Updatable), which we hope will open up many new opportunities.

We will be highlighting these libraries on the Datashards website, along with documentation on how Datashards works and implementation guidelines.

In even more exciting news, we're starting work on a protocol built on top of Datashards designed to enable Datashards servers to communicate.

Datashards is a storage primitive. In that way, it's a bit like the concept of a file- useful as a concept, but without implementations and applications, nothing more than an interesting idea. The protocols that we're building on top of Datashards are akin to a filesystem built on top of those primitives, and they will allow developers to build interesting things using Datashards.

In order to accomplish this task well, Chris and I have been working with possible users of the technology, researching similar systems from the past, and studying various peer-to-peer messaging technologies and patterns, all in order to build something that is practical, scalable and built on solid engineering principles.

Thoughts on Canonical S-Expressions

Datashards currently uses Canonical S-Expressions as a data format and after using it for a few months, I have some thoughts.

First things first: If you aren't familiar with the format, let me give you a quick rundown. Canonical S-Expressions are a bit like regular S-Expressions, with a twist. If you already know Lisp, none of this will be new, but for the rest of you: there are two kinds of items in an S-Expression- a list and an atom. A list is what it sounds like- a sequence of things. And an atom is a thing. An S-Expression looks like:

(item1 item2 item3 item4)

If you're familiar with Python or Javascript, you can think of that as the same as:

[item1, item2, item3, item4]

In Canonical S-Expressions (csexp), every atom is actually a byte object, and we say the size of the byte object by prepending it with the number of bytes, followed by a colon:

(5:hello5:world)

That's a list of two items, 'hello' and 'world'. I'm putting these in quotes but the values aren't strings, they're bytes. That means it's very efficient to put raw binary data in a csexp. If you put binary data in JSON, you'd have to do something like base64 encode it. No need in csexp!
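To make that difference concrete, here's a rough Python sketch (my own illustration, not code from any Datashards implementation) putting the same eight raw bytes into a csexp and into JSON:

    import base64, json

    payload = b"\x89PNG\r\n\x1a\n"        # eight raw bytes (a PNG file header)

    # csexp: the atom is length-prefixed raw bytes- no escaping, no encoding.
    csexp = b"(" + str(len(payload)).encode() + b":" + payload + b")"

    # JSON: the bytes first have to be wrapped in something like base64,
    # which costs roughly a third more space plus the encode/decode step.
    as_json = json.dumps({"data": base64.b64encode(payload).decode("ascii")})

    print(csexp)    # b'(8:\x89PNG\r\n\x1a\n)'
    print(as_json)  # {"data": "iVBORw0KGgo="}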

You can also give a “type hint” in csexp, so if you have a binary object that represents an image, you can stick the mimetype in the csexp, such as:

([image/jpeg]1024:<bytes>)

You can also store other lists inside of a csexp, such as:

(9:groceries(4:milk5:bread))

What I Like

There's a lot to like about Canonical S-Expressions. They're extremely space efficient, very flexible and super easy to parse. Writing a parser for a csexp is fairly trivial. And even if your language doesn't already have a csexp library, you can easily write one in a day, if not an afternoon.
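To show just how small such a library can be, here's a minimal parser sketch in Python (my own illustration, not the parser Datashards actually uses): it handles length-prefixed atoms and nested lists only, with no [type] hints and no error handling.

    def parse_csexp(data, pos=0):
        """Parse one csexp value from `data` starting at `pos`.
        Returns (value, next_pos); lists become Python lists, atoms stay bytes."""
        if data[pos:pos+1] == b"(":
            pos += 1
            items = []
            while data[pos:pos+1] != b")":
                item, pos = parse_csexp(data, pos)
                items.append(item)
            return items, pos + 1
        # Otherwise it's an atom: "<length>:<bytes>"
        colon = data.index(b":", pos)
        length = int(data[pos:colon])
        start = colon + 1
        return data[start:start+length], start + length

    value, _ = parse_csexp(b"(9:groceries(4:milk5:bread))")
    print(value)   # [b'groceries', [b'milk', b'bread']]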

The other thing I like about Canonical S-Expressions is that they do what they claim to do and nothing else. They're a binary format that only does byte strings and lists.

What's Not to Like About Canonical S-Expressions

Working with CSEXP data can be a pain. You're always stuck writing a reader for your data: the reader takes the abstract parsed data and converts it into something your application will actually consume. In some cases this conversion is easy- 3:100 becomes the integer 100. If you want to store more complex data structures, such as associative arrays, however, you'll need to think about it.

Since CSEXP doesn't have associative arrays, only lists, you'll end up designing the serialization/deserialization scheme on your own. You could store them as lists of lists, ((key val) (key val)), or in the more compact form (key val key val), or you could (ab)use the hint system, as in ([key]value [key]value). Whatever choice you make, it will be specific to your application, and someone who reads the document will need to understand the choices you made beforehand. Or, if you're inheriting data in this format, you may end up having to guess at the meaning of the data structure.
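As a concrete example, here's a hypothetical Python sketch of the first two choices; the helper names are mine and aren't part of any existing csexp library:

    def atom(value):
        """Encode one value as a csexp atom (length-prefixed bytes)."""
        raw = value if isinstance(value, bytes) else str(value).encode("utf-8")
        return str(len(raw)).encode() + b":" + raw

    def dict_as_flat_list(d):
        """The compact choice: (key val key val ...).
        The reader on the other side has to know to pair the items back up."""
        return b"(" + b"".join(atom(k) + atom(v) for k, v in d.items()) + b")"

    def dict_as_list_of_pairs(d):
        """The list-of-lists choice: ((key val) (key val) ...)."""
        return b"(" + b"".join(b"(" + atom(k) + atom(v) + b")" for k, v in d.items()) + b")"

    record = {"name": "milk", "qty": 2}
    print(dict_as_flat_list(record))      # b'(4:name4:milk3:qty1:2)'
    print(dict_as_list_of_pairs(record))  # b'((4:name4:milk)(3:qty1:2))'

Either encoding round-trips the same data; the point is that nothing in the format itself tells a reader which convention was chosen.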

This type of step is necessary for many serialization formats. In some, like Protobufs, it's a requirement. In XML it was not strictly necessary, but it was almost always done, and in some applications using JSON it may not be necessary at all.

Canonical S-Expressions occupy a strange middle ground: a formal schema is not strictly necessary, since the format is schema-less, but it's also challenging to work without one.

Flexible (Schema-less) Data Serialization Formats

Flexible data formats are a topic of deep discussion and debate. In the 90s, it seemed that the world had converged on XML as the One Format to Rule Them All. The problem with XML is that even though the format is self-documenting in some ways, ie <tag></tag>, the values inside the tags still need to be converted by a secondary reader, separate from the parser.

Since this distinction isn't always clear, let me spell it out: the parser turns the raw data into a machine-readable data structure, while the reader turns that (usually already-parsed) data into application-specific data structures.

Canonical S-Expressions have the same problem as XML with regard to needing a reader, but unlike XML, you don't have the storage or bandwidth overhead of the tags.

JSON seems to have won the generic data format wars by offering some types, making writing a reader trivial (or, in some cases, unnecessary), but anyone who has ever had to work with JSON knows that its thin layer of types is misleading. As an example: how do you store a date in JSON?

You could store it as Unix time (seconds since the epoch), or as an ISO 8601 formatted string, ie "2008-09-15T15:53:00+05:00", or in RFC 822 date format, or something else entirely. Your parser will happily give you a string, but you're stuck needing a reader to do that final conversion, just like you did with XML.
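Here's a small Python sketch of that reader step (the field name and date value are just examples): the parser gives us a string, and it's our own code that has to know which date convention the field uses.

    import json
    from datetime import datetime

    doc = '{"published": "2008-09-15T15:53:00+05:00"}'

    parsed = json.loads(doc)            # the parser hands back a plain string
    print(type(parsed["published"]))    # <class 'str'>- not a date

    # The final conversion is the "reader" step; nothing in the JSON itself
    # says the field is ISO 8601 rather than Unix time or RFC 822.
    when = datetime.fromisoformat(parsed["published"])
    print(when.isoformat())             # 2008-09-15T15:53:00+05:00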

JSON-LD solves some of this by giving your values semantic meaning, but it makes the parser more complex.

And neither XML nor JSON handles binary data well. To store binary data in either format, you must first convert it to something like Base64, which adds roughly a third to the size along with encoding and decoding overhead.

Canonical S-Expressions offer none of the overhead of XML and don't claim to do type conversions. Since you'll need a reader anyway, you can do your type conversions in that step.

Further Thoughts and Alternatives

In practice, having some type assistance does offer benefits: it makes your reader simpler and the format more pleasant to work with. So while I appreciate csexp's simplicity, I find working with it to be more challenging than it should be.

One thought that I keep having while I'm using csexp is to use the type hints to store information such as the data type. Imagine if instead of:

20:2019-10-02T07:11:07Z

We instead stored:

[iso8601]20:2019-10-02T07:11:07Z

That would give us the data type, and we could let the reader take some of the work off of our program logic. This is similar to JSON-LD's method of storing semantic data.
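As a rough sketch of what such a hint-aware reader might look like- this is purely hypothetical, and the hint names and reader table are my own invention rather than an existing format:

    from datetime import datetime

    # Hypothetical "semantic csexp" readers keyed on the type hint,
    # following the [iso8601]20:... idea above.
    READERS = {
        b"iso8601": lambda raw: datetime.fromisoformat(
            raw.decode("ascii").replace("Z", "+00:00")),
        b"int": lambda raw: int(raw),
    }

    def read_atom(hint, raw_bytes):
        """Convert a hinted atom into an application-level value.
        Atoms with no recognized hint stay as plain bytes."""
        return READERS.get(hint, lambda raw: raw)(raw_bytes)

    print(read_atom(b"iso8601", b"2019-10-02T07:11:07Z"))  # 2019-10-02 07:11:07+00:00
    print(read_atom(b"int", b"253"))                       # 253
    print(read_atom(b"jpeg", b"\xff\xd8\xff"))             # stays as raw bytes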

I personally like this idea, but it requires changes to the readers to recognize a new “Semantic Canonical S-Expression”.

A simpler idea would be to store some type information alongside the data, so instead of 3:253, you might store I3:253, with “I” indicating that the value is an integer. This is close to what the Bencoding format does (Bencoding writes the integer 253 as i253e, marking the type explicitly). Bencoding offers many of the same benefits as CSEXP but, because it also supports types, is a bit easier to work with. The downside, as always, is that this helpfulness comes at some cost in storage and bandwidth.
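To make the comparison concrete, here's a minimal (and deliberately incomplete) Bencoding encoder sketch in Python- just enough to show the explicit type markers next to csexp's bare, length-prefixed atoms:

    def bencode(value):
        """Minimal Bencoding encoder: integers, byte strings and lists only."""
        if isinstance(value, bool):
            raise TypeError("Bencoding has no boolean type")
        if isinstance(value, int):
            return b"i" + str(value).encode() + b"e"        # type marker is explicit
        if isinstance(value, bytes):
            return str(len(value)).encode() + b":" + value  # same shape as a csexp atom
        if isinstance(value, list):
            return b"l" + b"".join(bencode(v) for v in value) + b"e"
        raise TypeError(f"unsupported type: {type(value)!r}")

    print(bencode(253))                                  # b'i253e'
    print(bencode([b"groceries", [b"milk", b"bread"]]))  # b'l9:groceriesl4:milk5:breadee'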

Other formats exist as well. I previously mentioned Bencoding, but there are also MessagePack, ASN.1, CBOR, and the newest, Preserves. Each of these takes a different approach, though they all center around the same problem- making it easy to store arbitrary data, especially binary data, on disk and on the network.

It's beyond the scope of this post to delve into each of them. I think Preserves is the most interesting of the formats- it's certainly the most expressive while remaining compact- but since I haven't used it, I don't know whether that expressiveness is something I need, or whether I could simply use Bencoding or MessagePack to the same effect.

Conclusion

Canonical S-Expressions are a great, flexible, compact data format. They're fast and efficient, and if you have straightforward needs, they're certainly worth checking out. In my use case, Datashards, they fit our current needs. If we end up wanting to store more complex data structures in the format, such as associative arrays, that will be the time to re-evaluate the format choice and see if something else would be a better fit.

On Long-Form Blogging

In 1995, I got my first taste of the World Wide Web. That's a funny thing to think about now, but at the time it was very new and most websites that I found were weird, off the wall, and amazingly amateurish. I found sites about Bonsai Kittens, connecting soda machines to the internet, lucid dreaming and a bunch of vanity websites from people just wanting other people to know they existed.

In 1997, I ran my very own website from my dorm room. It was thanks to Microsoft Personal Web Server, and it let my humble desktop PC present me to the world. I used it to host essays for school... before it crashed.

In the early 2000s, I found LiveJournal and at the time, LiveJournal filled the same role in my life that the Fediverse does now. I had real life friends who followed me on LiveJournal. I had friends from LiveJournal, I met people through people on LiveJournal and was exposed to new thoughts, ideas and experiences through reading about others' lives.

I loved it so much, I was not only a paid subscriber, but I paid for a lifetime membership. ...Until the site was bought out by a Russian company and I closed my account.

When Twitter came onto my radar, it was through geeky friends who had seen it at a conference. It was a Rails project and it was a bridge to SMS texting. It felt more like IRC than blogging. Blogging was at least a few paragraphs, and posts spoke to something about the person's experience. They might be personal or technical, but they felt intimate and connecting. Twitter was 140 characters.

Mastodon made the choice to allow 500 characters, which is more than three times better! But as time has gone on, I've found myself writing posts that span three, four or five toots. This isn't a limitation of ActivityPub- it's a design choice by Mastodon to limit itself to microblogging.

But I miss blogging, and if Medium has taught me anything, it's taught me that other people miss it too, and they're even willing to put up with Medium to have it!

So I'm using Write Freely/Write.as to blog again. With ActivityPub, people can subscribe to my posts just as easily as they could on LiveJournal, either through ActivityPub or RSS. And who knows, maybe this whole thing will take off and I'll be able to feel like I really know people's thoughts and feelings again. Maybe we can bring the humanity back to social networking.