Introduction

Over the past couple of months, I’ve been recording a series of live-coding sessions exploring Crystal’s concurrency by building a simple terminal-based app.

I decided to collect them all here with some comments and errata. You can find the source code on GitHub, in case you feel like following along.

What are we building?

In this series, we’ll build a URL checker: a tool to fetch a user-defined set of URLs periodically, reporting on their health.

Along the journey, we’ll implement basic tracking and alerting functionality, as well as a polished terminal UI to display information. While doing so, we’ll introduce and explore concurrency concepts and patterns from a paradigm called Communicating Sequential Processes (CSP).

Session 1 - Getting started

In this session, we lay the foundations for our terminal-based, concurrent application written in Crystal.

The video is quite long, but you can use the items in the list below as bookmarks. Remember you can speed up the playback if things get a bit too slow for you 🏃‍♂️

  1. Initialising a Crystal app
  2. Making HTTP calls
  3. Reading from config files
  4. Concurrently checking URLs with Channels and Fibers (see the sketch after this list)
  5. Printing tables on the terminal with Tablo
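
As a taste of item 4, here’s a minimal sketch of the pattern (not the exact code from the session; the URLs are placeholders): each URL is checked in its own Fiber, and the results flow back to the main Fiber over a Channel.

```crystal
require "http/client"

urls = ["https://example.com", "https://crystal-lang.org"] # placeholder URLs

# One tuple per check: the URL and the status code we got back (0 on error).
results = Channel({String, Int32}).new

urls.each do |url|
  spawn do
    begin
      response = HTTP::Client.get(url)
      results.send({url, response.status_code})
    rescue
      results.send({url, 0})
    end
  end
end

# The main fiber collects exactly one result per URL.
urls.size.times do
  url, status = results.receive
  puts "#{url} -> #{status}"
end
```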

Session 2 - Classes, modules, tasks

In this session, we start organising our code by splitting concerns and encapsulating logic into modules and classes.

If you’re new to Crystal and are not familiar with Ruby, then I think you’ll find these valuable 💰

  1. Classes and aliases
  2. Extracting tasks into modules
  3. Scheduling periodic tasks (see the sketch after this list)
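
As a rough idea of item 3, a periodic task can be as simple as a Fiber that loops and sleeps. This is only a hedged sketch: the every helper is an illustrative name, not the module built in the video.

```crystal
# Run the given block every `interval`, forever, in its own fiber.
def every(interval : Time::Span, &block : -> Nil)
  spawn do
    loop do
      block.call
      sleep interval
    end
  end
end

every(30.seconds) { puts "checking URLs at #{Time.local}" }

sleep # keep the main fiber alive (stop with Ctrl+C)
```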

Session 3 - Around the clock

In this session, we expand on configuration handling and logging. We also introduce some new concurrency constructs: timers ⏰ and the select statement.

Macros are mentioned briefly. If you want to know more, then the reference manual is a good place to start.

  1. Type-safe config handling
  2. Sensible monkey patching
  3. Logging across Fibers
  4. Signals, timers, and the select statement (see the sketch after this list)
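
As a small taste of item 4, here’s a hedged sketch (not the code from the video) of a ticker fiber combined with Crystal’s select and its timeout clause; the intervals are arbitrary.

```crystal
ticker = Channel(Nil).new

# A timer fiber: emit a tick every second.
spawn do
  loop do
    sleep 1.second
    ticker.send(nil)
  end
end

loop do
  select
  when ticker.receive
    puts "tick"
  when timeout(5.seconds)
    puts "no tick in 5 seconds, giving up"
    break
  end
end
```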

Session 4 - Fibers: terminated

When sharing a channel between fibers, it’s important that we are clear on each fiber’s responsibility. Some will write to the channel, some will read from it, and some will mark the end of the communication by closing it.

In this session, we talk about channel ownership and discuss some fairly advanced termination strategies.

The video quality is not great 😞, but I recommend you squeeze every pixel out of this one, as the concepts covered here are fundamental to working effectively with channels and fibers.

  1. Fibers owning Channels
  2. Terminating groups of Fibers
  3. Propagating Channel closure throughout a pipeline (see the sketch after this list)
  4. Waiting for a pipeline to be done
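
To make the idea concrete, here’s a hedged sketch of a tiny two-stage pipeline (the stages and names are illustrative, not the app’s actual code): each stage owns the channel it writes to, closes it when it’s done, and downstream readers detect the closure with receive?.

```crystal
# Stage 1 owns `output`: it writes all the values, then closes it.
def produce(urls : Array(String)) : Channel(String)
  output = Channel(String).new
  spawn do
    urls.each { |url| output.send(url) }
    output.close # the owner marks the end of the communication
  end
  output
end

# Stage 2 reads until the upstream channel is closed, then closes its own output.
def shout(input : Channel(String)) : Channel(String)
  output = Channel(String).new
  spawn do
    while url = input.receive?
      output.send(url.upcase)
    end
    output.close # propagate the closure downstream
  end
  output
end

results = shout(produce(["https://example.com", "https://crystal-lang.org"]))

# Waiting for the pipeline to be done: this loop ends once closure reaches us.
while url = results.receive?
  puts url
end
```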

Session 5 - Two-way communication between Fibers

In this session, we talk about two-way communication between Fibers, taking inspiration from Elixir’s GenServer and Akka’s Actor. If the topic interests you, then you should also check out this deep dive.
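
The gist of the pattern, in a hedged sketch (Request and the hard-coded 200 are made up for illustration): each request carries its own reply channel, so the serving fiber knows where to send the answer, much like a GenServer call.

```crystal
record Request, url : String, reply : Channel(Int32)

requests = Channel(Request).new

# The "server" fiber: answer each request on its embedded reply channel.
spawn do
  while req = requests.receive?
    # Placeholder work: pretend we checked req.url and got a 200 back.
    req.reply.send(200)
  end
end

# The "client" side: send a request, then block until the reply arrives.
reply = Channel(Int32).new
requests.send(Request.new("https://example.com", reply))
puts "status: #{reply.receive}"
```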

Session 6 - Operations on channels

Partitioning is a powerful abstraction that lets us process streams of data differently based on some rule.

In this session, we show how to partition (and then merge) the data sent to a Channel based on a predicate.

We also define a stateful fiber to compute a moving average.

  1. Partitioning and merging channels (see the sketch after this list)
  2. Processing data over a sliding window
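
Here’s a hedged sketch of the partitioning idea (illustrative names and values, not the code from the session): a routing fiber reads from the input channel and forwards each value to one of two output channels based on a predicate.

```crystal
def partition(input : Channel(Int32), &predicate : Int32 -> Bool) : {Channel(Int32), Channel(Int32)}
  passed = Channel(Int32).new
  failed = Channel(Int32).new
  spawn do
    while value = input.receive?
      (predicate.call(value) ? passed : failed).send(value)
    end
    passed.close
    failed.close
  end
  {passed, failed}
end

input = Channel(Int32).new
fast, slow = partition(input) { |ms| ms < 500 }

spawn do
  [120, 900, 300].each { |ms| input.send(ms) }
  input.close
end

done = Channel(Nil).new
spawn do
  while ms = fast.receive?
    puts "fast: #{ms}ms"
  end
  done.send(nil)
end
spawn do
  while ms = slow.receive?
    puts "slow: #{ms}ms"
  end
  done.send(nil)
end
2.times { done.receive } # wait for both consumers to drain their channels
```

Merging is essentially the reverse: a fiber that reads from several channels and forwards everything it receives to a single output channel.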

Errata

The module AvgResponseTime is affected by the following bugs:

  1. most_recent.reduce(&.+) is equivalent to most_recent.reduce {|a| a.+}, which is definitely not what I was going for. You can use most_recent.reduce {|a,b| a + b} instead, or opt for the more compact most_recent.sum
  2. On the same line, we should be dividing the sum of the response times by the size of most_recent, rather than by width. Dividing by width produces the wrong result until most_recent is full (see the corrected sketch after this list).
  3. If you look carefully, you’ll notice that we’re computing the overall average response time, rather than the average response time by URL. We define a suitable data structure to store aggregated data by URL in video 8.2.
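
Putting points 1 and 2 together, the corrected computation looks roughly like this (most_recent and width are the names used in the video; the sample values are made up):

```crystal
width = 5                                     # size of the sliding window
most_recent = [120.0, 340.0, 95.0] of Float64 # window not yet full

# Sum the response times we actually have and divide by their count,
# not by the window width.
average = most_recent.sum / most_recent.size
puts average # => 185.0
```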

Session 7 - Testing concurrent code

When testing concurrent code, things get a lot easier when we split concurrency and business logic, so that we can test the two in isolation.
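
As an illustration of that split, here’s a hedged sketch (healthy? and classify are illustrative names, not necessarily the ones used in the session): the decision logic lives in a pure method, while a thin wrapper does the channel plumbing.

```crystal
# Pure business logic: trivial to unit-test, no fibers or channels involved.
def healthy?(status_code : Int32) : Bool
  (200..399).includes?(status_code)
end

# Thin concurrency wrapper around the pure method.
def classify(input : Channel(Int32), output : Channel(Bool))
  spawn do
    while code = input.receive?
      output.send(healthy?(code))
    end
    output.close
  end
end
```

In a spec, healthy? can then be exercised directly, e.g. healthy?(200).should be_true, without spinning up any fibers.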

In this session, we look into practices to make our concurrent code testable and simple strategies to test it.

  1. Writing robust tests for our channel partitioning method 🏋️‍♀️
  2. Refactoring to decouple concurrency and business logic 🧜‍♂️
  3. Testing non-deterministic output 🤷‍♀️
  4. Writing time-dependent tests ⏲️

Session 8 - Wrapping up

We close the season with a reprise on termination strategies, more stateful fibers and a shiny new UI 🚀

  1. Bringing rogue fibers to order with a more robust termination strategy
  2. Adding an alerting stage to our pipeline 🚨 (see the sketch after this list)
  3. Polishing the terminal UI with ncurses
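
For item 2, an alerting stage can be sketched as just another pipeline step (CheckResult and the 400 threshold are illustrative, not the app’s actual code): it forwards everything it receives and emits an alert whenever a check looks unhealthy.

```crystal
record CheckResult, url : String, status : Int32

def alerting(input : Channel(CheckResult)) : Channel(CheckResult)
  output = Channel(CheckResult).new
  spawn do
    while result = input.receive?
      puts "ALERT 🚨 #{result.url} returned #{result.status}" if result.status >= 400
      output.send(result)
    end
    output.close
  end
  output
end
```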

Notes

Once you add the dependency on crt, you might see the error cannot find -lgpm when compiling the app. Installing libncursesw5-dev should solve the issue.

This is it! I hope you had some fun and learned something while watching these videos. As for myself, I am currently looking for new concurrency-related topics to dig into, so I’d love to hear your suggestions on new topics or apps I could live-code - just leave a comment below to have your say 👇

If you’d like to stay in touch, then subscribe or follow me on Twitter.