© Copyright Kirk Rader 2023. All rights reserved.
Music
Or search for "Kirk Rader" on the music service you prefer
Over the years I have used a variety of traditional instruments and electronic gear to make, er, um, "music?" Yes, let's call it "music."
Studio Setups
Approaches & Techniques
My process for creating music has evolved as new technologies have become available, but conceptually it is not all that different from how I started out as a composer. I have loved listening to many types of music all my life. The first album that I bought with my "own" money was a recording of Schubert's Unfinished Symphony; I had worn the grooves out of it by the time I turned seven years old. By the time I was in my late teens, I was writing papers for my degree in Formal Linguistics and, a couple of years later, teaching myself to program computers while listening to a personal soundtrack that featured Lou Reed's Metal Machine Music (yes, the four LP sides' worth of guitar feedback and effects noise that ended his career just as it was starting to take off in the mid 1970's). You will find strong hints of all those influences (classical and experimental music together with mathematics and a love of algorithms) throughout my musical oeuvre. To me, an analog synthesizer is a device for translating what are essentially mathematical formulas into acoustic impulses, which are occasionally and serendipitously esthetically pleasing (at least to someone with my extremely idiosyncratic tastes). Computer software makes that even more literally true.
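That claim can be made quite literal in a few lines of code. Here is a minimal sketch in plain Ruby (not Sonic Pi; the frequency, amplitude, and sample rate are arbitrary illustrative choices) that evaluates the formula for a sine wave into a buffer of 16-bit audio samples:

```ruby
# A mathematical formula rendered as acoustic impulses: one second of a
# 440 Hz sine wave evaluated into 16-bit PCM samples.
SAMPLE_RATE = 44_100
FREQUENCY   = 440.0
AMPLITUDE   = 0.8

samples = (0...SAMPLE_RATE).map do |i|
  t = i.to_f / SAMPLE_RATE
  (AMPLITUDE * Math.sin(2 * Math::PI * FREQUENCY * t) * 32_767).round
end

# Pack as little-endian signed 16-bit words, ready to have a WAV header
# prepended or to be piped to a playback tool.
pcm = samples.pack("s<*")
puts "#{samples.length} samples, #{pcm.bytesize} bytes"
# prints "44100 samples, 88200 bytes"
```

Everything a synthesizer patch does (detuning, modulation, envelopes) amounts to altering the terms of that formula before it reaches your ears.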
Sonic Pi Ruby Code
Sonic Pi is a programming language, musical instrument, and MIDI sequencer all at the same time.
- Au_Quai.rb
- Decommisioned.rb
- Farandole_Lamentoso.rb
- Farandole_in_Tempore_Belli.rb
- Ghostly_Reminiscences.rb
Musicography
I started creating electronic music on the West Coast, in the "West Coast Style," long before I knew that there was such a thing. My musical tastes have always been outside the mainstream. The first album I purchased with my "own" money was a vinyl recording (there were no other commercially significant recording media for sale in the mid 1960's) of Schubert's Unfinished Symphony, which I had worn the grooves out of by the time I turned seven years old. When I was studying formal linguistics and teaching myself to program computers in my late teens and early twenties, I did so to a personal soundtrack that featured Lou Reed's Metal Machine Music (yes, the album consisting of four LP sides of electric guitar feedback and effects pedal noise that literally ended his career just as it was beginning to take off in the mid 70's and then went on to become the stuff of legend decades later).
My favorite composers include Tallis, Monteverdi, Vivaldi, Bach, Handel, Beethoven, Glass and Byrne as well as hardcore mid-century experimentalists like Stockhausen and Subotnick. If you have never heard any of my own stuff, consider the combination of those influences as fair warning.
Ringtones
Tone in C Major
UPC: 859777027563
ISRC: QZXKT2300101
Released: 2023
Singles
Undine
UPC: TBD
ISRC: QZXKT2300100
Released: 2023
Albums
Undecidable^{*}
UPC: 885767506688
Released: 2011 (recorded 1978–1985)
| Track No | Track | ISRC |
|---|---|---|
| 1 | Opus 1 | QMYYH1100001 |
| 2 | Opus 2 | QMYYH1100002 |
| 3 | Sample and Hold 2 | QMYYH1100003 |
| 4 | Segnahc Cigam | QMYYH1100004 |
| 5 | Sample and Hold 1 | QMYYH1100005 |
| 6 | Opus 6 | QMYYH1100006 |
Dum Vivimus
UPC: 198015910503
Released: 2023
| Track No | Track | ISRC |
|---|---|---|
| 1 | 2600 Redux | QZXKT2300001 |
| 2 | Dum Vivimus | QZXKT2300002 |
| 3 | Vivamus | QZXKT2300003 |
| 4 | Sliders and Knobs | QZXKT2300004 |
| 5 | and Cables | QZXKT2300005 |
| 6 | One Modulator to Rule Them All | QZXKT2300006 |
| 7 | Sonatina in No Particular Key Part 1 | QZXKT2300007 |
| 8 | Noise to Signal | QZXKT2300008 |
Untitled
UPC: 198015912095
Released: 2023
| Track No | Track | ISRC |
|---|---|---|
| 1 | Untitled | QZXKT2300009 |
| 2 | Whoop Whoop | QZXKT2300010 |
| 3 | Ring of Sines and a Square | QZXKT2300011 |
Songs Not Yet and No Longer Sung
UPC: 198015955498
Released: 2023
| Track No | Track | ISRC |
|---|---|---|
| 1 | Sonatina in No Particular Key Part 2 | QZXKT2300012 |
| 2 | Fire and Forget | QZXKT2300013 |
| 3 | Divergence | QZXKT2300014 |
| 4 | Douse and Remember | QZXKT2300015 |
| 5 | Faerie | QZXKT2300016 |
| 6 | Múspellsheimr | QZXKT2300017 |
| 7 | Niflheimr | QZXKT2300018 |
| 8 | Lament | QZXKT2300019 |
| 9 | Ostinato | QZXKT2300020 |
March
UPC: 198025013270
Released: 2023
| Track No | Track | ISRC |
|---|---|---|
| 1 | March | QZXKT2300021 |
| 2 | Ludwig van Synthpop | QZXKT2300022 |
| 3 | E Minor Tune | QZXKT2300023 |
April
UPC: 198025078903
Released: 2023
| Track No | Track | ISRC |
|---|---|---|
| 1 | Sonatina in No Particular Key Part 3 | QZXKT2300025 |
| 2 | Krell National Anthem | QZXKT2300024 |
| 3 | Triad | QZXKT2300026 |
| 4 | Iaulath Berúthiel | QZXKT2300027 |
| 5 | Sinusoidal Syncopation | QZXKT2300029 |
| 6 | PWM | QZXKT2300030 |
| 7 | Fall, Rise | QZXKT2300028 |
No Chatbots Were Harmed in the Making of This Album
UPC: 198025252013
Released: 2023
| Track No | Track | ISRC |
|---|---|---|
| 1 | Analogy in C Minor | QZXKT2300036 |
| 2 | π | QZXKT2300031 |
| 3 | Regret | QZXKT2300032 |
| 4 | YNBBB | QZXKT2300033 |
| 5 | Chicxulub | QZXKT2300035 |
| 6 | Rainy Day Fanfare | QZXKT2300037 |
| 7 | At His Heels, Leashed In Like Hounds | QZXKT2300038 |
| 8 | Crouched for Employment | QZXKT2300039 |
| 9 | Analogy in C Major | QZXKT2300034 |
Ambivalence
UPC: 198025421488
Released: 2023
| Track No | Track | ISRC |
|---|---|---|
| 1 | Effulgence | QZXKT2300045 |
| 2 | Vigilance | QZXKT2300040 |
| 3 | Perseverance | QZXKT2300041 |
| 4 | Coherence | QZXKT2300042 |
| 5 | Reluctance | QZXKT2300043 |
| 6 | Ebullience | QZXKT2300044 |
| 7 | Somnambulance | QZXKT2300046 |
| 8 | Emergence | QZXKT2300047 |
| 9 | Disturbance | QZXKT2300049 |
| 10 | Transcendence | QZXKT2300048 |
The Modern Temple of Amusement
UPC: 198025591679
Released: 2023
Volume 1
| Track No | Track | ISRC |
|---|---|---|
| 1 | Floating Theater | QZXKT2300050 |
| 2 | Mount Vesuvius | QZXKT2300051 |
| 3 | Soliloquy | QZXKT2300052 |
| 4 | Dramma Bernesco | QZXKT2300053 |
| 5 | Ghostly Reminiscences | QZXKT2300054 |
| 6 | Au Quai | QZXKT2300055 |
| 7 | Farandole Lamentoso (8-bit mix) | QZXKT2300056 |
| 8 | Farandole Lamentoso (2600 remix) | QZXKT2300057 |
| 9 | Steamboatman | QZXKT2300058 |
| 10 | Foggy River | QZXKT2300059 |
| 11 | Decommissioned | QZXKT2300060 |
Volume 2
| Track No | Track | ISRC |
|---|---|---|
| 1 | Repercussions 1 | QZXKT2300061 |
| 2 | Repercussions 2 | QZXKT2300062 |
| 3 | Repercussions 3 | QZXKT2300063 |
| 4 | Repercussions 4 | QZXKT2300064 |
| 5 | Repercussions 5 | QZXKT2300065 |
| 6 | Repercussions 6 | QZXKT2300066 |
| 7 | Repercussions 7 | QZXKT2300067 |
| 8 | Repercussions 8 | QZXKT2300068 |
| 9 | Repercussions 9 | QZXKT2300069 |
| 10 | Repercussions 10 | QZXKT2300070 |
Album 10
UPC: 198025739965
Released: 2023
| Track No | Track | ISRC |
|---|---|---|
| 1 | Incipio ex regulum | QZXKT2300071 |
| 2 | Introduction | QZXKT2300072 |
| 3 | Variation 1 | QZXKT2300073 |
| 4 | Interlude 1 | QZXKT2300074 |
| 5 | Variation 2 | QZXKT2300075 |
| 6 | Interlude 2 | QZXKT2300076 |
| 7 | Variation 3 | QZXKT2300077 |
| 8 | Interlude 3 | QZXKT2300078 |
| 9 | Recapitulation | QZXKT2300079 |
| 10 | Reductio ad absurdum | QZXKT2300080 |
| 11 | Epilogue | QZXKT2300081 |
Rhythms
UPC: 198025759802
Released: 2023
| Track No | Track | ISRC |
|---|---|---|
| 1 | Rhythm 1 | QZXKT2300082 |
| 2 | Rhythm 2 | QZXKT2300083 |
| 3 | Rhythm 3 | QZXKT2300083 |
| 4 | Rhythm 4 | QZXKT2300085 |
| 5 | Rhythm 5 | QZXKT2300086 |
| 6 | Rhythm 6 | QZXKT2300087 |
| 7 | Euclidean Tablas and Random Drones | QZXKT2300088 |
| 8 | Sinfonietta for Cowbell Orchestra | QZXKT2300089 |
| 9 | Concerto for Cowbell, Low Percussion and Manic Choir | QZXKT2300090 |
| 10 | Farandole in Tempore Belli | QZXKT2300091 |
Digitalis
UPC: 198025865497
Released: 2023
| Track No | Track | ISRC |
|---|---|---|
| 1 | Digitalis in C Minor | QZXKT2300092 |
| 2 | Poison | QZXKT2300093 |
| 3 | Synaesthesia | QZXKT2300093 |
| 4 | Chromatic Parrots | QZXKT2300095 |
| 5 | Shuffle Off | QZXKT2300096 |
| 6 | Tethyc Tempo | QZXKT2300097 |
| 7 | Tethyc Melody | QZXKT2300098 |
| 8 | Arise, Fair Sun | QZXKT2300099 |
XIII
UPC: 198053826262
Released: 2023
| Track No | Track | ISRC |
|---|---|---|
| 1 | Undine | QZXKT2300100 |
| 2 | I Alone Am Left | QZXKT2300102 |
| 3 | Whistling Gongs | QZXKT2300103 |
| 4 | Y. A. M. P. | QZXKT2300104 |
| 5 | Tower of Sines | QZXKT2300105 |
| 6 | Lean To | QZXKT2300106 |
| 7 | Contrary Trigonometry | QZXKT2300107 |
| 8 | Growling Bells | QZXKT2300108 |
| 9 | Schoenbergian Steampunk | QZXKT2300109 |
| 10 | Twelvish Tones | QZXKT2300110 |
| 11 | Twelvish More Tones | QZXKT2300111 |
| 12 | Without Indication | QZXKT2300112 |
| 13 | XII | QZXKT2300113 |
Empyrean
UPC: 198027297876
Released: 2023
| Track No | Track | ISRC |
|---|---|---|
| 1 | Whome the Gods Destroy | QZXKT2300114 |
| 2 | They First Make Mad | QZXKT2300115 |
| 3 | Steady State | QZXKT2300116 |
| 4 | Traces Through the Cloud Chamber | QZXKT2300117 |
| 5 | Unstead State (Steady State Redux) | QZXKT2300118 |
| 6 | Noisy Melody | QZXKT2300119 |
| 7 | Ready, Steady | QZXKT2300120 |
| 8 | Airy Fairy | QZXKT2300121 |
| 9 | March of the Airy Fairies | QZXKT2300122 |
| 10 | Migration | QZXKT2300123 |
^{*} Even though you didn't ask and probably weren't wondering, the title Undecidable and its cover art featuring a formula of the lambda calculus is an inside joke with myself in reference to my studying formal linguistics and symbolic logic at UCLA — including taking classes from Alonzo Church, himself — during the period when many of those tracks were recorded.
Given:
\[ \begin{align*} \text{Let } \Omega & = \omega \hskip0.25em \omega \\ \text{where } \omega & = \lambda x.x \hskip0.25em x \end{align*} \]
The value of \(\Omega\) is "undecidable" according to the rules of the lambda calculus because the calculation of \(\lambda x.x \hskip0.25em x\) never terminates when applied to itself. I experimented a lot with feedback from the signal path back into the control path (the sort of thing you can really only do with modular analog synthesizers), so I adopted this self-referential mathematical formula as a logo. This resonated (pun intended) with me because undecidability due to self-reference is the key not only to the ancient liar's paradox; it is also the basis of Gödel's incompleteness proof in formal logic, the explanation of why there is no general solution to the halting problem in computer science, and the underpinning of many other similarly consequential results in math and science, while feedback (a physical manifestation of "self-reference") is a way to produce complex (and often catastrophic) results from simple mechanical or electronic systems.
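For the programmers in the audience, the non-termination at the heart of \(\Omega\) is easy to demonstrate. Here is a plain-Ruby sketch (the names are mine, purely illustrative): self-application recurses until the interpreter gives up.

```ruby
# The self-application combinator omega = (lambda x. x x) as a Ruby lambda.
# Applying it to itself never reduces to a value; evaluation recurses
# forever, or in a real interpreter, until the stack is exhausted.
omega = ->(x) { x.call(x) }

begin
  omega.call(omega)
  result = :terminated   # unreachable
rescue SystemStackError
  result = :undecidable  # evaluation blew the stack instead of halting
end

puts result  # prints "undecidable"
```

The lambda calculus has no stack limit, of course; the `SystemStackError` is just a finite machine's way of reporting that the reduction would never halt.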
As an even less relevant aside, I chose \(\Omega\) (the last letter of the Greek alphabet) as the name of the undecidable result because it is easier to render using computer keyboards than \(\aleph_{0}\), the name traditionally used when referring to the cardinality of the set of natural numbers and, according to accepted wisdom, the smallest infinite magnitude. It also happens to be the "stack depth" of an infinitely recursive function. Why is that amusing? Well, I was also teaching myself computer programming later in the same period that I was creating those tracks, so... maybe it isn't all that amusing. But it makes me smile to think that the most abstruse branches of "pure" mathematics like computability theory can inadvertently produce literally world-changing effects such as the post-industrial "Information Age." According to common terminology, \(\Omega\) more properly should refer to \(\aleph_{1}\) rather than \(\aleph_{0}\), the former being the cardinality of the set of all countable ordinals. Since the cardinality of a set's power set must be larger than that set's own cardinality, \(\aleph_{0} < \aleph_{1}\), so \(\aleph_{0} \neq \Omega\). So if I'd had access to better graphical tools at the time I would have used this as the logo:
\[ \begin{align*} \text{Let } \aleph_{0} & = \omega_{0} \hskip0.25em \omega_{0} \\ \text{where } \omega_{0} & = \lambda x.x \hskip0.25em x \end{align*} \]
If I ever publish a reissue or remixes of any of the tracks on Undecidable, you will already be in on the joke of the logo I plan to use.
Amazon Music
https://music.amazon.com/artists/B004L4HW52/kirkrader
Apple Music
https://music.apple.com/us/artist/kirkrader/417090159
Bandcamp
https://kirkrader.bandcamp.com/music
Spotify
https://open.spotify.com/artist/06lMz4EjJn3pYej2kGIL5t
Youtube Music
https://music.youtube.com/channel/UCp__q4DYBXYhq9uiD2Y8vUg
Yesterday
Here is the setup I used when creating tracks in the 70's and 80's, a few of which eventually were published as the album Undecidable in 2011, once self-publishing music started to become a thing:
```mermaid
graph LR
    subgraph " "
        subgraph early["Early Days"]
            piano["Piano"]
            arp2600["ARP 2600"]
            mic["Microphone"]
            portastudio["TASCAM PortaStudio 244"]
        end
        subgraph later["Added Over Time"]
            ob8["Oberheim OB-8"]
            dx7["Yamaha DX7"]
        end
        cassette(["Cassette Tape"])
        piano --> mic -- "analog\naudio" --> portastudio
        arp2600 -- "analog\naudio" --> portastudio
        ob8 -- "analog\naudio" --> portastudio
        dx7 -- "analog\naudio" --> portastudio
        portastudio -- "analog\naudio" --> cassette
        classDef group stroke-dasharray: 5 5
        class early,later group
    end
```
The Knabe piano I had as a kid was built in the 1930's and sounded far better in real life than on the surviving cassette recordings. Most of these recordings were made in my childhood bedroom or various apartments I had as a college student and young adult using decidedly consumer-level microphones and recording technologies of the era. The jankiness of the audio quality appeals to me as an artifact of the period of my life and the circumstances in which they were created, but I cannot argue with anyone who finds it less charming than I do.
Today
The albums I have been publishing more recently have been created using variations of this setup:
```mermaid
graph TB
    subgraph "My Basement"
        keyboard["Akai Pro MPK261"]
        b2600["Behringer 2600"]
        eurorack["Various\nEurorack\nModules"]
        adc["PreSonus AudioBox iTwo"]
        subgraph computer["Computer"]
            audacity["Audacity"]
            ableton["Ableton Live"]
            sonicpi["Sonic Pi"]
            plugins["VST"]
            flac[("FLAC,\nWAV,\nMP3")]
            plugins <--> audacity --> flac
            plugins <--> ableton --> flac
            sonicpi --> flac
        end
        monitor["PA / Headphones"]
    end
    subgraph cloud["Online Publishers"]
        cdbaby["cdbaby.com"]
        bandcamp["bandcamp.com"]
    end
    subgraph "Streaming Services (plus many more not shown)"
        itunes["iTunes"]
        youtube["YouTube Music"]
        spotify["Spotify"]
        amazon["Amazon Music"]
    end
    subgraph "Licensing & Royalty Collection"
        soundexchange["SoundExchange"]
    end
    keyboard -- "MIDI\n(DIN)" --> b2600
    b2600 <-- "trigger,\ngate,\ncontrol voltage,\nanalog audio" --> eurorack
    b2600 -- "analog\naudio" --> adc
    adc <-- "digital audio,\nMIDI\n(USB)" --> computer
    adc -- "monitor" --> monitor
    computer <-- "MIDI\n(USB)" --> keyboard
    computer -- "publish" --> cdbaby
    computer -- "publish" --> bandcamp
    cdbaby -- "distribute" --> itunes
    cdbaby -- "distribute" --> youtube
    cdbaby -- "distribute" --> spotify
    cdbaby -- "distribute" --> amazon
    cdbaby -- "register" --> soundexchange
```
Hardware
- Akai Pro MPK261 keyboard
- Behringer 2600: modern clone of the ARP 2600 I used as a kid
- PreSonus AudioBox iTwo ADC

...plus various Eurorack modules that change embarrassingly frequently
Software
- Audacity: Digital Audio Workstation (DAW)
- Ableton Live: real-time digital synthesis and effects
- Sonic Pi: Ruby dialect and runtime platform for creating digital music and controlling MIDI instruments

...running on a creaky old MacBook Pro laptop
Mostly Analog
The majority of my pieces are created using a process not very different from how I worked in the 70's. I usually begin with a concept for a modular synth patch. These are usually pretty basic notions of ways to connect (and sometimes cross-connect) modules in the signal and control paths, such as "use the interference pattern of two low-frequency oscillators to create a rhythm," "use pitch sweeps driven by an envelope generator on the inputs to a ring modulator to create dynamic timbre" and the like. I then simply play around with such patches (changing the underlying wave forms, inverting the phase of some of the low-frequency oscillator signals, changing the base frequency relationships when using FM, and so on) until I find something that appeals to my very idiosyncratic tastes sufficiently to be worth recording. I then repeat the same process to create multiple layers of sound for a given composition. Where I used a multitrack analog recorder back in the day, I now use Audacity to record each sonic layer. Finally, I align and trim the tracks, add fades and perform similar basic editing in the DAW to produce a final mix as a FLAC file. That is where the greater flexibility of a DAW shines, but I do miss some of the tricks you could play with tape. One of these days I'll have to find or write a time-reversal plugin for Audacity that would work like turning the cassette over between recording two tracks on my old PortaStudio.
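As a toy illustration of the first of those patch ideas, here is a plain-Ruby sketch (not anything I actually patch or run; the frequencies and step count are arbitrary) of how the interference of two square-wave LFOs yields a rhythmic gate pattern:

```ruby
# "Use the interference pattern of two low-frequency oscillators to create
# a rhythm": sample two square-wave LFOs on a step grid and fire a trigger
# only when both gates are high.
def square(freq, t)
  # 1 while the sine underlying the square wave is in its positive half-cycle
  Math.sin(2 * Math::PI * freq * t) >= 0 ? 1 : 0
end

steps = 16
pattern = (0...steps).map do |i|
  t = i / steps.to_f               # one second sliced into 16 steps
  square(2.0, t) & square(3.0, t)  # logical AND of a 2 Hz and a 3 Hz gate
end

puts pattern.join  # a 16-step rhythm of 1s (trigger) and 0s (rest)
```

Detune either oscillator slightly and the pattern drifts over time instead of repeating, which is exactly the kind of behavior I go hunting for with real voltage-controlled hardware.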
Most of the time, when I "play" a piece during the recording process it is by adjusting potentiometers to alter the patch in real time. Often I don't do even that during the recording but, instead, allow the "logic" of a given patch to play itself out over time. When I use a keyboard at all when working this way, it is usually to provide trigger and gate voltages rather than to play "notes" in the traditional sense. One characteristic of modular analog synthesis is that small changes to a given patch can produce dramatic differences in the resulting sound. Multiple tracks on a given album may be the result of the process I just described, using relatively minor variations of a single patch.
An aspect of this approach is that there usually is neither a musical score in the conventional sense nor any way to recover the exact combinations of settings and initial phase relationships of the various modules contributing to a patch. Once a track is recorded, that recording is the only representation of that exact "composition" and there would be no way for me or anyone else to perform it again and get the same result. This is one of the features of "West Coast" analog electronic music that I find conceptually attractive. There is a complete break not only from traditional tunings, structures and forms but the totality of a musical composition created in this way is represented entirely by its acoustic signature and nothing else.
More Than a Little Digital
All of that said, I have increased my use over the years of tools and techniques that simply did not exist when I was young. Some of the sounds on my albums were produced using completely digital synthesis. These are often in the form of digital audio samples or the output of VST plugins using software like Ableton Live and Sonic Pi. Using Ableton Live, I can play a keyboard (to within the very narrow limits of my skill) to produce more conventionally tonal music. Some pieces consist only of tracks recorded in this way, while others combine analog and digital sound sources in various ways.
For example, the two versions of Farandole Lamentoso on my recent album The Modern Temple of Amusement were created using a single Sonic Pi program generating output in two ways: the "8-bit mix" consists of a digital audio file generated directly by Sonic Pi using some of its built-in digital synthesizers. The "2600 remix" uses the Behringer 2600 as the sound source, driven by the same Sonic Pi program, but by sending a sequence of MIDI commands rather than generating digital audio directly. Because the 2600 is monophonic, the latter version required running the program multiple times with slight modifications each time to produce the MIDI sequence for each voice separately.
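That voice-splitting step can be sketched in plain Ruby. This is not the actual Farandole data or program, just an illustration (with made-up chords and a hypothetical `voice` helper) of slicing a chord sequence into monophonic lines, one per run:

```ruby
# Split a sequence of chords into monophonic voices for a one-note-at-a-time
# synthesizer: run the program once per voice index, sending only that line.
# Chords are arrays of MIDI note numbers, lowest note first (illustrative).
chords = [
  [57, 60, 64],  # A minor
  [57, 62, 65],  # D minor
  [55, 59, 62],  # G major
]

def voice(chords, index)
  chords.map { |chord| chord[index] }
end

melody_line = voice(chords, 2)  # top notes of each chord
bass_line   = voice(chords, 0)  # bottom notes of each chord
puts melody_line.inspect  # prints [64, 65, 62]
puts bass_line.inspect    # prints [57, 57, 55]
```

Each extracted line is then recorded as its own track against a sync clapper and layered in the DAW, just as with the analog multitracking described above.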
Another piece, Ghostly Reminiscences, on the same album as the Farandole, was produced entirely as the output of another Sonic Pi program. Even when using digital tools like Sonic Pi, I tend to use randomization and sonic layering to produce unconventional timbres, rhythms and harmonies that will vary endlessly no matter how long they are looped. These Ruby source code files are the closest I am likely to come to writing "scores" for any of my music. This is my "West Coast" approach applied to using "East Coast" inspired gear and software.
```ruby
# Copyright 2023 Kirk Rader. All rights reserved.

# Au Quai

use_random_seed 0
use_random_source :perlin
use_synth :sine

terminate = false

define :randomatic do |min_n, max_n, min_d, max_d|
  in_thread do
    loop do
      if terminate
        stop
      end
      n = note(rrand_i(min_n, max_n))
      d = rrand(min_d, max_d)
      # midi n, sustain: d
      play n
      sleep d
    end
  end
end

randomatic(45, 75, 0.25, 0.75)
sleep 100
terminate = true
```
```ruby
# Copyright 2023 Kirk Rader. All rights reserved.

# Decommissioned

terminate = false

define :drone do |name|
  in_thread do
    with_random_source :perlin do
      with_sample_bpm name do
        loop do
          if terminate
            stop
          end
          rate = rrand(0.5, 2.0)
          delay = 1.0 / rate
          sample name, rate: rate
          sleep delay
        end
      end
    end
  end
end

define :melody do |name|
  in_thread do
    with_synth name do
      with_random_source :pink do
        loop do
          if terminate
            stop
          end
          duration = rrand(0.5, 2.0)
          play rrand(45, 65), sustain: duration
          sleep duration
        end
      end
    end
  end
end

name = :ambi_glass_hum
drone name
drone name
# drone name
melody :dark_ambience
melody :dark_ambience
sleep 100
terminate = true
```
```ruby
# Copyright 2023 Kirk Rader. All rights reserved.

# Farandole Lamentoso

midi_all_notes_off
sleep 1

arp_synth = :dpulse
bass_synth = :bass_foundation
seed = 0
next_seed = 7
arps = 2
arp_tempo = 0.33
send_midi_arp = false
send_midi_bass = false
play_notes = true

define :play_arp do |r, c|
  use_synth arp_synth
  c.times do
    use_random_seed seed
    seed = seed + next_seed
    r.shuffle.length.times do
      n = r.tick
      if send_midi_arp
        midi n
      end
      if play_notes
        play n
      end
      sleep arp_tempo
    end
  end
end

define :play_bass do |n, s|
  use_synth bass_synth
  if send_midi_bass
    midi n, sustain: s
  end
  if play_notes
    play n, sustain: s
  end
end

define :play_bass_melody do |n, s|
  in_thread do
    n.length.times do
      play_bass n.tick, s
      sleep s
    end
  end
end

a_min = (ring :A3, :C4, :E4)
d_min = (ring :A3, :D4, :F4)
g_maj = (ring :G3, :B3, :D4)
c_maj = (ring :G3, :C4, :E4)
f_maj = (ring :F3, :A3, :C4)
b_min = (ring :F3, :B3, :D4)
a_min_inv = (ring :E3, :A3, :C4)
e_maj_sus = (ring :E3, :A3, :B3)
e_maj = (ring :E3, :Gs3, :B3)
bass_melody = (ring :C3, :A2, :B2, :Gs2)

2.times do
  play_bass :A3, (arps * a_min.length * arp_tempo)
  play_arp a_min, arps
  play_bass :D3, (arps * d_min.length * arp_tempo)
  play_arp d_min, arps
  play_bass :G3, (arps * g_maj.length * arp_tempo)
  play_arp g_maj, arps
  play_bass :C3, (arps * c_maj.length * arp_tempo)
  play_arp c_maj, arps
  play_bass :F3, (arps * f_maj.length * arp_tempo)
  play_arp f_maj, arps
  play_bass :B3, (arps * b_min.length * arp_tempo)
  play_arp b_min, arps
  play_bass_melody bass_melody, (a_min_inv.length * arp_tempo)
  play_arp a_min_inv, arps
  play_arp e_maj_sus, 1
  play_arp e_maj, 1
end

play_bass :A2, (arps * a_min_inv.length * arp_tempo)
play_arp a_min_inv, arps
if send_midi_arp
  midi :A4
end
if play_notes
  play :A4
end
play_bass :A2, (arps * a_min_inv.length * arp_tempo)
```
```ruby
# Copyright Kirk Rader 2023. All rights reserved.

# Farandole in Tempore Belli

# sleep duration for each beat
tempo = 0.2

# beats per measure
measure = 16

# drum rhythms
spread_lo = (spread 5, measure)
spread_mid = (spread 7, measure)
spread_hi = (spread 3, measure / 2 - 1)
spread_bass = (spread 1, measure)

# start of each measure
spread_measures = (spread 1, measure)

# arpeggio rhythms
spread_arp_1 = (spread measure / 3, measure / 2)
spread_arp_2 = (spread (measure / 2) + 1, measure)

chords = (ring
  (chord :a4, :minor),
  (chord :d4, :minor, invert: 2),
  (chord :g4, :major),
  (chord :c4, :major, invert: 2),
  (chord :f4, :major),
  (chord :b3, :diminished, invert: 2),
  (chord :a3, :minor, invert: 2),
  (chord :e4, :sus4),
  (chord :e4, :major))

bass = (ring
  (note :a2),
  (note :d1),
  (note :g1),
  (note :c1),
  (note :f2),
  (note :b1),
  (note :c2),
  (note :e1),
  (note :e1))

# duration of one measure
measure_hold = tempo * measure

# number of beats in a full phrase
phrase_length = chords.length * measure

# number of beats to repeat the full phrase 6 times
repeats = phrase_length * 6

# current beat number being played each time :master is cued
count = 0

# current phrase number being played each time :master is cued
phrase = 0

# stop threads next time :master is cued
terminate = false

# flags to allow different voices to be played out
# as separate tracks
enable_drums = true
enable_ambience = true
enable_chords = true
enable_arp_1 = true
enable_arp_2 = true
enable_bass = true

# flag to emit single-note voices (arp_1,
# arp_2 and bass) as midi events rather
# than playing audio output
enable_midi = false

# play arpeggio for 1 measure
define :play_arp do |notes, spread|
  in_thread do
    with_fx :reverb, room: 0.75 do
      with_synth :chiplead do
        measure.times do
          stop if terminate
          if spread.tick
            if enable_midi
              midi notes[0], sustain: tempo
            else
              play notes[0], sustain: tempo, amp: phrase * 0.1
            end
            notes = notes.rotate
          end
          sleep tempo
        end
      end
    end
  end
end

with_fx :reverb, room: 0.75 do
  begin
    # chords & arpeggios
    in_thread do
      with_random_source :perlin do
        loop do
          # await next beat
          sync :master
          stop if terminate
          # only take action once per measure
          if spread_measures.tick
            # play current chord using ambient voice
            with_synth :dark_ambience do
              with_transpose 12 do
                play chords[0], sustain: measure_hold, amp: 1.5 * phrase, pan: rrand(-0.5, 0.5) if enable_ambience && phrase > 0
              end
            end
            # play current chord as arpeggio using first pattern
            play_arp chords[0], spread_arp_1 if enable_arp_1 && phrase > 1
            # play current chord as arpeggio using second pattern
            play_arp chords[0], spread_arp_2 if enable_arp_2 && phrase > 2
            # play current chord using melodic voice
            with_synth :chipbass do
              play chords[0], sustain: measure_hold * 0.8, amp: phrase * 0.1, pan: rrand(-0.5, 0.5) if enable_chords && phrase > 2
            end
            # play current note in bass line
            with_synth :bass_foundation do
              if enable_bass && phrase > 3
                if enable_midi
                  midi bass[0], sustain: measure_hold
                else
                  play bass[0], sustain: measure_hold, pan: rrand(-0.5, 0.5)
                end
              end
            end
            # advance to the next measure
            chords = chords.rotate
            bass = bass.rotate
          end
        end
      end
    end
    # percussion
    if enable_drums
      in_thread do
        with_random_source :white do
          loop do
            sync :master
            stop if terminate
            tick
            sample :drum_tom_lo_hard, rate: rrand(0.75, 1.25), amp: rrand(0.2, 0.5), pan: 0.25 if spread_lo.look
            sample :drum_tom_mid_hard, rate: rrand(0.75, 1.25), amp: rrand(0.2, 0.5), pan: 0.25 if spread_mid.look
            sample :drum_tom_hi_hard, rate: rrand(0.75, 1.25), amp: rrand(0.2, 0.5) if spread_hi.look
            sample :drum_bass_hard, rate: rrand(0.75, 1.25), amp: rrand(0.5, 0.8) if spread_bass.look
          end
        end
      end
    end
    # clapper for syncing tracks in a DAW
    with_synth :beep do
      play :a4
      midi :a4, sustain: 1
    end
    # give space after clapper
    sleep measure_hold
    # master clock for all threads
    repeats.times do
      cue :master
      count = count + 1
      phrase = phrase + 1 if count % phrase_length == 0
      sleep tempo
    end
  ensure
    terminate = true
    sleep tempo * 2
    cue :master
    sleep tempo * 2
    midi_all_notes_off if enable_midi
  end
end
```
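The `spread` calls near the top of that script generate Euclidean rhythms: a given number of accents distributed as evenly as possible across the beats of a measure. Here is a plain-Ruby approximation (not Sonic Pi's actual implementation, which returns a ring and may differ by a rotation for some arguments) using the well-known `(i * accents) % size < accents` formulation:

```ruby
# Euclidean rhythm: distribute `accents` hits as evenly as possible over
# `size` steps, in the spirit of Sonic Pi's `spread`. A true marks a beat
# on which the voice fires.
def euclidean(accents, size)
  (0...size).map { |i| (i * accents) % size < accents }
end

# Render spread(3, 8) as a drum-machine style pattern string.
puts euclidean(3, 8).map { |b| b ? "x" : "." }.join  # prints "x..x..x."
```

Feeding different `accents`/`size` pairs to several percussion voices at once, as the script does, is what produces its interlocking polyrhythms.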
```ruby
# Copyright 2023 Kirk Rader. All rights reserved.

# Ghostly Reminiscences

live_loop :plingk do
  sample :perc_bell, rate: rrand(0.2, 2)
  sleep rrand(1, 5)
end

live_loop :clangk do
  sample :perc_bell2, rate: rrand(0.2, 2)
  sleep rrand(1, 5)
end

live_loop :eery do
  sample :ambi_haunted_hum, rate: rrand(0.2, 2)
  sleep rrand(0.75, 3)
end

live_loop :spooky do
  sample :ambi_glass_hum, rate: rrand(0.2, 2)
  sleep rrand(0.75, 3)
end

live_loop :chugga do
  sample :loop_industrial, rate: rrand(0.2, 2)
  sleep 5
end

live_loop :bzzz do
  sleep 2.5
  sample :loop_drone_g_97, rate: rrand(0.2, 2)
  sleep 7.5
end
```
Software
The Real SDLC
```mermaid
graph LR
    subgraph " "
        business["Business"]
        support["Customer Support"]
        product["Product"]
        architecture["Architecture"]
        development["Development"]
        qa["Quality Assurance"]
        operations["Operations"]
        product <-- "request\nfeatures" --> business
        support -- "metrics" --> business
        support <-- "request\nfeatures" --> product
        product <-- "requirements" --> architecture
        architecture <-- "design" --> development
        development -- "release\ncandidate" --> qa
        qa -- "report\nissues" --> development
        support -- "report\nissues" --> development
        qa -- "release" --> operations
        operations -- "report\nissues" --> development
        operations -- "analytics" --> business
        business -- "analyze" --> business
        product -- "iterate" --> product
        architecture -- "iterate" --> architecture
        development -- "iterate" --> development
        qa -- "test" --> qa
        operations -- "configure,\nmonitor" --> operations
    end
```
Career Advice for Software Engineers
- Have a portfolio
  - Create your own publicly visible repositories on GitHub or the like
  - Contribute to open source projects
  - The type of projects is less important than quality
  - Find software projects that connect to your interests and hobbies so as to motivate you to put in sufficient effort while still being fun / valuable to do in your own (unpaid) time
    - Games
    - Home automation / security
    - CNC / 3D printing
    - Music
    - Graphical arts
- Demonstrate proficiency in at least two widely-used programming languages
  - Reasonable choices at the time of writing (changes every few years!):
    - Javascript
    - C / C++
    - Go
    - Python
    - Rust
    - Java
- Demonstrate good programming style and discipline in all publicly visible code
  - Well documented
  - Well formatted
  - Use build automation that includes
    - Linting / static code analysis
    - Automated tests
    - Packaging
      - NPM compatible packages for Javascript
      - Executables and shared libraries for C / C++
      - Go modules
      - Maven-repository compatible JAR's for Java
      - Deployable units for front ends (varies widely by underlying tech stack)
  - You don't necessarily have to publish such packages to some "official" repository, but demonstrate that you understand how such packaging works
  - That said, having some published packages along with your portfolio is a good thing
- For extra credit, have a blog or some such
- Be active in coding communities
  - Answer questions on stackoverflow etc.
  - Create tutorials
  - Be active in the wiki / blog / etc. for open source projects you use or on which you collaborate
- Do not overspecialize
  - Keep up to date on trends
    - Programming languages commonly used in particular industries
    - Operating systems
    - Frameworks
    - Application domains
    - ...but there is a fine line between keeping up to date and chasing fads
    - Depth of understanding in a few, but diverse, areas of concern using time-honored tools and techniques is more important than constantly dropping the latest buzzworthy jargon
  - The "Full Stack Developer" is a mythological creature, still:
    - Include a range of areas of concern in your portfolio
      - Front end
        - HTML / Javascript demonstrating understanding of the DOM
        - Frameworks like Vue/Vuetify, Angular, React
        - Mobile
      - Back end
        - Relational databases
        - Time series databases
        - Key / value storage
      - Real time
        - Home automation
        - CNC
      - AI
        - Query engineering
        - LLM development
        - Another area that changes rapidly and is always and forever full of the next overhyped thing that is about to but hasn't yet quite changed every paradigm of human existence... so demonstrate awareness but don't overinvest
    - You do not need every one of all of the above in your portfolio
    - ...But you do need a few from diverse areas of concern to demonstrate understanding of software architecture beyond a narrow niche
And, at the risk of invoking some "ok, Boomer" eye rolls:
- To really, truly understand how the code you write actually works:
  - Learn the assembly language of at least one microprocessor
    - ARM would be a good choice these days (this text is being composed on a Raspberry Pi running on an ARM CPU)
    - Trusty old x86 would still not be unreasonable
  - Learn Scheme
Really.
As hardware capacity and performance increased at a pace that far outpaced advances in software design and implementation techniques over the past few decades, the software engineering industry grew increasingly complacent regarding performance and resource management. Generations of programmers have come of age without ever having been expected to think too hard about how the code they write in programming languages like Javascript, or even Go or C, actually works from the point of view of the hardware on which it is running and the networks over which their data is transported. The demands of Big Data and now LLM's are forcing a reevaluation of this attitude. If you are looking for a competitive edge in the hiring process, be able to talk confidently about the nitty-gritty of compiler implementation concerns like CPS (Continuation Passing Style), memory management (e.g. various garbage collection strategies vs. reference counting vs. OG malloc()) and the like. In addition, being able to communicate a deep understanding of what optimizations are even possible when manipulating code at the lowest level will make you look like a rock star. Being able to back up such talk with low-level, "here's how it's really done" assembly code in your portfolio helps tremendously.
At the opposite end of the programming language spectrum from assembler, among the gravest disservices MIT ever performed for its students was substituting Python for Scheme as its default teaching language. This is not to denigrate Python or its fans (though it is my least favorite among currently popular languages — and I would have trouble counting all the programming languages in which I have been paid to write nontrivial amounts of commercial-quality code, starting with Pascal and 68000 assembler in the early 80's and currently including Go). Note that I included Python, and not Scheme, among the languages you should consider showing off in your portfolio. That said, Scheme is the ultimate evolution of the Lisp family of programming languages. It is the purest expression of Church's Lambda Calculus represented as an actual programming language the world is ever likely to achieve. For fans of functional programming: it all started with Lisp. For fans of object-oriented programming: consider the old Lisp programmer's adage, "objects are the poor man's closures." To fully appreciate what people mean when they talk about anonymous functions as "lambdas," to experience directly what is possible using continuations as a first-class programming construct, to be able to really comprehend how any programming construct, from simple loops to asynchronous coroutines, can be implemented (as they are, under the covers, by compilers) as a set of mutually recursive functions... learn Scheme.
Practicing What I Preach
 https://github.com/parasaurolophus/homeautomation
 Home-grown, in-home (no clouds!) home automation
 This really does drive my lighting and window shades so I am motivated to keep it functional
 Uses https://nodered.org, a community in which I participate
 Extensive use of Node.js compatible Javascript
 Inline Javascript in the Node-RED flows
 Uses separately packaged and published NPM modules I also created (described below)
 Accesses API's provided by Philips Hue and Hunter Douglas devices using HTTP and eventsource-based services
 Includes Vue / Vuetify based front-end code
 Demonstrates
 Separation of concerns
 Modularity
 Event-driven programming techniques
 Note the fairly extensive README.md
 https://mermaid.js.org/ based architecture diagram
 Detailed installation / configuration instructions
 https://github.com/parasaurolophus/noderedeventsource
 One of two library NPM packages I developed for use by my home automation system
 The other is https://github.com/parasaurolophus/nodereddnssd
 Published to NPM and in the Node-RED flow library so that it can be used by the Node-RED community
 Passes all automated quality checks performed by the Node-RED flow library
 README.md with screenshot, usage instructions
 https://www.rader.us/music/digital.html
 And now for something completely different: Ruby source code as musical "scores"
 https://www.rader.us/software/scheme/tailrecursion.html
 One of many possible examples of why every programmer should learn Scheme (along with the assembly language of at least one microprocessor) if they truly want to understand how software really works — no pressure!
© Copyright Kirk Rader 2023. All rights reserved.
Scheme
Scheme is the ultimate stage in the evolution of the Lisp family of programming languages. Lisp and Fortran, originally developed at about the same time in the late 1950's, were the first two widely used and studied "high level" computer programming languages. Prior to their appearance, programming was exclusively done using assembly language or by entering binary machine code directly using all those quaint switches, knobs and dials visible on 50's vintage computers.
Lisp and Fortran take fundamentally different approaches to language design. Fortran was intended to appeal directly to engineers familiar with the branches of mathematics used in the applied sciences and with the kind of structured design documentation used in electrical engineering and similar eminently practical endeavors. The majority of popular programming languages to this day owe a lot of their syntactical and semantic conventions to Fortran.
Lisp was intended to be as direct an embodiment of Church's Lambda Calculus as can be achieved in the real world. Scheme comes the closest of all Lisp dialects to achieving that aim. For that reason alone, any programmer who wishes to understand something about the origin of the very concept of "programming language" would do well to learn Scheme.
For example, to represent what Church would have written on a chalkboard at UCLA as \(\lambda x.+ \hskip0.25em x \hskip0.25em 1\) (yes, he preferred prefix notation) as a Scheme expression, one writes (lambda (x) (+ x 1)) (and, yes, Lisp syntax introduces a lot of parentheses even though the ability to do without them is one of the motivations for prefix notation in the first place). Both expressions represent a function which adds 1 to its argument, \(x\): the first as a mathematical formula and the second as a compilable, executable snippet of source code. The primary semantic difference between them is that \(\lambda x.+ \hskip0.25em x \hskip0.25em 1\), being a mathematical formula, has no theoretical limit on the magnitude of \(x\), while (lambda (x) (+ x 1)) is limited by the magnitude of the bignum data type on a given machine with a given amount of heap space allocated to the process running the Scheme program.
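As a quick, runnable illustration (the name increment is invented for this sketch), such a lambda expression can be bound to a name with define or applied directly to an argument:

```scheme
;; a lambda expression evaluates to a function object that can
;; be named with define or applied immediately
(define increment (lambda (x) (+ x 1)))

(increment 41)             ; => 42
((lambda (x) (+ x 1)) 41)  ; => 42, without naming the function
```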
The following descriptions of particular Scheme features will make the most sense if you already know at least one Lisp dialect, such as Common Lisp. A complete tutorial on Lisp in general or Scheme in particular is (currently) beyond the scope of these pages.
Tail Recursion and Loops in Scheme
Consider the traditional definition of the factorial function:
\[ n! = \begin{cases} 1 & \text{if } n \leq 1 \\ n (n - 1)! & \text{otherwise} \end{cases} \]
This is an example of a recursive^{*} function; i.e. a function that invokes itself in the course of carrying out some calculation. For example:
\[ 3! = 3 \times 2! = 3 \times 2 \times 1! = 3 \times 2 \times 1 = 6 \]
If implemented literally as defined above, in Scheme this would be:
(define (factorial n)
  (if (<= n 1)
      1
      (* n (factorial (- n 1)))))
The preceding is a mathematically correct representation of the traditional definition of \(n!\) as laid out above, but suffers from the problem that it only works for fairly small values of \(n\). Any moderately experienced software engineer would be able to explain that the preceding implementation of factorial is subject to a stack overflow error^{**}. In other words, (factorial 3) will produce 6, as expected; (factorial 1000000) probably will cause a run time error due to too great a recursion depth. The exact value of n that will cause factorial to fail will vary depending on the amount of available stack space allocated to the process running it at the time it is invoked.
This can be improved through a slight modification to the definition of factorial, involving a helper function:
\[ \begin{align*} \text{let } n! & = f(n, 1) \\ \text{where } f(x,a) & = \begin{cases} a & \text{if } x \leq 1 \\ f(x - 1, a \times x) & \text{otherwise} \end{cases} \end{align*} \]
In this case the implementation helper, \(f\), is defined as taking two parameters. The first parameter, \(x\), is the number whose factorial value is to be computed and is initially the original \(n\). The second parameter, \(a\), is the currently accumulated result of the calculation in progress, initially \(1\). The value of \(f(1, a)\) is \(a\). The value of \(f(x, a)\), while \(x \gt 1\), is the result of directly calling \(f\) again, with \(x  1\) and \(a \times x\) as parameters.
In Scheme this becomes:
(define (factorial n)
  (letrec ((f (lambda (x a)
                (if (<= x 1)
                    a
                    (f (- x 1) (* a x))))))
    (f n 1)))
While these two implementations of factorial in Scheme are mathematically equivalent and both are recursive, the second version that relies on the two-parameter function f is immune from stack overflow. This is because the Scheme specification requires that "tail calls" not grow the stack. But what does that mean, exactly?
Note that in the first implementation of factorial, the function calls itself inside an invocation of * within the "else" clause of an if special form. In such a case the Scheme compiler, just as that for any programming language, must arrange to remember the sequence of invocations of factorial so that it can apply the * calculation to the value returned by each inner call to factorial before returning from the outer call. The second implementation of factorial, which uses the locally defined helper function f, performs the same mathematical operations arranged in a slightly different sequence. In that case, both the subtraction in (- x 1) and the multiplication in (* a x) are fully evaluated before the inner call to f. At each invocation of f, the value returned from the outer invocation is simply that which is returned by the inner invocation without any additional computation. Such an inner invocation is said to be in "tail position" and it is not necessary to grow the stack when making the inner call. In effect, the inner call reuses the existing stack frame of the outer call without adding a new one of its own. Thus, the only limit on the value of n that exists for this second version of factorial is the maximum range of the bignum data type that is the result of the innermost invocation of (* a x). Another way to look at the difference is that the tail-call version of factorial "remembers" the sequence of calculations using the accumulated value in the variable a so that it does not have to be saved on the stack. This is an important concept for functional programming generally: never use an object or stack frame to store state that could more efficiently be retained in a variable within some closed-over lexical environment.
This special handling of recursion in tail position means that you do not need^{***} explicit looping constructs like while, do, for etc. in Scheme. The standard idiom for loops in the functional programming paradigm is to use tail recursion. Scheme facilitates this with named let. Here is the preceding example, rewritten more idiomatically using a named let:
(define (factorial n)
  (let f ((x n)
          (a 1))
    (if (<= x 1)
        a
        (f (- x 1) (* a x)))))
The second and third versions of factorial shown above are semantically identical. They perform the same set of operations, in the same order. The more compact syntax used in the latter version emphasizes the fact that for Scheme, there is no difference between recursion in tail position and looping. To show what that means, here is how one might define factorial in a language like C, Java etc. without special handling of tail recursion but with a special while loop construct:
int factorial (int n) {
    int a = 1;
    while (n > 1) {
        a *= n;
        n -= 1;  /* decrement n so the loop terminates */
    }
    return a;
}
The thing to understand about the latter two Scheme examples and the final Java example is that all three are semantically equivalent, performing essentially identical sequences of operations. Anywhere you would use while, for etc. in Java, C or the like you can use tail recursion in Scheme instead. Further, tail call optimization applies whenever a function is invoked in tail position, not just cases of loops implemented using functions that call themselves. For example, first-class continuations can be invoked in tail position to implement a true continuation passing style without recourse to special forms like trampolines. But that is a discussion for another time....
^{*} Historically, when mathematicians have talked about "recursive functions" they meant any case of one function calling another. For a mathematician, \(g\) is being called recursively:
\[ \begin{align*} \text{let } n & = f(g(x)) \\ \text{where } g & = \lambda y.+ \hskip0.25em y \hskip0.25em 1 \end{align*} \]
Computer programmers have co-opted the term "recursion" to refer only to what a mathematician would regard as the special case of self-recursion. Because dialects of Lisp like Scheme hearken back to the origin of all programming languages in the Lambda Calculus, Scheme texts will often use the terms "call in tail position," "tail call" and "tail recursion" interchangeably whether or not a function is calling itself, directly or indirectly.
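To see why those terms are interchangeable, here is a small sketch of mutual tail recursion (my-even? and my-odd? are invented names, chosen to avoid shadowing the built-in even? and odd?); neither procedure calls itself directly, yet a conforming Scheme will not grow the stack:

```scheme
;; neither procedure calls itself directly, but every
;; recursive call is in tail position, so a conforming
;; Scheme will not grow the stack no matter how large n is
(define (my-even? n)
  (if (= n 0)
      #t
      (my-odd? (- n 1))))

(define (my-odd? n)
  (if (= n 0)
      #f
      (my-even? (- n 1))))

(my-even? 100000) ; => #t
```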
^{**}The way that function call and return is implemented conventionally in programming languages targeting Von Neumann architecture machines is via a stack. A program must have a way to remember the current state of a computation before invoking a subroutine. It must also be able to restore that state so that it can continue from where it left off after the subroutine completes its work. This is done by pushing a stack frame containing a snapshot of certain critical information about a program's current state as the topmost entry in a special area of memory, the stack, which is organized as a LIFO (Last In, First Out) data structure. The stack frame includes information about where the program should resume after the subroutine completes its work. It then invokes the subroutine and the subroutine consults the stack frame in order to return to the point from which it was called. The stack frame is popped off the stack and execution continues. If program A calls subroutine B and subroutine B calls subroutine C before returning to A, the stack must have enough free capacity to store frames for both B and C for the duration of the execution of C. If the program detects that there is no more room on the stack at the point at which it is about to call a subroutine, it signals a "stack overflow" error and the current computation is aborted. Given the first definition of factorial shown above, the stack would need room for at least n frames whenever it is called. Since computer memory is a finite resource, it does not require enormously large values of n to cause a stack overflow for most real-world hardware.
^{***}Emphasis here on need. Early versions of Scheme actually had no explicit loop constructs but a few were added over time due to popular demand. The first such, the do special form, is modeled on C/C++/Java etc. for loops but uses a syntax that is so tortured that one wonders whether or not the original authors were trying to make a point and steer people toward tail-recursion-based loops or continuation passing by making do much harder to use. In other words, the "popular demand" mostly came from people trying to learn Scheme and the functional paradigm after having already become very used to procedural programming using so-called "structured" languages.
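For the curious, here is a hedged sketch of a do-based factorial (factorial-do is an invented name); each step expression is evaluated using the previous iteration's bindings, which is part of what makes the syntax feel tortured compared to a named let:

```scheme
;; factorial using the do special form:
;; (do ((var init step) ...) (test result) body ...)
;; x counts down from n while a accumulates the product;
;; both step expressions see the *previous* values of x and a
(define (factorial-do n)
  (do ((x n (- x 1))
       (a 1 (* a x)))
      ((<= x 1) a)))

(factorial-do 5) ; => 120
```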
Lexical Closures
The observant reader may have noticed that examples of mutually recursive lambda expressions in the description of tail recursion were shown using disjoint sets of variable names, where factorial was defined as having an argument named n while its local helper function had a corresponding argument named x. This was done for clarity of exposition: it is very obvious which variables belong to which functions when they all have different names where their lexical scopes overlap.
This is not actually a requirement of Scheme, though a good practice from the point of view of maintainability, since normal lambda bindings are lexically scoped. For example, the same factorial example could have been written with both functions having overlapping sets of variable names with no change in meaning or outcome:
(define (factorial n)
  (let f ((n n)
          (a 1))
    (if (<= n 1)
        a
        (f (- n 1) (* a n)))))
It simply would have been a bit harder to describe what was going on, since in this case there are two different variables, both named n.
Historically, not all Lisp dialects have been lexically scoped in this way. Bindings for all variables existed in a single global environment in very early implementations of Lisp. This fact meant that there were times that the programmer actually did have to take care to use disjoint variable names for mutually recursive functions to avoid "name collisions" that could affect the outcome of a computation. That "bug" was sometimes exploited as a "feature" by some algorithms, so modern lexically scoped Lisp dialects often have alternative versions of special forms like lambda, let etc. that share a single, global name space. Some Scheme dialects, for example, have defined things like a fluid-let special form that works like ordinary let but whose bindings are in a single "fluid" (a.k.a. "dynamic") environment.
But wait! There's more!
If all there were to a lambda function's variable bindings is that they are lexically scoped, they would not be substantially different from, say, pointers to functions in languages like C. The lexical environments that are created by lambda binding in Scheme (and other modern dialects like Common Lisp) represent "closures" over their entire runtime context, such that a function object, when treated as a data value, "remembers" the lexical environment in which it was created. Multiple invocations of the same such function create distinct such "closed" environments, also known as "closures." Consider:
(define (make-foo a)
  (lambda () a))

(define foo1 (make-foo 1))

(define foo2 (make-foo 2))

After evaluating the three preceding expressions in a Scheme environment, you will have three global variables, all of which are bound to functions.
The function make-foo returns an anonymous function whose lexical closure includes a binding for the local variable a. Each anonymous function returned by make-foo "remembers" the value which was passed to make-foo when it was invoked with its own "closed over" binding of a. Both foo1 and foo2 are bound to functions made using calls to make-foo, but with different values for a. In particular, invoking foo1 returns 1 while invoking foo2 returns 2:

> (foo1)
1
> (foo2)
2
What this means is that anonymous functions in Scheme can be used in many cases
where you might need to define some custom data structure in other programming
languages. In fact, lexical closures are so powerful that there is a saying
among Scheme programmers to the effect that "objects are a poor man's closures."
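To make the adage concrete, here is a small illustrative sketch (make-counter and its message names are invented for this example) of an "object" built directly from a lexical closure: the state lives in the closed-over variable, and "methods" are selected by a message symbol:

```scheme
;; a "poor man's object": the state (count) lives in the closure,
;; and methods are selected by dispatching on a message symbol
(define (make-counter)
  (let ((count 0))
    (lambda (message)
      (case message
        ((increment!) (set! count (+ count 1)) count)
        ((value) count)
        (else (error "unknown message" message))))))

(define counter (make-counter))
(counter 'increment!) ; => 1
(counter 'increment!) ; => 2
(counter 'value)      ; => 2
```

Each call to make-counter creates a distinct closed-over count, so two counters never interfere with one another, just as foo1 and foo2 above each remember their own a.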
One of (if not the) earliest commercially significant object-oriented programming systems was Flavors, which ran on Symbolics' Lisp Machines. The earliest versions of Flavors implemented objects using lexical closures directly, rather like a more sophisticated version of the example above. The "message passing style" of method invocation this created was a heavy influence on the design of Smalltalk. Flavors was later rewritten to have a very different style of syntax (and far more sophisticated features) but lexical closures remained deeply buried within it. That later style was then the original model for CLOS when Common Lisp running on generic hardware replaced Zetalisp running on Lisp Machines as both the de facto and de jure standard for commercially deployed Lisp.
[This author still misses many features of Flavors / CLOS — especially the MOP (Meta Object Protocol). While a pale shadow lives on for Java programs in the form of AspectJ, the full power of lexical closures and a MOP can really only be experienced in dialects of Lisp to which CLOS-based frameworks have been ported.]
First Class Continuations
As noted in Tail Recursion and Loops in Scheme, procedure call and return has traditionally been conceptualized, and often literally implemented, by storing information that needs to be shared between the caller and callee in a LIFO stack. In this model, each procedure invocation corresponds to a LIFO entry, called a stack frame, containing the callee's parameters and the caller's return address — the point in the code where execution should continue after the callee exits. Compilers for many programming languages also create storage for other things like locally defined lexical variables and a placeholder for a function's return value as part of a procedure's stack frame.
The idea of a stack frame containing a return address is supported directly by most CPU instruction sets. The very term, "return address," comes from assembly language programming. Specifically, most CPU instruction sets include some form of "jump to subroutine" instruction for pushing onto the stack the memory address of the next instruction following the "jump to subroutine" while simultaneously transferring execution to some other address in memory, i.e. the starting instruction of the subroutine being called. The called procedure exits by invoking a "return from subroutine" primitive that pops the return address, i.e. the memory address of the instruction following the original "jump to subroutine" instruction, off the stack and jumps there. The net effect is that execution resumes at the instruction just after the invocation of the subroutine with the stack restored to the state it had just before the subroutine was called.
These ideas of a "stack frame" containing a "return address" along with other aspects of a procedure's invocation such as parameters being passed to it can be generalized beyond the features supported directly by a CPU's instruction set. Compiler writers talk about a procedure's continuation rather than its "return address" and its execution context rather than its "stack frame." Compilers can, and clever compiler writers do, find ways to detect and exploit certain special cases to enable CPS (Continuation Passing Style) as an optimization where practical. But due to the design of most programming languages this is really just a matter of terminology from the point of view of someone using the compiler to write code in some programming language. In nearly every popular language, procedure call and return occurs in patterns dictated by the rules of a LIFO stack and is subject to "stack depth" limitations requiring the introduction of lots of special purpose looping constructs and similar complexities.
Scheme takes a radically different approach. For one thing, Scheme's definition mandates that the compiler explicitly support one particular feature of CPS, often referred to as tail recursion. While compilers for other languages may or may not implicitly employ tail call optimization in certain cases, Scheme compilers are required to do so according to explicit, well-defined rules, making it far easier to use safely and correctly when writing code. Similarly, Scheme mandates that every procedure invocation corresponds to the creation of a lexical closure. Among other significant advantages, this completely separates the concerns of a program's "execution context" from its "continuation." Scheme takes advantage of this separation of concerns by introducing first-class continuations as an explicitly defined data type. A continuation, in Scheme, is the literal embodiment, accessible to ordinary Scheme programs, of what for most programming languages is a very abstract notion only directly accessible to the compiler.
What this means is that Scheme's syntax is very much smaller and much more consistent than most popular programming languages while supporting features most other languages either support in only limited ways or completely lack. For example, there are no looping constructs (while, do, for etc.) in Scheme. There is no catch / throw for exceptions. There is not even a return statement. Yet all of these constructs, and far more sophisticated flows of control, can be implemented easily in Scheme using a combination of just if and call-with-current-continuation (usually abbreviated in documentation and source code as call/cc).
Scheme's if special form behaves the way you would expect. It executes one of two branches of code depending on the result of evaluating an expression:

;; invoke `do-if-true` if `test` returns a value other than #f,
;; otherwise, invoke `do-otherwise`
(if (test)
    (do-if-true)
    (do-otherwise))
While if is sufficient for simple branching in otherwise completely sequential flows of control, call/cc enables literally any conceivable non-sequential flow of control. It is a built-in procedure which takes a procedure as an argument. That procedure is invoked, passing to it another procedure which represents the continuation of the flow of control of the original call to call/cc:

;; the given lambda is invoked by call/cc
;;
;; k is a procedure which, when invoked,
;; causes the invocation of call/cc to return
(call/cc (lambda (k) ...))
As stated in the comments in the preceding example, call/cc calls the procedure it is passed as an argument. That procedure is passed another procedure, named k in the example, as its parameter. What is special about k is that if it is ever called, the original call to call/cc will then return. In addition, whatever value is passed as a parameter to k will become the return value from call/cc.
To see how any of that can be useful, consider the return statement built into many programming languages. Scheme does not define it as a built-in procedure or special form, but it can be easily implemented using call/cc:

;; writes 1 to stdout followed by a newline
;;
;; returns 42
;;
;; execution never reaches the code that would
;; write 2 to stdout
(define (foo)
  (call/cc
    (lambda (return)
      (display 1)
      (newline)
      (return 42)
      (display 2)
      (newline))))
Invoking foo will cause 1 to be written to stdout while foo's return value will be 42. Execution of foo will not reach the code that would write 2 to stdout. Note that while call/cc is pretty magical there is nothing magical about the name return. The parameter name of the anonymous function could have been anything without changing the behavior of the program.
;; writes 1 to stdout followed by a newline
;;
;; returns 42
;;
;; execution never reaches the code that would
;; write 2 to stdout
(define (foo)
  (call/cc
    (lambda (k)
      (display 1)
      (newline)
      (k 42)
      (display 2)
      (newline))))
What is significant is that the continuation procedures obtained using call/cc are first-class data objects that can be bound to variables and returned as values from procedures. Combined with tail call optimization, they make CPS available as a standard coding idiom in Scheme, accessible to Scheme programmers and not just compiler writers, as a natural part of the definition of the language.
To reiterate for emphasis: most programming languages introduce a number of distinct syntactic constructs to provide access to CPS features in particular special cases while Scheme supports CPS directly as a natural idiom, eliminating the need for a lot of redundant, special-purpose structured programming syntax. In doing so, it provides the means for programmers to have very fine-grained control as well as define their own custom flow-of-control constructs.
Given:
(define (fn2)
  (call/cc
    (lambda (return)
      (display 1)
      (newline)
      (display (call/cc (lambda (j) (return j))))
      (newline)
      (display 3)
      (newline))))

(define (fn1)
  (let ((k (fn2)))
    (if (procedure? k)
        (k 2))))
Invoking fn1 will display:
> (fn1)
1
2
3
This is because:
1. fn1 calls fn2, binding what it returns to the variable k
2. fn2 creates a continuation for itself named return
3. fn2 writes 1 followed by a newline to stdout
4. fn2 creates a continuation for the point in its flow it has reached, binding it to j
5. fn2 uses its return continuation to pass the continuation j to its caller
6. the combination of the preceding steps means that k in fn1 is bound to j from fn2 when execution first returns to it
7. fn1 tests k to see if it is a procedure
8. since k is a continuation, procedure? returns #t and so fn1 invokes it, passing 2 as a parameter
9. passing 2 to the continuation in k causes the call/cc that created it in fn2 to return with the value 2 (since that is what was passed to it from fn1)
10. fn2 writes 2 (the value received from fn1) followed by a newline to stdout
11. fn2 writes 3 followed by a newline to stdout and returns whatever the newline procedure returned as its own value
12. the invocation of fn2 returns to fn1 a second time, this time with the final result returned from fn2's invocation of newline
13. since newline does not return a procedure, fn1 simply exits without taking any further action
In other words, fn1 and fn2 demonstrate how first-class continuations can be used to implement functionality similar to the return, catch and throw constructs of other languages. Further, they show that in Scheme "exceptions" are easily resumable, in ways that many other programming languages do not support. This is just an amuse-bouche-sized taste of what can be achieved using Scheme continuations.
There are a number of things to note about the preceding:
 Continuations can be used for two-way communication between execution contexts: just as fn1 receives data as return values from fn2, fn2 receives data as the parameter to its inner continuation when it is invoked by fn1.
 A given continuation can be invoked from multiple points in a flow: see the scan-sample example, which is a more elaborate version of fn1 / fn2.
 Multiple continuations can exist for different points in the flow through an execution context: both return and j are continuations for fn2's execution context, but at different points within its body. Care must be taken to use the correct continuation to reach the desired point in a non-sequential flow of control.
 Using continuations often ends up looking like a loop of some kind: here, the same call to fn2 exits twice, invoking the body of fn1 multiple times even though the code as written appears at first glance completely sequential. This is a standard pattern to which you should get used, and which you can exploit to good advantage.
The "hello world" level examples here only scratch the surface of what is possible with first-class continuations. See Engines from Continuations for an extended example of how a particular style of multitasking can be implemented using continuations.
The non-sequential flows of control enabled by continuations raise the same kinds of concerns as those for which Common Lisp's unwind-protect exists, along with the try... finally... construct available in many languages. For Scheme the stakes are even higher since, as demonstrated above, continuations are not a "one way trip" out of an execution context like return or throw. Continuations can also be used to reenter a previously exited context. This is the purpose of dynamic-wind, which is semantically similar to unwind-protect, except with support for running code both before and after the protected code every time the protected code is entered or exited. See scan-sample for a much further elaborated exception handling example demonstrating how the relationship between fn1 and fn2 shown above can be extended to handle something closer to a real-world use case.
© Copyright Kirk Rader 2023. All rights reserved.
dynamic-wind
The built-in `dynamic-wind` procedure is Scheme's equivalent to Common Lisp's `unwind-protect` or the `try ... finally ...` construct in languages like C#, Java or JavaScript. The name of Scheme's version takes into account that certain code blocks must be protected not only when the stack is being "unwound" upon exit from a function but also, thanks to first-class continuations, when the stack is being "rewound" upon re-entry. Both `unwind-protect` and `try ... finally ...` (the latter was modeled on the former) define two sections of code: a section being protected and a section that is guaranteed to execute when the protected section is exited, even in case of exit due to non-sequential flow of control. Scheme's `dynamic-wind` adds a third section that is guaranteed to execute before the protected section begins, no matter how many times control flows in and out of it.
```scheme
;; before-thunk will execute before protected-thunk
;; every time protected-thunk's body is entered
;;
;; after-thunk will execute after protected-thunk
;; every time protected-thunk exits
;;
;; dynamic-wind returns whatever is returned by
;; protected-thunk each time it exits
(dynamic-wind before-thunk protected-thunk after-thunk)
```
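As a minimal, hypothetical sketch (the names, the event log and the messages here are invented for illustration and are not taken from this site's sources), the following demonstrates that the before and after thunks run on *every* entry to and exit from the protected thunk, including re-entry via a continuation:

```scheme
;; log of which thunks have run, newest first
(define events '())
(define (log! tag) (set! events (cons tag events)))

(define resume-k #f)  ; continuation captured inside the protected thunk
(define resumed #f)   ; ensures the re-entry happens exactly once

(dynamic-wind
 (lambda () (log! 'before))                  ; runs on every entry
 (lambda ()
   (call/cc (lambda (k) (set! resume-k k)))  ; capture a re-entry point
   (log! 'during))
 (lambda () (log! 'after)))                  ; runs on every exit

;; jump back into the protected thunk exactly once; the before
;; thunk fires again on the way in, the after thunk on the way out
(unless resumed
  (set! resumed #t)
  (resume-k #f))

;; (reverse events) => (before during after before during after)
```

Note that the second `before` / `after` pair appears even though the protected thunk was re-entered by invoking a continuation rather than by evaluating the `dynamic-wind` form again.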
X-Ray Fluorescence (XRF) spectrometry is used to determine the composition of materials, for example in scrap yards and recycling facilities. An industrial XRF scanner can emit levels of radiation that are quite dangerous if, for example, someone were to expose a hand or other body part to it. Imagine a program to control such a device. It might have a function, `scan-sample`, as the main entry point, with the following requirements:

- The sample chamber must have a door which prevents users from reaching inside when it is locked
- The X-Ray emitter can only be turned on when the door is locked
- The XRF spectrogram is recorded and made available to the user after a successful scan
One can imagine helper functions, called by `scan-sample`, for each of these operations:
```scheme
;; naive (unsafe!) implementation of scan-sample
(define (scan-sample)
  (lock-door)
  (energize-emitter)
  (record-data)
  (de-energize-emitter)
  (unlock-door))
```
As long as everything proceeds as expected, the naive definition implements the requirements for scanning a sample laid out above. But sometimes unexpected things can happen. Imagine that the XRF scanner is able to detect when a sample must be repositioned within the chamber in order to obtain a complete spectrogram. In that case, it needs to be able to interrupt the scanning process, request that the user reposition the material within the chamber and then resume from where it left off. Scheme continuations can be used to support such requirements.
What follows is a complete implementation of a simulation of `scan-sample` that uses `stdout` to log the simulated operations. It uses continuations to throw resumable exceptions when user intervention is required, together with `dynamic-wind` to ensure that the emitter is always off when the door is unlocked.
```scheme
;; demonstrates:
;;
;; - continuations for non-sequential flow-of-control
;;
;; - dynamic-wind to protect blocks of code when using non-sequential flows-of-control
;;
;; - locally defined variables for encapsulation
;;
;; - macros for modularity
;;
;; note from the output that the emitter is only on when the door is locked and the
;; sample is only repositioned when the door is unlocked. note also that the counter
;; shows that not only are the safety constraints being respected, the operations are
;; being done in the correct order
(define (scan-sample)
  (let ((display-message (lambda (message sample)
                           ;; common helper that prints a message to stdout
                           (display (string-join (list message (number->string sample) "\n") "")))))
    ;; deliberately not mutually-callable procedures representing various states of the xrf
    ;; spectrometer hardware
    (let ((lock-door (lambda ()
                       ;; simulate locking the sample chamber door
                       (display "door locked\n")))
          (energize-emitter (lambda ()
                              ;; simulate turning on the x-ray emitter
                              (display "emitter energized\n")))
          (de-energize-emitter (lambda ()
                                 ;; simulate turning off the x-ray emitter
                                 (display "emitter de-energized\n")))
          (unlock-door (lambda ()
                         ;; simulate unlocking the sample chamber door
                         (display "door unlocked\n")))
          (record-data (lambda (sample)
                         ;; simulate recording a xrf spectrogram
                         (call/cc (lambda (return)
                                    (display-message "scanning " sample)
                                    (display "please reposition sample\n")
                                    (set! sample (call/cc (lambda (resume) (return resume))))
                                    (display-message "scanning " sample)
                                    (display "please reposition sample\n")
                                    (set! sample (call/cc (lambda (resume) (return resume))))
                                    (display-message "scanning " sample)
                                    (display "data recorded\n"))))))
      (let-syntax ((with-emitter-energized (syntax-rules ()
                                             ;; ensure x-ray emitter is on only while the protected
                                             ;; forms are executing
                                             ((with-emitter-energized protected ...)
                                              (dynamic-wind
                                                energize-emitter
                                                (lambda () protected ...)
                                                de-energize-emitter))))
                   (with-door-locked (syntax-rules ()
                                       ;; ensure the sample chamber door is locked while the
                                       ;; protected forms are executing
                                       ((with-door-locked protected ...)
                                        (dynamic-wind
                                          lock-door
                                          (lambda () protected ...)
                                          unlock-door)))))
        (letrec ((count 1)
                 (resume (with-door-locked (with-emitter-energized (record-data count)))))
          ;; keep scanning and following prompts to reposition the sample until record-data
          ;; signals it is finished by not returning a continuation
          (if (procedure? resume)
              (begin
                (display-message "repositioning " count)
                (set! count (+ count 1))
                (resume count))
              (display-message "samples scanned: " count)))))))
```
See xrf.scm
While all of the subroutines and helpers are defined inside the body of `scan-sample` for encapsulation, the various `let`, `let-syntax` and `letrec` forms are nested in ways that also help enforce the requirements. In particular, none of the main subroutines, `lock-door`, `energize-emitter` and so on, can call one another because they are all deliberately defined in a single `let` (not `letrec`). Only the `display-message` helper is visible to the entire body of `scan-sample` because it is in its own outermost `let`. The only `letrec` is the innermost scope, and it is used instead of `let` only to allow the invocation of `record-data` to receive `count` as a parameter without having to introduce yet another level of lexical scope nesting.
Here is the result of invoking `(scan-sample)`:
```
> (scan-sample)
door locked
emitter energized
scanning 1
please reposition sample
emitter de-energized
door unlocked
repositioning 1
door locked
emitter energized
scanning 2
please reposition sample
emitter de-energized
door unlocked
repositioning 2
door locked
emitter energized
scanning 3
data recorded
emitter de-energized
door unlocked
samples scanned: 3
```
As can be seen from the output, the correct set of operations is performed, in the correct order relative to one another, so as to conform to the safety requirements. The numeric counter shows that the operations are also carried out in the correct order with regard to the end-to-end flow. This means that `record-data` is actually completely sequential from its own point of view, while the body of `scan-sample` uses the invocation of a continuation in tail position to turn itself into a loop. The invocation of the before and after logic by the various subroutines is encapsulated in the `with-door-locked` and `with-emitter-energized` special forms.
Engines from Continuations
A practical application of continuations to implement sophisticated flows of control.
In Engines from Continuations, Dybvig and Hieb [1988] describe how the flow-of-control abstraction known as "engines" can be implemented using Scheme's first-class continuations. The following source code is adapted from one of the versions of `make-engine` described by Dybvig and Hieb.
In this implementation, an engine is simply a Scheme procedure of three arguments. For example, the following code will invoke the procedure named `thunk` as an engine:
```scheme
(letrec ((return
          (lambda (value remaining-ticks)
            (display "engine returned ")
            (display value)
            (newline)))
         (expire
          (lambda (new-engine)
            (display "engine expired, ")
            (display "use the given new ")
            (display "engine to resume")
            (newline)))
         (thunk
          (lambda ()
            (decrement-timer)
            (display "engine running")
            (newline))))
  ((make-engine thunk) 1 return expire))
```
This is different from simply calling `thunk` directly in that it will be given a finite amount of "fuel" in the form of "timer ticks." If `thunk` completes before the engine runs out of fuel, it will call `return`. If the fuel runs out before `thunk` completes, `expire` will be invoked with a new engine that can be used to resume the interrupted computation in `thunk`.
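To make the mechanism concrete, here is a simplified, hypothetical sketch of how such an engine can be built from `call/cc` alone. This is *not* the adapted source described on this page nor the code from the paper; in particular, it uses a single global clock and relies on computations calling `decrement-timer` cooperatively, and all of the names are this sketch's own:

```scheme
(define clock 0)      ; remaining fuel for the currently running engine
(define do-return #f) ; escapes to the current invocation's return handler
(define do-expire #f) ; escapes to the current invocation's expire handler

(define (new-engine resume)
  ;; an engine is a procedure of fuel, a return handler and an
  ;; expiration handler
  (lambda (ticks return expire)
    ((call/cc
      (lambda (escape)
        (set! do-return
              (lambda (value ticks-left)
                (escape (lambda () (return value ticks-left)))))
        (set! do-expire
              (lambda (resume-k)
                (escape (lambda () (expire (new-engine resume-k))))))
        (resume ticks))))))

(define (make-engine thunk)
  (new-engine
   (lambda (ticks)
     (set! clock ticks)
     (let ((value (thunk)))
       (do-return value clock)))))

(define (decrement-timer)
  ;; cooperative "timer tick": procedures run by an engine must call
  ;; this periodically to consume fuel
  (when (> clock 0)
    (set! clock (- clock 1))
    (when (zero? clock)
      ;; out of fuel: capture the rest of the computation and hand it,
      ;; wrapped in a new engine, to the expiration handler
      (call/cc
       (lambda (k)
         (do-expire (lambda (ticks)
                      (set! clock ticks)
                      (k #f))))))))
```

Invoking an engine with insufficient fuel causes `expire` to receive a new engine closing over the interrupted computation; invoking that new engine with more fuel resumes exactly where `decrement-timer` left off.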
See the warnings in the source code comments regarding the use of `decrement-timer` in procedures passed to `make-engine`.
This implementation differs from that in the original paper in a few ways, all thoroughly documented in comments in the source code. Of particular interest:

- Eliminates the need for the `make-simple-engine` helper function defined and used in the paper. Specifically, `make-engine` wraps the invocation of the procedure it was passed in a call to `engine-return`.
- Adds `engine-expire` for symmetry with `engine-return`. Calling `engine-expire` causes the currently executing engine to pass control to its expiration handler immediately, in the same way that calling `engine-return` causes an immediate invocation of the return handler.
Home Automation
```mermaid
graph TB
    browser["Web Browser"]
    subgraph "Home LAN"
        huebridge["Philips Hue Bridge(s)"]
        huedevice["Philips Hue Lights\nand Accessories"]
        pvhub["PowerView Hub"]
        pvdevice["Hunter Douglas Window Coverings"]
        subgraph nodered["Node-RED"]
            vue["Vue / Vuetify\nWeb App"]
            flows["Flows\nhttps://github.com/parasaurolophus/home-automation"]
            dnssd["@parasaurolophus/node-red-dnssd"]
            eventsource["@parasaurolophus/node-red-eventsource"]
            vue <-->|WebSocket| flows
            dnssd --> flows
            eventsource --> flows
        end
        flows -->|HTTP| pvhub
        flows -->|HTTPS| huebridge
        huebridge -->|SSE| eventsource
        huebridge -->|mDNS| dnssd
        huedevice <-->|Zigbee| huebridge
        pvhub <--> pvdevice
        click flows "https://github.com/parasaurolophus/home-automation" _blank
        click dnssd "https://flows.nodered.org/node/@parasaurolophus/node-red-dnssd" _blank
        click eventsource "https://flows.nodered.org/node/@parasaurolophus/node-red-eventsource" _blank
        click vue "https://github.com/parasaurolophus/home-automation/tree/main/dashboard" _blank
    end
    browser <-->|"HTTPS /\nWebSocket"| vue
```
- https://github.com/parasaurolophus/home-automation
- https://flows.nodered.org/node/@parasaurolophus/node-red-dnssd
- https://flows.nodered.org/node/@parasaurolophus/node-red-eventsource
Features

- Entirely self-contained Node-RED based application
  - No clouds!!
  - No external configuration outside of ~/.node-red/settings.js
  - Does require configuration of the host running Node-RED and the home LAN router in order to support secure access to the dashboard from outside the home*
  - No additional servers or services required other than those provided by the Hue and PowerView hubs
- Consolidated dashboard / web-based UI using Vue / Vuetify
  - Requires a separate build step after cloning the repo from GitHub**
- Time and date based automation
  - Turn lights on and off based on local sunrise / sunset
  - Use special-occasion lighting themes based on date
  - Open and close window coverings based on the sun's local altitude and azimuth on any given day
*If remote access is not a priority, just leave out the HTTPS-related configuration from ~/.node-red/settings.js. Detailed instructions for obtaining and installing certificates and configuring IP reservations and port forwarding on routers are beyond the scope of this web page.

**An earlier version of this project implemented the dashboard using pure HTML 5 / JavaScript, which did not require additional build steps. I admit it. I was led astray by the pretty toggle switches.
Philosophy of Science
How the Parallel Postulate Changed Everything
- Abstract
- Euclid's Parallel Postulate
- Formal Linguistics, Set Theory and Infinite Quantities
- The Computability Problem
- Computability Theory to the Rescue?
Abstract
Mathematicians from Euclid's time to the start of the modern era were strangely obsessed with the fifth, a.k.a. parallel, postulate from his treatise on two-dimensional Geometry, Elements. Over the course of millennia there were countless attempts to prove whether the parallel postulate was really an axiom or, as was long suspected but eventually disproven, actually a theorem hiding simpler axioms under its petticoats. The solution to this question precipitated a revolution in mathematical theory and practice that ultimately birthed the postmodern "information age" as well as contributing to scientific and engineering advances across many disciplines, including General Relativity and Quantum Mechanics.
No, really.
Euclid's Parallel Postulate
Once upon a time, mathematicians in the Western tradition regarded themselves as engaged in a science, not different in kind from chemistry or physics. In this way of regarding mathematical truth, the fact that \( 2 + 2 = 4 \) is proven by empirical observation: if I hold two pebbles in my hand then pick up two more, I will be holding a total of four pebbles. This empiricism was used not only to justify claims about which mathematical statements were true and which were not, it guided the way that mathematical problems were posed and solved. As mathematical theories advanced and techniques expanded to include an ever increasing degree of abstraction, they remained for a very long time rooted in empiricism. Negative numbers, for example, were initially justified by using the kind of reasoning modern bookkeepers would recognize as a double-entry ledger. (E.g. \(-3 + 2 = -1\) is like telling someone who is holding $2 that they owe you $3; after they pay you all they have, they still owe you $1 — and that is how ancient generations of mathematicians actually justified introducing negative numbers into their calculations.)
Euclid's Elements epitomizes this view of the nature and practice of mathematics. While providing the model for an axiomatic approach to mathematical proof, his actual method was firmly based in constructions, which are empirically verifiable operations using a physical straightedge and compass. Euclid's axioms and postulates all hearken directly to such empiricism. They are nearly all very simple declarative statements of easily observable facts or simple steps in a construction. "To extend a point to form a line," "to extend a line to form a plane," and so on.
Then there is the fifth postulate, often referred to as the parallel postulate. It looks nothing like the others. In Euclid's original formulation, it is a nearly paragraph-sized sentence rather than a short phrase and can be difficult to puzzle out without the aid of a diagram. Even later attempts to produce simpler formulations did not detract from the degree to which it seems like an outlier compared to the other axioms and postulates.
Euclid's parallel postulate:
> If a line segment intersects two straight lines forming two interior angles on the same side that are less than two right angles, then the two lines, if extended indefinitely, meet on that side on which the angles sum to less than two right angles.
From Euclid's time right into the start of the modern era, mathematicians were not happy about this. Over the course of millennia, countless attempts were made to show that the parallel postulate must actually be a theorem, derivable from simpler axioms that look more like the others in Euclid's book. While this produced a number of alternative formulations of the axiom, some of which may be easier to visualize than others, none was a reduction to radically simpler axioms. For example:
Alternative formulation of the parallel postulate:
> Given a line and a point not on that line, there is exactly one line that can be drawn through the given point that does not meet the given line when both lines are extended indefinitely in both directions.
After enough time and failed attempts, Western mathematicians eventually shifted their focus away from trying to prove that the parallel postulate was really a theorem. Instead they strove to produce a formal demonstration of why it is actually an axiom. By the early 19th Century, mathematical techniques had evolved to a point that allowed the following kind of indirect proof to be carried out:

1. Assume some different axiom that contradicts Euclid's parallel postulate in some specific way.
2. Show that the equivalents of Euclid's constructions and proofs that can be carried out using this different version of the parallel postulate are just as internally consistent (i.e. produce no self-contradictions) as Euclid's "classical" proofs.
If the second step of the preceding argument can be accomplished for one or more variations of the parallel postulate assumed in the first step, then this proves that Euclid's fifth postulate was an axiom all along. If it were not an axiom but a theorem, altering it would have produced a contradiction among the more basic axioms from which it would have been derived.
The key here is to separate the idea of "internal consistency" from "empirical truth." The Non-Euclidean Geometries invented by Bolyai, Lobachevsky, Riemann et al. each produce different theorems that contradict those described by Euclid, but as long as the axioms for each such Non-Euclidean Geometry do not contradict themselves for a given version of the parallel postulate, they demonstrate that each such variation on the theme of "Geometry" is just as good, in some sense, as every other — including Euclid's own. Carrying out proofs in Riemann's Elliptic Geometry is like doing Euclid-style constructions on the surface of a spheroid, where all lines forming "great circles" around the circumference will eventually intersect. Lobachevsky's Hyperbolic Geometry assumes, on the other hand, that you can draw more than one line through a given point, none of which intersects some other given line. This is harder to visualize than Elliptic Geometry. In this case it is like doing Euclid-style constructions on a surface with a hyperbolic curvature. Neither variation corresponds to what happens when, like Euclid, you use a straightedge and compass on a flat surface. But neither Lobachevskian nor Riemannian Geometry produces self-contradictory results. And so Euclid's fifth postulate was formally demonstrated to have been an axiom all along.
Mathematicians of their day rejoiced at Bolyai's, Lobachevsky's and Riemann's accomplishments for multiple reasons. They could finally regard the whole question pertaining to the status of the parallel postulate as satisfactorily settled. They could start playing with the new mathematical techniques developed for this enterprise, seeking additional areas of mathematical exploration. But there was a reckoning to be made as well: in order to accept this highly desirable result, mathematicians were forced to fundamentally alter their understanding of their own profession. Mathematics came to be understood as being concerned with validity (internal consistency) rather than (empirical) truth. Euclid's theorems do not merely form an interesting set of mutually consistent mathematical formulas. Euclid's reliance on "constructions" demonstrates that his theorems describe properties of objects and operations carried out in the real world. Non-Euclidean Geometries have no such correspondence to observable reality (at least not at the scale at which human senses operate). Not only does every valid Non-Euclidean Geometry contradict Euclid's "common sense" Geometry, each contradicts every other. But if validity is the goal rather than correspondence to empirical truth, that does not matter at all. A "mathematical truth" was accepted as proven even though that required abandoning the need or ability to assert anything at all about empirical truth solely on the basis of mathematics.
This is more than a mere matter of terminology (substituting "validity" as if it were a synonym of "truth"). Mathematicians knew they could not have their cake and eat it, too. In order for the indirect proof to work at all, you must first accept that the only thing that is significant mathematically about any set of axioms is the set of valid theorems they imply. You cannot impose any additional constraint, such as that when you break out your straightedge and compass and start drawing out the steps of a construction you will get the shapes and angles the axioms dictate. If the latter kind of empirical verification is retained from Euclidean Geometry, then the Non-Euclidean Geometries fail before the check for validity even becomes relevant. The term metamathematics was coined to refer to what now had to be regarded as special cases where a given mathematical theory could be shown to correspond in some way to empirical reality. Euclid's constructions provide the metamathematical theory that connects his Geometry to common sense experience, a connection that is not available to Non-Euclidean Geometries.
Formal Linguistics, Set Theory and Infinite Quantities
The change in outlook from truth to validity opened the way to vast new realms of mathematical inquiry. If mathematicians could no longer rely on counting their fingers or drawing constructions to prove the correctness of their formulas, they needed a new set of robust tools to replace empiricism. Suddenly, the nature of "validity," itself, became a subject of study along with an examination of the nature of axiomatic systems and the formal languages used to define and apply them when carrying out mathematical proofs. The detachment from empiricism also gave permission for mathematicians to consider subjects that had formerly been forbidden because they referred to mathematical objects and states of affairs that could not exist in the real world.
For example, right up to the middle of the 19th Century it was considered "out of bounds" for mathematicians to posit the existence of "completed infinities" in their calculations. After all, you can never hold an infinite number of pebbles in your hand. When Leibniz and Newton separately invented Calculus in the 17th Century, they each had to perform some mental gymnastics that allowed them to treat "infinitesimal" quantities as mathematically sound while still shying away from the mathematical inverses of infinitesimals, i.e. infinite quantities. As a foreshadowing of how the world of science and mathematics would react to Non-Euclidean Geometry a couple of centuries later, Leibniz' and Newton's contemporaries found Calculus too compelling and useful to ignore, while consciously repressing the cognitive dissonance required to embrace infinitesimals. That repression finally ended when the generation of mathematicians following the invention of Non-Euclidean Geometries fully embraced the consequences: go ahead and construct proofs based on infinite quantities or any other formerly forbidden category of mathematical object. So long as your results are internally consistent, your mathematics is valid. There may or may not be a metamathematical theory mapping your results to properties of the real world. If there is, that may be a serendipitous side-effect, of potential use to scientists and engineers (among whom mathematicians could no longer count themselves), but such a correspondence to reality is no concern of "pure" math in this new way of looking at things.
Simultaneously, the new emphasis on validity begged the question: what exactly is "validity" in the first place? Previously, it had been considered both necessary and sufficient, in order to demonstrate the correctness of a mathematical theory, to show how it described properties of real things. Even abstractions like negative numbers were justified by treating them the way that bookkeepers treat balance sheets. Oddities like what happens when you try to divide any number by zero were regarded as special cases needing no further explanation or inquiry than, "that can't happen in the real world, so just skip over it." But once approaches were accepted like those which used Non-Euclidean Geometries to prove something about classical Geometry, such empiricism was no longer necessary nor sufficient. The nature of mathematics and mathematical formulas in and of themselves suddenly became a topic of great interest to mathematicians. Thus were Symbolic Logic, Set Theory and Formal Linguistics born as areas of intense mathematical interest.
Symbolic Logic is a mathematical system for determining whether or not a set of statements is mutually consistent, i.e. forms a valid argument. Set Theory is an extension of Symbolic Logic that considers the mathematical properties of collections of things. Formal Linguistics is the study of the properties of the sets of symbols, and the rules for combining them, used to express mathematical formulas (or, later, computer programs) in meaningful and useful ways.
Cantor used this new freedom and these novel approaches to consider the following question: is there more than one infinite quantity? If Cantor had been born even a generation earlier, he would have been laughed out of his doctoral program for even posing such a question. Infinite quantities were "right out" (to quote Monty Python) as mathematical objects in the first place, let alone the question of what properties they might have. But by Cantor's day, the world of academic mathematics was ready to give almost anything a go.
The schoolyard understanding of "infinity" is that once you reach it, that's all there is. If a grade-schooler says to her best friend, "I love you," the friend may reply, "I love you more!" The natural response is, "I love you twice as much!" Which elicits, "I love you times 10!" Then "...times 100!" Eventually, one of the amicable combatants will end the game with, "...times infinity," to which, on the schoolyard at least, there is no retort since (as every child understands) once you reach "infinity" you cannot count any higher. Cantor understood this schoolyard intuition as meaning that what most people think of as \(\infty\) corresponds to the cardinality of the set of natural numbers. As a Set Theoretician would say, when you count up indefinitely from 1 (or 0, depending on your tastes and the context of any given discussion), you are defining the set of natural numbers by intension, and the cardinality of that set is \(\infty\).
To understand Set Theory's terminology like "cardinality," "intension vs. extension" and so on, consider the letters of the Latin alphabet as a set of symbols. There are twenty-six of them as used in English, so the cardinality of this set is 26. "Cardinality" can be loosely understood as "the number of members of a given set." However, the term "cardinality" is used so that the notion can be extended to sets with infinitely many members. Historically, people were hesitant to refer to infinite quantities as "numbers," so a different term was invented for "number of members" when that number's magnitude was potentially infinite.
There is a lot more underlying these distinctions of terminology, related to the original motivation for creating Set Theory in relation to ideas regarding the nature of numbers and of mathematics itself. Mathematicians use terms like "cardinal numbers" and "cardinality" when talking about magnitudes, i.e. quantities. They use terms like "ordinal numbers" and "ordinality" when talking about the relationships (ordering) among such magnitudes. Sets composed of ordered magnitudes can be shown to have specific properties of great use in Set Theory. At a deeper level, separating the concept of "cardinality" from the concept of "number" allowed early Set Theoreticians to use the concept of "cardinality of a set" to define the concept of "number" in purely Set Theoretical terms. But I digress....
Sets are defined by enclosing their contents in curly braces \(\left(\{ \dots \}\right)\) in Set Theory. For a set defined by extension, what is inside the curly braces is an explicit list. For example the set of "letters of the alphabet" as used in English is
\[ \{A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U, V, W, X, Y, Z\} \]
as demonstrated by the "...now I've learned my A, B, C's..." chant every American gradeschooler is taught. What makes the preceding collection of letters enclosed in curly braces "the set of letters of the alphabet" is simply the assertion that the explicitly given list is the given set. This is what it means to define a set by extension. Only finite sets can be defined by extension, because only a finite number of items can be listed explicitly.
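In the magnitude-style bar notation for cardinality used later in this discussion, this example can be written as follows (an illustrative restatement of the point above, not a formula from the original text):

\[ \left|\{A, B, C, \dots, Z\}\right| = 26 \]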
But what about a set like that of all the natural numbers? Prior to Cantor's day, the answer would have been, "there is no such thing as the set of natural numbers because that would represent a completed infinity." But the new interest in things like Set Theory, and the new freedom to explore valid theories whether or not they could be empirically verified, allows one to define "the set of natural numbers" by intension. That is, to define rules by which anything in the universe can be tested to determine whether or not it belongs to the set. So the set of natural numbers can be defined by intension using a rule.
Definition of the set of natural numbers by intension:

> Anything is a natural number if and only if it is the number 0 or is the result of adding 1 to a natural number.
Using the formal language of Set Theory, the preceding definition becomes:
\[ \mathbb{N} = \{\, x \mid (x = 0) \lor \exists y \,((y \in \mathbb{N}) \land (x = y + 1)) \,\} \]
The preceding formula of Set Theory would be read in English as "the set of all \(x\)'s such that \(x\) equals zero or there is a \(y\) where \(y\) is an element of \(\mathbb{N}\) and \(x\) equals \(y\) plus one."
- \(\mid\): "such that" when defining a set by intension
- \(\lor\): disjunction, i.e. "inclusive or"
- \(\land\): conjunction, i.e. "and"
- \(\exists\): existential quantification, i.e. "there is" or "there exists"
- \(\in\): set membership, i.e. "is a member of" or "in"
By the preceding rule, 1 is in the set of natural numbers because it is the result of \(0 + 1\). Given that, so is 2, because it is the result of \(1 + 1\). The number 1.5 is not a natural number because there is no way to produce it by adding 1 to any natural number. The letter "A" is not a natural number because the arithmetic addition operation does not even apply to it. And so on for everything in the universe, including abstractions like numbers.
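The intensional rule can be rendered directly as a recursive membership test. The following sketch is this author's own illustration (the predicate `natural?` and its details are not part of the original text), written in the same Scheme used elsewhere on this site:

```scheme
;; #t if and only if x is 0 or the result of adding 1 to a natural
;; number, mirroring the intensional definition above
(define (natural? x)
  (and (integer? x)   ; arithmetic must apply to x at all
       (exact? x)     ; rule out inexact values like 2.0
       (or (zero? x)  ; base case: 0 is a natural number
           (and (positive? x)
                (natural? (- x 1))))))

;; (natural? 2)   => #t  since 2 = 1 + 1 and 1 = 0 + 1
;; (natural? 1.5) => #f  since 1.5 is not an exact integer
;; (natural? -1)  => #f  since -1 cannot be reached by adding 1 to 0
```

Anything at all can be passed to the predicate, echoing the point that the rule tests "everything in the universe": `(natural? "A")` is simply `#f` because `integer?` rejects non-numbers.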
Note that the preceding rule is self-referential in that it refers to the set \(\mathbb{N}\) when defining \(\mathbb{N}\) by intension. Such self-reference is characteristic of the definitions of infinite sets, but must be used with care. Self-referential definitions can easily slip into circular arguments when abused. They can also lead to problems like the "liar paradox." The sentence, "I am lying," is a well-formed utterance in English. It consists of meaningful English words, correctly arranged according to the rules of English syntax and grammar. But it cannot be said to be either true or false. If it were true, then I would be lying. But since what I am saying is that I am lying, to be true it must be false and vice versa. It cannot be both true and false, so it must be neither. Later, we will see how such paradoxical self-reference came to be used to prove important things about the nature of sets of numbers and, by Gödel, the nature of formal languages by treating them as sets of numbers.
This allows "the set of natural numbers" to be well defined, but what about its cardinality? Cantor begins his exploration of infinite quantity simply by stipulating that the set of natural numbers, when defined by intension, has a cardinality, which he called \(\aleph_{0}\) (pronounced "aleph null"):
\[ \aleph_{0} = \left|\mathbb{N}\right| \]
where the "magnitude" bars surrounding the name of a set refer to its cardinality, i.e. \(\left|\mathbb{N}\right|\) simply means "the cardinality of the set \(\mathbb{N}\)."
It is not a coincidence that Set Theory uses the same notation for "cardinality" as that used for "magnitude" in Number Theory. One of the original motivations for defining Set Theory in the first place was as part of an attempt to "derive" Arithmetic, in some sense, from pure Logic. Sets with their cardinalities became the "proxies" for numbers with their magnitudes in that context. (And that is why a different term, "cardinality," for "number of members" was introduced in the first place, as well; i.e. to avoid circularity when using the concept of cardinality when providing a definition of the concept of number.) That whole enterprise failed, but in a far more interesting way than if it had succeeded. What was actually shown was not that Logic, Arithmetic, Algebra, Geometry, Calculus etc. existed in a hierarchy where each level in the hierarchy was somehow derivable from the next level down. (But that is where the term "higher math" comes from, which you will still hear non-mathematicians use to this day.) What was shown is that any formal language with certain properties can be used to "model" the axioms of any other such language. I.e. Set Theory is just as "derivable" from Arithmetic as Arithmetic is from Set Theory — but it is not really "derivation" at all but, rather, "representation."
Since there is no limit to the number of times you can add 1 to a natural number, \(\left| \mathbb{N} \right|\) represents an infinite quantity. But is \(\left| \mathbb{N} \right|\) "all there is" when it comes to infinite quantities, as in the schoolyard intuition regarding \(\infty\)?
Cantor's answer, as it turns out, is, "no, there are infinitely many infinite quantities which differ in magnitude in the same way that the magnitudes of finite numbers differ." To prove this, we need to introduce some additional jargon: mapping and countability. A mapping is a rule by which elements in one set are associated with elements in another set. For finite sets defined by extension, such mappings can be defined by extension as well. A finite mapping can be represented as a set of ordered pairs where the first member of a pair is from one set and the second member of a pair is from the other set.
As an example, consider two sets, \(n\) and \(l\) where
\[ \begin{align*} \text{Let } n & = \{ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26 \} \\ \text{and } l & = \{ A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U, V, W, X, Y, Z \} \end{align*} \]
Obviously, \(n\) is the set of natural numbers from 1 through 26 while \(l\) is the set of the 26 letters of the Latin alphabet as used in English. An obvious mapping from \(n\) to \(l\) would be:
\[ \begin{align*} \text{Let } \alpha = \{ & <1, A>, <2, B>, <3, C>, <4, D>, <5, E>, <6, F>, <7, G>, \\ & <8, H>, <9, I>, <10, J>, <11, K>, <12, L>, <13, M>, <14, N>, \\ & <15, O>, <16, P>, <17, Q>, <18, R>, <19, S>, <20, T>, <21, U>, \\ & <22, V>, <23, W>, <24, X>, <25, Y>, <26, Z> \} \end{align*} \]
This is simply associating each letter with its ordering when reciting the "A, B, C" song, which is also the conventional sorting order when alphabetizing entries in a telephone directory, a book's index or the like. This kind of pairing-off between elements of sets is called a "mapping." The set from which the first member of each pair comes is called the "domain" of the mapping. The set from which the second member of each pair comes is called the "codomain" or "range." Another name for a mapping is a "function."
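For programmers, a mapping defined by extension is exactly what a Python dictionary is. The following sketch rebuilds \(\alpha\) as a dict (the variable names are illustrative, not from Cantor):

```python
import string

# The mapping alpha from the text: each ordered pair <n, letter>
# becomes one key/value entry in a dict.
alpha = {i + 1: letter for i, letter in enumerate(string.ascii_uppercase)}

assert alpha[1] == "A"
assert alpha[26] == "Z"
assert len(alpha) == 26
```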
Mappings are categorized by how they associate members of the domain and range. Any "pairing off" between elements of two sets is a mapping, no matter how incomplete or haphazard:
\[ \text{Let } \beta = \{ <3, A>, <17, B>, <17, Q>, <25, B> \} \]
The set \(\beta\) associates only a few elements of \(n\) with even fewer elements of \(l\): it pairs both 17 and 25 with the letter B, as well as pairing 17 with both B and Q. Yes, \(\beta\) is a mapping, but a very arbitrary and incomplete one that does not correspond to anything meaningful or useful.
The set \(\alpha\) is different. Note that every element of the domain is included exactly once. That is what is called an injective or "into" mapping. In addition, every element of the range is included exactly once, making it a surjective or "onto" mapping. Since it is both injective and surjective, it is a bijective or "one-to-one" mapping. Mappings that are injective or surjective have special properties that can make them useful in making claims about the nature of relationships between sets. Bijective mappings have the most interesting and powerful properties. A bijective mapping represents some function or computation that, given any element of either set, allows you to choose or compute exactly the member of the other set to which it corresponds. In particular, two sets must have exactly the same cardinality for there to be a bijective mapping between them, because every element of both sets is included in the mapping exactly once.
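These properties are mechanically checkable. The following Python sketch follows the text's working definitions, treating a mapping as a collection of pairs (standard textbook phrasings of "injective" and "surjective" differ slightly, but agree for \(\alpha\)):

```python
def is_injective(pairs, domain):
    """Every element of the domain appears exactly once as a first member."""
    return sorted(a for a, _ in pairs) == sorted(domain)

def is_surjective(pairs, codomain):
    """Every element of the codomain appears exactly once as a second member."""
    return sorted(b for _, b in pairs) == sorted(codomain)

def is_bijective(pairs, domain, codomain):
    return is_injective(pairs, domain) and is_surjective(pairs, codomain)

letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
n = list(range(1, 27))
alpha = list(zip(n, letters))                       # <1,A>, <2,B>, ...
beta = [(3, "A"), (17, "B"), (17, "Q"), (25, "B")]  # the arbitrary mapping

assert is_bijective(alpha, n, letters)   # alpha pairs everything off exactly once
assert not is_injective(beta, n)         # beta skips most of the domain
```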
As defined, \(\alpha\) is bijective because \(n\) only includes the subset of natural numbers from 1 through 26, which gives it the same cardinality as \(l\). But what if we replace \(n\) with \(\mathbb{N}\) in the definition of \(\alpha\)? In that case, \(\alpha\) remains surjective but is no longer injective, since only a subset of the domain, \(\mathbb{N}\), is included in the mapping. Since it is no longer injective, it can no longer be bijective. Any set for which there is a surjective mapping from \(\mathbb{N}\) is said to be countable. Any set for which there is a bijective mapping with \(\mathbb{N}\) is said to be countably infinite. Reasoning by analogy with the original, bijective, definition of \(\alpha\), any countably infinite set has, by definition, the same cardinality as \(\mathbb{N}\). I.e.
\[ \begin{align*} \forall s \, (I(s) \implies \left| s \right| = \left| \mathbb{N} \right|) \\ \text{where } I(s) = s \text{ is countably infinite} \end{align*} \]
or, "for every \(s\), if \(s\) is countably infinite then the cardinality of \(s\) is the same as the cardinality of \(\mathbb{N}\)."
Countability, infinite or otherwise, has other interesting implications. For example, any set whose cardinality is less than or equal to \(\left| \mathbb{N} \right|\) and for which there is a "natural ordering" among its members is countable, since you can arrange the members of that set in their natural order and then map the first element to 0, the second element to 1 and so on.
A "naturally ordered" set is simply one for which its elements have some kind of intrinsic sorting order. Sets of numbers are ordered by the "less than" operation. If one accepts the "A, B, C" song as canonical, English words can be sorted alphabetically, and so on.
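The "count off the sorted members" procedure is short enough to show directly. This Python sketch uses a small, alphabetically ordered set of words as the naturally ordered set (the example data is my own):

```python
# Any naturally ordered set no larger than N can be counted off: arrange the
# members in their natural order, then map 0 to the first, 1 to the second, ...
fruits = {"cherry", "apple", "banana"}
counting = dict(enumerate(sorted(fruits)))   # alphabetical "natural" order

assert counting == {0: "apple", 1: "banana", 2: "cherry"}
```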
For Cantor, the test of the schoolyard intuition that once you reach \(\infty\), that is as big as a quantity can be, thus became: is the cardinality of every infinite set equal to \(\left| \mathbb{N} \right|\) (which Cantor referred to as \(\aleph_0\))? For sets of numbers, which are ordered, this is equivalent to asking whether every such set is countably infinite. His answer was that while some infinite sets are countable, others are not. For example, the sets of integers and of rational numbers are countably infinite, even though the former includes negative numbers and the latter consists of pairs of natural numbers, so each might have been assumed to have a larger cardinality than \(\mathbb{N}\). So far, so good for naive intuition regarding \(\infty\)!
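Both of those countability claims can be made concrete by exhibiting the enumerations themselves. This Python sketch pairs the integers, and the positive rationals via Cantor's zig-zag over the \(p/q\) grid, off against 0, 1, 2, ... (the function names and the duplicate-skipping detail are my own choices):

```python
from fractions import Fraction
from itertools import count, islice

def integers():
    """Enumerate Z bijectively against N: 0, 1, -1, 2, -2, ..."""
    yield 0
    for k in count(1):
        yield k
        yield -k

def positive_rationals():
    """Enumerate the positive rationals along the diagonals of the p/q grid
    (Cantor's zig-zag), skipping duplicates such as 2/4 = 1/2."""
    seen = set()
    for total in count(2):              # diagonals where p + q = total
        for p in range(1, total):
            q = total - p
            f = Fraction(p, q)
            if f not in seen:
                seen.add(f)
                yield f

assert list(islice(integers(), 5)) == [0, 1, -1, 2, -2]
assert list(islice(positive_rationals(), 5)) == [
    Fraction(1, 1), Fraction(1, 2), Fraction(2, 1), Fraction(1, 3), Fraction(3, 1)
]
```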
Cantor's approach hinged on identifying sets that were infinite and naturally ordered but not countable. To show that the set of real numbers, \(\mathbb{R}\), is such a set, start by assuming that you can define a countable set of real numbers between any two real numbers in the continuum (points on the number line). It does not matter what rule you use to construct this list, since the crux of Cantor's argument is to show that any such rule cannot represent all the points of the real number line between the chosen end points. As for any countable set, we can look at it as a mapping from natural numbers to elements of our set. Real numbers are represented as potentially infinite sequences of digits, as in:
\[ \begin{align*} <0, \ &0.427 \dots> \\ <1, \ &0.613 \dots> \\ <2, \ &0.590 \dots> \\ \vdots & \end{align*} \]
Cantor's argument for showing that any such list cannot include all the real numbers within the selected range proceeds by describing a rule for constructing a new sequence of digits representing a real number that cannot already be in our list. In particular, he presents a simple rule for adding each successive digit to the new real number based on the sequences of digits representing the real numbers that are already included. Specifically, the rule is that the \(n \text{th}\) digit in the new sequence must differ from the \(n \text{th}\) digit in the \(n \text{th}\) entry in the original list.
\[ \begin{align*} <0, \ &0.\mathbf{4}27 \dots> \\ <1, \ &0.6\mathbf{1}3 \dots> \\ <2, \ &0.59\mathbf{0} \dots> \\ \vdots & \\ \hline <r, \ &0.\mathbf{502 \dots}> \end{align*} \]
In the preceding example, the first digit in our new sequence, \(r\), is 5. The only significance to 5 here is that it is different from 4, the first digit in the first entry in the original list. The second digit is 0, which differs from the second digit, 1, in the second entry. The third digit is 2, which differs from the third digit, 0, in the third row. And so on. The new sequence of digits is guaranteed to differ in at least one decimal place from every sequence in the original list, so it cannot already be included. If we add \(r\) to our list, we can repeat the procedure to construct yet another new sequence, ad infinitum. This means that there are infinitely more real numbers between any two given ones than can be bijectively mapped to the natural numbers. \(\mathbb{R}\) is infinitely larger than \(\mathbb{N}\). So much for naive intuition regarding \(\infty\).
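The diagonal construction is mechanical enough to express as code. A minimal Python sketch, using the arbitrary rule "add 1 mod 10" rather than the particular digits chosen above:

```python
def diagonalize(expansions):
    """Build a digit sequence guaranteed to differ from the nth listed
    expansion in its nth digit. Any replacement digit other than the
    original would do; here we use (d + 1) mod 10."""
    return "".join(str((int(row[n]) + 1) % 10) for n, row in enumerate(expansions))

listed = ["427", "613", "590"]   # the truncated expansions from the example
r = diagonalize(listed)

assert r == "521"
assert all(r[n] != listed[n][n] for n in range(len(listed)))
```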
Note some interesting aspects to Cantor's argument. It assumes that the continuum, i.e. the "real number line," is infinitely subdivisible, i.e. there are an infinite number of real numbers between any two real numbers. It also assumes that any sequence of digit symbols corresponds to a real number and that any two real numbers with distinct magnitudes will be represented by distinct sequences of digits. We know that some real numbers, like \(\pi\), would require an infinitely long sequence of digits to represent precisely. So Cantor's argument turns on being able to reason sensibly about infinite sets of infinitely long sequences of digits, none of which could actually be produced in the real world in a finite amount of time. Some people say that "Cantor proved that distinct infinite quantities exist." What he actually claimed is that "if infinite quantities are assumed to exist at all, they must have certain characteristics including that there are more than one of them," which is not at all the same thing. An empiricist does not have to disagree with the validity of Cantor's argument to still object to the literal reality of "completed infinities." This distinction is important to keep in mind when talking about metamathematical theories. When constructing mathematical models of reality, there is always the danger of mistaking features of the model for features of reality. In fact, one could as easily interpret Cantor's argument as a reductio ad absurdum argument for why the "set of real numbers" cannot actually be said to exist.
Let me explain.
The Computability Problem
As stated above, the definition of a countably infinite set is that there is a bijective (one-to-one) function that maps natural numbers to elements of that set. The accepted understanding of Cantor's arguments is that the set of real numbers is not countable because any mapping from the natural numbers is at best injective and can never be surjective. That means that there are more real numbers, in some sense, than natural numbers. Infinitely more. In fact, a consequence of Cantor's argument is that the percentage of real numbers for which an injective mapping function is possible is so small compared to the real numbers denoted by the continuum (all the points on the number line) that the ones that do correspond to a countable set must be very special in some way. And so a new branch of mathematics was born, Computability Theory.
To fully understand the motivation for Computability Theory, you first must know something about Formal Linguistics. As already noted, Formal Linguistics is the study of the kinds of artificial languages used to express mathematical formulas and computer programs. Among the most famous results in Formal Linguistics are Gödel's "incompleteness theorems." One of the things Gödel proved along the way is that the number of well-formed formulas for any language of the kind used for mathematical purposes is countable. I.e. given a finite set of symbols like numeric digits, the decimal point, operators (e.g. \(+,\ -,\ \times,\ \div\)) and the rules for combining them into useful formulas of arithmetic, you will only ever be able to produce countably many well-formed formulas in that formal language.
This means that one can reframe the understanding that "the set of real numbers is not countable" as "most real numbers are not computable" — i.e. there are no mathematical formulas for computing their values. This is because there are only countably many formulas but the set of all real numbers is not countable so there are uncountably many real numbers for which there is no formula on the basis of which they could be computed. Computability Theory attempts to answer the question, "what makes a real number computable?"
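Gödel's countability observation can be made vivid by enumerating every finite string over a finite alphabet, shortest first; since the well-formed formulas of a formal language are a subset of such strings, there can only be countably many of them. A small Python sketch:

```python
from itertools import count, islice, product

def all_strings(alphabet):
    """Enumerate every finite string over a finite alphabet, shortest first.
    Any set of well-formed formulas is a subset of this enumeration,
    so it is countable too."""
    for length in count(1):
        for symbols in product(alphabet, repeat=length):
            yield "".join(symbols)

assert list(islice(all_strings("01"), 6)) == ["0", "1", "00", "01", "10", "11"]
```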
Computability Theory to the Rescue?
But how does one go about characterizing what makes a number computable? (Leaving aside, for now, the question of how "real" a number can be if it cannot be computed, i.e. identified or characterized in any way.) Starting in the late 19th Century, various philosophers of mathematics and philosophically-inclined mathematicians worked on this question. The question was considered settled when the work of two such people, Church and Turing, converged during the first half of the 20th Century (Church was Turing's doctoral advisor).

Church's approach was purely formal. He invented a mathematical language, the Lambda Calculus, for defining any possible calculation. Church's thesis was that the set of computable numbers is the set of results of evaluating all well-formed formulas of the Lambda Calculus. This way of characterizing computability is very much in keeping with what had by then become the mainstream "all that matters is validity" view of mathematics, which encourages such formalism.
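A flavor of the Lambda Calculus survives in any language with first-class functions. The following Python sketch shows Church numerals, the Lambda Calculus representation of the natural numbers (the decoding helper is my own addition for demonstration purposes):

```python
# Church numerals: the natural number n is represented by the function that
# applies its argument f exactly n times. Everything here is a pure function.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def church_to_int(n):
    """Decode a Church numeral by counting applications of "+ 1" to 0."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
assert church_to_int(two) == 2
assert church_to_int(add(two)(two)) == 4
```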

Turing's approach starts with thought experiments involving hypothetical "Turing Machines." (Turing called these "a-machines" in his dissertation and they are now recognized as being a particular type of "finite state automaton." It was Church who first referred to them as "Turing Machines.") In this way, Turing's work actually aligns well with the classical, "empiricist" view of mathematics, at least conceptually. A Turing Machine is a theoretical mechanical device built according to certain rules and which produces a particular sequence of symbols (e.g. digits in the representation of a real number) as the result of its operation. If Turing Machines could be constructed (which they could not be in reality because each would require an infinitely long paper tape on which to print its output), Turing's thesis is that the outputs of all possible such machines would be the set of computable numbers.
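A Turing Machine is simple enough to simulate in a few lines. This Python sketch runs a machine as a table of state transitions over a tape (the rule format and the little bit-flipping example machine are my own inventions, not Turing's notation):

```python
def run_turing_machine(rules, initial_state, accept_states, tape, steps=1000):
    """Minimal Turing Machine simulator. `rules` maps (state, symbol) to
    (new_state, symbol_to_write, move) where move is -1 or +1.
    The tape is stored as a dict from position to symbol; blanks are "_"."""
    tape = dict(enumerate(tape))
    state, pos = initial_state, 0
    for _ in range(steps):
        if state in accept_states:
            break
        symbol = tape.get(pos, "_")
        state, tape[pos], move = rules[(state, symbol)]
        pos += move
    return "".join(tape[i] for i in sorted(tape))

# A two-state machine that flips every bit, then halts at the first blank.
rules = {
    ("flip", "0"): ("flip", "1", +1),
    ("flip", "1"): ("flip", "0", +1),
    ("flip", "_"): ("done", "_", +1),
}
assert run_turing_machine(rules, "flip", {"done"}, "0110") == "1001_"
```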
Computability Theory was considered settled and the world moved on to other interests when it was shown that Church's and Turing's theses are mathematically equivalent: given the rules governing the behavior of any possible Turing Machine, one can define a formula of the Lambda Calculus that will carry out the equivalent calculation and produce the same result. Any formula of the Lambda Calculus can be used as the "blueprint" for "building" a Turing Machine. Since two intuitively compelling definitions of "computability" were shown to produce the same results (by way of the now-familiar reasoning based on bijective mappings between infinite sets), the consensus among those interested in this question was that the "problem" was solved, and there was, once again, much rejoicing.
As an unintended side-effect, Turing and Church had between them provided the mathematical model and mental framework for programmable digital computers and for the programming languages used to control them. But before we delve further into the ongoing legacy of their work, note something interesting: neither Church's nor Turing's work actually directly addresses what makes computable numbers different from all those other supposedly-real-but-not-computable ones. Instead, they provide a means for characterizing an exhaustive set of computable numbers while saying nothing at all about non-computable ones, other than by implicitly excluding them. In fact, rather than explaining Cantor's results, Computability Theory when considered on its own seems to imply that the set of real numbers actually is countably infinite, using some ordering of the formulas of the Lambda Calculus to drive the bijective function. That such an ordering is possible is another consequence of Gödel's work. Generations of mathematicians and philosophers have accepted that Cantor's argument is valid, but the very success of Computability Theory begs the question: how "real" is a number that cannot be computed?
Taken strictly in its own terms, Cantor's argument is airtight. But from the historically more traditional, empiricist view of mathematics it is deeply circular. Cantor begins by stipulating that concepts like the "set of natural numbers" and "set of real numbers" make sense and that such sets have cardinalities. Only on that basis can you even pose the question as to whether and how their cardinalities differ. Even accepting those premises, you can never literally carry out his procedure for showing that \(\left| \mathbb{R} \right| > \left| \mathbb{N} \right|\) since you would first have to produce a "completed infinity" of representations of real numbers, each of which was a "completed infinity" of digits, and finally produce yet another "completed infinity" of digits using the "slash" procedure. So, while Cantor's results and all the results that followed from them are logically valid, the only metamathematical realities to which they lead are represented by the countably many real numbers described by Computability Theory. Consider the opposite interpretation, for a moment. Viewing Cantor's argument as showing not that the cardinality of the continuum is greater than that of the set of natural numbers but, rather, that the continuum does not have a cardinality in any meaningful sense helps address quite a number of problems, including the so-called "tail problem" of Quantum Physics. More fundamentally, the continuum cannot be real in the same sense as physical reality since there is no such thing as an infinitely subdivisible physical substance. Subdivide any real substance into smaller segments enough and you reach a point at which you must start pulling molecules apart, then atoms, subatomic particles, quarks and eventually, if some popular theories hold, you reach the level of the "strings" of "string theory" which could then not be subdivided any further.
Long before then, of course, the structural integrity of whatever object you began with would have broken down so far that the pieces into which it had been split would no longer be meaningfully understood to be smaller segments of a formerly single object made of a single substance. E.g. cut an actual macroscopic string in half, then each of the halves in half and so on. Long before you reach the level of molecules, the fibers out of which the string was made would stop being at all string-like and so you would no longer be cutting strings in half as you continued to "subdivide a piece of string." At the submicroscopic scale, the whole point of Quantum Physics is that physical matter is an agglomeration of discrete constituents, each of which can be observed to be in a specific, computable state when observed under the correct conditions. The "real number line" with its infinitely large number of points between any two points can only be understood as a mathematical object with no direct metamathematical correspondence to anything in the real world. Unless almost everything contemporary Physicists believe is wrong, the number of possible states of all the subatomic particles in the real world is not even infinite, much less uncountably so, even as they use mathematics involving the continuum to reason about it. As already pointed out, the risk is in mistaking features of a mathematical model for features of the thing being modeled by some metamathematical theory. That there are these sorts of tensions between "pure" and "applied" mathematics became inevitable once mathematicians embraced validity over truth. What is remarkable, despite that tension, is how often what begins as an exploration of a pure mathematical abstraction, e.g. Computability Theory, yields results with powerful and even world-changing applications, e.g. Computer Science. Similar examples can be found in other scientific domains.
Non-Euclidean Geometries have found practical (for some definition of "practical") use in various branches of Physics rather than remaining solely the province of abstract considerations of Euclid's axioms.
Programmers use the term "lambda" to refer to an anonymous function because of Church's Lambda Calculus and, more importantly, the Lambda Calculus is the original model for all so-called "Turing complete" programming languages. Turing Machines were the original model for the Von Neumann Architecture on which all current electronic programmable computing devices are based. Conversely, Cantor's non-computable real numbers can only ever remain the stuff of abstract considerations of the theoretical properties of an infinitely subdivisible number line. But the same embrace of infinite quantities Cantor pioneered has been useful in both abstract and applied math in quite concrete ways. Division by zero, which once was regarded as an anomalous discontinuity, now is understood to have a definite result, \(\pm\infty\):
\[ \lim_{x \to \infty}{1 \over x} = 0 \iff \lim_{x \to 0^{+}}{1 \over x} = \infty \]
The preceding is baked into the IEEE 754 standard for floating-point arithmetic, along with the special NaN value for remaining cases of undefined operations such as \(0 \over 0\) or attempting to calculate the square root of a negative number.
\[ \frac x {\pm 0} = \begin{cases} \mathtt{NaN} & \text{where } x = \pm 0 \\ \pm \mathtt{INFINITY} & \text{otherwise} \end{cases} \]
where NaN is the IEEE 754 constant meaning "Not a Number" and INFINITY is the IEEE 754 constant meaning \(\infty\). Also, 0 and INFINITY are signed in the IEEE 754 specification and follow the usual rules for sign agreement in division.
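Since Python's float type is an IEEE 754 double, most of this is directly observable, with one caveat noted in the comments:

```python
import math

# Caveat: dividing by a literal float zero raises ZeroDivisionError in
# Python, a deliberate language choice, rather than returning INFINITY.
# The IEEE 754 behavior still shows through other operations:
assert 1.0 / math.inf == 0.0                 # lim 1/x as x grows
assert math.isnan(math.inf - math.inf)       # undefined, like 0/0
assert float("nan") != float("nan")          # NaN is unequal even to itself

# Zero and infinity are signed, with the usual sign agreement in division:
assert math.copysign(1.0, 1.0 / -math.inf) == -1.0   # 1/-inf is -0.0
```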
Quantum Physicists Just Need to Get Over It
Almost daily, in certain corners of the media, yet another blog post, TV documentary, book or pop-sci magazine article appears, full of breathlessly overblown descriptions of how weird and brain-breaking Quantum Physics is, including quotes from Professional Physicists explaining how Everything We Believe About the Nature of Reality Is Wrong!
Let's all just calm down.
A century ago, when the field of Quantum Physics was new and burgeoning, physicists might have been excused for reacting to its discoveries with shock and even disbelief. It took no lesser mind than that of Albert Einstein, who had pioneered the idea of treating photons as quanta of light, quite a while to fully internalize some of Quantum Physics' impossible-to-predict-in-advance results, particularly indeterminacy and entanglement. But consider that Einstein, along with originators of Quantum Mechanics like Bohr, was born firmly within the Victorian Age. Universal electrification of cities was still a work in progress when they were students. Steam power was still in use in nontrivial ways when they were working at the height of their intellectual powers. But what is so shocking, really? It turns out that billiard balls are made out of stuff at the subatomic level that does not behave exactly like a bunch of little tiny billiard balls. Stop the presses! Reality doesn't exist!
Would it not have been more surprising and metaphysically disturbing if the "particles" of particle physics actually did behave substantially like, well, particles? That would suggest an infinite regress of sub-sub-subatomic particles, each being very particle-like, to the power of \(\infty\), and that really would be hard to reconcile with common sense. As it is, chunks of macroscopic matter have different properties than molecules, molecules have different properties than atoms, atoms have different properties than protons, neutrons and electrons which, in turn, have different properties than the rest of the denizens of the "particle zoo." As physics advances, what counts as a "fundamental" particle keeps changing: some particles formerly regarded as fundamental turn out to be composed of even smaller particles like quarks (and now, it seems, leptoquarks) with even stranger properties. But that does not mean that it is accurate to regard a quark as being a little ball and a hadron as being a bigger ball made of mashed-together quarks, or an atom's nucleus being a ball of protons and neutrons, each of which is a littler ball, being orbited by a bunch of tiny balls, each of which is an electron, despite all those diagrams you saw in school.
Consider that mechanical clocks are not just collections of gears of particular sizes, springs of particular lengths and flexibility and so on. Clocks are collections of such bits and pieces arranged in very specific ways and maintained in a very particular state. Disassemble a clock and you have all the pieces of which a clock can be made, but none of those pieces on their own look or behave anything like a clock. You cannot tell time by looking at a pile of gears and springs, even if that pile is a disassembled clock. You especially cannot tell time just using a single gear on its own.
Electrons are among the constituent bits and pieces out of which ordinary matter is made. They are too tiny to see on their own and travel too fast for us to follow as they whiz past (when viewed as "particles") or wash about (when viewed as "waves"), but a bunch of very clever people have figured out ways to perform hard-to-explain experiments measuring their actions and interactions. Human brains working the way they do, even the most clever people need to start with some sort of mental framework when trying to puzzle out completely new things. So, of course, physicists who were steeped in classical mechanics — because that is the only kind there was when science first became ready to tackle stuff at the quantum level — started to categorize and interpret the results they got when they started measuring the effects and properties of things like electrons. Reasoning by analogy to classical mechanics, electrons when measured in some experiments "behave like waves" and in other experiments "behave like particles." This is only hard to reconcile if you forget that you were reasoning by analogy to begin with. It is no surprise that physicists may detect particle-like and wave-like properties in things that are neither particles nor waves using tools and techniques that can only ever measure indirectly some specific aspect of a thing they can never, even in principle, see exactly for what it is in the way we can literally see dust particles or ocean waves.
A billiard ball resting on a level surface will sit in perfect stillness and smug solidity when viewed at the scale at which human senses operate, right up to the moment that some external force is applied that sends it rolling off in some predictable direction, at some predictable speed, all according to Newton's comfortably explicable laws. Because that is how well polished spheres of lacquered wood behave. But dare zoom in far enough, pop-sci Physics warns, and the ball is nothing but a tenuous, seething cloud of mere probability that really should not be said to exist at all and is in danger at any moment of dissolving into a gamma ray burst or teleporting itself onto the surface of the moon. But is that not like saying that the old proverb about the blind men and the elephant means that elephants do not really exist but only wall-like things, and tree-like things, and rope-like things, and snake-like things?
A far more sensible way of regarding the supposed "weirdness" of Quantum Physics is that electrons behave like electrons, protons like protons etc. just as billiard balls behave like billiard balls. It is a metaphysical red herring known as reductionism to focus on the fact that the latter are made from the former. Since we cannot see at the subatomic level directly, we perform experiments and take measurements and construct mathematical models to describe the results. When we make the mistake of taking our reasoning by analogy with stuff we actually can see too seriously, our expectations are thwarted. That is not because the behavior of stuff at the subatomic level is inexplicably and even threateningly different from the behavior of stuff made from that subatomic stuff, any more than the "reality" of a clock is threatened by the fact that it is made from parts that are, themselves, not at all clock-like. Stop talking or thinking about subatomic particles as being particles and suddenly things seem a lot less weird.
A corollary is that apparent paradoxes like the "tail problem" are not particularly problematic after all. Supposedly, quantum indeterminacy means that there is always some chance, even if only an ultra microscopically small chance, of some physical object acting in some observably nonclassical way. Ask any casino operator: that is not how probability actually works. If it were, casinos would go out of business. As it is, they love nothing better than to give out giant jackpots because the marketing opportunities they represent are built into their extremely profitable business model. Casinos know very well, in advance, the limits to how much money they are going to give out over any given period of time because their games of chance are carefully and consciously designed to pay out at a certain rate even while not being rigged in the sense of outright cheating. The chance of a casino going broke because it had to pay too many jackpots all at once is transcendently higher than the chance of seeing some billiard ball start rolling around on its own due to quantum indeterminacy. Do not hold your breath waiting to see either of those events in the real world.
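The casino's position can be quantified. A Python sketch using Hoeffding's inequality, with a hypothetical 2% house edge on even-money unit bets (the numbers are illustrative, not from any actual casino), bounds the probability that the house is behind after \(n\) independent bets:

```python
import math

# Hoeffding's inequality: for n independent even-money unit bets each won by
# the player with probability p < 0.5, the chance the player is ahead after
# n bets (i.e. wins more than half of them) is at most exp(-2 n (0.5 - p)^2).
def house_behind_bound(n, p_player=0.49):
    return math.exp(-2 * n * (0.5 - p_player) ** 2)

assert house_behind_bound(100_000) < 1e-8    # vanishing after 100,000 bets
assert house_behind_bound(100) > 0.1         # but quite possible after 100
```

The same exponential suppression, with vastly smaller per-event probabilities and vastly larger event counts, is why macroscopic "tails" never show up in practice.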
More significantly, the "tail problem" looks at the indeterminacy of some state of an individual wavicle and imagines that it somehow equates to even a very small potential for indeterminacy at the macroscopic scale. But that relies on math that treats probabilities as points on the infinitely subdivisible number line. That, in turn, ignores the quantum nature of reality itself. The real problem here is Physicists mistaking features of their mathematical models for features of the stuff they are modeling. Yes, quantum indeterminacy is real. It can lead to measurable effects that can be usefully applied in science and engineering, like quantum tunneling. No, that does not mean that there is some metaphysical threat to the nature of reality nor that cause and effect are illusory or provisional.
And then there is superposition and collapse. The way in which the "Copenhagen" and "Many Worlds" interpretations are usually discussed is more than a little silly. The act of observation, it is said, causes the wave function to collapse such that observation determines reality, or wave functions never collapse such that the universe is constantly boiling off into unimaginably vast snarls of alternate histories, each with its own unique combination of quantum outcomes. As appealing as the one is to mystics and the other to sci-fi authors, neither is at all necessary nor plausible. Even taking Bell's Theorem into account and accepting the experimental results confirming that there are no "hidden variables," there is a difference between reality and knowledge about reality. Schrödinger's cat is doing just fine, thank you very much — unless, of course, she isn't. That is because cats are not subject to quantum indeterminacy even if composed of stuff that is. If you make her well-being contingent on the indeterminate state of a quantum experiment (in a thought experiment! No cats were harmed in the writing of this article!) the indeterminacy remains at the quantum level, not at the level of the cat. The fact that we will not know whether the cat is alive or dead until we open the box does not mean that opening the box caused the cat's wave function to collapse, because wave functions do not actually have anything to say about cats. Ultimately, the supposed ontological crisis implied by the observer effect is just another example of taking a mathematical model too seriously while applying it in a reductionist fashion.
Though not related to Quantum Physics, the same could be said about General Relativity and the "problem" of "time's arrow." General Relativity is true enough that it makes accurate predictions regarding astronomical observations like gravitational lensing, and its effects, such as time dilation, must be accounted for by aerospace engineers working on satellite-based systems and the like. One of its basic premises is that time is just another mathematically interchangeable dimension of spacetime, along with width, height and depth. But if that were all there were to say about time, then the inexorable and unidirectional passage of time, and the rest of our very different experience of it compared to our experience of the three spatial dimensions of spacetime, would seem to need explanation. Einstein famously quipped that the passage of time is a "persistent illusion." Given his penchant for metaphor and sarcasm ("spooky action at a distance," "God does not play dice with the universe," "the definition of insanity is performing the same experiment over and over while expecting different results" and the like), there is no reason to doubt Einstein's sanity in this regard. I imagine he was employing irony to point out that while his own mathematical model allows us to treat spacetime usefully and effectively as a four-dimensional plenum for the purposes of General Relativity, that cannot actually be all there is to a complete understanding of the nature of time. Attempts to supplement the reduction of space and time to spacetime with reference to something like entropy to account for "time's arrow" do not help, and actually just highlight the real problem. Mathematical models are by definition reductionist and so can be relied upon to produce the kind of "paradoxes" that arise when observations of the real world focus on phenomena outside the domain of discourse of a given model.
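To make the satellite example concrete: the standard back-of-the-envelope figures for a GPS satellite (my numbers, not from the text above, and only approximate) combine a gravitational term and a velocity term in the fractional clock-rate shift,

\[
\frac{\Delta f}{f} \;\approx\; \frac{GM}{c^{2}}\left(\frac{1}{R_{E}} - \frac{1}{r}\right) \;-\; \frac{v^{2}}{2c^{2}} \;\approx\; 5.3\times10^{-10} \;-\; 0.8\times10^{-10} \;\approx\; 4.5\times10^{-10}
\]

where \(R_{E}\) is Earth's radius (about 6,371 km), \(r\) the orbital radius (about 26,560 km) and \(v\) the orbital speed (about 3.9 km/s). Over a day (86,400 s) that works out to roughly 38 microseconds of drift, which engineers cancel by deliberately offsetting the satellites' clock frequencies before launch. In other words, time dilation is an engineering line item, not a metaphysical crisis.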
Einstein's model looks at time only as a dimension of a mathematical abstraction known as spacetime and does not even attempt to pull into its description other properties of time that are not relevant to General Relativity, in particular those properties that make it quite different from the spatial dimensions. This also explains the supposed "incompatibility" of General Relativity and Quantum Physics. To be "inconsistent," the two mathematical models would need to describe the same phenomena while somehow producing incompatible results. In fact, they each consist of entirely distinct mathematical models of unrelated physical phenomena that can only be connected through a fallacious kind of reductionism. Saying that they contradict one another is like saying that a German text on automobile repair contradicts a cookbook written in French because they do not say the same things. To continue that analogy, saying that General Relativity needs to be "reconciled" with Quantum Physics is like saying that texts about automobile repair and cooking written in different languages need to be "reconciled" since both mechanics and cooks operate under the constraints of Chemistry and Physics. They do, but in ways not relevant to sets of instructions on how to maintain a Porsche or prepare a soufflé.
In short, Physicists (really, pop-sci journalists) would do well to leave Metaphysics to the professionals in the Philosophy Department. Ideally, they will just get on with the business of calculating their wave functions and conducting their experiments, letting the rest of us in on the interesting and useful stuff they discover along the way without all the melodrama.