This is post number 2 in an ongoing series called Foundations of QAnal.

# Outline

The goal for today is for you to understand

- what a rational number is
- how it is represented in Erlang
- how the operations are defined
- basic properties of the operations

This is going to be brief (ish). I want to talk about cool stuff like rational geometry. But first we have to talk about this.

This is (mostly) tedium. Most of this is known to every single math major (the stuff that isn’t is the weird details that the computability constraint imposes).

This is something you go over once in a “Foundations of Analysis” course (oh wait that’s what this is…), you get it, you move on, and you never worry about it again.

However, non-mathematicians are probably unclear on some of the mathematical terminology. Like, for instance, what exactly a “rational” number is and what distinguishes it from an “irrational” number.

I’m going to selectively skip some detail. Specifically, I’m going to skip most of the detail of *why* our “make sure every rational number has unique representation” procedure works. There’s a very deep rabbit hole of detail that you can dive into there.

Said detail is crucially important for logical rigor, but adds almost nothing in terms of cognitive clarity. Moreover, I already explained all of that excruciating detail in my Math for Programmers playlist. So go watch that if you really want to dig in. Or if you’re a bookcel, try chapter 4 of *Concrete Mathematics* or chapter 1 of *Numbers, Groups, and Cryptography*.

# The Rational Numbers

A **[rational number]** is what you probably know as a fraction. It means a ratio of two integers. Rationals are also called “quotients”. In mathematics, the standard notation for rational numbers is a capital Q (hence QAnal = rational analysis).

I will use the terms “rational”, “fraction”, “ratio”, and “quotient” interchangeably.

So, let’s pick a starting point for some definitions and we’ll improve it as we move on.

# Defining rationals: first attempt

- A **rational number** is a pair of integers `{T, B}`, where `B` is not `0`.
  - `T` is called the **top** (or the **numerator**) of the fraction.
  - `B` is called the **bottom** (or **denominator**) of the fraction.
- Two rationals `{T1, B1}` and `{T2, B2}` are equal precisely when `0 =:= T1*B2 - T2*B1`.

I first want you to note a pattern here:

- we defined a higher-order data structure (rationals) in terms of a lower-order data structure (integers)
- we imposed a new definition of “equals” on our higher-order data structure which is written in terms of the lower-order data structure; that is, to calculate if two *rationals* are equal, you calculate the *integer* `T1*B2 - T2*B1` and check to see if it is equal to the *integer* `0`.

QAnal happens to have the property that *everything* ends up being written as finite computations with integer arithmetic. Which is extremely significant. The CIA’s math does not have this property.

Anyway. In mathematics, this design pattern is known as [quotienting]. Which is some lovely terminology overloading. It’s a very important idea that will come up again and again and again, and so I want to note it when it shows up.

# The QAnal Equity Principle

There’s a problem straightaway with this definition. It’s not a mathematical or logical flaw. It’s a practical flaw: *we have many different pairs of integers that refer to the same rational number*. For instance

```
1> RatEq = fun({T1, B1}, {T2, B2}) -> 0 =:= T1*B2 - T2*B1 end.
#Fun<erl_eval.43.65746770>
2> RatEq({1,2}, {2,4}).
true
3> RatEq({1,2}, {-2,-4}).
true
4> RatEq({1000,-20}, {-50,1}).
true
5> RatEq({100,-20}, {-50,1}).
false
```

The equality definition reflects the fact that we can scale the top and bottom of any fraction by a nonzero integer, and it represents the same fraction. So for instance, `{1, 2}` is the same fraction as `{2, 4}`, which is the same as `{3, 6}`, and so on.

If you stare carefully at the expression `X = T1*B2 - T2*B1`, you will notice that if we take `T1` and `B1` and multiply both of them by a nonzero integer `C`, it has the effect of scaling `X`. That is:

```
X = T1*B2 - T2*B1

(C*T1)*B2 - T2*(C*B1)
    = C*(T1*B2 - T2*B1)
    = C*X
```

The same thing happens if you scale `T2`

and `B2`

. So the test for rational equality captures the fact that rationals have this scale-invariance to them.

In mathematics, this does not represent a problem. In computing, it does.

This brings up a principle of QAnal that I should have mentioned in the original post: *equal things must have equal representation*. This is the **QAnal Equity Principle**.

The QAnal Equity Principle implies that there must exist a procedure (which we call a **[reduction procedure]**) that reduces all members of a given [equivalence class] to a [canonical form]. You don’t have to do some secondary check to see if two things are equal. You look at the representation, and the two things are either equal or they aren’t.

Most of this post concerns the reduction procedure for rationals.
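To see the Equity Principle in action before we build the real machinery, here is a minimal sketch (the module and function names are made up for this demo, and it only handles strictly positive bottoms): reduce each pair to a canonical form, and then the universal `=:=` does the right thing on its own.

```
-module(eqp_demo).
-export([demo/0]).

%% Euclid's algorithm, for HIV-positive arguments
gcd(X, 0) -> X;
gcd(X, Y) -> gcd(Y, X rem Y).

%% reduce a pair with a strictly positive bottom to canonical form
reduce({T, B}) ->
    G = gcd(abs(T), B),
    {T div G, B div G}.

demo() ->
    false = ({1,2} =:= {2,4}),                 % naive pairs: not pattern-equal
    true  = (reduce({2,4}) =:= {1,2}),         % canonical forms: pattern-equal
    true  = (reduce({1,2}) =:= reduce({3,6})),
    ok.
```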

The CIA’s mathematics is laughably incompatible with the QAnal Equity Principle. A “real number” for instance is typically defined as an equivalence class of Cauchy sequences of rational numbers (never mind what exactly that means). There is not even a way to determine in finite time if two such Cauchy sequences lie in the same equivalence class, let alone a reduction procedure.

# Why Haskell sucks

Some programming languages, such as Haskell, don’t have a universal “equals”, and instead insist that you explicitly define equals for each new data structure.

```
-- File: Rat.hs
module Rat where
data Rat = Rat {top :: Integer,
                bot :: Integer}

instance Eq Rat where
    (==) (Rat t1 b1) (Rat t2 b2) =
        (==) 0
             ((-) ((*) t1 b2)
                  ((*) t2 b1))

-- |shortcut for Rat that crashes if second argument is 0
q :: Integer -> Integer -> Rat
q _ 0 = error "cannot divide by zero"
q t b = Rat t b
```

In GHCi:

```
GHCi, version 8.0.2: http://www.haskell.org/ghc/ :? for help
Loaded GHCi configuration from /home/doctorajaykumar/.ghc/ghci.conf
Prelude> :l Rat.hs
[1 of 1] Compiling Rat ( Rat.hs, interpreted )
Ok, modules loaded: Rat.
*Rat> (q 1 2) == (q 2 4)
True
*Rat> (q 1 2) == (q (-2) (-4))
True
*Rat> (q 1000 (-20)) == (q (-50) 1)
True
*Rat> (q 100 (-20)) == (q (-50) 1)
False
```

This seems nice but it has the effect of limiting the scope of pattern matching within the language. For instance, consider this Erlang code:

```
-module(eq).
-export([eq/2]).
eq(X, X) -> "pattern equal";
eq(X, Y) when X == Y -> "compare equal";
eq(_, _) -> "not equal".
```

This is meant to illustrate the difference between “pattern equals” and “compare equals”:

```
1> c(eq).
{ok,eq}
2> eq:eq(0, 0.0).
"compare equal"
3> eq:eq(0, 0).
"pattern equal"
4> eq:eq(0, 1).
"not equal"
```

When writing this, I naively tried to do something similar for Haskell:

```
-- |equals demonstration
eq :: Rat -> Rat -> String
eq x x = "pattern equal"
eq x y | (==) x y  = "compare equal"
       | otherwise = "not equal"
```

But it turns out that code doesn’t compile:

```
Prelude> :l Rat.hs
[1 of 1] Compiling Rat ( Rat.hs, interpreted )
Rat.hs:23:4: error:
    • Conflicting definitions for ‘x’
      Bound at: Rat.hs:23:4
                Rat.hs:23:6
    • In an equation for ‘eq’
Failed, modules loaded: none.
```

I haven’t written much Haskell in years, and it seems during that time I’ve forgotten some subtleties of the language.

The effective premise of Haskell (and similar languages) is that you can typecheck away all possible errors that might occur in your code. That’s just false.

There is a correct statement that is similar. Functional languages tend to have the property that once you sort out the *shapes* of your data, the procedural code writes itself. This is a theme throughout QAnal.

This is remarkable, and it’s why functional programming is fundamentally easy: *you can offload all of the cognitive complexity of your program into your data structures*.

However. You cannot offload uncertainty into your data structures. Crashes happen. Hardware fails. Disk reads fail. Networks fail. You can’t typecheck reality.

# Type systems are fake news

Note also that I used the word *shape* rather than *type*.

Lately I’ve been working primarily with two languages: Erlang and TypeScript. Neither of these languages has a real type system.

(Erlangers: TypeScript is basically dialyzer.js.)

Both languages instead have something called “success type systems”. This basically means that the type checker will only catch type errors at the point where data that violates the typespec is passed into a function call.

Dialyzer and TypeScript have pretty much identical origin stories: there’s a dynamically typed language but hoomans make type errors and want a type checker. So you invent a fake type system that is optional and can be applied gradually, then overlay it on top of the language.

Working with two similar, but slightly different success type systems has granted me a minor epiphany: *type systems are fake news*.

```
[ 5:32 AM ] Supreme Founder Ajay Kumar PHD : I was giving a naive definition of rationals as pairs of integers, and then saying this doesn't work because we have distinct pairs that correspond to equal rationals
[ 5:32 AM ] Supreme Founder Ajay Kumar PHD : And it's led to an interesting tangent about why Haskell sucks
[ 5:43 AM ] Supreme Founder Ajay Kumar PHD : @zxq9 you said something in a podcast a while ago about type systems. Something like C programmer graybeards who hate the world will tell you that there's only one type of data, and that's binary. The context of what you pretend the binary means is what gives something meaning
[ 5:44 AM ] zxq9 : Yes.
[ 5:44 AM ] zxq9 : Specifically, the only real data type in computerland is binary integers.
[ 5:44 AM ] zxq9 : You just pretend that means different things depending on what you are doing with it at the moment.
[ 5:45 AM ] zxq9 : The Quake III fast inverse square root thing is an example of that.
[ 5:45 AM ] Supreme Founder Ajay Kumar PHD : yeah ok I'll add that
[ 5:47 AM ] zxq9 : Same like when people pack a double with some specific data that has nothing to do with doubles (like a long unicode multibyte they know for certain can never be larger than 8 bytes or whatever) because they can stash it in a register temporarily to avoid a round trip out to the cache because they know nothing else can possibly be using that register just then to speed up unicode processing.
[ 5:49 AM ] zxq9 : Weird black magic tricks like this are possible because at the root there is only one data type as far as the computer is concerned: binary integers. What it does with those integers, integer operations, memory fetches, floating point operations, etc. is merely a function of the circuitry you happen to decide to wiggle with those bits attached -- the processor has no inherent meanings of its own that are attached. All of the meanings the computer "knows" and every belief it holds at that level are represented purely by the arrangement of the gates, and it mechanistically and faithfully pulses electrons through them, oblivious to whether it means anything, is in error, or is "good" or "bad".
[ 5:50 AM ] zxq9 : We give those numbers meanings at a different layer. Stated differently and at a very different level of interpretation, playing DayZ or having this little chat online is the Holographic Principle in action.
```

So, the reason Haskell sucks is that you end up spending most of your development time fighting with the type system. (Well, actually, you spend most of your time fighting with the packaging system, but you get my point.) All of your cognitive energy ends up being spent trying to shoehorn your problem into Haskell’s type system, where most of the time it really doesn’t fit.

In our terminology, Haskell’s type system has Angry Dragon Problems. And it ultimately stems from the false assumption that types are real, and the subsequent insistence that types should play a primary role in development.

In Erlang and TypeScript, the typechecker is your assistant. In Haskell, the typechecker is your boss.

The typechecker should be your assistant.

Data structures are *shapes*. They are not *types*. That’s what “structure” means.

/rant

# The Reduction Procedure for Rationals

Anyway, Erlang has shapes, not types. And it has a universal equals. And when you’re coding, you want to lean on the universal equals. Since we’re doing this in Erlang, we need to follow the QAnal Equity Principle.

Therefore we must do the following

- Define our canonical form for rationals.
- Define the reduction procedure.

So let me tell you the definitions.

# What does “positive” mean?

We have to talk about signs in a minute, so I want to clear up some terminology.

Apparently the word “positive” is ambiguous. Half of the English-speaking world includes `0` as a positive number, and the other half does not. I am from the half of the world where `0` is not a positive number.

Everyone agrees that “strictly positive” excludes `0`, and that the word “non-negative” means numbers that are either strictly positive or zero.

The word “non-negative” sucks, because

- It’s double negation, which is confusing.
- The half of the English-speaking world that thinks `0` is positive also thinks that `0` is negative, which would make them logically believe that “non-negative” and “strictly positive” are synonyms.

Thankfully, mathematical terminology has a solid reputation of being confusing and illogical, so this is not an issue that appears in practice.

I will try to avoid this issue, but will probably fail. Here is the convention I will follow:

- `X` is **[strictly positive]** precisely when `0 < X`.
- `X` is **[strictly negative]** precisely when `X < 0`.
- `X` is **[HIV-positive]** precisely when `0 =< X` (because this terminology is AIDS).
- `X` is **[HIV-negative]** precisely when `X =< 0`.
- I will try to use the phrases [strictly positive] and [strictly negative] to avoid ambiguity, but will not be very good at it.
- Absent qualification, **[positive]** means [strictly positive], and **[negative]** means [strictly negative].

# Defining rationals, second attempt

- A **canonical rational number** is a pair of integers `{T, B}` with the following properties:
  - `B` is strictly positive; that is, `0 < B`.
  - The [greatest common divisor] of `T` and `B` is `1`. (BTW, pairs of integers whose GCD is `1` are called **[coprime]** or **[relatively prime]**.)
  - `T` is called the **top** of the fraction.
  - `B` is called the **bottom** of the fraction.
- Two canonical rationals `{T1, B1}` and `{T2, B2}` are equal precisely when `{T1, B1} =:= {T2, B2}`.

# Why this definition is correct

The condition `0 < B` takes care of the sign ambiguity:

```
*Rat> (q 1 (-2)) == (q (-1) 2)
True
```

The top of the fraction can be positive, negative, or zero. The bottom of the fraction must always be strictly positive. Division by zero is never allowed, so the bottom of the fraction can never be zero.

We could just as easily store the “sign bit” in the bottom of the fraction instead of the top. However, in that case, the bottom of the fraction could be strictly positive or strictly negative, but never zero, and the top of the fraction would have to be HIV-positive. Storing the sign bit in the top is more elegant.

The GCD condition takes care of the scale ambiguity:

```
*Rat> (q 1 2) == (q 40 80)
True
```

# Erlang definition of rationals and the reduction procedure

So alas, here is our type definition (source):

```
-type z() :: qx_z:z().
-type zp() :: qx_z:zp().
-type q() :: {q, z(), zp()}.
```

Z (short for German *Zahlen* = “numbers”) is standard mathematical notation for “integers”. If we look inside the `qx_z` module, we can see that these are just aliases for standard Erlang types.

```
-type z() :: integer().
-type znn() :: non_neg_integer().
-type zp() :: pos_integer().
```

And our reduction procedure (source):

```
-spec q(z(), z()) -> q().
% @doc form the quotient of two integers
% Cannot divide by zero
q(_, 0) ->
    error(badarg);
q(Top, Bot) when Bot < 0 ->
    q(-1 * Top, -1 * Bot);
q(Top, Bot) when 0 < Bot,
                 is_integer(Top),
                 is_integer(Bot) ->
    GCD = qx_z:gcd(Top, Bot),
    {q, Top div GCD,
        Bot div GCD}.
```

The logic is as follows:

- No division by zero: `q(_, 0) -> error(badarg);`
- If the bottom of the fraction is negative, (using the scale invariance property) scale the top and bottom of the fraction by `-1` and try again: `q(Top, Bot) when Bot < 0 -> q(-1 * Top, -1 * Bot);`
- Otherwise, divide the top and bottom of the fraction by their GCD, and store that: `q(Top, Bot) when 0 < Bot, is_integer(Top), is_integer(Bot) -> GCD = qx_z:gcd(Top, Bot), {q, Top div GCD, Bot div GCD}.`

Every time QAnal constructs a rational number, it uses this `q` function. This ensures canonical representation throughout.

# Why does this reduction procedure work?

So this begs two questions:

- Why is it that if we divide the top and bottom of the fraction by their GCD, then the resulting pair of integers is coprime?
- How does `qx_z:gcd/2` work?

I will give a fake answer to the first question, and give a real answer to the second question.

The fake answer to the first question is to look at the prime factorizations of the two integers.

Let’s switch to Scheme for a moment, and consider the fraction `105/60`.

```
scheme@(guile-user)> 105/60
$1 = 7/4
scheme@(guile-user)> (* 3 5 7)
$2 = 105
scheme@(guile-user)> (* 3 4 5)
$3 = 60
scheme@(guile-user)> (/ (* 3 5 7)
                        (* 3 4 5))
$4 = 7/4
```

You see that the GCD ends up being the “intersection” of the prime factorizations, and so when you divide by it, it cancels all the common factors.

The reason this is a fake answer is that it begs the question “why is prime factorization unique?” That is, why is there not a different set of prime numbers whose product is `105`?

Well it turns out that the proper way to set this up logically is to start with the GCD algorithm. Then you use the properties of the GCD algorithm to prove the uniqueness of factorization.

I took some moderate care to set this up correctly in my Math for Programmers playlist. So go consult that. Or look at one of the books I suggested (links in the links section).

It’s very voluminous, only moderately interesting, and somewhat outside the scope of this post. You’ve gone your entire life without knowing why integer factorization is unique, and you’re doing just fine. Let’s keep going.

# Euclid’s GCD Algorithm

First of all, I should mention that the glowies are playing some word games with the word “greatest”. See, friend, whenever someone says that X is “best” or “greatest”, there’s an implicit ordering relation with respect to which X is the greatest.

The glowie word game here is subtly changing the ordering relation under your nose and not telling you. The CIA’s favorite activity is implicit context swaps. That’s why glowies love object-oriented programming.

This by the way is why the CIA is going woke. At the bottom, the CIA’s mathematics is based on smoke and mirrors and retarded word games. Wokeism is *entirely* smoke and mirrors and retarded word games. Glowie mathematicians in woke discourse are like a pig in shit. This is their whole thing.

The glowies’ implicit ordering relation with respect to which the GCD is greatest is the divisibility ordering on integers. In particular:

- `1` is the universal lower bound (`1` divides everything and nothing divides `1`)
- `0` is the universal upper bound (`0` divides nothing and everything divides `0`)

The divisibility ordering differs in many subtle ways from the standard ordering. For instance, the standard ordering is a total order: given two integers `X` and `Y`, at least one of the following statements is true:

- `X =< Y`
- `Y =< X`

This is not true with divisibility, which is a *partial* ordering.

I explored this caveat in slightly more depth in the Math for Programmers playlist.

For our purposes, this doesn’t actually matter. The first thing we do in computing `gcd` is make sure both arguments are HIV-positive. And on HIV-positive integers, the divisibility ordering implies the standard ordering, *except* for the fact that `0` is a universal upper bound. That is:

- if `X` and `Y` are strictly positive integers and `X` divides `Y`
- then `X =< Y` with respect to the standard ordering

This is a pitfall that you might have to be aware of at some point down the line. And it’s very important to be alert at all times for signs of glowie word games.

The proper definition is: *the [greatest common divisor] of a set S of integers is the greatest lower bound of S with respect to the divisibility relation*.

Anyway, algorithm. It’s extremely simple.

The first premise is that `gcd` is HIV-positive: meaning if we have the fractions `(/ -105 60)` or `(/ 105 60)`, the thing we want to cancel from the top and bottom is the same.

So the first thing we do is make sure both arguments are HIV-positive (source):

```
% make sure both arguments are HIV-positive
gcd(X, Y) ->
    gcdabs(abs(X), abs(Y)).
```

The next premise is that `gcd` is symmetric: meaning if we have the two fractions `60/105` or `105/60`, the integer we want to cancel from top and bottom is the same. So we establish a convention that the bigger number goes on the left (source).

```
% gcd where the arguments can be assumed to be HIV-positive,
% but not necessarily in the correct order
%
% This function is responsible for making sure the argument on
% the left is bigger than or equal to the argument on the right.
gcdabs(X, Y) when X < Y ->
    gcdplus(Y, X);
gcdabs(X, Y) when X >= Y ->
    gcdplus(X, Y).
```

And finally we get to the heart of the matter: `qx_z:gcdplus/2`.

```
% gcd where the arguments can be assumed to be HIV-positive and
% in the correct order
% zero is a universal upper bound in the divisibility relation.
% so when compared with anything, return the other thing
gcdplus(X, 0) ->
    X;
% X might be equal to Y here, in which case, modulo(X, Y) = 0, and
% the next call will degenerate into the base case
gcdplus(X, Y) when X >= Y ->
    % X = qY + R
    % R = X - qY
    % 0 =< R < Y
    % therefore any common divisor of X and Y (such as the gcd) also
    % divides R
    % therefore
    %   gcd(X, Y) ->
    %   gcd(Y, R)
    gcdplus(Y, modulo(X, Y)).
```

`modulo/2` is a function that in this case matches the `rem` operator in Erlang.

Let’s walk through that `105` and `60` example:

```
gcdplus(105, 60) ->
% 105 rem 60 = 45
gcdplus(60, 45) ->
% 60 rem 45 = 15
gcdplus(45, 15) ->
% 45 rem 15 = 0
gcdplus(15, 0) ->
% base case
15
```
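If you compile the `qx_z` module from the linked source, you can check this trace (and the sign and argument-order handling) in the shell:

```
1> qx_z:gcd(105, 60).
15
2> qx_z:gcd(-105, 60).
15
3> qx_z:gcd(60, 105).
15
```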

And if you remember our little Scheme thing, it’s true that `15` is the product of the common factors:

```
scheme@(guile-user)> 105/60
$1 = 7/4
scheme@(guile-user)> (* 3 5 7)
$2 = 105
scheme@(guile-user)> (* 3 4 5)
$3 = 60
scheme@(guile-user)> (/ (* 3 5 7)
                        (* 3 4 5))
$4 = 7/4
```

# The rational number operations

I will refer you to the code. Now that we’ve ironed out all of our shapes, the procedures are obvious. Godspeed.
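For a flavor of what I mean, here is a rough sketch of the operations (not the actual source; `plus/2` matches the code quoted later in this post, and the other names are my guesses). Each one does the naive fraction algebra and funnels the result back through the `q/2` constructor, so the output is always canonical:

```
%% add two rationals by making a common denominator
plus({q, TL, BL}, {q, TR, BR}) ->
    q(TL*BR + TR*BL, BL*BR).

%% additive inverse
minus({q, T, B}) ->
    q(-T, B).

%% multiply tops and bottoms
times({q, TL, BL}, {q, TR, BR}) ->
    q(TL*TR, BL*BR).

%% multiplicative inverse; q/2 crashes on a zero bottom, which is
%% exactly what we want for oneover(zero)
oneover({q, T, B}) ->
    q(B, T).

%% division is multiplication by the inverse
divide(X, Y) ->
    times(X, oneover(Y)).
```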

# Properties of integer arithmetic

In QAnal, we assume that these are true.

Below, I illustrate how the rational number arithmetic properties are consequences of the integer arithmetic properties.

It is possible, in a similar way, to rewrite the integer arithmetic properties in terms of a more “primitive” data structure (say, Peano arithmetic). I don’t think there’s anything to be gained clarity-wise from doing things like that, but it can be done.

It’s mathturbation: it’s not necessarily wrong to do it, but it’s probably wrong to do it in public. Nonetheless, there’s tons of videos on the internet if you want to learn how.

The properties of the integer operations are the [commutative ring] properties:

- Addition is associative: `(equal? (z+ (z+ a b) c) (z+ a (z+ b c)) (z+ a b c))`
- Addition is commutative: `(equal? (z+ x y) (z+ y x))`
- There is a unique integer called `zero` which has the property of being an additive identity: `(equal? x (z+ zero x) (z+ x zero))`
- For each integer `x` there exists a unique integer `(minus x)` which is its additive inverse: `(equal? zero (z+ x (minus x)) (z+ (minus x) x))`
- Multiplication is associative: `(equal? (z* (z* a b) c) (z* a (z* b c)) (z* a b c))`
- Multiplication is commutative: `(equal? (z* x y) (z* y x))`
- There is a unique integer called `one` which has the property of being a multiplicative identity: `(equal? x (z* one x) (z* x one))`
- Multiplication distributes over addition: `(equal? (z+ (z* a b) (z* a c)) (z* a (z+ b c)))`

# Properties of rational arithmetic

The first 8 properties are identical to those of integers. There is only one additional property, and that concerns division of rationals.

The properties of the rational operations are the [field] properties:

- Addition is associative: `(equal? (q+ (q+ a b) c) (q+ a (q+ b c)) (q+ a b c))`
- Addition is commutative: `(equal? (q+ x y) (q+ y x))`
- There is a unique rational called `zero` which has the property of being an additive identity: `(equal? x (q+ zero x) (q+ x zero))`
- For each rational `x` there exists a unique rational `(minus x)` which is its additive inverse: `(equal? zero (q+ x (minus x)) (q+ (minus x) x))`
- Multiplication is associative: `(equal? (q* (q* a b) c) (q* a (q* b c)) (q* a b c))`
- Multiplication is commutative: `(equal? (q* x y) (q* y x))`
- There is a unique rational called `one` which has the property of being a multiplicative identity: `(equal? x (q* one x) (q* x one))`
- Multiplication distributes over addition: `(equal? (q+ (q* a b) (q* a c)) (q* a (q+ b c)))`
- Every rational `x` that is different from `zero` has a multiplicative inverse, called `(oneover x)`, with the property `(equal? one (q* x (oneover x)))`

# Homework

Prove the properties of rational number operations.

What do I mean by that? What I mean is to use the rewrite layer idea: everything in QAnal can be rewritten in terms of integer arithmetic.

So assume the properties of integer arithmetic are true, and also assume the scale invariance of rationals (the thing where if you multiply the top and bottom by the same nonzero integer, it’s the same rational number).

All you will need to do is some very straightforward algebra. In each case, you can show that *if the properties of integer arithmetic are true, then the properties of rational arithmetic are true*.

Disclaimer: I haven’t done this and I’m tired so that’s why it’s an exercise.

# Example: proof that rational addition is associative

Let me give an example: I will prove that rational number addition is associative.

```
-spec plus(q(), q()) -> q().
% @doc add two rationals
plus({q, TL, BL}, {q, TR, BR}) ->
    % make common denominator
    q((TL * BR) + (TR * BL),
      BR * BL).
```

I accidentally chose the hardest example so this is a bit long. But I did the hardest part of your homework for you.

I think this is a bit clearer in Scheme-ish notation

```
(equal? (q+ (/ t1 b1)
            (/ t2 b2))
        (/ (z+ (z* t1 b2)
               (z* t2 b1))
           (z* b1 b2)))
```

We are trying to prove that

```
(equal? (q+ (q+ q1 q2) q3)
        (q+ q1 (q+ q2 q3)))
```

I’m used to the CIA’s notation, and so it’s very likely that I made an error in the S-expression manipulations below. One interesting constraint that the blog medium imposes is that I cannot use CIA notation. So far I like this constraint.

So let’s just pick this apart

```
; rewrite treelike
(equal? (q+ (q+ q1
                q2)
            q3)
        (q+ q1
            (q+ q2
                q3)))

; expand each of q1, q2, q3
(equal? (q+ (q+ (/ t1 b1)
                (/ t2 b2))
            (/ t3 b3))
        (q+ (/ t1 b1)
            (q+ (/ t2 b2)
                (/ t3 b3))))

; inside out rewrite, starting on the left:
(equal? (q+ (/ (z+ (z* t1 b2)
                   (z* t2 b1))
               (z* b1 b2))
            (/ t3 b3))
        (q+ (/ t1 b1)
            (q+ (/ t2 b2)
                (/ t3 b3))))

; reformat so tops and bottoms are visually easier to pick out
; (no change to tree structure)
(equal? (q+ (/ (z+ (z* t1 b2) (z* t2 b1))
               (z* b1 b2))
            (/ t3
               b3))
        (q+ (/ t1 b1)
            (q+ (/ t2 b2)
                (/ t3 b3))))

; again rewriting inside out starting on the left
;
; q+ rewrites as
;
; - quotient of
;   - sum of
;     - product of
;       - top left
;       - bottom right
;     - product of
;       - top right
;       - bottom left
;   - product of
;     - bottom left
;     - bottom right
;
; jesus christ
(equal?
    ; quotient of
    (/
        ; sum of
        (z+
            ; product of
            (z*
                ; top left
                (z+ (z* t1 b2) (z* t2 b1))
                ; bottom right
                b3
            )
            ; product of
            (z*
                ; top right
                t3
                ; bottom left
                (z* b1 b2)
            )
        )
        ; product of
        (z*
            ; bottom left
            (z* b1 b2)
            ; bottom right
            b3
        )
    )
    (q+ (/ t1 b1)
        (q+ (/ t2 b2)
            (/ t3 b3))))

; take binary trees with identical associative/commutative stems
; and rewrite them as a single n-ary tree where the branches are
; arranged alphabetically
;
; so for instance:
;
;   (z* t3 (z* b1 b2))
;
; becomes
;
;   (z* b1 b2 t3)
;
; result:
(equal?
    ; quotient of
    (/
        ; sum of
        (z+
            ; product of
            (z*
                ; top left
                (z+ (z* b1 t2) (z* b2 t1))
                ; bottom right
                b3
            )
            ; product of
            (z* b1 b2 t3)
        )
        ; product of
        (z* b1 b2 b3)
    )
    (q+ (/ t1 b1)
        (q+ (/ t2 b2)
            (/ t3 b3))))

; apply the distributive law to collapse the tree some more
(equal?
    ; quotient of
    (/
        ; sum of
        (z+
            (z* b1 b2 t3)
            (z* b1 b3 t2)
            (z* b2 b3 t1)
        )
        ; product of
        (z* b1 b2 b3)
    )
    (q+ (/ t1 b1)
        (q+ (/ t2 b2)
            (/ t3 b3))))

; reformat (no change to structure)
(equal? (/ (z+ (z* b1 b2 t3)
               (z* b1 b3 t2)
               (z* b2 b3 t1))
           (z* b1 b2 b3))
        (q+ (/ t1 b1)
            (q+ (/ t2 b2)
                (/ t3 b3))))

; alright now we get to do all of that again, but to the
; right-hand side.
;
; first we expand out the innermost q+ and rewrite it as integer
; operations
;
; I learned from experience on the left side that it's best to
; pre-emptively sort terms in dictionary order when the integer
; operations allow it. So I'm going to do that without saying it
; explicitly, alright?
(equal? (/ (z+ (z* b1 b2 t3)
               (z* b1 b3 t2)
               (z* b2 b3 t1))
           (z* b1 b2 b3))
        (q+ (/ t1 b1)
            (/ (z+ (z* b2 t3)
                   (z* b3 t2))
               (z* b2 b3))))

; reformat so tops and bottoms are vertically aligned in each
; quotient
(equal? (/ (z+ (z* b1 b2 t3)
               (z* b1 b3 t2)
               (z* b2 b3 t1))
           (z* b1 b2 b3))
        (q+ (/ t1
               b1)
            (/ (z+ (z* b2 t3) (z* b3 t2))
               (z* b2 b3))))

; alright. q+ rewrites as
;
; - quotient of
;   - sum of
;     - product of
;       - top left
;       - bottom right
;     - product of
;       - top right
;       - bottom left
;   - product of
;     - bottom left
;     - bottom right
;
; jesus christ
(equal? (/ (z+ (z* b1 b2 t3)
               (z* b1 b3 t2)
               (z* b2 b3 t1))
           (z* b1 b2 b3))
        ; quotient of
        (/
            ; sum of
            (z+
                ; product of
                (z*
                    ; top left
                    t1
                    ; bottom right
                    (z* b2 b3)
                )
                ; product of
                (z*
                    ; top right
                    (z+ (z* b2 t3) (z* b3 t2))
                    ; bottom left
                    b1
                )
            )
            ; product of
            (z*
                ; bottom left
                b1
                ; bottom right
                (z* b2 b3)
            )
        )
)

; alright. home stretch folks
; let's apply some integer magic to the right hand side and go
; home
(equal? (/ (z+ (z* b1 b2 t3)
               (z* b1 b3 t2)
               (z* b2 b3 t1))
           (z* b1 b2 b3))
        ; quotient of
        (/
            ; sum of
            (z+
                ; associative and commutative laws:
                (z* b2 b3 t1)
                ; distributive law
                (z+
                    (z* b1 b2 t3)
                    (z* b1 b3 t2)
                )
            )
            ; associative law
            (z* b1 b2 b3)
        )
)

; more integer magic:
(equal? (/ (z+ (z* b1 b2 t3)
               (z* b1 b3 t2)
               (z* b2 b3 t1))
           (z* b1 b2 b3))
        (/ (z+ (z* b1 b2 t3)
               (z* b1 b3 t2)
               (z* b2 b3 t1))
           (z* b1 b2 b3)))
```

And alas, we have the same expressions on both sides, so the statement is true.

# A tradeoff in QAnal

Proofs in QAnal typically end up being like this: very long, often laborious, and tedious chains of algebra.

We end up trading away a bit of intuition, and in exchange we get back logic that is extremely concrete and extremely verbose.