This is part of an ongoing series called Foundations of QAnal

# Outline

Today we’re going to talk about very basic points and lines geometry. Specifically we are going to address the following questions:

- What is a point?
- What is a line?
- How do we calculate the line that intersects a pair of distinct points (the **[join]**)?
- How do we calculate the point of intersection of a pair of transverse lines (the **[meet]**)? (**[transverse]** = not parallel)
- How do we determine if two lines are parallel?
- How do we determine if two lines are perpendicular?

### Resources

You might want to read these things first, and/or keep the tabs open while reading this:

Here are some GeoGebra applets you can play along with that might help while reading:

- Chromogeometry perps and circles
- Relationship between a row vector `(A B)` and the equation `Ax + By + C = 0`

- Relationship between sine, cosine, and uniform circular motion
- Projective geometry of the GeoGebra ellipse
- Rational parametrization of the unit circle
- The tile thing

# Hat tip

Rational geometry, as I am presenting it to you, is mostly the work of Norman J. Wildberger. If I am The Founder of QAnal, then Norman Wildberger is the Grandfounder. I cannot recommend enough that you deep-dive into his work.

Wildberger wrote a book called *Divine Proportions*, which is the canonical reference for rational geometry.

# The tragedy of the CIA

What I am going to outline for you is a mathematical tragedy. You will see that once you have the right abstractions, geometry, trigonometry, and calculus are extremely simple and borderline obvious subjects.

You do not need sines, cosines, arctangents, arc-cosines, square roots, arc length, integration, measure theory, differential forms, CIA real numbers, or CIA limits to do any of this. The angry dragons we live with in the CIA’s geometry (particularly, their trigonometry, and to a lesser extent their calculus) are entirely due to poor abstractions and misplaced priorities.

I have a feeling that if 2200 years ago, someone had sat down Archimedes and laid all this out to him as I will to you, humanity would be a multi-galactic species by now.

My hope is that there’s a 13 year old Archimedes somewhere reading this, and that you do something phenomenal with this information that I can’t even begin to wrap my head around.

# Building context

The idea for the next little while is for you to build up context. We’re going to start with geometry. What’s important right now is that you

- see what the ideas and core abstractions of rational geometry are
- build a crude mental model of how they fit together
- relate the rational geometry ideas to CIA ideas that you are likely more familiar with
- grok the general flow of how arguments tend to be structured

Later on, we will see how to collapse these ideas to a much tighter, more general, more abstract, and far more powerful logical structure. This general structure has a tree of special cases, one of which is the planar geometry we’re going to see today.

When we do talk about the more general abstract theory, I want it to make sense, and I want you to be able to relate each idea there to something very concrete that you are already familiar with.

# Apples and Goats

By analogy, imagine you lived in a world where you independently understood the following two facts:

- if I have two goats, and add three more goats, then I have five goats;
- if I have two apples, and add three more apples, then I have five apples;

but didn’t understand the relationship between them. That is,

- you have a concept of goat-twoness, goat-threeness, and goat-fiveness, and you understand the relationships between those concepts within the context of goats;
- you have a concept of apple-twoness, apple-threeness, and apple-fiveness, and you understand the relationships between those concepts within the context of apples.

You haven’t realized yet that your model for dealing with goats is the same as your model for dealing with apples.

My task is not only to explain that, but also to convince you that there is a general theory called “arithmetic” that includes as special cases your goat logic and your apple logic.

And also arithmetic applies to money and astronomy. (WTF? Why would stars have anything to do with goats?)

There’s also concepts that are currently completely out-of-context, like zero, negative numbers, multiplication, subtraction, divisibility, remainders, proportions, prime numbers, and modular arithmetic.

Each of those concepts has a concrete meaning in the context of apples and goats. It’s important to be able to relate the arithmetical concepts to concrete interpretations with apples and goats. But really we want to get so comfortable with arithmetic that we can understand it on its own terms, and not be constrained by interpretations in terms of apples and goats.

# Projective, Affine, and Metrical

Logically, it would be more proper to separate the questions above as follows:

**Definitions**

- What is a point?
- What is a line?

**Projective operations**

- How do we calculate the line that intersects any pair of distinct points (the **[join]**)?
- How do we calculate the point of intersection of any pair of distinct lines (the **[meet]**)? (Please note that above we said that the lines were transverse, and here that constraint has been downgraded to distinct.)

**Affine operations**

- How do we determine if two lines are parallel?

**Metrical operations**

- How do we determine if two lines are perpendicular?

You probably don’t have enough context right now for these distinctions to make sense. These will become important later. I do want to explain them now, just in case you can grok 10% of it.

I want you to get a glimpse of where the branch points in generalization lie. And I want you to get a glimpse of which parts of the structure correspond to which branch points.

Don’t worry too much about this right now.

Also keep in mind that I write these posts mostly off-the-cuff. I don’t check my work, I don’t fact-check, and I don’t do the exercises before I assign them to you. So there will be mistakes from time to time. The responsibility is on you to check this work and make sure this is correct.

## Projective geometry

Projective geometry is the most basic points-and-lines geometry: meets and joins.

There is no notion of “parallel” and no notion of “perpendicular”.

This matches standard geometry except in one curious aspect: in projective geometry, the meet of parallel lines is allowed and well-defined.

This is visually counterintuitive, but mathematically quite natural. If anything, the anti-parallel constraint in our standard geometry presents itself as a mathematical wart. Projective geometry is very fundamental and very natural.

One interesting feature of planar projective geometry is that the theory is 100% symmetrical between points and lines. Any true statement about points and lines is equally true if one replaces `s/point/line/g` and `s/line/point/g`. This is not true in affine or metrical geometry (although we will see some remarkable symmetry, mostly as echoes of the projective symmetry).

Projective geometry will arise naturally in concrete problems in QAnal. In particular, projective geometry is the mathematics of forced perspective. So it has obvious applications in computer graphics.

In a post or two I will show you a concrete geometric interpretation of “the meet of parallel lines”. It won’t be an “a-ha!”/“how did I never think of this?!” moment. It will be a “whoa, that’s weird, I never would have thought about it that way, but that actually makes sense” moment.

## Affine geometry

Affine geometry is projective geometry, plus a notion of “parallel”, but no notion of “perpendicular”, “angle”, or “distance”.

Accordingly, meets of parallel lines are not allowed in affine geometry, and the “points at infinity” go by the wayside.

I said there’s no notion of “distance” in affine geometry. Sort of. In affine geometry, you can compare parallel distances, but you cannot compare transverse distances.

Weirdly, it is in *affine* geometry that we can define “areas”.

The way we do this is:

- you have a unit parallelogram
- I have a unit parallelogram
- we have a transformation `u2me` that transforms your unit parallelogram into mine, and an inverse transformation `me2u` that transforms my unit parallelogram into yours
- the **[determinant]** of `u2me` measures the ratio `(/ (area my-unit-parallelogram) (area ur-unit-parallelogram))`
- the **[determinant]** of `me2u` measures the ratio `(/ (area ur-unit-parallelogram) (area my-unit-parallelogram))`

There is no absolute notion of area, only area relative to a unit. And if you think about it, this is how areas work IRL. Areas are always given in terms of a unit: square feet, square meters, square miles, acres, etc.

If you were to specify a transformation that transforms a mile-square into a kilometer-square, the determinant of that transformation would be approximately `0.386102`.

The inverse transformation has determinant approximately one over that: `2.589`.
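If you want to check this yourself, here’s a quick Python sketch (Python’s `fractions.Fraction` gives exact rational arithmetic, much like our Scheme; the helper names are mine, not QAnal’s). One mile is exactly `1.609344` km, so `u2me` scales each axis by `1/1.609344`:

```python
from fractions import Fraction

# 1 mile is exactly 1.609344 km (international mile)
KM_PER_MILE = Fraction(1609344, 1000000)

def det2(m):
    """Determinant of a 2x2 matrix [[a, b], [c, d]]."""
    (a, b), (c, d) = m
    return a * d - b * c

# u2me: scale each axis by 1/KM_PER_MILE, turning a mile-square into a km-square
scale = 1 / KM_PER_MILE
u2me = [[scale, Fraction(0)],
        [Fraction(0), scale]]

ratio = det2(u2me)       # (area of km-square) / (area of mile-square), exactly
print(float(ratio))      # ≈ 0.386102
print(float(1 / ratio))  # ≈ 2.589988
```

Note that the ratio itself is an exact rational; we only downgrade to a float to print it.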

What’s surprising is a few things:

- This notion lives naturally in the affine world (with no right angles or notion of “distance”), not in the metrical world (which has right angles). We normally measure areas as “distance times distance”. But (at least as far as QAnal is concerned), area is a more fundamental concept than distance. In other words, the way you measure area doesn’t change if you change the way you measure distance. Or: the branch point for “change how we measure area” is prior to the branch point for “change how we measure distance”.
- Area is signed. So there’s negative area as well as positive area. This seems like a massive complication, but it ends up being a simplification, because suddenly areas can cancel. Much like how you would likely find yourself annoyed if you suddenly had to do arithmetic without negative numbers, you will find yourself annoyed that you ever had to do geometry without negative areas.
- We’ll see that we actually don’t have very much choice in the way we measure area. Pretty much the only degree of freedom is what you decide the area of your unit parallelogram is. Everything else is just measuring the ratio of some area relative to your unit parallelogram. What I mean by “don’t have very much choice” is:
  - our area measurement methodology will have certain properties (e.g. translation invariance, signedness, additivity, etc.)
  - the combination of those properties implies the formula

  I went through this argument in my Abstract Algebra playlist. It’s the last video in “Gaussian Elimination, Determinants, and Invertibility”.
- Our methodology for measuring areas generalizes to measuring areas of space contained by curves. Well, *some* curves. Specifically, [Bézier curves] (which we’ll talk about in a while). All the non-Bézier curves we use in practice (e.g. sine waves, logarithms, circles) can be approximated by Bézier curves to arbitrary precision. In most cases, the Bézier approach is how your computer *actually* does the computation when you type in `math:sin(X)` or `math:log(X)`. So this is more honest. This is the beginning of **[PreAnal]**, which stands for “precise analysis”. QAnal’s fundamental motivation is:
  - Mathematics that can be done exactly should be done exactly. (This is the subset of QAnal known as **[XAnal]**: exact analysis.)
  - Mathematics that cannot be done exactly should have abstractions that keep track of where precision is lost. (This is the subset of QAnal known as **[PreAnal]**: precise analysis.)

“Exact” and “precise” are synonyms in English. In QAnal, they are not antonyms, but they are not quite synonyms. It’s more accurate to say that precision is a gradient, and exactness lies at one extreme of the gradient.

This approach lies in contrast to the CIA’s approach, which is to use “real” numbers (which are fake) and floats (which are retarded) to pretend that precision and computability issues do not exist. In mathematics, “real” numbers (which are fake) allow glowie mathematicians to pretend that all mathematics is exact. In computing, floats (which are retarded) downgrade a great deal of exact mathematics into precise, and also don’t have any mechanism to manage the degree of precision. The glowies pat each other on the back and give each other tenure while the rest of us have to live in Plato’s cave with the angry dragons.

We will see that PreAnal is plainly and clearly superior to the CIA’s “integral calculus”, and that “differential calculus” is pretty much a non-issue for us. “Integration” is “hard”. It’s still significantly easier for us than for the CIA. But “differentiation” is literally obvious and just falls naturally out of the data structures. More on that later.

Likewise, it will turn out that there really is no choice in how we measure curved area. We specify properties of our area measurement, and those properties imply the formula.

Wildberger is once again responsible for developing most of PreAnal. See his Algebraic Calculus One course.


## Metrical geometry

This is a somewhat “advanced” linear algebra concept that we will get to in the posts. Not sure when. It’s on the other side of the horizon.

In affine geometry, it is possible to compare parallel vectors, but not transverse vectors. In metrical geometry, it becomes possible to compare transverse vectors.

It is in metrical geometry that we get notions of “distance” and “angle”. By the way, I’m putting “distance” and “angle” in quotes, because we don’t use the concepts of “distance” and “angle” as they appear in the CIA’s math.

That is, we don’t have [distance], but we do have “distance”, and by that I mean a [statistic] that measures separation between points. Likewise, we don’t have [angle], but we have “angle”, and by that I mean a statistic that measures separation between lines (well between parallel pencils, but alas).

By the way, **[statistic]** in QAnal means “computable function of the data”. This differs from the CIA’s definition of statistic, which is “[measurable] function of the data”. Measure theory glows in the dark. At some point (not in Foundations of QAnal, probably in my GlowieMath playlist), way down the line, I will deep-dive into measure theory. It’s very interesting but it glows like the sun.

A more correct name for what we are doing might be “statistical geometry”. The framework we are going to build up is *not* restricted to the rational numbers, but it *is* statistical, in the sense that almost all reasoning patterns boil down to “compute an integer and check if it’s zero or not”.

Anyway, the basic thing in metrical geometry is called a **[metric]**. This is (in the planar case) a 2×2 matrix `M`. The metric defines an “inner product”, which we compute as (pseudocode)

```
(define (inner-product metric vector1 vector2)
  (matrix* (matrix-transpose vector1) metric vector2))
```

(A **[vector]** is a difference between points. Think an arrow. The main difference is that points have an absolute position, and vectors don’t. The distinction is largely academic, because in practice, a point is defined as the vector emanating from the origin to the point. But it’s a useful distinction to keep in mind. I will use them interchangeably for a little bit. Moreover, vectors are typically further defined as column matrices.)

It’s important to know **the result of the inner product is a number**. It is a function that given two vectors, gives you a number.
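Here’s a minimal runnable Python sketch of that pseudocode, with the 2×2 matrix multiplication unrolled by hand (the tuple representation and names are mine, not QAnal’s):

```python
from fractions import Fraction

def inner_product(metric, v1, v2):
    """Compute v1^T * metric * v2 for 2-vectors, exactly."""
    (m00, m01), (m10, m11) = metric
    x1, y1 = v1
    x2, y2 = v2
    return (x1 * (m00 * x2 + m01 * y2) +
            y1 * (m10 * x2 + m11 * y2))

# with the identity metric this is the familiar dot product
identity = [[Fraction(1), Fraction(0)],
            [Fraction(0), Fraction(1)]]
print(inner_product(identity, (Fraction(1), Fraction(2)),
                              (Fraction(3), Fraction(4))))  # 11
```

Two vectors in, one number out: `1*3 + 2*4 = 11`.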

The metric for standard rational geometry is the identity matrix (which is called the **[blue metric]**):

```
[1 0]
[0 1]
```

At some point we’re going to talk about **[chromogeometry]**, which considers two other metrical geometries: the **[red geometry]** and the **[green geometry]**.

The **[red metric]** is

```
[ 1 0]
[ 0 -1]
```

The **[green metric]** is

```
[ 0 1]
[ 1 0]
```

Each geometry is perfectly interesting on its own, and there is a remarkable set of connections between the three geometries.

Two terms I want you to keep in mind:

- **[quadrance]** is separation between points (replaces “distance” from CIA geometry). In CIA terminology, quadrance is “distance squared”.
- **[spread]** is separation between lines (replaces “angle” from CIA geometry). In CIA terminology, the spread is “sine squared”.

The **[quadrance]** of a vector is the inner product of that vector with itself.

```
(define (quadrance metric vector)
  (inner-product metric vector vector))
```

The CIA takes this metric idea and then goes on to define distance as the square root of what we call quadrance. This of course requires CIA real numbers.

We can then define quadrance between points as the quadrance of the vector between them.

```
(define (quadrance metric point1 point2)
  (define v (vector-from-to point1 point2))
  (inner-product metric v v))
```

It’s important that you pick a symmetric matrix as your metric, so that the quadrance will be symmetric. The CIA further insists that a metric should be positive-definite, because they can’t wrap their tiny glowie minds around the concept of “negative distance.” Much like negative numbers or negative area, negative quadrance is actually a quite natural concept that makes a lot of mathematical sense.
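To make the negative-quadrance point concrete, here’s a Python sketch computing the quadrance of the same vector under all three metrics (exact rationals via `Fraction`; names mine):

```python
from fractions import Fraction as Q

def quadrance(metric, v):
    """Inner product of v with itself: v^T * metric * v."""
    (m00, m01), (m10, m11) = metric
    x, y = v
    return x * (m00 * x + m01 * y) + y * (m10 * x + m11 * y)

blue  = [[Q(1), Q(0)], [Q(0), Q(1)]]
red   = [[Q(1), Q(0)], [Q(0), Q(-1)]]
green = [[Q(0), Q(1)], [Q(1), Q(0)]]

v = (Q(1), Q(2))
print(quadrance(blue, v))   # 5
print(quadrance(red, v))    # -3  (negative quadrance: perfectly fine)
print(quadrance(green, v))  # 4   (the green quadrance of (x, y) is 2xy)
```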

The CIA’s approach to defining angles is to use the **[law of cosines]**. This starts with something called the **[Cauchy-Schwarz inequality]**. For any two vectors `u` and `v`, it is the case that

```
(<= (* (inner-product u v) (inner-product u v))
    (* (inner-product u u) (inner-product v v)))
```

In English: the square of the inner product is less than or equal to the product of the quadrances.

Or if you take square-roots and absolute values (a crime), in CIA notation

`|<u, v>| ≤ |u||v|`

(The angle brackets mean inner product. The bars mean “absolute value” on the left, and “length” on the right.)

You can drop the absolute value bars from the left hand side and the inequality is still true:

`<u, v> ≤ |u||v|`

In fact, the following is also true, just because of the way absolute values work

`-|u||v| ≤ <u, v> ≤ |u||v|`

So, the glowie mind thinks “Ah. There must exist a function, let’s call it `c`, which takes the vectors `u` and `v` and produces a number between `-1` and `1`, which ‘tamps down’ the magnitudes so that they match the inner products”. That is:

`<u, v> = |u||v| c(u, v)`

That function `c`, the glowies call the **[cosine]**. They *define* the cosine as

```
(define (c metric u v)
  (/ (inner-product metric u v)
     (* (sqrt (quadrance metric u)) (sqrt (quadrance metric v)))))
```

They then *define* the “angle” between the vectors `u` and `v` as the [inverse-cosine] of the value of `c(u, v)`.

The inverse-cosine, like the square root, requires CIA real numbers.

The way we defeat this is with the **[cross]**, which according to the CIA is “cosine squared”. We don’t need to do square-roots or arc-cosines.

We simply define the cross as

```
(define (cross metric u v)
  (/ (expt (inner-product metric u v) 2)
     (* (quadrance metric u) (quadrance metric v))))
```

Which is equivalent to

```
(define (cross metric u v)
  (/ (* (inner-product metric u v) (inner-product metric u v))
     (* (inner-product metric u u) (inner-product metric v v))))
```

The most important identity in glowie trigonometry is as follows: for any real number `x`

```
(equal? 1
        (+ (expt (sin x) 2)
           (expt (cos x) 2)))
```

“Sine squared plus cosine squared equals 1”. This is accepted as opaquely true by CIA cadets.

We define the **[spread]** as

```
(define (spread metric u v)
  (- 1 (cross metric u v)))
```

The spread is clearly equal to the sine squared. What is important today is that the spread is `0` for parallel vectors, and `1` for perpendicular vectors.
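Here’s a Python sketch of the whole pipeline — inner product, cross, spread — checking those two facts in the blue metric (exact rationals throughout; names mine):

```python
from fractions import Fraction as Q

blue = [[Q(1), Q(0)], [Q(0), Q(1)]]

def ip(metric, u, v):
    """Inner product u^T * metric * v."""
    (m00, m01), (m10, m11) = metric
    return u[0] * (m00 * v[0] + m01 * v[1]) + u[1] * (m10 * v[0] + m11 * v[1])

def cross(metric, u, v):
    """Square of the inner product over the product of the quadrances."""
    return ip(metric, u, v) ** 2 / (ip(metric, u, u) * ip(metric, v, v))

def spread(metric, u, v):
    return 1 - cross(metric, u, v)

u = (Q(1), Q(2))
print(spread(blue, u, (Q(2), Q(4))))   # 0 : (2, 4) is parallel to u
print(spread(blue, u, (Q(-2), Q(1))))  # 1 : (-2, 1) is perpendicular to u
```

No square roots, no arc-cosines, and every intermediate value is an exact rational.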

I will talk probably in the next post about the connection between the spread and the Pythagorean theorem.

In our framework, we can just prove, plainly and algebraically, that at least with the blue metric:

- if the spread between two vectors is 1, then the Pythagorean theorem is true for the triangle they generate
- if the spread between two vectors is 0, then the vectors are parallel
- the spread plus the cross equals 1

This is how our code works: it tests for parallelism and perpendicularity by just calculating the spread.

It’s important to understand that the concept of parallelism belongs to the prior branch point of affine geometry, but that our code uses the metrical notion of spread to calculate parallelism (unnecessarily).

# Code

The rest of this post talks about the code, save for the references at the very end. I wrote implementations in Erlang and Scheme.

Erlang is a lot more explicit and is a simpler language, but Scheme is significantly easier to play around with in the shell.

# Definition: [point]

A **[point]** is a pair of rational numbers. These are meant to index the point in a standard Cartesian coordinate system. However, strictly speaking, the point *is* just the pair of rationals.

Erlang:

```
-record(pt,
        {x :: q(),
         y :: q()}).
-type pt() :: #pt{}.

-spec pt(q(), q()) -> pt().
% @doc
% just a tuple constructor
pt(X, Y) ->
    {pt, X, Y}.
```

In Scheme, we use some quasiquote jizz to achieve the same effect:

```
(define (pt x y)
  "construct a point"
  `(pt ,x ,y))
```

Scheme Shell:

```
scheme@(guile-user) [6]> (pt 0 1)
$58 = (pt 0 1)
scheme@(guile-user) [6]> (pt (- 8 (/ 4 88)) 82)
$59 = (pt 175/22 82)
```

You can see immediately why I also did a Scheme implementation when I try to do the same thing in the Erlang shell:

```
1> wpg:pt(qx_q:from_z(0), qx_q:from_z(1)).
{pt,{q,0,1},{q,1,1}}
2> wpg:pt(qx_q:minus(qx_q:from_z(8), qx_q:q(4, 88)), qx_q:from_z(82)).
{pt,{q,175,22},{q,82,1}}
```

The blow can be softened somewhat by defining local aliases:

```
3> Pt = fun wpg:pt/2.
fun wpg:pt/2
4> Minus2 = fun qx_q:minus/2.
fun qx_q:minus/2
5> Z = fun qx_q:from_z/1.
fun qx_q:from_z/1
6> Q = fun qx_q:q/2.
fun qx_q:q/2
7> Pt(Z(0), Z(1)).
{pt,{q,0,1},{q,1,1}}
8> Pt(Minus2(Z(8), Q(4, 88)), Z(82)).
{pt,{q,175,22},{q,82,1}}
```

However Scheme is still significantly more pleasant to play around with. I will probably implement a wrapper module for QAnal in LFE just to solve this issue.

We’re probably not going to stick with this code. We’re instead going to develop more powerful abstractions, and put *those* into the QAnal library. Hence why this is a separate repository. At least for now. Never know. Basic planar geometry is pretty important, and comes up a lot, so it might be worth having a module that just does planar geometry. I dunno.

# Definition: [line]

Alright, this takes a small amount of explaining. Most of you are familiar with the **[slope-intercept]** form of lines:

`y = M*x + B`

`M` is called the **[slope]** and `B` is called the **[intercept]**.

This captures [almost all] lines. The lines this definition fails to capture are vertical lines, which have “infinite slope”.

These are modeled by the equation

`x = Something`

We can capture both families of lines with **[standard form]**

`A*x + B*y + C = 0`

The constraint being that at least one of `A` and `B` is nonzero. Otherwise you either have `0 = 0`, which describes the whole plane, or `SomethingNonzero = 0`, which describes the empty set. Including those as valid lines is retarded.

Sort of. In projective geometry, the `SomethingNonzero = 0` case is allowed, and only the `0 = 0` case is excluded. The `SomethingNonzero = 0` case is called the [line at infinity], and points on it are [points at infinity]. Each family of parallel lines (called a **[parallel pencil]**) collectively meets at a point at infinity. Generally, in projective geometry, a **[pencil]** means a family of lines emanating from a single point. When that point happens to be a point at infinity, the pencil appears to our primitive affine chimp brains as a family of parallel lines. More on that later. Gotta develop our intuitions in a familiar setting first before we start talking about where parallel lines meet.

Standard form once again has a scale invariance issue, which is that we can multiply both sides of that equation by any nonzero rational number, and we get the same line. For instance,

```
x + y - 7 = 0
8x + 8y - 56 = 0
```

describe the same line. So once again we need a reduction rule. The rule I chose is that we normalize the first nonzero coefficient among `A` and `B` to `1`. That is:

Erlang:

```
-record(ln,
        {cx :: q(),
         cy :: q(),
         cc :: q()}).
-type ln() :: #ln{}.

-spec ln(q(), q(), q()) -> ln().
% @doc
% Construct a line and reduce into canonical form
%
% Our canonical form is defined as {ln, A, B, C} which represents the
% equation
%
%     Ax + By + C = 0
%
% We have a scale-invariance issue once again, like with rationals,
% and this is how we resolve it:
%
% - at least one of A and B must be nonzero
% - the first nonzero entry must be normalized to one
% @end

% At least one of A and B must be nonzero
ln({q, 0, 1}, {q, 0, 1}, _) ->
    error(badarg);
% If A = 0, then normalize by B
ln(_A = Zero = {q, 0, 1}, B, C) ->
    NewA = Zero,
    NewB = qx_q:one(),
    NewC = qx_q:divide(C, B),
    {ln, NewA, NewB, NewC};
% Otherwise, normalize by A
ln(A, B, C) ->
    NewA = qx_q:one(),
    NewB = qx_q:divide(B, A),
    NewC = qx_q:divide(C, A),
    {ln, NewA, NewB, NewC}.
```

Scheme:

```
(define (ln a b c)
  "construct a line in standard form"
  (cond [(and (equal? a 0) (equal? b 0))
         (error "a and b cannot both be 0")]
        [(equal? a 0)
         `(ln 0 1 ,(/ c b))]
        [else
         `(ln 1 ,(/ b a) ,(/ c a))]))
```

Erlang shell:

```
9> Ln = fun wpg:ln/3.
fun wpg:ln/3
10> Ln(Z(8), Z(4), Z(1)).
{ln,{q,1,1},{q,1,2},{q,1,8}}
```

Scheme shell:

```
scheme@(guile-user) [6]> (ln 8 4 1)
$60 = (ln 1 1/2 1/8)
```

# Testing for incidence

A point `(x, y)` is incident with a line `Ax + By + C = 0` if, well, `Ax + By + C = 0`. Huzzah.

Erlang:

```
-spec incident
    (pt(), ln()) -> boolean();
    (ln(), pt()) -> boolean().
incident(Ln = #ln{}, Pt = #pt{}) ->
    incident(Pt, Ln);
incident({pt, X, Y}, {ln, A, B, C}) ->
    Zero = zero(),
    IP = qx_q:plus([qx_q:times(A, X),
                    qx_q:times(B, Y),
                    C]),
    Result = IP =:= Zero,
    Result.
```

Scheme:

```
(define (incident? pl1 pl2)
  "test incidence between a point and a line; crashes unless one
argument is a point and the other is a line"
  (define (ipl point line)
    "incidence between a point and a line"
    (define x (coord-x point))
    (define y (coord-y point))
    (define a (coord-a line))
    (define b (coord-b line))
    (define c (coord-c line))
    (define dot-product (+ (* a x) (* b y) c))
    (equal? 0 dot-product))
  (cond [(and (is-point? pl1) (is-line? pl2))
         (ipl pl1 pl2)]
        [(and (is-line? pl1) (is-point? pl2))
         (ipl pl2 pl1)]
        [else
         (error "incidence must be between a point and a line")]))
```

# A note on join and meet

The intuition behind the join and meet formulas will become significantly clearer once we learn more about linear algebra and projective geometry. For now, what you can do is the following:

- take the formulas on faith for now
- verify using our incidence test that both points will be incident with their join
- verify using our incidence test that both lines will be incident with their meet
- since there is only one line that is incident with two distinct points, the formula must be correct
- since there is only one point that is incident with two transverse lines, the formula must be correct

You can read the comments in the file (which explain how I derived the formula) to get a hint of why these formulas work.
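The verification recipe above can be carried out mechanically. Here’s a Python sketch (mirroring the Scheme structure with `Fraction`s; the names are mine, not QAnal’s) that constructs a join and confirms both points are incident with it:

```python
from fractions import Fraction

def ln(a, b, c):
    """Normalize line a*x + b*y + c = 0: first nonzero of (a, b) becomes 1."""
    if a == 0 and b == 0:
        raise ValueError("a and b cannot both be 0")
    if a == 0:
        return ("ln", Fraction(0), Fraction(1), c / b)
    return ("ln", Fraction(1), b / a, c / a)

def join(p1, p2):
    """Line through two distinct points: ln(y1 - y2, x2 - x1, x1*y2 - x2*y1)."""
    (x1, y1), (x2, y2) = p1, p2
    return ln(y1 - y2, x2 - x1, x1 * y2 - x2 * y1)

def incident(pt, line):
    """A point is incident with a line when a*x + b*y + c = 0."""
    _, a, b, c = line
    x, y = pt
    return a * x + b * y + c == 0

p1 = (Fraction(0), Fraction(1))
p2 = (Fraction(2), Fraction(5))
l = join(p1, p2)
print(l)                              # ('ln', Fraction(1, 1), Fraction(-1, 2), Fraction(1, 2))
print(incident(p1, l), incident(p2, l))  # True True
```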

# How to [join] two distinct points to form a line

Erlang:

```
-spec join(pt(), pt()) -> ln().
% @doc
% join two DISTINCT points to get a line
join(Equal, Equal) ->
    error(badarg);
join({pt, X1, Y1}, {pt, X2, Y2}) ->
    % The answer is:
    %
    %     ln(Y1 - Y2, X2 - X1, X1Y2 - X2Y1)
    %
    % Is incident with {X1, Y1}:
    %
    %     X1*(Y1 - Y2) + Y1*(X2 - X1) + (X1Y2 - X2Y1)
    %       = X1Y1 - X1Y2 + Y1X2 - Y1X1 + X1Y2 - X2Y1
    %       = 0
    %
    % Is incident with {X2, Y2}:
    %
    %     X2*(Y1 - Y2) + Y2*(X2 - X1) + (X1Y2 - X2Y1)
    %       = X2Y1 - X2Y2 + X2Y2 - X1Y2 + X1Y2 - X2Y1
    %       = 0
    %
    % Given two distinct points, there is exactly one line incident
    % with both of them
    ln(qx_q:minus(Y1, Y2),
       qx_q:minus(X2, X1),
       qx_q:minus(qx_q:times(X1, Y2),
                  qx_q:times(X2, Y1))).
```

Scheme:

```
(define (join pt1 pt2)
  "join two distinct points to form a line"
  (define (really-join rpt1 rpt2)
    "join two points assumed to be distinct"
    ;; The answer is:
    ;;
    ;;     ln(Y1 - Y2, X2 - X1, X1Y2 - X2Y1)
    ;;
    ;; Is incident with {X1, Y1}:
    ;;
    ;;     X1*(Y1 - Y2) + Y1*(X2 - X1) + (X1Y2 - X2Y1)
    ;;       = X1Y1 - X1Y2 + Y1X2 - Y1X1 + X1Y2 - X2Y1
    ;;       = 0
    ;;
    ;; Is incident with {X2, Y2}:
    ;;
    ;;     X2*(Y1 - Y2) + Y2*(X2 - X1) + (X1Y2 - X2Y1)
    ;;       = X2Y1 - X2Y2 + X2Y2 - X1Y2 + X1Y2 - X2Y1
    ;;       = 0
    ;;
    ;; Given two distinct points, there is exactly one line incident
    ;; with both of them
    (define x1 (coord-x rpt1))
    (define y1 (coord-y rpt1))
    (define x2 (coord-x rpt2))
    (define y2 (coord-y rpt2))
    (define a (- y1 y2))
    (define b (- x2 x1))
    (define c (- (* x1 y2) (* x2 y1)))
    (ln a b c))
  (if (equal? pt1 pt2)
      (error "cannot join equal points")
      (really-join pt1 pt2)))
```

# How to [meet] two transverse lines to form a point

This code will beg the question of how to test for parallelism. That will be addressed next.

Erlang:

```
-spec meet(ln(), ln()) -> pt().
% @doc
% Meet two lines that are NOT PARALLEL to get a point.
meet(L1, L2) ->
    Transverse = not parallel(L1, L2),
    case Transverse of
        true  -> really_meet(L1, L2);
        false -> error(badarg)
    end.

really_meet({ln, A1, B1, C1}, {ln, A2, B2, C2}) ->
    % We are solving the matrix equation
    %
    %     [A1 B1][X] = [-C1]
    %     [A2 B2][Y]   [-C2]
    %
    % The answer will be the inverse of the matrix, times [-C1 -C2]^t
    %
    % The inverse of the matrix is
    %
    %     (1 / (A1B2 - B1A2)) * [ B2 -B1]
    %                           [-A2  A1]
    %
    % So when the dust settles, we get
    %
    %     X = (B1*C2 - B2*C1)/Det
    %     Y = (A2*C1 - A1*C2)/Det
    Det = qx_q:minus(qx_q:times(A1, B2),
                     qx_q:times(B1, A2)),
    X = qx_q:divide(qx_q:minus(qx_q:times(B1, C2), qx_q:times(B2, C1)),
                    Det),
    Y = qx_q:divide(qx_q:minus(qx_q:times(A2, C1), qx_q:times(A1, C2)),
                    Det),
    pt(X, Y).
```

Scheme:

```
(define (meet ln1 ln2)
  "find the meet of two non-parallel lines"
  (define (really-meet rln1 rln2)
    "find the meet of two lines assumed to not be parallel"
    ;; We are solving the matrix equation
    ;;
    ;;     [A1 B1][X] = [-C1]
    ;;     [A2 B2][Y]   [-C2]
    ;;
    ;; The answer will be the inverse of the matrix, times [-C1 -C2]^t
    ;;
    ;; The inverse of the matrix is
    ;;
    ;;     (1 / (A1B2 - B1A2)) * [ B2 -B1]
    ;;                           [-A2  A1]
    ;;
    ;; So when the dust settles, we get
    ;;
    ;;     X = ( B1*C2 - B2*C1)/Det
    ;;     Y = (-A1*C2 + A2*C1)/Det
    (define a1 (coord-a rln1))
    (define b1 (coord-b rln1))
    (define a2 (coord-a rln2))
    (define b2 (coord-b rln2))
    (define c1 (coord-c rln1))
    (define c2 (coord-c rln2))
    (define det (- (* a1 b2) (* a2 b1)))
    (define x (/ (- (* b1 c2) (* b2 c1))
                 det))
    (define y (/ (- (* a2 c1) (* a1 c2))
                 det))
    (pt x y))
  (if (parallel? ln1 ln2)
      (error "cannot meet parallel lines")
      (really-meet ln1 ln2)))
```
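Same drill as with the join: here’s a Python sketch (names mine) that solves the 2×2 system by Cramer’s rule and confirms both lines are incident with the result:

```python
from fractions import Fraction as Q

def meet(l1, l2):
    """Meet of two non-parallel lines, each given as (a, b, c) for a*x + b*y + c = 0."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("cannot meet parallel lines")
    x = (b1 * c2 - b2 * c1) / det
    y = (a2 * c1 - a1 * c2) / det
    return (x, y)

l1 = (Q(1), Q(1), Q(-7))   # x + y - 7 = 0
l2 = (Q(1), Q(-1), Q(-1))  # x - y - 1 = 0
x, y = meet(l1, l2)
print(x, y)  # 4 3
# both lines are incident with their meet:
print(l1[0] * x + l1[1] * y + l1[2], l2[0] * x + l2[1] * y + l2[2])  # 0 0
```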

# [Fibonacci’s Identity] and [Euler’s Four Square Identity]

I told you that next we’re going to talk about parallelism. I lied. Well not really. Our discussion of parallelism will rely quite heavily on something called **[Fibonacci’s Identity]**:

`(A1*B2 - A2*B1)^2 + (A1*A2 + B1*B2)^2 = (A1^2 + B1^2)(A2^2 + B2^2)`

Or in Schemeish notation

```
(equal? (+ (expt (- (* a1 b2) (* a2 b1)) 2)
           (expt (+ (* a1 a2) (* b1 b2)) 2))
        (* (+ (expt a1 2) (expt b1 2))
           (+ (expt a2 2) (expt b2 2))))
```

This is by far the most crucial algebraic identity in QAnal. It is pretty much central to everything. I want you to stop reading this and prove it on your own. It is a tedious but very straightforward algebraic proof: expand both sides, and see that all the terms match. This identity will come up time and time again, and we will lean on it very heavily.
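A brute-force sanity check is no substitute for the algebraic proof, but it makes a nice warm-up. This Python sketch verifies the identity (in the form with `A1*A2 + B1*B2` in the second square) on a grid of small rationals:

```python
from fractions import Fraction
from itertools import product

def lhs(a1, b1, a2, b2):
    return (a1 * b2 - a2 * b1) ** 2 + (a1 * a2 + b1 * b2) ** 2

def rhs(a1, b1, a2, b2):
    return (a1 ** 2 + b1 ** 2) * (a2 ** 2 + b2 ** 2)

# check every 4-tuple drawn from a small grid of rationals
grid = [Fraction(n, 3) for n in range(-4, 5)]
assert all(lhs(*t) == rhs(*t) for t in product(grid, repeat=4))
print("holds on all", len(grid) ** 4, "sample tuples")
```

Since both sides are polynomials, agreement on enough points actually does imply the identity; but do the expansion by hand anyway.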

If you feel ambitious and want a more interesting sneak peek into why Fibonacci’s Identity is useful, try this: for a complex number `(c x y)` (CIA notation: `x + y*i`), define its quadrance to be `(+ (expt x 2) (expt y 2))`. Prove using Fibonacci’s Identity that the quadrance of the product is the product of the quadrances, that is:

```
(equal? (complex-quadrance (c* (c x1 y1) (c x2 y2)))
        (q* (complex-quadrance (c x1 y1))
            (complex-quadrance (c x2 y2))))
```

There is a more general identity (i.e. an identity for which the Fibonacci Identity is a special case) called **[Euler’s Four Square Identity]**. It is important that you sit down and prove the Euler Four Square Identity eventually, but not crucially important that you do it right now. It is crucially important that you prove Fibonacci’s identity right now.

```
(equal? (* (+ (expt a1 2)
              (expt a2 2)
              (expt a3 2)
              (expt a4 2))
           (+ (expt b1 2)
              (expt b2 2)
              (expt b3 2)
              (expt b4 2)))
        (+ (expt (+ (* a1 b1)
                    (* -1 a2 b2)
                    (* -1 a3 b3)
                    (* -1 a4 b4))
                 2)
           (expt (+ (* a1 b2)
                    (* a2 b1)
                    (* a3 b4)
                    (* -1 a4 b3))
                 2)
           (expt (+ (* a1 b3)
                    (* -1 a2 b4)
                    (* a3 b1)
                    (* a4 b2))
                 2)
           (expt (+ (* a1 b4)
                    (* a2 b3)
                    (* -1 a3 b2)
                    (* a4 b1))
                 2)))
```
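The same sign pattern can be spot-checked numerically. Here is a sketch in Python with `fractions.Fraction`; the names `euler_lhs` and `euler_rhs` are invented for this example.

```python
from fractions import Fraction as Q
from random import randint

def euler_lhs(a, b):
    # product of the two four-square sums
    return sum(x*x for x in a) * sum(y*y for y in b)

def euler_rhs(a, b):
    # the four squared terms, with the same signs as the Scheme identity above
    a1, a2, a3, a4 = a
    b1, b2, b3, b4 = b
    return ((a1*b1 - a2*b2 - a3*b3 - a4*b4)**2
          + (a1*b2 + a2*b1 + a3*b4 - a4*b3)**2
          + (a1*b3 - a2*b4 + a3*b1 + a4*b2)**2
          + (a1*b4 + a2*b3 - a3*b2 + a4*b1)**2)

# spot-check on random rational inputs (evidence, not a proof!)
for _ in range(200):
    a = [Q(randint(-20, 20), randint(1, 20)) for _ in range(4)]
    b = [Q(randint(-20, 20), randint(1, 20)) for _ in range(4)]
    assert euler_lhs(a, b) == euler_rhs(a, b)
```

The four terms on the right are precisely the components of a quaternion product, which is the connection to the quadrance-multiplicativity fact mentioned below.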

We will lean on this a bit in a couple posts when I talk about projective geometry and quaternions. In particular, the quadrance-multiplicity fact I mentioned for complex numbers also holds for quaternions. The Euler Four Square Identity plays an identical role in the quaternion fact as the Fibonacci Identity does in the complex number fact.

# Simple parallelism via the determinant

You may (do) want to watch The SHOCKING truth about determinants and Revelations 16 Context first.

Our naive first definition is:

Two lines `(ln a1 b1 c1)` and `(ln a2 b2 c2)` are **[parallel]** precisely when `(equal? 0 (- (* a1 b2) (* b1 a2)))`.

Notice that the test doesn’t involve `c1` or `c2`. The reason is that in `(ln a b c)`, the parameters `a` and `b` govern the inclination of the line, and `c` governs its displacement away from the origin. For instance, all lines of the form `(ln a b 0)` are incident with the origin (check using our incidence test!). Since parallelism is just checking whether the lines have the same inclination, the displacement from the origin is irrelevant.

The quantity we are testing against `0` is called a **[determinant]**. It comes up a lot. We’re going to get an outrageous amount of mileage out of determinants. I will go so far as to say that the majority of rational geometry is just properties of determinants.

For now, I just want to convince you that, in this narrow case, it is true that the lines are parallel precisely when the determinant is zero.

There are two ways for the lines to have the same inclination: either they are both vertical, or they are both non-vertical with the same slope. Let’s check the three possible configurations.

### Case 1: both lines are vertical

We should hope that our test returns “parallel” here.

In this case, the lines are (in reduced form) `(ln 1 0 c1)` and `(ln 1 0 c2)` for some `c1` and `c2`. It is true that `(equal? 0 (- (* 1 0) (* 0 1)))`.

So all pairs of vertical lines are parallel according to our definition.

### Case 2: one line is vertical and the other is not

We should hope that our test returns “not parallel” here.

Let’s assume the first line is vertical, `(ln 1 0 c1)`, and the second line is not vertical, `(ln m2 -1 b2)`. Then it is false that `(equal? 0 (- (* 1 -1) (* 0 m2)))`, because the determinant always evaluates to `-1`.

Nice.

### Case 3: both lines are non-vertical and therefore have finite slope

We should hope that our test returns “parallel” here exactly when the lines have the same slope.

In this case, the lines are (in reduced form) `(ln m1 -1 c1)` and `(ln m2 -1 c2)` for some `c1` and `c2`, where `m1` and `m2` are the respective slopes. The expression `(equal? 0 (- (* m1 -1) (* -1 m2)))` evaluates to `#t` exactly when `(equal? m1 m2)`.

Which is what we hoped for.
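The three cases can also be machine-checked. Here is a sketch in Python with `fractions.Fraction`; tuples stand in for the `(ln a b c)` records, and `det_parallel` is an invented name for this example.

```python
from fractions import Fraction as Q

def det_parallel(ln1, ln2):
    """Naive affine parallelism test: determinant of the inclination parts is zero."""
    (a1, b1, _), (a2, b2, _) = ln1, ln2
    return a1*b2 - b1*a2 == 0

# Case 1: two vertical lines (ln 1 0 c) -- always parallel
assert det_parallel((Q(1), Q(0), Q(5)), (Q(1), Q(0), Q(-3)))

# Case 2: vertical vs non-vertical (ln m -1 b) -- never parallel (det is -1)
assert not det_parallel((Q(1), Q(0), Q(5)), (Q(2), Q(-1), Q(7)))

# Case 3: both non-vertical -- parallel exactly when the slopes match
assert det_parallel((Q(1, 2), Q(-1), Q(0)), (Q(1, 2), Q(-1), Q(9)))
assert not det_parallel((Q(1, 2), Q(-1), Q(0)), (Q(1, 3), Q(-1), Q(0)))
```

Note that the displacement coordinate `c` is ignored, exactly as the prose above argues it should be.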

# How the code ACTUALLY tests for parallelism (and perpendicularity)

If you skimmed over the “metrical geometry” rant section above, you probably want to go read that.

Our previous “naive” definition is a perfectly satisfactory definition of parallelism. And the code would work equally well if I had chosen that path.

If you remember, I told you that the concept of parallelism belongs to affine geometry, rather than metrical. This “naive” version is the affine version of parallelism.

The code uses the metrical version of parallelism. The tests obviously agree in every case (or we would have a problem); it’s just that the metrical notion also allows us to test for perpendicularity.

The only functionality that would be missing with the naive definition would be the ability to test for perpendicularity. The way I set it up, it’s equally easy to test for parallelism and perpendicularity.

Absent perpendicularity, meets and joins would work exactly the same way. We would be able to answer the first 5 questions above, just not the 6th.

The way we define parallelism and perpendicularity is to use a statistic called the **[spread]**, which I’ll explain in the next section.

In CIA terminology, the spread is the sine squared of the angle between the two lines. The thing is, sine is an irrational transcendental function. In general, you can only compute the sine approximately. Weirdly, the sine **squared** can always be computed exactly, provided the inclinations of the lines are rational. It’s a very simple formula, and it makes a lot of sense, and I’ll explain all of that in the next section.

For now, look at this spread protractor:

And note that the two lines are parallel if the spread between them is `0`, and they are perpendicular if the spread between them is `1`.

Spread in rational geometry plays roughly the same role as “angle” does in the CIA’s geometry.

Just to recap:

- **quadrance** is our notion of “distance”, or separation between points. In CIA terminology, quadrance is equal to [distance] squared.
- **spread** is our notion of “angle”, or separation between lines. In CIA terminology, spread is equal to the sine squared of the [angle] between two lines. Note that the phrase “separation between lines” is a bit misleading: for instance, we consider two distinct parallel lines to have a spread of `0`. Really, spread is measuring the separation between **[parallel pencils]**. In other words, it’s a “distance” measure on the line at infinity, and that distance ranges between `0` and `1`.

So here’s the code

Erlang:

```
-spec parallel(ln(), ln()) -> boolean().
parallel(L1, L2) ->
    Zero = zero(),
    Spread = spread(L1, L2),
    Result = Zero =:= Spread,
    Result.

-spec perpendicular(ln(), ln()) -> boolean().
perpendicular(L1, L2) ->
    One = one(),
    Spread = spread(L1, L2),
    Result = One =:= Spread,
    Result.

-spec spread(ln(), ln()) -> q().
spread(L1, L2) ->
    One = one(),
    Cross = cross(L1, L2),
    qx_q:minus(One, Cross).

-spec cross(ln(), ln()) -> q().
cross({ln, A1, B1, _}, {ln, A2, B2, _}) ->
    % start with the law of cosines from CIA trigonometry
    %
    %   V.W = |V||W|cos(Theta)
    %
    %   cos(Theta) =  V . W
    %                ---------
    %                 |V| |W|
    %
    % If we square this, we get the "cross"
    % So the cross is the dot squared divided by the product
    % of the quadrances
    Dot = qx_q:plus(qx_q:times(A1, A2), qx_q:times(B1, B2)),
    Q1  = qx_q:plus(qx_q:times(A1, A1), qx_q:times(B1, B1)),
    Q2  = qx_q:plus(qx_q:times(A2, A2), qx_q:times(B2, B2)),
    Res = qx_q:divide(qx_q:times(Dot, Dot),
                      qx_q:times(Q1 , Q2 )),
    Res.
```

And Scheme:

```
(define (parallel? ln1 ln2)
  "test if two lines are parallel"
  (equal? 0 (spread ln1 ln2)))

(define (perpendicular? ln1 ln2)
  "test if two lines are perpendicular"
  (equal? 1 (spread ln1 ln2)))

(define (spread ln1 ln2)
  "spread between two lines ('sine squared' according to the CIA)"
  (- 1 (cross ln1 ln2)))

(define (cross ln1 ln2)
  "cross between two lines ('cosine squared' according to the CIA)"
  ;; start with the law of cosines from CIA trigonometry
  ;;
  ;;   V.W = |V||W|cos(Theta)
  ;;
  ;;   cos(Theta) =  V . W
  ;;                ---------
  ;;                 |V| |W|
  ;;
  ;; If we square this, we get the "cross"
  ;; So the cross is the dot squared divided by the product
  ;; of the quadrances
  (define a1 (coord-a ln1))
  (define b1 (coord-b ln1))
  (define a2 (coord-a ln2))
  (define b2 (coord-b ln2))
  (define dot (+ (* a1 a2) (* b1 b2)))
  (define q1 (+ (* a1 a1) (* b1 b1)))
  (define q2 (+ (* a2 a2) (* b2 b2)))
  (/ (* dot dot)
     (* q1 q2)))
```
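For concreteness, here is the same spread/cross computation sketched in Python with `fractions.Fraction`, so you can see it produce exact rational answers; tuples stand in for the line records, and the function names mirror the Scheme above.

```python
from fractions import Fraction as Q

def cross(ln1, ln2):
    """Dot squared over the product of the quadrances ('cosine squared')."""
    (a1, b1, _), (a2, b2, _) = ln1, ln2
    dot = a1*a2 + b1*b2
    return dot*dot / ((a1*a1 + b1*b1) * (a2*a2 + b2*b2))

def spread(ln1, ln2):
    """1 minus the cross ('sine squared')."""
    return 1 - cross(ln1, ln2)

x_axis = (Q(0), Q(1), Q(0))   # y = 0
y_axis = (Q(1), Q(0), Q(0))   # x = 0
diag   = (Q(1), Q(-1), Q(0))  # y = x

assert spread(x_axis, y_axis) == 1               # perpendicular
assert spread(diag, x_axis) == Q(1, 2)           # the CIA's "45 degrees": sin^2 = 1/2
assert spread(x_axis, (Q(0), Q(1), Q(7))) == 0   # parallel (displacement is irrelevant)
```

Note that the “45 degree” spread of `1/2` comes out exactly, even though the CIA’s sine of 45 degrees is the irrational `sqrt(2)/2`.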

# Naive perpendicularity, cross, and spread

This post is already too long, so a proper discussion of cross and spread is going to have to wait until the next post. I’ll give you enough information so that you can be convinced that the code is correct.

Alright, remember how with naive parallelism, for lines `(ln a1 b1 c1)` and `(ln a2 b2 c2)`, we tested for parallelism with the simple test `(equal? 0 (- (* a1 b2) (* a2 b1)))`?

Well we have an equally simple test for perpendicularity:

Two lines `(ln a1 b1 c1)` and `(ln a2 b2 c2)` are **[perpendicular]** precisely when

```
(equal? 0
        (+ (* a1 a2) (* b1 b2)))
```

That quantity is called the **[dot product]**.

Fibonacci’s Identity tells us that

```
(equal? (+ (expt (- (* a1 b2) (* a2 b1)) 2)
           (expt (+ (* a1 a2) (* b1 b2)) 2))
        (* (+ (expt a1 2) (expt b1 2))
           (+ (expt a2 2) (expt b2 2))))
```

So, roughly, if we square our parallelism test, and square our perpendicularity test, and then add them together, we get on the other side this product of quadrance-looking things.

If we divide both sides by the quadrance-product, we get

```
(equal? 1
        (+ (/ (expt (- (* a1 b2) (* a2 b1)) 2)
              (* (+ (expt a1 2) (expt b1 2))
                 (+ (expt a2 2) (expt b2 2))))
           (/ (expt (+ (* a1 a2) (* b1 b2)) 2)
              (* (+ (expt a1 2) (expt b1 2))
                 (+ (expt a2 2) (expt b2 2))))))
```

The dot-product-squared divided by the quadrance-product is called the **[cross]**, and the determinant-squared divided by the quadrance-product is called the **[spread]**. The spread plus the cross equals `1`.

In CIA terminology, this is “sine squared plus cosine squared equals 1”. Spread is sine squared, and cross is cosine squared.

Anyway, this is all that matters:

- The dot product is 0 (our definition of perpendicular) exactly when the spread is 1.
- The determinant is 0 (our definition of parallel) exactly when the spread is 0.

Hence the tests

```
(define (parallel? ln1 ln2)
  "test if two lines are parallel"
  (equal? 0 (spread ln1 ln2)))

(define (perpendicular? ln1 ln2)
  "test if two lines are perpendicular"
  (equal? 1 (spread ln1 ln2)))
```

# Links and references

- Foundations of QAnal series
- Code:
- Some GeoGebra applets:
- My relevant playlists:
- My less relevant but mentioned playlists:
- My relevant videos:
- Wildberger
- Algebraic Calculus One (curve stuff)
- Divine Proportions (reference book)
- Plimpton 322 Paper (interesting)
- WildTrig (reference playlist)
- WildLinAlg (where we’re headed)