MATH300 in Lean 4

Chapter 1: Preliminaries

1.0. Introduction to Lean 4 - Interactive Proof Assistant

Congratulations on taking your first step toward writing formal mathematical proofs!

When I was taking this course and doing the homework, I was always wondering: is there a way to verify the proofs I wrote before turning them in? Just like using a calculator to check the derivatives and integrals I took in those calculus problems?

If you have the same question, then you are in the right place! Our amazing mathematicians and computer scientists have developed some useful software tools for constructing and verifying formal mathematical proofs, which we typically call proof assistants. The tool we are going to use is called the Lean Theorem Prover, which acts as both a programming language and an interactive proof assistant.

1.0.1. Installing Lean 4

The version of the Lean theorem prover we will be using is Lean 4. Follow the instructions here to install Lean 4 and the VS Code editor on your computer. Then follow the instructions here to get a copy of this project on your computer.

Now we are all set. Open the file named "c1.lean" in VS Code. Take a look at the contents of this file. Suddenly, you are able to figure out everything in this file!

...... well, that's not true. You might be wondering: what the heck do all these lines of code mean?? Don't worry, in this chapter, we will go through every line of code in this file, and by the end of this chapter, you will be able to understand everything in it!

1.0.2. How to use this guide

Everything in this guide exactly follows the order of the MATH 300 textbook by Conroy-Taggart: An Introduction to Mathematical Reasoning. For each chapter, we will go over the concepts, theorems, and proofs mentioned in the textbook, along with their implementations in Lean, and we've made a Lean file for each chapter that contains the code in this guide, so that you can read this guide while checking the code in Lean interactively.

Now is the time to let you know what we are going to do: we will turn the proofs we write into Lean code, and Lean, as a theorem prover, will produce feedback on our input (i.e., the remaining goals we need to solve in order to complete the proof). Keep in mind that Lean is not a forgiving language. Whenever we make a mistake in our proof, whether it's a logical mistake or just a syntax mistake, Lean will throw an error immediately (since it's interactive), and we must stop and fix that error in order to make further progress. You might get frustrated quite often during this learning process, but remember, this is also the biggest advantage of Lean: we can be sure that the mathematical proofs we construct are absolutely correct.

And as you get used to it, Lean can not only verify the proofs you write, but also assist you in constructing your own proofs! In this guide, we will first walk you through how the proofs in the textbook are implemented in Lean, which means that we will simply turn the "paper version" of a proof into its "Lean code version". However, once we are familiar enough with Lean, I will show you how the Lean Prover can do things that are impossible to do on paper. For example, a statement that requires a page-long proof, where just finding the supporting theorems takes you hours, can be handled by a single command in Lean!! I'm sure you will never regret learning this skill in your academic and professional life :)

Get ready? Then let's start!

1.1. Elementary properties of the integers

Review:
In Lean:

Open the file for this chapter (c1.lean). In VS Code, it's easy to evaluate the truth value of a simple mathematical statement by using the #eval command. Move your cursor over the #eval command, and in the pop-up window, you can see how Lean evaluates this statement.

--1.1.01
#eval 1 + 1 = 2
#eval 1 + 1 = 3
#eval 2^10 < 2000
#eval 500 * 20 + 1356^2 - 63794 ≥ 10000

Most of the time, we use Lean to construct formal proofs instead of just checking whether 1 + 1 = 2. In Lean 4 (and for all the exercise problems in this guide), a proof typically has the following structure:

--1.1.02
example {n : ℝ} (h1 : n = 1) : n + 1 = 2 := by
  rw [h1]
  norm_num

Yes, this is a complete mathematical proof in Lean. In this case, it has three lines. By looking at the first line of code, can you guess what it's trying to prove?

Here is the interpretation: Let n be a real number. If n = 1, then n + 1 = 2. Is your guess correct? The first line of a Lean proof always indicates the statement we are trying to prove, so let's go through it first.

First Line: example {n : ℝ} (h1 : n = 1) : n + 1 = 2 := by

example means this problem is just an example. Nothing fancy here.

{n : ℝ} means "Let n be a real number". Many mathematical theorems make use of variables, which we need to put at the very beginning. You can have as many variables as you want, like {a b c : ℤ} simply means "Suppose a, b, and c are integers".

(h1 : n = 1) sets up the hypothesis. It's the thing we assume to be true before making our statement. Here, "h1" is just the name we give to the hypothesis, and "n = 1" is the thing we assume: if n = 1. Again, you can have as many hypotheses as you want. Just make sure all the variables you use are already declared in the previous part.

The variable declarations and hypotheses, together with the axioms we already know, are called the assumptions. After all the assumptions comes the colon :, which acts as a boundary line between the assumptions we make and the goal we are trying to solve. There are, of course, some theorems that have no explicit assumptions in the first place, in which case you simply put a colon right after "example": example :

n + 1 = 2 states the goal of the proof. In this case, our goal is to prove that "n + 1 = 2".

:= by just literally means "we will prove it by", which marks both the end of the statement and the start of the proof.

Whoa! That's all the information in the first line of code. In VS Code, if you click the end of the first line, in the "Lean infoview" window, you should see our hypotheses (before ⊢) and goal (after ⊢) listed in order. This window shows the current proof state of this example. If you click the end of other lines, you can see that the proof state changes accordingly, which shows the progress that line of code makes to the proof. At the last line it shows "no goals", which means that we have completed our proof. You can see why it's called the "interactive theorem prover".

Alright! Look back at the first line again: can you understand everything now? If so, let's dive into the exciting part: prove it!

Second Line: rw [h1]
Third Line: norm_num

Before looking at the code, think about how you might prove this theorem: Let n be a real number. If n = 1, then n + 1 = 2.

Well, you might be thinking: do I really need to prove this thing? Even an elementary school student can figure this out in their head! But here is the thing: in proof-writing (and also in programming), we need to demonstrate the logical order exactly. We need to break the entire process into single steps and put them in sequence. This is especially important when we are using a theorem prover, because the things we find obvious might make no sense to the computer at all! We have to show exactly how we solve the problem, step by step.

So let's slow down a little bit and think: what is the logical order of our reasoning? Well, we know n = 1 from the assumption, so we can rewrite n as 1. And if we normalize 1 + 1, we get 2, which proves the theorem. These are exactly what the rw and norm_num tactics do.

rw [h1] means to rewrite the goal based on h1. In this case the argument we give to rw is "h1", which is "n = 1", so it will replace all the "n"s in the goal with "1"s. Therefore, if you click the end of the line, in Lean Infoview window, you can see that the goal changes from "n + 1 = 2" to "1 + 1 = 2". Be careful about the syntax: you always need to put arguments inside the square brackets.

OK, so how to prove "1 + 1 = 2"? We know that it's based on the axioms of algebra. As in the examples above, Lean is able to evaluate the results of calculations, so whenever we see that our goal only involves simple numerical expressions, we can let Lean normalize them and prove the goal for us.

norm_num closes the goal with only numerical expressions. In this case, the goal is "1 + 1 = 2", so norm_num closes the goal immediately. Now we have no goals left in the current proof state. We completed our proof!
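
By the way, a statement does not need any hypotheses at all. Here is a hedged little sketch of mine (not part of c1.lean) where the goal is purely numerical and norm_num closes it on its own:

example : 12 * 12 = 144 := by
  norm_num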

A few more examples:

--1.1.03
example {a b : ℤ} (h1 : a = 2) (h2 : b = 5) : a + b = 7 := by
  rw [h1, h2]
  norm_num

Don't look at the next part! Try to figure out what statement it's trying to prove by yourself! You can do it :)

Now I assume that you can understand this example. Well, our hypotheses are "a = 2" and "b = 5", and our goal is "a + b = 7". The way to prove it is basically the same as in the last example: we use rw to replace the variables with their corresponding numbers based on the hypotheses, and use norm_num to do the algebra with only numbers. In this case, we need to rewrite two variables. You can use the rw command twice, but you can also put the hypotheses you use for rewriting in the same command and separate them with commas, as we did in this example: rw [h1, h2].
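
If you prefer, here is a hedged variant of mine (not in c1.lean) that applies rw twice instead of combining the two hypotheses:

example {a b : ℤ} (h1 : a = 2) (h2 : b = 5) : a + b = 7 := by
  rw [h1]
  rw [h2]
  norm_num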

One more example before we move on:

--1.1.04
example {n : ℝ} (h1 : 1 = n) : n + 1 = 2 := by
  rw [← h1]
  norm_num

It's the same as the first example, except that the hypothesis is "1 = n" instead of "n = 1". If we want to rewrite the n in our goal as 1, we need to tell the rw command explicitly that we want to replace n with 1 instead of replacing 1 with n (which it does by default). Therefore, we need to put a "←" before the hypothesis: rw [← h1]. Type "\l" in VS Code to get "←".

Remarks:
  1. What we have been writing is called a "tactic-style" proof. In the examples above, both rw and norm_num are "tactics" (and they are two of the most common tactics we will be using). As we go further, we will need to use more and more tactics to achieve different kinds of goals.
  2. If you've taken CSE121 or some other programming course, your instructor might have emphasized "coding style" a lot. We will not focus on that too much, but it's definitely a good idea to structure your code in a clean way. As in this example, you can see that we indent a block of code that solves a specific goal, and we use only one tactic per line.
  3. You might have noticed that the assumptions can be enclosed by either parentheses () or curly braces {}. There is a subtle difference between these two, which we will talk about in chapter 3. For now, we don't need to worry about the difference.
  4. I just realized that the subtitle of this section is "Elementary properties of the integers", but I didn't mention anything about that big table on page 9 of your textbook at all. Well, we will go over how those properties are implemented in Lean 4 in this chapter, but I need to introduce something else before that.

1.2. Definitions

Review:
  • definition - an agreement between the writer and the reader as to the meaning of a word or phrase
  • absolute value - Let n be an integer. We define the absolute value of n to be the integer |n| given by the multi-part function: |n| = n if n ≥ 0; |n| = -n if n < 0.
  • divisibility - Suppose a and b are integers. We say that a divides b or that b is divisible by a and write a | b if there exists an integer c such that b = ac. If there exists no such integer c, then we say that a does not divide b or that b is not divisible by a and we write a ∤ b.
  • even and odd - An integer a is even if a = 2k for some integer k. An integer a is odd if a = 2k + 1 for some integer k.
  • parity - Two integers that are either both even or both odd are said to have the same parity. If one integer is even and the other is odd, then the two have opposite parity.
  • In Lean:
    Not every mathematical definition is included in Lean 4 by default. You can, of course, define everything by yourself in Lean 4 (which we might cover later), but for now let's just use the definitions that others have written, for the sake of time. We usually need to import the definitions we want into the Lean file. If you have some experience with other programming languages, you should have seen things like this a lot (e.g. "import java.util.*" in Java, "import numpy as np" in Python). But in Lean 4, where should we import things from?

    1.2.0. Mathlib

    Mathlib is a super powerful library for Lean 4 that includes all the definitions, theorems, and tactics we need for this course. Since we will only deal with real numbers (and mostly with integers), we only need to import a small portion of this giant library, namely "Mathlib.Data.Real.Basic". This is what the first line of code means (you should always put the libraries you want to import at the beginning of the file):

    import Mathlib.Data.Real.Basic

    Now we have all the definitions we need!

    In Lean 4, you can use |a| ("shift + \" for |) to represent the absolute value of "a" directly. Below is an example of proving that the absolute value of -7 is 7. Since the goal consists of only numerical expressions, you can use norm_num to prove it.

    --1.2.01
    example : |-7| = 7 := by norm_num

    To show that a divides b in Lean 4, you need to type "a ∣ b" ("\ + |" for "∣" in VS Code). By the way, if you wonder how to type some special symbols in VS Code ("\real" for ℝ, "\nat" for ℕ, etc.), just find some examples with those symbols in the file, hover your cursor over the symbols, and a pop-up window will show you how to type them.

    Below is an example that shows how to prove "3 divides 6":

    --1.2.02
    example : 3 ∣ 6 := by use 2

    Oops! Here comes a new tactic - "use". Let's review the definition of divisibility: Suppose a and b are integers. a | b if there exists an integer c such that b = ac. In this case, 3 divides 6 because there exists an integer c such that 6 = 3 * c. It's obvious that we need to use 2 for c in order to prove it, which is what the tactic use does.

    The biggest takeaway here is that we can use the use tactic to prove anything whose definition includes a phrase like "there exists" or "for some", since we give a specific number to show that there really exists a number satisfying the definition. You might be confused about what I actually mean here. Don't worry! It's a concept from the next chapter: quantifiers. We will talk more about it later.

    In Lean 4, you can directly express "a is an odd number" as Odd a, or "b is an even number" as Even b. Very simple. Below is a proof of the statement that "20 is an even number":

    --1.2.03
    example : Even 20 := by use 10

    An integer a is even if a = 2k for some integer k. Again, this definition includes the phrase "for some", so we need the use tactic to give the specific integer that satisfies the definition. Here, to prove that 20 is an even number, we need to show that 20 = 2 * k for some integer k. Clearly, k = 10, so we use 10.
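
    Similarly, here is a hedged sketch of mine (not from the chapter's file): since "7 is odd" means 7 = 2k + 1 for some integer k, we can pick k = 3.

    example : Odd 7 := by use 3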

    1.3. Elementary properties of the integers (again)

    Now it's time to go back to the table on page 9 of the textbook: Elementary Properties of the Integers (EPIs).

    As we have imported a lot of theorems from Mathlib, let's see which Lean 4 command each theorem corresponds to (you don't need to understand everything for now):

    Suppose a, b, c, and d are integers.

    1. Closure: a + b and ab are integers.
      In Lean: Good news: You don't have to specify this property in Lean.
    2. Substitution of Equals: If a = b, then a + c = b + c and ac = bc.
      In Lean: congr
      * (This is a tactic. Don't use exact for this one.)
    3. Commutativity: a + b = b + a and ab = ba.
      In Lean: add_comm a b for addition, mul_comm a b for multiplication.
    4. Associativity: (a + b) + c = a + (b + c) and (ab)c = a(bc).
      In Lean: add_assoc a b c for addition, mul_assoc a b c for multiplication.
    5. The Distributive Law: a(b + c) = ab + ac
      In Lean: mul_add a b c
    6. Identities: a + 0 = 0 + a = a and a · 1 = 1 · a = a.
      (0 is called the additive identity).
      (1 is called the multiplicative identity).
      In Lean:
      a + 0 = a: add_zero a
      0 + a = a: zero_add a
      a · 1 = a: mul_one a
      1 · a = a: one_mul a
    7. Additive Inverses: There exists an integer −a such that a + (−a) = (−a) + a = 0.
      In Lean:
      a + (-a) = 0: add_right_neg a
      (-a) + a = 0: add_left_neg a
    8. Trichotomy: Exactly one of the following is true: a > 0, −a > 0, or a = 0.
      In Lean: lt_trichotomy a 0
    9. The Well-Ordering Principle: Every non-empty set of positive integers contains a smallest element.
      In Lean: we will discuss this property when we work on "cardinality" (chapter 10).
    10. a · 0 = 0
      In Lean: mul_zero a
    11. If a + c = b + c, then a = b.
      In Lean: add_right_cancel h, with (h : a + c = b + c)
    12. −a = (−1) · a
      In Lean: neg_eq_neg_one_mul a
    13. (−a) · b = −(ab)
      In Lean: neg_mul a b
    14. (−a) · (−b) = ab
      In Lean: neg_mul_neg a b
    15. If ab = 0, then a = 0 or b = 0.
      In Lean: mul_eq_zero.1 h, with (h : a * b = 0)
    16. If a ≤ b and b ≤ a, then a = b.
      In Lean: le_antisymm h1 h2, with (h1 : a ≤ b) (h2 : b ≤ a)
    17. If a < b and b < c, then a < c.
      In Lean: lt_trans h1 h2, with (h1 : a < b) (h2 : b < c)
      If a ≤ b and b ≤ c, then a ≤ c.
      In Lean: le_trans h1 h2, with (h1 : a ≤ b) (h2 : b ≤ c)
    18. If a < b, then a + c < b + c.
      In Lean: add_lt_add_right h c, with (h : a < b)
      If a ≤ b, then a + c ≤ b + c.
      In Lean: add_le_add_right h c, with (h : a ≤ b)
    19. If a < b and 0 < c, then ac < bc.
      In Lean: mul_lt_mul_of_pos_right h1 h2, with (h1 : a < b) (h2 : 0 < c)
      If a ≤ b and 0 ≤ c, then ac ≤ bc.
      In Lean: mul_le_mul_of_nonneg_right h1 h2, with (h1 : a ≤ b) (h2 : 0 ≤ c)
    20. If a < b and c < 0, then bc < ac.
      In Lean: mul_lt_mul_of_neg_right h1 h2, with (h1 : a < b) (h2 : c < 0)
      If a ≤ b and c ≤ 0, then bc ≤ ac.
      In Lean: mul_le_mul_of_nonpos_right h1 h2, with (h1 : a ≤ b) (h2 : c ≤ 0)
    21. If a < b and c < d, then a + c < b + d.
      In Lean: add_lt_add h1 h2, with (h1 : a < b) (h2 : c < d)
      If a ≤ b and c ≤ d, then a + c ≤ b + d.
      In Lean: add_le_add h1 h2, with (h1 : a ≤ b) (h2 : c ≤ d)
    22. If 0 ≤ a < b and 0 ≤ c < d, then ac < bd.
      In Lean: mul_lt_mul'' h1 h2 h3 h4, with (h1 : a < b) (h2 : c < d) (h3 : 0 ≤ a) (h4 : 0 ≤ c)
      If 0 ≤ a ≤ b and 0 ≤ c ≤ d, then ac ≤ bd.
      In Lean: mul_le_mul h1 h2 h3 h4, with (h1 : a ≤ b) (h2 : c ≤ d) (h3 : 0 ≤ c) (h4 : 0 ≤ b)
    23. If a < b, then −b < −a.
      In Lean: neg_lt_neg h, with (h : a < b)
      If a ≤ b, then −b ≤ −a.
      In Lean: neg_le_neg h, with (h : a ≤ b)
    24. 0 ≤ a^2, where a^2 = a · a.
      In Lean:
      sq_nonneg a for (0 ≤ a^2)
      pow_two a for (a^2 = a * a)
    25. If ab = 1, then either a = b = 1 or a = b = −1.
      In Lean: Int.mul_eq_one_iff_eq_one_or_neg_one.1 h, with (h : a * b = 1)
      *You need to import "Mathlib.Data.Int.Units" in order to use this command.
    Remarks:

    Alright, alright, I know what you wanna say: what the heeeeeeeck are all these things? That's too much!!!!!

    Well, since you will be provided with this table in your homework and exams, you are not required to memorize every command: just refer back to it whenever needed. In the next few chapters, we will show you some useful Lean tactics that can handle many of these properties all at once! At that point, you will see how powerful the Lean Theorem Prover is for assisting your proofs. But for now, let's use the commands from this table just for practice.

    And now you may have a new question: OK, so now I know the commands for all these theorems, but how do I use them? For example, suppose you are asked to prove that a + b = b + a, where a and b are integers. How do you prove it?

    You stare at it for a while and suddenly realize: it's in the EPIs table! So you grab that table and find that the goal of your proof is exactly the third property, commutativity, so you need to use the exact tactic. This is the fourth tactic we learn in this chapter:

    --1.3.01
    example {a b : ℤ} : a + b = b + a := by exact add_comm a b

    That's it! Each time you find your goal to be exactly the same as a theorem in the EPIs table, you use the exact tactic and put the command for that property after exact.

    Here is another example:

    --1.3.02
    example {a b : ℤ} (h : a < b) : a + 5 < b + 5 := by exact add_lt_add_right h 5

    (Hint: it's the 18th property.)

    Since the exact tactic can be used for anything we already know, it can be used not only for the theorems in the EPIs table, but also for a hypothesis that is the same as the goal. As a silly example: suppose a is an integer; prove that if a = 1, then a = 1.

    --1.3.03
    example {a : ℤ} (h : a = 1) : a = 1 := by exact h

    Well, the goal is exactly the same as the hypothesis, so you can use exact here.

    One last thing before we see a more difficult example and end this chapter: exact is not the only tactic we can use for citing properties from the EPIs table. If the property states something about "equality", then we can also rewrite part of the expression to the other side of the equality using your favorite rw tactic. Let's see an example:

    --1.3.04
    example {a b c : ℤ} (h : a + c = b + 0) : a + c = b := by
      rw [add_zero b] at h
      exact h

    Note that you cannot use exact directly, because we are not proving "b + 0 = b" here. We are proving something else but need to use the fact that "b + 0 = b", which is the 6th property, so the best we can do is to rewrite "b + 0" in the hypothesis h to "b". Last time we only used rw to change the goal. If you want to change a hypothesis, you need to add at h after the command, where "h" is the hypothesis you want to change. add_zero b gives the fact that "b + 0 = b", so rw [add_zero b] at h searches for "b + 0" in the hypothesis h and replaces it with "b". Now the hypothesis h is the same as the goal, so we can use exact h to close the goal.
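
    By the way, rw with an EPI theorem also works directly on the goal, not just at a hypothesis. Here is a hedged sketch of mine (not from the textbook's file) that rewrites "a + b + 0" in the goal to "a + b" and then finishes with commutativity:

    example {a b : ℤ} : a + b + 0 = b + a := by
      rw [add_zero (a + b)]
      exact add_comm a b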

    1.3.1 Example

    Perfect! With everything we have learned so far, let's try to prove Example 1.2 from the textbook (page 10): If a, b, and c are integers and c = a + b, then a = c − b.

    --1.3.1.01 (incomplete)
    example {a b c : ℤ} (h : c = a + b) : a = c - b := by
      sorry

    We have set up the problem but haven't started our tactic-style proof yet. However, if you click the end of the second line (after "sorry"), you will see, in the current proof state, that there are "no goals" left. I am sorry to confuse you, but this is what the sorry tactic does: it closes the current goal immediately. Just like cheating in a video game, you didn't prove anything, but you closed the goal. People use this tactic for temporarily incomplete proofs; they will later delete the sorry and finish the proof. So... let's delete it. Below is the complete proof. Note that all the steps are exactly the same as the proof in the textbook.

    Friendly Reminder: you might see a couple of commands that we haven't learned about yet. Feel free to Google Lean 4's documentation (or simply move your cursor over the commands in VS Code) to see how they work. That being said, it might take you a while to understand everything in this proof, and you are not required to do so, as we will cover them in the next few chapters.

    --1.3.1.01 (complete)
    example {a b c : ℤ} (h : c = a + b) : a = c - b := by
      -- Substitution of Equals
      have h1 : c + (-b) = (a + b) + (-b) := by
        congr
      -- Associativity of Addition
      have h2 : c + (-b) = a + (b + (-b)) := by
        rw [add_assoc a b (-b)] at h1
        exact h1
      -- Additive Inverses
      have h3 : c + (-b) = a + 0 := by
        rw [add_right_neg b] at h2
        exact h2
      -- Additive Identity
      have h4 : c + (-b) = a := by
        rw [add_zero a] at h3
        exact h3
      -- Symmetry of equality
      have h5 : a = c + (-b) := by
        symm at h4
        exact h4
      exact h5
    Remarks:

    1. have hn : t adds (hn : t) to the assumptions. But since this assumption "t" is not part of the original assumptions, you need to prove it in order to use it, which is why it is followed by := by. The block of code indented after it is used to solve the subgoal "t".
    2. If you type -- somewhere on a line, everything following it becomes a "comment", which is ignored by the computer. Using have and comments can make your code much more readable.
    3. congr is a tactic, not a theorem. Whenever you have "a = b" as one assumption and "a + c = b + c" as the goal, just type congr directly to close the goal. (This tactic is actually much more powerful than that, but for now let's just use it for this specific property of integers.)
    4. The third to last line of code uses the tactic symm, which switches the two sides of an equation (see the short sketch below).
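
    To get a feel for symm on its own, here is a hedged little sketch of mine (not in the chapter file) where it flips the goal instead of a hypothesis:

    example {a b : ℤ} (h : a = b) : b = a := by
      symm
      exact h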

    You made it! Congratulations on reading through the first chapter. Now that you have a basic idea of how the Lean Theorem Prover works, we can try something much more interesting in the following chapters :D

    Chapter 2: Logic and mathematical language

    Review:

    statement - a sentence that is either true or false

    We can use propositional variables (P, Q, R, ...) to indicate some arbitrary statements, like "P is true", "Q is false", blah blah...

    When talking about a statement, we often want to know all the possibilities of the truth value that statement can have, and we can use a truth table to list those possibilities:

    P
    True
    False

    Well, the truth value of a statement with one variable can only have two possibilities, which is not very interesting. However, as Mathematics is the art of relations, things get more interesting when we study a statement that consists of more than one variable using logical connectives.

    2.1. Negations (¬)

    Review:

    The negation operator negates the truth value of a statement. False becomes true, and true becomes false. Thus:

    P ¬P
    True False
    False True

    In Lean:

    In VS Code, type "\not" to get the operator "¬". (There are of course other ways for doing so. You can always check the ways to type specific symbols by hovering your cursor over them.)

    Try to evaluate the following. See the difference?

    -- 2.1.01
    #eval 10 > 5
    #eval ¬(10 > 5)

    Remarks:

    We can actually keep negating the truth value of a statement:

    P ¬P ¬¬P
    True False True
    False True False

    We can see that the "double negation" restores the original truth value.

    2.2. And (∧) and Or (∨) / Conjunction and Disjunction

    Review:
    • The statement P and Q (P ∧ Q) is true provided P is true and Q is true. Otherwise, P and Q is false.
    • The statement P or Q (P ∨ Q) is true provided P is true, Q is true, or both are true. Otherwise, P or Q is false.
    P Q P ∧ Q
    True True True
    True False False
    False True False
    False False False

    P Q P ∨ Q
    True True True
    True False True
    False True True
    False False False

    The truth tables for "And" operator and "Or" operator get bigger. Because now we have two variables in a statement, we have four possibilities in total, as shown above. You can see the pattern here: Each time we add a variable to our statement, we double the combinations. Therefore, a statement with n propositional variables has 2^n combinations.

    In Lean:

    In VS Code, type "\and" for "∧" and type "\or" for "∨". Try evaluating the following statements.

    -- 2.2.01
    #eval (1 + 1 = 2) ∧ (2 = 0)
    #eval (1 + 1 = 2) ∨ (2 = 0)

    The first statement (1 + 1 = 2) is true, and the second statement (2 = 0) is false. This situation corresponds to the second row of the truth table: P is true while Q is false. Therefore, (P ∧ Q) is false, and (P ∨ Q) is true, according to the truth tables.

    OK. We've discussed a lot about the class materials. Now let's see how to prove things with these logical connectives in Lean 4.

    First situation: the goal includes "and".

    Let's take a look at the truth table of (P ∧ Q). We can see that the only situation for (P ∧ Q) to be true is that P and Q should both be true.

    Therefore, in order to prove that (P ∧ Q) is true, we need to prove that P is true and Q is true, which means we actually have two goals to prove: (P is true) (Q is true). In Lean 4, we need to manually construct these two goals using the constructor tactic.

    -- 2.2.02
    example {P Q : Prop} (hp : P) (hq : Q) : P ∧ Q := by
      constructor
      · exact hp
      · exact hq

    Here, we define P and Q to be propositional variables by typing {P Q : Prop}. And we have two assumptions: (hp : P) is the hypothesis that P is true, and (hq : Q) is the hypothesis that Q is true. Our goal is to prove that P ∧ Q is true.

    We first use the constructor tactic to split our goal into two subgoals: ⊢ P and ⊢ Q. If you click the end of this line and look at the Lean Infoview window in your VS Code editor, you can see that now we have two goals to solve.

    Each of the goals is exactly the same as one of the hypotheses, so as usual, we can use the exact tactic to solve it. Once we close all the goals, we have completed our proof.

    Note that we use a dot · for the lines of code that solve a specific goal. It is also a tactic! If you have multiple goals, the · tactic makes the proof state show only the goal you are focusing on at that time. It not only helps you focus on one subgoal at a time, but also makes the code more structured. Keep in mind that all the lines of code for one goal should be indented by the same number of spaces (they need to be left-aligned).

    Second Situation: the goal includes "or"

    Look at the truth table of (P ∨ Q). You can see that it's much easier to prove that (P ∨ Q) is true: as long as one of the variables is true, the entire statement is true.

    In other words, to prove that (P ∨ Q) is true, we can just show that (P is true). Similarly, we can just show that (Q is true). Either works.

    -- 2.2.03
    example {P Q : Prop} (h : P) : P ∨ Q := by
      left
      exact h
    -- 2.2.04
    example {P Q : Prop} (h : Q) : P ∨ Q := by
      right
      exact h

    The first example shows how we prove (P ∨ Q) by only using the hypothesis that (P is true), while the second example shows how we do that by only using (Q is true). If the goal has the "Or" operator "∨", we use the left tactic to tell Lean that we only want to prove the left side of the goal, so it reduces the goal from (P ∨ Q) to P. The same goes for the right tactic, which reduces the goal from (P ∨ Q) to Q.

    Third situation: the hypothesis includes "and"

    Well, if we already know that (P ∧ Q) is true, then we know that P and Q should both be true. Therefore, (P ∧ Q) is a very strong condition: we can use it not only to prove that (P is true), but also to prove that (Q is true).

    -- 2.2.05
    example {P Q : Prop} (h : P ∧ Q) : P := by
      cases' h with hp hq
      exact hp

    In this example, our hypothesis is P ∧ Q, and our goal is P. To prove it, we need to first split our hypothesis into two new hypotheses, and we do so by using the cases' tactic. cases' h with hp hq means to split the hypothesis h into two new hypotheses named hp and hq. Lean can figure out by itself what the new hypotheses should be. In this case, the original hypothesis is (P ∧ Q), so the new hypotheses are (P) and (Q).
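
    For the other half, here is a hedged sketch of mine (not from the chapter file) that extracts Q from the same kind of hypothesis in exactly the same way:

    example {P Q : Prop} (h : P ∧ Q) : Q := by
      cases' h with hp hq
      exact hq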

    You will see why this tactic is called cases' in the next and last situation.

    Fourth situation: the hypothesis includes "or"

    I should say this is the most important situation you need to know since it turns out to be the key idea of "proof by cases", which we will talk about in the next chapter.

    Think about this question: if you know that (P ∨ Q) is true, can you conclude that (P is true)? Or similarly, can you say that (Q is true)?

    Boom! You can't :)

    If you look at the truth table, you can see that "(P ∨ Q) is true" corresponds to three combinations: (P is true, Q is true), (P is true, Q is false), (P is false, Q is true).

    Therefore, you can't tell whether one of the variables is true just by knowing that (P ∨ Q) is true. However, there is one thing you can do: you can consider two cases. In one case you only know that P is true, and in the other case you only know that Q is true. If both of them lead to the same conclusion, then you can say that (P ∨ Q) also leads to that conclusion, because you've considered all the cases the hypothesis (P ∨ Q) can have.

    I keep emphasizing the word "cases" because you can also use the cases' tactic here (I think that's probably why it's called cases'). When used on the assumption (P ∧ Q), it splits the hypothesis into two new hypotheses, and you can use either of them later. When used on the assumption (P ∨ Q), however, it creates two new goals: although the two goals are the same, the first goal uses the first hypothesis (P) while the second goal uses the second hypothesis (Q). You need to prove both of them in order to complete the proof.

    -- 2.2.06
    example {P Q : Prop} (h : P ∨ Q) : P ∨ Q := by
      cases' h with hp hq
      · left
        exact hp
      · right
        exact hq

    You might find this example to be kind of silly: we can just use exact h to close the goal. And...you are right. This example just gives you a sense of how to prove by cases, which we will spend a lot of time on in chapter 3. (This example includes the tactics we discussed in the first two situations. If you find it hard to understand this example, the first two situations' examples should help)

    2.2.1. Negation of "and" and "or" statements (De Morgan's laws)

    The textbook gives some intuitive examples and then draws the following conclusion:

    • In general, the statement not (P and Q) has the same meaning as the statement (not P) or (not Q).
    • In general, the statement not (P or Q) has the same meaning as the statement (not P) and (not Q).

    But how to prove these results? Well, let's draw the truth tables:

    P Q ¬P ¬Q P ∧ Q ¬(P ∧ Q) ¬P ∨ ¬Q
    True True False False True False False
    True False False True False True True
    False True True False False True True
    False False True True False True True

    The table looks kind of messy, but everything we've worked out is just for the last two columns. And guess what? They are exactly the same! If two statements have the same column in the truth table, then they are logically equivalent (≡). In this case, we can therefore conclude that ¬(P ∧ Q) ≡ ¬P ∨ ¬Q.

    We can work out the same process for the other result:

    P Q ¬P ¬Q P ∨ Q ¬(P ∨ Q) ¬P ∧ ¬Q
    True True False False True False False
    True False False True True False False
    False True True False True False False
    False False True True False True True

    The last two columns are also the same. Therefore, ¬(P ∨ Q) ≡ ¬P ∧ ¬Q.

    Remarks:
    1. These two results are also called "De Morgan's laws". They can be very useful when you set the conditions for "if-else" statements in some programming languages.
    2. We can use the truth table to prove some other simple logical equivalences. For example, in the "2.1. Negations" section, we can see that the columns for "P" and "¬¬P" in the truth table are the same. We can therefore conclude that (P ≡ ¬¬P).
    3. Of course, this is not the only way to express the "logical equivalence". We will discover another way of expressing that in the "if and only if" section.

    2.3. If, then

    Here it is, our protagonist in this chapter! Remember the proofs we did before? Usually, a statement we try to prove consists of hypotheses and a goal. The hypotheses follow "if", and the goal follows "then". If "P" is the hypothesis, and "Q" is the conclusion, we denote "if P, then Q" by (P → Q). Here's the truth table:

    P Q P → Q
    True True True
    True False False
    False True True
    False False True

    You might find the last two rows, where the hypothesis is false but the statement counts as true regardless of the conclusion, to be confusing. Well, that's just how people define it. For example, consider this statement: If a dinosaur ate all the cherry trees on the Quad yesterday, then all the books in Suzzallo Library become sandwiches at lunch time every day. In reality, it sounds ridiculous. But to mathematicians, it's a true statement, because the hypothesis is false: there was no dinosaur on the Quad yesterday! Try evaluating the following statement:

    -- 2.3.01
    #eval 10 < 5 → 3 = 4

    It's true, right? Btw you can type "\r" in VS Code for "→".

    But these vacuous truths can be quite boring. Most of the time, we want to prove (P → Q) to be true by assuming "P is true". Why does this work? Well, if we can assume that P is true and show that Q must be true whenever P is true, then the second row of the truth table for (P → Q) can never be reached. And since we already know that (P → Q) is true whenever P is false, we have covered all the remaining situations, so (P → Q) must be true. Thus, the following two examples are actually proving the same thing:

    -- 2.3.02 (a)
    example {x : ℝ} (h : x = 2) : x^2 = 4 := by
      rw [h]
      norm_num
    -- 2.3.02 (b)
    example {x : ℝ} : x = 2 → x^2 = 4 := by
      intro h
      rw [h]
      norm_num

    However, if you look at the second example, you can see that we added an extra step at the beginning. Since the first step to prove a statement (P → Q) is to assume P to be true, we need to introduce the hypothesis that "P is true", which is what the intro tactic does. intro h just means to introduce an assumption named "h" from the hypothesis of the statement. With this tactic, we change the goal from (P → Q) to a new hypothesis h : P and a new goal Q.

    Bonus 1:

    If you look at the truth table, you can actually get another result: if P is true and (P → Q) is true, then Q must be true! This is a neat perspective, and this result is known as "modus ponens". In other words, if we already know that P is true and (P → Q) is true, we can prove that Q is true. (e.g. if we know that a function is differentiable, and we know that a differentiable function must be continuous, we can conclude that this function is also continuous!) We can apply a conditional statement from the assumptions by using the apply tactic:

    -- 2.3.03
    example {P Q : Prop} (h1 : P) (h2 : P → Q) : Q := by
      apply h2
      exact h1

    Here, a hypothesis is h2 : P → Q, and the goal is Q, so apply h2 changes the goal from Q to P. Then the goal is exactly the same as h1. You can think of apply as rewriting the goal from a conclusion back to its hypothesis, based on a conditional statement you know to be true. This is a big advantage of Lean: you can do proofs backward!
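
    As a side note, you can also go forward instead of backward: since h2 behaves like a function that turns a proof of P into a proof of Q, a hedged one-line alternative of mine (not in the chapter file) is:

    example {P Q : Prop} (h1 : P) (h2 : P → Q) : Q := by
      exact h2 h1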

    Bonus 2:

    We usually want to prove that something is true based on the assumptions. However, sometimes mathematicians want to prove something to be false! (the second row of the truth table). This leads to the key idea of proof by contradiction, which we will cover in the next chapter.

    Negation of (P → Q)

    In the textbook, we know that ¬(P → Q) ≡ P ∧ ¬Q. Let's use the truth table to prove that.

    P Q P → Q ¬P ∨ Q ¬(P → Q) P ∧ ¬Q
    True True True True False False
    True False False False True True
    False True True True False False
    False False True True False False

    Based on the truth table, we can see that the third and fourth columns are the same, so P → Q ≡ ¬P ∨ Q. The fifth and sixth columns are the same, so ¬(P → Q) ≡ P ∧ ¬Q. Actually, you can also use De Morgan's laws and double negation to show that ¬(P → Q) ≡ ¬(¬P ∨ Q) ≡ ¬¬P ∧ ¬Q ≡ P ∧ ¬Q. If you don't like drawing the truth tables, you can also use some logical properties you know to prove the logical equivalences.

    2.3.1. Converse and Contrapositive

    Review:

    Definition 2.1. The converse of P → Q is the statement Q → P. The contrapositive of P → Q is the statement (not Q) → (not P).

    P Q ¬P ¬Q P → Q Q → P ¬Q → ¬P
    True True False False True True True
    True False False True False True False
    False True True False True False True
    False False True True True True True

    Looking at the last three columns, you can see that P → Q is not equivalent to its converse Q → P. However, P → Q is equivalent to its contrapositive ¬Q → ¬P.

    This is an extremely useful fact! It means that proving P → Q is the same as proving ¬Q → ¬P, since these two statements are logically equivalent. Sometimes it's hard to prove P → Q but easy to prove ¬Q → ¬P, so we can change our goal from P → Q to ¬Q → ¬P, which turns out to be a common proof technique: proof by contrapositive. We will talk more about that in the next chapter.

    In Lean:

    If you want to change your goal to its contrapositive, just shout at your computer: "contrapose!" If nothing happens, you will then need to explicitly type the contrapose! tactic.

    -- 2.3.1.01
    example {P Q : Prop} (h : P → Q) : ¬Q → ¬P := by
      contrapose!
      exact h

    2.3.2. if and only if

    Definition 2.2. The statement P if and only if Q, written P ↔ Q, is equivalent to the statement (P → Q) and (Q → P).

    As usual, let's draw the truth table. (Hopefully this is the last truth table in this chapter...)

    P Q P → Q Q → P (P → Q) ∧ (Q → P) [a.k.a. P ↔ Q]
    True True True True True
    True False False True False
    False True True False False
    False False True True True

    You can see that in order for P ↔ Q to be true, P and Q should have the same truth value (they should be both true or both false), which means that P and Q should be logically equivalent. This gives us another way of defining logical equivalence: P ≡ Q if P ↔ Q.

    Also, since P ↔ Q is defined as (P → Q) ∧ (Q → P), if we want to prove the statement P ↔ Q, we should prove two things: (P → Q) and (Q → P). Do you still remember how we prove something with the "∧" operator? Yes, the constructor tactic!

    In Lean:
    -- 2.3.2.01
    example {P Q : Prop} (h1 : P → Q) (h2 : Q → P) : P ↔ Q := by
      constructor
      · exact h1
      · exact h2

    In VS Code, type "\iff" for "↔". "iff" means "if and only if".

    2.4. Quantifiers

    One last thing before we end this chapter.

    Review:
    • Universal Quantifier (∀) - ∀x, P(x) means "For all x, P(x) is true."
    • Existential Quantifier (∃) - ∃x, P(x) means "There exists at least one x such that P(x) is true."

    (In VS Code, type "\all" for "∀" and "\ex" for "∃".)

    Sometimes, we make a statement with the variable restricted in a certain set (which is also called domain of discourse). For example, the statement "Every apple in the world is red" has the domain of discourse "all the apples in the world". We can denote this statement as "∀ apple in the world, it's red." Similarly, if we want to say that "At least one apple in the world is red", we can write "∃ an apple in the world such that it's red."

    When I was learning the concepts of quantifiers, I tried to relate them with the concepts I learned before, and here's how I think about them:

    ∀x, P(x) means that every element x in the domain of discourse should satisfy P(x). Suppose the domain of discourse is the set {a, b, c, d, ...}. To make sure that ∀x, P(x), we need to have P(a) ∧ P(b) ∧ P(c) ∧ P(d) ∧ ... Remember the truth table of conjunction (∧)? The only case where the conjunction of multiple statements is true is when all the statements are true, which is exactly what the universal quantifier is used for.

    Similarly, if you remember the truth table of disjunction (∨), you should know that the only case where a disjunction is false is when all the statements are false. If at least one statement is true, the entire statement is true. This is exactly how the existential quantifier is defined! Suppose our domain of discourse is {a, b, c, d, ...} and our statement is: ∃x, P(x). We only need to have P(a) ∨ P(b) ∨ P(c) ∨ P(d) ∨ ... Even if only one element satisfies the statement, we can say that "∃x, P(x)" is true.

    Now you see that we can always make analogies like these when talking about quantifiers. Therefore, there are also four situations to consider:

    In Lean:

    First situation: the goal includes ∃.

    Some mathematical concepts are defined using quantifiers. We've seen in chapter 1 that the definitions of "divisibility" and "parity" both include existential quantifiers (e.g. the integer x is even if there exists an integer k such that x = k + k). Do you remember how to prove a goal with "there exists" or "for some"? The use tactic :D

    -- 2.4.01 (a)
    example : ∃ n : ℝ, |n| > 0 := by
      use 1
      norm_num
    -- 2.4.01 (b)
    example : ∃ n : ℝ, |n| > 0 := by
      use -1
      norm_num

    This is an example from the textbook: There exists n such that |n| > 0. To prove a goal with "there exists", we only need to give one specific example that satisfies the statement by using the use tactic. Since there are countless numbers that satisfy the statement (in this case, every real number except 0 satisfies it), we can choose any non-zero real number we want. In the examples above, 1 works, and -1 also works. The use tactic replaces the variable with the example we give. Since this reduces the goal to a numerical expression, we can use norm_num to close the goal.

    Second situation: the goal includes ∀.

    If the goal includes ∀, we need to show that every element in the domain of discourse satisfies the statement, which is a very tedious thing to do. If the domain of discourse is the entire set of real numbers, then... well, we must find another way.

    Luckily, we have a trick: to prove that "∀x, P(x)", we can take an arbitrary element x from the domain of discourse. Note that x must be arbitrary, meaning that we cannot have any other assumptions about this element. Then this x can be used to represent any element in the domain of discourse. If we prove that P(x) is true for this arbitrary element, we have therefore shown that P(x) is true for every element. To introduce this arbitrary element, we need our old friend: the intro tactic.

    -- 2.4.02
    example : ∀ n : ℝ, |n| ≥ 0 := by
      intro n
      norm_num

    This is also an example from the textbook. To prove that "∀ n : ℝ, |n| ≥ 0", we first introduce an arbitrary real number n by typing intro n. After that, we have a new variable n, and our goal becomes "⊢ |n| ≥ 0". Well, thinking about the definition of absolute value we learned in the first chapter, it's easy to see that the absolute value of a real number is always greater than or equal to 0. This is a basic fact about the real numbers, and surprise! norm_num also works here. (But don't abuse norm_num! It typically doesn't work with variables. We got lucky here because our goal is such a basic fact about the real numbers.)

    Third situation: the hypothesis includes ∃.

    Consider this statement: If n is an even number, then n + 1 is an odd number. This is a true statement, but how to prove that? Well, let's unfold the definitions of even and odd number, and we can rewrite the statement as the following equivalent statement: If n = 2 * k₁ for some integer k₁, then n + 1 = 2 * k₂ + 1 for some integer k₂. Now it's easy to prove: just let k₂ = k₁.

    More specifically, the hypothesis is "∃ k, n = 2 * k", so we know that there must be an integer k that satisfies n = 2 * k. Even if we don't know the value of k here, we can construct a variable k and a condition that n = 2 * k. And here is the good news: you already know what tactic we need to use. We've discussed that we can relate existential quantifiers to disjunction (∨). Which tactic is used for the situation where the hypothesis includes a disjunction? The cases' tactic! Whenever we want to separate something out of a hypothesis, we can just cases' the hypothesis with the things we want. It's such a useful tactic!

    -- 2.4.03
    example {n : ℤ} (h : Even n) : Odd (n + 1) := by
      cases' h with k newh
      use k
      rw [newh]
      ring

    Since the hypothesis (h : Even n) is definitionally equivalent to (h : ∃ k : ℤ, n = k + k) (which is just another way of saying that n = 2 * k for some integer k), cases' h with k newh means to pull out a new variable k : ℤ and a new hypothesis (newh : n = k + k) from the hypothesis h.

    Now we have the variable k, so what should we do next? The goal Odd (n + 1) is definitionally equivalent to ∃ k : ℤ, n + 1 = 2 * k + 1, and we know how to prove a goal with "there exists"! We can use the variable k we obtained, which turns the goal into n + 1 = 2 * k + 1.

    We are close to our goal! We have the hypothesis that (newh : n = k + k), so we can rewrite n in the goal as k + k. By using rw [newh], the goal becomes k + k + 1 = 2 * k + 1.

    Yeah! Our goal is reduced to high school algebra. However, norm_num will not work here because this expression includes a variable, and we have discussed that norm_num doesn't deal with variables well. If you do this on paper, you need to cite the properties of integers from the EPIs table. But we are proving things in Lean, so... it's time to let you know about one of the most powerful tactics in Lean 4: ring. In this course, whenever your goal only involves some algebra of variables (i.e., an equality of expressions that only contains addition, multiplication, and exponents), you can use the ring tactic to close the goal. It can apply the properties of integers (associativity, commutativity, the distributive law, etc.) all at once!

    Fourth situation: the hypothesis includes ∀.

    Actually, we've seen this situation already. Consider our last example: If n is even, then n + 1 is odd. We actually have a hidden assumption here: n must be an integer. Therefore, the complete statement should be: For any integer n, if n is even, then n + 1 is odd. We often omit the universal quantifier in a conditional statement, since it's easy to infer it from the statement. And in Lean, we often have this hidden universal quantifier already when we declare the variables (e.g. {n : ℤ} is the same as ∀ n : ℤ). Thus, we don't really need to worry about this situation.
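
    Still, if you ever do have an explicit ∀ hypothesis, you can feed it a specific value, just like applying a theorem. Here is a hedged sketch of mine (not from the textbook's file), where P is an arbitrary predicate on the integers:

    example {P : ℤ → Prop} (h : ∀ n : ℤ, P n) : P 3 := by
      exact h 3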

    2.4.1. Negations of quantified statements

    Review:
    • ¬(∀x, P(x)) ≡ ∃x, ¬P(x)
    • ¬(∃x, P(x)) ≡ ∀x, ¬P(x)

    How to negate the statement that "every apple in the world is red"? If you can find at least one apple that is not red, that is sufficient to say that the original statement is false, and its negation, "at least one apple in the world is not red", is true. You can also find some other examples to understand the second property. Finding examples from real life is always a great way to study logic :)

    Now we have accomplished all the prerequisites! Let's use the things we've learned so far to write some real and useful mathematical proofs :D

    Chapter 3: Writing Proofs

    Finally! We are going to write some proofs. In the following two chapters, we will use the tactics we learned in the first two chapters to write the Lean code version of the proofs from the textbook. Another way of saying that is that we will formalize the theorems from the textbook, which sounds pretty cool!

    Calculational Proofs

    Before we dive into those different proof techniques, let's first take a look at the kind of proofs we've dealt with over the past 10 years: calculational proofs. Maybe we can get some inspiration from them.

    In Lean:
    -- 3.0.01 (complicated way) Prove that (x + y) * (x + y) = x * x + (x * y + y * x) + y * y
    example {x y : ℝ} : (x + y) * (x + y) = x * x + (x * y + y * x) + y * y := by
      calc _ = (x + y) * x + (x + y) * y := by exact mul_add (x + y) x y -- 5. The Distributive Law
        _ = x * (x + y) + y * (x + y) := by rw [mul_comm (x + y) x, mul_comm (x + y) y] -- 3. Commutativity
        _ = x * x + x * y + (y * x + y * y) := by rw [mul_add x x y, mul_add y x y] -- 5. The Distributive Law
        _ = x * x + (x * y + (y * x + y * y)) := by rw [add_assoc (x * x) (x * y) (y * x + y * y)] -- 4. Associativity
        _ = x * x + ((x * y + y * x) + y * y) := by rw [← add_assoc (x * y) (y * x) (y * y)] -- 4. Associativity
        _ = x * x + (x * y + y * x) + y * y := by rw [← add_assoc (x * x) (x * y + y * x) (y * y)] -- 4. Associativity

    If you want to prove an equality, you can use the calc keyword and then follow this structure to do the calculation step by step. Note that for each step you need to show the tactics you use to make that progress (after := by). These commands can be found in the first chapter's EPIs table.

    Well, it looks really complicated, right? Here is a simple version:

    -- 3.0.01 (simple way) Prove that (x + y) * (x + y) = x * x + (x * y + y * x) + y * y
    example {x y : ℝ} : (x + y) * (x + y) = x * x + (x * y + y * x) + y * y := by ring

    Yes, just type ring, and the goal will be closed. It applies all the basic properties of integers we use in the calc version implicitly. You don't need to do the algebra by yourself anymore.

    So why did I show you the complicated way? Well, first of all, calc is just a command you'd better know as a Lean user. And most importantly, the calculational proof shows that in order to prove something, it is common to make progress step by step, which therefore leads to the discussion about "direct proofs".

    3.1. Direct Proofs

    Review:

    Theorem 3.1. If a and b are even integers, then a + b is even.

    The textbook gives a clear proof of this theorem. Let's see how we code a "Lean version" of this proof.

    In Lean:
    -- theorem 3.1. If a and b are even integers, then a + b is even.
    theorem c3_1 {a b : ℤ} (h1 : Even a) (h2 : Even b) : Even (a + b) := by
      cases' h1 with k1 h1
      cases' h2 with k2 h2
      use k1 + k2
      rw [h1, h2]
      ring

    If you find it hard to understand the proof, try thinking about the definition of "even" as a logical expression. That is, think of "Even a" as "∃ k1 : ℤ, a = k1 + k1". Which tactic should you use to pull something out of a statement with an existential quantifier? What about the goal?

    Review:

    Theorem 3.2. Suppose a, b, and c are integers. If a|b and b|c, then a|c.

    In Lean:
    -- theorem 3.2. Suppose a, b, and c are integers. If a|b and b|c, then a|c.
    theorem c3_2 {a b c : ℤ} (h1 : a ∣ b) (h2 : b ∣ c) : a ∣ c := by
      cases' h1 with k h1
      cases' h2 with l h2
      use k * l
      rw [h2, h1]
      ring

    That's it! We've successfully formalized two theorems from the textbook. Note that we use the "theorem" command here, instead of the "example" command we used before. A statement declared with the "theorem" command can be used later. Remember the EPIs table? All the commands we gave you are theorems from Mathlib!

    So...How to use a theorem?

    A theorem typically looks like this:

    theorem_name {a b c ... : Type} (h : prop) ... : content := ...

    Seems a little bit abstract. Let's see some real examples. For EPI 3 (commutativity), the theorem is called add_comm in Mathlib. If you find that theorem in Mathlib, or if you move your cursor over the theorem in VS Code, it should look like this:

    add_comm.{u_1} {G : Type u_1} [inst✝ : AddCommSemigroup G] (a b : G) : a + b = b + a

    Well, after the theorem name add_comm, you don't need to worry about anything in curly braces { } and square brackets [ ]. Those are called "implicit arguments", which Lean can infer by itself. You only need to start from the things in parentheses ( ). Here, before the colon :, we see (a b : G). "G" refers to "Group", which you will learn about in MATH 402 if you take it, but we don't need to worry about that for now. Just remember that the set of integers is a group under addition. For this class, anytime you see G, you can just think of it as ℤ. Therefore, given two integers a and b as arguments, the theorem add_comm a b returns everything after the colon :. In this case, add_comm a b gives us a + b = b + a, a statement we can use with the rw or exact tactic.
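
    For instance, here is a hedged sketch of mine (not from the textbook) that uses add_comm a b with rw instead of exact:

    example {a b c : ℤ} (h : b + a = c) : a + b = c := by
      rw [add_comm a b]
      exact h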

    Another example is the theorem we proved above. Move your cursor over the theorem name, and you can see:

    c3_1 {a b : ℤ} (h1 : Even a) (h2 : Even b) : Even (a + b)

    Start from the parentheses: there are two hypotheses enclosed in parentheses. Therefore, we need to give two propositions as arguments to the theorem, where the two propositions both state that an integer is even. In our proof, if we have two hypotheses (h1 : Even a) (h2 : Even b), we can then use this command:

    c3_1 h1 h2

    which returns the thing after the colon, Even (a + b), which is a statement we can use.

    Last important reminder: remember the conditional statements from chapter 2? If we have P → Q, we can call P the "assumption" and Q the "conclusion". Well, we have seen many examples where the assumptions sit in the parentheses before the colon for each theorem. And in fact, (h : P) : Q and P → Q are internally the same in Lean! Therefore, to use a theorem like th{...} : P → Q, we feed it a proof of P, say (hp : P), and th hp gives us a proof of Q, which is exactly how th{...} (h : P) : Q works! If we have a biconditional statement like th{...} : P ↔ Q, the command th.1 stands for the direction P → Q, and th.2 stands for Q → P, so th.1 hp will give us a proof of Q.
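
    As a quick hedged illustration of mine (not from the textbook), here is how .1 works on a biconditional hypothesis:

    example {P Q : Prop} (h : P ↔ Q) (hp : P) : Q := by
      exact h.1 hp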

    In short, if you want to use a theorem, just give it everything specified in the parentheses in order, and then it will return the statement after the colon for you to use. We will do some practice in the next section. You may find the EPIs table from chapter 1 to be helpful!

    3.1.1. Structure your proof in Lean

    Let's continue our discussion about the "step-by-step" proof from the beginning of this chapter. The two examples above are relatively simple and don't require much reasoning. Oftentimes, however, a proof consists of many intermediate steps. We all want life to be easy, so if our goal is (P → Q), we always want to check whether there exists a theorem that is exactly the same as our goal, and then use our lovely exact tactic to close the goal directly. Unfortunately, most of the time we need to show an entire chain of logic in order to prove something. For example, to prove (P → Q), we need to show that (P → R → S → T → U → V → W → ... → Q). If you find this frustrating, then ...... welcome to Mathematics.

    Let's take a simple example to see why this "logic chain" works.

    -- 3.1.01 If P → R, and R → Q, then P → Q.
    example {P Q R : Prop} (h1 : P → R) (h2 : R → Q) : P → Q := by
      intro h
      apply h2
      apply h1
      exact h

    Using the tactics we learned before, we proved that "If P → R, and R → Q, then P → Q." In fact, as you follow this pattern, you can have as many intermediate propositions as you want. As long as the logic chain begins with P and ends with Q, you can always show that (P → Q). Therefore, we've proved that "a chain of implications" does work. And most importantly, we are now able to use Lean to prove something that's not proved in the textbook! That's amazing!!

    Let's see a real example of it. Consider this statement: if a and b are even integers, then 2 ∣ a + b. Déjà vu? It's actually equivalent to Theorem 3.1 - "if a and b are even, then a + b is even" (a + b is even ≡ ∃ k, a + b = 2k ≡ 2 ∣ a + b). In fact, 2 divides any even integer! Therefore, we can first use the result from Theorem 3.1 to show that if a and b are both even, then a + b is even, and then unfold the definitions of evenness and divisibility to prove that 2 ∣ a + b. In other words, to prove (Even a ∧ Even b → 2 ∣ a + b), we can add an intermediate step: (Even a ∧ Even b → Even (a + b) → 2 ∣ a + b).

    So how can we construct the intermediate steps of a proof in Lean? Most of the time, we want to prove (P → Q), but we don't have hypotheses like (h1 : P → R) and (h2 : R → Q) explicitly written in the problem, so we need to create our own hypotheses as intermediate steps by using the have tactic.

    In Lean:
    -- 3.1.04 If a and b are even integers, then 2 ∣ a + b
    example {a b : ℤ} (h1 : Even a) (h2 : Even b) : 2 ∣ a + b := by
      have h3 : Even (a + b) := by
        exact c3_1 h1 h2
      cases' h3 with k h4
      use k
      rw [h4]
      ring

    Remember: our first step is to prove that a + b is an even integer. To add this intermediate step, we use the have tactic, and give this step a name. Since it will become the third hypothesis of this statement, we name it h3 here. After the colon we state the content of this step: Even (a + b). Finally, we type := by to indicate that we are going to prove it. As you hit "enter" + "tab", you will see that in the "current proof state" in your Lean Infoview window, the goal becomes the statement we are proving in this step. This is a subgoal of our whole proof. We use indentation to indicate that we are proving a subgoal, and Lean's current proof state will only show the subgoal to help us focus on it. In this case, since the subgoal is already proved in this Lean file, we can use the theorem directly by the exact tactic. We named this theorem "c3_1" above, which takes two hypotheses (h1 : Even a) (h2 : Even b) and outputs the result Even (a + b). (We've seen how to use a theorem.) In this case, "c3_1" needs to take two arguments that are the hypotheses to show that a and b are even, so we type c3_1 h1 h2, and it will return the output (Even (a + b)). Since it's exactly the same as this subgoal, we use exact tactic. After we complete the proof of a subgoal, we exit this indentation block, and Lean will show us the main goal again. Then you can use this new hypothesis and some tactics to close this main goal as usual.

    3.2. Proof by Cases (a.k.a. proof by exhaustion)

    From the last chapter, we've shown that in order to prove things like ((P ∨ Q) → R), we need to prove (P → R) ∧ (Q → R). You can draw a truth table to show that these two statements are actually logically equivalent, but we have three variables here, so the truth table has 2^3 = 8 rows, and it becomes very tedious to draw. Let's see if we can apply some properties of logic to prove this equivalence.

    Review:
    • Law of Implication: P → Q ≡ ¬P ∨ Q
    • De Morgan's Laws (1): ¬(P ∧ Q) ≡ ¬P ∨ ¬Q
    • De Morgan's Laws (2): ¬(P ∨ Q) ≡ ¬P ∧ ¬Q
    • Distributive Laws (1): P ∧ (Q ∨ R) ≡ (P ∧ Q) ∨ (P ∧ R)
    • Distributive Laws (2): P ∨ (Q ∧ R) ≡ (P ∨ Q) ∧ (P ∨ R)

    We've seen the first three properties in the last chapter. You might be unfamiliar with the last two, but they are not difficult to memorize: they are very similar to the distributive law in arithmetic.

    Applying these properties, we have:

    (P ∨ Q) → R ≡ ¬(P ∨ Q) ∨ R ≡ (¬P ∧ ¬Q) ∨ R ≡ (¬P ∨ R) ∧ (¬Q ∨ R) ≡ (P → R) ∧ (Q → R).

    We've proved the underlying logic of proof by cases!

    Review:

    Theorem 3.3. The product of two consecutive integers is even.

    In Lean:

    The textbook first states the fact that an integer is either even or odd. Well, we can't just do the same thing in Lean: we need to prove it or use the theorem that's proved somewhere else. We will prove its contrapositive later in this chapter, but for now let's use the theorem from Mathlib. You might have already seen that we imported "Mathlib.Data.Int.Parity" at the beginning of the file. It has the following theorem we will use:

    Int.even_or_odd (n : ℤ) : Even n ∨ Odd n

    It states that an integer is even or odd. If we type Int.even_or_odd n, where n is an integer declared earlier, it will output Even n ∨ Odd n. This is all we need!

    -- 3.2.01 (Theorem 3.3.) The product of two consecutive integers is even.
    theorem c3_3 (n : ℤ) : Even (n * (n + 1)) := by
      have h : Even n ∨ Odd n := by
        exact Int.even_or_odd n
      cases' h with heven hodd
      · cases' heven with k heven
        use k * (n + 1)
        rw [heven]
        ring
      · cases' hodd with k hodd
        use n * (k + 1)
        rw [hodd]
        ring
    Review:

    Theorem 3.4. (Elementary Properties of Absolute Value). Suppose a and b are integers. Then:

    • (a) |a| ≥ 0.
    • (b) |−a| = |a|.
    • (c) −|a| ≤ a ≤ |a|.
    • (d) |a|^2 = a^2.
    • (e) |ab| = |a||b|.
    • (f) |a| ≤ |b| if and only if −|b| ≤ a ≤ |b|.
    In Lean:

    If you look at the proofs in the textbook, you should notice that they all assume that an integer is either "greater than or equal to 0" or "less than 0". We will use a theorem from Mathlib to achieve that:

    lt_or_le.{u} {α : Type u} [inst✝ : LinearOrder α] (a b : α) : a < b ∨ b ≤ a

    Again, just look at the things in the parentheses: (a b : α). Before chapter 5, please take my word for it: I will always give you the right theorem, so don't worry about what α means here (just think of "α" as "ℤ"). Therefore, if we let b be 0, then the command lt_or_le a 0 will return a < 0 ∨ a ≥ 0, which is all we need for the following proofs! In chapter 5, I will explain more about what {α : Type u} means here. By that time, you should be able to understand any theorem and later look for theorems by yourself. Let's take our time :)

    To prove these theorems, let me introduce two new powerful tactics for you.

    3.2.01 rcases

    To prove (b), we will need three cases: a < 0 ∨ a = 0 ∨ 0 < a. You might want to use the cases' tactic, but here is an upsetting fact: cases' can only separate a statement into two statements. In this case, it can only split the hypothesis into a < 0 and a = 0 ∨ a > 0, since Lean pulls statements out of logical connectives in order. Frustratingly, you would need to use cases' again to split the second statement into two.

    Is there a way to separate all the cases at once? There is! We can do this recursively with the rcases tactic (the "r" stands for "recursive").

    If the hypothesis is h : P ∨ Q ∨ R, type rcases h with hp | hq | hr to separate the hypothesis into hp : P, hq : Q, and hr : R. If the hypothesis is h : P ∧ Q ∧ R, type rcases h with ⟨hp, hq, hr⟩ to separate the hypothesis into hp : P, hq : Q, and hr : R. In short, | is for disjunctions, and ⟨⟩ is for conjunctions. In VS Code, type "\<" for "⟨" and "\>" for "⟩". This tactic is a recursive version of cases', so it can do everything that cases' can do. Feel free to stick to it.
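    Here is a small illustration of both patterns (our own examples, not from the textbook):

    -- ⟨ ⟩ for conjunctions
    example {P Q R : Prop} (h : P ∧ Q ∧ R) : R := by
      rcases h with ⟨hp, hq, hr⟩
      exact hr

    -- | for disjunctions
    example {P Q R : Prop} (h : P ∨ Q ∨ R) (h1 : P → R) (h2 : Q → R) : R := by
      rcases h with hp | hq | hr
      · exact h1 hp
      · exact h2 hq
      · exact hr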

    3.2.02 linarith

    The proof in the textbook uses the fact that if a < 0, then -a > 0. This is an example of a linear inequality. For any goal that is a linear equality or linear inequality, with enough information provided in the assumptions, you can use the linarith tactic. For example, if the assumptions are (h1 : a < b) and (h2 : b < c) and your goal is a < c, just type linarith; it will close the goal immediately (see the snippet below). Keep in mind that the expressions should be linear, meaning the variables are only combined linearly. It's a powerful tactic, but don't use it blindly everywhere.
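    The example from this paragraph, written out in Lean (our own snippet):

    example {a b c : ℤ} (h1 : a < b) (h2 : b < c) : a < c := by
      linarith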

    Also, here are some lemmas about absolute value and integers for you (I omit some unimportant information). Note that you've seen the fifth one at EPI 8. If you type lt_trichotomy a 0, you will get a < 0 ∨ a = 0 ∨ a > 0.

    abs_of_neg ... (h : a < 0) : |a| = -a
    abs_of_nonneg ... (h : 0 ≤ a) : |a| = a
    abs_of_pos ... (h : 0 < a) : |a| = a
    abs_of_nonpos ... (h : a ≤ 0) : |a| = -a
    lt_trichotomy ... (a b : α) : a < b ∨ a = b ∨ b < a

    -- 3.2.02 |a| ≥ 0.
    theorem c3_4_a {a : ℤ} : |a| ≥ 0 := by
      have h : a < 0 ∨ a ≥ 0 := by
        exact lt_or_le a 0
      cases' h with h1 h2
      · rw [abs_of_neg h1]
        linarith
      · rw [abs_of_nonneg h2]
        exact h2
    -- 3.2.03. |-a| = |a|
    theorem c3_4_b {a : ℤ} : |-a| = |a| := by
      have h : a < 0 ∨ a = 0 ∨ a > 0 := by
        exact lt_trichotomy a 0
      rcases h with h1 | h2 | h3
      · have h : -a > 0 := by
          linarith
        rw [abs_of_pos h, abs_of_neg h1]
      · rw [h2]
        norm_num
      · have h : -a < 0 := by
          linarith
        rw [abs_of_neg h, abs_of_pos h3]
        ring

    The textbook leaves (c) to (f) as exercises to you. However, the proof of next theorem (the Triangle Inequality) needs results from (c) to (f), so I will formalize all of them for you. You are welcome :)

    Though you might just skim the next four proofs in order to continue reading this chapter, since some of them are REALLY long, I highly encourage you to go through these proofs in detail later for practice. All the Mathlib theorems used here are given either in this chapter or in the EPIs table from chapter 1. You can always move your cursor over the theorems in VS Code to check their definitions, since you already know how to read them!

    -- 3.2.04. -|a| ≤ a ≤ |a|
    theorem c3_4_c {a : ℤ} : -|a| ≤ a ∧ a ≤ |a| := by
      have h : a < 0 ∨ a ≥ 0 := by
        exact lt_or_le a 0
      cases' h with h1 h2
      · constructor
        · rw [abs_of_neg h1]
          linarith
        · rw [abs_of_neg h1]
          linarith
      · constructor
        · rw [abs_of_nonneg h2]
          linarith
        · rw [abs_of_nonneg h2]
    -- 3.2.05 |a|² = a²
    theorem c3_4_d {a : ℤ} : |a|^2 = a^2 := by
      have h : a < 0 ∨ a ≥ 0 := by
        exact lt_or_le a 0
      cases' h with h1 h2
      · rw [abs_of_neg h1]
        ring
      · rw [abs_of_nonneg h2]
    -- 3.2.06 |ab| = |a||b|
    theorem c3_4_e {a b : ℤ} : |a * b| = |a| * |b| := by
      have ha : a < 0 ∨ a = 0 ∨ a > 0 := by
        exact lt_trichotomy a 0
      rcases ha with ha1 | ha2 | ha3
      · have hb : b < 0 ∨ b = 0 ∨ b > 0 := by
          exact lt_trichotomy b 0
        rcases hb with hb1 | hb2 | hb3
        · have htemp : b * a > 0 * a := by
            exact mul_lt_mul_of_neg_right hb1 ha1
          have hab : a * b > 0 := by
            linarith
          rw [abs_of_pos hab, abs_of_neg ha1, abs_of_neg hb1]
          ring
        · rw [hb2]
          norm_num
        · have htemp : b * a < 0 * a := by
            exact mul_lt_mul_of_neg_right hb3 ha1
          have hab : a * b < 0 := by
            linarith
          rw [abs_of_neg hab, abs_of_neg ha1, abs_of_pos hb3]
          ring
      · have hb : b < 0 ∨ b ≥ 0 := by
          exact lt_or_le b 0
        cases' hb with hb1 hb2
        · rw [ha2]
          norm_num
        · rw [ha2]
          norm_num
      · have hb : b < 0 ∨ b = 0 ∨ b > 0 := by
          exact lt_trichotomy b 0
        rcases hb with hb1 | hb2 | hb3
        · have htemp : a * b < 0 * b := by
            exact mul_lt_mul_of_neg_right ha3 hb1
          have hab : a * b < 0 := by
            linarith
          rw [abs_of_neg hab, abs_of_pos ha3, abs_of_neg hb1]
          ring
        · rw [hb2]
          norm_num
        · have htemp : 0 * b < a * b := by
            exact mul_lt_mul_of_pos_right ha3 hb3
          have hab : a * b > 0 := by
            linarith
          rw [abs_of_pos hab, abs_of_pos ha3, abs_of_pos hb3]
    -- 3.2.07 |a| ≤ |b| iff -|b| ≤ a ≤ |b|
    theorem c3_4_f {a b : ℤ} : |a| ≤ |b| ↔ -|b| ≤ a ∧ a ≤ |b| := by
      constructor
      · intro h1
        constructor
        · have ha : a < 0 ∨ a ≥ 0 := by
            exact lt_or_le a 0
          cases' ha with ha1 ha2
          · rw [abs_of_neg ha1] at h1
            linarith
          · rw [abs_of_nonneg ha2] at h1
            linarith
        · have ha : a < 0 ∨ a ≥ 0 := by
            exact lt_or_le a 0
          cases' ha with ha1 ha2
          · have htemp : |b| ≥ 0 := by
              exact c3_4_a
            linarith
          · rw [abs_of_nonneg ha2] at h1
            exact h1
      · intro h2
        cases' h2 with h21 h22
        have ha : a < 0 ∨ a ≥ 0 := by
          exact lt_or_le a 0
        cases' ha with ha1 ha2
        · rw [abs_of_neg ha1]
          linarith
        · rw [abs_of_nonneg ha2]
          linarith
    Review:

    Theorem 3.5. (The Triangle Inequality). If a and b are integers, then |a + b| ≤ |a| + |b|.

    In Lean:

    We will follow the flow of the textbook's proof and make use of the theorems we proved above. You might see something weird at the last step (starting from the h4 part). No worries if you can't understand it! I will explain that in the remarks below.

    -- 3.2.08 (Theorem 3.5) Triangle Inequality.
    -- If a and b are integers, then |a + b| ≤ |a| + |b|
    theorem c3_5 {a b : ℤ} : |a + b| ≤ |a| + |b| := by
      have h1a : -|a| ≤ a ∧ a ≤ |a| := by
        exact c3_4_c
      have h1b : -|b| ≤ b ∧ b ≤ |b| := by
        exact c3_4_c
      cases' h1a with h1a_left h1a_right
      cases' h1b with h1b_left h1b_right
      have h2_left : -|a| + (-|b|) ≤ a + b := by
        exact add_le_add h1a_left h1b_left -- can also use linarith
      have h2_right : a + b ≤ |a| + |b| := by
        exact add_le_add h1a_right h1b_right -- can also use linarith
      have h3 : -(|a| + |b|) ≤ a + b ∧ a + b ≤ |a| + |b| := by
        constructor
        · rw [neg_eq_neg_one_mul |a|, neg_eq_neg_one_mul |b|] at h2_left
          rw [← mul_add (-1) |a| |b|, ← neg_eq_neg_one_mul (|a| + |b|)] at h2_left
          exact h2_left
        · exact h2_right
      have h4 : |(|a| + |b|)| = |a| + |b| := by
        have h41 : |a| ≥ 0 := by
          exact c3_4_a
        have h42 : |b| ≥ 0 := by
          exact c3_4_a
        have h43 : |a| + |b| ≥ 0 := by
          exact add_le_add h41 h42
        exact abs_of_nonneg h43
      rw [← h4]
      rw [← h4] at h3
      exact (@c3_4_f (a + b) (|a| + |b|)).2 h3
    Remarks:
    1. Note that the textbook skips something at the last step: you cannot use part (f) directly without first showing that "|a| + |b| = | |a| + |b| |". It might seem obvious to you, but Lean requires you to make this implicit step explicit, which is why we have the additional step h4.
    2. At the last line of code, when we use part (f) (which we named c3_4_f in this Lean file), we add an @ before it. If you remove it, Lean will throw an error. Remember we talked about "implicit arguments" earlier in this chapter? When we define the theorem c3_4_f, we use {a b : ℤ} to indicate the variables in the statement, and since they are in curly braces instead of parentheses, they are considered "implicit arguments", so we don't need to supply the variables when we use this theorem: Lean will try to infer the correct arguments for us.

      In this case, however, our arguments are "too complicated" for Lean to infer. We want the two integer arguments to be "(a + b)" and "(|a| + |b|)". If Lean cannot figure out the correct arguments by itself, then we need to explicitly (manually) give these inputs. The way to do that is to add @ before the theorem we use. In this way, we can pretend that the theorem uses (a b : ℤ) instead of {a b : ℤ}, so we can give the correct inputs as arguments to the theorem.

    3.3. Proof by contrapositive

    Review:
    • Theorem 3.6. Suppose n is an integer. If n² is even, then n is even.
    • Theorem 3.7. Let n be an integer. If n² + 2n < 0, then n < 0.
    In Lean:

    Recall that P → Q ≡ ¬Q → ¬P. To change a goal from P → Q to ¬Q → ¬P in a Lean proof, use the contrapose! tactic.
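    As a quick illustration with generic propositions (our own example), contrapose! turns the goal P → Q into ¬Q → ¬P, which we can then close with the given hypothesis:

    example {P Q : Prop} (h : ¬Q → ¬P) : P → Q := by
      contrapose!
      exact h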

    Also, we need the fact that an integer is not even if and only if it is odd (and vice versa). This is obvious, but Lean is extremely rigorous, so we need to prove this fact... It requires proof by contradiction, which we will cover later, so for now let's just use the existing theorems from Mathlib. They come from the package "Mathlib.Data.Int.Parity", which is why we imported it at the beginning of this Lean file.

    Int.odd_iff_not_even {n : ℤ} : Odd n ↔ ¬Even n
    We will also need this one later:

    Int.even_iff_not_odd {n : ℤ} : Even n ↔ ¬Odd n

    -- 3.3.01 (Theorem 3.6) Suppose n is an integer. If n² is even, then n is even.
    theorem c3_6 {n : ℤ} : Even (n^2) → Even n := by
      contrapose!
      intro h
      rw [← Int.odd_iff_not_even]
      rw [← Int.odd_iff_not_even] at h
      cases' h with k h
      use (2 * k^2 + 2 * k)
      rw [h]
      ring
    -- 3.3.02 (Theorem 3.7) Let n be an integer. If n² + 2n < 0, then n < 0.
    theorem c3_7 {n : ℤ} : n^2 + 2 * n < 0 → n < 0 := by
      contrapose!
      intro h1
      have h2 : 0 ≤ n^2 := by
        exact sq_nonneg n
      linarith

    3.4. Proof by contradiction

    That's it! This is my favorite form of proof :) Let's see why and how it works.

    Let's first introduce two new terms: tautology and contradiction. A tautology is a statement that is always true, while a contradiction is a statement that is always false. For example:

    P       ¬P      P ∨ ¬P
    True    False   True
    False   True    True

    We can see that (P ∨ ¬P) is true in all cases, so (P ∨ ¬P) is a tautology.

    P       ¬P      P ∧ ¬P
    True    False   False
    False   True    False

    We can see that (P ∧ ¬P) is false in all cases, so (P ∧ ¬P) is a contradiction. (These two results are also called "Negation Laws".)

    Now let's see the definition of proof by contradiction from the textbook: In a proof by contradiction, we use the fact that a statement and its negation have opposite truth values. To prove that P is true, suppose instead that not(P) is true and apply logic, definitions, and previous results to arrive at a conclusion you know to be false. Then you may conclude that not(P) must be FALSE and thus P must be TRUE.

    In short: To prove P, we need to show that (¬P → C), where C stands for contradiction. Let's draw a truth table:

    P       C       ¬P → C
    True    False   True
    False   False   False

    P and (¬P → C) are logically equivalent, which shows why proof by contradiction works.

    In Lean:

    If we want to prove by contradiction in Lean 4, we need two steps: the first step is to negate the goal and add it as a new hypothesis (assume the negation of our goal); the second step is to change our goal to False, which is how Lean denotes a contradiction. Luckily, there is a tactic called by_contra that completes these two steps at once.

    -- 3.4.01 If P is true, then P is true
    example {P : Prop} (h : P) : P := by
      by_contra h1
      contradiction

    We could just use exact h to close the goal, but as a demo of proof by contradiction in Lean, let's use by_contra h1 here, which creates a new hypothesis h1 : ¬P (the negation of the original goal ⊢ P) and changes the goal to ⊢ False.

    OK, so now we have two hypotheses: h : P and h1 : ¬P. The contradiction is so obvious! Whenever we see a contradiction in our hypotheses, we can use contradiction tactic to close the goal immediately.

    Remark: contradiction can close any goal with contradictory hypotheses. Remember that if a hypothesis is false, then the statement is vacuously true, right? In other words, you don't have to use the contradiction tactic in every proof by contradiction. See the example below.

    Review:
    • Theorem 3.8. No integer is both even and odd.
    In Lean:
    -- 3.4.02 (Theorem 3.8.) No integer is both even and odd.
    theorem c3_8 (n : ℤ) : ¬(Even n ∧ Odd n) := by
      by_contra h
      cases' h with heven hodd
      cases' heven with k h1
      cases' hodd with l h2
      rw [h1] at h2
      have h3 : 2 * (k - l) = 1 := by
        linarith
      have h4 : (2 : ℤ) ∣ 1 := by
        use (k - l)
        exact h3.symm
      norm_num at h4

    Note that on the fourth-to-last line, we use (2 : ℤ) instead of simply 2. Since Lean treats non-negative number literals as natural numbers by default, the expression would otherwise be a divisibility statement about natural numbers, which is more complicated than divisibility of integers in Lean (we will see more about that in chapter 6). Therefore, we need to explicitly tell Lean that the 2 here is an integer.

    On the second-to-last line, we use symm after the dot. Any time you want to use the symmetry of an equation (turn a proof of a = b into a proof of b = a), just use this.

    The textbook uses a theorem from the EPIs table to show that "2 ∣ 1" is a contradiction. Since it's a numerical expression, we can also use norm_num. If norm_num is applied to a false numerical hypothesis, it reduces the hypothesis to False, which then closes our goal.
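    Here is a minimal illustration of that behaviour (our own example):

    -- the hypothesis h is false, so norm_num at h closes any goal
    example (h : (2 : ℤ) ∣ 1) : (0 : ℤ) = 1 := by
      norm_num at h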

    Review:
    • Theorem 3.9. √2 is irrational.
    In Lean:

    Proving that the square root of 2 is irrational is a little bit complicated if we only want to use the definitions from Mathlib. For simplicity, we will prove an equivalent statement. That is, we take the first few steps of the textbook's proof as our assumptions and transform the statement into:

    • Suppose m, n are integers, where n ≠ 0 and m and n are not both even. Then 2n² ≠ m².
    -- 3.4.03 (Theorem 3.9.) √2 is irrational.
    theorem c3_9 {m n : ℤ} (h1 : n ≠ 0) (h2 : ¬(Even m ∧ Even n)) : 2 * n^2 ≠ m^2 := by
      by_contra h3
      have h4 : Even (m^2) := by
        use n^2
        linarith
      have h5 : Even m := by
        exact c3_6 h4
      have h6 : ∃ k, m = k + k := by
        cases' h5 with k h5
        use k
      cases' h6 with k h6
      have h7 : 2 * n^2 - 4 * k^2 = 0 := by
        rw [h3, h6]
        ring
      have h8 : n^2 = k^2 + k^2 := by
        linarith
      have h9 : Even (n^2) := by
        use k^2
      have h10 : Even n := by
        exact c3_6 h9
      have h11 : Even m ∧ Even n := by
        constructor
        · exact h5
        · exact h10
      contradiction
    Review:
    • Theorem 3.10. Let a be an integer. If a² is odd, then a is odd.
    In Lean:

    The proof by contrapositive is easy; we did a similar proof earlier in this chapter. For the proof by contradiction, the textbook uses the result from Exercise 3.1 (note that there is a mistake in the textbook: it's 3.1(b), not 3.1(a)). So we will prove Exercise 3.1(b) first.

    -- 3.4.04 (Theorem 3.10.) Let a be an integer. If a² is odd, then a is odd.
    
    -- contrapositive
    
    theorem c3_10_a {a : ℤ} : Odd (a^2) → Odd a := by
      contrapose!
      rw [← Int.even_iff_not_odd, ← Int.even_iff_not_odd]
      intro h
      cases' h with k h
      use 2 * k^2
      rw [h]
      ring
    
    -- contradiction
    
    -- Exercise 3.1(b)
    theorem e3_1_b {a b : ℤ} (ha : Even a) (hb : Odd b) : Odd (a + b) := by
      cases' ha with k1 h1
      cases' hb with k2 h2
      use (k1 + k2)
      rw [h1, h2]
      ring
    
    -- Then...
    theorem c3_10_b {a : ℤ} : Odd (a^2) → Odd a := by
      intro h1
      by_contra h2
      rw [← Int.even_iff_not_odd] at h2
      have h3 : Odd (a + a^2) := by
        exact (e3_1_b h2 h1)
      have h4 : a + a^2= a * (a + 1) := by
        ring
      rw [h4] at h3
      have h5 : Even (a * (a + 1)) := by
        exact c3_3 a
      rw [Int.even_iff_not_odd] at h5
      contradiction
    

    3.5. Proof of an if-and-only-if Statement

    Suppose we want to prove (P ↔ Q) in Lean. We know that (P ↔ Q) is the same as (P → Q) ∧ (Q → P), so our goal is actually a conjunction. Therefore, we use the constructor tactic to split our goal into (P → Q) and (Q → P). (You can review chapter 2 if you don't remember this.)

    Review:
    • Theorem 3.11. Let a be an integer. Then a is odd if and only if a² − 1 is even.
    In Lean:
    -- 3.5.01 (Theorem 3.11.) Let a be an integer. Then a is odd if and only if a² −1 is even.
    theorem c3_11 {a : ℤ} : Odd a ↔ Even (a^2 - 1) := by
      constructor
      · intro h1
        cases' h1 with k h1
        use (2 * k^2 + 2 * k)
        rw [h1]
        ring
      · contrapose!
        rw [← Int.even_iff_not_odd, ← Int.odd_iff_not_even]
        intro h2
        cases' h2 with k h2
        use 2 * k^2 - 1
        rw [h2]
        ring

    Whoa, we just proved so many theorems in Lean! You made it! Great job :D

    Chapter 4: Proofs Involving Quantifiers

    4.1. Proofs of for all statements

    Review:

    Remember that in chapter 2, we showed that the universal quantifier (∀) and the hypothesis of the conditional statement (if...then...) are quite similar. Here is an example from the textbook:

    The statement "for all integers x, P(x)" is equivalent to the statement "if x is an integer, then P(x)."

    Therefore, we can use the same method to prove a statement with "for all..." and a statement with "if...": introduce an arbitrary variable and reduce the goal to the remaining part of the statement. Does it ring a bell? Yes! The intro tactic! Whenever you see something like (∀ x..., P(x)) or (if x ..., then P(x)), use intro x to add x as a variable to your assumptions and reduce your goal to P(x). This tactic is sooooooooo useful!

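    As a generic warm-up of this pattern (our own example, not a theorem from the textbook):

    -- For every integer n, n + 0 = n.
    example : ∀ n : ℤ, n + 0 = n := by
      intro n
      ring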

    4.2. Proofs of there exist statements

    Review:

    Since "there exists" is much weaker than "for all", when we prove a statement with "there exists an x such that P(x)" (∃x, P(x)), we only need to use one example of x that satisfies P(x). In Lean 4, this can be simply done by the use tactic. If we are able to construct specific examples to prove the statement, then we are using "constructive proof". Let's implement some examples from textbook in Lean:

    In Lean:
    -- 4.2.01 There exist integers m and n such that 10m + 13n = 3.
    example : ∃ m n : ℤ, 10 * m + 13 * n = 3 := by
      use -1, 1
      norm_num

    To prove that "there exist integers m and n such that 10m + 13n = 3", we can just give speficic examples of "m" and "n" that satisfy the equation. In this case, use -1, 1 means that we use "-1" for "m" and "1" for "n". Then the goal is reduced to an expression with numbers, and we can use norm_num to close the goal.

    -- 4.2.02 For every integer a, there exists an integer b such that a + b = 1.
    example : ∀ a : ℤ, ∃ b : ℤ, a + b = 1 := by
      intro a
      use 1 - a
      ring

    If you see several quantifiers being used together, don't panic, as you will get used to that in your real analysis class... Just use the techniques we learned to reduce the goal in sequence. It's important to follow the order of the quantifiers! ("∀ a : ℤ, ∃ b : ℤ" and "∃ b : ℤ, ∀ a : ℤ" are not the same thing!) In this case, we see ∀ a : ℤ first, so we intro the arbitrary variable a. Then we construct a specific example 1 - a and use it to replace b in the goal. The goal is then reduced to an algebraic expression with a variable, and we can use ring to close the goal.

    4.3. Existence and uniqueness

    To prove "existence" (∃), we only need to give one example. There might be other examples that also satisfy the same proposition, but one example is enough.

    To prove "uniqueness" (∃!), however, we not only need to give an example, but also need to show that that's the only example that can satisfy the proposition. In other words, besides the basic steps of proving existence, we need one more step: we assume that there exist another example that also satisfies the same proposition. But after some deduction, we find that the two examples are actually the same.

    I know it sounds a little bit abstract. Let's follow the textbook and see how to prove uniqueness in Lean.

    In Lean:
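    Here is a small sketch of the pattern (our own example, not the textbook's Theorem 4.2 or 4.3). We spell out what ∃! means, namely "∃ x, P x ∧ ∀ y, P y → y = x": first give a witness with use, then show that any other solution equals it.

    -- There is exactly one integer x such that x + 3 = 5.
    example : ∃ x : ℤ, x + 3 = 5 ∧ ∀ y : ℤ, y + 3 = 5 → y = x := by
      use 2
      constructor
      · norm_num
      · intro y hy
        linarith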

    Chapter 5: Sets

    Welcome to the chapter of set theory!

    Review:

    • set: a collection of elements
    • empty set (∅): a set with no elements (type "\empty" for "∅")

    In Lean:

    In the textbook, we can see that there are many ways of specifying a set.

    In mathematics, if we want to describe a set with certain elements, we can list them explicitly (e.g. {1, 2, 3}). In this chapter, however, we'd like to prove more general facts about sets instead of working with specific examples. Also, it's a little bit complicated to implement reasoning about this kind of set in Lean 4, so we will not focus on it.

    Another common way is to use the set-builder notation. As an example from the textbook, if we want to describe a set with all integers between 5 and 103, we write {x ∈ ℤ : 5 ≤ x ≤ 103}. This notation consists of two parts: a variable, and a statement specifying the properties of the set. In Lean 4, we can describe the same set in this way:

    -- 5.0.01
    #check {x : ℤ | x ≥ 5 ∧ x ≤ 103}

    If you move your cursor over #check, you can see that Lean describes this set as "Set ℤ", which means that it's a set of integers. The #check command shows the type of an expression.

    Wait, what is a type?

    5.0. Type

    If you have some programming experience, you might notice that in a programming language (e.g. Java), different variables can have different kinds of data types (integer, float, character, boolean, etc.). If you don't, no worries! You just need to know that in programming, a term is usually associated with a type. (e.g. "6" has the type "integer", "hello" has the type "string"). The study of such type systems is called type theory, which is what Lean, as a programming language, is based on.

    In Lean, everything has a type, and we can use the #check command to check its type.

    -- 5.0.02
    #check 1
    #check 1.5
    #check (1 : ℝ)

    In Lean 4, the notation we use to show that "a has type b" is "a : b". You can see that "1" has the type "ℕ" (1 : ℕ), meaning that 1 is a natural number. Note that ℕ is a type that means "natural number" in Lean 4, not "the set of natural numbers". This is the key difference between set theory and type theory! It's why we write "1 : ℕ" instead of "1 ∈ ℕ" in Lean.

    #check 1.5 gives us "1.5 : Float", so 1.5 has the type "Float". It seems that numbers have default types in Lean. But what if we don't want the default type of a specific number? For example, we want 1 to be treated as a real number instead of a natural number. Well, it's easy: just explicitly type (1 : ℝ), and Lean will treat 1 as a real number in that expression.

    If we want to declare some variables and assign them specific types, we just need to follow the same format:

    -- 5.0.03
    variable (a b : ℤ)
    variable (P : Prop)

    Here, we declare variables "a" and "b" with type ℤ (integer). We also declare a variable "P" with type "Prop", meaning that it's a propositional variable. Seem familiar? That's right! In every example we've done before, we declared the variables with their types in this way. If we declare variables inside an example, then the variables can only be seen and used in that example. But if we declare a variable in the way above, it can be seen until the end of the file! Different ways of declaring variables lead to different scopes. If you want to restrict the scope of some variables to a certain part of the file, use the section and end keywords, so that the variables declared between these two keywords have scope only in this part. You might've already seen these a lot in the exercise files.

    -- 5.0.04
    section
    variable (a b : ℤ)
    variable (P : Prop)
    end

    What about some types (like ℕ and ℝ) themselves? Are they also variables of certain types?

    -- 5.0.05
    #check ℕ
    #check ℝ

    The answer is "yes". ℕ and ℝ have the type "Type"... I will not go deeper than this. If you are confused, just remember: a type itself can also have a type.

    Alright, I almost forgot that this is a chapter about sets. In Lean, if we want to declare a variable as a set, we need to specify that it's a set of a certain type. For example, "Set ℤ" means a set of integers. In this chapter, we will prove some general things about sets, so we declare a new variable with the type "Type" and then declare another variable to be a set of that type.

    -- 5.0.06
    section
    variable (α : Type)
    variable (A : Set α)
    end

    In this case, we declare a new type "α", and declare the variable A to be a set of elements of type "α". For most of the set theory exercises, we will follow this format, so you will get used to it very soon.

    Now guess what? We've gone through the most difficult part in this chapter! The following content will just be a review of basic set theory stuff from your textbook with their implementations in Lean 4, and you already know all the tactics we will be using! Before that, let's take a break :)

    5.1. Union

    For this chapter, I will first declare some set variables for all the examples. These variables will be implicitly available to all the problems in this chapter, so that I don't need to write the same things over and over again for each example.

    -- Declare variables of sets for all the problems in this chapter
    variable {α : Type} {A B C D : Set α}
    
    Review:
    • x ∈ A ∪ B ↔ x ∈ A ∨ x ∈ B ("\union" for "∪")
    • A ∪ B = B ∪ A
    • A ∪ A = A

    5.2. Intersection

    Review:
    • x ∈ A ∩ B ↔ x ∈ A ∧ x ∈ B ("\inter" for "∩")
    • A ∩ B = B ∩ A
    • A ∩ A = A
    • A and B are disjoint ↔ A ∩ B = ∅

    5.3. Subsets

    Review:
    • A ⊆ B ↔ ∀ x, x ∈ A → x ∈ B
    • ∅ is a subset of all sets, including itself.
    • Theorem 5.1. Let A, B, C and D be sets. Suppose A ⊆ B and C ⊆ D. Then A ∪ C ⊆ B ∪ D.
    In Lean:

    Let's prove the second and third statements in Lean. A key thing to remember: always think of the logical expressions of the concepts' definitions.

    -- 5.3.01
    example : ∅ ⊆ A := by
      intro x
      intro h
      contradiction

    Let A be an arbitrary set; we proved that the empty set is a subset of A. The proof is short, but it might not be that intuitive at first. So think about the definition of our goal: what does "∅ ⊆ A" mean? It's equivalent to "∀ x, x ∈ ∅ → x ∈ A", right? We know how to prove a statement like that! It starts with "∀", so we use intro x to introduce an arbitrary variable x. Now the goal reduces to "x ∈ ∅ → x ∈ A". It's a conditional statement, so we again use the intro tactic to introduce a new hypothesis (h : x ∈ ∅). The goal is then reduced to just "x ∈ A". But since the hypothesis h is always false, as an empty set cannot have any elements, we can use contradiction to close the goal. Not difficult, right? As long as you can think of the set-theoretic notation as logical expressions.

    -- 5.3.02 (Theorem 5.1.) Let A, B, C and D be sets. 
    -- Suppose A ⊆ B and C ⊆ D. Then A ∪ C ⊆ B ∪ D.
    example (h1 : A ⊆ B) (h2 : C ⊆ D) : A ∪ C ⊆ B ∪ D := by
      intro x
      intro h3
      cases' h3 with h3 h4
      · have h5 : x ∈ B := by
          exact h1 h3
        left
        exact h5
      · have h6 : x ∈ D := by
          exact h2 h4
        right
        exact h6
    

    This example is slightly longer, but trust me: you can understand everything in this proof (review chapter 2 if you forgot some of the tactics). Here are some hints :)

    • A ∪ C ⊆ B ∪ D ↔ ∀ x, x ∈ A ∪ C → x ∈ B ∪ D
    • A ⊆ B ↔ ∀ x, x ∈ A → x ∈ B
    • C ⊆ D ↔ ∀ x, x ∈ C → x ∈ D
    • if a hypothesis is of this form: (h1 : P → Q), and another hypothesis is of this form: (h2 : P), then h1 h2 outputs Q. You can think of a conditional statement as a function that takes the proof of the hypothesis as an input and outputs the conclusion. In this case, (h1 : P → Q) is a function that takes (h2 : P) as an input and outputs Q.

    5.4. Set equality

    Review:
    • A = B ↔ A ⊆ B ∧ B ⊆ A.
    In Lean:

    Set equality is essentially a conjunction, so we can use the constructor tactic to separate the main goal into subgoals. However, Lean 4 doesn't handle "A = B ↔ A ⊆ B ∧ B ⊆ A" implicitly, so we need to use a theorem from the package "Mathlib.Data.Set.Basic" called Set.Subset.antisymm_iff. In practice, if the goal is A = B, then rw [Set.Subset.antisymm_iff] changes the goal to A ⊆ B ∧ B ⊆ A.
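    As a quick illustration (our own example, not from the chapter file), here is how one of the facts from section 5.1, A ∪ A = A, can be proved this way:

    example : A ∪ A = A := by
      rw [Set.Subset.antisymm_iff]
      constructor
      · intro x
        intro hx
        cases' hx with h h
        · exact h
        · exact h
      · intro x
        intro hx
        left
        exact hx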

    5.5. Set Difference

    Review:
    • x ∈ A \ B ↔ x ∈ A ∧ x ∉ B ("\\" for "\", "\notin" for "∉")
    • Theorem 5.2. Let A be a set. Then A \ A = ∅.
    • Theorem 5.3. Let A, B, and C be sets. Then A \ (B ∪ C) = (A \ B) ∩ (A \ C).


    In Lean:

    We will exactly follow the textbook's ways of proving these two theorems.

    In the textbook, Theorem 5.2 is proved by contradiction. It uses the fact that if a set is not empty, then there exists an element in the set. We need a theorem from "Mathlib.Data.Set.Basic" called Set.eq_empty_iff_forall_not_mem, which states that s = ∅ ↔ ∀ (x : α), x ∉ s, where s is some set. Using this theorem, we can rewrite the goal from A \ A = ∅ to ∀ (x : α), x ∉ A \ A. After introducing the arbitrary variable x, just follow the process of proof by contradiction.

    -- 5.5.01 (Theorem 5.2.) Let A be a set. Then A \ A = ∅. 
    example : A \ A = ∅ := by
      rw [Set.eq_empty_iff_forall_not_mem]
      intro x
      by_contra h
      cases' h with h1 h2
      contradiction
    

    In the textbook, the proof of Theorem 5.3 uses the fact that "x ∉ A ∪ B ↔ x ∉ A ∧ x ∉ B". This is a consequence of De Morgan's Laws for sets. Since the textbook doesn't require you to prove it (or you've already proved it, depending on your instructor's requirements), I implemented this theorem in this chapter's file for you to use directly. The theorem is called notin_union_iff_notin_and_notin, and it states that x ∉ A ∪ B ↔ x ∉ A ∧ x ∉ B. Though not required, you are welcome to check how it's implemented.

    Another friendly reminder: If there are subgoals in your proof, after each subgoal is closed, the variables you declared in the subgoal will also disappear, so that you can use the same variable names for other subgoals.

    -- 5.5.02 (Theorem 5.3.) Let A, B, and C be sets.
    -- Then A \ (B ∪ C) = (A \ B) ∩ (A \ C).
    example : A \ (B ∪ C) = (A \ B) ∩ (A \ C) := by
      rw [Set.Subset.antisymm_iff]
      constructor
      -- ⊢ A \ (B ∪ C) ⊆ A \ B ∩ (A \ C)
      · intro x
        intro h1
        cases' h1 with h1 h2
        rw [notin_union_iff_notin_and_notin] at h2
        cases' h2 with h2 h3
        have h4 : x ∈ A \ B := by
          constructor
          · exact h1
          · exact h2
        have h5 : x ∈ A \ C := by
          constructor
          · exact h1
          · exact h3
        constructor
        · exact h4
        · exact h5
      -- ⊢ A \ B ∩ (A \ C) ⊆ A \ (B ∪ C)
      · intro x
        intro h1
        cases' h1 with h1 h2
        cases' h1 with h11 h12
        cases' h2 with h21 h22
        constructor
        · exact h11
        · have h3 : x ∉ B ∧ x ∉ C := by
            constructor
            · exact h12
            · exact h22
          rw [notin_union_iff_notin_and_notin]
          exact h3
    

    The proof looks scary, but if you can understand the proof in the textbook, then you can also understand this one. In the second half of this guide we will introduce some other techniques for writing shorter proofs in Lean.

    5.6. Sets of sets

    Review:
    • Sets of sets are also called families.
    • Definition 5.1. The power set of a set A is the set of all subsets of A, written as 𝒫(A). ("\power" for "𝒫")
    • ∀ A, ∅ ∈ 𝒫(A)
    • Theorem 5.4. Let A and B be sets. Then 𝒫(A) ⊆ 𝒫(A ∪ B)
    • Theorem 5.5. Let A and B be sets. Then A ∩ B = ∅ if and only if 𝒫(A) ∩ 𝒫(B) = {∅}.
    In Lean:
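    Here is one way Theorem 5.4 can be proved (this proof is ours, not necessarily the one in the chapter file), relying only on unfolding the definitions of 𝒫 and ⊆ as we have been doing:

    -- (Theorem 5.4.) Let A and B be sets. Then 𝒫(A) ⊆ 𝒫(A ∪ B).
    example : 𝒫 A ⊆ 𝒫 (A ∪ B) := by
      intro S hS      -- hS : S ∈ 𝒫 A, i.e. S ⊆ A
      intro x hx      -- now show S ⊆ A ∪ B, so take x ∈ S
      left
      exact hS hx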

    5.6.1. Unions and Intersections of Families

    Review:

    If F is a family, then:

    • ∪ F = {a : there exists A ∈ F with a ∈ A}
    • ∩ F = {a : for all A ∈ F,a ∈ A}
    • Theorem 5.6. Let A and B be families. Then ⋃ (A ∩ B) ⊆ (⋃ A) ∩ (⋃ B).

    Chapter 6: Proofs by Induction

    This is the content of Chapter 6.

    Chapter 7: Relations

    This is the content of Chapter 7.

    Chapter 8: Congruences

    This is the content of Chapter 8.

    Chapter 9: Functions

    This is the content of Chapter 9.

    Chapter 10: Cardinality

    This is the content of Chapter 10.

    (Optional) Chapter 11: Dependent Type Theory

    This is the content of Chapter 11.