Transcript
Marshall Poe:
Welcome to the New Books Network. Hello everybody, this is Marshall Poe, I'm the editor of the New Books Network, and you're listening to an episode in Grinnell College's Authors and Artists podcast. And today I'm very pleased to say that we have Joe Mileti on the show, Joe is an associate professor of mathematics at Grinnell College, which happens to be my alma mater. And we'll be talking to Joe today about math and his new book, Modern Mathematical Logic, which is out from Cambridge University Press pretty soon I think. So Joe, welcome to the show.
Joe Mileti:
Thank you, it's great to be here.
Marshall Poe:
Could you begin the interview by telling us a little bit about yourself?
Joe Mileti:
Sure. I grew up in Cleveland, Ohio, and then I went to college at Carnegie Mellon University, I was planning to be a computer scientist, and I went in as a computer science major. And over the course of my time there I realized that math is awesome, and to really understand computer science you had to understand mathematics, so I slowly got swayed into that. From there I went to graduate school at the University of Illinois Urbana-Champaign, and got a PhD in mathematical logic. My specialty is in computability theory, which is sort of at the intersection of math and computer science. After that I had two postdocs at the University of Chicago for two years, and then at Dartmouth, and from there I came to Grinnell in 2009, and have been here ever since.
Marshall Poe:
And what do you think about Grinnell? You should say, it's a great place and you love teaching there.
Joe Mileti:
Yeah, those are both true. I love the fact that I am a five-minute walk door to door from my home office to my school office.
Marshall Poe:
Yeah, that's a plus.
Joe Mileti:
I like quiet, it's a good place to think and to relax. City life is not for me.
Marshall Poe:
Yeah, it's not for me either actually. How is the math program at Grinnell now? Are you getting enough majors and things like this? Is it going okay?
Joe Mileti:
We simultaneously have an embarrassment of riches, and it's also a little terrifying because there are just so many students. Our enrollments have gone up and up over the last 10 years, which is fantastic, but it becomes harder and harder. And the department has been expanding recently because of these enrollment pressures, so our majors are fantastic, they're going on and doing great things, but there are a lot of them.
Marshall Poe:
Yeah, that's great. Well, as I told you while we were chatting before the interview, I took calculus at Grinnell, and there I stopped, but we can talk a little bit more about that later. You are a mathematician, and you study a particular kind of math, there are many different kinds of math. This is mathematical logic, can you explain to the audience what mathematical logic is?
Joe Mileti:
Sure. I should probably start by explaining what mathematics is, since as you mentioned most people get up to or not even up to calculus, and so they have a very distorted view of what mathematics is about. So in high school, and maybe in early college, you think of mathematics as manipulating formulas, which is unfortunate because mathematics is really about defining things and reasoning about them. So for example, a thing that I've been seeing on the internet lately is a debate about how many holes a straw has. And there's lots of people with strong opinions, and to a mathematician we have to define what a hole is, and it gets really complicated. And mathematicians have developed elaborate schemes, homotopy, homology, and all these other fancy words to understand how to measure how many holes an object has. And then you could ask, "Well, how many holes does a pair of pants have?" Which is a much more subtle question than a straw. And so mathematicians, this is what they do, they define things, and they reason about them, and we reason about them using logic.
And so the objects that you reason about often define the area of mathematics: number theory works with numbers, usually the usual counting numbers, and analysis works with the real numbers. But what mathematical logic does is it tries to study mathematical reasoning itself, using the tools of mathematics. So a mathematical argument has a few basic moves you can do, so how do I argue that something is true of all numbers? This is a very difficult thing to do, we teach it at the 200-level, but eventually you sort of start to understand the basic moves of mathematics, and how you reason about very abstract things. And over time it sort of developed into, can we systematize this? Can we write down in a formal way what a mathematical statement is, and what the permissible rules are? And once you do that, you've turned the subject of mathematics into something you can analyze using mathematics, which sounds very weird and circular, but that fundamentally is the basis of mathematical logic, and where the whole subject starts.
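(As a concrete illustration, added editorially and not part of the conversation: once a language for arithmetic is fixed, a claim like "there are infinitely many primes" becomes a single formal sentence about the natural numbers, built only from quantifiers, arithmetic, and logical connectives. The rendering below is one standard way to write it.)

```latex
% "For every n there is a p larger than n whose only factorizations are trivial,"
% i.e., there are infinitely many primes; the variables range over the natural numbers.
\forall n \,\exists p \,\bigl(\, p > n \;\wedge\; p > 1 \;\wedge\; \forall a\,\forall b\,( a \cdot b = p \rightarrow a = 1 \vee b = 1 )\,\bigr)
```

Once statements have this shape, the permissible rules of inference can themselves be written down exactly, which is the systematization being described here.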
Marshall Poe:
So if I could draw an analogy that might be helpful, or it might not be, I work with words all the time, and words appear as elements, and then they are put in sentences, those sentences have grammars and syntax, and there are rules. As my mother, the English teacher would always say, "There are rules for this." And you can have grammatical and ungrammatical sentences. Similarly, if you put these sentences together, they can follow from one another logically or not, there are rules for that too.
Joe Mileti:
Yes, and that is very much a great analogy. Logic more generally tries to understand the rules of human language, whereas mathematical logic restricts itself down to mathematical language. And at first that might sound more complicated, but in fact it is way easier, because human language is incredibly complicated. You can write down sentences that are very subjective, that don't have true-false values, you can say things that are incredibly vague, or directly circular, and that gets into very complex issues, whereas mathematical language is very simple. We start with a few very primitive things, and have incredibly strict rules about how to build up more complicated things.
Marshall Poe:
Yeah, this is very interesting to me, because as somebody who works in natural language all the time, a dictionary definition has a denotation and a connotation, and this is an artificial distinction. And these things can change mightily depending on how the word is used, by whom, where in a sentence, where in a set of propositions, and so on and so forth. I've been very interested in the use of the word disinformation or misinformation recently, and I'm like, "How is that different than just something that's wrong?" I guess there's intent behind it. And then you go down a rabbit hole, which takes your standard dictionary definition of disinformation or misinformation, and adds a whole bunch of other sentences about what it might or might not be, and all that does is add ambiguity, tremendous amounts of ambiguity, to what this word actually means in any given speech act. But you do away with that in mathematics by having, I guess, primitives.
Joe Mileti:
Right, so we have very primitive things, and we all sort of agree on what things mean. So lots of fields try to define concepts, but I think everyone agrees that those definitions will evolve over time, and then there'll be arguments about them, and then refinements, whereas in mathematics, we define something, and then we're practically done. I mean, sometimes the act of defining something takes a couple of decades, and back and forth in mathematics, but once there's a definition, everyone agrees and we move on. And mathematics is, I think, fairly unique among the academic disciplines, in that once we establish something... There are some counterexamples throughout history. But mostly once we establish something, everyone agrees that it is done, it has been established, there are no longer arguments about it. And so we don't have the same types of revolutions, or other things that happen in other areas due to different ways of viewing the subject, or anything like that. And that all comes from the fact that we have basic primitives, and we build up in a very logical way from there.
Marshall Poe:
Are the primitives limited in number? Is there a fixed set of primitives, or is that set growing?
Joe Mileti:
That gets complicated, there's not an easy answer to that.
Marshall Poe:
Like the principle of identity, one equals one, that's primitive, right?
Joe Mileti:
Well, so the problem is there's not necessarily one foundation for mathematics, there are sort of competing things here. From my perspective, and from most practicing mathematicians' perspectives, we take set theory, which has basically one primitive, and everything else is built up from that in a standard way; other people prefer some other ways. In a lot of ways it doesn't matter, for basic practicing mathematics they're all the same, so there's a complicated discussion there. But the short answer is mathematics can be done with a very tiny set of primitives and built up from there.
Marshall Poe:
And then you work with these primitives, and put them together in various ways, and you put them in sets of statements or propositions, and then you see what follows from those, and you just go on endlessly.
Joe Mileti:
Pretty much, yes.
Marshall Poe:
Right. There's a long philosophical debate as to whether this is an invented thing, or a natural thing, you see what I mean?
Joe Mileti:
Yes.
Marshall Poe:
An artifact of the human mind, or what philosophers call a natural kind, have any thoughts on that? Can we resolve that right now?
Joe Mileti:
I will not resolve that. I think it's somewhere in the middle, and I know that that's unsatisfying. I tend to believe in a platonic world of mathematics, that there is a true world of mathematics out there, but I don't believe it's physically there in any sense, it's not part of our universe, which in some ways doesn't make sense. But I don't believe it's just a product of the human mind. I think if we come across other beings in the universe, we will agree on what follows logically from other things. And the basic rules of mathematics we'll be very much in agreement on; we might have different primitives, we might have different setups, but we'll agree on the fundamental statements, we'll both agree that there are infinitely many prime numbers, and all sorts of other things. So I don't believe it's just human, but I don't really believe it's part of the physical world either, it's complicated.
Marshall Poe:
Yeah. It's funny, I remember a conversation, I've told you in correspondence that I used to hang out with mathematicians a lot, and they told me that there were an infinite number of prime numbers. And I said, "Is anybody looking? Maybe it just ends at a certain point." And they said, "No, there's a proof for that."
Joe Mileti:
Yeah, and that's the amazing thing that is very hard to get across to students when they're first learning how to reason about mathematics: how can you argue that something goes on forever? Or how can you argue that something is impossible? We'll never find a rational number that, when you square it, gives you the number two; the square root of two is irrational, but how do we know? Did you check them all? No.
Marshall Poe:
Right, that was my question.
Joe Mileti:
But we do have methods to do this that are completely logically sound and airtight, and it takes a while to get used to this way of thinking, but that is what mathematicians do all the time.
Marshall Poe:
Right. And these are called proofs, right?
Joe Mileti:
Correct, yeah.
Marshall Poe:
That is the technical term for them, proofs. And there are lots of famous proofs in math. Who was the first person, you may not know this, to construct the proof that there are an infinite number of prime numbers?
Joe Mileti:
So it usually is cited as being Euclid's proof, but it is almost certainly not Euclid. Euclid is just the person who put it all together and wrote the Elements, which is the basis of Euclidean geometry, and some number theory, and so on. But Euclid probably synthesized a lot of things that were happening back then, so who the original person was, I don't know, I don't think anybody knows.
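(For readers curious about the argument itself, here is a compressed sketch of the proof usually attributed to Euclid, added editorially.)

```latex
% Given any finite list of primes p_1, ..., p_k, consider
%   N = p_1 p_2 \cdots p_k + 1.
% Dividing N by any p_i leaves remainder 1, so no p_i divides N; hence any prime
% factor of N is a prime missing from the list. So no finite list contains all primes.
N = p_1 p_2 \cdots p_k + 1, \qquad p_i \nmid N \ \text{ for each } i \le k.
```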
Marshall Poe:
I presume that as a math major or a mathematician you've learned a lot of these famous proofs; this is your textbook, this is how they did it.
Joe Mileti:
Yes, absolutely. Although the way that they did it was complicated, in that we have so much more modern perspectives and notations on these things. So we don't teach Euclidean geometry how Euclid did it, we don't teach number theory how Euclid did it, because it's hard. It's really hard to follow what happened, how they wrote things 2,500 years ago.
Marshall Poe:
My mathematician friends would talk about, and this seemed to be a stock phrase, powerful tools. There's a powerful tool for that, what are these tools?
Joe Mileti:
So we have big theorems out there that allow one to reason about complicated things using simple techniques, but building up those simple techniques required decades or hundreds of years. So for example, I mentioned earlier this straw example, how many holes a straw has, homotopy groups are these objects that were developed over a long time, and there's these big theorems that have been proven about how they relate to each other. And if you take a straw and you glue another object into it, these homotopy groups, how do you compute them from this gluing process, or what have you?
And so that requires a huge amount of time and effort, but once you have it, now you can answer basic questions like, how do I compute this thing now? So for people who've made it through calculus, you learn about derivatives and integrals, and they look completely different, they both look really hard, but then you learn the fundamental theorem of calculus, which says, "Hey, these are related, and you can solve this problem by doing this simple technique." Because we worked really hard and saw some key insight that allows us to connect these ideas.
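(The theorem being referred to can be stated in one line; this is its standard form, added editorially.)

```latex
% Fundamental theorem of calculus: if f is continuous on [a, b] and F is any
% antiderivative of f (that is, F' = f), then the integral of f over [a, b]
% is recovered just from the values of F at the two endpoints.
\int_a^b f(x)\,dx \;=\; F(b) - F(a), \qquad \text{where } F'(x) = f(x) \text{ on } [a, b].
```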
Marshall Poe:
Yeah. Another word that they used when they were talking about the powerful tools, they said they were using the powerful tools to get a result, that's a big word in mathematics, result.
Joe Mileti:
And a result is just another word for a theorem basically, a new fact, and something that is the result of a proof. So as you mentioned earlier, mathematicians, their currency is proofs, that's what they do.
Marshall Poe:
Right, and so these proofs are just building up over time, a vast catalog of proofs. Has anybody ever written them down anywhere? Is there the big phone book of mathematical proofs?
Joe Mileti:
And so this actually ties back into mathematical logic in some way, in that a mathematical proof is usually written in a human language, typically English these days. And so we communicate not in huge equations and symbols, but in paragraphs that convince somebody of a certain statement following the basic logical rules. But we take a lot of shortcuts when we communicate with each other, because we know how to fill in the gaps, and so a proof as written in a paper or a book is not broken down into its most primitive statements, with every rule articulated very, very carefully, because that would just be way too long, and that's not how humans understand things. And so we write proofs in papers, and then over time these get refined into simpler ways of thinking about it, and those get refined into books.
But mathematical logic then gives us the tools and the methods to, if we wanted to, turn those proofs into their most basic forms, so that everything follows from these primitives, and then we can write every statement in this very specific way, and each thing follows from the previous ones. And nobody does that anymore; there was an attempt in the early 1900s by Russell and Whitehead to build up mathematics in this way, and it had some flaws.
And it didn't get very deep, because it's so hard to get very deep into mathematics doing this stuff, but they made it clear that it was possible. These days, there are people doing what is called automated theorem proving, or automated proof checking, where you let a computer do it. So since you can take mathematics and break it down into these very syntactic formal things, and you know the rules, you can verify by a computer: is this legit? This argument that a mathematician wrote in natural language, if we break it down, can we follow it step-by-step and check every last step, that it does really follow? And there's a lot that goes into that, and computers can't completely do mathematics without humans; humans have to guide it and say, "Okay, I think this will follow shortly. Can you find the little steps that will do it?" And this is a somewhat growing area in mathematics right now, so that we can be even more sure than we are, absolutely sure, that this statement follows from these statements.
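(To give a flavor of what machine-checked mathematics looks like, here is a minimal sketch in the Lean proof assistant, added editorially; Lean is one of several such systems and is not named in the conversation.)

```lean
-- Each claim below is verified mechanically, step by step, against Lean's
-- small set of primitive inference rules.

-- `rfl` closes goals that hold by direct computation.
example : 2 + 2 = 4 := rfl

-- A non-computational fact: every natural number is at most its successor.
-- `Nat.le_succ` is a lemma from Lean's standard library.
example (n : Nat) : n ≤ n + 1 := Nat.le_succ n
```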
Marshall Poe:
Yeah, this is interesting. And by analogy again, I was just editing a document using Microsoft Word, and it has a grammar checker built into it. And the grammar checker essentially is the rules, it's back there somewhere, and it's using some sort of algorithm to check every instance, I don't know what else to call it, against the rules. What's surprising is how often it's not quite right.
Joe Mileti:
Exactly, and that goes back to our earlier discussion: natural language is so much harder than mathematical language.
Marshall Poe:
Yeah, that's right. So there are these grammar checkers now for proofs?
Joe Mileti:
Yeah, pretty much, except it's better than grammar checking, in that it's not just checking that this is a well-constructed thing, but that it actually follows by these exact rules from these previous things. So you can write down well-constructed mathematical statements that are false, so it's similar, but there are some differences there.
Marshall Poe:
Yeah. Is some of what you do simplification? Because I know when I edit, I used to be an editor of a journal, and I would edit people, and there's so many words here that don't really matter, and there's a bunch of extraneous stuff, let's get to the point. So is some of what you're talking about simplification, get the simplest possible instance of a true statement or a theorem? And simplification is a big part of math. I mean, you learn to do this in high school.
Joe Mileti:
Yeah. So in some ways I'd say most of mathematical logic these days actually doesn't care about actually doing this; it cares that it's theoretically possible to do this. My papers are not written in this way, in this very formal way, and to me what makes something simple in mathematics is not that it is formal and written out in this way, it's that it is easily and insightfully understood by a human, which is very different. So mathematical logic is a branch of mathematics, and we argue just like other mathematicians do, using informal natural language; we write our proofs informally, we just know that they can be translated into this very formal system, and the fact that that is possible has a lot of very deep consequences.
Marshall Poe:
I was going to say, one of the tools you use to explain something in natural language is the analogy. These are extraordinarily useful in helping people understand something they're unfamiliar with, are analogies used in math?
Joe Mileti:
So it's unfortunate that the way people see mathematics in books or in papers, it seems like there are none; most papers are definition, theorem, proof. There are no, "Hey, here's how to think about this. Hey, here's the analogy." But when mathematicians are working and doing their research, they think in analogies and talk to each other in analogies all the time. That is the only way to make progress, to say, "This feels like this kind of argument, or this feels like it might be true because of this." And we draw terrible pictures, and we don't think in this incredibly detailed formal way when we're trying to figure new things out; we think in analogies and stuff like that. So that is a deep part of mathematics. And the thing that's hard about it, and one reason why we don't do it so much at the undergraduate level, is that you have to understand the language and the formalities before you can test this.
So one thing that's tricky about mathematics compared to a lot of the other sciences, is you might have some intuition, you might have a guess, and then you could check it by doing an experiment. And in mathematics we don't have the natural world to experiment against, so how do we check that our intuitions and our analogies hold up, that there's something there that is not a figment of our imagination? Well, we have to have a method to determine, is this okay? And our method is proof, and that is the analog of experiments in the sciences, or something like that. And so we have to build up that toolset before you can think very intuitively, analogously, and so on, we have to have the system that you can check your intuition through a proof. So it is a huge part of mathematics, but it's not until the very upper undergraduate level that I think we really emphasize that.
Marshall Poe:
Yeah. And we talked about this in the pre-interview, they make you take calculus, and calculus is a killer. It ruined my appreciation of mathematics.
Joe Mileti:
Yeah, calculus has issues, such as it builds on a huge amount of high school mathematics, algebra and trigonometry. The logical structure of calculus is really hard actually, it took 200 years for people to come to grips with how to do all of this logically, and break it down into primitives, and understand how it all built up. Fascinating history, and we teach it to incoming students, and it's very hard because we can't pull out all this logical material, and we build on all this past stuff, which has a lot of... Students come with all sorts of different backgrounds. But it is important in the sciences, and that's why it is tough.
Marshall Poe:
Yeah, it's fundamental for the science, especially engineering and things like this, you can't do without it. And in that way it kind of stands in the same place as statistics, that is both descriptive and correlational, you really have to have it. It's no fun, but you have to have it, because it's extraordinarily useful. I was interested that you used the word intuition, and I'm reminded this follows on our conversation about analogies, I actually sat with these mathematicians quite a bit.
This was at the Institute for Advanced Study in Princeton, and I watched them talking to one another, and it was precisely what you said. They would say, "This feels like this. This feels like this other thing over here." And they really used language like this. And then this disabused me of the idea that all mathematics was this strictly algorithmic process with a right answer; probability is all over it at that level, maybe this is like that. And one of the things they would often do, and this also gets back to your point about having to have a certain amount of knowledge in order to be able to do it, is that at the institute there were these wise older mathematicians, the one that comes to mind is Pierre Deligne, you know who Pierre Deligne is? Yes, these very famous, wise, old... And they would say, "We should go talk to Pierre Deligne, he'll tell us whether our intuitions are whacked out or not, because he knows everything."
Joe Mileti:
Yeah, and this is another weird thing, and it's an unfortunate thing about mathematics in terms of worldwide development of mathematics, is it's actually incredibly hard to learn mathematics by reading the research literature, because it is so streamlined and it's just what you need to know. And so you learn mathematics as a graduate student, you do read papers, and you do read books, but fundamentally a huge amount of how you develop is talking to your advisor. And your advisor will be like, "Yeah, this is how the proof goes, but this is how you think about it, this is how you remember it. These five theorems look very different, but there's this one key idea. If you understand this, you'll figure the rest out."
And again, we don't write that way in research papers, which is really unfortunate. But you have to have that sort of human... Tell me what's really going on here? I think mathematicians have an allergy to writing things that aren't quite exactly true, but in conversation they're fine. They don't want to be caught saying something that's not a hundred percent true, but talking to each other you can fudge things. You can say, this is a phrase mathematicians use a lot, "This is morally true." Which means that's not really true, but this is how you should think about it. And only through those sort of human discussions do you develop this, and understand this, so that makes it very hard for people who aren't near centers of mathematical activity to come up to speed on what's happening in mathematics.
Marshall Poe:
Well, that's another thing that we talked about earlier, and that mathematics is incredibly collaborative. You always work with other people.
Joe Mileti:
Yeah, it didn't used to be that way. I mean, it used to be that way in that people would write letters to each other and so on, but 60 years ago most mathematical papers were written by one author, and these days it is uncommon to have one-author papers.
Marshall Poe:
Yeah, and so they would go to Pierre Deligne, not looking for the answer, but seeing whether they were going in the right direction. It was really to ward off any waste of effort.
Joe Mileti:
Yeah. And I've had a lot of discussions with mathematicians, where they'll be like, "Yeah, I'm skeptical this will work, because of blah blah blah." And they're not always right, but it's an insight, and it helps you sort of trim the tree of possibilities.
Marshall Poe:
Yeah, that's a great analogy, trim the tree of possibilities.
Joe Mileti:
I should focus on this thing, because of that insight.
Marshall Poe:
Yeah, that's exactly right. And you can see how it's really probabilistic judgment. Again, this gets us away from this notion that mathematics is all this strictly algorithmic process, like long division, where you just do this, this, this, this, again, until you get the right answer; it's not like that.
Joe Mileti:
It is not like that at all.
Marshall Poe:
Not at all. But see that's what calculus does to people, it kills them, and it makes them think that it is like that, but it's not like that. Because there's lots of places where you say, "This might be true, but it might not."
Joe Mileti:
Yeah, and that actually comes back into a really interesting part of mathematical logic, and the history here. So there was a long effort in mathematics to sort of systematize mathematical reasoning, to turn it into something symbolic that you could totally understand the rules of. But this became a very big deal at the beginning of the 1900s; there was sort of a crisis in mathematics at the time. People were introducing new methods. Cantor introduced this way of thinking about infinite sets and comparing the sizes of infinite sets, and some things that a lot of mathematicians got very queasy about. And then some mathematicians were using these new methods to prove new theorems, and some mathematicians were like, "Well, that's not allowed. That technique is not okay, this basic axiom is not okay." And there was a battle happening at the beginning of the 20th century as to what do we allow? And a mathematician named David Hilbert was at the forefront of using these new methods, and was pushing them very hard.
And he wanted to win this battle, and the way he proposed to win this battle was not just to convince other mathematicians, but to use the skeptics' arguments against them. And so he wanted to systematize mathematics, write it in this very formal way. And then he wanted to argue, using the methods of his enemies, that if we use these more complicated methods, we're not going to get into trouble. So since you can systematize mathematics and write it very formally, you've turned those things into mathematical objects. And then he wanted to prove, using the mathematics that they were using, that these complicated methods would not lead to a contradiction, would not destroy mathematics. So he wanted to win using their tools, and so this is one of the origins of mathematical logic in the 20th century.
Marshall Poe:
Did he do it?
Joe Mileti:
No. So he wanted to do that, and he wanted to go further: he also wanted to prove, using mathematics, that for every statement, either we could prove it or its negation. So every single thing we will either eventually figure out, or its negation. And he wanted a method, what he called sort of an algorithm in the [inaudible 00:29:01], that would take in a statement and would tell you, does it follow from the axioms? So can we reduce mathematics to...
Marshall Poe:
A grammar checker?
Joe Mileti:
Yeah, an algorithmic process. And he really wanted this, he thought it would be amazing if we could systematize all of mathematical thought. And yeah, it didn't work out.
Marshall Poe:
He was an ambitious guy, this guy Hilbert, he went for the stars.
Joe Mileti:
He sure did. So there's something called the incompleteness theorems of mathematical logic, which dealt a blow to part of what Hilbert wanted to do, actually a big part of what Hilbert wanted to do. That gets a lot of press, and it's a super important result. But the bigger result, I would say, came a couple of years later from Alan Turing, who sort of argued that this last thing was impossible: you can't reduce mathematics to algorithmic thinking. And again, how do you do that? How do you prove such a thing? Well, you have to define what it means for something to be algorithmic or computable, and nobody knew how to do that. People had tried, but how do you define something that I have this sense of? And Alan Turing did it in the 1930s; he designed these things that are now called Turing machines, and once you define them, you can reason about them.
And so he took this third part of Hilbert's program, reduce mathematics to an algorithm that would determine whether a mathematical statement follows or doesn't, and he said, "Nope, that's impossible." He literally proved that that is impossible. And he developed the idea of the modern-day computer, a general purpose computer, a machine that could do everything that we can imagine as computable. He defined it, he proved that there is sort of one maximal type of machine, a machine that can interpret all others, and he did all of these things decades before modern-day computers existed. So mathematical logic had a pivotal role in defining computer science, and then in structuring how computers as we know them today were built. And it all came out of this idea of, I just want to know this very pure mathematics question.
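(The heart of Turing's impossibility argument fits in a few lines; this is an editorial sketch of the standard diagonalization, not a quote from the conversation.)

```latex
% Suppose some machine H could decide halting: H(P, x) = 1 if program P halts
% on input x, and H(P, x) = 0 otherwise. Build a new program D:
%   D(P): if H(P, P) = 1 then loop forever, else halt.
% Run D on its own code. If D(D) halts, H said it doesn't; if D(D) loops forever,
% H said it halts. Either way H is wrong, so no such H can exist.
D(P):\ \text{if } H(P, P) = 1 \text{ then loop forever, else halt.}
```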
Marshall Poe:
Yeah, there are a lot of different directions we could go; this is fascinating to me. Well, first of all, I have a good friend who's a computer scientist, he was a friend of mine in the 90s, and he was a computer science major. And I said, "I'm having a problem with my computer." And he said, "I don't know anything about computers."
Joe Mileti:
There's a standard line here from a famous computer scientist, I think it was Dijkstra, I might be getting it wrong though: "Computer science is about as much about computers as astronomy is about telescopes." So you can be a fantastic astronomer and not know all the intricacies of how a telescope works, or how to fix it if your telescope has problems; it's the exact same thing. There are computer scientists who care very much about the physical manifestations of computers, and how to do that stuff, but for a lot of computer science, the computer is a tool to implement this thing that I understand and can reason about.
Marshall Poe:
Yeah, that's right. And then you mentioned earlier in the conversation, and this is something I've heard before, but I always find it fascinating... I can't even put it in words. Infinite sets of different size. Now see, that just doesn't make any sense to me.
Joe Mileti:
Yeah, and it all comes back to what I said earlier, which is, when it's stated in natural language, it's like, "What do you mean?" And so in order to make sense of that, you have to define what you mean by different size.
Marshall Poe:
Okay, let's do that.
Joe Mileti:
And that takes some time. So the way I describe it in a 200-level course is, let's pretend you have a pile of nickels and a pile of quarters, and you want to know, do I have the same number of nickels and quarters? So one way is to count.
Marshall Poe:
The brute-force method.
Joe Mileti:
Count the number of nickels, count the number of quarters, which might be the first thing you think of. But a better way, probably less error prone, is to take one nickel, take one quarter, pair them off, put them aside. Take one nickel, take one quarter, pair them off, put them aside. And then in this pairing process, do you use up both at the same time? And so that's a way to understand when two finite collections have the same number of elements: can I have a buddy system? So in elementary school you pair off a class with another class, everybody has a buddy, so that's a way to understand when two things have the same size. So we steal that idea and use it to define when two infinite sets have the same size.
Marshall Poe:
By analogy.
Joe Mileti:
Exactly, we take the analogy, and then we turn it into a literal definition. This is what we mean when we say these two sets have the same size. And I want to be really careful, there are other ways to understand the sizes of sets than this way that mathematicians use. This isn't the on-high absolute truth of how to define this; just like the number of holes in an object, there are actually different ways to define this in mathematics. But this is a good way to measure the sizes of infinite sets: can I find a pairing from the objects in this set to the objects in this set, so that everybody has one and only one buddy on the other side? And once you take that as a definition, you can reason about it, and then you can argue, say, that I can't pair off the natural numbers, the numbers 0, 1, 2, 3, 4, and so on, with the real numbers, the decimals as we think of them. So you can prove this, you can reason about it, and so therefore in this sense, in this specific sense, the real numbers are bigger than the natural numbers.
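(The buddy system described here has a standard formal counterpart, stated editorially below: two sets have the same size, or cardinality, exactly when a pairing of the required kind exists.)

```latex
% A and B have the same cardinality when there is a pairing f from A to B that is
% one-to-one (no two elements of A share a buddy) and onto (every element of B gets one).
|A| = |B| \iff \text{there exists } f : A \to B \text{ that is injective and surjective.}
% Cantor's diagonal argument shows no such pairing exists from the natural numbers
% to the real numbers, so in this precise sense |\mathbb{N}| < |\mathbb{R}|.
```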
Marshall Poe:
There you go. Yeah, that's fascinating to me, absolutely fascinating. So one of the things you study is set theory, we've just been talking about, what is set theory?
Joe Mileti:
So set theory is a lot of things. At the most basic level it's exactly this, taking this idea and running with it. It's: we have this cool new tool to understand infinite sets, let's use it, let's see if it's useful, and it is. Cantor didn't stumble across this because he was having fun with infinite sets; he stumbled into this because he was solving a problem in analysis, about the real numbers. He was literally trying to understand something related to Fourier series, which is a very important thing in physics. He was trying to understand something about it.
Marshall Poe:
So this is interdisciplinarity in mathematics? Going from analysis to set theory.
Joe Mileti:
Yeah. But this is how mathematics works all the time, and people think that mathematicians go off and do their own weird things, but we're very motivated by very concrete questions, and then they spin off these whole areas. So at its most basic level, set theory starts there. But from a logician's point of view, set theory is more than that; it's that we can express set theory in this very simple way, using very simple rules, and so then we can understand all of mathematics in this one world. So within it you can code everything in mathematics as a set, and so you can then study the whole of mathematics, the whole universe of mathematics, as one mathematical object in a certain sense. And now this gets really weird. So coming back to mathematical logic a little bit, when you write down some statements, some axioms, some very basic things, you might only have one object in mind. So with the natural numbers, you can add them and you can multiply them, and they satisfy certain properties that you might remember from high school, something called the commutative law, the distributive law, and so on.
Marshall Poe:
Right, these are the grammar rules.
Joe Mileti:
Exactly. So you could say, "Okay, the natural numbers have an addition, a multiplication, and they satisfy these rules." But there might be other worlds that satisfy those rules as well, that you didn't intend, unintended interpretations of those things. So historically this came up in terms of geometry: Euclidean geometry had some basic rules, and basic axioms, and basic maneuvers, and then people proved things from them. And then they thought that there was just the natural world of geometry, but then in the 1800s people developed non-Euclidean geometries, which are ways to interpret these rules that were not intended. But you're stuck, if you're only using these rules, and you can find a non-Euclidean world where something isn't true, then you [inaudible 00:37:07].
So there's that, there's the Euclidean three space and Euclidean N space, but then also the surface of the earth is not Euclidean. So when you write something down, you might think that you're capturing something canonical, there's only one universe in which this all makes sense, but you're probably not, there's probably other ways.
So by analogy, you can try to define what a democracy is, and you could try your best, and you might think you've captured what a democracy is, but then a democracy does something really weird that you didn't anticipate, and it's consistent with all of your rules of what you thought a democracy was. And so that can happen, and then you have to revise how you thought about these things. So set theory these days studies different models of set theory, different interpretations of the basic rules of mathematics that disagree with each other. And so this leads to mathematical statements that in one universe of mathematics are true, and in a different universe of mathematics are false. And so this really gets at the heart of a lot of what's happening in mathematical logic these days, is sort of understanding what axioms you need to prove various theorems.
Marshall Poe:
Yeah. A lot of this would fall... At least going back to my discussions with these mathematicians, it's the relaxation of constraints, and this is very important in mathematics. So you begin by saying, "Okay, geometry is this, it's in three space, and that's it, you're done." Well, what if we relax that constraint and then do something else? Then a whole new world, one that doesn't happen to exist, opens up, but you can definitely still have provable statements in that, I guess I would call it, imaginary world. Does that make sense?
Joe Mileti:
Yeah, and it doesn't even have to be imaginary. So here's the amazing thing. You talked about three space and N-dimensional space, and a lot of people think, well, this is just nonsense, N-dimensional space doesn't exist. And we could argue about whether it exists in the physical world or not, but when people first started reasoning about three-dimensional space in mathematics, they did things that they then wanted to generalize to N-dimensional space. And now N-dimensional space is everywhere; whenever you're crunching data, you have people and properties.
Marshall Poe:
Yeah, definitely, I've done this myself.
Joe Mileti:
A person has an age and a weight; a person has many dimensions to them. And so if you want to reason about this, you have to understand N-dimensional space. And so linear algebra, which is a 200-level course here, a lot of people first think it's not applied at all. But it is the foundation for statistics, it's the foundation for machine learning, for artificial intelligence; it is just working with huge data, which are points in [inaudible 00:39:52] dimensional space. Now, you don't visualize it that way, but the same mathematics applies. And so if we can generalize and prove something from some slightly weaker constraints, it's fun, mathematicians just love doing that, but it ends up being incredibly applicable down the line a lot of the time.
Marshall Poe:
Yeah, it's interesting, because one of the things you read a lot in academic literature is about... What is the word that is used for it? Intersectionality. You've heard this word, and when I saw it I was like, "Oh, this is multiple dimensions, that's exactly what this is." And I remembered back from when I used to do this stuff, that's different dimensions, and they're additive, or multiplicative, whatever they are. But there is a kind of formal logic for it.
Joe Mileti:
Right, and actually getting back to dimension, this gets super interesting as well. So what does dimension mean? Well, we defined it in linear algebra, but you can also define it in more complicated ways. So there's a famous thing, the shoreline of England: is that a one-dimensional thing, given that if you zoom in it gets more and more jagged? And so how do you define fractional dimensions, or decimal dimensions? This took a while, and there is a definition that we can then reason about. So dimension then itself becomes much more general, and you can apply it to more things.
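(One standard way of making "fractional dimension" precise is the box-counting dimension, added editorially; there are several related definitions, and this is just one of them.)

```latex
% Cover the set S with boxes of side length \epsilon and let N(\epsilon) be the number
% of boxes needed. The box-counting dimension measures how fast that count grows as
% the boxes shrink (when the limit exists):
\dim_{\mathrm{box}}(S) \;=\; \lim_{\epsilon \to 0} \frac{\log N(\epsilon)}{\log (1/\epsilon)}
% A smooth curve gives 1, a filled square gives 2, and a jagged coastline-like set
% can land strictly in between.
```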
Marshall Poe:
Right, and I guess the difference between the way that a social scientist or historian like me would look at this is the formality of it. It's that you have pretty strict rules about how things have to be defined, presented, and then tested. We don't really have that ability in the social sciences [inaudible 00:41:32], we should do our best.
Joe Mileti:
Yeah. But again, it's weird, in that most people think that mathematics is this incredibly hard discipline, which it is. But it's also in some ways the easiest discipline, in that we can get rid of a huge amount of that complexity, and totally define what we mean and what the rules are. And that strips away a huge amount of the human experience.
Marshall Poe:
Of ambiguity, period.
Joe Mileti:
Right, but that gives us the ability to do powerful things as well.
Marshall Poe:
Yeah, that's exactly right. And I also really like what you said about the way in which there's this kind of interdisciplinarity in mathematics, that one field will affect another field. I know somebody very well whose work, I believe, is essentially to try to reduce geometry to algebra; that's the whole program.
Joe Mileti:
Well, it's funny, because students now think of geometry, a point on the plane is given by two numbers, the X coordinate and the Y coordinate. But it took a very long time in the history of mathematics for those to be connected, the algebra of assigning numbers to points, and then manipulating it algebraically to express geometric facts, this took a long time for people to connect. Because they feel very different, they didn't to me in high school, because that's how I learned it. But this cross fertilization of mathematics is incredibly important.
Marshall Poe:
Yeah, it's very fascinating, and I wish I hadn't taken calculus; I should've gone on to something else, and I would've really liked this. Because I like this level of abstraction, it's very good, and of course it's very satisfying to be able to say, "Oh, I have a proof for that." I mean, you're kind of done, you've contributed something, and then you can move on. And obviously that's going to raise questions, especially if you release the constraints, which you can always do. And then you're exploring new territory, if you believe it's territory, and not something we're just making up. It goes back to our previous discussion; that's very good. Well, we've taken up a lot of your time, and I find this conversation absolutely fascinating. We have a traditional final question on the New Books Network, and that is: what are you working on right now?
Joe Mileti:
Yeah, so I was chair recently for three years, during the pandemic, and then there was this book. It was my big goal to finish this book, and I just finished it, so it's coming out, and so I haven't been working on other things in the very recent past. But there are bigger projects that I'm involved in that I would like to return to soon. One is linear algebra, the subject that I talked about a little bit ago, which is the study of higher dimensional spaces and other things. At Grinnell we teach it in an interesting way that's pretty different, and there's no book out there that satisfies the level of both abstraction and computation that we try to balance in it. So I've written a mini-book for that, that we use at the college, and I would like to expand that a little bit more. So that's one project, continuing in the book angle.
But more generally my research project, we didn't talk a lot about computability theory other than the basics, but what computability theory and mathematical logic lets you do is understand the complexity of mathematical objects. So I can say this object is complicated, because no computer could understand it in a certain sense. Or once you have a formal language that mathematical logic provides, you could say any description of this object has to be this complicated. So now we have levels of complexity, you can describe this, but you need certain types of complicated manipulations in the syntactic thing that describes it. So mathematical logic gives us a stratification of complexity of mathematical objects, and what I like to study is, how complicated are certain mathematical objects? So we can prove that this object requires this level of sophistication to define, to understand in a certain sense.
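(One concrete instance of the stratification being described is the arithmetical hierarchy, which grades statements by the quantifiers needed to state them over a computable core; this example is added editorially and the hierarchy is not named in the conversation.)

```latex
% \Sigma^0_1: statements of the form "there exists n with R(n)", R computable,
%   e.g. "program P eventually halts".
% \Pi^0_2: statements of the form "for all m there exists n with R(m, n)",
%   e.g. "program P halts on every input".
% Adding quantifier alternations can strictly increase complexity, giving levels
% against which objects and theorems can be calibrated.
\Sigma^0_1:\ \exists n\, R(n) \qquad\qquad \Pi^0_2:\ \forall m\,\exists n\, R(m, n)
```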
Marshall Poe:
So this would be a complexity index?
Joe Mileti:
Exactly. And so what I like to do is not just have a dictionary of "this mathematical object is this complicated" (that's part of what we do), but much more so to take a theorem in mathematics, something that's been proven. Maybe for every object there's another object that has a certain property; this is a theorem of mathematics. And what people in my line of work like to do is say, "Okay, maybe you have a very simple object, but the only thing that works with it is incredibly complicated, and we can prove that." Using this sort of complexity index, we can say that the relationship between the inputs and the outputs is very complicated, and we can then infer from that that any proof of this theorem must involve certain non-constructive techniques.
Marshall Poe:
Well, that's very helpful. Then you wouldn't have to go to Pierre Deligne, you could just say, "Okay, I have something that has this complexity index, I need these things."
Joe Mileti:
Right. So this goes back to the simplification thing you were talking about earlier. We'd like to take a proof and simplify it as much as we can, but logic gives you the tools to argue that any proof of this must involve these certain manipulations.
Marshall Poe:
Yeah, it can't be simpler than this.
Joe Mileti:
Right, that's a big part of my research program, is classifying mathematical theorems in this way, of saying, "Yeah, this is a theorem, but any proof must involve non-constructive techniques, non-computable techniques, or other things."
Marshall Poe:
So we got to stop soon, but I'm really interested. So is this a continuous variable, or are you going to end up with a typology of proofs?
Joe Mileti:
So this can go on for a long time. So this is changing now, there used to be a belief that there are only a small number of complexity classes of theorems, and that has been challenged very vigorously over the last 20 years. And so now there's a huge zoo, it's sometimes called the reverse math zoo, understanding the complexity of these things. And they happen at different levels, and so it is incredibly rich and complicated.
Marshall Poe:
Yeah, so we don't know. Anyway, this has been fascinating. Joe, thank you very much for being on the show.
Joe Mileti:
Great, thanks for having me. It's been really fun.
Marshall Poe:
Okay, all right. Bye-bye.
Joe Mileti:
Bye.