Why single variable analysis is easier

Secret Blogging Seminar 2018-03-12

Jadagul writes:

Got a draft of the course schedule for next year. Looks like I might get to teach real analysis.

I probably need someone to talk me out of trying to do everything in R^n.

A subsequent update indicates that the more standard alternative is teaching one variable analysis.

This is my second go-around teaching rigorous multivariable analysis; the key points are the multivariate chain rule, the inverse and implicit function theorems, Fubini’s theorem, the multivariate change of variables formula, the definition of manifolds, differential forms, Stokes’ theorem, the degree of a differentiable map, and some preview of de Rham cohomology. I wouldn’t say I’m doing a great job, but at least I know why it’s hard to do. I haven’t taught single variable, but I have read over the day-to-day syllabus and problem sets of our most experienced instructor.

Here is the conceptual difference: It is quite doable to start with the axioms of an ordered field and build rigorously to the Fundamental Theorem of Calculus. Doing this gives students a real appreciation for the nontrivial power of mathematical reasoning. I don’t want to say that it is actually impossible to do the same for Stokes’ theorem (according to rumor, Brian Conrad did it), but I never manage — there comes a point where I start waving my hands and saying “Oh yes, and throw in a partition of unity” or “Yes, there is an inverse function theorem for maps between n-folds just like the one for maps between open subsets of \mathbb{R}^n.” I think most students probably benefit from seeing things done carefully for a term first.

Below the fold, a list of specific topics that are much harder in more than one variable. If you have found ways to avoid these difficulties, please chime in in the comments!

• No need for linear algebra. Just defining the multivariate derivative uses the concept of a linear map, and stating the chain rule requires you to compose them. If you want your students to ever be able to check the hypotheses of the inverse function theorem, they have to be able to check whether matrices are invertible. (The relevant formulas are spelled out after this list.)

• One variable Riemann sums are so nice! If u : [a,b] \to [c,d] is an increasing bijection, and a=x_0 < x_1 < \cdots < x_N = b is a partition of [a,b], then c=u(x_0) < u(x_1) < \cdots < u(x_N) = d is a partition of [c,d]; the u-substitution formula for integrals follows immediately. If u : [a_1, b_1] \times [a_2, b_2] \to [c_1, d_1] \times [c_2, d_2] is a smooth bijection, and we have a partition of [a_1, b_1] \times [a_2, b_2] into rectangles, its image in [c_1, d_1] \times [c_2, d_2] is quite hard to describe. This is why the change of variables formula is such a pain. (Both formulas are written out after the list.)

• Regions of integration: In one variable we always integrate over an interval. In many variables, we integrate over complicated regions, so we need language to describe their geometry. If you want to cut a region up into simpler pieces, you need to introduce some rudimentary notion of “measure zero” (made precise after the list), to make sure the boundaries you cut along aren’t too large.

• Improper integrals: In one variable, we always take limits as the bounds of the integral go somewhere. In many variables, there are uncountably many different limiting processes that could define \int_{\mathbb{R}^2} f(x,y) \, dx \, dy. (A classic example of two of them disagreeing appears after the list.)
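
To make the first bullet concrete: a map f : \mathbb{R}^n \to \mathbb{R}^m is differentiable at a if there is a linear map Df(a) : \mathbb{R}^n \to \mathbb{R}^m with f(a+h) = f(a) + Df(a)h + o(|h|), and the chain rule reads D(g \circ f)(a) = Dg(f(a)) \circ Df(a): composition of linear maps or, in coordinates, multiplication of Jacobian matrices. For a map f : \mathbb{R}^n \to \mathbb{R}^n, the hypothesis of the inverse function theorem at a is precisely that the matrix Df(a) is invertible, i.e. that \det Df(a) \neq 0. None of this can even be stated without linear algebra.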
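
For the second bullet, the contrast in formulas: in one variable, for a continuously differentiable increasing bijection u : [a,b] \to [c,d], substitution says \int_c^d f(u) \, du = \int_a^b f(u(x)) \, u'(x) \, dx, and the proof falls straight out of the partition argument above. In two variables, for a smooth bijection \Phi on a region R with everywhere invertible derivative, the formula says \int_{\Phi(R)} f \, du \, dv = \int_R (f \circ \Phi) \, |\det D\Phi| \, dx \, dy, and proving that the factor |\det D\Phi| correctly measures how \Phi distorts the areas of small rectangles is most of the work.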
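
For the third bullet, the rudimentary notion that suffices: a set Z \subset \mathbb{R}^n has measure zero if, for every \epsilon > 0, it can be covered by countably many boxes of total volume less than \epsilon. The payoff is the standard criterion that a bounded region is a legitimate domain of integration exactly when its boundary has measure zero, so cutting along curves and surfaces is harmless, while cutting along fatter sets is not.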
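
And for the last bullet, a classical example of two limiting processes disagreeing, visible already on the unit square: take f(x,y) = (x^2-y^2)/(x^2+y^2)^2. Since \int_0^1 f(x,y) \, dy = 1/(1+x^2) for x > 0, one iterated integral is \int_0^1 \int_0^1 f \, dy \, dx = \int_0^1 dx/(1+x^2) = \pi/4; but f(y,x) = -f(x,y), so the other order gives -\pi/4. Both orders are perfectly natural limiting processes for this improper integral, and they disagree because f is not absolutely integrable near the origin.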

And that's not even getting into manifolds, or multilinear algebra…