Mathematics versus other escapes from reality

Of all escapes from reality, mathematics is the most successful ever. It is a fantasy that becomes all the more addictive because it works back to improve the same reality we are trying to evade. All other escapes — sex, drugs, hobbies, whatever — are ephemeral by comparison. The mathematician’s feeling of triumph, as he/she forces the world to obey the laws his/her imagination has created, feeds on its own success. The world is permanently changed by the workings of his/her mind, and the certainty that his/her creations will endure renews his/her confidence as no other pursuit can.

Gian-Carlo Rota, an MIT Mathematician.

Function Algebras — by Walter Rudin

(The following is reproduced from the book “The Way I Remember It” by Walter Rudin. The purpose is just to share the insights of a formidable analyst with the student community.)

When I arrived at MIT in 1950, Banach algebras were one of the hot topics. Gelfand’s 1941 paper “Normierte Ringe” had apparently only reached the USA in the late forties, and circulated on hard-to-read smudged purple ditto copies. As one application of the general theory presented there, it contained a stunningly short proof of Wiener’s lemma: the Fourier series of the reciprocal of a nowhere vanishing function with absolutely convergent Fourier series also converges absolutely. Not only was the proof extremely short, it was one of those that are hard to forget. All one needs to remember is that the absolutely convergent Fourier series form a Banach algebra, and that every multiplicative linear functional on this algebra is evaluation at some point of the unit circle.
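For the record, here is the shape of that argument, sketched in notation chosen here (a summary of the standard Gelfand-theory proof, not a quotation from the paper):

```latex
% Wiener's lemma via Gelfand theory, in outline.
\begin{enumerate}
  \item $A = \{ f(t) = \sum_{n} c_{n} e^{int} : \|f\| = \sum_{n} |c_{n}| < \infty \}$
        is a commutative Banach algebra with unit, under pointwise multiplication.
  \item Every nonzero multiplicative linear functional $h$ on $A$ is evaluation at
        some point of the unit circle: $h(f) = f(t_{0})$.
  \item If $f$ vanishes nowhere, then $h(f) \neq 0$ for every such $h$, so $f$ lies
        in no maximal ideal of $A$; by Gelfand's theory $f$ is therefore invertible
        in $A$, that is, $1/f$ itself has an absolutely convergent Fourier series.
\end{enumerate}
```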

This may have led some to believe that Banach algebras would now solve all our problems. Of course, they could not, but they did provide the right framework for many questions in analysis (as does most of functional analysis) and conversely, abstract questions about Banach algebras often gave rise to interesting problems in “hard analysis”. (Hard analysis is used here as Hardy and Littlewood used it. For example, you do hard analysis when, in order to estimate some integral, you break it into three pieces and apply different inequalities to each.)

One class of Banach algebras that was soon studied in detail was the so-called function algebras, also known as uniform algebras.

To see what these are, let $C(X)$ be the set of all complex-valued continuous functions on a compact Hausdorff space X. A function algebra on X is a subset A of $C(X)$ such that

(i) If f and g are in A, so are $f+g$, $fg$, and $cf$ for every complex number c (this says that A is an algebra).

(ii) A contains the constant functions.

(iii) A separates points on X (that is, if $p \neq q$, both in X, then $f(p) \neq f(q)$ for some f in A), and

(iv) A is closed, relative to the sup-norm topology of $C(X)$, that is, the topology in which convergence means uniform convergence.

A is said to be self-adjoint if the complex conjugate of every f in A is also in A. The most familiar example of a non-self-adjoint function algebra is the disc algebra $A(U)$, which consists of all f in $C(\overline{U})$ that are holomorphic in U. (Here, and later, U is the open unit disc in C, the complex plane, and $\overline{U}$ is its closure.) I had already had an encounter with $A(U)$, a propos maximum modulus algebras.

One type of question that was asked over and over again was: Suppose that a function algebra A on X satisfies … and …; must A then be C(X)? (In fact, 20 years later a whole book, entitled “Characterizations of C(X) among its Subalgebras”, was published by R. B. Burckel.) The Stone-Weierstrass Theorem gives the classical answer: yes, if A is self-adjoint.

There are problems even when X is a compact interval I on the real line. For instance, suppose A is a function algebra on I, and to every maximal ideal M of A corresponds a point p in I such that M is the set of all f in A having $f(p)=0$ (In other words, the only maximal ideals of A are the obvious ones). Is $A=C(I)$? This is still unknown, in 1995.

If $f_{1}, f_{2}, \ldots, f_{n}$ are in $C(I)$, and the n-tuple $(f_{1}, f_{2}, \ldots, f_{n})$ separates points on I, let $A(f_{1}, \ldots , f_{n})$ be the smallest closed subalgebra of $C(I)$ that contains $f_{1}, f_{2}, \ldots, f_{n}$ and 1.

When $f_{1}$ is one-to-one on I, it follows from an old theorem of Walsh (Math. Annalen 96, 1926, 437-450) that $A(f_{1})=C(I)$.

Stone-Weierstrass implies that $A(f_{1}, f_{2}, \ldots, f_{n})=C(I)$ if each $f_{i}$ is real-valued.

In the other direction, John Wermer showed in Annals of Math. 62, 1955, 267-270, that $A(f_{1}, f_{2}, f_{3})$ can be a proper subset of $C(I)$!

Here is how he did this:

Let E be an arc in C, of positive two-dimensional measure, and let $A_{E}$ be the algebra of all continuous functions on the Riemann sphere S (the one-point compactification of C) which are holomorphic in the complement of E. He showed that $g(E)=g(S)$ for every g in $A_{E}$, that $A_{E}$ contains a triple $(g_{1}, g_{2}, g_{3})$ that separates points on S, and that the restriction of $A_{E}$ to E is closed in $C(E)$. Pick a homeomorphism $\phi$ of I onto E and define $f_{i}=g_{i} \circ \phi$. Then $A(f_{1}, f_{2}, f_{3}) \neq C(I)$, for if h is in $A(f_{1}, f_{2}, f_{3})$ then $h= g \circ \phi$ for some g in $A_{E}$, so that

$h(I)=g(E)=g(S)$ is the closure of an open subset of $C$ (except when h is constant).

In order to prove the same with two functions instead of three, I replaced John’s arc E with a Cantor set K, also of positive two-dimensional measure. (I use the term “Cantor set” for any totally disconnected compact metric space with no isolated points; these are all homeomorphic to each other.) A small extra twist, applied to John’s argument, with $A_{K}$ in place of $A_{E}$, proved that $A(f_{1}, f_{2})$ can also be smaller than $C(I)$.

I also used $A_{K}$ to show that $C(K)$ contains maximal closed point-separating subalgebras that are not maximal ideals, and that the same is true for $C(X)$ whenever X contains a Cantor set. These ideas were pushed further by Hoffman and Singer in Acta Math. 103, 1960, 217-241.

In the same paper, I showed that $A(f_{1}, f_{2}, \ldots, f_{n})=C(I)$ when $n-1$ of the n given functions are real-valued.

Since Wermer’s paper was being published in the Annals, and mine strengthened his theorem and contained other interesting (at least to me) results, I sent mine there too. It was rejected, almost by return mail, by an anonymous editor, for not being sufficiently interesting. I have had a few other papers rejected over the years, but for better reasons. This one was published in Proc. AMS 7, 1956, 825-830, and is one of six whose Russian translations were made into a book, “Some Questions in Approximation Theory”; the others were three by Bishop and two by Wermer. Good company.

Later, Gabriel Stolzenberg (Acta Math. 115, 1966, 185-198) and Herbert Alexander (Amer. J. Math., 93, 1971, 65-74) went much more deeply into these problems. One of the highlights in Alexander’s paper is:

$A(f_{1}, f_{2}, \ldots f_{n})=C(I)$ if $f_{1}, f_{2}, \ldots, f_{n-1}$ are of bounded variation.

A propos the Annals (published by Princeton University) here is a little Princeton anecdote. During a week that I spent there, in the mid-eighties, the Institute threw a cocktail party. (What I enjoyed best at that affair was being attacked by Armand Borel for having said, in print, that sheaves had vanished into the background.) Next morning I overheard the following conversation in Fine Hall:

Prof. A: That was a nice party yesterday, wasn’t it?

Prof. B: Yes, and wasn’t it nice that they invited the whole department.

Prof. A: Well, only the full professors.

Prof. B: Of course.

The above-mentioned facts about Cantor sets led me to look at the opposite extreme, the so-called scattered spaces. A compact Hausdorff space Q is said to be scattered if Q contains no perfect set; every non-empty compact set F in Q thus contains a point that is not a limit point of F. The principal result proved in Proc. AMS 8, 1957, 39-42 is:

THEOREM: Every closed subalgebra of $C(Q)$ is self-adjoint.

In fact, the scattered spaces are the only ones for which this is true, but I did not state this in that paper.

In 1956, I found a very explicit description of all closed ideals in the disc algebra $A(U)$ (defined at the beginning of this chapter). The description involves inner functions. These are the bounded holomorphic functions in U whose radial limits have absolute value 1 at almost every point of the unit circle $\mathcal{T}$. They play a very important role in the study of holomorphic functions in U (see, for instance, Garnett’s book, Bounded Analytic Functions) and their analogues will be mentioned again, on Riemann surfaces, in polydiscs, and in balls in $C^{n}$.

Recall that a point $\zeta$ on $\mathcal{T}$ is called a singular point of a holomorphic function f in U if f has no analytic continuation to any neighbourhood of $\zeta$. The ideals in question are described in the following:

THEOREM: Let E be a compact subset of $\mathcal{T}$, of Lebesgue measure 0, let u be an inner function all of whose singular points lie in E, and let $J(E,u)$ be the set of all f in $A(U)$ such that

(i) the quotient f/u is bounded in U, and

(ii) $f(\zeta)=0$ at every $\zeta$ in E.

Then, $J(E,u)$ is a closed ideal of A(U), and every closed ideal of $A(U) (\neq \{0\})$ is obtained in this way.

One of several corollaries is that every closed ideal of A(U) is principal, that is, is generated by a single function.

I presented this at the December 1956 AMS meeting in Rochester, and was immediately told by several people that Beurling had proved the same thing, in a course he had given at Harvard, but had not published it. I was also told that Beurling might be quite upset at this, and having Beurling upset at you was not a good thing. Having used his famous paper about the shift operator on a Hilbert space as my guide, I was not surprised that he too had proved this, but I saw no reason to withdraw my already submitted paper. It appeared in Canadian J. Math. 9, 1957, 426-434. The result is now known as the Beurling-Rudin theorem. I met him several times later, and he never made a fuss over this.

In the preceding year Lennart Carleson and I, neither of us knowing what the other was doing, proved what is now known as the Rudin-Carleson interpolation theorem. His paper is in Math. Z. 66, 1957, 447-451, mine in Proc. AMS 7, 1956, 808-811.

THEOREM. If E is a compact subset of $\mathcal{T}$, of Lebesgue measure 0, then every f in C(E) extends to a function F in A(U).

(It is easy to see that this fails if $m(E)>0$. To say that F is an extension of f means simply that $F(\zeta)=f(\zeta)$ at every $\zeta$ in E.)

Our proofs have some ingredients in common, but they are different, and we each proved more than is stated above. Surprisingly, Carleson, the master of classical hard analysis, used a soft approach, namely duality in Banach spaces, and concluded that F could be so chosen that $||F||_{U} \leq 2||f||_{E}$. (The norms are sup-norms over the sets appearing as subscripts.) In the same paper he used his Banach space argument to prove another interpolation theorem, involving Fourier-Stieltjes transforms.

On the other hand, I did not have functional analysis in mind at all; I did not think of the norms or of Banach spaces. I proved, by a bare-hands construction combined with the Riemann mapping theorem, that if $\Omega$ is a closed Jordan domain containing $f(E)$ then F can be chosen so that $F(\overline{U})$ also lies in $\Omega$. If $\Omega$ is a disc, centered at 0, this gives $||F||_{U}=||f||_{E}$, so F is a norm-preserving extension.

What our proofs had in common is that we both used part of the construction that was used in the original proof of the F. and M. Riesz theorem (which says that if a measure $\mu$ on $\mathcal{T}$ gives $\int fd\mu=0$ for every f in $A(U)$ then $\mu$ is absolutely continuous with respect to Lebesgue measure). Carleson showed, in fact, that F. and M. Riesz can be derived quite easily from the interpolation theorem. I tried to prove the implication in the other direction. But that had to wait for Errett Bishop. In Proc. AMS 13, 1962, 140-143, he established this implication in a very general setting which had nothing to do with holomorphic functions or even with algebras, and which, combined with a refinement due to Glicksberg (Trans. AMS 105, 1962, 415-435) makes the interpolation theorem even more precise:

THEOREM: One can choose F in $A(U)$ so that $F(\zeta)=f(\zeta)$ at every $\zeta$ in E, and $|F(z)|<||f||_{E}$ at every z in $\overline{U} \setminus E$.

This is usually called peak-interpolation.

Several variable analogues of this and related results may be found in Chap. 6 of my Function Theory in Polydiscs and in Chap. 10 of my Function Theory in the Unit Ball of $C^{n}$.

The last item in this chapter concerns Riemann surfaces. Some definitions are needed.

A finite Riemann surface is a connected open proper subset R of some compact Riemann surface X, such that the boundary $\partial R$ of R in X is also the boundary of its closure $\overline{R}$ and is the union of finitely many disjoint simple closed analytic curves $\Gamma_{1}, \ldots, \Gamma_{k}$. Shrinking each $\Gamma_{i}$ to a point gives a compact orientable manifold whose genus g is defined to be the genus of R. The numbers g and k determine the topology of R, but not, of course, its conformal structure.

$A(R)$ denotes the algebra of all continuous functions on $\overline{R}$ that are holomorphic in R. If f is in $A(R)$ and $|f(p)|=1$ at every point p in $\partial R$ then, just as in U, f is called inner. A set $S \subset A(R)$ is unramified if every point of $\overline{R}$ has a neighbourhood in which at least one member of S is one-to-one.

I became interested in these algebras when Lee Stout (Math. Z., 92, 1966, 366-379; also 95, 1967, 403-404) showed that every $A(R)$ contains an unramified triple of inner functions that separates points on $\overline{R}$. He deduced from the resulting embedding of R in $U^{3}$ that $A(R)$ is generated by these 3 functions. Whether every $A(R)$ is generated by some pair of its members is still unknown, but the main result of my paper in Trans. AMS 150, 1969, 423-434 shows that pairs of inner functions won’t always do:

THEOREM: If $A(R)$ contains a point-separating unramified pair f, g of inner functions, then there exist relatively prime integers s and t such that f is s-to-1 and g is t-to-1 on every $\Gamma_{i}$, and

(*) $(ks-1)(kt-1)=2g+k-1$

For example, when $g=2$ and $k=4$, then (*) holds for no integers s and t. When $g=23$ and $k=4$, then $s=t=2$ is the only pair that satisfies (*), but it is not relatively prime. Even when $R=U$ the theorem gives some information. In that case, $g=0$, $k=1$, so (*) becomes $(s-1)(t-1)=0$, which means:

If a pair of finite Blaschke products separates points on $\overline{U}$ and their derivatives have no common zero in U, then at least one of them is one-to-one (that is, a Möbius transformation).
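The numerology in these examples is easy to check by brute force; the following sketch (function name mine) searches for integer solutions of (*):

```python
from math import gcd

# Search for pairs (s, t) of positive integers, s <= t, satisfying (*):
#   (k*s - 1) * (k*t - 1) == 2*g + k - 1
def star_solutions(g, k, bound=100):
    rhs = 2 * g + k - 1
    return [(s, t) for s in range(1, bound) for t in range(s, bound)
            if (k * s - 1) * (k * t - 1) == rhs]

print(star_solutions(2, 4))    # g = 2,  k = 4: no solutions at all -> []
print(star_solutions(23, 4))   # g = 23, k = 4: only (2, 2) -> [(2, 2)]
print(gcd(2, 2) == 1)          # ... and that pair is not relatively prime -> False
```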

There are two cases in which (*) is not  only necessary but also sufficient. This happens when $g=0$ and when $g=k=1$.

But there are examples in which the topological condition (*) is satisfied even though the conformal structure of R prevents the existence of a separating unramified pair of inner functions.

This paper is quite different from anything else that I have ever done. As far as I know, no one has ever referred to it, but I had fun working on it.

*************************************************************************************************

More blogs from Rudin’s autobiography later, till then,

Nalin Pithwa

Interchanging Limit Processes — by Walter Rudin

As I mentioned earlier, my thesis (Trans. AMS 68, 1950, 278-363) deals with uniqueness questions for series of spherical harmonics, also known as Laplace series. In the more familiar setting of trigonometric series, the first theorem of the kind that I was looking for was proved by Georg Cantor in 1870, based on earlier work of Riemann (1854, published in 1867). Using the notations

$A_{n}(x)=a_{n} \cos{nx}+b_{n}\sin{nx}$,

$s_{p}(x)=A_{0}+A_{1}(x)+ \ldots + A_{p}(x)$, where $a_{n}$ and $b_{n}$ are real numbers. Cantor’s theorem says:

If $\lim_{p \rightarrow \infty}s_{p}(x)=0$ at every real x, then $a_{n}=b_{n}=0$ for every n.

Therefore, two distinct trigonometric series cannot converge to the same sum. This is what is meant by uniqueness.

My aim was to prove this for spherical harmonics and (as had been done for trigonometric series) to whittle away at the hypothesis. Instead of assuming convergence at every point of the sphere, what sort of summability will do? Does one really need convergence (or summability) at every point? If not, what sort of sets can be omitted? Must anything else be assumed at these omitted points? What sort of side conditions, if any, are relevant?

I came up with reasonable answers to these questions, but basically the whole point seemed to be the justification of the interchange of some limit processes. This left me with an uneasy feeling that there ought to be more to Analysis than that. I wanted to do something with more “structure”. I could not have explained just what I meant by this, but I found it later when I became aware of the close relation between Fourier analysis and group theory, and also in an occasional encounter with number theory and with geometric aspects of several complex variables.

Why was it all an exercise in interchange of limits? Because the “obvious” proof of Cantor’s theorem goes like this: for $p > n$,

$\pi a_{n}= \int_{-\pi}^{\pi}s_{p}(x)\cos{nx}\,dx = \lim_{p \rightarrow \infty}\int_{-\pi}^{\pi}s_{p}(x)\cos {nx}\,dx$, which in turn equals

$\int_{-\pi}^{\pi}(\lim_{p \rightarrow \infty}s_{p}(x))\cos{nx}\,dx= \int_{-\pi}^{\pi}0 \cdot \cos{nx}\,dx=0$, and similarly for $b_{n}$. Note that $\lim \int = \int \lim$ was used.
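As a numerical sanity check (coefficients chosen here purely for illustration), the identity $\pi a_{n} = \int_{-\pi}^{\pi} s_{p}(x)\cos{nx}\,dx$ for $p \geq n \geq 1$ can be verified directly:

```python
import numpy as np

# Hypothetical coefficients a_0..a_3 and b_1..b_3 of a trigonometric polynomial.
a = [0.5, 2.0, -1.0, 0.25]
b = [0.0, 1.0, 3.0, -2.0]

# Sample one full period; the endpoint is omitted so that the rectangle rule
# below is exact (to rounding) for trigonometric polynomials.
N = 100000
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
s_p = sum(a[k] * np.cos(k * x) + b[k] * np.sin(k * x) for k in range(4))

n = 2
integral = (2 * np.pi / N) * np.sum(s_p * np.cos(n * x))
recovered = integral / np.pi   # should recover a_2 = -1.0
```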

In Riemann’s above-mentioned paper, he derives the conclusion of Cantor’s theorem under an additional hypothesis, namely, $a_{n} \rightarrow 0$ and $b_{n} \rightarrow 0$ as $n \rightarrow \infty$. He associates to $\sum {A_{n}(x)}$ the twice integrated series

$F(x)=-\sum_{1}^{\infty}n^{-2}A_{n}(x)$

and then finds it necessary to prove, in some detail, that this series converges and that its sum F is continuous! (Weierstrass had not yet invented uniform convergence.) This is astonishingly different from most of his other publications, such as his paper on hypergeometric functions, in which mind-boggling relations and transformations are merely stated, with only a few hints, or his painfully brief paper on the zeta-function.

In Crelle’s J. 73, 1870, 130-138, Cantor showed that Riemann’s additional hypothesis was redundant, by proving that

(*) $\lim_{n \rightarrow \infty}A_{n}(x)=0$ for all x implies $\lim_{n \rightarrow \infty}a_{n}= \lim_{n \rightarrow \infty}b_{n}=0$.

He included the statement: This cannot be proved, as is commonly believed, by term-by-term integration.

Apparently, it took a while before this was generally understood. Ten years later, in Math. Annalen 16, 1880, 113-114, he patiently explains the difference between pointwise convergence and uniform convergence, in order to refute a “simpler proof” published by Appell. But then, referring to his second (still quite complicated) proof, the one in Math. Annalen 4, 1871, 139-143, he sticks his neck out and writes: “In my opinion, no further simplification can be achieved, given the nature of the subject.”

That was a bit reckless. 25 years later, Lebesgue’s dominated convergence theorem became part of every analyst’s tool chest, and since then (*) can be proved in a few lines:

Rewrite $A_{n}(x)$ in the form $A_{n}(x)=c_{n}\cos {(nx+\alpha_{n})}$, where $c_{n}^{2}=a_{n}^{2}+b_{n}^{2}$. Put

$\gamma_{n}=\min \{1, |c_{n}| \}$, $B_{n}(x)=\gamma_{n}\cos{(nx+\alpha_{n})}$.

Then $B_{n}^{2}(x) \leq 1$ and $B_{n}^{2}(x) \rightarrow 0$ at every x, so that the dominated convergence theorem, combined with

$\int_{-\pi}^{\pi}B_{n}^{2}(x)dx=\pi \gamma_{n}^{2}$ shows that $\gamma_{n} \rightarrow 0$. Therefore, $|c_{n}|=\gamma_{n}$ for all large n, and $c_{n} \rightarrow 0$. Done.

The point of all this is that my attitude was probably wrong. Interchanging limit processes occupied some of the best mathematicians for most of the 19th century. Thomas Hawkins’ book “Lebesgue’s Theory” gives an excellent description of the difficulties that they had to overcome. Perhaps, we should not be too surprised that even a hundred years later many students are baffled by uniform convergence, uniform continuity etc., and that some never get it at all.

In Trans. AMS 70, 1951, 387-403, I applied the techniques of my thesis to another problem of this type, with Hermite functions in place of spherical harmonics.

(Note: The above article has been taken from Walter Rudin’s book, “The Way I Remember It”.) Hope it helps advanced graduate students in Analysis.

More later,

Nalin Pithwa

Pure versus Applied Mathematics

Relations between pure and applied mathematicians are based on trust and understanding. Pure mathematicians do not trust applied mathematicians, and applied mathematicians do not understand pure mathematicians. — Ian Stewart

Five things to do as a graduate student in Mathematics

The following are the views of Mr. Mohammed Kaabar posted on AMS Graduate Blog just today:

I would like to share with you my first-year experience as a graduate student in mathematics at Washington State University, and I want to give you some suggestions about what you should do as a graduate student in mathematics. In Spring 2015, I started my first semester as a Ph.D. student in Applied Mathematics. During that semester, I wrote two math textbooks, in differential equations and linear algebra, and I gave three seminar talks in Applied Mathematics. I also participated as an invited technical program committee (TPC) member and invited reviewer for many international conferences and journals in applied math, physics, electrical engineering, and computer engineering. Most of these conferences published their accepted papers with major peer-reviewed publishers such as Springer and IEEE Xplore. Therefore, the following is a list of five different things that I highly recommend you do as a graduate student in mathematics:

• Join professional organizations in mathematics and other related fields: When I started my graduate studies in mathematics, I joined the Society for Industrial and Applied Mathematics (SIAM). It is easy to become a student member in SIAM because they offer a free membership for graduate students at some universities. My university was one of them, so I got a free membership. There are several math associations and societies, such as the American Mathematical Society (AMS) and the Mathematical Association of America (MAA), that can provide you with good discounts on the prices of their student memberships.
• Create a professional website: If you are a newly admitted graduate student in mathematics, I recommend that you create a professional website that includes your research interests, curriculum vitae, work experience, and the courses you are currently teaching. The advantage of having your own professional website is that many people will contact you through your website email to invite you to serve as a technical program committee (TPC) member, reviewer, editor, or math team member for conferences and journals in mathematics and applied sciences. If you are going to teach a class, it is a good idea to add a section to your professional website that contains your lecture notes, solutions to your class assignments and quizzes, and study guides for exams.
• Teach a course you like to teach: If you have been offered a teaching assistantship position in your department, I believe that most universities give you the option to request the courses you would like to teach. Therefore, I recommend choosing courses that interest you more than others, because if you like the course you teach, your students will be more likely to appreciate the way you teach.
• Participate in research groups: If you are a new graduate student in your department, I recommend that you contact your department chair, coordinator, advisor, and graduate studies chair to ask about any available research groups you can join as a member, so that you can participate in the group’s research publications and seminar talks.
• Participate in extra-curricular and academic related activities: When you start your graduate studies in mathematics, you will be under stress from your work and study load. So, what should you do to relieve this stress? The answer is simple: many universities and colleges have student associations and clubs, such as a graduate student association or a peer leadership program. For example, when I was a student at Washington State University (WSU) and the American University of Sharjah (AUS), I was an active member of a peer leadership program, and I had also taken part in competitions such as The International Electronics Synopsys Competition.

In conclusion, from the fifth point I mentioned above, I would like to focus on one case which I consider a great achievement in my work as a peer leader. One day, while I was walking along the corridors of AUS, I saw a student wandering, knowing neither where to go nor what to do. That student was as perplexed as a person going astray in the desert without being able to decide his direction. I approached him and asked him what he wanted. He told me that it was his first day at AUS and he did not know where and how to start. I assured him that everything would be alright. Then, that student released a sigh of relief, exactly the same feeling of our friend in the desert when a plane out of the blue sky took him out of the mire. That freshman student was like a ship in a rough sea beaten by high waves, taking it sometimes one way and sometimes another. Imagine what that person would feel when he suddenly finds someone to lead him to the shores of safety. I helped him throughout the registration process. Since then, that student has become one of my best friends. Maybe you are still thinking of our poor friend in the desert? Relax; he was lifted by a helicopter. So my job strengthens relations and builds a highly cooperative community. During my work as a peer leader, I oftentimes went around talking to students, familiarizing myself with their problems and offering them the help they might stand in need of. This is just to show you an example of how a successful graduate student can positively impact the lives of other students. Finally, I recommend that you follow at least most of the five things mentioned above to be successful in your career as a graduate student.

**************************************************************************************************************************

If you like it, please send a thank you note to Mr. Mohammed Kaabar on the AMS blog.

More later,

Nalin Pithwa

Analysis: Chapter 1: part 11: algebraic operations with real numbers: continued

(iii) Multiplication.

When we come to multiplication, it is most convenient to confine ourselves to positive numbers (among which we may include zero) in the first instance, and to go back for a moment to the sections of positive rational numbers only which we considered in articles 4-7. We may then follow practically the same road as in the case of addition, taking (c) to be (ab) and (C) to be (AB). The argument is the same, except when we are proving that all rational numbers with at most one exception must belong to (c) or (C). This depends, as in the case of addition, on showing that we can choose a, A, b, and B so that $C-c$ is as small as we please. Here we use the identity

$C-c=AB-ab=(A-a)B+a(B-b)$.
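A quick numeric illustration of this identity (the rational bounds below, approximating $\sqrt{2}$ and $\sqrt{3}$, are chosen here purely as an example):

```python
from fractions import Fraction

# Rational bounds a <= alpha <= A and b <= beta <= B for two positive reals;
# the identity C - c = A*B - a*b = (A - a)*B + a*(B - b) shows that the gap
# C - c of the product shrinks as the gaps A - a and B - b shrink.
a, A = Fraction(1414, 1000), Fraction(1415, 1000)   # bounds for sqrt(2)
b, B = Fraction(1732, 1000), Fraction(1733, 1000)   # bounds for sqrt(3)

c, C = a * b, A * B
assert C - c == (A - a) * B + a * (B - b)           # the identity, exactly
print(C - c)                                        # 3147/1000000
```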

Finally, we include negative numbers within the scope of our definition by agreeing that, if $\alpha$ and $\beta$ are positive, then

$(-\alpha)\beta=-\alpha\beta$, $\alpha(-\beta)=-\alpha\beta$, $(-\alpha)(-\beta)=\alpha\beta$.

(iv) Division.

In order to define division, we begin by defining the reciprocal $\frac{1}{\alpha}$ of a number $\alpha$ (other than zero). Confining ourselves in the first instance to positive numbers and sections of positive rational numbers, we define the reciprocal of a positive number $\alpha$ by means of the lower class $(1/A)$ and the upper class $(1/a)$. We then define the reciprocal of a negative number $-\alpha$ by the equation $1/(-\alpha)=-(1/\alpha)$. Finally, we define $\frac{\alpha}{\beta}$ by the equation

$\frac{\alpha}{\beta}=\alpha \times (1/\beta)$.

We are then in a position to apply to all real numbers, rational or irrational, the whole of the ideas and methods of elementary algebra. Naturally, we do not propose to carry out this task in detail. It will be more profitable and more interesting to turn our attention to some special, but particularly important, classes of irrational numbers.

More later,

Nalin Pithwa

Analysis versus Computer Science

Somebody from the industry was asking me what the use of Analysis (whether Real or Complex or Functional or Harmonic or related) is in Computer Science. Being an EE major, I could not answer his question. But one of my contacts, Mr. Sankeerth Rao (quite junior to me in age), with both breadth and depth of knowledge in Math, CS and EE, gave me the following motivational reply:

***************************************************************************************

Analysis is very useful in Computer Science. For instance, many areas of theoretical computer science use analysis, like pseudorandomness and polynomial threshold functions; it is used everywhere.

Even hardcore discrete math uses heavy analysis; see Terence Tao’s book on Additive Combinatorics, for instance. My advisor uses higher-order Fourier analysis to get results in theoretical computer science.

Most of the theoretical results in Learning theory use analysis. All the convergence results use analysis.

At first it might appear that Computer Science only deals with discrete stuff, nice algorithms and counting problems, but once you go deep enough most of the latest tools use analysis. To get a feel for this, have a look at the book The Probabilistic Method by Noga Alon and Joel Spencer, or Additive Combinatorics by Terence Tao.

********************************************************************************************

More later,

Nalin Pithwa

On learning languages versus programming

Below are the views of the master expositor of mathematics, Paul Halmos:

Some graduate students nowadays object to being made to learn to read two languages as a Ph.D. requirement. “Why should we learn about flowers and families and genitives and past participles? — all we want is to read last month’s Paris seminar report.” Some go further: “Who needs German? — for me Fortran (C/C++) is much more relevant.”

Horrors! I am upset and I predict that the result of such anti-linguistic, anti-cultural, anti-intellectual attitudes will lead to a deterioration of international scientific information exchange, and to a lot of bad writing. Every little bit I ever learned about any language was later of help to me as a writer. That is true of the Danish and Portuguese and Russian and Romanian that I learned for specific mathematical reasons, but it is also true of the hint or two of Greek and of Sanskrit that I managed to be exposed to. I have always rued that I was never taught Greek; every ounce of it would have paid off with a pound of linguistic insight. In the course of the years I managed to pick up quite a few Greek root words; my source of them was my shelf of English dictionaries, especially the American Heritage and the second edition of Webster. I feel that I need to look up the etymologies of words before I can use them precisely, and I know (a small matter, but here is where it belongs) that the reason I have no trouble spelling in English is that even a nodding familiarity with other languages makes me aware of where most of the difficult words come from.

To give the devil his due, I admit that substituting FORTRAN for German is only 90% bad, not 100%. What it loses in the understanding of culture and mastering the art of communication, it gains in meticulous attention to detail and in moving closer to mastering the science of communication. A knowledge of the theory and practice of formal languages might be a help for writing with precision, especially to students whose talents are not mathematical, but it is of no help at all for writing with clarity. The distinction is sometimes ignored or even argued away, but that is a sad error; there is all the difference in the world between an exposition that cannot be misunderstood and one that is in fact understood.

(From: I want to be a mathematician: An Automathography: Paul R. Halmos).

More later,

Nalin Pithwa

Chapter 1: Real Variables: examples II

Examples II.

1) Show that no rational number can have its cube equal to 2.

Proof 1.

Proof by contradiction. Let $x=\frac{p}{q}$, with $q \neq 0$, $p, q \in Z$, and p, q having no common factors.

Suppose $x^{3}=2$. Then $\frac{p^{3}}{q^{3}}=2$, so $p^{3}=2q^{3}$, and hence $p^{3}$ is even. Since the cube of an odd number is odd (odd times odd is odd), p itself must be even. Let $p=2m$. Then $8m^{3}=2q^{3}$, so $q^{3}=4m^{3}$. Hence $q^{3}$ is even, and by the same argument q is even. But then p and q have the common factor 2, which contradicts our hypothesis. QED.

Proof 2.

Suppose a rational number $\frac{m}{n}$, with $n \neq 0$, $m, n \in Z$, and m, n having no common factors, satisfies $(\frac{m}{n})^{3}=\frac{p}{q}$, where $\frac{p}{q}$ is also in lowest terms ($q \neq 0$, $p, q \in Z$).

Since m and n have no common factors, $m^{3}$ and $n^{3}$ have no common factors either (any prime dividing both would divide both m and n). Hence $\frac{m^{3}}{n^{3}}$ is in lowest terms, and since a fraction in lowest terms is uniquely determined, $p=m^{3}$ and $q=n^{3}$.

In particular, taking $\frac{p}{q}=\frac{2}{1}$, we would need $n^{3}=1$ and $m^{3}=2$ with m an integer, which is impossible since $1^{3}=1<2<2^{3}=8$.

Proof 3.

A more general proposition, due to Gauss, includes the problems above as special cases. The algebraic equation

$x^{n}+p_{1}x^{n-1}+p_{2}x^{n-2}+\ldots +p_{n}=0$,

with integral coefficients, cannot have a rational root which is not an integer.

Proof:

Suppose the equation has a root $\frac{a}{b}$, where a and b are integers without a common factor, and b is positive. Writing $\frac{a}{b}$ for x, and multiplying both sides of the equation by $b^{n-1}$, we obtain

$-\frac{a^{n}}{b}=p_{1}a^{n-1}+p_{2}a^{n-2}b+\ldots +p_{n}b^{n-1}$.

The right-hand side is an integer, while the left-hand side is a fraction in lowest terms (since a and b have no common factor, neither do $a^{n}$ and b), which is absurd unless $b=1$. Thus $b=1$, and the root is the integer a. Putting $x=a$ in the equation shows that a must be a divisor of $p_{n}$.
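For instance, applying this to $x^{3}-2=0$ (so $p_{1}=p_{2}=0$ and $p_{3}=-2$): any rational root must be an integer dividing 2, i.e., one of $\pm 1$, $\pm 2$; since $(\pm 1)^{3}=\pm 1$ and $(\pm 2)^{3}=\pm 8$, the equation has no rational root, which gives Problem 1 again.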

Problem 4.

Show that if $p_{n}=1$ and neither of

$1+p_{1}+p_{2}+p_{3}+\ldots$, $1-p_{1}+p_{2}-p_{3}+\ldots$

(the values of the left-hand side at $x=1$ and $x=-1$) is zero, then the equation cannot have a rational root.

I will put the proof later.

Problem 5.

Find the rational roots, if any, of $x^{4}-4x^{3}-8x^{2}+13x+10=0$.

Solution.

By the proposition above, the roots can only be integral, and they must divide 10; so $\pm 1, \pm 2, \pm 5, \pm 10$ are the only possibilities. Whether these are roots can be determined by trial. It is clear that we can in this way determine the rational roots of any such equation.
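The trial described above is easy to mechanize. Here is a minimal Python sketch (the helper name `integer_roots` is my own), assuming a monic polynomial with integer coefficients and nonzero constant term:

```python
def integer_roots(coeffs):
    """Integral roots of a monic integer polynomial, given as
    coeffs = [1, p1, ..., pn].  By the proposition above, any
    rational root is an integer dividing the constant term pn."""
    pn = coeffs[-1]
    divisors = [d for d in range(1, abs(pn) + 1) if pn % d == 0]
    roots = []
    for d in divisors:
        for a in (d, -d):
            value = 0
            for c in coeffs:       # evaluate by Horner's rule
                value = value * a + c
            if value == 0:
                roots.append(a)
    return sorted(roots)

print(integer_roots([1, -4, -8, 13, 10]))  # prints [-2, 5]
```

Trial confirms that $-2$ and $5$ are the rational roots; dividing out $(x+2)(x-5)=x^{2}-3x-10$ leaves $x^{2}-x-1$, whose roots are irrational.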

More later,

Nalin Pithwa

Analysis: Chapter 1: part 10: algebraic operations with real numbers

Algebraic operations with real numbers.

We now proceed to define the meaning of the elementary algebraic operations, such as addition, as applied to real numbers in general.

(i) Addition. In order to define the sum of two numbers $\alpha$ and $\beta$, we consider the following two classes: (i) the class (c) formed by all sums $c=a+b$, (ii) the class (C) formed by all sums $C=A+B$, where a, b run over the lower classes, and A, B over the upper classes, of $\alpha$ and $\beta$ respectively. Clearly, $c < C$ in all cases.

Again, there cannot be more than one rational number which does not belong either to (c) or to (C). For suppose there were two, say r and s, and let s be the greater. Then, both r and s must be greater than every c and less than every C; and so $C-c$ cannot be less than $s-r$. But,

$C-c=(A-a)+(B-b)$;

and we can choose a, b, A, B so that both $A-a$ and $B-b$ are as small as we like; and this plainly contradicts our hypothesis.
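The shrinking-gap argument can be illustrated numerically. A minimal Python sketch, taking $\alpha=\beta=\sqrt{2}$ (the helper `sqrt2_bounds` is my own), which produces rational bounds $a < \alpha < A$ by bisection and checks that the gaps add as in the text:

```python
from fractions import Fraction

def sqrt2_bounds(n):
    # Rational bounds lo < sqrt(2) < hi with hi - lo <= 1/10^n, by bisection.
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 10**n):
        mid = (lo + hi) / 2
        if mid * mid < 2:
            lo = mid
        else:
            hi = mid
    return lo, hi

# With a < alpha < A and b < beta < B, we get a + b < alpha + beta < A + B,
# and the gap of the sum is (A + B) - (a + b) = (A - a) + (B - b),
# so it can be made as small as we please, as the argument requires.
a, A = sqrt2_bounds(6)
b, B = sqrt2_bounds(6)
assert (A + B) - (a + b) == (A - a) + (B - b)
assert (a + b) ** 2 < 8 < (A + B) ** 2   # the sum's bounds bracket 2*sqrt(2)
```

Exact rational arithmetic via `Fraction` keeps the computation inside the rationals, which is the point: the classes (c) and (C) are built from rationals only.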

If every rational number belongs to (c) or to (C), the classes (c), (C) form a section of the rational numbers, that is to say, a number $\gamma$. If there is one which does not, we add it to (C). We have now a section or real number $\gamma$, which must clearly be rational, since it corresponds to the least member of (C). In any case we call $\gamma$ the sum of $\alpha$ and $\beta$ and write

$\gamma=\alpha + \beta$.

If both $\alpha$ and $\beta$ are rational, they are the least members of the upper classes (A) and (B). In this case it is clear that $\alpha + \beta$ is the least member of (C), so that our definition agrees with our previous ideas of addition.

(ii) Subtraction.

We define $\alpha - \beta$ by the equation $\alpha-\beta=\alpha +(-\beta)$.

The idea of subtraction accordingly presents no fresh difficulties.

More later,

Nalin Pithwa