Homework Assignments
MAT 4930, Section 3F84 (17710)
Curves and Surfaces in \({\bf R}^3\): An Introduction to Differential Geometry
Spring 2019


Last updated Tue Apr 23 01:51 EDT 2019

Homework problems and due dates (not the dates the problems are assigned) are listed below. This list, especially the due dates, will be updated frequently, usually in the late afternoon or evening the day of class or the next morning. If assignments with a due-date beyond the next lecture are posted, these assignments are estimates that may be revised. In particular, due dates may be moved either forward or back, and problems not currently on the list from a given section may be added later (but prior to their due dates, of course).

Exam dates and some miscellaneous items may also appear below.

Unless otherwise indicated, problems are from our textbook (O'Neill, Elementary Differential Geometry, revised 2nd edition). It is intentional that some of the problems assigned do not have answers in the back of the book or solutions in a manual. An important part of learning mathematics is learning how to figure out by yourself whether your answers are correct.

Read the corresponding section of the book before working the problems. The advice below from James Stewart's calculus textbooks is right on the money:

Some students start by trying their homework problems and read the text only if they get stuck on an exercise. I suggest that a far better plan is to read and understand a section of the text before attempting the exercises.

Date due Section # / problem #s
  • W 1/9/19
  • You should do the problems below by Wednesday, and save your work, but don't hand anything in on Wednesday. Periodically, once I've assigned enough problems, I'll give you instructions like "Hand in the following problems [from earlier homework] on [indicated day]." (It won't be all the problems I've assigned.) You'll write these up from your saved work and hand them in at that time. I'll give you what should be ample warning to write up the hand-in problems; if I end up underestimating what "ample warning" is, please let me know.

  • Show that if real-valued functions \(f\) and \(g\) on \({\bf R}^3\) are \(C^\infty\), then so are \(f + g\) and \(fg\). (This is now non-book problem 1 on the non-book problems page.) Hint for the product: induction.
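        In case the induction hint seems cryptic, here is the shape of the argument I have in mind (just a sketch; the details are yours to fill in). The first-order partials of a product are given by the product rule,
        \[ \frac{\partial (fg)}{\partial x_i} \;=\; \frac{\partial f}{\partial x_i}\,g \;+\; f\,\frac{\partial g}{\partial x_i}, \qquad i=1,2,3, \]
        so each first partial of \(fg\) is a sum of products of \(C^\infty\) functions. Inductively, every partial derivative of \(fg\) of order at most \(k\) is a finite sum of terms of the form (partial of \(f\)) \(\cdot\) (partial of \(g\)); each such sum is continuous, and \(k\) was arbitrary, so \(fg\) is \(C^\infty\). (The sum \(f+g\) is easier.)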

  • 1.1/ 1–4. Some things to note:
    • In these problems, \( x, y\) and \(z\) are the coordinate functions \(x_1, x_2, x_3\) on \({\bf R}^3\). That's why a function-definition like "\(f=x^2y\)" is precisely correct as stated; we do not need to write "\(f(x,y,z)= x^2y \)". In problem 1, we could correctly, but less efficiently, define the function \(f\) by writing "\(f(p_1,p_2,p_3) = (p_1)^2p_2\)" or "\(f(q_1,q_2,q_3) = (q_1)^2q_2\)" or "\(f(a,b,c) = a^2 b\)", etc. In the latter formulas, the variables \(p_1,p_2, p_3, q_1, q_2, q_3, a,b,c\) are dummy variables. We can't use \(x,y,z\) as dummy variables since we've given them a reserved meaning.
          However, the (very common) notation for composition that O'Neill uses in problem 4 allows us to give a meaning to "\( f(x,y,z) \)"; as O'Neill mentions in a footnote, this notation leads to the identity "\(f=f(x,y,z)\)". I don't want to encourage writing "\(f=f(x,y,z)\)", since it distracts from the concept of treating coordinate-functions as specific functions rather than as dummy variables, and has the potential to lead to confusion. In O'Neill's notation, the letters \(x,y,z\) in "\( f(x,y,z) \)" are not dummy variables, and the value of \(f(x,y,z)\) at the point \((2,3,7)\) is the unpleasant-looking \(f(x,y,z)(2,3,7)\)—which happens to equal \(f(2,3,7)\).
          My preferred notation for composition of functions involves the "\(\circ\)" symbol, except when I'm being lazy. For example, rather than defining "\(f=h(g_1,g_2,g_3)\)" the way O'Neill does in problem 4, I would prefer to first define "\( (g_1,g_2,g_3)\)" to be the function from \({\bf R}^3\) to \({\bf R}^3\) given by \({\bf p}\mapsto (g_1({\bf p}), g_2({\bf p}), g_3({\bf p}))\), and then define \(f=h\circ (g_1, g_2, g_3)\). Even in problem 1d, my preference would be to write "\(\sin\circ f\)" rather than "\( \sin f\)".

    • In the notation for partial derivatives appearing in these problems (especially problem 4), don't get lost worrying about "What does (for example) \(\partial f/ \partial x\) mean now that \(x\) is a coordinate function?" Treat the notation "\(\partial f/ \partial x\)" as meaning "Calc-3 partial derivative of \(f\) with respect to its first variable", a variable that you will eventually call \(x\), even though you might want to use a different letter in intermediate steps. In Calculus 3, a typical way of writing a problem with the content of problem 4a would be,
        "Compute \(\partial w/ \partial r\) if \(w=x^2-yz, \ x=r+s,\ y=s^2\), and \(z=r+t.\)"
      Another typical way would be,
        "Compute \(\partial w/ \partial r\) if \(w=h(x(r,s,t), y(r,s,t), z(r,s,t))\), where \(h(x,y,z)=x^2-yz, \ x(r,s,t)=r+s, \ y(r,s,t)=s^2\), and \(z(r,s,t)=r+t.\)"
      In these ways of writing the problem, the distinction between coordinate-functions and dummy variables is blurred (both meanings are used, in different parts of the problem-statement). We could equally well write these two statements, respectively, as
        "Compute \(\partial w/ \partial x\) if \(w=r^2-st, \ r=x+y,\ s=y^2\), and \(t=x+z\),"
      and
        "Compute \(\partial w/ \partial x\) if \(w=h(r(x,y,z), s(x,y,z), t(x,y,z))\), where \(h(r,s,t)=r^2-st, \ r(x,y,z)=x+y,\ s(x,y,z)=y^2\), and \(t(x,y,z)=x+z\)."
      In O'Neill's problem 4a, since the letters \(x,y,z\) have a fixed, non-dummy meaning as the first, second, and third coordinate-functions on \({\bf R}^3\), he doesn't use the two different triples \((x,y,z)\) and \( (r,s,t) \) that were used partly as dummy variables and partly as coordinate functions in my Calculus-3 versions of the problem.
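      For concreteness, here is the routine chain-rule computation for the first Calculus-3 version displayed above (a worked example of the notation, not a substitute for doing O'Neill's 4a in his notation):
      \[ \frac{\partial w}{\partial r} \;=\; \frac{\partial w}{\partial x}\frac{\partial x}{\partial r} + \frac{\partial w}{\partial y}\frac{\partial y}{\partial r} + \frac{\partial w}{\partial z}\frac{\partial z}{\partial r} \;=\; (2x)(1) + (-z)(0) + (-y)(1) \;=\; 2(r+s) - s^2. \]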

  • Even though you're not yet handing anything in, read the Rules for Hand-In Homework so that you know what to do when the time comes.
  • F 1/11/19
  • Do all the homework I assigned in class. (This is always a part of your HW, implicitly, even when I don't say it explicitly on this webpage.)

  • 1.2/ 1–3, 5.
            In 5a, the instructions should end with "at each point \({\bf p}\) of \({\bf R}^3\)."
            In 5b, "linear combination" should be interpreted pointwise. At each point \({\bf p}\), the vector field \(xU_1+yU_2+zU_3\) is a linear combination of \( \{ V_1({\bf p}), V_2({\bf p}), V_3({\bf p}) \} \), but the coefficients depend on \( {\bf p}\). Thus you will end up expressing \(xU_1+yU_2+zU_3\) in the form \( fV_1+gV_2+hV_3\) for some real-valued functions \(f,g,h\).
  • M 1/14/19
  • 1.3/ 1a, 2abc, 3–5. Hint for #5: There are only three functions \(f\) that you need to consider!
  • W 1/16/19
  • Read Lemma 4.6 (and its proof) on p. 21. I'll assume in class that you've done this.

  • 1.4/ 1, 2, 4, 5–7, 8ac, 9.
        In #4, "log" means "natural log", i.e. \(\log_e\) (what you're used to calling "ln"). Except when teaching calculus, differential equations, and lower-level courses, most mathematicians use the notation "log" to mean \(\log_e\), and use the notation \(\log_{10}\) for the base-10 logarithm in the rare instances that they want to refer to this function. Older calculus textbooks also use "log" to mean \(\log_e\). Base-10 logarithms have essentially no use in higher mathematics, so the default base of "log" is the one of greatest use, \(e\).
        Problems 6 and 7 should be thought of as a pair. The point of #7 is to drive home what's proved in #6. After doing 7a, use problem 6 to predict the answers to 7b. Then do 7b and check your predictions.

  • Do non-book problems 2 and 3.
  • F 1/18/19
  • Read Section 1.5. I'll cover a good deal of it on Friday, but may not get to everything; I want to finish Chapter 1 as soon as possible. Chapter 1 is foundational for the more interesting topics I plan to cover, but I want to make sure we have time for those topics.

  • Based on your reading, start on the homework problems from Section 1.5 due Wed. 1/23/19.
  • W 1/23/19
  • 1.5/ 1–10.
    Some things to note:
    • In #8, "its differential" is a typo for "a differential".

    • In #10, "continuation" refers to what's being shown in problems 9 and 10 for general functions \(f\); you're not continuing with the specific function \(f\) of problem 9.

    • For what you're being asked to prove in #10, it's not sufficient to say, "This is true because of what we learned in Calculus 3." (However, I will allow you to use what you learned about critical points in Calculus 1.) Chances are, proofs were not emphasized in your Calc 3 class, so you may have just been told "Believe this, it's true." If you were fortunate enough to have an instructor who did prove this fact about max/min, it still will not harm you to write down the proof again. (Suggestion: proof by contradiction. If \(df\) is not zero at \({\bf p}\), then at least one of the first partials of \(f\) is nonzero at that point. Use that to choose an appropriate one-variable function to which you can apply Calc-1 facts about critical points.)
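        To flesh out the parenthetical suggestion (my own sketch; you still need to write it up carefully): if \(df_{\bf p}\neq 0\), then \(\frac{\partial f}{\partial x_i}({\bf p})\neq 0\) for some \(i\). Consider the one-variable function
        \[ \phi(t) \;=\; f({\bf p} + t\,{\bf e}_i), \qquad\mbox{for which}\qquad \phi'(0) \;=\; \frac{\partial f}{\partial x_i}({\bf p}) \;\neq\; 0, \]
        where \({\bf e}_i\) is the \(i^{\rm th}\) standard basis vector. If \(f\) has a local maximum at \({\bf p}\), then \(\phi\) has a local maximum at \(t=0\), so the Calculus-1 critical-point fact forces \(\phi'(0)=0\), a contradiction. (The local-minimum case is the same.)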

  • In the assignment due Friday Jan. 25, I've listed your first hand-in assignment (a subset of the problems with due-dates of Jan. 23 or earlier), as well as new problems due Jan. 25. Start writing up the hand-in problems early, so that you don't have a time-crunch. Before you start writing up anything to hand in, make sure you read the rules for hand-in homework.
  • F 1/25/19
  • New homework:
    • Read Section 1.6 (all of it).

    • 1.6/ 1a, 5 (read Example 6.1, pp. 29–30, first), 6.
  • Old homework to be handed in. (Label this "Homework 1", for purposes of keeping track of grades.) Hand in the following problems:
    • 1.2/ 3e, 5
    • 1.3/ 4
    • 1.4/ 6, 8c (sketch not required), 9 (just for the tangent line to the helix at \(\alpha(\pi/4)\))
    • 1.5/ 6a, 7def, 8, 10 (just the "local maximum" case)
    • Non-book problems 2cd, 3
    Please make sure you've followed the rules for hand-in homework.
  • M 1/28/19
  • 1.6/ 8, 9 (just the first two sentences)

  • Do non-book problems 4, 5.

  • Read Section 2.1 (almost all of which should be review from Calculus 3). Also review the facts stated in exercises 2.1/ 2, 5. Fact 2(c) is called the triangle inequality.

  • 2.1/ 1, 3, 4. (You may also remember from Calculus 3 the facts stated in problem 4.) In problem 4, facts (b)–(d) can be deduced quickly from fact (a) and properties of determinants.

  • Read Section 1.7 through at least Example 7.3 (which ends with the paragraph "Thus the effect of \(F\) ..." on p. 36).
  • W 1/30/19
  • 2.1/ 6, 7, 9–11.

  • Continue reading Section 1.7, through at least Corollary 7.6.
  • F 2/1/19
  • 2.1/ 12. Either assume that the interval \(I\) contains \(0\), or replace the \(0\) in \(f(0), g(0)\) and the integral by an arbitrary \(t_0\in I\).

            One motivation for this problem is the following (there will be another motivation later when we study surfaces). As we will soon discuss for the case \(n=3\), the unit tangent vector field \({\bf T}\) associated to a regular curve \(\alpha:I\to {\bf R}^n\) is defined by \({\bf T}(t)= \alpha'(t)/ \|\alpha'(t)\|\). In this problem, we are interested in the case \(n=2\)—curves in the plane, not three-space. For these curves we can express the unit vector \({\bf T}(t)\) as \( (f(t), g(t))\) (omitting basepoints for simplicity), where \(f,g:I\to{\bf R}\) are differentiable functions satisfying \(f(t)^2+g(t)^2=1\). Thus for each \(t\) there are numbers \(\theta(t)\) for which \(f(t)= \cos \theta(t)\) and \(g(t)=\sin \theta(t)\), but these numbers are not unique; for each \(t\) we can add to any given \(\theta(t)\) any integer multiple of \(2\pi\), say \(2\pi n(t)\), where \(n(t)\) is a (possibly) \(t\)-dependent integer. Call any function \(\theta:I\to {\bf R}\) that satisfies \((f(t), g(t))= (\cos \theta(t),\sin \theta(t))\) throughout \(I\) an angle function for \({\bf T}\).
            As we move along the curve from (say) \(\alpha(t_0)\) to \(\alpha(t)\), we'd like to have a continuous angle function for which \(\theta(t)-\theta(t_0)\) is the total angle through which \({\bf T}(t)\) has turned from "time" \(t_0\) to "time" \(t\). The question is: is there always a continuous angle function? Conceivably, some initially-chosen angle-function \(\theta\) might not be continuous, but \(t\mapsto \theta(t)+2\pi n(t)\) might be continuous for some choice of \(n(t)\).
            To illustrate where the issue lies, suppose we move counterclockwise around the circle \(x^2+y^2=25\), starting at \((5,0)=\alpha(0)\). Let's parametrize our circle by \(\alpha(t) = (5 \cos (2\pi t), 5\sin (2\pi t)), -\infty < t < \infty\), so that traveling from any time \(t_0\) to \(t_0+1\) constitutes a full trip around the circle. For each \(t\) there is a unique angle \(\theta_c(t)\in [0,2\pi) \) such that \({\bf T}(t)=(\cos \theta_c(t),\sin \theta_c(t))\). If we start at \(t=0\), we arrive back at our starting point at \(t=1\). During this journey, the vector \({\bf T}(t)\) rotates through a total angle of \(2\pi\), so we would like to define \(\theta(1)\) to be \(\theta(0)+2\pi\), even though \({\bf T}(1)= {\bf T}(0)\); we can't achieve this with the "confined" angle function \(\theta_c\). The total rotation keeps increasing continuously if we continue our counterclockwise travel. But the function \(\theta(t)= 2\pi t + \frac{\pi}{2}\) (here \({\bf T}(t)=(-\sin(2\pi t), \cos(2\pi t))\), which points a quarter-turn ahead of the position vector) has exactly the properties we want: it is continuous and satisfies \({\bf T}(t)=(\cos \theta(t),\sin \theta(t))\) for all \(t\in{\bf R}\). This angle-function is related to the discontinuous angle-function \(\theta_c\) by \(\theta(t)=\theta_c(t)+2\pi n(t)\), where \(n(t)=[t+\frac{1}{4}]\), the greatest integer less than or equal to \(t+\frac{1}{4}\).
            That's fine for the circle, but does every regular curve in \({\bf R}^2\) have some continuous angle-function for its unit tangent vector field? Problem 2.1/12 shows that the answer is yes, and that in fact there's a differentiable angle function.
            Note that the issue we saw (and surmounted explicitly) with the circle does not result from the circle's being closed. The same issue would have arisen had we taken, say, the spiral defined by \(\alpha(t) = (e^t \cos (2\pi t), e^t\sin (2\pi t)), -\infty < t < \infty\), or a curve whose unit tangent vector winds counterclockwise for a long enough while, then clockwise for a while, then counterclockwise for a while, etc. (The unit tangent vector can wind a lot even if the curve itself doesn't.)
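            If you'd like a preview of where problem 2.1/12 is headed (this is my sketch of the standard construction, not a substitute for doing the problem): since \(f^2+g^2=1\) on \(I\), one candidate for a differentiable angle function is
            \[ \theta(t) \;=\; \theta_0 + \int_{t_0}^{t} \bigl( f(u)\,g'(u) - g(u)\,f'(u) \bigr)\, du, \]
            where \(\theta_0\) is chosen so that \((f(t_0), g(t_0)) = (\cos\theta_0, \sin\theta_0)\). As a sanity check, for the circle above we have \(f(t)=-\sin(2\pi t)\) and \(g(t)=\cos(2\pi t)\), so the integrand is \(2\pi\sin^2(2\pi t) + 2\pi\cos^2(2\pi t) = 2\pi\), and (taking \(t_0=0\), \(\theta_0=\pi/2\)) the formula reproduces \(\theta(t) = \frac{\pi}{2} + 2\pi t\).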

  • Read Section 2.2. Based on your reading, get a head-start on the problems due Monday if you can (since there are quite a few); otherwise finish reading Section 1.7.
  • M 2/4/19
  • 2.2/ 1–4, 8, 10, 11 (assume \(\alpha\) is regular). In #3, in case you've forgotten or never learned the hyperbolic trig functions, the two in this problem are defined by \(\sinh t = (e^t-e^{-t})/2, \cosh t = (e^t+e^{-t})/2\).
        Problems 1, 3, and 4 are examples of something I mentioned in class: coefficients are fine-tuned so that, assuming you make no mistakes, the speed is the square-root of a recognizable square. In #3, see what happens if the coefficient of \(t\) in the last component of \(\alpha(t)\) is changed to anything other than 1. Similarly, see what happens if you change the coefficient in any of the three components of \(\alpha(t)\) in #1 or #4 (without changing the others).
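        To illustrate the "recognizable square" remark with a curve of the same flavor (an illustration of mine, not necessarily the exact curve in #3): for \(\beta(t) = (\cosh t, \sinh t, t)\),
        \[ \|\beta'(t)\| \;=\; \sqrt{\sinh^2 t + \cosh^2 t + 1} \;=\; \sqrt{2\cosh^2 t} \;=\; \sqrt{2}\,\cosh t, \]
        using \(\cosh^2 t - \sinh^2 t = 1\). Change the last component to \(2t\) and the quantity under the square root becomes \(\sinh^2 t + \cosh^2 t + 4\), which is no longer a perfect square, so the speed (and the arc-length integral) no longer simplifies nicely.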

  • Do non-book problem 6.

  • Finish reading Section 1.7.
  • W 2/6/19
  • Sect. 2.3/ 2, 3, 5, 7, 8, 10. Note: In any problem in which the binormal \({\bf B}\) and/or the torsion \(\tau\) appear, the assumption "wherever \(\kappa > 0 \)" is implicit.

        In #8, "rotating through \(+90^\circ\)" means "rotating counterclockwise by \(90^\circ\)". I prefer to use the terminology "unit tangent vector field" and "unit normal vector field" for \({\bf T}\) and \({\bf N}\) when introducing these objects, rather than just "unit tangent" and "unit normal". After these objects have been introduced, I don't mind the abbreviated names "unit tangent" and "unit normal" when the omitting "vector field" is not likely to cause confusion. Also, the unit normal (vector field) defined in this problem is best called by a name like "positive unit normal", to distinguish it from \(-{\bf N}\). This "positive unit normal" has the property that, for each \(s\), the orthonormal basis \( \{ {\bf T}(s), {\bf N}(s)\} \) of \(T_{\beta(s)}{\bf R}^2\) is positively oriented. One feature of the "positive unit normal" is that when you're traversing a smooth, simple closed Curve \( C\) in \({\bf R}^2\) "essentially counterclockwise" (meaning that the region enclosed by \(C\) is always on your left as you travel forward), \({\bf N}(s)\) is the inward-pointing unit normal to \(C\) at \(\beta(s)\). When dealing with simple closed curves, there is a different, common convention: the "conventional" or "preferred" unit normal is often the outward-pointing normal.
        Note that, in contrast to the situation for curves in \({\bf R}^3\), for unit-speed curves \(\beta\) in \({\bf R}^2\) we are able to single out (via the definition in problem 8) a "special" unit normal \({\bf N}(s)\) for all \(s\) in the domain of \(\beta\), including those for which \({\bf T}'(s)=0\). The reason is that for a nonzero vector \({\bf v}\in {\bf R}^2\), there are only two unit vectors \({\bf w}\) orthogonal to \({\bf v}\) rather than a circle's worth. Each of these \({\bf w}\)'s is the negative of the other, and we can use our notion of positively/negatively oriented bases of \({\bf R}^2\) to single out the choice of \({\bf w}\) for which \(\{{\bf v}, {\bf w}\}\) is a positively oriented basis.
        In part (d) of problem 8, the parenthetic instructions mean "identify the point \((p_1,p_2)\in {\bf R}^2\) with the point \((p_1,p_2,0)\in {\bf R}^3\)". The requirement that \(\tilde{\kappa}\) not change sign is unnecessary. However, if \(\tilde{\kappa}\) does change sign, then there will be some open \(s\)-intervals on which \(\tilde{\kappa}(s)=\kappa(s)\) and others on which \(\tilde{\kappa}(s)=-\kappa(s)\). With this identification of \({\bf R}^2\) with the \(xy\) plane in \({\bf R}^3\), you may also ask whether the unit normal \({\bf N}(s)\) in this problem is the principal unit normal at points where the latter is defined. The answer is no, in general. At each point, the two are the same up to sign, but the sign-relation between them switches wherever \(\tilde{\kappa}\) changes sign. Furthermore, changing the orientation of \(\beta\) (traversing its route in the opposite direction) changes the sign of this problem's \({\bf N}(s)\), but does not change the sign of the principal unit normal in \({\bf R}^3\) (see problem 7).

        Geometric interpretation of #5. For every \({\bf v}\in {\bf R}^3\), the map \(R_{\bf v}: {\bf R}^3\to {\bf R}^3\) defined by \(R_{\bf v}({\bf w})= {\bf v} \times {\bf w }\) is linear. If \({\bf v}={\bf 0}\) then \(R_{\bf v}\) maps every vector \({\bf w}\) to \({\bf 0}\), of course. If \( {\bf v} \neq {\bf 0}\), then every vector \({\bf w}\) can be expressed uniquely in the form \(c{\bf v}+ {\bf w}_\perp\), where \(c \in {\bf R}\) and \({\bf w}_\perp\) is perpendicular to \({\bf v}\). Since \(R_{\bf v}({\bf v})= {\bf v} \times {\bf v}= {\bf 0}\), the "interesting part" of \(R_{\bf v}\) is what this map does to vectors orthogonal to \({\bf v}\). The set of these vectors is a two-dimensional subspace of \({\bf R}^3\), the orthogonal complement \(V^\perp\) of the 1-dimensional subspace \(V=\mbox{\{all multiples of}\ {\bf v}\}\). For every \({\bf w}\in V^\perp\), the map \(R_{\bf v}\) rotates \({\bf w}\) by \(\pi/2\) within the plane \(V^\perp\), and multiplies the length by \( \| {\bf v} \| \). The sense of the rotation is counterclockwise as seen from the tip of \({\bf v}\); i.e. for every nonzero \({\bf w} \in V^\perp\), the ordered triple \( \{{\bf w}, R_{\bf v}({\bf w}), {\bf v}\} \) is a right-handed triple of mutually orthogonal vectors. For reasons beyond the scope of this course, the linear map \(R_{\bf v}\) is called an infinitesimal rotation.
        For \({\bf p}\in {\bf R}^3\) and \({\bf v}_{\bf p} \in T_{\bf p}{\bf R}^3\), we can analogously define the linear map \(R_{{\bf v}_{\bf p}}: T_{\bf p}{\bf R}^3\to T_{\bf p}{\bf R}^3\) by \(R_{{\bf v}_{\bf p}}({\bf w}_{\bf p}) =({\bf v}\times {\bf w})_{\bf p}\). The set of equations in problem 5 says that for all \(s\) at which the Frenet frame \( \{{\bf T}(s), {\bf N}(s), {\bf B}(s)\} \) is defined (i.e. those \(s\) for which \(\kappa(s)> 0\) ), the derivative of each element of the Frenet frame is given by applying the infinitesimal rotation \(R_{A(s)}\) to that element.
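        To make that last remark concrete, write \(A(s)\) for the vector field in problem 5; if I'm remembering the problem correctly, \(A = \tau {\bf T} + \kappa {\bf B}\) (often called the Darboux vector). Using \({\bf T}\times{\bf N}={\bf B}\), \({\bf N}\times{\bf B}={\bf T}\), and \({\bf B}\times{\bf T}={\bf N}\), one checks
        \[ A\times {\bf T} \;=\; \kappa\,{\bf N} \;=\; {\bf T}', \qquad A\times {\bf N} \;=\; -\kappa\,{\bf T} + \tau\,{\bf B} \;=\; {\bf N}', \qquad A\times {\bf B} \;=\; -\tau\,{\bf N} \;=\; {\bf B}', \]
        which is exactly the statement that each Frenet equation is obtained by applying the infinitesimal rotation \(R_{A(s)}\) to the corresponding frame field.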

  • F 2/8/19
  • Read (or finish reading) Section 2.3.

  • Sect. 2.3/ 1, 6, 9. In #6, you should find that \(r=1/\kappa(0)\). This quantity is called the radius of curvature of the corresponding Curve at the point \(\beta(0)\) (if the Curve does not cross itself at that point).

  • Read Section 2.4 through Example 4.4. (The rest of the section is optional reading.)

  • Sect. 2.4/ 1, 11ab, "11d", 12. I think you'll find #11 fun, and also #14 in the next assignment. See notes below.
    • In #11, in the definition of the curve \(\alpha^*\), for each \(t\) the quantity \(\frac{1}{\kappa(t)}{\bf N}(t)\) is treated not as a tangent vector in \(T_{\alpha(t)}{\bf R}^3\), but just as the vector part of this tangent vector (so that \(\alpha(t)+\frac{1}{\kappa(t)}{\bf N}(t)\) is a well-defined element of \({\bf R}^3\)). A similar comment applies in #13 (part of the next assignment).
    • 11(c) is optional, but you may enjoy it if you're good with computer graphics.

    • Exercise "11(d)": Let \(I\subset {\bf R}\) be the domain of \(\alpha\) and let \(t_0\) be an element of \(I\). Show that \( (\alpha^*)'(t_0)=0 \) if and only if \(\kappa'(t_0)=0=\tau(t_0)\).
          For "most" curves in \({\bf R}^3\) there are no \(t_0\) that meet these conditions. But if there is such a \(t_0\), then \( \alpha^*\) is not a regular curve, and its image may not be smooth.

    • In #12, observe that \(J\) rotates vectors counterclockwise by \(90^\circ\). Thus, the positive unit normal of problem 2.3/8, expressed in terms of \(t\), is \({\bf N}(t) = J({\bf T}(t))\).
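    • Here is the Frenet computation promised in the note on "11(d)", assuming for simplicity that \(\alpha\) has unit speed (a sketch; in the general regular case a speed factor appears, but the conclusion is the same). Differentiating \(\alpha^* = \alpha + \frac{1}{\kappa}{\bf N}\) and using \({\bf N}' = -\kappa\,{\bf T} + \tau\,{\bf B}\) gives
      \[ (\alpha^*)' \;=\; {\bf T} - \frac{\kappa'}{\kappa^2}\,{\bf N} + \frac{1}{\kappa}\bigl(-\kappa\,{\bf T} + \tau\,{\bf B}\bigr) \;=\; -\frac{\kappa'}{\kappa^2}\,{\bf N} + \frac{\tau}{\kappa}\,{\bf B}, \]
      which vanishes at \(t_0\) if and only if \(\kappa'(t_0)=0\) and \(\tau(t_0)=0\), since \({\bf N}\) and \({\bf B}\) are linearly independent.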

  • Do non-book "problem" 7.
  • M 2/11/19
  • In case you haven't already done so: in the posting for the assignment that was due 2/6/19, read the now-corrected version of the first paragraph below the problem-list, starting with the sentence before the red text.

  • Non-book problems 8, 9.

  • Sect. 2.4/ 13a, "13d", "13e", 14a. See notes below.

    • In #13, assume that \(\alpha\) is regular.

    • Exercise "13(d)": Let \(I\subset {\bf R}\) be the domain of \(\alpha\) and let \(t_0\) be an element of \(I\). Show that \( (\alpha^*)'(t_0)=0 \) if and only if \(\tilde{\kappa}'(t_0)=0\), i.e. if and only if \(t_0\) is a critical point of \(\tilde{\kappa}\). Hence if \(\kappa\) has any critical points, then \(\alpha^*\) is not regular, and the Curve it parametrizes may not be smooth. In "13(e)" you will see that for a closed curve \(\alpha\) satisfying a mild "nondegeneracy" condition, the image of the evolute is never smooth. Thus we have smooth Curves "giving birth" to non-smooth Curves!
          Exercise "13(d)" is, of course, the 2D analog of "11(d)" above. However, in contrast to the 3D case, in 2D it is very common for there to be values \(t_0\) for which \(\tilde{\kappa}'(t_0)=0\); for example all relative extrema of \(\kappa\) are critical points of \(\kappa\). If the Curve \(C\) parametrized by \(\alpha\) is a closed curve, then automatically \(\kappa\) will achieve an absolute maximum and absolute minimum, so there will be definitely be at least two points at which \(\alpha\) is not regular.
          For reasons that are not at all obvious, the curvature \(\tilde{\kappa}\) of a regular, closed, plane curve has at least two relative maxima and two relative minima. This is the Four-Vertex Theorem.

    • Exercise "13(e)": Show that if \(\tilde{\kappa}'\) changes sign at \(t_0\in I\) (i.e. if there is some \(\delta>0\) such that the continuous function \(\tilde{\kappa}'\) is positive throughout one of the intervals \((t_0-\delta,t_0)\),   \((t_0, t_0+\delta)\), and negative throughout the other), then \(\alpha^*\) has a cusp at \(t_0\).
          Of course the sign-change condition implies that \(\tilde{\kappa}'(t_0)=0\); even more strongly, it implies that \(\tilde{\kappa}\) has a relative extremum at \(t_0\). But the converse is false: having a relative extremum at \(t_0\) doesn't imply that \(\tilde{\kappa}'\) changes sign at \(t_0\) (there may be no interval \((t_0-\delta, t_0+\delta)\) in which \(t_0\) is the only point at which \(\tilde{\kappa}'\) is zero). However, if \(\tilde{\kappa}'(t_0)=0 \neq \tilde{\kappa}''(t_0)\), then \(\tilde{\kappa}'\) does change sign at \(t_0\). A critical point \(a\) of a smooth real-valued function \(f\) on an open interval \(I\) is called nondegenerate if \(f''(a)\neq 0\). Thus, if all critical points \(t_0\) of \(\tilde{\kappa}\) are nondegenerate, we are guaranteed that \(\tilde{\kappa}'\) changes sign at each critical point—and hence that the evolute of \(\alpha\) has a cusp at each such \(t_0\).

    • In #14(a):
      • I'm not requiring you to show any of the construction lines \(\lambda_t\). If you want to sketch a few, sketch them in a different color from what you're using for the ellipse and the evolute, and don't sketch so many that they interfere with viewing the ellipse and evolute themselves.
      • "\(a(t)\)" is a misprint for \(\alpha(t)\).
      • Find an explicit formula for \(\alpha^*(t)\).
      • You are allowed, not required, to use graphing software to sketch the evolute. Obviously, a computer can graph more accurately than you can graph by hand. But (i) from exercises "13de", you should be able to tell that the image of \(\alpha^*\) is non-smooth at exactly four points, each of which is a cusp, and (ii) from your formula for \(\alpha^*\) you should be able to write down exactly where the cusps are. Using (i) and (ii) you should be able to figure out at least a rough sketch of the evolute. (If you want a computer check, see the Python sketch in the last note of this list.)
      • On your graph of the ellipse and its evolute, indicate with arrows the direction in which the ellipse is traced out, and the direction in which its evolute is traced out.
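      • If you do want to use graphing software, here is a minimal Python/matplotlib sketch of the sort of thing I have in mind (my own illustration; the semi-axes 2 and 1 below are arbitrary choices, not necessarily the book's, and the evolute is computed numerically from \(\alpha^* = \alpha + \frac{1}{\tilde{\kappa}}J({\bf T})\) rather than from the explicit formula you're asked to find):

            import numpy as np
            import matplotlib.pyplot as plt

            # An ellipse with arbitrary semi-axes (chosen purely for illustration).
            a, b = 2.0, 1.0
            t = np.linspace(0.0, 2.0*np.pi, 2000)
            x, y = a*np.cos(t), b*np.sin(t)

            # Numerical derivatives with respect to t.
            xp, yp = np.gradient(x, t), np.gradient(y, t)
            xpp, ypp = np.gradient(xp, t), np.gradient(yp, t)

            # Evolute alpha* = alpha + (1/kappa~) J(T); for a plane curve (x(t), y(t))
            # this works out to alpha* = (x, y) + ((x'^2+y'^2)/(x'y''-y'x'')) * (-y', x').
            num = xp**2 + yp**2
            den = xp*ypp - yp*xpp
            ex, ey = x - yp*num/den, y + xp*num/den

            plt.plot(x, y, label="ellipse")
            plt.plot(ex, ey, label="evolute")
            plt.gca().set_aspect("equal")
            plt.legend()
            plt.show()

        A plot like this is no substitute for items (i) and (ii) above, but it's a quick check on your hand sketch and on your formula for where the cusps are.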

  • In the assignment due Friday Feb. 15, I've listed your second hand-in assignment (a subset of the problems with due-dates of Feb. 13 or earlier), as well as new problems due Feb. 15.
  • W 2/13/19
  • Sect. 2.4/ 16, 17d. For each of these, read the paragraph (for #16) or sentence (for #17) directly above the problem, which gives the motivation. In #16, restrict \(t\) to the open interval \( (-\sqrt{2/3}, \sqrt{2/3}) \)   (otherwise the curvature will also be zero at \(t=\pm \sqrt{2/3}\) ).

  • Non-book problem 10.

  • Sect. 1.7/ 1–5, 7. Read the instructions at the start of the exercises to see what map \(F\) the first four problems refer to.
  • F 2/15/19
  • New homework: None.

  • Old homework to be handed in. (Label this "Homework 2", for purposes of keeping track of grades.) Hand in the following problems:
    • 2.1/ 12
    • 2.4/ 14a, 16
    • Non-book problems 6bdf, 9, 10
    Please remember to follow the rule for hand-in homework that says, in blue boldface, "leav[e] wide margins (left and right and top and bottom) and enough other space for me to write comments." The first set of hand-in problems consisted mostly of computational problems and some easy proofs, so in most cases I had very few comments, and most of these were short. But you should be leaving me enough space to write sentence-long, or several-sentence-long, comments in the close vicinity of what I'm commenting on.
  • M 2/18/19
  • Sect. 3.1/ 1–3, 7, 8. Notes for #7: (i) In between problems 6 and 7, the definition of a group is given. (Those of you who've taken MAS 4301 or MAS 5311 will already know this definition.) (ii) It is more common to call \(E(3)\) the Euclidean group in dimension 3 rather than of order 3. The terminology "order of a group" is usually used only for finite groups, where it means the number of elements in the group.

  • Sect. 3.3/ 4. The hint given in the book at the top of p. 116, namely that \(C\) has an eigenvector with eigenvalue 1 (equivalently, that the matrix of \(C\) with respect to any basis of \({\bf R}^3\) has an eigenvector with eigenvalue 1), needs some justification. Here is an outline of an argument whose details you should fill in.
    1. A cubic polynomial with real coefficients has at least one real root. (Hint: if the variable in the polynomial is \(\lambda\), consider what happens as \(\lambda\to\infty\) and as \(\lambda\to -\infty\), and apply the Intermediate Value Theorem.)
    2. If \(\lambda_3\) is a real root of the real, cubic polynomial \(p(\lambda)\), then \(p(\lambda)/(\lambda-\lambda_3)\) is a quadratic polynomial \( q(\lambda)\) with real coefficients.
    3. If a quadratic polynomial \( q(\lambda) \) with real coefficients has no real roots, then its roots are a complex-conjugate pair \( \{a+ bi, a-bi\} \), where \(b\neq 0 \).
    4. Conclude from the above that if \(p(\lambda)\) is a cubic polynomial with real coefficients, then \(p(\lambda) = c(\lambda-\lambda_1)(\lambda-\lambda_2)(\lambda-\lambda_3)\) for some nonzero \(c\in {\bf R}\), some \(\lambda_1,\lambda_2\in {\bf C}\) that are either both real or are complex conjugates of each other, and some \(\lambda_3\in {\bf R}\).
    5. Apply the preceding to the characteristic polynomial of a \(3\times 3\) real matrix \(A\), i.e. the polynomial \(p_A(\lambda) = \det(A-\lambda I)\), to show that
      \(p_A(\lambda)= -(\lambda-\lambda_1)(\lambda-\lambda_2)(\lambda-\lambda_3)\), where \(\lambda_1 , \lambda_2\) and \(\lambda_3\) are as above. Recall that \(\lambda_1 , \lambda_2\) and \(\lambda_3\) are the eigenvalues of \(A\). Hence \(A\) has at least one real eigenvalue \(\lambda_3\).
    6. Recall that \(\det(A) = \lambda_1\, \lambda_2\, \lambda_3\). Hence if \(A\) is invertible, which is the case for all orthogonal matrices, then it has no zero eigenvalues, so every real eigenvalue is either positive or negative.
    7. If \(A\), as above, has a pair of complex-conjugate eigenvalues \(a\pm bi\), deduce that \(\det(A) = (a^2+b^2)\lambda_3\), and therefore that if \(\det(A) > 0\) then \(\lambda_3 > 0\).
    8. Since an orthogonal transformation preserves norms, and since there is at least one eigenvector for every real eigenvalue, the only possible real eigenvalues of an orthogonal matrix are \(\pm 1\).
    9. If \(A\) is the matrix of an orthogonal transformation of \({\bf R}^3\) and \(\det(A) > 0\), then no matter how many real eigenvalues \(A\) has (we saw above that it has either one or three), at least one of the eigenvalues must be 1, and there must be an eigenvector with this eigenvalue.
    Hint for the remainder of problem 3.3/4: show that if \({\bf e}_3\) is an eigenvector of an orthogonal transformation \(C\) of \({\bf R}^3\), then \(C\) preserves the space of all vectors perpendicular to \({\bf e}_3\) (i.e. if \({\bf v}\perp {\bf e}_3\), then \(C({\bf v})\perp{\bf e}_3\)), a two-dimensional subspace (the orthogonal complement of the span of \({\bf e}_3\)). Then apply this fact to a basis \(\{{\bf e}_1, {\bf e}_2\}\) of this orthogonal complement.
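    If you'd like a numerical sanity check on where this outline leads (purely illustrative; it proves nothing), the following Python snippet builds a random orthogonal matrix with determinant \(+1\) and prints its eigenvalues. You should see (up to round-off) the eigenvalue \(1\) together with a complex-conjugate pair of modulus \(1\), exactly as in steps 5 through 9.

        import numpy as np

        rng = np.random.default_rng(0)

        # Build a random orthogonal 3x3 matrix via QR, then flip one column's sign
        # if necessary so that the determinant is +1 (i.e. we have a rotation).
        Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
        if np.linalg.det(Q) < 0:
            Q[:, 0] = -Q[:, 0]

        vals = np.linalg.eigvals(Q)
        print("det =", np.linalg.det(Q))
        print("eigenvalues =", vals)      # one eigenvalue ~1, plus a conjugate pair
        print("moduli =", np.abs(vals))   # all approximately 1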
  • W 2/20/19
  • Sect. 3.2/ 3
  • F 2/22/19
  • Sect. 3.5/ 1, 3

  • Non-book problem 11.
  • M 2/25/19
  • 4.1/ 1, 3
  • W 2/27/19
  • 4.1/ 4–6, 8–10. Additional hint for #6: monkeys have tails. (Apes do not. Don't let me catch you calling King Kong a monkey if you want to pass this course.)
        If you've taken complex analysis, the function \(f\) in #6 may look familiar to you: it's the imaginary part of \(-(x + iy)^3\). Getting rid of the minus sign has the same effect as rotating the surface by \(\pi\) about the \(z\)-axis; it doesn't change the shape. Similarly, using the real part of \((x + iy)^3\) instead of the imaginary part has the same effect as rotating the surface by \(\pi/2\) about the \(z\)-axis.
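        In case you want to see the algebra behind those remarks (my expansion, for reference):
        \[ (x+iy)^3 \;=\; (x^3 - 3xy^2) + i\,(3x^2y - y^3), \qquad\mbox{so}\qquad \mathrm{Im}\bigl(-(x+iy)^3\bigr) \;=\; y^3 - 3x^2y, \]
        and in polar form \(x+iy = re^{i\theta}\) the real and imaginary parts of \((x+iy)^3\) are \(r^3\cos 3\theta\) and \(r^3\sin 3\theta\), which makes the two rotation statements easy to check.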
  • F 3/1/19
  • 4.1/ 12

  • 4.2/ 1–4, 6a, 9. "Partial velocities" are defined in Definition 2.1 on p. 139. Optional: if you're good with computer graphics, do 6b for fun.
           In Calculus 3 you probably graphed hyperbolic paraboloids such as \(\{z=x^2-y^2\}\), and learned that they are often called "saddle surfaces". The saddle surface in problem 6, \(\{z=xy\}\), is another hyperbolic paraboloid. It can be obtained by rotating the surface \(\{z=\frac{1}{2}(x^2-y^2)\}\) through an angle \(\frac{\pi}{4}\) about the \(z\)-axis.
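           The claim in the last sentence is a two-line check: if the \((x,y)\)-coordinates are related to rotated coordinates \((u,v)\) by a rotation through \(\pi/4\), say
           \[ x = \tfrac{1}{\sqrt{2}}(u-v), \qquad y = \tfrac{1}{\sqrt{2}}(u+v), \qquad\mbox{then}\qquad xy = \tfrac{1}{2}(u^2 - v^2), \]
           so the graph \(\{z=xy\}\) is the graph \(\{z=\tfrac{1}{2}(u^2-v^2)\}\) described in the rotated coordinates.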
  • M 3/11/19
  • No new homework (spring break should be a break).
  • W 3/13/19
  • No new homework; just study for the midterm.
  • F 3/15/19
  • Be aware that O'Neill often chooses to omit the composition-symbol "\(\circ\)" when composing functions; e.g. his "\({\bf x}{\bf x}^{-1}\alpha \)" in the proof of Lemma 3.1 on p. 150 is what I write as \({\bf x}\circ {\bf x}^{-1}\circ \alpha \), and his "\({\bf x}^{-1}{\bf y}\)" is what I write as \({\bf x}^{-1}\circ {\bf y}\).

  • 4.3/ 1, 2, 3bc, 6, 11ab, 13, 14. In problem 3, part (a) gives the context for parts (b) and (c), namely Corollary 3.4 on p. 151. We proved this corollary in class, except that I forgot to state uniqueness. Also, "Jacobian" in 3c means the Jacobian determinant, i.e. the determinant of the Jacobian matrix.

  • In place of the uniqueness statement in Corollary 3.4, prove the following: there exist unique functions \(\bar{u}, \bar{v},\) such that the equation at the top of p. 152 holds for all \( (u,v)\) in the domain of \({\bf x}^{-1}\circ {\bf y}\), and these uniquely determined functions are differentiable. (This is stronger than what Corollary 3.4 states. The functions \(\bar{u}, \bar{v}\) for which the above equation holds are not just unique among differentiable functions; they are unique, period. O'Neill's wording would have been equivalent to this if he'd placed a comma after "unique" and a comma after "differentiable".)
  • M 3/18/19
  • 4.3/ 4, 5, 7. In 4b, the left-hand sides of the two equations should actually be \( \left({\bf x}_u[f]\right)\circ {\bf x}\) and \( \left({\bf x}_v[f]\right)\circ {\bf x}\), in order for the left- and right-hand sides of the equations to be defined on the same domain. (Thanks go to Kenneth DeMason for pointing out the discrepancy.) Problem 4a is useful for doing 4b.
  • W 3/20/19
  • 4.3/ 9, 10, 12
  • Read Section 4.4.

  • Under the assignment due Friday Mar. 22, I've listed your third hand-in assignment (a subset of the problems with due-dates of Mar. 20 or earlier), as well as new problems due Mar. 22.
  • F 3/22/19
  • New homework: Read the handout on differential forms (on \({\bf R}^3\)) through at least the first three lines on p. 6. This was posted in January when we were covering O'Neill's Section 1.6, but the same ideas apply to differential forms on surfaces. To define differential forms on surfaces, we just replace \({\bf R}^3\) in the handout's Definition 1.7 (and beyond) with \(M\). In statement (4) on p. 3, for surfaces we can replace "\(k > 3\)" with "\(k > 2\)".
           The ideas after the first three lines on p. 6 also have analogs for surfaces, but you can't get them just by replacing \({\bf R}^3\) with \(M\). We'll talk about these later in the semester if time permits.

  • Old homework to be handed in. (Label this "Homework 3", for purposes of keeping track of grades.) For problems that have answers (or partial answers) in the back of the book, you won't get credit just for copying what's the back of the book. You have to show valid work that leads to the answers.
           Hand in the following problems:
    • Non-book problem 11a.
    • 3.5/ 3 (second part only). This problem-part can be done either from what you figure out in the first part of the problem, or by using non-book problem 11a.
             Problem 3 on your midterm was based on the two problems above (non-book problem 11a and 3.5/ 3). If the exam's typo "\(\sqrt{4}\)" is corrected to "\(\sqrt{3}\)," the orthogonal-transformation part of the isometry \(F\) in exam-problem 3b is exactly the same as the isometry in 3.5/ 3.

    • 4.1/ 4ab (prove your answers), 5, 8
    • 4.2/ 4
    • 4.3/ 3c (in your writeup you may assume the result of 3b), 4
  • M 3/25/19
  • No new homework.
  • W 3/27/19
  • 4.4/ 1–4, 6. Equation (3) referred to in problem 6 has two typos: the second \({\bf y}_v\) should be \({\bf y}_u\), and the second \({\bf x}_v\) should be \({\bf x}_u\).
  • F 3/29/19
  • Read Section 4.5 through Example 5.5.
  • M 4/1/19
  • Read all my comments on your returned homework. Any comment you don't understand is something you should see me about soon in office hours.
  • 5.1/ 1–4
  • W 4/3/19
  • 5.1/ 5–7

  • Read the remainder of Section 4.5. I probably won't have time to cover Section 4.5 in class, but will want to use some of this section's concepts, definitions, and results.
  • F 4/5/19
  • 5.1/ 9
  • M 4/8/19
  • 5.2/ 3
  • Read whatever portion of Section 5.2 you haven't read yet.
  • W 4/10/19
  • 5.2/ 1

    If there appears to be a typesetting problem beyond this point, reload the webpage. There is some weird interaction that I don't understand among HTML, the mathematics word-processing software being used, and some or all browsers, which sometimes results in what looks like gibberish or chopped-off lines. Whenever this has happened to me, simply reloading the page has fixed the problem.

  • 5.3/ 1, 3. In #1, a point \({\bf p}\in M\) is called planar if both principal curvatures at \({\bf p}\) are zero. In #3, there's a typo: the factor in front of the integral should be \(\frac{1}{2\pi}\). The \(k(\vartheta)\) in #3 is as in the proof of Theorem 2.5 on p. 213. (By the way, "\(\vartheta\)" is just a cursive version of \(\theta\), not a weirdly written v, or any other creative interpretation you may have come up with in problem 2.1/12! Feel free to write \(\vartheta\) whatever way you're used to writing \(\theta\).)
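        A standard ingredient you'll probably want in #3 (assuming, as I recall, that Theorem 2.5 is Euler's formula \(k(\vartheta)=k_1\cos^2\vartheta + k_2\sin^2\vartheta\)):
        \[ \frac{1}{2\pi}\int_0^{2\pi} \cos^2\vartheta \, d\vartheta \;=\; \frac{1}{2\pi}\int_0^{2\pi} \sin^2\vartheta \, d\vartheta \;=\; \frac{1}{2}, \]
        which is also why \(\frac{1}{2\pi}\) is the right normalizing factor: it turns the integral into an average over all tangent directions.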

  • Non-book problem 12.
  • F 4/12/19
  • 5.3/ 2, 4a, 6a. In #2, you're not assuming that the indicated conditions on \(S\) hold for all orthonormal pairs \( ({\bf u}_1, {\bf u}_2)\); you're just assuming that these conditions hold for one given orthonormal pair. In 2d, assume also that \({\bf p}\) is not an umbilic point.
  • M 4/15/19
  • 5.4/ 2–4, 6, 12, 13. In #4, "log" means natural log (the function you're probably used to seeing written as "ln").
  • W 4/17/19
  • Read Section 6.1 up through Example 1.6.
  • Read Section 6.2 up through Lemma 2.1.

  • Under the assignment due Friday Apr. 19, I've listed your fourth hand-in assignment (a subset of the problems with due-dates of Apr. 15 or earlier).
  • F 4/19/19
  • New homework: Read Section 6.4 through Example 4.6(1).

  • Old homework to be handed in. (Label this "Homework 4", for purposes of keeping track of grades.) Hand in the following problems:
    • 4.4/ 6
    • 5.1/ 2, 4bd, 6 (just the last question)
    • 5.3/ 2ac
    • Non-book problem 12e(i). For purposes of writing this up, you may assume parts (a)–(d) of this problem.
    • 5.4/ 4
  • M 4/22/19
  • No new homework
  • W 4/24/19
  • 5.4/ 8

  • Read Section 5.7 through Example 7.4, omitting Theorem 7.2. (You are certainly allowed to read Theorem 7.2; I'm just not requiring it.) Example 7.4 assumes that you're acquainted with the hyperbolic trig functions \(\sinh\) (pronounced like "cinch") and \(\cosh\) (pronounced the way it's spelled). In case you've forgotten, or never learned, these useful functions, their definitions are: \(\sinh x = (e^x-e^{-x})/2, \ \ \cosh x = (e^x+e^{-x})/2\). You should check that these functions satisfy the identity \(\cosh^2 x - \sinh^2 x =1\), and that \(\frac{d}{dx} \sinh x = \cosh x\) and \(\frac{d}{dx} \cosh x = \sinh x\). The function \(\sinh\) is one-to-one, and its range is the whole real line, so it has an inverse function \(\sinh^{-1}: {\bf R}\to {\bf R}\). (This inverse function can be computed explicitly, as you should be able to show: \(\sinh^{-1}(y) = \log\bigl(y+\sqrt{y^2+1}\bigr)\). However, this fact is not needed to understand Example 7.4.)
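        For the record, here is the computation behind that inverse-function formula (a short exercise): setting \(y = \sinh x = (e^x - e^{-x})/2\) and writing \(w = e^x\) gives \(w^2 - 2yw - 1 = 0\), so
        \[ e^x \;=\; w \;=\; y + \sqrt{y^2+1} \]
        (the other root, \(y - \sqrt{y^2+1}\), is negative and so cannot equal \(e^x\)), and therefore \(\sinh^{-1}(y) = x = \log\bigl(y + \sqrt{y^2+1}\bigr)\).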

  • In Section 6.4, read Example 4.6(2).

  • 6.5/ 5abe (see notes below about part (b)). The helicoid is an example we discussed in class, so I didn't assign homework involving it previously, but you may wish to look at exercise 4.2/ 5ab to review this surface. In part (e), "constant on orbits" means that \(K_t({\bf x}_t(u,v))\) is independent of \(t\).

        Notes about 6.5/5b: (i) Part (b) continues through the first line of p. 294. (ii) There are several typos in part (b):

    • The parenthetic sentence that starts with "Show that" near the end of p. 293, should end after the equation on the last line of p. 293. (I.e. there should be a right-parenthesis after the period at the end of the equation. Since a parenthesis is needed at the end, the typesetter would have done better not to place this equation on its own line.)
    • On the first line of p. 294, the phrase "for \(t < \pi/2\,,\)" should be deleted (including the comma).
    • On the first line of p. 294, the right-parenthesis at the end of the line should be deleted. (Effectively, this is the right-parenthesis that belonged at the end of the last line on p. 293).
    • The difference between the cases \(t < \pi/2\) and \(t=\pi/2\) is that \(F_t\) is one-to-one for \(t < \pi/2\) but not for \(t=\pi/2\). (Thus, \(F_t\) is a local isometry in both cases, and is an isometry when \(t < \pi/2\), but is not an isometry when \(t=\pi/2\).) Showing that \(F_t\) is one-to-one for \(t < \pi/2\) takes some algebraic skill. Unless you have a lot of experience playing around with equations, you may not succeed in showing one-to-one-ness. In that case, it's okay to content yourself with showing just that \(F_t\) is a local isometry.
    • For students who've taken enough topology to know what the following means: the map \(F_{\pi/2}\) is a covering map.
        Here is a nice animation showing the helicoid, the catenoid, and this family of local isometries carrying the helicoid to the catenoid: https://www.youtube.com/watch?v=E6JtYMVayeI . (The animation also shows the time-reversed version, but to go in that direction you have to "tear" the catenoid, because the map \(F_{\pi/2}\) isn't invertible.) There's an animation at https://en.wikipedia.org/wiki/Catenoid#Helicoid_transformation in which the graphics are "cleaner" (the surface-rendering in the YouTube video makes it look like the catenoid has a gap, instead of smoothly meeting itself), but the Wikipedia animation is so fast that I find it hard to follow.
