Homework Assignments
MAA 4402/5404: Functions of a Complex Variable
Fall 2022


Last updated Mon Dec 12 19:11 EST 2022

Homework problems will be listed on this page shortly before or shortly after we cover the relevant material. Since the amount of homework will far exceed what I could possibly grade, and since it is far too easy to obtain unauthorized solutions manuals for the Brown and Churchill textbook, I do not plan to collect homework. Nonetheless, the homework is mandatory. In this 4000/5000-level course, I expect you to have the maturity and self-discipline to do all assigned work by its due-date (at least most of the time). The chief enforcement mechanism will be your exams, but there are also in-class telltale signs that students have not been doing their homework.

If it becomes apparent that a lot of students are not keeping up with the homework, I may start collecting it—all of it—but grading only a small, unannounced subset of the problems. If I collect homework, you'll be required to write up all of it neatly and clearly, with some strict formatting requirements and subject to considerable restrictions on what sources you may consult. (For example, see the homework rules I've used in Advanced Calculus. See also the one-strike-you're-out "You cheat, you fail" rule that would apply if I end up collecting homework.) You would find this quite a lot more work than simply doing all the homework exercises.

The due-dates in the homework list are "do-by" dates. Sometimes my lectures will assume knowledge you're expected to have gained from the homework. The due-dates are also there to make it easier for you to pace yourselves, rather than postponing a lot of the homework till shortly before an exam—which would be a prescription for failure.

The list of problems will be updated frequently, usually in the late afternoon or evening after each lecture. Due dates, and assignments more than one lecture ahead, are estimates. In particular, due dates may be moved either forward or back, and problems not currently on the list from a given section may be added later (but prior to their due dates, of course).

If one day's assignment seems lighter than average, it's a good idea to read ahead and start doing the next assignment, which may be longer than average.

Unless otherwise indicated, problems are from our textbook (Brown and Churchill, Complex Variables and Applications, 9th ed.). Read the corresponding sections of the book before working the problems. (In this textbook, "the corresponding sections" for a set of exercises means "all sections since the last set of exercises". Each exercise-set tends to correspond to one topic that, in most textbooks, would be a single section with subsections for subtopics. Instead, this edition of Brown and Churchill gives each sub-sub-subtopic its own 2–3 page section.) Don't read only the examples, and don't try the homework problems first and refer to the textbook only if you get stuck.

Exam dates and some miscellaneous items may also appear below.

Date due page # / problem #s
F 8/26/22
  • Read the Class home page, and Syllabus and course information handout. Also read everything on the present page preceding this assignment-chart.

  • pp. 4–5/ 1–11.
           Note for #8: In class we showed that the complex numbers 0 and 1 are, respectively, additive and multiplicative identities. So all you need to do in #8 is to show that these are, respectively, the only additive and multiplicative identities. (In mathematics, the word unique is used with precision! An object with some given property [or properties] is unique if [and only if] it is the only object with that property [or those properties]. This is much stricter than the colloquial usage of "unique" for something that's merely very unusual.) Suggestion: To show uniqueness in part (a), show that if two complex numbers, say \(0\) and \(0'\), are additive identities then \(0=0'\). To show uniqueness in part (b), use an analogous strategy.
  • M 8/29/22
  • (a) Compute \( i^3 \) and \( i^4 \). (b) Show that \( i^{n+4}=i^n\) for every integer \( n \). (c) Use parts (a) and (b), together with your knowledge of \(i^1\) and \(i^2\), to determine \(i^n\) for every integer \(n\).
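If you'd like a quick numeric sanity check of parts (a)–(c), the few lines of Python below (Python writes \(i\) as 1j) illustrate the four-cycle of powers of \(i\). This is only a spot-check at finitely many exponents, not the proof that part (b) asks for.

```python
# Floating-point spot-check (not a proof) that i^(n+4) = i^n:
for n in range(-8, 9):
    assert abs(1j ** (n + 4) - 1j ** n) < 1e-12

# The four values in the cycle, starting from i^0:
cycle = [1j ** n for n in (0, 1, 2, 3)]  # 1, i, -1, -i (up to rounding)
```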

  • On p. 7, read the discussion about the binomial formula.

  • pp. 7–8/ 1–8. The point of #1 is to illustrate that it is possible for an algebraic combination of complex numbers to end up being real. "Random" combinations of arbitrary complex numbers will not be real, of course.

  • pp. 13–14/ 5 (read Section 4 first), 8, 9

  • Read Section 6. (You can skip over items that I covered in class, but there are a few relations, like equation (2), that I didn't go over.)

  • pp. 16–17/ 1abc, 2, 3 (I did half of #3 in class), 10b, 11, 12, 13
  • W 8/31/22
  • pp. 16–17/ 15

  • pp. 23–25/ 1, 6.

  • Read Sections 7–9. I'll go over some of this in class, but there's more here that you need to know than there's time for me to cover in lecture.

  • Based on your reading, get started on the problems from pp. 23–25 listed in the next assignment.
  • F 9/2/22
  • pp. 23–25/ 3–5, 9, 10

  • Read Sections 10 and 11. I'll go over some of this in class, but not all. The problems below require you to have done most of this reading.

  • pp. 30–32/ 1–4
  • W 9/7/22
  • pp. 30–32/ 5–9.
        In #6, a zero of the polynomial \(p(z)=z^4+1\) is a complex number \(z\) for which \(p(z)=0\). (More generally, a zero of any complex-valued or real-valued function \(f\) on a domain \(D\) is an element \(z\in D\) for which \(f(z)=0\).)
        In #7, "unity" means the number 1. It is common to refer to roots of 1 as "roots of unity", although in most other contexts "unity" and "1" are not synonymous.
        In #8, observe that the quadratic formula is missing the "\(\pm\)" sign you're used to. Why? (It's not a mistake.)
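As an optional numeric spot-check related to #6 and #7 (Python's cmath module handles complex exponentials), the sketch below verifies that the four numbers \(e^{i(\pi/4+k\pi/2)}\) are zeros of \(z^4+1\), and that the cube roots of unity solve \(z^3=1\). The particular sample roots are my own illustration, not the book's notation.

```python
import math, cmath

# The four zeros of p(z) = z^4 + 1 are the fourth roots of -1, namely
# e^{i(pi/4 + k*pi/2)} for k = 0, 1, 2, 3.  Numeric spot-check:
zeros = [cmath.exp(1j * (math.pi / 4 + k * math.pi / 2)) for k in range(4)]
for z in zeros:
    assert abs(z ** 4 + 1) < 1e-12

# "Roots of unity" are roots of 1; e.g. the cube roots of unity solve z^3 = 1:
cube_roots = [cmath.exp(2j * math.pi * k / 3) for k in range(3)]
for w in cube_roots:
    assert abs(w ** 3 - 1) < 1e-12
```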

  • Read Section 12.
  • F 9/9/22
  • pp. 34–35/ 1–9. Reading Section 12 was part of the previous assignment, so even though there are two things we didn't define on Wednesday Sept. 7 (bounded set in \({\bf C}\), and accumulation point of a set in \({\bf C}\) ), you should be able to do all the problems in this list.
  • M 9/12/22
  • Read Sections 13 and 14.
          Better notation and terminology for "the mapping (or transformation) \(w=z^2\)" is "the mapping (or transformation) \(z\mapsto z^2\)"; the symbol "\(\mapsto\)" is read "maps to" or "goes to". (The arrow must have the little bar at the tail to be read this way!)
          In class, I mentioned Brown & Churchill's comment (on p. 37) that "[I]t is not always convenient to use notation that distinguishes between a function and its values." While this comment is true, almost all the inconvenience can be avoided by using the "\(\mapsto\)" symbol. When we're talking about a specific function defined by some formula, the "\(\mapsto\)" symbol eliminates the need for introducing an extra letter (e.g. "\(w\)" in "\(w=z^2\)", or "\(f\)" in "\(f(z)\)") that we may have no need for after defining the function. Simultaneously, this notation maintains the distinction between a function and its values. (For example, "\(z\mapsto z^2\)" is the function to which Section 14 is devoted; "\(z^2\)" represents only the values of this function, and does that much only if we agree in advance that the letter \(z\) always means "the domain-variable for any function of a complex variable.")
          For getting a feel for what the integer-power transformations \(z\mapsto z^n\) do geometrically, I find it most useful to consider images of straight lines or rays (half-lines) or line segments, images of arcs of circles centered at the origin, and images of regions bounded by such lines and/or rays and/or line segments and/or arcs (i.e., regions whose boundaries are formed from these lines/rays/segments/arcs). In Section 14, I don't find the first two pages very illuminating for picturing \(z\mapsto z^2\), so it's fine if you just skim these; Example 2 is more useful. (The first two pages are based on inverse images of straight lines, though the book doesn't make that explicit.)
  • In each case below, sketch the indicated set(s) and its/their image(s) under the transformation \(z\mapsto z^2\). (I.e. do what's done in Figures 19, 20, and 21, but with the sets I'm giving you for the left-hand picture.) As in the book, \(x+iy\) and \(re^{i\theta}\) are, respectively, the Cartesian and polar representations of the domain-variable \(z\).
    1. The set \( \{x={\rm constant}\}\), for each value of the constant.
          (You should find that, except for one value of the constant, the image is a parabola opening to the left, with its vertex on the positive \(u\)-axis. For the exceptional value of the constant, you should find that the image is the portion of the \(u\)-axis with \(u\leq 0\).)

    2. The set \( \{y={\rm constant}\}\), for each value of the constant.
          (You should find that, except for one value of the constant, the image is a parabola opening to the right, with its vertex on the negative \(u\)-axis. For the exceptional value of the constant, you should find that the image is the portion of the \(u\)-axis with \(u\geq 0\).)

    3. The ray \(\theta=\theta_0\), for each of the following values of \(\theta_0\): \(0; \ \pi/6;\ \pi/4;\ \pi/3;\ \pi/2;\ 3\pi/4;\ \pi;\ 7\pi/6\).
         (The main reason for these is to help with the next set of sketches. It's okay to combine this part of the problem with the next part.)

    4. The region \( \{0\leq r\leq 2,\ \ 0\leq \theta\leq\theta_0\}\), where \(\theta_0\) is each of the nonzero values in the preceding set of sketches.
          (You should find that, in each case, the image is a "piece of pie" (a sector)—where the "piece" can be most or all of the whole pie, depending on the value of \(\theta_0\).)

    5. The region \( \{1\leq r\leq 2,\ \ \pi/8\leq \theta\leq \pi/3\}\).

    6. The region \( \{0\leq x\leq 1,\ \ 0\leq y\leq 1\}\).
          (Use parts 1, 2, and 3 above to help with this one.)
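If you'd like to check your part-1 and part-2 sketches numerically: under \(z\mapsto z^2\) with \(z=x+iy\), the image point is \(u+iv\) with \(u=x^2-y^2\) and \(v=2xy\), and eliminating the parameter gives the parabolas described above. Here is a short Python spot-check, at sample values of my own choosing:

```python
# For the vertical line x = c (c != 0), eliminating y gives
# u = c^2 - v^2/(4c^2): a parabola opening left, vertex on the positive u-axis.
c = 1.5
for y in [-3.0, -1.0, 0.0, 0.5, 2.0]:
    w = complex(c, y) ** 2
    assert abs(w.real - (c ** 2 - w.imag ** 2 / (4 * c ** 2))) < 1e-12

# For the horizontal line y = c, the image satisfies u = v^2/(4c^2) - c^2:
# a parabola opening right, vertex on the negative u-axis.
for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    w = complex(x, c) ** 2
    assert abs(w.real - (w.imag ** 2 / (4 * c ** 2) - c ** 2)) < 1e-12
```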

  • pp. 43–44/ 1–4, 8.
  • W 9/14/22
  • Read Sections 15 and 16.

  • pp. 54–55/ 1,2ac, 3–5, 7–9.
  • F 9/16/22
  • pp. 54–55/ 6

  • Re-read the first three paragraphs after the "Last updated" line on this page, and take them seriously.

  • NONE of the homework I assign is optional. THIS INCLUDES READING.
    • If there is any previous homework that you haven't done yet, CATCH UP NOW. Remember that you're supposed to be doing homework three times a week (in most weeks), NOT saving it up to do once a week, or even less frequently.
          Remember that you're supposed to be doing all the homework by its "due by" date. If you occasionally find that impossible, that's okay, as long as it's truly occasional (no more than once or twice a month) and you catch up by the next due-by date.

    • If you're stuck on a homework problem, or want to check whether you've done it correctly, try to see me in office hours soon (within a week). I'll ask you to show me the work you've done on the problem. I won't show you how to do, say, a two-week-old homework problem that it's obvious you didn't spend more than two minutes on, or even look at until a day or two ago. My job is to help you learn the material well enough that it stays with you; it's not to help you cram something into short-term memory that will be purged after a looming exam.
  • Luck eventually runs out. If there is any homework problem you didn't do, assume you're going to see it on the next exam. The universe is out to get you.
  • M 9/19/22
  • Read Section 17.

  • In Section 18, read Theorem 1 and its proof (the paragraph that follows Theorem 1). Also read Theorem 4 and its proof. (We covered the rest of Section 18 in class.)

  • pp. 54–55/ 10–13
  • W 9/21/22
  • Read Sections 19–22. (Don't worry, these are short! And in class on Monday 9/19 we covered the first part of Section 19, formulas (1) and (2) in Section 20, and all of Section 21, so a lot of this reading will be review.)

        In Section 20 you'll note that the differentiation rules (1)–(6) look the same as Calculus 1 formulas; we can remember them by pretending that \(z\) is a real variable, and pretending that complex differentiation is "just like" real differentiation. (Neither of these "pretends" is true; they just provide a way to remember the formulas.) For the product, quotient, and chain rules, in this class it's more important for now that you simply know these rules than that you be able to derive them.

  • pp. 61–62/ 2, 5, 7, 8, 9

  • pp. 70–72/ 1
        For Friday's midterm, the cutoff for "fair game" material is Sections 21–22. (I consider Section 22 to be part of Section 21; the authors just decided to give the examples for Section 21 their own section number.)

        Any examples based on Section 21 are fair-game material, not just the specific examples in Section 22; the whole purpose of examples like those in Section 22 is to prepare you to do problems you haven't seen before that use the same principles. An analogous comment applies to the earlier sections of the book, and will apply to other exams: you're expected to be able to do straightforward examples based on the material in these sections, whether or not these specific examples are in the book.

        Most of the exam questions will be similar to problems that were assigned for homework. However, "fair game" material also includes some items that don't appear in homework problems, such as definitions and the results of theorems.

  • F 9/23/22 First midterm exam (assignment is to study for it).
      If you've been faithfully doing all the homework (working out the problems to completion, not looking at someone else's solution and thinking that you'll remember it), and regularly studying your notes to make sure you understand everything presented in class, you should be in good shape for the exam.
    M 9/26/22 No new homework.
    W 9/28/22
  • Read sections 25 and 26.
      In case you've forgotten or never learned what the hyperbolic cosine and hyperbolic sine functions cosh and sinh (pronounced "cinch") are, they're defined by \(\cosh x =\frac{e^x+e^{-x}}{2}, \ \ \ \sinh x =\frac{e^x-e^{-x}}{2} \ \ \ \mbox{where}\ x\in {\bf R}\ \mbox{(for now)}\). They satisfy \(\cosh'=\sinh\) and \(\sinh'=\cosh\).
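A quick numeric check of these definitions against the versions built into Python's math module, including the identity \(\cosh^2 x-\sinh^2 x=1\):

```python
import math

# Verify the exponential definitions of cosh and sinh at a few real points:
for x in [-2.0, -0.3, 0.0, 1.0, 2.5]:
    cosh_x = (math.exp(x) + math.exp(-x)) / 2
    sinh_x = (math.exp(x) - math.exp(-x)) / 2
    assert abs(cosh_x - math.cosh(x)) < 1e-12
    assert abs(sinh_x - math.sinh(x)) < 1e-12
    # The basic identity cosh^2 x - sinh^2 x = 1 follows from the definitions:
    assert abs(cosh_x ** 2 - sinh_x ** 2 - 1) < 1e-9
```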

    • pp. 76–77/ 1abc, 2, 3, 4, 7. For #7, use a strategy similar to the one used to prove the theorem on p. 73.
  • F 9/30/22 No new homework, in view of Hurricane Ian and class cancellations.
    M 10/3/22 Read Section 27. Because of the class-cancellations, I won't take time to talk about this section in class.
    W 10/5/22 No new homework. (More precisely, since I didn't get this assignment posted early enough, it's absorbed into the next assignment.)
    M 10/10/22
  • Read Sections 30, 31, and 32.

  • Based on your reading, try to get a head start on the exercises due Wednesday 10/12.

    We're skipping Sections 24, 28, and 29 for now. For the sections we've covered recently, I've already assigned all the exercises that I think are worth assigning. So, while I don't like having you go so long without exercises to do, followed by having more than usual to do, I have no worthwhile exercises (based on material we've already covered) to assign for Monday.

    But I am assigning reading for you to do in advance of Monday's class. We're behind schedule, so I need to pick up the pace. This means it's going to become important from now on that you read chapter-sections before I discuss that material in class. So from now on I'll regularly be assigning reading from sections I plan to cover in the next lecture. To the extent that I end up repeating something you've read, it won't be a waste of your time; it generally takes more than a first reading, or first listening, to understand new mathematics.

    Sometimes, you'll have questions even after reading the book's discussion and hearing mine. That's normal and good, and I encourage you to ask those questions; they're necessary even if they slow down the lecture a bit.

    The in-class questions that will unnecessarily slow the class down are the ones that you wouldn't be asking if you'd done the assigned reading on time. (Of course, you won't know which questions those might be if you haven't done the reading! The moral, as always, is: do all your homework—reading as well as exercises—on time.)

  • W 10/12/22
  • On p. 87, re-read the last paragraph; I didn't mention it in class.

  • pp. 89–90/ 1–4, 6–8, 10, 13, 14.

  • pp. 95–97/ 1–5, 8 (reworded as: "Find all solutions of the equation \( \underline{\mbox{L}}\mbox{og}\,z =i\pi/2 \) "), 11, 12

  • pp. 99–100/ 1–5

  • Read Sections 33, 34, and 35.
  • F 10/14/22 Notational reminder: The symbols "\( {\mathbb R} \)" and "\( {\mathbb C} \)" that I use in class are "blackboard bold" symbols, intended to mean the corresponding boldface letters; they were invented a long time ago to get around the problem that it's effectively impossible to write boldface on the blackboard. At the time these symbols were invented, it was just as impossible to achieve blackboard-bold when typing as it was to achieve boldface with chalk. But now that "\( {\mathbb R} \)" and "\( {\mathbb C} \)" are symbols that can be typeset, many authors use them to replace the more historical "\({\bf R}\)" and "\({\bf C}\)," perhaps because, unlike me, these authors are young enough to have grown up seeing the blackboard-bold symbols in textbooks, or perhaps so that students would not have to translate between the two fonts. I prefer the historical symbols, and expect my students to remember that my typed "\({\bf R}\)" and "\({\bf C}\)" mean the same thing as my blackboard symbols "\( {\mathbb R} \)" and "\( {\mathbb C} \)."

    Another notational reminder: "\(\tan^{-1}\)" denotes the arctangent function—also denoted \(\arctan\), and often called "inverse tangent"—and not the reciprocal of the tangent function (which is the cotangent function "\( \cot\)").

  • At the end of class on Wed. Oct. 12, I was in the middle of computations aimed at showing that the function \({\rm Log}\) is differentiable at every point that's not in the set \(C=\{x+0i: x\in {\bf R}\ \mbox{and}\ x\leq 0\}\subseteq {\bf C}\), and to compute the derivative. (I've written "\(x+0i\)" rather than just "\(x\)" only as an extra reminder that we're viewing real numbers as special complex numbers. Also note that there are two different capital C's I'm using here: the boldface \({\bf C}\) for the complex plane, and the un-bold italic \(C\) for the non-positive real axis.) Complete these computations by following the steps below. (This is not a long problem to do; it just looks like a lot to read because I'm guiding you through every detail.)
    1. Recall that in the open first quadrant (the quadrant of the \(xy\) plane in which both \(x\) and \(y\) are positive), $$\tan^{-1}\left(\frac{y}{x}\right) \ =\ \frac{\pi}{2}-\tan^{-1}\left(\frac{x}{y}\right)\ \ \ \ \ \ (*).$$
    2. Check that, for \(z=x+yi\), $$ \mbox{Arg}\, z = \left\{ \begin{array}{ll} \tan^{-1}\left(\frac{y}{x}\right) & \mbox{if}\ \ x>0 \ \ \mbox{(the open right half-plane)}, \\ \frac{\pi}{2}-\tan^{-1}\left(\frac{x}{y}\right) &\mbox{if}\ \ y>0 \ \ \mbox{(the open upper half-plane)}, \\ -\frac{\pi}{2}-\tan^{-1}\left(\frac{x}{y}\right) &\mbox{if}\ \ y<0 \ \ \mbox{(the open lower half-plane)}, \\ \tan^{-1}\left(\frac{y}{x}\right)+\pi &\mbox{if}\ \ y>0\ \ \mbox{and}\ x<0 \ \ \mbox{(the interior of Quadrant II)}, \\ \tan^{-1}\left(\frac{y}{x}\right)-\pi &\mbox{if}\ \ y<0\ \ \mbox{and}\ x<0 \ \ \mbox{(the interior of Quadrant III)}. \end{array} \right. $$ (I miswrote the last line in class.) Note that the open right half-plane and open upper half-plane overlap; the interior of Quadrant I belongs to both. However, because of equation \( (*) \), the formulas on the first and second line of the "table" agree on the overlap. Similarly the open right half-plane and open lower half-plane overlap—the interior of Quadrant IV belongs to both— but part of what you're checking is that the formulas on the first and third line of the table agree on the interior of Quadrant IV.
        The first three lines of the table are sufficient for the goals of this problem; I've included the last two lines just to show that in each of the four open quadrants, not just Quadrants I and IV, \({\rm Arg}\ z\) can be expressed by two formulas—one involving \(\tan^{-1}\left(\frac{y}{x}\right)\), the other involving \(\tan^{-1}\left(\frac{x}{y}\right)\)—that agree in that quadrant.

    3. Let \(D\) be the region consisting of all complex numbers that do not lie on the nonpositive real axis \(C\). Check that every \(z\in D\) is accounted for by at least one of the top three lines of the table. (Most of these points are accounted for twice, but points on the positive or negative \(y\)-axis are accounted for only once.) In other words, \(D\) is the union of the three open half-planes mentioned in the top three lines of the table.
          Check also that we can't "get away with" just two open half-planes; i.e. that \(D\) is not the union of any two of those half-planes.

    4. So far, you've checked that every \(z\in D\) is contained in an open set (an open half-plane) in which \({\rm Arg}\ z\) is given by at least one of the following three formulas: $$ v_1(x,y):= \tan^{-1}\left(\frac{y}{x}\right), \ \ \ v_2(x,y):=\frac{\pi}{2}-\tan^{-1}\left(\frac{x}{y}\right), \ \ \ v_3(x,y):= -\frac{\pi}{2}-\tan^{-1}\left(\frac{x}{y}\right),$$ hence in which \({\rm Log}\ z\) is given by at least one of the formulas $$u(x,y)+i v_1(x,y), \ \ u(x,y)+i v_2(x,y), \ \ u(x,y)+i v_3(x,y),$$ where \(u(x,y)=\ln\sqrt{x^2+y^2}=\frac{1}{2}\ln (x^2+y^2).\) Recalling that \(\frac{d}{dt}\tan^{-1} t =\frac{1}{1+t^2}\),   now check that $$ \frac{\partial}{\partial x} v_1(x,y) =\frac{\partial}{\partial x} v_2(x,y) =\frac{\partial}{\partial x} v_3(x,y) =\frac{-y}{x^2+y^2}$$ and that $$ \frac{\partial}{\partial y} v_1(x,y) =\frac{\partial}{\partial y} v_2(x,y) =\frac{\partial}{\partial y} v_3(x,y) =\frac{x}{x^2+y^2}\ .$$ (More precisely: on any open set for which the function \(v_j\) is defined (\(j=\) 1,2, or 3),   \( \partial v_j/\partial x =\frac{-y}{x^2+y^2}\)   and \( \partial v_j/\partial y =\frac{x}{x^2+y^2}\ \).)

    5. Check that the function \( {\rm Log}\) satisfies the Cauchy-Riemann equations on each of the open half-planes in the top three lines of our earlier table, and thereby show that \({\rm Log}\) is analytic on \(D\).

    6. With \(v(x,y)\) equal to \(v_1(x,y), v_2(x,y), \) or \(v_3(x,y)\) on an appropriate half-plane, check that $$ \frac{d}{dz} {\rm Log}\ z = u_x +iv_x \ =\ \frac{x-iy}{x^2+y^2} \ =\ \frac{1}{z}\ . $$ You have now established that \(\frac{d}{dz} {\rm Log}\ z = \frac{1}{z}\) on the whole region \(D\).
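After finishing steps 1–6, you can spot-check both the piecewise \({\rm Arg}\) formulas and the conclusion \(\frac{d}{dz}{\rm Log}\,z=\frac{1}{z}\) numerically. The sketch below uses Python's cmath, whose phase and log functions return the principal argument and principal logarithm; the sample points are my own and stay off the cut \(C\):

```python
import cmath, math

# Spot-check the piecewise formulas for Arg z against cmath.phase
# (which returns the principal argument, in (-pi, pi]):
samples = [3 + 2j, 3 - 2j, 0 + 2j, 0 - 2j, -1 + 2j, -1 - 2j]
for z in samples:
    x, y = z.real, z.imag
    arg = cmath.phase(z)
    if x > 0:
        assert abs(arg - math.atan(y / x)) < 1e-12
    if y > 0:
        assert abs(arg - (math.pi / 2 - math.atan(x / y))) < 1e-12
    if y < 0:
        assert abs(arg - (-math.pi / 2 - math.atan(x / y))) < 1e-12

# Difference-quotient check that d/dz Log z = 1/z away from the cut:
h = 1e-6
for z in samples:
    quotient = (cmath.log(z + h) - cmath.log(z)) / h
    assert abs(quotient - 1 / z) < 1e-4
```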

  • Read Section 36. (You will need to have read Section 35, which was part of the last assignment, first! These sections are ahead of where we are in class, so I do not expect that you will fully understand the material in these sections by Friday Oct. 14. However, you can learn and practice computational rules even before you fully understand why those rules are correct.)

  • p. 103/ 1a, 2a, 5, 7.
  • M 10/17/22

  • Read Sections 37, 38, and 39.
  • W 10/19/22

  • pp. 107–108/ 1–3, 5, 8, 9, 10. For #3, instead of doing the problem the way the book suggests, treat it as if it were problem "2(c)", with instructions analogous to 2(b). This is a more direct, easier-to-remember method for deriving the \(\cos(z_1+z_2)\) formula than the method in #3.
       Optionally, you may do the problem a second time using the instructions given for #3. The method given in the book is clever and elegant, but I don't want you to get the impression that any cleverness is needed for deriving the formula for either \(\cos(z_1+z_2)\) or \(\sin(z_1+z_2)\). I wish the book had included a problem-part 2(c), and had then given problem 3 as an alternate method for deriving the \(\cos(z_1+z_2)\) formula from the \(\sin(z_1+z_2)\) formula.

      Some general notes concerning Sections 37–39
      It's important to know the definitions of \(\sin z, \cos z, \sinh z,\) and \(\cosh z\), and that the hyperbolic and ordinary trigonometric functions can be expressed in terms of each other by replacing \(z\) with \(iz\). Fortunately, as we've seen (in class and in the book), the definitions of \(\sin z\) and \(\cos z\) are easily remembered by making use of the real-\(x\) definition "\(e^{ix}=\cos x + i\sin x\)" to express \(\cos x\) and \(\sin x\) in terms of \(e^{ix}\) and \(e^{-ix}\), then replacing \(x\) by \(z\) at the end. For \(\sinh z\) and \(\cosh z\), fortunately the definitions are the same as if \(z\) were real. (This last "fortunately" assumes you remember the definitions of \(\sinh x\) and \(\cosh x\) for real \(x\) from Calculus 1, which assumes that the syllabus for your Calculus 1 course did not omit this topic!)

      For trig functions of a real variable, you're expected to remember the derivatives of sine and cosine (with correct signs!) and the definitions of the other four trig functions. For these other four functions, you're expected either to remember their derivatives (exactly, including signs) or to be able to derive them quickly from the function-definitions and the quotient rule and/or chain rule. (A derivative-computation like this should take you one minute or less, not five minutes.) You're also expected to remember basic real trig identities (e.g. equations (3) and (5)–(11) on pp. 104–105, with \(z\) replaced by a real variable \(x\)). If you know these, then all you need to remember for the complex case is that the same formulas are valid if you cross out \(x\) and replace it with a complex variable \(z\).

      But if any of these trig identities, or the derivatives of the tangent, cotangent, secant, and cosecant functions, are not already (or still) in your memory, trying to (re-)memorize them now would not be a good use of your time. Imperfectly memorized formulas are often worthless, undeserving of any partial credit. What you should be able to do is derive formulas quickly from definitions. (The last two sentences are universal truths, not limited to Sections 37–39, or to this textbook, or to this course.)

      In the same vein, it is NOT worth memorizing Section 39's equations (3) and (4); it's more reliable just to plug "\(iz\)" in for \(z\) in the definition of \(\sin z\) and \(\cos z\), or in the definition of \(\sinh z\) and \(\cosh z\), and see what drops out. Most of the other formulas in Section 39 are also not worth memorizing. The ones I happen to remember are \(\cosh(-z)=\cosh z, \ \sinh(-z)=-\sinh z\), and \(\cosh^2 z -\sinh^2 z=1\). But even if I didn't remember these, the first two can be seen instantly just from looking at the definitions of \(\cosh z\) and \(\sinh z\), and the third can be derived in seconds from the definitions of \(\cosh\) and \(\sinh\). I have never, in my life, known equations (9)-(12) in Section 39 by heart (or, for that matter, equations (15) and (16) in Section 38). All that stays in my memory, for these equations, is that there are two-term formulas "like these" for \(|\sin z|^2, |\cos z|^2, |\sinh z|^2,\) and \(|\cosh z|^2,\) that I can derive whenever needed; I make no attempt to try to remember where the \(x\)'s, \(y\)'s, sines, cosines, sinh's, and cosh's go in these formulas.
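To convince yourself of the "plug in \(iz\)" relations without memorizing them, a numeric spot-check is easy. (Whether the two relations below are stated exactly as the book's equations (3) and (4) of Section 39 should be checked against the text; they are the standard links between the trigonometric and hyperbolic functions.)

```python
import cmath

# Spot-check, at a few complex points, of the trig/hyperbolic links and of
# the identities mentioned above:
for z in [0.3 + 0.7j, -1.2 + 2j, 2 - 0.5j]:
    assert abs(cmath.sin(1j * z) - 1j * cmath.sinh(z)) < 1e-12
    assert abs(cmath.cos(1j * z) - cmath.cosh(z)) < 1e-12
    assert abs(cmath.cosh(-z) - cmath.cosh(z)) < 1e-12
    assert abs(cmath.sinh(-z) + cmath.sinh(z)) < 1e-12
    assert abs(cmath.cosh(z) ** 2 - cmath.sinh(z) ** 2 - 1) < 1e-10
```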

      If you've forgotten some definitions and need to re-memorize them, that's fine; you can't derive definitions. There are a ton of formulas that can be derived from definitions, but only a handful of definitions. Trying to memorize a ton of derivations would also miss the point I'm trying to make: the number of definitions and ideas you need to know is small; the number of consequences is large. When preparing for an exam, if you spend too much time memorizing things that nobody should need to memorize, you won't have enough time to study something that really does need studying. (Everything in this paragraph is also a universal truth.)

  • F 10/21/22

  • Read Section 40.
      Note: Churchill and Brown (C&B) do not explicitly state what they would mean, generally, by "inverse function"; you have to infer their meaning from the contents of Section 40. Unfortunately, C&B do not warn the student that their meaning of "inverse function" is not consistent with the meaning taught in Calculus 1.

      In Calculus 1, and other courses, you are taught (correctly) that for a function \(f\) to have an inverse function, \(f\) has to be one-to-one (i.e. if \(x_1\neq x_2\), then \(f(x_1)\neq f(x_2) \)). If \(D\) is the domain on which \(f\) is defined, and \(R\) is the range of \(f\), and \(f\) is one-to-one, then we can define a true inverse function \(f^{-1}\). Specifically, if \(t\in R\), then \(f^{-1}(t)\) is the one and only \(s\in D\) for which \(f(s)=t\). (The superscript "\(-1\)" in \(f^{-1}\) is not an exponent; \(f^{-1}\) has nothing to do with \(\frac{1}{f}\).)

      But C&B's "inverse functions" are allowed to be multi-valued functions, which are not true complex-valued functions. In C&B's notation, for any of the trigonometric or hyperbolic functions \(f\), the notation \(f^{-1}(z)\) means the set of all complex numbers \(w\) for which \(f(w)=z\).

      (This is not a terrible thing to do in the context of complex analysis; I just wish that C&B told you explicitly that, for the purposes of this book, they're changing the meaning of something your instructors tried to teach you carefully in earlier courses—so that, if you remember the way "inverse function" was defined in earlier courses [as your professors hope you do], you won't start doubting whether your memory is correct.)

      Outside this class, the usual meaning of "\(\sin^{-1}\)" is the inverse function you learned about in Calculus 1 (and is also called "arcsine" and denoted "arcsin"). This function "\(\sin^{-1}\)" is not the inverse of the "full" sine function; it's the inverse of a restricted sine function (we restrict sine to the domain-interval \([-\pi/2, \pi/2]\), which then becomes the range of \(\sin^{-1}\)). On this restricted domain, the sine function is one-to-one, and has the same range \([-1,1]\) as the full sine function—an interval that then becomes the domain of the usual \(\sin^{-1}\) function.

      But as noted above for a more general function \(f\), in C&B the notation \(\sin^{-1}(z)\) means the set of all complex numbers \(w\) for which \(\sin(w)=z\). As seen in class, the range of the complex sine function is all of \({\bf C}\), so the domain of this "\(\sin^{-1}\)" is all of \({\bf C}\).

      This multi-valued function \(\sin^{-1}\) has single-valued, analytic branches that are defined by choosing a branch of log and a branch of the square-root function in equation (2). Similarly, \(\cos^{-1}, \sinh^{-1},\) and \(\cosh^{-1}\) have analytic branches that are defined by choosing a branch of log and a branch of the square-root function in equations (3), (8), and (9) respectively. Analytic branches of the multi-valued functions \(\tan^{-1}\) and \(\tanh^{-1}\) are determined by choosing a branch of log alone; no square-roots are involved.
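A single-valued branch of \(\sin^{-1}\) of the kind just described can be spot-checked numerically. The sketch below uses the standard formula \(\sin^{-1}z=-i\log\left(iz+(1-z^2)^{1/2}\right)\) with the principal branches of log and square root that Python's cmath supplies; compare it with the book's equation (2) before relying on it.

```python
import cmath

# One analytic branch of the multi-valued sin^{-1}, built from the
# principal branches of log and sqrt:
def asin_branch(z):
    return -1j * cmath.log(1j * z + cmath.sqrt(1 - z * z))

# Whichever branch is chosen, applying sin should recover z:
for z in [0.5, 2.0, -3 + 1j, 0.2 - 4j]:
    w = asin_branch(z)
    assert abs(cmath.sin(w) - z) < 1e-9
```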

  • p. 114/ 1–3, 5, 6, with the modified (or clarified) instructions below.
    • In 1–3, copy down the expressions you're being asked to find the values of, then close the book. I.e., compute these values without looking at any formulas in the book. This means doing calculations like the ones I did for \(\sin^{-1}(z)\) and \(\tan^{-1}(z)\) in class on Wednesday. (If you missed that class, see "Note about skipping class" below.)
          If you want to copy down the book's answers for 1ad and 2 before closing the book, so that you can check your answers against these, that's fine.
          Part 2a is optional. For part 2b, my instructions above are telling you not to do what the book says to do. (I want you to learn how to solve an equation like this one, not how to look up a formula for the solution.)

    • #3 could have been #1(e). As I discussed above, the book's (implicit) definition of \(\cos^{-1}(\sqrt{2})\) is the set of solutions \(z\) of \(\cos z =\sqrt{2}\).

    • For #5:
      • The formula I derived in class was \(\tan^{-1}w = \frac{1}{2i}\log \frac{1+iw}{1-iw}\). Show that, after changing my letter \(w\) to \(z\), the expression above equals the book's expression for \(\tan^{-1}z\) in equation (4) (p. 113).

      • In my haste to derive this formula in the last few minutes of class, I neglected to address two special cases:

        • In one step of my derivation (which started from the equation \(\tan z =w\)), I divided by \(1-iw\). Thus, I implicitly assumed that \(1-iw\neq 0\); i.e. that \(w\neq -i\). Go back through that derivation and, based on the equation I had before dividing by \(1-iw\), show that if \(1-iw=0\) then there is no \(z\) for which \(\tan z =w\). I.e. there is no \(z\) for which \(\tan z = -i\). (Thus there is no such thing as "\(\tan^{-1}(-i)\).")

        • In the next step of my derivation, I had \(e^{\rm something}= \frac{1+iw}{1-iw}\), which is a contradiction if the right-hand side is 0. Use this to show that there is no \(z\) for which \(\tan z = i\). (Thus there is no such thing as "\(\tan^{-1}(i)\).")

          Note: Obviously the right-hand side of equation (4) makes no sense if \(z=i\) or \(z=-i\), but that fact by itself doesn't mean that there are no complex numbers \(w\) for which \(\tan w= \pm i\); all this fact by itself tells you is that if there are any such \(w\)'s, they can't possibly be given by the right-hand side of equation (4). An argument like "If [a certain number] existed, then it would have to equal this other number that doesn't exist" is devoid of logic. There's no such thing as "being equal to something that doesn't exist;" the concept of "equals" makes sense only for things that exist. For example, an argument like "The equation \(0x=1\) has no solutions because if \(0x=1\) then \(x=1/0\), which doesn't exist [or which isn't defined]" is utter nonsense, relying on utterly meaningless symbol-manipulation. By contrast, an argument like "The equation \(0x=1\) has no solutions because, for any \(x\),    \(0x=0\neq 1\)" is completely valid.

    Note about skipping class.
        For students who may have voluntarily skipped Wednesday's class or some others: the policies I announced in the syllabus have not changed. If you're able to learn 100% of the material on your own, good for you; I'm not going to require you to waste your time by coming to class. But the way classes work (other than online, asynchronous classes) is by agreement that all the students will be in the same place at the same time, so that the instructor only has to say things once to the group, not individually to different students. Please remember that, among other things you may need to review, the syllabus said this:

      "If you choose not to attend regularly, you forfeit certain rights:

      • You may not ask questions in class.
      • You may not ask me questions outside of class either. This includes questions about the grading of your exam or homework.
      • The only use you may make of my office hours is to pick up or drop off work, and you may not phone or email me.
      • ...

      In other words, if you're regularly an absentee for voluntary reasons, you're on your own; I will grade your work (except for unnecessarily complicated homework) but will not spend any other time on you."

  • M 10/24/22 Second midterm exam (assignment is to study for it).
        For Monday's midterm, the cutoff for "fair game" material is Section 40. (We skipped Sections 24, 28, and 29, so you're not responsible for these.) The emphasis will be on material covered since the first midterm, but material from earlier sections can still appear on the exam, since it's used in later material.

        Most of the exam questions will be similar to problems that were assigned for homework. However, "fair game" material also includes some items that don't appear in homework problems, such as definitions and the results of theorems, derivations of formulas and equations, and items that may not be represented in the homework but that were covered in class and/or in the book (just in the sections we've covered, including sections that I only had you read). Questions that combine ideas that you've had practice with individually but may not have seen together in the same problem are also fair game.

    W 10/26/22 Merged into homework due Friday 10/28/22.
    F 10/28/22
  • Read Sections 41–43

  • pp. 119–120/ 1–5

  • pp. 123–125/ 1–5

  • Read Sections 44–45
  • M 10/31/22
  • pp. 132–135/ 1–5, 10, 11, 13.
        In #13, for now ignore the reference to #8. Exercise 13 is much more important than #8, does not need the result of #8 at all, and is better done without Exercise 8. If I were writing this book, I'd have put #13 much earlier in this set of exercises, earlier than #8. We will see later in the course that #13 has some very important consequences, and that the \(n=0\) case is a special case of a very important theorem (the one on p. 162, the Cauchy integral formula, applied to the constant function \(f(z)=1\)).
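        As a preview of why the \(n=0\) case of #13 matters: the Cauchy integral formula applied to \(f(z)=1\) says \(\int_C \frac{dz}{z-z_0}=2\pi i\) for a positively oriented circle \(C\) around \(z_0\), while integer powers of \(z-z_0\) other than the \(-1\) power integrate to 0. If you'd like to see this numerically, here is a Python sketch that discretizes the circle (not part of the assignment; the names are mine):

```python
import cmath, math

def circle_integral(g, z0=0.5+0.5j, radius=1.0, N=20000):
    """Riemann-sum approximation of the contour integral of g over the
    positively oriented circle z(t) = z0 + radius*e^{it}, 0 <= t < 2*pi."""
    total = 0.0
    for k in range(N):
        t = 2*math.pi*k/N
        z = z0 + radius*cmath.exp(1j*t)
        dz = 1j*radius*cmath.exp(1j*t)*(2*math.pi/N)
        total += g(z)*dz
    return total

z0 = 0.5+0.5j
print(circle_integral(lambda z: 1/(z - z0)))    # approximately 2*pi*i
print(circle_integral(lambda z: (z - z0)**3))   # approximately 0
```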
  • W 11/2/22
  • Read Section 46
  • pp. 132–135/ 8
  • F 11/4/22
  • Read Sections 47–48
  • M 11/7/22
  • pp. 138–140/ 1, 2, 4, 8
  • W 11/9/22
  • Read Sections 49–50
  • p. 147/ 1–3
  • M 11/14/22
  • Read Sections 52–53.
    Note: In the title of Section 53, "multiply" is an adverb modifying the adjective "connected" (just as is "simply" in the title of Section 52); the "ly" is pronounced "lee". This "multiply" has nothing to do with arithmetic.
      "Time flies like an arrow; fruit flies like a banana."

  • pp. 159–162/ 1
  • W 11/16/22
  • pp. 159–162/ 2, 3, 6
  • F 11/18/22 Third midterm exam (assignment is to study for it).
        For Friday's midterm, the cutoff for "fair game" material is Section 53. (We skipped Section 51, so you're not responsible for that. As of Monday 11/14, we haven't yet gone over Sections 52–53 in class, but you're still responsible for them. I expect to cover most of the material in these two sections on Wed. 11/16, but [before that class], you should be able to understand this material reasonably well from your reading and the homework I've assigned.) The emphasis will be on material covered since the second midterm, but material from earlier sections can still appear on the exam, since it's used in later material.

        Most of the exam questions will be similar to problems that were assigned for homework. However, "fair game" material also includes some items that don't appear in homework problems, such as definitions and the results of theorems, derivations of formulas and equations, and items that may not be represented in the homework but that were covered in class and/or in the book (just in the sections we've covered, including sections that I only had you read). Questions that combine ideas that you've had practice with individually but may not have seen together in the same problem are also fair game.

    M 11/21/22
  • Read Sections 54, 55, and 56
  • pp. 170–172/ 1
  • M 11/28/22
  • Read Section 57
  • pp. 170–172/ 2, 3, 5, 6, 9, 10

    A note about the Cauchy Integral Formula and its extension

      The extended version of the Cauchy Integral Formula (CIF) theorem asserts that if \(f\) is analytic inside and on a simple closed contour \(C\), oriented positively ("counterclockwise"), and \(z_0\) is any point interior to \(C\), then for all integers \(n\geq 0\) $$f^{(n)}(z_0)=\frac{n!}{2\pi i}\int_C \frac{f(s)}{(s-z_0)^{n+1}} \, ds, \ \ \ \ \ \ \ (*)$$ where we define "\(f^{(0)}(z_0)\)" to be \(f(z_0)\). The \(n=0\) case of (*) is the Cauchy Integral Formula; the \(n\geq 1\) case of (*) is what I called the Cauchy Integral Formula for derivatives in class. When Brown & Churchill write equations like (*), e.g. equations (4) and (5) on p. 165, they generally write "\(z\)" for the point I'm calling \(z_0\) in (*).

          If \(z_0\) is a point on \(C\), then the integral (*) generally does not exist, because the integrand is undefined at \(z_0\). However, one can ask what happens if \(z_0\) is a point exterior to \(C\).

          Answer: if \(z_0\) is exterior to \(C\), and the other hypotheses above are still in effect, then for every \(n\geq 0\), the value of the integral in (*) is 0.
      Reason: Let \(g(z)=f(z)/(z-z_0)^{n+1}\). Then the integral in (*) is simply \(\int_C g(s)\, ds\) (which is just another name for \(\int_C g(z)\, dz\); the variable of integration is a "dummy variable" for which any notation can be used, as long as it is not also being used with a different meaning in the same integral). But \((z-z_0)^{n+1}\) is an analytic function of \(z\) everywhere (an entire function), and is not zero at any point \(z\) inside or on \(C\) (since \(z_0\) is exterior to \(C\)), so \(g\) is analytic inside and on \(C\). Hence, by the Cauchy-Goursat Theorem, \(\int_C g(s)\, ds=0\).

          In the exercises on pp. 170–172, the above fact is relevant to the last parts of exercises 3 and 4. (Only the first of these was assigned for homework, but both could have been, and I was asked a question about #4 in office hours.)
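          If you like, you can watch this dichotomy numerically: discretizing the circle approximates the integral in (*), recovering \(f^{(n)}(z_0)\) when \(z_0\) is interior and (essentially) 0 when \(z_0\) is exterior. A Python sketch (not part of the assignment; the function name is mine):

```python
import cmath, math

def cif(f, z0, n=0, center=0.0, radius=1.0, N=40000):
    """Riemann-sum approximation of (n!/(2*pi*i)) * integral over C of
    f(s)/(s - z0)^(n+1) ds, where C is the positively oriented circle
    |s - center| = radius."""
    total = 0.0
    for k in range(N):
        t = 2*math.pi*k/N
        s = center + radius*cmath.exp(1j*t)
        ds = 1j*radius*cmath.exp(1j*t)*(2*math.pi/N)
        total += f(s)/(s - z0)**(n+1) * ds
    return math.factorial(n) * total / (2j*math.pi)

print(cif(cmath.exp, 0.3))       # z0 interior: approximately exp(0.3) = f(z0)
print(cif(cmath.exp, 0.3, n=2))  # approximately exp(0.3) = f''(z0), extended CIF
print(cif(cmath.exp, 5.0))       # z0 exterior: approximately 0
```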

  • W 11/30/22 Read Sections 58 and 59. (Exercises will be in the next assignment.)
    F 12/2/22

    pp. 177–178/ 2, 3, 8

    M 12/4/22

  • Read Sections 60, 61, 62, 64, and 65. (You may skip Section 63.)

  • Read "Summary of important facts about power series" below. This covers some of the same ground as Sections 62 and 64, but there are several important facts that are buried in the book's presentation. I think you'll find my summary easier to read; you may want to read it before the book sections.

  • Exercises moved to the next assignment

    ---------------------

    Summary of important facts about power series (without proofs).

    1. Vocabulary and Sigma-notation for power series. (a) We say that the series $$\sum_{n=0}^\infty a_n(z-z_0)^n \ \ \ \ \ \ \ \ (*)$$ is centered at \(z_0\). (b) In the notation "\(\sum_{n=0}^\infty a_n(z-z_0)^n\)", the expression \( (z-z_0)^0\) in the \(n=0\) term is interpreted as 1, even when \(z=z_0\). (This is not a definition of \(0^0\); it's a definition of what Sigma-notation means in the setting of power series. It's equivalent to defining \(\sum_{n=0}^\infty a_n(z-z_0)^n\) to mean \(a_0+\sum_{n=1}^\infty a_n(z-z_0)^n\).)

          When \(z=z_0\), (*) is therefore the series $$a_0 +0 +0 +0 +0 +\dots,$$ which converges (trivially) to \(a_0\). Thus, a power series always converges at its center point; it may or may not converge anywhere else.

    2. Radius of convergence. For this terminology, we enlarge the interval \([0,\infty) \subseteq {\bf R}\) to a set "\([0,\infty]\)" = \( [0,\infty) \cup\{\infty\}\). (This "\(\infty\)" corresponds to "positive infinity" in the real sense, not to the "point at infinity" in the Riemann sphere. The "\(<\)" relation on \([0,\infty)\) is extended to \([0,\infty]\) by declaring that \(a<\infty\) for every \(a\in [0,\infty).\) )

      Theorem A. Every power series \(\sum_{n=0}^\infty a_n(z-z_0)^n\) falls into exactly one of the following three cases:

      • Case 1: The series converges only when \(z=z_0\), i.e. only at its center point.

      • Case 2: There is a real number \(R>0\), called the radius of convergence, with the property that the series converges whenever \( |z-z_0| < R \) and diverges whenever \( |z-z_0| > R\). (Nothing is being asserted here about the \(z\)'s for which \(|z-z_0|=R\).)

      • Case 3: The series converges for every \(z\in {\bf C}\).

      We extend the "radius of convergence" (RoC) terminology to Cases 1 and 3 above by defining the RoC to be 0 in Case 1 and \(\infty\) in Case 3. The set of possibilities for radii of convergence is thus \([0,\infty]\). We say the RoC is positive if it is not 0, i.e. in Cases 2 and 3.

    3. When the radius of convergence \(R\) is positive, we define the open disk of convergence \(D\) to be the set \(\{z\in {\bf C}: |z-z_0| < R\}.\) (In the case \(R=\infty\), "\(|z-z_0| < R\)" holds for every \(z\in {\bf C}\), so the open `disk' of convergence is the whole complex plane.) Since the series converges for every \(z\in D\), it defines a function \(f:D\to {\bf C}\)  (for each \(z\in D\), we define \(f(z)\) to be the complex number to which \(\sum_{n=0}^\infty a_n(z-z_0)^n\) converges). We say that the series converges to \(f(z)\) [or just converges to \(f\) ] on \(D\) . We say that (*) is a power series representation of \(f\), or power series expansion of \(f\), centered at \(z_0\) (or based at \(z_0\)).

      Theorem B. Let \(D\) be the open disk of convergence of a given power series (*), and let \(f\) be the function to which the series converges on \(D\). Then \(f\) is analytic on \(D\). Furthermore, for every \(z\in D\), $$f'(z)=\sum_{n=1}^\infty na_n(z-z_0)^{n-1} =\sum_{n=0}^\infty (n+1)a_{n+1}(z-z_0)^n.$$ I.e. we can "differentiate (*) term-by-term" in the open disk of convergence. This is used on p. 193 of Brown and Churchill, Example 4.

      Note: Since analytic functions are continuous, the function \(f\) in Theorem B is continuous on the disk \(D\). In particular, \(f\) is continuous at \(z_0\). Thus, whenever a power series \(\sum_{n=0}^\infty a_n(z-z_0)^n\) has a positive radius of convergence, \(\lim_{z\to z_0} \sum_{n=0}^\infty a_n(z-z_0)^n = a_0.\)

    4. Below, for a function \(f\) that is analytic at a point \(z_0\), the notation \({\rm TS}(f,z_0; z)\) denotes the Taylor series of \(f\) at \(z_0\), with variable \(z\): $${\rm TS}(f,z_0; z)=\sum_{n=0}^\infty \frac{f^{(n)}(z_0)}{n!} (z-z_0)^n.$$ When I don't want to specify a name for the variable, or mention the variable, I'll simply write \({\rm TS}(f,z_0).\)

      Even when a Taylor series \({\rm TS}(f,z_0; z)\) converges, nothing above guarantees that it converges to \(f(z)\). Conceivably, there might be two (or more) different functions \(f\) and \(g\), defined on an open disk \(D\) centered at \(z_0\), with the property that \(f^{(n)}(z_0)=g^{(n)}(z_0)\) for all \(n\geq 0\). In this case, the Taylor series \({\rm TS}(f,z_0)\) and \({\rm TS}(g,z_0)\) would be identical, but could converge on \(D\) to at most one of the functions \(f\) and \(g\).

      For functions of a real variable, this phenomenon does occur. For example, for any \(x_0\in {\bf R}\), there are non-constant functions \(f:{\bf R}\to {\bf R}\) with the properties that (i) \(f^{(n)}(x_0)=0\) for all \(n\geq 0\), but (ii) \(f(x)\neq 0\) for every \(x\neq x_0\). For such a function \(f\), the Taylor series \({\rm TS}(f,x_0; x)\) is \(\sum_{n=0}^\infty 0(x-x_0)^n\), which converges to 0 for every \(x\). Thus, for such an \(f\), the Taylor series \({\rm TS}(f,x_0)\) converges on every open interval centered at \(x_0\), but does not converge to \(f\) on any open interval centered at \(x_0\).

      But for functions of a complex variable, this phenomenon does not occur. This is part (ii) of the following theorem.

      Theorem C. Let \(f\) be a function (of a complex variable) that is analytic at a point \(z_0\in {\bf C}\). Then:

        (i) The Taylor series \({\rm TS}(f,z_0)\) has a positive radius of convergence.

        (ii) Let \(D\) be the open disk of convergence of   \({\rm TS}(f,z_0).\) For each \(z\in D\), the series \({\rm TS}(f,z_0; z)\) converges to \(f(z)\).

        (iii) \({\rm TS}(f,z_0)\) is the unique power-series representation of \(f\) centered at \(z_0\), in the following sense:

          If \( (a_n)_{n=0}^\infty\) is a sequence of complex numbers for which the series (*) converges to \(f(z)\) on some open disk centered at \(z_0\), then (*) is the Taylor series  \({\rm TS}(f,z_0; z)\);   i.e. \(a_n= \frac{f^{(n)}(z_0)}{n!} \) for each \(n\geq 0\).

          Every student should know the Taylor series, centered at 0 (also known as Maclaurin series), of at least the functions exp, sin, and cos, as well as the function \(f(z)=\frac{1}{1-z}\) (whose Maclaurin series is the geometric series \(\sum_{n=0}^\infty z^n\)); see Brown and Churchill, p. 190. (It is also important to know the radii of convergence of these Maclaurin series: \(\infty\) for the Maclaurin series of exp, sin, and cos; and \(1\) for the geometric series \(\sum_{n=0}^\infty z^n\). For these series, the radii of convergence can be derived several ways, but for purposes of this course, it suffices to have one way of remembering how to figure them out: Theorem D, below.) The uniqueness guaranteed by part (iii) of Theorem C allows us to compute Taylor series of many functions using only our knowledge of the Taylor series (centered at 0) of these fundamental functions, without ever having to compute a derivative.

        Example: Let \(f(z)=ze^{z^2}\). Since the exponential function is entire, \(e^z\ =\ {\rm TS}(\exp,0;z) \ =\ \sum_{n=0}^\infty z^n/n!\) for every \(z\in {\bf C}\). Hence, for every \(z\in {\bf C}\), $$e^{z^2}\ =\ \sum_{n=0}^\infty \frac{(z^2)^n}{n!} \ =\ \sum_{n=0}^\infty \frac{z^{2n}}{n!},$$ so $$ f(z)\ =\ ze^{z^2}\ =\ z\sum_{n=0}^\infty \frac{z^{2n}}{n!} \ =\ \sum_{n=0}^\infty z \frac{z^{2n}}{n!} \ =\ \sum_{n=0}^\infty \frac{z^{2n+1}}{n!}.$$ This last series is a power series centered at 0 (it's of the form \(\sum_{m=0}^\infty a_m z^m\), just with \(a_m=0\) whenever \(m\) is even), and the algebra above shows that it converges to \(f(z)\) for every \(z\in {\bf C}\). Hence \(\sum_{n=0}^\infty \frac{z^{2n+1}}{n!}\) is the Taylor series \({\rm TS}(f,0; z)\). As a "side benefit", we can immediately deduce the values of every derivative of \(f\) at 0, without performing any derivative computations: $$f^{(2n)}(0)\ =\ 0$$ and $$\frac{f^{(2n+1)}(0)}{(2n+1)!} \ =\ \frac{1}{n!}$$ (so the even-order derivatives are all 0, while the odd-order derivatives are given by \(f^{(2n+1)}(0) \ =\ \frac{(2n+1)!}{n!} \)).
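        If you'd like independent confirmation of these coefficients, the formula \(a_m=\frac{f^{(m)}(0)}{m!}=\frac{1}{2\pi i}\int_C\frac{f(s)}{s^{m+1}}\,ds\) (the extended CIF again) can be discretized. A Python sketch (not part of the assignment; the function name is mine):

```python
import cmath, math

def maclaurin_coeff(f, m, radius=0.5, N=20000):
    """Approximate the m-th Maclaurin coefficient a_m = f^{(m)}(0)/m!
    as (1/(2*pi*i)) * integral of f(s)/s^(m+1) ds over the positively
    oriented circle |s| = radius, via a Riemann sum."""
    total = 0.0
    for k in range(N):
        t = 2*math.pi*k/N
        s = radius*cmath.exp(1j*t)
        ds = 1j*s*(2*math.pi/N)
        total += f(s)/s**(m+1) * ds
    return total/(2j*math.pi)

f = lambda s: s*cmath.exp(s*s)
for n in range(4):
    # expect a_{2n+1} = 1/n! and a_{2n} = 0 for f(z) = z*e^{z^2}
    print(2*n+1, maclaurin_coeff(f, 2*n+1).real, 1/math.factorial(n))
```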

      • The radius of convergence of a power-series representation \(\sum_{n=0}^\infty a_n(z-z_0)^n\) of a function \(f\) (this power series automatically being the Taylor series \({\rm TS}(f,z_0;z)\), by Theorem C part (iii), if \(f\) is analytic at \(z_0\)) can be predicted from the location of singularities of \(f\):

        Theorem D. Let \(E\subseteq {\bf C}\) be an open region on which a function \(f\) is analytic, and let \(z_0\in E\). Assume that \(E\) is as large as possible in the sense that \(f\) cannot be extended to an analytic function on an open set \(\tilde{E}\supsetneq E\). (The notation "\(\tilde{E}\supsetneq E\)" means that \(\tilde{E}\) strictly contains \(E\); i.e. that \(\tilde{E}\) contains \(E\) but is not equal to \(E\).) Then the radius of convergence of  \({\rm TS}(f,z_0; z)\) is the "distance from \(z_0\) to the nearest singular point of \(f\)  " (this is imprecise; see below).

            Note: the condition that \(E\) is "as large as possible" is equivalent to: every boundary point of \(E\) is a singular point of \(f\). For example, if \(f(z)=1/p(z)\), where \(p(z)\) is a polynomial whose set of roots is \(S=\{z_1, z_2, \dots, z_k\}\), then \(E\) is simply \({\bf C}-S\), and the boundary of \(E\) is \(S\) itself.

            "Distance from \(z_0\) to the nearest singular point of \(f\)": The precise wording that should go into the theorem this is "(i) \(\infty\) if \(f\) is entire, and (ii) distance to the set of singular points of \(f\) if \(f\) is not entire." I don't want to distract from this "summary of highlights" by listing everything that's wrong with the "nearest singular point" wording, or giving the precise definition of distance from a point to a nonempty set.

        Example: Let \(f(z)=\frac{1}{z^2+1}\). The set of singular points of \(f\) is \( \{\pm i\} \); \(f\) is analytic everywhere else. If we expand \(f\) as a power series centered at 2 (necessarily \({\rm TS}(f,2; z)\)), the radius of convergence is \(\sqrt{5}\), since [distance from 2 to \(i\)] = [distance from 2 to \(-i\)] \(=\sqrt{5}.\)
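        If you want to see Theorem D in action numerically in this example: using the partial-fraction decomposition \(\frac{1}{z^2+1}=\frac{1}{2i}\big[\frac{1}{z-i}-\frac{1}{z+i}\big]\) and expanding each term geometrically about 2, the partial sums of \({\rm TS}(f,2;z)\) settle down when \(|z-2|<\sqrt{5}\) and blow up when \(|z-2|>\sqrt{5}\). A Python sketch (not part of the assignment; the helper name is mine):

```python
def partial_sum_TS(z, z0=2, terms=3000):
    """Partial sums of TS(f, 2; z) for f(z) = 1/(z^2+1).
    Each 1/(z-c) (c = i, -i) is expanded geometrically about z0:
    1/(z-c) = -(1/(c-z0)) * sum_n q^n, where q = (z-z0)/(c-z0)."""
    q1 = (z - z0)/(1j - z0)     # ratio for the 1/(z-i) piece
    q2 = (z - z0)/(-1j - z0)    # ratio for the 1/(z+i) piece
    r1 = r2 = 1.0
    s = 0.0
    for _ in range(terms):
        s += (1/2j)*(-r1/(1j - z0) + r2/(-1j - z0))
        r1 *= q1
        r2 *= q2
    return s

print(partial_sum_TS(4.2), 1/(4.2**2 + 1))  # |z-2| = 2.2 < sqrt(5): agree
print(abs(partial_sum_TS(4.5)))             # |z-2| = 2.5 > sqrt(5): enormous
```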

  • W 12/6/22

  • Read Sections 66 and 68. (I would have also had you read Sections 69–72, but the summary I gave in the previous assignment covers everything you need to know from these sections.) I am skipping Section 73 because of time.

  • Read Sections 74–76 and 78. (Skip Section 77.)

  • Read the "Summary of some important facts ..." below. You may find it helpful to read my summary before the book sections (and before doing the exercises below).

    In case you don't have time to do both the reading and the exercises before Wednesday's class, do the reading first, and then do the exercises. Exception: You don't need to read the last section of my summary, "Examples of a removable singularity", unless you're having trouble understanding why a removable singularity is even a possible phenomenon.

  • pp. 195–197/ 1, 3, 9, 11
  • pp. 205–208/ 1–6
  • pp. 218–221/ 1, 4
  • pp. 237–238/ 1abce, 2
  • p. 242/ 1. (For the definition of principal part, see the top of p. 239.)

    Summary of some important facts about Laurent series and residues, and some examples

      Below, for \(R\in (0,\infty]\) write \(D_R(z_0)\) for the open disk \( \{z\in {\bf C}: |z-z_0| < R\} \), and write \(D_R^*(z_0)\) for the corresponding punctured open disk \(D_R(z_0) -\{z_0\} \ =\ \{z\in {\bf C}: 0 < |z-z_0| < R \}.\)

    • Recall (from the last assignment or the Monday Dec. 5 lecture) that if \(f\) is analytic at \(z_0\), then \(f\) has a unique power-series representation centered at \(z_0\). If \(f\) is analytic on some open annulus centered at \(z_0\), then \(f\) has at least one Laurent-series representation centered at \(z_0\), a series that converges on that annulus. However, \(f\) may have more than one Laurent-series representation centered at \(z_0\). For example, the function \(f(z)=\frac{1}{z-5}-\frac{1}{z-(6+i)}\) has three Laurent series centered at \(0\). One of these is the Taylor series centered at 0, which converges on the (whole) open disk \(D_5(0)\); another converges on the annulus \(\{z\in {\bf C}: 5<|z|<\sqrt{37}\}\) (\(\sqrt{37}=|6+i|\) = distance from 0 to \(6+i\)); and the third converges on the "infinite outer-radius annulus" \(\{z\in {\bf C}: \sqrt{37}<|z|\}\). The first of these has no negative-exponent terms; each of the other two does have negative-exponent terms.
          (Note: in other contexts, an annulus is the region between two concentric circles, and hence has inner and outer radii that are nonzero and finite. For the sake of convenience, in the context of Laurent series we still use the word annulus even if the inner "radius" is 0 [in which case the inner "circle" is a point] and/or the outer "radius" is \(\infty\) [in which case there is no outer circle at all].)

    • However, if \(z_0\) is an isolated singular point of \(f\), then for some \(R > 0\) there is a (unique) Laurent series $$\sum_{n=-\infty}^\infty c_n(z-z_0)^n\ :=\ \sum_{n=0}^\infty a_n(z-z_0)^n+ \sum_{n=1}^\infty b_n(z-z_0)^{-n} \ \ \ \ \ \ \ \ \ (**) $$ that converges to \(f(z)\) for every point \(z\) in the punctured open disk \(D_R^*(z_0)\) (a "zero inner-radius annulus"). If \(z_0\) is the only singular point of \(f\), we may take \(R=\infty\); otherwise the largest \(R\) that works is the distance from \(z_0\) to the nearest different singular point of \(f\).

    • The word "singularity" is a synonym for "singular point".

    • In practice, we rarely use the integral expressions (2) and (3) on p. 197 of Brown & Churchill to compute coefficients for a Laurent series. (These formulas are more important for proving theorems than for computations.) More often, we rely on algebraic manipulations and our knowledge of geometric series [specifically, that \(\frac{1}{1-z}=\sum_{n=0}^\infty z^n\) whenever \(|z|<1\) ], and the uniqueness of Laurent series on annular regions (or disks) of analyticity.

      Example: the two Laurent series for   \(f(z)=\frac{1}{z+2}\)   centered at 0.
      Since the only singularity of \(f\) is at \(-2\), there are two Laurent series centered at 0, one valid for \(|z| < |-2|=2\) and the other valid for \(|z| > 2\).

      • If \(|z| < 2\), then $$ \begin{eqnarray*} \frac{1}{z+2} &=& \frac{1}{2} \,\frac{1}{1-(-\,\frac{z}{2})}\\ &=& \frac{1}{2} \sum_{n=0}^\infty \left(-\,\frac{z}{2}\right)^n\ \ \ \ \ \ (\mbox{since}\ \big|-\,\frac{z}{2}\big|<1)\\ &=& \sum_{n=0}^\infty \frac{(-1)^n}{2^{n+1}}\, z^n. \end{eqnarray*} $$ This is the Laurent series for \(f\) on \(\{z\in {\bf C}: |z|<2\}\). (Every power series is also a Laurent series. A Laurent series doesn't have to have negative-exponent terms.)

      • If \(|z| > 2\), then \(|-\,\frac{2}{z}| < 1\), so $$ \begin{eqnarray*} \frac{1}{z+2} &=& \frac{1}{z} \frac{1}{1-(-\,\frac{2}{z})} \\ &=& \frac{1}{z} \sum_{n=0}^\infty \left(-\,\frac{2}{z}\right)^n\ \ \ \\ &=& \sum_{n=0}^\infty (-2)^n z^{-n-1}\\ &=& \sum_{n=1}^\infty (-2)^{n-1} z^{-n}\ . \end{eqnarray*} $$
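      Both expansions of \(1/(z+2)\) are easy to sanity-check numerically against the original function at sample points in their respective regions of validity. A short Python sketch (not part of the assignment; the helper names are mine):

```python
def inner_series(z, terms=200):
    """Laurent (= power) series of 1/(z+2) on |z| < 2:
    sum_{n>=0} (-1)^n z^n / 2^(n+1)."""
    return sum((-1)**n * z**n / 2**(n+1) for n in range(terms))

def outer_series(z, terms=200):
    """Laurent series of 1/(z+2) on |z| > 2:
    sum_{n>=1} (-2)^(n-1) * z^(-n)."""
    return sum((-2)**(n-1) * z**(-n) for n in range(1, terms+1))

for z in (0.5+0.5j, -1.2j):   # points with |z| < 2
    print(abs(inner_series(z) - 1/(z + 2)))   # tiny
for z in (3+4j, -5.0):        # points with |z| > 2
    print(abs(outer_series(z) - 1/(z + 2)))   # tiny
```

      Trying `inner_series` at a point with \(|z|>2\) (or `outer_series` inside) shows the partial sums failing to settle down, which is the numerical face of each series' region of validity.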

    • An isolated singularity is classified as exactly one of the following: removable singularity, pole, or essential singularity. Specifically, if \(z_0\) is an isolated singularity of \(f\), then the singularity is:

      • removable if the coefficients \(b_n\) in (**) are all zero (I give examples later showing how this can happen);
      • a pole if at least one of the \(b_n\) is nonzero, but only finitely many;
      • essential if infinitely many of the \(b_n\) are nonzero.

      For a pole, the largest \(n\) for which \(b_n\neq 0\) is called the order of the pole. Thus, if \(f\) has a pole of order \(m\) at \(z_0\), then on a small enough punctured disk \(D_R^*(z_0)\), \(f\) can be represented as a Laurent series of the form \(\sum_{n=-m}^\infty c_n(z-z_0)^n\).

    • A pole of order 1 is also called a simple pole. Thus, for a simple pole, the punctured-disk Laurent series (**) takes the form $$ \sum_{n=-1}^\infty c_n(z-z_0)^n= \frac{b_1}{z-z_0}+\sum_{n=0}^\infty a_n (z-z_0)^n.$$

    • If \(z_0\) is an isolated singularity of \(f\), the residue of \(f\) at \(z_0\), written \({\rm Res}_{z=z_0}f(z)\) or \({\rm Res}_{z_0}(f)\), is defined to be the coefficient of \( (z-z_0)^{-1}\) in the Laurent series representation of \(f\) on a small enough punctured open disk \(D_R^*(z_0)\) (the criterion for "small enough" being that \(f\) has no singular points in \(D_R^*(z_0)\)). Thus, this residue is the coefficient \(b_1\) in the notation of the right-hand side of (**), and \(c_{-1}\) in the notation of the left-hand side.

    • If \(f\) has a simple pole at \(z_0\), then on a small enough punctured open disk \(D_R^*(z_0)\), $$\begin{eqnarray*} (z-z_0)f(z) &=& b_1+\sum_{n=0}^\infty a_n(z-z_0)^{n+1}\\ &=& b_1+\sum_{n=1}^\infty a_{n-1}(z-z_0)^n \\ &=& \sum_{n=0}^\infty c_{n-1}(z-z_0)^n, \end{eqnarray*} $$ a power series with positive radius of convergence (at least \(R\)). Hence the function it represents on the disk \(D_R(z_0)\) is continuous (see the green "Note" after Theorem B in the last assignment). Thus, if \(f\) has a simple pole at \(z_0\), then $$ {\rm Res}_{z=z_0}f(z)=\lim_{z\to z_0} (z-z_0)f(z)$$ (since each side is equal to \(b_1\), in the notation above). Conversely, if the above limit exists, then \(f\) has either a simple pole at \(z_0\) or a removable singularity there. (The singularity is removable if \(\lim_{z\to z_0} (z-z_0)f(z)=0\), and is a simple pole if the limit exists but is not 0.)

      Example. Let \(f(z)=\frac{1}{(z^2+9)(z-1)}\), which has isolated singularities at \(z=\pm 3i\) and \(z=1\). Since \(z^2+9=(z-3i)(z+3i)\), we have $$ \begin{eqnarray*} \lim_{z\to 3i} (z-3i)f(z) &=& \lim_{z\to 3i} \frac{1}{(z+3i)(z-1)} \ =\ \frac{1}{(6i)(3i-1)} \ =\ \frac{1}{6(-3-i)} \ = \ \frac{-3+i}{6(-3-i)(-3+i)} \ =\ \frac{-3+i}{60} , \\ \\ \lim_{z\to -3i} (z+3i)f(z) &=& \lim_{z\to -3i} \frac{1}{(z-3i)(z-1)} \ =\ \frac{1}{(-6i)(-3i-1)} \ =\ \frac{1}{6(-3+i)} \ = \ \frac{-3-i}{6(-3+i)(-3-i)} \ =\ \frac{-3-i}{60} , \\ \\ \mbox{and}\ \ \ \lim_{z\to 1} (z-1)f(z) &=& \lim_{z\to 1} \frac{1}{z^2+9} \ =\ \frac{1}{10}\ . \end{eqnarray*} $$ Thus the residues of \(f\) at \(3i,\ -3i,\) and \(1\) are \(\frac{-3+i}{60}, \frac{-3-i}{60},\) and \(\frac{1}{10}\) respectively.

      Note: The above method works only for simple poles (poles of order 1). If \(\lim_{z\to z_0} (z-z_0)f(z)\) does not exist, that doesn't mean that there's no residue, or that the residue is 0, or that the residue can't be computed. It means that the singularity is either a pole of higher order or an essential singularity, and that the residue must be computed some other way (e.g. by figuring out the relevant Laurent series).
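      These limits can also be mimicked numerically: at a simple pole, evaluating \((z-z_0)f(z)\) at a point very close to \(z_0\) approximates the residue (by the Laurent expansion above, the error is roughly \(|a_0|\cdot\varepsilon\)). A crude Python sketch (not part of the assignment; the function name is mine):

```python
def f(z):
    return 1/((z*z + 9)*(z - 1))

def residue_at_simple_pole(f, z0, eps=1e-6):
    """Approximate Res_{z=z0} f at a simple pole by sampling
    (z - z0)*f(z) at the nearby point z = z0 + eps."""
    z = z0 + eps
    return (z - z0)*f(z)

print(residue_at_simple_pole(f, 3j), (-3+1j)/60)
print(residue_at_simple_pole(f, -3j), (-3-1j)/60)
print(residue_at_simple_pole(f, 1), 1/10)
```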

    • The Residue Theorem ("Cauchy's Residue Theorem" in the book). Read Section 76. This section is important but short; I have no better summary of my own.

    • Examples of a removable singularity

        Define \(g:{\bf C}-\{0\}\to {\bf C}\) and \(h:{\bf C}\to {\bf C}\) by $$ \begin{eqnarray*} g(z)&=&\frac{\sin z}{z}, \ \ \ z\neq 0,\\ h(z)&=& \left\{\begin{array}{ll}\frac{\sin z}{z} &\mbox{if}\ z\neq 0, \\ 2 & \mbox{if}\ z=0. \end{array}\right. \end{eqnarray*} $$     First consider \(g\). Observe that \(g\) is analytic on \({\bf C}-\{0\}\), but is not defined at \(0\), and hence has an isolated singularity at \(0\). Since \(\sin z\) equals its Maclaurin series \(\sum_{n=0}^\infty (-1)^{n}\frac{z^{2n+1}}{(2n+1)!}=z-\frac{z^3}{3!}+\frac{z^5}{5!} +\dots\) for all \(z\in {\bf C}\), when \(z\neq 0\) we have $$ \frac{\sin z}{z} = 1-\frac{z^2}{3!}+\frac{z^4}{5!} +\dots =\sum_{n=0}^\infty (-1)^{n}\frac{z^{2n}}{(2n+1)!}\ .$$ Hence this last power series converges (specifically to \(g(z)\)) for all \(z\neq 0\), so its radius of convergence is infinite. Therefore the series defines an entire function \(\tilde{g}\) that coincides with \(g\) on the domain of \(g\). It is clear from the series defining \(g(z)\) that \(\tilde{g}(0)=1.\) If we simply extend the definition of \(g\) by setting \(g(0)=1\), the function we obtain is \(\tilde{g}\), which is analytic at \(0\) (and everywhere else). We have "removed" the singularity.

        Observe also that \(\lim_{z\to 0} h(z)=\lim_{z\to 0} g(z) =1\neq h(0).\) Hence \(h\) is not continuous at \(0\), so cannot be analytic there, but (like the original \(g\)) is analytic everywhere else. Thus \(0\) is an isolated singularity of \(h\). If we change the definition of \(h\) by simply replacing "2" with "1" in the definition of \(h\), the function we obtain is, again, the entire function \(\tilde{g}\). We have "removed" the singularity by the stroke of a pen.

          The examples above illustrate the two ways that a removable singularity of a function \(f\) at a point \(z_0\) can arise:

      • (i) We have defined \(f\) by a formula that yields an analytic function on some punctured open disk \(D_R^*(z_0)\), but is meaningless at \(z_0\), yet there is some \(c\in {\bf C}\) such that if we extend our definition by setting \(f(z_0)=c\), the extended function is analytic; or

      • (ii) We have defined \(f\) "stupidly" at \(z_0\), yielding a function that is analytic on some punctured open disk \(D_R^*(z_0)\), and for which \(\lim_{z\to z_0}f(z)\) exists but does not equal \(f(z_0)\).

      If \(z_0\) is a pole or essential singularity of a function \(f\), it can be shown that \(\lim_{z\to z_0}f(z)\) does not exist, so there is no value we could choose for a (re)definition of \(f(z_0)\) that would even make the modified \(f\) continuous at \(z_0\), let alone analytic there.
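      The extension \(\tilde{g}\) above is easy to tabulate from its Maclaurin series. A short Python sketch (not part of the assignment; the name `g_tilde` is mine) confirming that the series takes the value 1 at 0 and agrees with \(\sin z / z\) everywhere else:

```python
import cmath
from math import factorial

def g_tilde(z, terms=30):
    """The entire function given by the Maclaurin series
    sum_{n>=0} (-1)^n z^(2n) / (2n+1)!, which extends sin(z)/z
    analytically across the removable singularity at z = 0."""
    return sum((-1)**n * z**(2*n) / factorial(2*n + 1) for n in range(terms))

print(g_tilde(0))   # the value that removes the singularity: 1.0
for z in (0.3, 1+1j, -2.5j):
    print(abs(g_tilde(z) - cmath.sin(z)/z))   # tiny: agrees with sin(z)/z
```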

  • Thurs 12/15/22
    Final Exam
    The final exam will be given on Thursday, December 15, starting at 12:30 p.m., in our usual classroom.

