\( \newcommand{\lb}{\langle} \newcommand{\rb}{\rangle} \newcommand{\call}{{\mathcal L}} \newcommand{\N}{{\sf N}} \newcommand{\R}{{\sf R}} \newcommand{\V}{{\sf V}} \newcommand{\W}{{\sf W}} \newcommand{\bfr}{{\bf R}} \newcommand{\sfr}{{\sf R}} \newcommand{\sfl}{{\sf L}} \newcommand{\span}{{\rm span}} \newcommand{\T}{{\sf T}} \newcommand{\mmnr}{M_{m\times n}(\bfr)} \newcommand{\a}{\alpha} \newcommand{\b}{\beta} \newcommand{\g}{\gamma} \renewcommand{\l}{\lambda} \newcommand{\abcd}{\left( \begin{array}{rr} a&b\\ c&d \end{array}\right)} \newcommand{\va}{{\bf a}} \)

Homework assignments and rules for written work
MAS 4105 Section 5287 (24829) — Linear Algebra 1
Spring 2025

Last updated Fri Apr 25   03:15 EDT   2025

  • General information
  • Homework
  • Assignments


    General information


    Some Rules for Written Work (Quizzes and Exams)

    • Academic honesty

        On all work submitted for credit by students at the University of Florida, the following pledge is implied:
          "On my honor, I have neither given nor received unauthorized aid in doing this assignment."

    • Write in complete, unambiguous, grammatically correct, and correctly punctuated sentences and paragraphs, as you would find in your textbook.
         Reminder: Every sentence (excluding questions and exclamations) begins with a CAPITAL LETTER and ends with a PERIOD.

    • On every page, leave margins (left AND right AND top AND bottom; note that "and" does not mean "or"). For example, never write down to the very bottom of a page. Your margins on all four sides should be wide enough for a grader to EASILY insert corrections (or comments, or partial scores) adjacent to what's being corrected (or commented on, or scored).

      On your quizzes and exams, to save time you'll be allowed to use the symbols \(\forall, \exists\), \(\Longrightarrow, \Longleftarrow\), and \(\iff\), but you will be required to use them correctly. The handout Mathematical grammar and correct use of terminology, assigned as reading in Assignment 0, reviews the correct usage of these symbols. You will not be allowed to use the symbols \(\wedge\) and \(\vee\), or any symbol for logical negation of a statement. There is no universally agreed-upon symbol for negation; such symbols are highly author-dependent. Symbols for "and" and "or" are used essentially as "training wheels" in courses like MHF 3202 (Sets and Logic). The vast majority of mathematicians never use \(\wedge\) or \(\vee\) symbols to mean "and" or "or"; they use \(\wedge\) and \(\vee\) with different, very standard meanings. (Note: the double-arrows \( \Longrightarrow, \Longleftarrow,\) and \(\iff\) are implication arrows. Single arrows do not represent implication, so you may not use them to substitute for the double-arrow symbols.) [Depending on which Sets and Logic section you took, you may have had the misfortune to use a textbook that uses single arrows for implication. If so, you've been taught implication-notation that most of the mathematical world considers to be wrong, and, starting now, you'll need to un-learn that notation in order to avoid confusion in almost all your subsequent math courses. As an analogy: if you had a class in which you were taught that the word for "dog" is "cat", your subsequent teachers would correct that misimpression in order to spare you a lot of future confusion; they would insist that you learn that "cat" does not mean "dog". They would not say, "Well, since someone taught you that it's okay to use `cat' for 'dog', I'll let you go on thinking that that's okay."]


  • Assignments

    Below, "FIS" means our textbook (Friedberg, Insel, and Spence, Linear Algebra, 5th edition). Unless otherwise indicated, problems are from FIS. A problem listed as (say) "2.3/ 4" means exercise 4 at the end of Section 2.3.

    Date due Assignment
    F 1/17/25 Assignment 0 (just reading, but important to do before the end of Drop/Add)

  • Read the Class home page and Syllabus and course information.

  • Read all the information above the assignment-chart on this page.

  • Go to the Miscellaneous Handouts page and read the handouts "What is a proof?" and "Mathematical grammar and correct use of terminology". (Although this course's prerequisites are supposed to cover most of this material, most students still enter MAS 4105 without having had sufficient feedback on their work to eliminate common mistakes or bad habits.)
        I recommend also reading the handout "Taking and Using Notes in a College Math Class," even though it is aimed at students in Calculus 1-2-3 and Elementary Differential Equations.

  • Read these tips on using your book.

  • In FIS, read Appendix A (Sets) and Appendix B (Functions). Even though this material is supposed to have been covered in MHF3202 (except for the terminology and notation for images and preimages in the first paragraph of Appendix B that you're not expected to know yet), you'll need to have it at your fingertips. Most students entering MAS4105 don't.
        Also read the handout "Sets and Functions" on the Miscellaneous Handouts page, even though some of it repeats material in the FIS appendices. I originally wrote this for students who hadn't taken MHF3202 (at a time when MHF3202 wasn't yet a prerequisite for MAS4105), so the level may initially seem very elementary. But don't be fooled: these notes include some material that most students entering MAS4105 are, at best, unclear about, especially when it comes to writing mathematics.
      For the portions of my handout that basically repeat what you saw in FIS Appendices A and B, it's okay just to skim.

  • In the book by Hammack that's the first item on the Miscellaneous Handouts page, read Section 5.3 (Mathematical Writing), which has some overlap with my "What is a proof?" and "Mathematical grammar and correct use of terminology". Hammack has a nice list of 12 important guidelines that you should already be following, having completed MHF3202. However, most students entering MAS4105 violate almost all of these guidelines. Be actively thinking when you read these guidelines, and be ready to incorporate them into your writing; otherwise, expect to be penalized for poor writing.
        I'd like to amplify guideline 9, "Watch out for 'it'." You should watch out for any pronoun, although "it" is the one that most commonly causes trouble. Any time you use a pronoun, make sure that it has a clear and unambiguous antecedent. (The antecedent of a pronoun is the noun that the pronoun stands for.)
  • T 1/21/25 Assignment 1  

  • Read Section 1.1 (which should be review).

  • 1.1/ 1–3, 6, 7.

    See my Fall 2023 homework page for some notes on the reading and exercises in this assignment.

      As you'll see if you scroll through that page, up through Fall 2023 I inserted a lot of notes into the assignments. While this had the advantage of putting those notes right in front of you, it made the assignments themselves harder to read, and some students commented that these notes could be overwhelming. So this year, I'm experimenting with referring you to a different page for many of those notes.

      As I hope is obvious: on the Fall 2023 page, anywhere you see a statement referring to something I said in class, the thing that I "said" may be something I haven't said yet this semester, and could end up not saying this semester at all. My lectures are not word-for-word the same every time I teach this class, so you'll need to use some common sense when looking at the inserted notes in a prior semester's assignments.

  • Read Section 1.2. Remember that, in this class, whenever the book refers to a general field \(F\), you may mentally substitute \(\bfr\) for \(F\) unless I say otherwise. If an exercise I've assigned refers specifically to the field \({\bf C}\) (plain "C " in FIS) of complex numbers, e.g. 1.2/ 14, then you need to use \({\bf C}\) as the problem-setup indicates. Everything in Section 1.2 works if \(\bfr\) is replaced by \({\bf C}\); no change (other than notational) in any definition or proof is needed.

  • 1.2/ 1 except for parts c and d,  2–4, 8, 12–14, 17–21.

  • Do this non-book problem.

  • Read Section 1.3 through at least Example 3.
  • T 1/28/25 Assignment 2  

  • 1.2/ 1cd

  • Finish reading Section 1.3.

  • Read the current (1/25/2025) version of the handout Polynomials and Polynomial Functions posted on the Miscellaneous Handouts page. (I will be adding some material to this handout over the next few weeks.)

  • 1.3/ 1b–g, 2–10, 12–16, 18, 19, 22.   Note:
    • In 8f, "\(a_1^2\)" means \( (a_1)^2\), etc. for \(a_2^2\) and \(a_3^2\).
    • I did #18 in class very hurriedly on Friday 1/24. I'm having you re-do it in case you weren't able to catch some step(s) in the argument. (It's good practice with the ideas and with proof-writing anyway.)
    • In #22, assume \(F_1= F_2=\bfr\), of course, just as I've said to assume \(F=\bfr\) when you see a single field \(F\) in the book.

  • Do the following, in the order listed below.
    1. Read the first definition—the definition of \(S_1+S_2\)—near the bottom of p. 22. (The second definition is correct, but not complete; there is something else that's also called direct sum. Both types of direct sum are discussed and compared in the handout referred to below. A concrete instance of the sum-of-sets definition is written out just after this list.)

    2. Exercise 1.3/ 23. In part (a), if we were to insert a period after \(V\), we'd have a sentence saying, "Prove that \(W_1+W_2\) is a subspace of \(V\)."   Think of this as part "pre-(a)" of the problem. Obviously, it's something you'll prove in the process of doing part (a), but I want the conclusion of "pre-(a)" to be something that stands out in your mind, not obscured by the remainder of part (a).

    3. Read the short handout "Direct Sums" posted on the Miscellaneous Handouts page.

    4. Do exercises DS1, DS2, and DS3 in the Direct Sums handout.

    5. Exercise 1.3/ 24. There are a few more direct-sum exercises from Section 1.3 that I'll be assigning, but I've moved them to the next assignment to avoid further lengthening the current one. However, they're thematically related to the current assignment, so if you have time, doing these problems now wouldn't be a bad idea.
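    As promised in item 1 above, here is a concrete instance of the sum-of-sets definition (my example, not the book's): in \(\bfr^2\), let \(W_1=\{(x,0): x\in\bfr\}\) and \(W_2=\{(0,y): y\in\bfr\}\). Then $$ W_1+W_2=\{(x,0)+(0,y): x,y\in\bfr\}=\bfr^2 \quad\mbox{and}\quad W_1\cap W_2=\{(0,0)\}, $$ so \(\bfr^2\) is the (internal) direct sum \(W_1\oplus W_2\).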
  • T 2/4/25 Assignment 3  

  • 1.3/ 24–26, 28–30. See additional instructions below about 28 and 30.
    • In #28, skew-symmetric is a synonym for the more commonly used term antisymmetric (which is the term I usually use). Don't worry about what a field of characteristic two is; \(\bfr\) is not such a field. See my Fall 2023 homework page for some other comments on #28.

    • For #30, you already proved half of the stated "if and only if" as DS3 in the previous assignment, so all that remains is for you to prove the other half.

  • In the handout Lists, linear combinations, and linear independence (posted on the Miscellaneous Handouts page):
    • Before the Wed. 1/29 class, read up through Remark 6. (Definitions 1, 2ab, and 5, as well as Remark 6, correspond to certain material in Section 1.4 of FIS; Definition 2, Proposition 3, and Definition 4 correspond to material in Section 1.5. In class, we'll finish covering Section 1.4 before moving on to Section 1.5, but in my notes it seemed advantageous not to split up the material on pp. 1–3 into separate sections.) Also read the "Some additional comments" section at the end of the handout, and the green text in Assignment 3 on my Fall 2024 homework page.

    • (More from this handout added as last item in assignment.)

  • Read Section 1.4, minus Example 1.
          Note: Two of the three procedural steps below "The procedure just illustrated" on p. 28 should have been stated more precisely. See my Fall 2023 homework page, Assignment 2, "Read Section 1.4 ..." bullet-point, for clarifications/corrections.

  • 1.4/ 3abc, 4abc, 5cdegh, 10, 12, 13, 14, 17. In 5cd, the vector space under consideration is \(\bfr^3\); in 5e it's \(P_3(\bfr)\); in 5gh it's \(M_{2\times 2}(\bfr)\).
      Note on #12. For nonempty sets \(W,\) the way I think of the fact proven in #12 is:
        In a vector space \(V\),   a nonempty subset   \(W\subseteq V\)   is a subspace iff   \(W\)   is "closed under taking linear combinations;"
        i.e. iff every linear combination of elements of   \(W\)   lies in   \(W\).
      The book's wording in #12 is better than this in a couple of ways: (i) it handles the empty-set case as well as the nonempty-set case, and (ii) it is very efficient. However, our minds don't always conceptualize things in the most efficient way.   My less-efficient phrasing indicates more directly what I'm thinking (in this context), and avoids one extra word of recently learned vocabulary: span.   (But this comes at a cost: introducing other new terminology—"closed under taking linear combinations", a non-standard term that itself requires definition—as well as not handling the empty-set case.)
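      For the record, here is my phrasing written out with the quantifiers fully explicit (equivalent to the displayed version above): a nonempty subset \(W\subseteq V\) is a subspace iff $$ \mbox{for every } n\geq 1,\ \mbox{every } c_1,\dots,c_n\in\bfr,\ \mbox{and every } w_1,\dots,w_n\in W: \quad c_1w_1+\dots +c_nw_n\in W. $$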

  • Read Section 1.5.
          Once you've read the book's definition of linearly dependent and linearly independent subsets of a vector space (pp. 37 and 38), show that, together, these are equivalent to Definition 4 in my "Lists, ..." handout. Note that in the handout's Definition 4, I've emphasized the word "distinct". I've done this because when reading the book's definition on p. 37, most students don't realize how critical this word is, so they later forget it's there. The handout's Proposition 3 shows why this word is important in Definition 4, and hence also in the book's definition on p. 37. Do this exercise (which the handout's Proposition 3 should make easy): Show that if "distinct" were deleted from the handout's Definition 4, or from the book's definition on p. 37, then every nonempty subset of a vector space would be linearly dependent (and hence the terminology "linearly (in)dependent" would serve no purpose!).

  • In the "Lists, ..." handout: do Exercise 7, read Proposition 8 and its proof, and read Example 9.

    • T 2/11/25 Assignment 4  

    • 1.5/ 1, 2a–2f, 3–7, 10, 12, 13 (modified as below), 15, 16, 17, 20. (Reminder: as mentioned in the previous assignment, ignore any instructions related to the characteristic of a field. Just FYI, without giving you a definition, the field \(\bfr\) happens to have characteristic 0.)
        Modification for #13. Erase all the set-braces and insert the words "the list" in front of "\(u, v\)",   "\(u+v, \ u-v\)",   "\(u, v, w\)", and "\(u+v,\ u+w,\ v+w\)". Furthermore, in part (a), do not assume that the vectors \(u\) and \(v\) are distinct; in part (b), do not assume that \(u, \ v\), and \(w\) are distinct.
            With these changes, the results you are proving are, simultaneously, stronger than what the book asked you to prove (fewer assumptions are needed), and simpler to prove. For the (non-obvious) reason it's simpler to do the modified problem than to do the book's problem correctly, see the green text under "Modification for #13" in my Fall 2024 homework page, Assignment 4.

    • In the updated version (2/2/2025) of the Polynomials and Polynomial Functions handout, read this material that's been added since the previous version:
      • The blue note inserted after Proposition 3.1, and the bold-face paragraph at the top of the next page.
      • Everything from Corollary 3.3 through the end of the handout.

    • In Section 1.6, read from the beginning up through Theorem 1.8 and its (partial) proof. (Only one direction of the "if and only if" is proved—specifically, the "only if" direction. [Make sure you understand that in a bidirectional implication "P if and only if Q", the "if" direction is "P if Q", i.e. "Q implies P." The implication "P implies Q" is the "only if" direction.]) The argument in the book should look very familiar if you were in class on Friday 2/7, because we used the exact same argument to prove an equivalent result; we just hadn't introduced the word "basis" yet.

        It might appear at first that the "unique representation" result we proved in class is more general than the "only if" direction of Theorem 1.8, since in class we didn't demand that \(\span(S)=V\). But that "extra generality" is illusory. By definition, every subset of a vector space \(V\) spans its own span ("\(S\) spans \(\span(S)\)"). Thus every linearly independent set in \(V\) is a basis of its own span—which is a subspace of \(V\), hence a vector space. So the "only if" half of Theorem 1.8 is neither more nor less general than the unique-representation result we proved in class; the two results are equivalent.

    • In the handout Some Notes on Bases and Dimension (posted on the Miscellaneous Handouts page), read up through Remark 5 on p. 4.

    • Practice writing definitions! Consider this to be a part of every assignment.
         When you write what you think is a definition of, say, a (type of) object X or a property P, ask yourself: Would somebody else be able to tell unequivocally whether some given object is an X or has property P, using only your definition (plus any prior, precise definitions on which yours depends, but without asking you any questions)? In other words, is your definition usable?
         If you can't write precise definitions, you'll never be able to write coherent proofs. Many proofs are almost "automatic": if you write down the precise definitions of terminology in the hypotheses and conclusion, the definitions practically provide a recipe for writing out a correct proof.

          When you learn a new concept, a natural early stage of the learning process is to translate new vocabulary into terms that mean something to you. That's fine. But you do have to get beyond that stage, and be able to communicate clearly with people who can't read your mind. You have to do this in an agreed-upon language (English, for us) whose rules of word-order, grammar, and syntax distinguish meaningful sentences from gibberish, and distinguish from each other meaningful sentences that use the same set of words but have different meanings. If someone new to football were to ask you to tell him or her, in writing, what a touchdown is, it wouldn't be helpful to answer with, "If it's a touchdown, it means when they throw or run and the person holding that thing gets past the line." If the friend asks what a field goal is, you wouldn't answer with "A field goal is when they don't throw, but they kick, no running." But this is how limited your understanding of mathematical terminology and concepts, and your ability to use them, are likely to be if you don't practice writing definitions.

          For every object or property we've defined in this class, you should be able to write a definition that's nearly identical either to the one in the book or one that I gave in class (or in a handout). Until you've mastered that, hold off on trying to write the definitions in what you think are other (equivalent!) ways. Most students need these "training wheels" for quite a while.

    • T 2/18/25 Assignment 5  

    • In whichever order you prefer, finish reading Section 1.6—minus the subsection on the Lagrange Interpolation Formula—and read the handout Some Notes on Bases and Dimension. Although you may choose which to read first, finish either the Section 1.6 reading or the handout before the Friday 2/14 class. These two readings have a lot of overlap (covered in different orders), but neither can substitute entirely for the other: my handout covers some material that's not in Section 1.6, but has very few of Section 1.6's examples. (However, whichever you read second, it's okay just to skim anything you thoroughly understood from your first reading.) The handout also has expanded versions of some proofs and other items in FIS that I thought might give you difficulty.

    • 1.6/ 1–8, 12–17, 21, 25 (see below), 29, 30, 33, 34. On Monday 2/17 we'll finish going over the "Replacement Theorem" in class, but you should already be able to use the theorem and its consequences based on classwork and the assigned reading. The last page of my handout on bases and dimension has a summary that includes various facts that can (and should) be used to considerably shorten the amount of work needed for several of the exercises, e.g. #4 and #12.
          See the last item in Assignment 4 on my Fall 2023 homework page for some comments on #25.

          Note: For most short-answer homework exercises (the only exceptions might be some parts of the "true/false quizzes" like 1.6/ 1), if I were putting the problem on an exam, you'd be expected to show your reasoning. So, don't consider yourself done if you merely guess the right answer!

    • Practice writing the statements and proofs of results we've proven. Consider this to be a part of every assignment. This isn't something you can cram. Recall my day-one advice: "Study from day one as if your next exam is next week."
          (It wouldn't hurt to review the other advice in the Some advice on how to do well and Further general advice sections of the syllabus as well.)
    • T 2/25/25 Assignment 6  

    • 1.6/ 18 (note that \({\sf W}\) is not finite-dimensional!), 22, 23, 30, 31, 32

    • Do these non-book problems (updated 2/18/25 to include problem NB 6.5).

    • Read Section 2.1 up through Example 10.
          As was also true of Section 1.6, there is a lot of content in Section 2.1 (more than in any other section of Chapter 2).
        Note that there is actual work for you to do when reading many of the examples in Section 2.1. In Section 2.1, Example 1 is essentially the only example in which the authors go through all the details of showing that the function under consideration is linear. In the remaining examples, the authors assume that all students can, and therefore will, check the asserted linearity on their own. Examples 2–4 are preceded by a paragraph asserting that the transformations in these examples are linear, and saying, "We leave the proofs of linearity to the reader"—meaning you!
            In Example 8, the authors neglected to state explicitly that the two transformations in the example are linear—but they are linear, and you should show this. (That's very easy for these two transformations, but it's still a good drill in what the definition of linearity is, and how to use it.) When going through examples such as 9–11 in this section (and possibly others in later sections of the book) that start with wording like "Let \({\sf T}: {\rm (given\ vector\ space)}\to {\rm (given\ vector\ space)}\) be the linear transformation defined by ... ," or "Define a linear transformation \({\sf T}: {\rm (given\ vector\ space)}\to {\rm (given\ vector\ space)}\) by ...", the first thing you should do is to check that \({\sf T}\) is, in fact, linear. (You should do this before even proceeding to the sentence after the one in which \({\sf T}\) is defined.)

            Some students will be able to do these linearity-checks mentally, almost instantaneously or in a matter of seconds. Others will have to write out the criteria for linearity and explicitly do the calculations needed to check it. After doing enough linearity-checks—how many varies from person to person—students in the latter category will gradually move into the former category (or at least closer to it), developing a sense for what types of formulas lead to linear maps. (A sample written-out check appears just below.)

            In math textbooks at this level and above, it's standard to leave instructions of this sort implicit. The authors assume that you're motivated by a deep desire to understand; that you're someone who always wants to know why things are true. Therefore it's assumed that, absent instructions to the contrary, you'll never just take the author's word for something that you have the ability to check; that your mindset will NOT be anything like, "I figured that if the book said object X has property Y at the beginning of an example, we could just assume object X has property Y."
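            Here is what a written-out linearity-check looks like, for a map of my own choosing (not one of the book's examples). Define \(\T:\bfr^2\to\bfr^2\) by \(\T(x,y)=(x+y,\ 2x)\). For all \((x_1,y_1), (x_2,y_2)\in\bfr^2\) and all \(c\in\bfr\), $$\T\big((x_1,y_1)+(x_2,y_2)\big)=\T(x_1+x_2,\ y_1+y_2)=(x_1+x_2+y_1+y_2,\ 2x_1+2x_2)=\T(x_1,y_1)+\T(x_2,y_2)$$ and $$\T\big(c\,(x,y)\big)=(cx+cy,\ 2cx)=c\,(x+y,\ 2x)=c\,\T(x,y),$$ so \(\T\) is linear. By contrast, the map \({\sf S}(x,y)=(x+1,\ y)\) is not linear: \({\sf S}(0,0)=(1,0)\neq (0,0)\), and every linear map sends \(0\) to \(0\).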

    • Rephrase what's being proven in exercise 1.3/ 3 (from Assignment 2) as a statement that a certain map from some (specific) vector space to another is linear.

    • 2.1/ 2–6 (only the "prove that \({\sf T}\) is a linear transformation" part, for now), 7–9, 12, 13, 22 (just the first part, not the generalization), 25. For some of the earlier problems, I've already given proofs in class; I kept them in the assignment to give you extra practice.
    • W 2/26/25

      First midterm exam

      At the exam, you'll be given a booklet with a cover page that has instructions, and has the exam-problems and work-space on subsequent pages. In Canvas, under Files > exam-related, I've posted a sample cover-page. Familiarize yourself with the instructions on this page; your instructions will be similar or identical. In the same folder, the file "fall2024_exam1_probs.pdf" has the list of problems (without the workspace) that were on that exam, with some embedded comments on how the class performed.

            "Fair game" material for this exam is everything we end up covering (in class, homework, or the relevant pages of the book) up through the Friday Feb. 21 lecture and the homework due Feb. 25.
          The portion of FIS Section 2.1 that's fair game for this exam is everything up through Example 10.
          In FIS Chapter 1, we did not cover Section 1.7 or the Lagrange Interpolation Formula subsection of Section 1.6. (However, homework includes any reading I assigned, which includes all handouts I've assigned. For example, fair-game material includes everything in my handout on bases and dimension, even though the "maximal linearly independent set" material overlaps with the book's Section 1.7.) You should regard everything else in Chapter 1 as having been covered (probably by Wednesday 2/19), except that the only field of scalars we've used, and that I'm holding you responsible for at this time, is \(\bfr\).

            For this exam, and any other, the amount of material you're responsible for is far more than could be tested in an hour (or even two hours). Part of my job is to get you to study all the material, whether or not I'm thinking of putting this or that topic on an exam, so I generally will not answer questions like "Might we have to do such-and-such on the exam?" or "Which topics should I focus on the most when I'm studying?" My job is not to abet focusing on less material than you're supposed to be learning.

            If you've been responsibly doing all the assigned homework, and regularly going through your notes to fill in any gaps in what you understood in class, then studying for this exam should be a matter of reviewing, not crash-learning. (Ideally, this should be true of any exam you take in any class; it will be true of all of mine. Once again, recall my day-one advice: "Study from day one as if your next exam is next week.") Your review should have three essential components:

      • reviewing your class notes;
      • reviewing the relevant material in the textbook and in any handouts I've given; and
      • reviewing the homework (including any reading not mentioned above, and including feedback you've gotten, whether through handed-in assignments or through quizzes and/or exams).
      If you're given an old exam to look at, then of course you should look at that too, but that's the tip of the iceberg; it does not replace any of the review-components above (each of which is more important than looking at an old exam), and it cannot tell you how prepared you are for your own exam in any class of mine. Again, on any exam, there's never enough time to test you on everything you're responsible for; you get tested on a subset of that material, and in my classes you should never assume that your exam's subset will be largely the same as the old exam's subset. (I strongly disagree with giving students a "practice exam" that's essentially the exam they'll be taking, just with different numbers or other minor changes. Doing this leads students, and others, to a highly distorted picture of how much the students actually know. It virtually guarantees that students will learn far less than they should and are capable of learning, and that they'll be under-prepared for follow-up courses.)

            When reviewing work that's been graded and returned to you (e.g. a quiz), make sure you understand any comments made by the grader, even on problems for which you received full credit. There are numerous mistakes that, when made early in the semester, might get you only a warning, but that could cost you points if you're still making them later in the semester. As the semester moves along, you are expected to learn from past mistakes, and not continue making the same ones over and over.

      T 3/4/25 Assignment 7  

    • Read from where you left off in Section 2.1 through at least Example 13.

    • 2.1/ 1a–1f, 2–6 (the parts not done for the previous assignment), 10, 11, 14–18, 20 (half of which you already saw on the exam), 21, 36

         Comments on some of these exercises:

      • For 1f, look at #14a first.

      • In 2–6, one thing you're asked to determine is whether the given linear transformation \( \T:\V\to \W\) is onto. In all of these, \(\dim(\V)\leq \dim(\W)\). This makes these questions easier to answer, for the following reasons:

        • If \(\dim(\V)<\dim(\W)\), then \(\T\) cannot be onto; see exercise 17a.

        • When \(\dim(\V)=\dim(\W)\), we may be able to show directly whether \(\T\) is onto, but if not, we can make use of Theorem 2.5 (which says that when \(\dim(\V)=\dim(\W)\), a linear map \(\T: \V\to \W\) is onto iff \(\T\) is one-to-one). We can determine whether \(\T\) is one-to-one using Theorem 2.4.

        Also, regarding the "verify the Dimension Theorem" part of the instructions: You're not verifying the truth of the Dimension Theorem; it's a theorem. What you're being asked to do is to check that your answers for the nullity and rank satisfy the equation in Theorem 2.3. In other words, you're doing a consistency check on those answers. (A small instance of such a check is worked out after these comments.)

      • In #10: For the "Is \(\T\) one-to-one?" part, you'll want to use Theorem 2.4, but there's more than one way of setting up to use it. You should be able to do this problem in your head (i.e. without need for pencil and paper) by using Theorem 2.2, then Theorem 2.3, then Theorem 2.4.

      • In 14a, the meaning of "\(\T\) carries linearly independent subsets of \( \V \) onto linearly independent subsets of \( \W \)" is: if \(A\subseteq \V\) is linearly independent, then so is \(\T(A)\). For the notation "\({\sf T}(A)\)", see the note about #20 below.

      • In #20, regarding the meaning of \({\sf T(V_1)}\): Given any function \(f:X\to Y\) and subset \(A\subseteq X\), the notation "\(f(A)\)" means the set \( \{f(x): x\in A\} \). (If you've done all your homework, you already saw this in Assignment 0; it's in the first paragraph of FIS Appendix B. I also mentioned this in class on Friday 2/28/25.) The set \(f(A)\) is called the image of \(A\) under \(f\). For a linear transformation \(\T: \V\to \W\), this notation gives us a second notation for the range: \({\sf R(T)}={\sf T(V)}\).

      • #36: (Comment added belatedly.) Recall that the definition of "\({\sf V}\) is the (internal) direct sum of two subspaces \({\sf V_1, V_2}\)" had two conditions that the pair of subspaces had to satisfy. Problem 36 says that, when \({\sf V}\) is finite-dimensional and the subspaces are the range and the null space of the same linear map, each of these conditions implies the other. Hence, when \({\sf V}\) is a finite-dimensional vector space, you only have to verify one of these conditions in order to conclude that \({\sf V=R(T)\oplus N(T)}\). This is reminiscent of some other instances we've seen of "things with two conditions" for which, under some hypothesis, each of the conditions implied the other. For example:

        • A set \(S\) of \(n\) vectors in an \(n\)-dimensional vector space \({\sf V}\) is linearly independent if and only if \(S\) spans \({\sf V}\). (Hence \(S\) is a basis of \({\sf V}\) if either condition is satisfied.)

        • Given two vector spaces \({\sf V}, {\sf W}\) of equal (finite) dimension, a linear map \({\sf T: V\to W}\) is one-to-one if and only if \({\sf T}\) is onto.
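      As promised above, here is a small instance of the kind of consistency check asked for in 2–6 (my example, not one of the book's): for the linear map \(\T:\bfr^3\to\bfr^2\) given by \(\T(x,y,z)=(x,y)\), we have \(\N(\T)=\{(0,0,z): z\in\bfr\}\) and \(\R(\T)=\bfr^2\), so $$ {\rm nullity}(\T)+{\rm rank}(\T)=1+2=3=\dim(\bfr^3), $$ consistent with the equation in Theorem 2.3.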

      1.6/ 26. Although this exercise can be done without using material from beyond Section 1.6, it can also be done as a nice application of the Rank-plus-Nullity Theorem. First show that, for any fixed \(a\in \bfr\), the map \({\rm ev}_a:P_n(\bfr) \to \bfr\) defined by \({\rm ev}_a(f)=f(a)\) is linear. (The notation I'm using comes from the name of this map: evaluation at \(a\).) Observe that the subspace whose dimension you're asked to find is exactly \(\N({\rm ev}_a)\). Then determine the range \(\R({\rm ev}_a)\) (noting that there aren't a whole lot of possibilities for a subspace of \(\bfr\) !) and apply Theorem 2.3.
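      In case the final counting step is unclear, once you've shown that \({\rm ev}_a\) is linear and determined \(\R({\rm ev}_a)\), the Rank-plus-Nullity step comes down to $$ \dim\N({\rm ev}_a)=\dim P_n(\bfr)-\dim\R({\rm ev}_a), $$ with \(\dim P_n(\bfr)=n+1\).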

    • T 3/11/25 Assignment 8  

    • Read the remainder of Section 2.1, as a back-up to some results and proofs I covered in class.

    • (Added 3/7.) In the Lists, ... handout, read from the beginning of Proposition 13 through the end of the proof of Corollary 14 (pp. 7–8). You've already seen the results of Proposition 13 proven in class, as well as in the "Bases and Dimension" handout (parts of Proposition 18). The proof in the "Lists ..." handout is different, deriving the result as an application of the Rank-plus-Nullity Theorem.

    • Read Section 2.2.

    • Read the (partial) solutions to Exam 1 posted in Canvas under Files. (Updated 3/10/25 with an expanded solution to 4(c).)

    • Reminder: since Assignment 4, an implicit part of every assignment has been to practice writing definitions! This is not something that can wait till you're reviewing for an exam. For every definition given in class or in assigned reading, practice writing the definition without looking at the book or any notes until you're able to reproduce the definition you were given.

          Writing a precise definition, and understanding what you're writing, requires that you read definitions carefully. This takes TIME and CONCENTRATION. Don't multi-task, and don't just run your eyes over the words, say "Yeah, okay" to yourself, and count that as reading. Pay close attention not only to what words appear, but to the order in which they appear, and to the logical and grammatical structure of the sentence(s). It is not acceptable, for example, to know only that linear combinations, spans, and linear dependence/independence have something to do with a bunch of vectors \(v_i\), a bunch of scalars \(c_i\), expressions of the form \(c_1 v_1+\dots +c_n v_n\), and sometimes the zero vector and sometimes not. Each definition of the terms above, if phrased in terms of vectors \(v_i\) and scalars \(c_i\), has to introduce and quantify those vectors and scalars; has to state (among other things) exactly what restrictions there are, if any, on the scalars and/or vectors; and has to state exactly what role the expressions "\(c_1 v_1+\dots +c_n v_n\)" play in the definition. There can be no ambiguity.

    • 2.1/ 1gh, 23, 25, 27, 28. See comments below on some of these exercises.
      • #25: In the definition at the bottom of p. 76, the terminology I use most often for the function \({\sf T}\) is the projection [or projection map] from \({\sf V}\) onto \({\sf W}_1\). There's nothing wrong with using "on" instead of "onto", but this map \({\sf T}\) is onto. I'm not in the habit of including the "along \({\sf W}_2\)" when I refer to this projection map, but there is actually good reason to do it: it reminds you that the projection map depends on both direct summands \(\W_1\) and \(\W_2\), which is what exercise 25 is illustrating.

      • #28(b): If you've done the assigned exercises in order, then you've already seen such an example.

    • 2.2/ 2–7, 12, 16a (modified as below), 17 (modified as below).
      • In #16a: Show also (not instead) that an equivalent definition of \({\sf S}^0\) is: \({\sf S^0= \{ T\in {\mathcal L}(V,W): N(T)\supseteq {\rm span}(S)\}} \).
      • In #17: Assume that \({\sf V}\) and \({\sf W}\) have finite, positive dimension (see note below). Also, extend the second sentence so that it ends with "... such that \([{\sf T}]_\beta^\gamma\) is a diagonal matrix, each of whose diagonal entries is either 1 or 0." (This should actually make the problem easier!)
            Additionally, show that if \({\sf T}\) is one-to-one, and the bases \(\beta,\gamma\) are chosen as above, none of the diagonal entries of \([{\sf T}]_\beta^\gamma\) is 0. (Hence they are all 1, and \([{\sf T}]_\beta^\gamma\) is the \(n\times n\) identity matrix \(I_n\) defined on p. 82, where \(n=\dim(V)=\dim(W)\).)

        Note (accidentally omitted from the original posting of this assignment): Using a phrase like "for positive [something]" does not imply that that thing has the potential to be negative! For example, as I mentioned in class the first day we started Chapter 2, "positive dimension" means "nonzero dimension"; there's no such thing as "negative dimension". For numerical quantities \(Q\) that can only be positive or zero, when we don't want to talk about the case \(Q=0\) we frequently say "for positive \(Q\)", rather than something like "for nonzero \(Q\)".

    • Do these non-book problems.
    • T 3/25/25 Assignment 9  

    • Read Section 2.3 up through Theorem 2.16.

        Powers of transformations in \(\call(\V,\V)\) and square matrices. For a linear map \(\T\) from some vector space \(\V\) to itself, the book's recursive definition of \(\T^k\) for \(k\geq 2\) is ''\(\T^k=\T^{k-1}\circ \T.\)'' An equivalent definition that I find more natural is ''\(\T^k=\T\circ\T^{k-1}\).'' (For me, the definition is, ''After you've done \(\T\)    \((k-1)\) times, do it one more time,'' whereas the book's is, ''Do \(\T\), and then do it \((k-1)\) more times.'') Similarly, for a given square matrix \(A\) (recall that "square matrix" means "\(n \times n\) matrix for some \(n\)"), for me the natural recursive definition of \(A^k\) is ''\(A^k=A\, A^{k-1},\)'' rather than the book's equivalent definition ''\(A^k=A^{k-1}\, A.\)''

        The book's definitions ''\(\T^0=I_\V\)'' (for any linear map \(\T:\V\to \V\)) and ''\(A^0=I_{n\times n}\)'' (for any \(n\times n\) matrix \(A\)) should be regarded as notation conventions for this book, not as standard definitions like the definition of vector space. The authors allude to this implicitly in their definition of   \(\T^0\)   (''For convenience, we also define ...'') but neglect to do this in their definition of \(A^0\), where the "For convenience" wording would actually be more important.
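        For concreteness, here is the matrix recursion unrolled in a small instance (numbers of my own): $$ A=\left( \begin{array}{rr} 1&1\\ 0&1 \end{array}\right), \qquad A^2=A\,A^1=\left( \begin{array}{rr} 1&2\\ 0&1 \end{array}\right), \qquad A^3=A\,A^2=\left( \begin{array}{rr} 1&3\\ 0&1 \end{array}\right), $$ and in general \(A^k=\left( \begin{array}{rr} 1&k\\ 0&1 \end{array}\right)\) for all \(k\geq 0\), the \(k=0\) case being the book's convention \(A^0=I_{2\times 2}\). (Either recursive definition produces the same powers, since matrix multiplication is associative.)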

    • Read Section 2.4, skipping Example 5.
          After reading Theorem 2.19, go back and replace Example 5 by an exercise that says, "Show that \(P_3(\bfr)\) is isomorphic to \(M_{2\times 2}(\bfr)\)." Although that's the same conclusion reached in Example 5, there are much easier, more obvious ways to obtain that conclusion. (The point of Example 5 isn't actually to show that \(P_3(\bfr)\) is isomorphic to \(M_{2\times 2}(\bfr)\); it's to show that one way you could reach this conclusion is by using the Lagrange Interpolation Formula, something we skipped in Section 1.6 and won't be covering.)
          Several results in Section 2.4 are worded incorrectly. On my Fall 2023 homework page, in Assignment 8 (due-date 10/24/23), read my comments on the wording of Theorem 2.18 and its corollaries.

    • 2.3/ 1, 2, 4–7, 8 (just the first part, for now), 11–15, 16a, 17–19
          Notes on some of these exercises:
      • In 1e, it's implicitly assumed that \(W=V\); otherwise the transformation \({\sf T}^2\) isn't defined. Similarly, in 1f and 1h, \(A\) is implicitly assumed to be a square matrix; otherwise \(A^2\) isn't defined. In 1(i), the matrices \(A\) and \(B\) are implicitly assumed to be of the same size (the same "\(m\times n\)"); otherwise \(A+B\) isn't defined.

      • In 2a, make sure you compute  \( (AB)D\)   *AND*   \(A(BD)\)   as the parentheses indicate. DO NOT ASSUME, OR USE, ASSOCIATIVITY OF MATRIX-MULTIPLICATION IN THIS EXERCISE. The whole purpose of exercise 2 is for you to practice doing matrix-multiplication, not to practice using properties of matrix-multiplication. If your computations are all correct, you'll wind up with the same answer for \(A(BD)\) as for \((AB)D\).

      • In #11, remember that \(\T_0\) is the book's notation for the zero linear transformation (also called "zero map") from any vector space \(V\) to any vector space \(W\). (In class I've generally been using the notation \(0_V^W\) for this map, because of drawbacks to "\(\T_0\)" that I talked about.)

      • Consider #13 to be part (a) of a problem that continues with parts (b)–(d), as follows, and do these other parts as well.

          (b) Let \(A\) and \(B\) be matrices of sizes \(m\times n\) and \(n\times m\) respectively, where \(m\) and \(n\) may or may not be equal. If \(m\neq n\), then \({\rm tr}(A)\) is not defined, but both \(AB\) and \(BA\) are square matrices (of sizes \(m\times m\) and \(n\times n\) respectively), so their traces are defined. Show that \({\rm tr}(AB)={\rm tr}(BA)\) whether or not \(m=n\). For a consistency check on this remarkable property, choose a \(2\times 3\) matrix \(A\) and a \(3\times 2\) matrix \(B\), compute \(AB\) and \(BA\), and check that the traces are indeed equal. (One such worked instance appears after these notes.)

          (c) Check that if matrices \(A, B, C\) are of sizes for which the products \(ABC\) and \(BCA\) are defined, then the product \(CAB\) is also defined. Then use part (b) to show that, in this instance, \({\rm tr}(ABC)={\rm tr}(BCA)={\rm tr}(CAB)\). This is often called the "cyclic property of the trace".

          (d) Show that the cyclic property of the trace generalizes to products of any number of compatibly sized matrices.

      • In #18: It is perfectly fine (and intuitive) to use \(k=1\) as the ''base case'' for the inductive arguments in this exercise (in which case, however, you have to give a separate argument for the case \(k=0\)), or to use \(k=2\) as the ''base case'' (in which case you have to give separate arguments for the cases \(k=0\) and \(k=1\)). The shortest argument uses \(k=0\) as the base case, but, to me, that's something you discover in retrospect after you've used \(k=1\) or \(k=2\) as the "base case", and then realize that starting with the case \(k=0\) would have saved you from having to check the \(k=1\) and/or \(k=0\) case separately.

      • (Not important.) In #14, you might wonder, "Why are they defining \(z\) to be \((a_1, a_2, \dots, a_p)^t\) instead of just writing   \(z=\left( \begin{array}{c}a_1\\ a_2\\ \vdots \\ a_p\end{array}\right) \)  ? "   Historically, publishers required authors to write a column vector as the transpose of a row vector, both because it was harder to typeset a column vector than a row vector and because the column vector used more vertical space, hence required more paper. I can't be sure whether those were reasons for the book's choice in this instance, but it's possible. Other possible reasons are (i) it's a little jarring to see a tall column vector in the middle of an otherwise-horizontal line of text, and (ii) the fact that in LaTeX (the mathematical word-processing software used to typeset this book) it takes more effort to format a column vector than a row vector.
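          Here is a worked instance of the consistency check suggested in part (b) above, with matrices of my own choosing: $$ A=\left( \begin{array}{rrr} 1&2&3\\ 4&5&6 \end{array}\right), \quad B=\left( \begin{array}{rr} 1&0\\ 0&1\\ 1&1 \end{array}\right), \quad AB=\left( \begin{array}{rr} 4&5\\ 10&11 \end{array}\right), \quad BA=\left( \begin{array}{rrr} 1&2&3\\ 4&5&6\\ 5&7&9 \end{array}\right), $$ so \({\rm tr}(AB)=4+11=15\) and \({\rm tr}(BA)=1+5+9=15\), even though \(AB\) and \(BA\) have different sizes.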

    • 2.4/ 2, 3, 13, 15, 17 (with 17b modified; see below), 23. Some notes on these exercises:
      • In #2, keep Theorem 2.19 in mind to save yourself a lot of work.

      • Regarding 17a: in a previous homework exercise (2.1/ 20), you already showed that the conclusion holds if \(\T\) is any linear transformation from \(\V\) to \(\W\); we don't need \(\T\) to be an isomorphism.

      • Modify 17b by weakening the assumption on \(\T\) to: "\(\T:\V\to\W\) is a one-to-one linear transformation." (So, again, we don't need \(\T\) to be an isomorphism, but this time we do need more than just "\(\T\) is linear.")

      • Regarding #23: The book's notation makes even me say "Huh?" To unravel the notation, you have to go back not just to exercise 1.6/ 18, but to Section 1.5, Example 2—at which point you may notice that what's being called a sequence in exercise 2.4/ 23 is not consistent with the definition of "sequence" in the Section 1.5 example. (The sequences in 2.4/ 23 are functions from the set of non-negative integers to \(\bfr\), rather than from the set of positive integers to \(\bfr\).)
            There are a couple of ways to fix this, and to write the definition of \(T\) more digestibly. One such way is this: Leave the definition of "sequence" in Section 1.5 Example 2 unchanged (with 1 being the initial index of every sequence), but instead of notation such as \(\sigma\) for a sequence, use notation such as \(\vec{a}\) for the sequence whose \(n^{\rm th}\) term is \(a_n\) (for every \(n\geq 1)\), and \(\vec{0}\) for the all-zero sequence. Then, in exercise 2.4/ 23, define \(\T\) by $$\T(\vec{a}) = \left\{ \begin{array}{ll} 0 & \mbox{if} \ \vec{a}=\vec{0},\\ \sum_{n=0}^{N-1} a_{n+1} x^n, &\mbox{where $N$ is the largest integer such that $a_N\neq 0,$ if $\vec{a}\neq\vec{0}$}.\end{array}\right. $$

    • Reminder: The reading portions of your assignments (like the other portions) are NOT OPTIONAL. In particular, this applies to handouts (including solutions handouts) that I've assigned you to read.

      Please remember that all my handouts are written to help you succeed, not to burden you with extra work. (They also are/were very time-consuming to write.) So please make sure you read them, with the goal of understanding, paying enough attention while reading that you remember what you've read. (Achieving this goal may require re-reading the same thing multiple times, possibly several weeks apart. If you pay attention while reading [no multi-tasking; see https://news.stanford.edu/stories/2009/08/multitask-research-study-082409], and genuinely want to understand, the material will keep seeping into your brain without you knowing it; your brain runs programs in the background, even while you sleep. How much calendar time is needed will vary enormously from student to student.)

    • T 4/1/25 Assignment 10  

    • Do these non-book problems. (Updated Sunday 8:30 a.m.)

    • 2.4/ 1, 4–9, 14, 16, 19.
      Regarding #8: we did at least half of this in class, but re-do it all to cement the ideas in your mind.
      Regarding 19(b): as we saw in an earlier assignment, whenever FIS asks you to "verify" a particular instance of a theorem, what the authors mean is, "Check, by direct computation, that the conclusion of the theorem is consistent with what your computation gives in this instance (or vice-versa)."

    • Read Section 2.5.
          Note: This textbook often states very useful results very quietly, often as un-numbered corollaries. One example of this is the corollary on p. 115, whose proof is one of your assigned exercises. There are other important results that the book doesn't even display as a corollary (or theorem, proposition, etc.), or even as a numbered equation. One example is the important matrix-product fact   "\( (AB)^t=B^tA^t\) " buried on p. 89 between Example 1 and Theorem 2.11.
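          A quick numerical illustration of that buried fact, with numbers of my own, which also shows that the order-reversal matters: $$ A=\left( \begin{array}{rr} 1&2\\ 3&4 \end{array}\right), \quad B=\left( \begin{array}{rr} 0&1\\ 1&1 \end{array}\right), \quad AB=\left( \begin{array}{rr} 2&3\\ 4&7 \end{array}\right), \quad (AB)^t=\left( \begin{array}{rr} 2&4\\ 3&7 \end{array}\right)=B^tA^t, $$ whereas \(A^tB^t=\left( \begin{array}{rr} 3&4\\ 4&6 \end{array}\right)\neq (AB)^t\).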

    • Inverses of \(2\times 2\) matrices arise so often that you should eventually find that you know the following by heart (like the way you know your Social Security number without ever trying to memorize it): the matrix \(A=\abcd\) is invertible if and only if \(ad-bc\neq 0\), in which case $$ \abcd^{-1}= \frac{1}{ad-bc} \left( \begin{array}{rr} d&-b\\ -c&a \end{array}\right).\ \ \ \ (*) $$

      Warning: Any version of equation (*) that you think is "mostly correct" (but isn't completely correct) is useless. Don't rely only on your memory for this formula. When you write down what you think is the inverse \(B\) of a given \(2\times 2\) matrix \(A\), always check (by doing the matrix-multiplication) either that \(AB=I\) or that \(BA=I\). (We showed in class why it's sufficient to do one of these checks.) This should take you only a few seconds, so there's never an excuse for writing down the wrong matrix for the inverse of an invertible \(2\times 2\) matrix.
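      For instance (numbers of my own): for \(A=\left( \begin{array}{rr} 2&1\\ 1&1 \end{array}\right)\) we have \(ad-bc=2\cdot 1-1\cdot 1=1\neq 0\), so by (*), $$ A^{-1}=\left( \begin{array}{rr} 1&-1\\ -1&2 \end{array}\right), \quad\mbox{and indeed}\quad \left( \begin{array}{rr} 2&1\\ 1&1 \end{array}\right) \left( \begin{array}{rr} 1&-1\\ -1&2 \end{array}\right)=\left( \begin{array}{rr} 1&0\\ 0&1 \end{array}\right)=I. $$ That final multiplication is the few-seconds check referred to above.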

    • W 4/2/25

      Second midterm exam

      Review the instructions on the cover page of your first exam. The instructions for the second exam will probably be identical; any changes would be minor.

            "Fair game" material for this exam will be everything we've covered (in class, homework, or the relevant pages of the book) up through all the material on isomorphisms, including everything in Assignment 10 other than the reading of Section 2.5. . The emphasis will be on material covered since the first midterm, but all of that relies on the earlier material, so effectively the exam is cumulative.

        Reminder: As the semester moves along, your mathematical writing is expected to improve. You are expected to have learned from corrections that were made on your graded quizzes and exams, or that were addressed in class. Various mistakes that may not have cost many (or any) points earlier in the semester will be more costly now.

        Failure to pick up your graded first exam or any quiz, after being absent when the graded work was returned in class, does not excuse ignorance of what mistakes of yours have been commented on or corrected. Nor does absence excuse continuing to make mistakes that I discussed in class when you were absent. You are always responsible for everything I've said in class, whether or not you were there.

      T 4/8/25 Assignment 11

      Expect this to be a long assignment. Do not take a vacation from linear algebra after the Wednesday 4/2 midterm and postpone starting to work on this assignment.

      2.5/ 1, 2, 4, 5, 6, 8, 11, 12

      Comment on #6.   Note that in this exercise, you are asked only to find the matrices \([L_A]_\b\) and \(Q\); you are not asked to figure out the matrix \(Q^{-1}\) or to use the formula "\([L_A]_\b=Q^{-1}AQ\)" in order to figure out \([L_A]_\b\) (which can be computed without knowing \(Q^{-1}\)—in fact, without even knowing \(Q\)). The Corollary on p. 115 tells us how to write down the matrix \(Q\) (in each part of #6) directly from the given basis \(\b\), with no computation necessary. Without even writing down \(Q\), the definition of "the matrix of a linear transformation with respect to given bases [or with respect to a single basis, for transformations from a vector space to itself]" tells everything that's needed to figure out such a matrix. For example, in 6c or 6d, letting \(w_1,w_2\), and \(w_3\) denote the indicated elements of \(\b\), we can proceed as follows:

      1. Compute \(L_A(w_1)\) (which is simply \( Aw_1\)).
      2. Express \(Aw_1\) as a linear combination of \(\{w_1,w_2,w_3\}\)—thus, as \(c_1w_1+c_2w_2+c_3w_3\)  for some \((c_1,c_2,c_3)\)—by solving the appropriate system of three equations in three unknowns, as you were doing in various exercises in Chapter 1.
      3. These coefficients \((c_1,c_2,c_3)\) form the first column of \([L_A]_\b\).
      4. Now repeat with \(Aw_2\) and \(Aw_3\) to get the second and third columns of \([L_A]_\b\).
      If we did want to compute \(Q^{-1}\)—the matrix that expresses the standard basis vectors \(e_1, e_2,\) and \(e_3\) in terms of \(\beta\)—we could do that by going through steps 2, 3, and 4 of the procedure above, but with \(Aw_i\)  replaced by \(e_i\).

          When we want to use the formula   " \([T]_{\b'}=Q^{-1}[T]_\b Q\) "   (not necessary in 2.5/ 6 !) in order to explicitly compute \([T]_{\b'}\) from \([T]_\b\) and \(Q\) (assuming the latter two matrices are known), we need to know how to compute \(Q^{-1}\) from \(Q\). The approach outlined in steps 2–4 above works, but is not very efficient for \(3\times 3\) and larger matrices. Efficient methods for computing matrix inverses aren't discussed until Section 3.2. For this reason, in some of the Section 2.5 exercises (e.g. 2.5/ 4, 5), the book simply gives you the relevant matrix inverse.
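      To make steps 1–4 concrete, here is a small instance of my own (a \(2\times 2\) analogue of 6c/6d, not a problem from the book): let \(A=\left( \begin{array}{rr} 1&2\\ 3&4 \end{array}\right)\) and \(\b=\{w_1,w_2\}\), where \(w_1=(1,1)^t\) and \(w_2=(1,-1)^t\). Then \(Aw_1=(3,7)^t\); solving \(c_1+c_2=3,\ c_1-c_2=7\) gives \(Aw_1=5w_1-2w_2\), so \((5,-2)\) is the first column. Similarly \(Aw_2=(-1,-1)^t=-w_1+0\,w_2\), giving second column \((-1,0)\). Hence $$ [L_A]_\b=\left( \begin{array}{rr} 5&-1\\ -2&0 \end{array}\right). $$ (As a consistency check: here \(Q=\left( \begin{array}{rr} 1&1\\ 1&-1 \end{array}\right)\), and computing \(Q^{-1}AQ\) does give the same matrix.)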

    • Practice writing definitions!! Since Assignment 4, this has implicitly been a part of every assignment. For every object or property we've defined in this class, you should be able to write a definition that's nearly identical either to the one in the book or one that I gave in class (or in a handout). Some things to check:
      • Are the book's definitions (or mine, in handouts) written in complete sentences? Are yours?
      • Does the book (or do my handouts) start any sentences with math symbols, e.g. "\(\N(\T)\)"? Do you?

    • Practice writing the statements and proofs of results we've proven!! Since Assignment 5, this has implicitly been a part of every assignment. Some things to check:
      • Are there any theorems in the book, or my handouts, that have only a conclusion, with no explicit hypotheses?
      • Are there any theorems whose hypotheses (including those that introduce notation) are written after the conclusion, as an afterthought?

    • Read Section 3.1.

    • 3.1/ 1, 3–8, 10, 11. (I didn't get to discuss Section 3.1 material on Friday, but you should easily be able to do these exercises based just on the reading.) Some notes on these problems:

      • 1(c) is almost a "trick question". If you get it wrong and wonder why, the relevant operation is of type 3. Note that in the definition of a type 3 operation, there was no requirement that the scalar be nonzero; that requirement was only for type 2.

      • In #7 (proving Theorem 3.1), you can save yourself almost half the work by (i) first proving the assertion just for elementary row operations, and then (ii) applying #6 and #5 (along with the fact "\((AB)^t=B^tA^t\) " stated and proven quietly on p. 89).

      • In #8, I don't recommend using the book's hint, which essentially has you repeating labor done in #7 instead of benefiting from the fruits of that labor. Instead I would just use the result of #7 (Theorem 3.1) and Theorem 3.2. (Observe that if \(B\) is an \(n\times n\) invertible matrix, and \(C,D\) are \(n\times p\) matrices for which \(BC=D\), we have \(B^{-1}D= B^{-1}(BC)=(B^{-1}B)C=IC=C,\) where \(I=I_{n\times n}\). [Note how similar this is to the argument that if \(c,x,y\) are real numbers, with \(c\neq 0\), the relation \(y=cx\) implies \(x=\frac{1}{c}y = c^{-1}y\). Multiplying a matrix on the left or right by an invertible matrix (of the appropriate size) is analogous to dividing by a nonzero real number. But in the matrix case, we don't call this operation "division".])

    • Practice writing definitions!! Since Assignment 4, this has implicitly been a part of every assignment. For every object or property we've defined in this class, you should be able to write a definition that's nearly identical either to the one in the book or one that I gave in class (or in a handout). Some things to check:
      • Are the book's definitions (or mine, in handouts) written in complete sentences? Are yours?
      • Does the book (or do my handouts) start any sentences with math symbols, e.g. "\(\N(\T)\)"? Do you?

    • Practice writing the statements and proofs of results we've proven!! Since Assignment 5, this has implicitly been a part of every assignment.

    • Read Section 3.2, except for (i) the statement and proof of Corollary 1 (which isn't important enough to be the best use of your time), (ii) the proof of Theorem 3.6 and (iii) the proof of Theorem 3.7.

        My route to the results in Sections 3.2–3.4 will be different from the book's, and will use terminology, introduced in class on Friday Apr. 4, that's not in the book: column space, row space, column rank, and row rank. Whether or not you were present on Friday, I will expect you to know this terminology before the start of class on Monday. This very useful terminology is not my own; it just happens to be absent from this textbook.
           Note that once column rank is defined, my definition of row rank is equivalent to: \(\mbox{row-rank}(A) = \mbox{column-rank}(A^t)\).
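           A tiny illustration of this terminology (my example): for $$ A=\left( \begin{array}{rr} 1&2\\ 2&4 \end{array}\right), $$ the column space is \(\span\{(1,2)^t\}\) and the row space is \(\span\{(1,2)\}\), so \(\mbox{column-rank}(A)=1=\mbox{row-rank}(A)\). (As we'll see, the equality of the two ranks is not an accident.)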

           You've already covered some of Section 3.2's results in Assignment 10's non-book problems. The version of Theorem 3.7(b) you proved in problem NB 10.3(b) is stronger than the one in the book, since the homework problem does not assume that the vector space \(Z\) is finite-dimensional. Problems NB 10.2 and 10.3(a) led you to a proof of Theorem 3.7(b) that's more fundamental and conceptual, as well as more general, than the proof in the book. Parts (c) and (d) of Theorem 3.7 then follow from parts (a) and (b), just using the book's definition of rank of a matrix (which, as I showed in Friday's class, is equivalent to my definition of column rank).

            Problem NB 10.3(b) (or, more weakly, Theorem 3.7ab) is an important result that has instructive, intuitive proofs that, as the homework problem shows, in no way require matrices (or anything in the book beyond Theorem 2.9). For my money, the book's proof of Theorem 3.7(b) is absurdly indirect, gives the false impression that matrix-rank needs to be defined before proving this result, further gives the false impression that Theorem 3.7 needed to be delayed until after Theorem 3.6 and one of its corollaries (Corollary 2(a), p. 156) had been proven, and obscures the intuitive reason why the result is true (namely, linear transformations never increase dimension).

            A note about Theorem 3.6:   The result of Theorem 3.6 is pretty, and can be used to derive various other results quickly. However, the book greatly overstates the importance of Theorem 3.6; there are other routes to any important implication of this theorem. And, as the authors warn in an understatement, the proof of this theorem is "tedious to read". There's a related theorem in Section 3.4 (Theorem 3.14) that's less pretty but gives us all the important consequences that the book gets from Theorem 3.6, and whose proof is a little shorter. Rather than struggling to read the proof of Theorem 3.6, you'll get much more out of doing enough examples to convince yourself that you understand why the result is true, and that you could write out a careful proof (if you had enough time and paper). That's essentially what the book does for Theorem 3.14; the authors don't actually write out a proof the way they do for Theorem 3.6. Instead, they outline a method from which you could figure out a (tedious) proof. This is done in an example (not labeled as an example!) on pp. 182–184, though the example is stated in the context of solving systems of linear equations rather than just for the relevant matrix operations.

    • Practice writing definitions!!

    • Practice writing the statements and proofs of results we've proven!!

    • 3.2/ 1–3, 5 (the "if it exists" should have been in parentheses; it applies only to "the inverse", not to "the rank"), 6(a)–(e), 11, 14, 15, 21, 22. See the notes on some of these exercises below.
        Hopefully, on Monday I'll get to do some examples of the methods developed in Section 3.2. But examples such as Examples 3, 4bc, 5, 6, and 7 would be an inefficient use of class time if I were to go through all the steps. (These can be done on paper much more quickly than they can be done at the blackboard. Writing on the board, and saying everything I'm writing, and answering questions along the way, can take five times as long as it would take you to work them out on paper.) So don't wait to start working through the book's examples, getting some practice with the methods, even if you haven't finished reading the theorems and proofs in Section 3.2, and even if I haven't gotten there yet in class.

        Some notes on the Section 3.2 exercises:

        • In #6, one way to do each part is to introduce bases \(\beta, \gamma\) for the domain and codomain, and compute the matrix \([T]_\beta^\gamma\). Remember that the linear map \(T\) is invertible if and only if the matrix \([T]_\beta^\gamma\) is invertible. (This holds no matter what bases are chosen, but in this problem, there's no reason to bother with any bases other than the standard ones for \(P_2({\bf R})\) and \({\bf R}^3\).) One part of #6 can actually be done another way very quickly, if you happen to notice a particular feature of this problem-part, but this feature might not jump out at you until you start to compute the relevant matrix.
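          To give the flavor of the matrix method (with a made-up map, not one of the book's): define \(T:P_2({\bf R})\to {\bf R}^3\) by \(T(f)=(f(0),f(1),f(2))\). Taking \(\beta=\{1,x,x^2\}\) and \(\gamma\) the standard basis of \({\bf R}^3\), we get \([T]_\beta^\gamma=\left(\begin{array}{rrr} 1&0&0\\ 1&1&1\\ 1&2&4\end{array}\right)\), whose columns are easily checked to be linearly independent; hence the matrix, and therefore this \(T\), is invertible.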

        • Exercises 21 and 22 can be done very quickly using results from Assignment 10's non-book problems. (You figure out which of those problems is/are the one(s) to use!)

    • In Section 3.3, read up through Example 6.

    • Practice writing definitions!!

    • Practice writing the statements and proofs of results we've proven!!
    • T 4/15/25 Assignment 12  

    • Read the solutions to Exam 2 posted on Canvas (revised significantly on 4/9/25).

      Given the very poor overall performance of the class on Exam 2, I'm now concerned that by showing you previous exams of mine, I may actually be hurting your performance. That would be the case if you're (mis)using the old exams as a way of trying to guess what questions (or topics) you're likely to see on your exam, instead of studying everything you're expected to know or be able to do. I'm strongly considering not showing you a previous final exam. There are too many possible topics you're responsible for, and if I'm giving you materials that tempt you to study less, I'm making it more likely that you'll do badly on the final exam.

    • Read Section 3.4 up through Theorem 3.16. (I already stated much of this theorem on Wed. 4/9, but did not have time that day to start proving the parts I'd stated.) For the purposes of this class, the corollary following Theorem 3.16 is not important, other than to give us the convenience of saying "the RREF" of a matrix \(A\) rather than "a  RREF of \(A\)."
          Note: The usage of the term "the RREF of \(A\)" in Theorem 3.16 is premature, since the term does not make sense till after the corollary following the theorem is proven. For the same reason, the wording of the corollary itself is imprecise. Better wording would be "Every matrix has a unique RREF," after which we can unambiguously refer to the RREF of a given matrix.

      Note: Other than to understand what some assigned exercises are asking you to do, I do not care whether you know what the term "Gaussian elimination" means. I never use the term myself. As far as I'm concerned, "Gaussian elimination" means "solving a system of linear equations by (anything that amounts to) systematic row-reduction," even though that's imprecise. Any intelligent teenager who likes playing with equations could discover "Gaussian elimination" on his/her own. Naming such a procedure after Gauss, one of the greatest mathematicians of all time, is like naming finger-painting after Picasso.

    • 3.4/ 1, 2, 7–13. Theorem 3.16(c) is key to the 7–13 group; re-read from the diamond-symbol on p. 191 through p. 193 if you're having trouble with these.
          Note: On p. 191, after the diamond-symbol, the authors' goal is to "streamline a procedure" that was illustrated in Section 1.6. For this reason they write the indicated homogeneous system of three equations in five unknowns; its \(3\times 6\) augmented matrix; and the RREF of this augmented matrix. But we can now accomplish the final goal, finding a subset of the five given vectors that's a basis of their span (which happens to be all of \(\bfr^3\)), with much less writing. The goal in this problem is NOT to find the solution-set of the indicated system of equations; that's the answer to a different type of question (although the questions are related). For the current problem, all we need to do is row-reduce the \(3\times 5\) matrix whose columns are the five given vectors, ending up with the \(3\times 5\) matrix obtained by deleting the last column of the book's matrix \(B\). Writing that never-changing all-zero 6th column, in every step from start to finish, is a waste of time (for the current problem; it wouldn't be if our goal were to solve the system of equations). A small worked example of this shortcut appears after the next paragraph.
          On the next page, in Example 3, the authors use the more efficient approach, but you have to read carefully to see that. (You should always be reading carefully, but in this particular instance, with the prominent visual display on p. 191 and the lack of anything comparable on p. 192, I think the presentation of Example 3 buries the lead. [Look up "burying the lead" if you don't know the expression.])
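          The promised worked example of the shortcut (with made-up vectors, not the book's): to find a subset of \(v_1=(1,2,1),\ v_2=(2,4,2),\ v_3=(0,1,1),\ v_4=(1,3,2),\ v_5=(0,0,1)\) that's a basis of their span, row-reduce the \(3\times 5\) matrix whose columns are these vectors: \(\left(\begin{array}{rrrrr} 1&2&0&1&0\\ 2&4&1&3&0\\ 1&2&1&2&1\end{array}\right)\) has RREF \(\left(\begin{array}{rrrrr} 1&2&0&1&0\\ 0&0&1&1&0\\ 0&0&0&0&1\end{array}\right)\). The pivots occur in columns 1, 3, and 5, so \(\{v_1,v_3,v_5\}\) is a basis of \({\rm span}(\{v_1,\dots,v_5\})\) (which, since there are three pivots, is all of \(\bfr^3\)). No all-zero sixth column is needed anywhere in the computation.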

    • 3.3/ 1–5, 7–10. In #9, there is practically nothing to do; the point of the exercise is to help you realize that the definition of "\(Ax=b\) has a solution" is exactly the same as the definition of "\(b\in \sfr(\sfl_A)\)" combined with the definition of \(\sfl_A\).
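          Spelled out (nothing deep here): \(\sfl_A:\bfr^n\to\bfr^m\) is the map defined by \(\sfl_A(x)=Ax\), and \(\sfr(\sfl_A)=\{\sfl_A(x): x\in\bfr^n\}\). So the statement "\(b\in \sfr(\sfl_A)\)" says exactly that there exists \(x\in\bfr^n\) with \(Ax=b\), i.e. that \(Ax=b\) has a solution.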

    • In Chapter 4:
      • Skim Section 4.1.
      • Read Section 4.2.
      • Read Section 4.3 up through just before Theorem 4.9, and skim from Theorem 4.9 through the end of the section (unless you have the time and interest to read it in depth).
        I am not holding you responsible for the formula in Theorem 4.9. (Cramer's Rule is just this formula, not the whole theorem.) You are responsible for knowing, and being able to show, that if \(A\) is invertible, then \(A{\bf x}={\bf b}\) has a unique solution, namely \(A^{-1}{\bf b}\).
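        (One quick argument, for reference: if \(A\) is invertible, then \(A(A^{-1}{\bf b})=(AA^{-1}){\bf b}={\bf b}\), so a solution exists; and if \(A{\bf x}={\bf b}\), then \({\bf x}=(A^{-1}A){\bf x}=A^{-1}(A{\bf x})=A^{-1}{\bf b}\), so there is no other solution.)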

      • Instead of Section 4.4, read my own summary of some facts about determinants on my Fall 2024 homework page, Assignment 13b, minus (for now, at least) item 13 ("Determinants and volume").
           The title of FIS Section 4.4 is somewhat misleading, and I'm not satisfied with the content either. The book's "summary" omits many important facts, and intersperses its summarized facts with uses of these facts (so that the summarized facts don't appear in a single list). The (unlabeled) examples on pp. 233–235 are useful, instructive, and definitely worth reading, but hardly belong in a summary of  facts   about determinants.

      Determinants are important, but we simply don't have enough time to cover them the way we should. So, with regret, I won't be spending class time going over them, or proving any of their properties. In Calculus 3 (and perhaps other courses) you saw how to define and compute \(2\times 2\) and \(3\times 3\) determinants, so at least you're already somewhat familiar with them. For purposes of this class, this semester, you may just take on faith that the statements in my Fall 2024 summary are true; I will expect you to know and be able to use all the properties there (except perhaps the volume-related ones).

      I will also expect you to know the recursive definition of \(n\times n\) determinants, and be able to compute with it. At the bottom of the Miscellaneous Handouts page, the handout "Using elementary operations to compute determinants" gives some time-saving techniques that can greatly facilitate the computation of determinants. (Note: if the way you learned to compute \(3\times 3\) determinants involved copying columns 1 and 2 to the right of column 3, then drawing certain diagonal lines, please purge that from your memory. It does not generalize to \(n\times n\) matrices with \(n>3\), and for \(n=3\) it has no time-saving advantage over the standard definition. It can't really help you, and it can harm you.)
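      For example (an illustration of the row-operations technique, not taken from the handout): adding a multiple of one row to another doesn't change the determinant, so \(\det\left(\begin{array}{rrr} 2&1&3\\ 0&1&4\\ 2&1&5\end{array}\right) = \det\left(\begin{array}{rrr} 2&1&3\\ 0&1&4\\ 0&0&2\end{array}\right) = 2\cdot 1\cdot 2=4\), where the second matrix is obtained by subtracting row 1 from row 3, and the determinant of an upper-triangular matrix is the product of its diagonal entries. (Expanding the original determinant recursively along its first column gives the same answer, \(2(5-4)+2(4-3)=4\), with more arithmetic.)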

      After we're done with Chapter 3, we'll cover Chapter 5 (as much as we can get to). The material there is something that you're not likely to have seen before, and it's much more important that we use whatever class time remains after Chapter 3 on Chapter 5, rather than sacrifice any Chapter-5 class time to cover Chapter 4. Chapter 5 uses determinants, however, so it's important that you know how to compute them, and what their basic properties are, by the time we start Chapter 5.

    • (This part of the assignment can be postponed.) In the "Polynomials and polynomial functions" handout, I've added a couple of things:
      • A brief, final section (Section 4, pp. 10–11) that I didn't include originally because we hadn't covered isomorphisms yet. Read at least as far as the paragraph after the proof of Proposition 4.1. You may treat Remark 4.2 as optional reading.

      • A subsection of Section 3, "Aspects of polynomial functions that have no analog for abstract polynomials" (p. 9). The purpose is to give you additional help in understanding why polynomials and polynomial functions are not actually the same thing. Read as much of this as you find helpful for that purpose.
    • T 4/22/25 Assignment 13  

    • Read Section 5.1.
             When you get to the Corollary on p. 247, I suggest that you read the last sentence first. That will give you a more concrete idea of what a diagonalizable matrix is: A matrix \(A\) is diagonalizable iff there exists an invertible matrix \(Q\) such that \(Q^{-1}AQ\) is a diagonal matrix (equivalently: iff \(A=QDQ^{-1}\) for some invertible matrix \(Q\) and diagonal matrix \(D\)). Otherwise, in that corollary, you may get lost in the weeds, and the important last sentence may become less memorable. Once you're confident with that sentence, it's okay to go back and read the rest of the Corollary.
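             For a small (made-up) example of this: if \(A=\left(\begin{array}{rr} 4&1\\ 2&3\end{array}\right)\), then \(\left(\begin{array}{r} 1\\ -2\end{array}\right)\) and \(\left(\begin{array}{r} 1\\ 1\end{array}\right)\) are eigenvectors of \(A\) with eigenvalues 2 and 5 respectively, and taking \(Q=\left(\begin{array}{rr} 1&1\\ -2&1\end{array}\right)\) (the matrix whose columns are these eigenvectors) gives \(Q^{-1}AQ=\left(\begin{array}{rr} 2&0\\ 0&5\end{array}\right)\). So this \(A\) is diagonalizable.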

    • 5.1/ 1, 2, 3abc, 4abd, 5abcdhi, 7–13, 16, 18, 20.
          I recommend doing 5hi by directly using the definition of eigenvector and eigenvalue rather than by computing the matrix of \({\sf T}\) with respect to a basis of \(M_{2\times 2}({\bf R})\). (I.e., take a general \(2\times 2\) matrix \(A=\left(\begin{array}{cc} a & b\\ c& d\end{array}\right) \neq \left(\begin{array}{cc} 0&0\\ 0&0\end{array}\right)\) and \(\lambda\in{\bf R}\), set \({\sf T}(A)\) equal to \(\lambda A\), and see where that leads you. For an illustration of this approach with a different, made-up operator, see the note below.)
          The wording of 18(d) is a good example of bad writing. The sentence should have begun with "[F]or \(n>2\)," not ended with it.
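          The promised illustration (with a made-up operator, not the book's): suppose \({\sf T}:M_{2\times 2}({\bf R})\to M_{2\times 2}({\bf R})\) were defined by \({\sf T}(A)=\left(\begin{array}{cc} 0&1\\ 1&0\end{array}\right)A\), which swaps the two rows of \(A\). Then with \(A=\left(\begin{array}{cc} a&b\\ c&d\end{array}\right)\), the equation \({\sf T}(A)=\l A\) reads \(\left(\begin{array}{cc} c&d\\ a&b\end{array}\right)=\left(\begin{array}{cc} \l a&\l b\\ \l c&\l d\end{array}\right)\), i.e. \(c=\l a,\ d=\l b,\ a=\l c,\ b=\l d\). These force \(a=\l^2a,\ b=\l^2b,\ c=\l^2c,\ d=\l^2d\), so if \(A\neq 0\) then \(\l^2=1\). Checking the two cases: the eigenvectors for \(\l=1\) are the nonzero matrices whose two rows are equal, and the eigenvectors for \(\l=-1\) are the nonzero matrices whose rows are negatives of each other.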

    • In Section 5.2:
      1. Read the first two paragraphs.
      2. Before reading the statement of Theorem 5.5, read the following easier-to-understand special case:

          Theorem 5.4\(\frac{1}{2}\) (Nickname: "Eigenvectors to different eigenvalues are linearly independent")    Let  \(T\)  be a linear operator on a vector space. Suppose that  \(v_1, \dots, v_k\)  are eigenvectors of \(T\) corresponding to distinct eigenvalues  \(\l_1, \dots, \l_k\)   respectively. (Remember that "distinct" means   \(\l_i\neq \l_j\)  whenever  \(i\neq j.\)) Then the list  \(\{v_1, v_2, \dots, v_k\}\)  is linearly independent.

        Although "Theorem 5.4\(\frac{1}{2}\)" is a special case of FIS's Theorem 5.5, and the proof I've given in a handout (see below) occupies more space than the book's proof of the more general theorem, I think you'll find my proof easier to read, comprehend, and reproduce, partly because the notation is much less daunting.

      3. Read the handout "Linear independence of eigenvectors to distinct eigenvalues" posted on the Miscellaneous Handouts page. The handout has a proof of "Theorem 5.4\(\frac{1}{2}\)" (numbered Theorem 1 in the handout), some comments, and two corollaries. The second corollary is exactly FIS Theorem 5.5.

      4. Continue reading Section 5.2 up through Example 7 (p. 271).
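
        To get a feel for why "Theorem 5.4\(\frac{1}{2}\)" is true, here is one standard way to see the \(k=2\) case (the handout proves the general case): suppose \(a_1v_1+a_2v_2=0\). Applying \(T\) to both sides gives \(a_1\l_1v_1+a_2\l_2v_2=0\), while multiplying the original equation by \(\l_2\) gives \(a_1\l_2v_1+a_2\l_2v_2=0\). Subtracting, \(a_1(\l_1-\l_2)v_1=0\); since \(\l_1\neq\l_2\) and \(v_1\neq 0\) (eigenvectors are nonzero by definition), \(a_1=0\). Then \(a_2v_2=0\), so \(a_2=0\) as well.
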
    • Read the handout "change of coordinates (notes).pdf" posted on Canvas under Files.

    • 5.2/ 1, 2abcdef, 3bf, 7, 10.
          For 3f, see my recommendation above for 5.1/ 5hi. In #7, you're supposed to find an explicit formula for each of the four entries of \(A^n\), as was done for a different \(2\times 2\) matrix \(A\) in an example in Section 5.2.
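          For the mechanics of computing \(A^n\) (illustrated with the made-up matrix from the Section 5.1 note above, not with the book's matrix): if \(A=QDQ^{-1}\) with \(D\) diagonal, then \(A^n=(QDQ^{-1})(QDQ^{-1})\cdots(QDQ^{-1})=QD^nQ^{-1}\), since all the interior \(Q^{-1}Q\) factors cancel. For \(A=\left(\begin{array}{rr} 4&1\\ 2&3\end{array}\right)=Q\left(\begin{array}{rr} 2&0\\ 0&5\end{array}\right)Q^{-1}\) with \(Q=\left(\begin{array}{rr} 1&1\\ -2&1\end{array}\right)\), this gives \(A^n=Q\left(\begin{array}{rr} 2^n&0\\ 0&5^n\end{array}\right)Q^{-1}=\frac{1}{3}\left(\begin{array}{rr} 2^n+2\cdot 5^n & 5^n-2^n\\ 2\cdot 5^n-2^{n+1} & 2^{n+1}+5^n\end{array}\right)\), an explicit formula for each of the four entries. (Sanity check: \(n=1\) gives back \(A\).)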

    • Do these non-book problems.
    • Before the final exam Assignment 14  

    • 5.1/ 14–18

    • 5.2/ 11, 13

    • Do this non-book problem.
    • THURSDAY 5/1/25

      Final Exam
            Location: Our usual classroom
            Starting time: 7:30 a.m.

      As I mentioned in a recent email, the exam-date info on One.UF has reverted to being wrong. IGNORE ONE.UF for final-exam-date info for this class. The correct date and time, Thursday May 1 at 7:30 a.m., have always been the ones in the syllabus.


      Class home page