Homework problems will be assigned daily and collected weekly on Tuesdays (possibly with some exceptions).
UPDATE 9/24/2022. In view of how time-consuming it is for you to neatly write up your work on every assigned problem, starting with Assignment 5 (due 9/27/2022) I'll collect only a (usually proper) subset of the exercises.
To help motivate you to do ALL the assigned problems—rather than waiting till I've announced the hand-in problems, and then making those the only ones you do—I will not announce which ones I'm collecting until shortly before they are due. (I'm not committing to specific timing of "shortly before", but my expectation is that it will be 2–3 days.)
Don't make the mistake of thinking that I'm collecting only the problems I think are important. You still need to do the entire assignment. The only reason I'm collecting less than 100% of it is that it can take so much more of your time to write a solution that's neat and follows my formatting rules, than it does just to figure out a solution.
If there's a homework problem you don't get around to doing, assume that it will show up on an exam. Don't just take the word of your old-geezer math professor about this; Andres can talk to you about this from personal experience. The universe is out to get you.
It would be very unwise to procrastinate, waiting to see which problems I'm going to require you to hand in before deciding which problems to work on.
What should happen for each assignment, going forward, is this:
- First, you do all parts of the assignment, spreading the work out over the whole week as new problems get posted. If the assignment lists some reading before a set of exercises (which may happen more than once in a given assignment), do that reading before doing those exercises. I will still be building each assignment as we go along, adding each exercise as soon as we've covered the material needed for it. Just as before, you should start working on each exercise within a day of its first posting.
- Second, you neatly write up the hand-in problems, following all the other rules I've given. Keep your work on the other problems for help when you're reviewing—or in case I ever ask you to show me your work to prove that you've been doing all the problems I've assigned.
Previously, I've used this partial-collection system only in Advanced Calculus and graduate classes. I have considerable reservations about changing to this system in MAS 4105. Although the partial-collection system will allow you to make better use of your time (by saving time with your write-ups, not with working out the exercises), it will also give you more rope with which to hang yourself. Experience shows that some students will do the latter, no matter how much they're warned. In this class (as in most!), it's clear already that since I can't collect whatever reading I assign, many students don't take it seriously. The same will probably happen with exercises that I don't collect. But be warned: when I construct your exams, I'll assume that you're just as familiar with the uncollected homework exercises as with the collected ones. If you fall behind, there won't be enough time later for you to catch up.
The homework will (still) be graded by your TA, Andres Zuniga. I'm expecting to collect more problems than the number Andres has typically been grading, so you should still expect that only some of the assigned problems will be graded, although a greater fraction than before. I will not announce in advance which problems will be graded, since too many students would then not do the rest of the problems—even when warned that those problems may appear on exams. To learn mathematics, you need to do many more problems than there is time to grade in any but the most cursory fashion.
The assignments and due dates are listed later on this page (scroll down to the Assignments chart). Note that this page has a "last updated" line near the top. For each assignment the problem-list (and other components, if any) will be updated frequently, based on how far we got in the previous lecture. Usually these updates will be made a few hours after the lecture. Sometimes there will be further updates to correct typos, provide clarification, etc.
You are responsible for checking this page frequently, within a day of each lecture. I will not send out a notice each time I update this page. You should start working on problems the day that they're added to this page (unless I say otherwise); it will be unwise to leave a week's worth of homework to the last day.
Since the assignments will be built as we go along, you will see a "NOT COMPLETE YET" or "POSSIBLY NOT COMPLETE YET" notice for each assignment until that assignment's listing is complete.
Unless otherwise indicated, problems are from our textbook. A problem listed as (say) "2.3/ 4" means exercise 4 at the end of Section 2.3.
Homework Rules
Academic honesty
On all work submitted for credit by students at the University of Florida, the following pledge is implied: "On my honor, I have neither given nor received unauthorized aid in doing this assignment."
No aid is authorized that involves anything but (i) your own brain, (ii) your notes from this class or prerequisite classes, (iii) handouts from me, or other material I post on the class webpages (which includes the MHF3202 textbook, in case you need it for review), (iv) discussions with me or the TA, (v) LIMITED consultation with classmates in this section of MAS 4105 (see below), and (vi) the current textbook for this class. (But if there's some type of aid that you're not sure I consciously intended to exclude, don't be afraid to ask me about it!) You are permitted to work on homework with other students in this section as long as you write up your solutions on your own. (This is the "limited consultation with classmates" mentioned above.) If I suspect that your homework is not your own work, I may give you an oral exam on the homework questions.
Any infringement of the spirit, not just the letter, of these restrictions, will be considered a violation of the Student Honor Code, and may result in your receiving a failing grade for the course.
Submission Rules
Homework will be collected at the BEGINNING of the class period on the hand-in day, and must be completed and stapled together before that period begins.
Even when homework is well written, reading and grading it is very time-consuming. In order that this process not be more burdensome than it intrinsically needs to be:
- The homework you hand in must be neat, and must either be typed or written in pen or DARK pencil. Anything that is difficult to read will be returned to you ungraded.
Do not turn in homework that is messy, has faint writing, or has anything that's been erased and written over (or written over without erasing). "Written over without erasing" includes not just superimposing a new letter on an old one, but writing something in pencil and then tracing over it in pen. (The latter practice leads to an eye-straining "double vision" effect. Please don't do it.)
If you are writing on both sides of a sheet of paper, do not use paper/ink/pencil combinations for which the writing on one side of the paper shows on the other side.
- Work everything out for yourself on scrap paper first. Then carefully rewrite (or typeset) what you're handing in on clean sheets of 8.5" x 11" plain, white, unlined printer paper with no holes. You can buy a pack of 500 sheets for a few dollars (or split the package and cost with a classmate), and it will be more than enough for the whole semester. Do not use any other type of paper (e.g. notebook paper or looseleaf paper).
- Use WIDE (1.75") margins (left, right, top, and bottom) and enough space between lines of writing (similar to double-spacing if you were typing), so that it is EASY for a grader to insert corrections (or comments) adjacent to what's being corrected (or commented on). If you squeeze words in at the bottom, sides, or top of a page, do not expect that work to be graded or to receive any credit.
To make sure that your handwritten work has acceptable margins and spacing, you can print out p. 6 of the pdf file produced by my LaTeX template and keep it next to you when you're writing. This page is typeset, of course, but has the same margins and spacing I want in handwritten homework.
- Staple your sheets together in the upper left-hand corner. No other means of attachment is allowed. Make sure that your staple is close enough to the corner that when the reader turns pages, nothing that you've written is obscured. (If you have trouble stapling this way, you haven't left wide enough margins at the left side and/or top of the page, and should rewrite your homework.)
- Write in complete, unambiguous, grammatically correct, and correctly punctuated sentences and paragraphs, as you would find in your textbook.
- You are not permitted to use the following symbols in place of words: \( \forall, \exists, \Longrightarrow, \Longleftarrow,\iff, \vee, \wedge,\) and any symbol for logical negation (e.g. \(\sim\)). (Note: the double-arrows \( \Longrightarrow, \Longleftarrow,\) and \(\iff\) are implication arrows. Single arrows do not represent implication, so you may not use them to substitute for the double-arrow symbols.)
On your exams, to save time you'll be allowed to use the symbols \(\forall, \exists\), \(\Longrightarrow, \Longleftarrow\), and \(\iff\), but you will be required to use them correctly. The handout Mathematical grammar and correct use of terminology, assigned as reading in Assignment 0, reviews the correct usage of these symbols.
Even on exams, you will not be allowed to use the symbols \(\wedge\) and \(\vee\), or any symbol for logical negation of a statement. Symbols for negation are highly author-dependent. Symbols for "and" and "or" are used essentially as "training wheels" in courses like MHF 3202 (Sets and Logic). The vast majority of mathematicians never use these symbols to mean "and" or "or"; they use \(\wedge\) and \(\vee\) with standard meanings that are quite different.
LaTeX. (This is completely optional! You're not required even to read this paragraph. I've inserted it just because some students might find it helpful.) LaTeX is a mathematical word-processing software system that has become the standard for research papers in (at least) mathematics, statistics, and physics. Some students may find it easier to typeset their homework in LaTeX than to try to write neatly or to leave wide margins. So, for any students who would like to teach themselves LaTeX and use it for their homework, here is a source-file template that includes some useful commands. To use LaTeX, you'll need to install some version on your computer. (Legitimate versions of LaTeX, such as MiKTeX, are available for free.) Once you've done the installation, my template file (or any ".tex" file) should open automatically when you click on it. (But if you want to look at the template before you've installed LaTeX, open it with whatever you'd use to read a plain-text file.) An easy way to get started writing in LaTeX, once you've installed it, is by copying a sample source-file (for example, my template), replacing the text there with your own, and experimenting with commands in the source-file to see how things work.
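If you'd like a sense of what a LaTeX source file looks like before downloading the template, here is a minimal sketch (mine, not the course template; the actual template contains more useful commands and the required margin settings):

    \documentclass[12pt]{article}
    \usepackage{amsmath,amssymb}  % standard math packages

    \begin{document}

    \textbf{Problem 1.2/13.}  % label each problem clearly

    The set $V=\{(a_1,a_2): a_1,a_2\in\mathbb{R}\}$ with the given
    operations is not a vector space: property (VS~8) fails, since \dots

    \end{document}

Saving this as a ".tex" file and compiling it produces a typeset page; from there you can experiment with the commands in the real template.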
Assignments
Date due: Assignment
F 8/26/22 Assignment 0 (just reading)
Read the Class home page, and the Syllabus and course information handouts.
Read the Homework Rules above. Go to the miscellaneous handouts page and read the handouts "What is a proof?" and "Mathematical grammar and correct use of terminology". (Although this course's prerequisites are supposed to cover most of this material, most students still enter MAS 4105 without having had sufficient feedback on their work to eliminate common mistakes or bad habits.)
I recommend also reading the handout "Taking and Using Notes in a College Math Class," even though it is aimed at students in Calculus 1-2-3 and Elementary Differential Equations.
T 8/30/22 Assignment 1
Read these tips on using your book. In the handout "Sets and Functions" linked to the Miscellaneous Handouts page, read at least the section on functions (pp. 6–9).
1.1/ 1–3, 6, 7. Note: As I mentioned in class, the book's term "the equation of a line" is misleading, even if the word "parametric" is included, since the same line can be described by more than one parametric equation. In fact, there are infinitely many parametric equations for the same given line. All of the above applies to planes as well as lines.
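For instance (an illustration of mine, not one from the book): in \({\bf R}^2\), the parametric equations $$x(t)=(t,\,t),\qquad t\in{\bf R}$$ and $$x(s)=(1+2s,\,1+2s),\qquad s\in{\bf R}$$ both describe exactly the same line (the line through the origin with direction vector \((1,1)\)); replacing the direction vector by any nonzero multiple of itself, or shifting the starting point to any other point on the line, gives yet another parametric equation for that line.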
1.2/1–4, 7, 8, 10, 12, 13, 17–20.
Note: To show that a given set with given operations is not a vector space, it suffices to show that one of the properties (VS 1)–(VS 8) does not hold. So in each of the exercises in the 13–20 group, if the object you're asked about turns out not to be a (real) vector space, an appropriate answer is "No; (VS n) does not hold" (where n is an appropriate one of the numbers 1, 2, ..., 8), together with an example in which (VS n) fails. Even if two or more of (VS 1)–(VS 8) fail, you only need to mention one of them that fails. In your own notes, to keep for yourself, it's fine if you record your analysis of each of the properties (VS 1)–(VS 8), but don't hand that in.
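To illustrate the expected style of answer (a made-up example of mine, not one of the assigned exercises): let \(V={\bf R}^2\) with the usual addition, but with scalar multiplication defined by \(c(a_1,a_2)=(ca_1,0)\). Then (VS 5) fails: $$1\cdot(1,1)=(1,0)\neq(1,1),$$ so an appropriate answer would be "No; (VS 5) does not hold," together with this example.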
However, to show that something is a vector space, you have to show that each of (VS 1)–(VS 8) holds.
T 9/6/22 Assignment 2
1.2/ 14, 21.
In #14, remember that "C" is the book's notation for the set of complex numbers. As the book mentions, \(C^n\), with the indicated operations, is a complex vector space (a vector space over the field \(C\)). However, what you're being asked in #14 is whether \( C^n \), with the indicated operations, is also a real vector space.
In #21, the vector space \({\sf Z}\) is called the direct sum of \({\sf V}\) and \({\sf W}\), usually written \( {\sf V} \oplus {\sf W} \). (Sometimes the term "external direct sum" is used, to distinguish this "\( {\sf V} \oplus {\sf W} \)" from something we haven't defined yet that's also called a direct sum.)
In Section 1.3, read Examples 2–4, as well as the paragraph just before Example 3.
1.3/ 1–8; 9 except for part (d); 10, 13, 15
I meant the list above to be "1–7; 8 except for part (d); 9, 10, 13, 15" but there's no way you could have known this. Sorry! And the only reason for excluding 8d was that it's very similar to 8c.
T 9/13/22 Assignment 3
In the textbook, read the section "To the Student" that precedes Chapter 1. The advice in the three bullet points on p. xiii is very good and very important, and will apply to virtually any math class you take from here on out. Many professors, myself included, give similar advice in almost all their classes. Part of the book's advice (don't ignore the rest; I'm just not copying it all!), with some extra emphasis added by me, is: "Each new lesson usually introduces several important concepts or definitions that must be learned in order for subsequent sections to be understood. As a result, falling behind in your study by even a single day prevents you from understanding the material that follows. To be successful, you must learn the new material as it arises and not wait to study until you are less busy or an exam is imminent."
It's easy to dig yourself into a hole by thinking, "I've never had to work after every single class, or put in as many hours as following advice like this would take, and I've always done well. And the same goes for my friends. So I'll just continue to approach my math classes the way I've always done." By the time a student realizes that this plan isn't working, and asks his or her professor "What can I do to improve?", it's usually too late to make a big difference.
1.3/ 18, 19, 22 (with \(F_1=F_2={\bf R}\) )
Do the following, in the order listed.
- Read the first definition—the definition of \(S_1+S_2\)—near the bottom of p. 22. (The second definition is correct, but not complete; as you saw in the previous assignment, there is something else that's also called direct sum. Both types of direct sum are discussed and compared in a handout later in this assignment.)
- 1.3/ 23
- Read the handout "Direct Sums" posted on the Miscellaneous Handouts page. (This is a new, short handout; it didn't exist until Sept. 7.)
- Do exercises DS1, DS2, and DS3 in the Direct Sums handout.
1.3/ 24 (also figure out how #24 is related to exercise DS1), 25, 26, 28, 29 (all with \({\bf F}={\bf R}\)). You're not required to know what a field of characteristic two is; you may just take it on faith that \({\bf R}\) isn't such a field. (But in case your interest was piqued: every field has a characteristic, which is either 0 or a prime number. A field \({\bf F}\) of characteristic \(p>0\) has the property that \(px=0\) for all \(x\in {\bf F}\), where \(px=\underbrace{x+x+\dots +x}_{p\ \ \mbox{times}}\).)
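For instance (a side remark of mine, purely for the curious): the two-element field \({\bf Z}_2=\{0,1\}\), with addition and multiplication mod 2, has characteristic 2, since $$1+1=0$$ in \({\bf Z}_2\). By contrast, there is no prime \(p\) for which \(\underbrace{1+1+\dots+1}_{p\ \mbox{times}}=0\) in \({\bf R}\), so \({\bf R}\) has characteristic 0 and, in particular, is not a field of characteristic two.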
In #28, skew-symmetric is a synonym for antisymmetric.
1.3/ 30, just the remaining half. (In exercise DS3, you proved half of the "if and only if," so in #30, I'm asking you to deal only with the remaining half.) Read the definition of span on p. 30. (In class on Friday 9/9, we spent most of the period introducing and discussing the concept of "span"; what we just didn't get to was the word for this concept.)
1.4/ 1abc, 3abc, 4abc, 5cdegh, 10, 13, 14, 16. In 5cd, the vector space under consideration is \({\bf R}^3\); in 5e it's \(P_3({\bf R})\); in 5gh it's \(M_{2\times 2}({\bf R})\).
T 9/20/22 Assignment 4
Read from the middle of p. 26 (after the diamond-symbol that ends an example started on the previous page) through the end of Section 1.4. In the three procedural steps below "The procedure just illustrated" on p. 28, the 2nd and 3rd steps should have been stated more precisely. In the illustrated procedure, each step takes us from one system of equations to another system with the same number of equations; after doing step 2 or 3, we don't simply append the new equation to the old system. The intended meaning of Step 2 is, "multiplying any equation in the system by a nonzero constant, and replacing the old equation with the new one." The intended meaning of Step 3 is, "adding a constant multiple of any equation in the system, say equation A, to another equation in the system, say equation B, and replacing equation B with the new equation."
Of course, these intended meanings are clear if you read the examples in Section 1.4, but the authors should still have stated the intended meanings explicitly.
1.4/1def, 2, 6, 8, 9, 11, 15. In 1de:
- Interpret the indicated operation as replacing an equation by one obtained by the stated operation. In 1e, the equation being replaced is the one to which a multiple of another equation was added.
- The intended meaning of "it is permissible" to do the indicated operation is that that operation never changes the solution-set of a system of equations. No permission from your professor (or other human being, alien overlord, fire-breathing dragon, etc.) is involved.
In Section 1.4:
- Read Theorem 1.5 and its proof. I stated and proved the main part of this in class: that, in the notation of the theorem, \( {\rm span}(S)\) is a subspace of \(V\). I didn't include the statement that \({\rm span}(S)\) contains \(S\), because it's obvious, and really was put into Theorem 1.5 so that the second sentence would be, essentially, the converse of the first.
The last paragraph of the book's proof, together with the sentence just before that paragraph, form the only parts of the proof of Theorem 1.5 that I didn't give in class.
- Read the rest of Section 1.4 after the proof of Theorem 1.5.
1.5/ 1, 2(a)–2(f), 3–7, 10, 12, 13, 15, 16, 17, 20
T 9/27/22 Assignment 5
In Section 1.6, read from Corollary 1 (p. 47) up through and including "An Overview of Dimension and its Consequences" (p. 50). These pages include:
- Corollaries 1 and 2 (which follow quickly from results we've already proved);
- a definition of finite-dimensional vector space that updates (but, thanks to one of our recently proven results, is equivalent to) the one I gave in class several lectures ago; the definition of dimension of a finite-dimensional vector space; and
- many examples (you may treat Example 11 as optional reading).
We will cover the results and definitions on pp. 47–50 in class on Monday 9/26, but I don't want you to wait till then to start practicing using them. They are needed for most of the exercises for Section 1.6 that I'd postponed assigning up through Wed. 9/21. With \(V\) denoting a vector space, a brief summary of the content of the above reading (minus most of the examples) is:
- \(V\) is called finite-dimensional if \(V\) has a finite basis, and infinite-dimensional otherwise.
- If \(V\) is finite-dimensional, then all bases of \(V\) are finite and have the same cardinality. This cardinality is called the dimension of \(V\) and written \({\rm dim}(V).\)
- \( {\rm dim}({\bf R}^n)=n\) (for \(n>0\)).
- \( {\rm dim}(\{ {\bf 0}\} ) =0.\)
- \({\rm dim}(M_{m\times n}({\bf R}))=mn.\)
- \({\rm dim}(P_n({\bf R}) )=n+1.\)
- Suppose \(V\) is finite-dimensional and let \(n={\rm dim}(V).\) Then:
- No subset of \(V\) with more than \(n\) elements can be linearly independent.
- No subset of \(V\) with fewer than \(n\) elements can span \(V.\)
- If \(S\) is a subset of \(V\) with exactly \(n\) elements, then \(S\) is linearly independent if and only if \(S\) spans \(V\). Hence (under the assumption that \(S\) has exactly \(n\) elements) the following are equivalent:
- \(S\) is linearly independent.
- \(S\) spans \(V.\)
- \(S\) is a basis of \(V.\)
Thus, given a vector space \(V\) that we already know has dimension \(n\) (e.g. \({\bf R}^n\)), and a specific set \(S\) of exactly \(n\) vectors in \(V\), if we wish to check whether \(S\) is a basis of \(V\) it suffices to check either that \(S\) is linearly independent or that \(S\) spans \(V\); we do not have to check both of these properties of a basis.
- Every linearly independent subset \(S\subseteq V\) can be extended to a basis. (I.e. \(V\) has a basis that contains the set \(S\). The sense of "extend[ing]" here means "throwing in additional elements of \(V.\)")
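As a quick illustration of how these facts get used (my example; it is not one of the assigned exercises): let \(V={\bf R}^2\), so \(n=\dim(V)=2\), and let \(S=\{(1,1),(1,-1)\}\). Since \(S\) has exactly 2 elements, to show that \(S\) is a basis of \({\bf R}^2\) it suffices to check linear independence alone: if $$a(1,1)+b(1,-1)=(a+b,\,a-b)=(0,0),$$ then \(a+b=0\) and \(a-b=0\), forcing \(a=b=0\). Hence \(S\) is linearly independent, and therefore, with no further work, \(S\) also spans \({\bf R}^2\) and is a basis.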
1.6/ 1–8, 12, 13, 17, 21, 25 (see below), 29, 33, 34. For several of these problems (for example, #4 and #12), various facts in the summary above can (and should) be used to considerably shorten the amount of work needed.
   Note: For most short-answer homework exercises (the only exceptions might be some parts of the "true/false quizzes" like 1.6/ 1) you are expected to show your reasoning.
   Note: #25 can be reworded as: For arbitrary finite-dimensional vector spaces \(V\) and \(W\), express the dimension of the external direct sum \(V\oplus_e W\) in terms of \({\rm dim}(V)\) and \({\rm dim}(W).\) Both this wording and the book's have a deficiency: since we have defined "dimension" only for finite-dimensional vector spaces, we really shouldn't even refer to "the dimension of \(V\oplus_e W\)" (the dimension of \(Z\), in the book's wording) without first knowing that \(V\oplus_e W\) is finite-dimensional. So, for the second half of my alternate wording of #25, I really should have said "show that the external direct sum \(V\oplus_e W\) is finite-dimensional, and express its dimension in terms of \({\rm dim}(V)\) and \({\rm dim}(W).\)"
However, the latter wording, while objectively more sensible, has a drawback when teaching: it can lead students to expect that the work they do must, effectively, have a "part (a)" (showing finite-dimensionality of the direct sum) and a "part (b)" (giving a formula for the dimension), when in fact these parts end up being done simultaneously. To do #25, start with bases of \(V\) and \(W\), make an educated guess that a certain finite subset of \(V\oplus_e W\) (with easily computed cardinality) is a basis, and then show that your basis-candidate is indeed a basis. That shows, simultaneously, that \(V\oplus_e W\) has a finite basis (hence is finite-dimensional) and that the cardinality of your candidate-basis is the dimension of \(V\oplus_e W\).
On Tuesday 9/27, hand in only the following problems: 1.6/ 2cde, 3abc, 4, 5, 12, 29, 34a.
T 10/4/22 Assignment 6
In the handout "What is a proof?" (the mandatory reading of which was part of Assignment 0), re-read pitfall #2 on p. 4 (proving the converse of what you are supposed to prove). A version of the mistaken reasoning in the example there was used by most students when doing homework problem 1.4/6.
Most of my general handouts are written at a (deceptively) simple level, in order to make them as readable as possible. You will likely find that you already know some of what's said in these handouts. It is very unlikely that you already know all of what's there, and it is very likely that each of these handouts addresses one or more misconceptions that you (yes, you) have, or something you've never thought about. If you ever think, "This looks like stuff I already know from prior classes; this handout isn't meant for me," I can pretty much guarantee that this handout was meant for you. I see the proof of this every year when students make exactly the mistake(s) I warned about in a previously assigned handout. (Any class may have a tiny number of students—perhaps one or two—for whom my handouts will have nothing they don't already know. But students in this category are aware that they can't be sure of this until they've read the handouts. Top-notch students don't overestimate how much they know.)
The mistaken reasoning I saw used in exercise 1.4/6 is very common in lower-level courses, and is often not corrected there, but you've reached the stage at which it needs to be stamped out! Read the handout "One-to-one and onto: What you are really doing when you solve equations" (a link on the Miscellaneous Handouts page) up through Example 3.
The "reversibility" illustrated in Example 3 (and deciding whether a step is reversible) is much less clear when we're dealing with systems of more than one equation in more than one unknown, than when we're dealing with one equation in one unknown. Later in this course we will establish the reversibility of certain steps in solving systems of linear equations. You saw a preview of this in Section 1.4, but we have NOT proven (or even stated) the relevant theorems yet.In Section 1.6, read the subsection "The Dimension of Subspaces," which starts near the bottom of p. 50 and runs through the middle of p. 52 (the last line before the subsection "The Lagrange Interpolation Formula" begins). Because of the cancellation of the Wednesday Sept. 28 and Friday Sept. 30 classes, we will not have time to discuss this material in lecture, but it will still be fair game for the Oct. 5 exam. (The same goes for the exercises below.) 1.6/ 14–16, 18 (note: in #18, \({\sf W}\) is not finite-dimensional!), 22, 23, 30, 31, 32 In Section 2.1, read from the beginning of the section up through Example 9. Because of the cancellation of the Wednesday Sept. 28 and Friday Sept. 30 classes, we will have limited time to discuss this material in lecture, but it will still be fair game for the Oct. 5 exam. (The same goes for the exercises below.) 2.1/ 2–6 (only the "prove that \({\sf T}\) is a linear transformation" part, for now), 7–9.
I will put the remaining portions of 2–6 into a future assignment, but not before discussing Theorem 2.1 in class.
***
In view of the class cancellations on Sept. 28 and 30, other potential Hurricane-Ian-related complications, and the Oct. 5 exam, no homework will be collected for Assignment 6.
W 10/5/22 First midterm exam
"Fair game" material for this exam is everything we've covered (in class, homework, or the relevant pages of the book) up through the portion of Section 2.1 represented in Assignment 6. In Chapter 1, we did not cover Section 1.7 or the Lagrange Interpolation Formula subsection of Section 1.6. You should regard everything else in Chapter 1 as having been covered (except that the only field of scalars we've used, and that I'm holding you responsible for at this time, is \(\bf R\).)
For this exam, and any other, the amount of material you're responsible for is far more than could be tested in an hour (or even two hours). Part of my job is to get you to study all the material, whether or not I think it's going to end up on an exam, so I generally will not answer questions like "Might we have to do such-and-such on the exam?" or "Which topics should I focus on the most when I'm studying?"
If you've been responsibly doing all the assigned homework, and regularly going through your notes to fill in any gaps in what you understood in class, then studying for this exam should be a matter of reviewing, not crash-learning. (Ideally, this should be true of any exam you take; it will be true of all of mine.) Your review should have three components: review your class notes; review the relevant material in the textbook and in any handouts I've given; and review the homework.
When reviewing homework that's been graded and returned to you, make sure you understand any comments that Andres or I made on what you handed in, even on problems for which you received full credit. Students made numerous mistakes for which no homework points were deducted, but on which Andres and/or I commented; on an exam, those same mistakes could cost you points.
T 10/11/22 Assignment 7
In Section 2.1, read the definitions of rank and nullity (top of p. 70), and read at least the statements of Theorems 2.1, 2.2, 2.3, and 2.4. The exercises below use these definitions and several of these theorems, but do not require understanding the proofs.
The only reason I'm not saying to read the proofs of Theorems 2.1–2.4 for this assignment is time! We will definitely be covering all the proofs, and will definitely prove Theorem 2.1 on Monday 10/10. In order to avoid giving you a super-sized assignment due next week, I did not want to wait till I'd covered all of Theorems 2.1–2.4 before assigning any of the exercises that use the results of these theorems. You can practice using these results even before understanding the proofs.
2.1/ 2–6 (complete the parts that weren't in Assignment 6), 10, 12, 16, 17. Some hints that I wouldn't be giving if these exercises were due a few days later:
- #10: For the "Is \({\sf T}\) one-to-one?" part, you'll want to use Theorem 2.4, but there's more than one way of setting up to use it. You should be able to do this problem in your head (i.e. without need for pencil and paper) by using Theorem 2.2, then Theorem 2.3, then Theorem 2.4.
- #16: For the "not one-to-one" part, use Theorem 2.4.
- #17: Apply Theorems 2.3 and 2.4.
I misspoke at the end of Assignment 6: I'm literally putting the remaining portions of exercises 2–6 into Assignment 7 on 10/7/22, before I've covered Theorem 2.1 in class. But they won't be due until the day after we've covered Theorem 2.1. I've been avoiding assigning problems that are due the day after we've covered the relevant material in class, but in this instance, that's the lesser evil. You really should be able to do this week's assigned problems quickly and easily based on the assigned reading alone, especially with the hints I'm providing.
BTW: Students who don't take the assigned exercises that I'm not collecting as seriously as the ones that I am collecting have a death-wish. I will have no sympathy—NONE—for students who have trouble on an exam because (or even just partly because) they attempted fewer than 100% of the assigned exercises (or did less than 100% of assigned reading) before an assignment's due-date. And "not having enough time" because you waited too long to start working on exercises posted several days earlier, and/or you were waiting to see which subset of the exercises you'd have to hand in, is never a valid excuse.
One other item: For some reason, every year many students think they can use unauthorized, publicly available sources undetectably. If you are, perchance, making use of someone's solutions found online (other than the small number that come with the web-enhanced text), you are using a source that I have explicitly said is unauthorized (see the "Homework Rules" section of this page). This violates the Student Honor Code and is something I have failed students for.
But grade-penalties aside, there is always a learning penalty: a habit of relying on someone else's solutions stunts your learning. (Furthermore, unauthorized solutions that can be found online, including ones that are sold by outfits that assure you the solutions have been written by "experts", often have mistakes.) You are much better off using the "limited consultations with classmates" that my rules on this page allow. You can learn a lot by bouncing ideas off each other.
On Tuesday 10/11, hand in only the following problems: 2.1/ 3, 6, 16. In #3 and #6, you need not write out a proof that \({\sf T}\) is linear, but do all the other parts, listed below.
Also, in #6: when writing out a basis for \( {\sf N(T)}\), handle the cases \(n=1\) and \(n\geq 2\) separately; the notation you'll be using for \(n\geq 2\) probably won't accommodate the \(n=1\) case.
- Find a basis of \( {\sf N(T)}\) and a basis of \( {\sf R(T)}\).
- "Compute" the nullity and rank of \({\sf T}\). (I've put "compute" in quotation marks since, once you've found the bases above, this "computation" is just a matter of counting.)
- Verify the Dimension Theorem. (You're not verifying the truth of the Dimension Theorem; it's a theorem. What you're being asked to do is to check that your answers for the nullity and rank satisfy the equation in Theorem 2.3. In other words, you're doing a consistency check on those answers. A small illustration of this kind of check appears just after this list.)
- Determine whether \({\sf T}\) is one-to-one.
- Determine whether \({\sf T}\) is onto.
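To illustrate the kind of consistency check meant above (an example of mine, not one of the hand-in problems): if \({\sf T}:{\bf R}^3\to{\bf R}^2\) is defined by \({\sf T}(x,y,z)=(x,y)\), then \({\sf N(T)}=\{(0,0,z):z\in{\bf R}\}\) has basis \(\{(0,0,1)\}\), so the nullity is 1, while \({\sf R(T)}={\bf R}^2\) has basis \(\{(1,0),(0,1)\}\), so the rank is 2. The check: $$\mbox{nullity}({\sf T})+\mbox{rank}({\sf T})=1+2=3=\dim({\bf R}^3),$$ exactly as the equation in Theorem 2.3 requires. (Here \({\sf T}\) is onto but not one-to-one.)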
T 10/18/22 Assignment 8
In Section 2.1, read all examples that you haven't read yet. For those that start with wording like "Let \({\sf T}: {\rm (given\ vector\ space)}\to {\rm (given\ vector\ space)}\) be the linear transformation defined by ...", the first thing you should do (before proceeding to the sentence after the one in which \({\sf T}\) is defined) is to check that \({\sf T}\) is, in fact, linear. (Example 11 is one of these.)
Also, re-examine the examples you have already read, and apply the above instructions to these as well.
Some students will be able to do these linearity-checks mentally, almost instantaneously or in a matter of seconds. Others will have to write out the criteria for linearity and explicitly do the calculations needed to check it. After doing enough linearity-checks—how many varies from person to person—students in the latter category will gradually move into the former category, developing a sense for what types of formulas lead to linear maps.
In Section 2.1, observe that Example 1 is the only example in which the authors go through the details of showing that the function under consideration is linear. In the remaining examples, the authors assume that all students can, and therefore will, check the asserted linearity on their own.
In math textbooks at this level and above, it's standard to leave instructions of this sort implicit. The authors assume that you're motivated by a deep desire to understand; that you're someone who always wants to know why things are true. Therefore it's assumed that, absent instructions to the contrary, you'll never just take the author's word for something that you have the ability to check; that your mindset will NOT be (for example), "I figured that if the book said object X has property Y at the beginning of an example, we could just assume object X has property Y."
2.1/ 1, 11, 14ac (see below), 15, 18 (see below), 20 (see below), 21, 22 (just the first part), 23, 25 (see below), 27, 28 (see below), 36.
- In 14a, the meaning of "\({\sf T}\) carries linearly independent subsets of \( {\sf V} \) onto linearly independent subsets of \( {\sf W} \)" is: if \(A\subseteq {\sf V}\) is linearly independent, then so is \({\sf T}(A)\). For the notation "\({\sf T}(A)\)", see the note about #20 below.
- You may find #18 more challenging than the others.
- Regarding the meaning of \({\sf T(V_1)}\) in #20: Given any function \(f:X\to Y\) and subset \(A\subseteq X\), the notation "\(f(A)\)" means the set \( \{f(x): x\in A\} \). The set \(f(A)\) is called the image of \(A\) under \(f\). (A tiny example of this notation appears just after this list.)
- Regarding #25: In the definition at the bottom of p. 76, the terminology I use most often for the function \({\sf T}\) is the projection [or projection map] from \({\sf V}\) onto \({\sf W}_1\). There's nothing wrong with using "on" instead of "onto", but this map \({\sf T}\) is onto. I'm not in the habit of including the "along \({\sf W}_2\)" when I refer to this projection map, but there is actually good reason to do it: it reminds you that the projection map depends on both \({\sf W}_1\) and \({\sf W}_2\), which is what exercise 25 is illustrating.
- Regarding #28(b): If you've done the exercises in order, then you've already seen such an example.
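Regarding the notation \(f(A)\) mentioned above (a quick example of mine): if \(f:{\bf R}\to{\bf R}\) is given by \(f(x)=x^2\) and \(A=\{-1,0,2\}\), then $$f(A)=\{f(-1),\,f(0),\,f(2)\}=\{0,1,4\}.$$ Note that \(f(A)\) is a set of outputs, not a single value.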
OPTIONAL: 2.1/ 38. (If you're one of the students who has asked me questions about vector spaces over fields other than \({\bf R}\), I think you'll enjoy doing this problem.)
On Tuesday 10/18, hand in only the following problems: 2.1/ 11, 14c (in your write-up, don't assume the result of 14a), 15, 18, 20, 25, 27ab
T 10/25/22 Assignment 9
Read anything in Section 2.2 that you haven't read yet.
2.2/ 1–4, 5ac, 12, 17 (modified as below)
  For #17: Assume that \({\sf V}\) and \({\sf W}\) have finite, positive dimension (see note below). Also, extend the second sentence so that it ends with "... such that \([{\sf T}]_\beta^\gamma\) is a diagonal matrix, each of whose diagonal entries is either 1 or 0." (This should actually make the problem easier!)
Additionally, show that if \({\sf T}\) is one-to-one, then if the bases \(\beta,\gamma\) are chosen as above, none of the diagonal entries of \([{\sf T}]_\beta^\gamma\) is 0. (Hence they are all 1, and \([{\sf T}]_\beta^\gamma\) is the \(n\times n\) identity matrix \(I_n\) defined on p. 82, where \(n=\dim(V)=\dim(W)\).)
Note: Using a phrase like "for positive [something]" does not imply that that thing might sometimes be negative! "Positive dimension" means "nonzero dimension"; there's no such thing as "negative dimension". For quantities Q that can be greater than or equal to zero, when we don't want to talk about the case Q=0 we frequently say something like "for positive Q", rather than "for nonzero Q".
In the "Convex Sets in Vector Spaces" handout linked to the Miscellaneous Handout page, read from the beginning up through Exercise 9 on the top of p. 3, and do Exercises 1–9. (Most of these are very short, and some may be exercises you've done before.) For students who know some abstract algebra: a vector space is, among other things, an abelian group (with "+" being the group operation, and the zero vector being the group identity element). Subspaces of a vector space are (special) subgroups. Translates of a subspace \(H\) are what we call \(H\)-cosets in group theory. (Since the group is abelian, we need not say "left coset" or "right coset"; they're the same thing.)
In Section 2.3, read up through Example 2. (We already covered a lot of this in class on Friday 10/21, so the reading should go pretty quickly.) 2.3/ 2a, 8.
On Monday 10/24 I'll do a small number of examples of matrix-multiplication. You may find it easier to do problem 2 after Monday's class, which is why I'm limiting the matrix-multiplication exercises in this assignment to 2a. (The next assignment will include 2b.)
In 2a, make sure you compute \( (AB)D\) *AND* \(A(BD)\) as the parentheses indicate. On Friday I mentioned that matrix-multiplication is associative—a fact that we'll prove—but DO NOT ASSUME ASSOCIATIVITY IN THIS EXERCISE (even if we've proven it by the end of Monday's class). The whole purpose of exercise 2 is for you to practice doing matrix-multiplication, not to practice using properties of matrix-multiplication. If your computations are all correct, you'll wind up with the same answer for \(A(BD)\) as for \((AB)D\). But, in this exercise, use this foreknowledge only as a consistency check on your computations, not as a way to avoid doing computations.
On Tuesday 10/25, hand in only the following problems:
- 2.2/ 2be, 4, 5c, 17 (modified as above)
- 2.3/ 8 (Hand in only the second part [stating and proving a more general result] unless you were only able to do the first part [proving Theorem 2.10]. In the latter case, hand in your proof of Theorem 2.10. In the first part, the statement of the more general result you're intended to discover will involve three vector spaces.)
- "Convex Sets" handout Exercises 7, 8
T 11/1/22 (corrected from typo "11/2/22") Assignment 10
The Section 2.3 part of this assignment is being posted Wednesday night. If you wait past Thursday to start on it, you have waited too long.
Read the remainder of Section 2.3, not including the "Applications" subsection. (Of course, you are welcome to read that subsection as well! We're just not covering it, and you won't be responsible for it.) Some of the exercises below involve material from Section 2.3 that we haven't had time to go over in class, so make sure you do the reading before you start the exercises. 2.3/1, 2b, 4ac, 5, 6, 11–14, 16a, 17–19
In 1e, it's implicitly assumed that \(W=V\); otherwise the transformation \({\sf T}^2\) isn't defined. Similarly, in 1f and 1h, \(A\) is implicitly assumed to be a square matrix; otherwise \(A^2\) isn't defined. In 1(i), the matrices \(A\) and \(B\) are implicitly assumed to be of the same size (the same "\(m\times n\)"); otherwise \(A+B\) isn't defined.
In #11, \({\sf T}_0\) is the book's notation for the zero linear transformation (also called "zero map") from any vector space \(V\) to any vector space \(W\). [Conveniently for anyone like me who'd forgotten where the book introduces this notation, a reminder appears a few lines earlier, in Exercise 9. You'll also find it on the last page of the book (at least in the hardcover 5th edition) under "List of Notation (continued)". The book's original definition of the notation seems to be buried in Section 2.1, Example 8, but you may also remember seeing it used in the first paragraph of p. 82. In class, I used different notation (at least once) for the zero map from \(V\) to \(W\); I used either 0 or \(0_V^W\).]
In the "Convex Sets ..." handout, read from the top of p. 2 through the first three examples on p. 4. Do Exercises 9, 10, and 11. (The handout originally had two exercises numbered "11", one on p. 4 and one at the top of p. 5. I have fixed the numbering; the exercises on p. 5 now start with #12.) In Section 2.4, read up through Example 4. This reading includes the definition of the word "isomorphism" (which I stated, without writing, in the last few seconds of lecture on Friday 10/28) and the related adjective "isomorphic". On Friday, even though I didn't get to write down these definition, we covered all the material you'll need to answer the questions below from Section 2.4. (Feel free to read beyond Example 4! The next assignment will include reading most of the rest of Section 2.4. I will not expect you to read Example 5, however, since we did not cover the Lagrange interpolation formula in Section 1.6.).
2.4/ 13, 17
On Tuesday 11/1, hand in only the following problems:
- 2.3/ 11, 13, 16a, 18
- 2.4/ 17
- "Convex Sets" handout Exercise 9. In part (b), formal proof of your answers is not required, but you have to give some non-tautological reasons for your answers.
T 11/8/22 Assignment 11 Reminder for students who've taken MAS 3114 (Computational Linear Algebra) or any other prior course in linear algebra: the only linear-algebra tools and facts you're ever allowed to use in this course, in any proofs or arguments, are ones that we've covered in this course, to date. For example, you may not use anything about determinants until we cover determinants, and you may not use row-reduction to show anything until we cover row-reduction.
Reminder for everyone. The long list of format-related homework rules stems entirely from one simple principle:
Don't make grading your homework more burdensome than it intrinsically needs to be.
"As burdensome as it intrinsically needs to be" means this: While you are learning mathematics, you often will not yet be able to express your thoughts perfectly, so even if your work is as neat as can be, a grader will often have to struggle to figure out what you meant, and/or whether the argument you seem to have had in mind is correct. That source of reading-difficulty is not really under your control at any moment in time, assuming you've expressed yourself as well as you are able to at that time. (Your ability to express precisely what you mean should improve over time—provided that you make it easy enough for a grader to comment on your work, and you carefully read every comment and attempt to learn from it. But as you move into higher mathematics, and have to write more complicated proofs than anything you'll see in linear algebra, it will take effort on the grader's part to understand even a well written argument.)
But the neatness and format of what you hand in are under your control. It is VERY disrespectful to turn in homework that's hard to read for any reason under your control, including, but not limited to:
- writing on top of, or near, something you've erased;
- writing on top of something you haven't erased;
- deviating from standard single-column layout (each line of writing horizontal; the lines proceeding steadily from top to bottom; a later or separate part of an argument or answer, or a later problem or problem-part, NEVER being put to the right or left of something you've already written, forcing the reader to zig-zag his/her way across your page);
- writing all the way down to the very bottom of a page;
- squeezing words into any margin.
It's also disrespectful not to leave enough room for a grader to make comments easily.
Students certainly don't mean to be disrespectful when they hand in hard-to-read work; they're simply not aware of how much disrespect they're showing for their grader's time. When your work is unnecessarily difficult to read or comment on, you cause the grader—whether it's a professor or a TA—to spend more time deciphering your work, or figuring out how to comment on it, than he or she should have to. This adds many hours per week to the grader's work.
Years ago, my homework rules said little more than "Your homework must be neat" and the large-font principle above. But I found that the principle wasn't enough; if an explicit "don't do this particular thing" wasn't listed, some students would do that thing. This has led to my listing an ever-increasing number of specific things that students should or shouldn't do. I wish that this list weren't necessary, but experience has proven that it is.
So far, in this class, there has been quite a bit of disregard for the rules. This is not unusual at the start of a semester, but when I'm doing most of the grading myself, I try to nip this problem in the bud after the first assignment. We're now past the midway point of the semester, and the comments I've made on the homeworks of individual students about this problem have been ignored.
In the 34+ years I've taught at UF, I've experimented with many ways of trying to get students to stop handing in unacceptable work. Unfortunately, I have found only one way that works: imposing penalties for not following the rules.
I'll be doing that from now on, starting with the assignments you've already turned in but that have not been returned to you yet. Penalty-points will be deducted from your homework-point total at the end of the semester. However, if you get your act together and follow ALL the rules (starting with the next time you hand in an assignment), I will consider reinstating some or all of the points you lost through these deductions.
-------------------------------------------------------------
Read anything in Section 2.4 you haven't read yet, excluding the reference to the Lagrange interpolation in Example 5.
    Regarding Example 5: We already saw in class that \(\dim(P_n({\bf R}))=n+1\) and that \(\dim(M_{m\times n}({\bf R}))=mn\). Hence, using Theorem 2.19, it follows immediately that \(P_3({\bf R})\cong M_{2\times 2}({\bf R})\), since both of these vector spaces have dimension 4. But the book presented Example 5 before Theorem 2.19, so this easy way of showing \(P_3({\bf R})\cong M_{2\times 2}({\bf R})\) wasn't available yet. (However, even without Theorem 2.19 or the Lagrange Interpolation formula, you should easily be able to write down an explicit isomorphism from \(P_3({\bf R})\) to \(M_{2\times 2}({\bf R})\), thereby showing another way that these spaces are isomorphic.)
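For instance (one natural choice of mine; the book's Example 5 uses a different, less obvious map): define \({\sf T}:P_3({\bf R})\to M_{2\times 2}({\bf R})\) by $$ {\sf T}(a+bx+cx^2+dx^3)=\begin{pmatrix} a & b\\ c & d\end{pmatrix}. $$ It is straightforward to check that \({\sf T}\) is linear, one-to-one, and onto, hence an isomorphism.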
I think the main purpose of Example 5 was to illustrate a non-obvious isomorphism between these spaces. If the goal is to show merely that the spaces are isomorphic, you'd have to be crazy to do it Example 5's way.
2.4/ 1–9, 12, 13–15, 17, 20. See notes below before starting these.
- In #2, keep Theorem 2.19 in mind to save yourself a lot of work.
- #8: On Monday 10/31, we did part of this in class.
- #12: As of Monday 10/31, we've already done most of the ingredients of this one in class; most of what's left is just to assemble the ingredients.
Read Section 2.5. Some things to be aware of when you're reading:
- In the paragraph after the proof of Theorem 2.22, the book's notation "\(x_j\)" does not have the same meaning as what I used it for in class on Friday Nov. 4. The book's \(x_j\) is a vector (an element of \(V\)—specifically, the vector that I denoted \(v_j\) in Friday's class, the \(j^{\rm th}\) vector in the ordered basis \(\beta\)). In that lecture, I used the notation \(x_j\) for a scalar, the \(j^{\rm th}\) coordinate of a general vector \(v\in V\) with respect to \(\beta\). In this paragraph of the book, these scalars are not mentioned (in any notation).
However, my matrix \(Q\) had exactly the same meaning as in the book; both \(Q\)'s were/are the matrix that "changes \(\beta'\)-coordinates into \(\beta\)-coordinates."
The last sentence of this paragraph says something useful that I neglected to mention in Friday's class. In the notation I used in class, this sentence would say:
Observe that if \(\beta = \{v_1, \dots,v_n\}\) and \(\beta' = \{v_1', \dots,v_n'\}\), then $$v_j'=\sum_{i=1}^n Q_{ij}\,v_i\ $$ for \(j=1,2, \dots, n;\) that is, the \(j^{\rm th}\) column of \(Q\) is \([v_j']_\beta\) (the coordinate vector of \(v_j'\) with respect to the basis \(\beta\)).
(The extra boldface and underlining aren't in the book, and neither is my redundant, parenthetic info at the end. I've added these just for emphasis.) This fact is especially useful when \(V={\bf R}^n\) and \(\beta\) is the standard basis of \({\bf R}^n\). In this case, the above fact implies that the matrix \(Q\) that "changes \(\beta'\)-coordinates into \(\beta\)-coordinates" is simply the matrix whose first column is \(v_1'\), whose second column is \(v_2'\), etc. This is the content of the Corollary on p. 115. (A small numerical example of this appears below, just after the 2.5 exercise list.)
- Given a linear transformation \(T:V\to V\), in order to compute \([T]_{\beta'}\) from \([T]_\beta\) and \(Q\) (in examples, with actual numbers), we need to know how to compute \(Q^{-1}\) from \(Q\). Methods for computing matrix inverses aren't discussed until Section 3.2. For this reason, in some of the Section 2.5 exercises (e.g. 2.5/ 4, 5), the book simply gives you the relevant matrix inverse.
- You should notice that the book's proof of Theorem 2.23 is much shorter than the proof I gave in class. The book's proof is an example of what I'd call an elegant proof. ("Elegant" is not a technical term, or one that has an absolute meaning! What constitutes an elegant proof is highly subjective.) The book's argument makes clever use of the fact that \({\sf T=IT=TI}\)—a fact that, while obviously true, might not jump into your head as something that might help you prove this theorem.
By contrast, the approach I used in class is what I'd call a "brute force" or "just do it!" approach. No cleverness at all was needed: having previously defined the matrix that "changes \(\beta'\) coordinates into \(\beta\) coordinates" (by expressing each of the elements of \(\beta'\) as a linear combination of elements of \(\beta\), and organizing all coefficients needed into a matrix \(Q\)), I simply plugged that into the defining equation for \([\sf T]_\beta\) and \([\sf T]_{\beta'}\), and used the fact that if \(\{v_1,\dots, v_n\}\) is a basis, and \(\sum_{i=1}^n a_iv_i = \sum_{i=1}^n b_iv_i\) (where \(a_i,b_i\in {\bf R}, \ 1\leq i\leq n\)), then \(a_i=b_i\) for each \(i\).
If you think of or remember an elegant proof of something, that's great—but it's better to be confident that, if you don't see a clever proof-shortening "trick" right away (or remember one perfectly), you can still succeed with a "brute force" approach.
2.5/ 1, 2bd, 4, 8, 11. (No more exercises will be added to this assignment.) Exercise 8 can be done either by brute-force or "elegantly".
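Here is the small numerical example promised above (my example, not the book's): let \(V={\bf R}^2\), let \(\beta\) be the standard basis, and let \(\beta'=\{v_1',v_2'\}\), where \(v_1'=(1,1)\) and \(v_2'=(1,-1)\). Then $$Q=\begin{pmatrix} 1 & 1\\ 1 & -1 \end{pmatrix},$$ the matrix whose columns are \(v_1'\) and \(v_2'\). As a check that \(Q\) "changes \(\beta'\)-coordinates into \(\beta\)-coordinates": the vector whose \(\beta'\)-coordinate vector is \(\begin{pmatrix}1\\1\end{pmatrix}\) is \(v_1'+v_2'=(2,0)\), and indeed \(Q\begin{pmatrix}1\\1\end{pmatrix}=\begin{pmatrix}2\\0\end{pmatrix}\).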
***
In view of the Nov. 9 exam, no homework will be collected for Assignment 11.
W 11/9/22 (corrected from typo "11/10/22") Second midterm exam
"Fair game" material for this exam is everything we've covered (in class, homework [including anything I add to this assignment by Sunday Nov. 6], handouts, or the relevant pages of the book) up through Section 2.5.
The general comments in the entry on this page for the first midterm still apply. Review them.
T 11/15/22 Assignment 12
Because class time is so limited between now and the end of the semester—only nine MWF lectures remain!—there will be several items that you'll be responsible for that will be covered only in the book (and homework), not in class.
In this assignment and the remaining ones, some readings will have due-dates earlier than the first column lists for the full assignment. It's going to be very important that you do all assigned readings by the deadlines listed for them within the assignments. There may be homework due on a Tuesday for which your only preparation will be the reading that I'm telling you to complete before the Monday class the day before. (You may thank the profusion of UF Friday holidays this semester for that.)
I recommend that you try to complete the readings even earlier than the within-assignment deadlines. The sooner you complete these readings, the more time you'll have for working on the related exercises.
Read Section 3.1 before the Monday Nov. 14 class.
3.1/ 1–11.
Read Section 3.2 through Corollary 3 before the Monday Nov. 14 class. By the same deadline, also read what I've written below. (I actually recommend doing the latter first, keeping the book handy to look at the items there that I refer to. This will give you a summary of some of the most important concepts and results in Section 3.2 without the risk of getting lost in the weeds.)
There's some terminology that I've always found useful but that's absent from our textbook (probably because some of it ends up being redundant). For an \(m\times n\) matrix \(A\):
- The column space of \(A\) is defined to be the subspace of \({\bf R}^m\) spanned by the columns of \(A\). (For this, elements of \({\bf R}^m\) are treated as column vectors.)
- The row space of \(A\) is defined to be the subspace of \({\bf R}^n\) spanned by the rows of \(A\). (For this, elements of \({\bf R}^n\) are treated as row vectors.) Equivalently, the row space of \(A\) is the column space of \(A^t\).
- The column rank of \(A\) (temporary notation: \(\mbox{column-rank}(A)\)) is defined to be the dimension of the column space of \(A\).
- The row rank of \(A\) (temporary notation: \(\mbox{row-rank}(A)\)) is defined to be the dimension of the row space of \(A\). Equivalently, \(\mbox{row-rank}(A)\) is defined to be \(\mbox{column-rank}(A^t)\).
The first definition in Section 3.2 defines the rank (without the modifier "column" or "row") of a matrix \(A\in M_{m\times n}({\bf R})\) to be the rank of the linear map \({\sf L}_A: {\bf R}^n\to {\bf R}^m\). Keeping in mind this definition and the ones above,
- Theorem 3.5 can be restated more simply as: \({\rm rank}(A)=\mbox{column-rank}(A)\).
- Corollary 2c (p. 158) can be restated more simply as: \(\mbox{row-rank}(A) = \mbox{column-rank}(A)\).
- Combining the above restatements of Theorem 3.5 and Corollary 2c, we obtain this restatement of Corollary 2b: \(\mbox{rank}(A) = \mbox{row-rank}(A)\).
- Corollary 2a, combined with our second definition of row-rank above ( \(\mbox{row-rank}(A)=\mbox{column-rank}(A^t) \) ), is then just another way of saying that \(\mbox{row-rank}(A) = \mbox{column-rank}(A)\).
In any case, the upshot of Theorem 3.5 and Corollary 2 is that
\( {\rm rank}(A)=\mbox{column-rank}(A)=\mbox{row-rank}(A) \ \ \ \ (*).\) Since the rank, column rank, and row rank of a matrix are all equal, it suffices to have just one term for them all, rank. But since all three notions are conceptually distinct from each other, I prefer to define all three and then show they're equal; I think that this makes the content of Theorem 3.5 and Corollary 2 easier to remember and understand. Friedberg, Insel, and Spence prefer to define only \(\mbox{rank}(A)\), and show it's equal to \(\mbox{column-rank}(A)\) and \(\mbox{row-rank}(A)\) without introducing extra terminology that will become redundant once (*) is proved.
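Optional sanity check (my own illustration, not from the book or the homework): the equalities in (*) can be compared numerically. numpy's matrix_rank computes \({\rm rank}(A)\), and since \(\mbox{row-rank}(A)=\mbox{column-rank}(A^t)\), comparing the rank of \(A\) with the rank of \(A^t\) checks the row-rank/column-rank equality.

import numpy as np

# rank-2 example: the second row is twice the first, so the rows
# (and hence the columns) span only a 2-dimensional subspace
A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])

column_rank = np.linalg.matrix_rank(A)    # dim of the column space = rank(L_A)
row_rank = np.linalg.matrix_rank(A.T)     # row-rank(A) = column-rank(A^t)
print(column_rank, row_rank)              # 2 2, as (*) predicts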
3.2/ 1acfh, 3, 8, 11. (You should be able to do these just from the facts above about rank, column rank, and row rank. For purposes of this assignment, you may assume these facts, even though we may not prove them until after Nov. 15; I don't want to wait till then to give you your first practice using them.) In the "Convex Sets ..." handout, read Definition 4 and the examples below it. Do Exercises 12–15. On Tuesday 11/15, hand in only the following problems:
- 3.2/ 3, 8
- "Convex Sets" handout Exercises 12, 14a, 15.
T 11/22/22 (This day is not a UF holiday! OF COURSE the discussion section will meet as usual, and your attendance will be expected. You've had no shortage of holidays this semester: Labor Day, Homecoming, Veterans' Day, three days off for Hurricane Ian, and the upcoming three official days off for Thanksgiving.
And OF COURSE the class will meet as usual on Monday 11/21. From before Day One of the semester, there has been a warning in the syllabus about an unexcused absence on that day. Students have had plenty of time to plan their travels accordingly.
And, even without the warning I give in the syllabus, it is NEVER appropriate to plan to miss class for reasons not similar to those in the "Acceptable reasons ..." paragraph on the UF attendance policies page. NOBODY IS ENTITLED TO A NINE-DAY BREAK FOR THANKSGIVING!!)
Assignment 13 Read the remainder of Section 3.2 before the Wed. Nov. 16 class.
3.2/ 1–5, 6(a)–(e), 12, 14, 15. In #6, one way to do each part is to introduce bases \(\beta, \gamma\) for the domain and codomain, and compute the matrix \([T]_\beta^\gamma\). Remember that the linear map \(T\) is invertible if and only if the matrix \([T]_\beta^\gamma\) is invertible. (This holds no matter what bases are chosen, but in this problem, there's no reason to bother with any bases other than the standard ones for \(P_2({\bf R})\) and \({\bf R}^3\).) One part of #6 can actually be done another way very quickly, if you happen to notice a particular feature of this problem-part, but this feature might not jump out at you until you start to compute the relevant matrix.
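If you'd like to see the bookkeeping written out, here is a sketch in sympy. The map \(T(p)=(p(0),p(1),p(2))\) is my own hypothetical example, not one of the parts of #6; the point is only the mechanics of building \([T]_\beta^\gamma\) column by column and then testing invertibility.

import sympy as sp

t = sp.symbols('t')
beta = [sp.Integer(1), t, t**2]   # standard basis of P_2(R)
points = [0, 1, 2]                # hypothetical T(p) = (p(0), p(1), p(2))

# Column j of [T]_beta^gamma is the coordinate vector (w.r.t. the standard
# basis of R^3) of T applied to the j-th basis polynomial.
M = sp.Matrix([[b.subs(t, x) for b in beta] for x in points])

print(M)        # Matrix([[1, 0, 0], [1, 1, 1], [1, 2, 4]])
print(M.det())  # 2 (nonzero), so this particular T is invertible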
Read Section 3.3, minus the application on pp. 175–178, before the Mon. Nov. 21 class. On Tuesday 11/22, hand in only the following problems:
- 3.2/ 2bdf, 4b, 5bdf, 6bd, 15.
In the computational problems—for which you're required to do all the computations by hand so that you get the hang of the relevant method(s)—you're required to state exactly what your steps were, using notation as explicit as what I used in the Friday 11/18 lecture. (If you missed that class, get notes from a classmate right away. I will not repeat any portion of Friday's class to anyone who missed it.) Your grader should not have to guess (or attempt to figure out) what operations you used. I am instructing him not to give you full credit for any problem in which he has to do any such guessing.
T 11/29/22 Assignment 14 Read Section 3.4 before the Mon. Nov. 28 class. Note added 12/2/2022: I won't hold you responsible for part (d) of Theorem 3.16, or for proving the Corollary that comes right after the theorem. For the exercises below, make sure you have read Section 3.n before you start to do the exercises for Section 3.n. (This is a reminder of the "order of operations" you should be, and should have been, applying to all exercises assigned from this book, modulo adjusting "make sure you have read Section 3.n" to "make sure you have read the relevant portions of Section m.n." Generally I tell you what the "relevant portions" for a given assignment are when I'm assigning exercises from a section we haven't finished our classwork on yet.)
Another reminder: part of Assignment 3 was to read the "To the Student" section that precedes Chapter 1. As I said at the time, "The advice in the three bullet points on p. xiii is very good and very important, and will apply to virtually any math class you take from here on out. Many professors, myself included, give similar advice in almost all their classes."
A great many students think that a smart thing to do, as a way of saving time, is not to bother with any reading until they're looking at an exercise, and then just to flip back through a chapter/section looking for something that appears similar. Many also think it is smart not to bother doing several similar-looking, seemingly repetitive exercises, stopping after they get one or two right. With skills, repetition builds retention. There is no substitute.
"Time-saving" approaches like "don't bother looking at the chapter/section except as a source of examples when you're doing exercises," and "don't bother doing a lot of exercises of the same type once you've gotten one or two right, are absolutely not "smart"; they are simply rationalizations for not doing part of your homework. Successes you may have had with these approaches in the past are a reflection of poorly designed, or poorly graded, exams. I cannot truthfully tell you that these approaches won't work for you again with some future instructor(s). But I can truthfully tell you that they are poor ways of learning, and that you should not expect them to work for you in my class. Almost every student thinks, that he or she is an exception to such rules of thumb. Almost every student who thinks this is wrong.
3.3/ 1–5, 7–10
3.4/ 1, 2, 7, 9, 10–13. No homework will be collected on Tues. Nov. 29. Anything that I decide to collect for Assignment 14 will be merged into the hand-in problems for Assignment 15, to be collected Tues. Dec. 6.
--------------------------------
Note about Theorem 3.16. Letting \(A_j\) denote the \(j^{\rm th}\) column of \(A\), part (c) of Theorem 3.16 says that the \(r\)-element set of columns \(A_{j_1}, A_{j_2}, \dots, A_{j_r}\) is linearly independent. The theorem does not assert the false statement that this set is the only linearly independent \(r\)-element subset of the columns of \(A\). Some trivial examples showing that this assertion would be false are the rank-1 matrix \(A=\left( \begin{array}{cc} 1 & 1\\ 2& 2\end{array}\right) \) and the rank-2 matrix \(B=\left( \begin{array}{ccc} 1 & 3 & 5\\ 2& 4 & 6\end{array}\right) \). Clearly, each column of \(A\) forms a 1-element linearly independent set (in \( {\bf R}^2\)), and it is not hard to see that every 2-element subset of the columns of \(B\) is linearly independent. Furthermore, the reduced row-echelon form (RREF) of \(A\) is \(\left( \begin{array}{cc} 1 & 1\\ 0& 0\end{array}\right) \), each of whose columns is the vector \(e_1\in {\bf R}^2\).
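For the skeptical, the claim about \(B\) takes only a few lines to verify numerically (my own optional check, not homework):

import itertools
import numpy as np

B = np.array([[1., 3., 5.],
              [2., 4., 6.]])
for i, j in itertools.combinations(range(3), 2):
    # rank 2 means the chosen pair of columns is linearly independent
    assert np.linalg.matrix_rank(B[:, [i, j]]) == 2
print("every 2-element subset of the columns of B is linearly independent")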
What is special about the column-positions \( j_1 < j_2< \dots < j_r\) in the RREF of a rank-\(r\) matrix \(A\neq {\it 0}\) is this:
- \(j_1\) is the smallest \(j\) for which \(A_j \neq {\bf 0}\); equivalently, for which the 1-element set \(\{A_j\}\) is linearly independent.
- Assuming \(r>1\): \(j_2\) is the smallest \(j > j_1\) for which the 2-element set \( \{ A_{j_1}, A_j \} \) is linearly independent.
- Assuming \(r>2\): \(j_3\) is the smallest \(j > j_2\) for which the 3-element set \( \{ A_{j_1}, A_{j_2}, A_j \}\) is linearly independent.
...
- \(j_r\) is the smallest \(j > j_{r-1}\) for which \( \{ A_{j_1}, A_{j_2},\dots, A_{j_{r-1}}, A_j \}\) is linearly independent.
These recursively defined conditions uniquely determine the column-positions in \(B:=\) RREF(\(A\)) for which \(B_{j_i}=e_i, \ \ 1\leq i\leq r\). This uniqueness really should have been stated in Theorem 3.16; otherwise it becomes an extra lemma that's needed to prove the corollary stated after the theorem (uniqueness of RREF(\(A\))). Strictly speaking, the wording "the reduced row-echelon form of \(A\)" in Theorem 3.16 was premature, since the word "the" assumes uniqueness that's proven as a corollary of the theorem.
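The recursion above is easy to mechanize. The following sketch (my own illustration, not part of any assignment) finds the positions \(j_1<\dots<j_r\) greedily, keeping a column exactly when it enlarges the span of the columns already kept. (Indices are 0-based, as is usual in code.)

import numpy as np

def pivot_positions(A, tol=1e-10):
    """Column positions j_1 < ... < j_r described above (0-based)."""
    kept, positions = [], []
    for j in range(A.shape[1]):
        candidate = kept + [A[:, j]]
        # keep column j iff it is independent of the columns kept so far,
        # i.e. iff adding it raises the rank to the size of the candidate set
        if np.linalg.matrix_rank(np.column_stack(candidate), tol=tol) == len(candidate):
            kept, positions = candidate, positions + [j]
    return positions

A = np.array([[1., 1., 3.],
              [2., 2., 4.]])
print(pivot_positions(A))   # [0, 2]: column 1 repeats column 0, so it is skipped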
T 12/6/22 Assignment 15 Read the green note at the end of Assignment 14 above. I added this note too late to expect you to read it as part of Assignment 14. I inserted it there anyway, rather than here, because of its topic. Read Sections 4.1 and 4.2 before the Wed. Nov. 30 class. 4.1/ 1, 6, 8, 9. 4.2/ 1–3, 5, 8, 11, 23–25, 27, 29. In general in Chapter 4 (and maybe in other chapters), some parts of the true/false set of exercises 4.(n+1)/ 1 duplicate parts of 4.n/ 1. Do as you please with the duplicates: either skip them, or use them for extra practice. Read Section 4.3 up through the last paragraph before Theorem 4.9; skim the remainder of Section 4.3 (unless you have the time and interest to read it in depth). I am not holding you responsible for the formula in Theorem 4.9. (Cramer's Rule is just this formula, not the whole theorem. You certainly are responsible for knowing, and being able to show, that if \(A\) is invertible, then \(A{\bf x}={\bf b}\) has a unique solution, namely \(A^{-1}{\bf b}.\)) 4.3/ 1(a)–(f), 9–12, 15. (For the definition of similar matrices, see p. 116.) Read Section 4.4, as well as my own summary of some facts about determinants below; a small numerical sanity-check sketch follows the summary. (Assigned exercises from Section 4.4 are listed after this summary.) In this summary, every matrix \(A, B, \dots,\) is \( n\times n\), where \(n\geq 1\) is fixed but arbitrary (except when examples for \(n=1, 2\), or \(3\) are given).
- The following are equivalent:
- \({\rm rank}(A)=n\)
- The set of columns of \(A\) is linearly independent.
- The set of columns of \(A\) is a basis of \({\bf R}^n\).
- The set of rows of \(A\) is linearly independent.
- The set of rows of \(A\) is a basis of \({\bf R}^n\).
- \(A\) is invertible.
- \(\det(A)\neq 0.\)
(In our coverage of Chapter 2, we showed that the first six statements on this list are equivalent; we have simply added a seventh.)
- \( \det(I)=1\) (where \(I\) is the \(n\times n\) identity matrix)
- \(\det(AB)=\det(A)\, \det(B)\)
- If \(A\) is invertible, then \(\det(A^{-1})=1/\det(A). \)
- \(\det(A)=\det(A^t)\)
- If \(A' \) is a matrix obtained by interchanging exactly two columns of \(A\) or exactly two rows of \(A\), then \(\det(A')=-\det(A)\).
- If \(A'\) is a matrix obtained from \(A\) by multiplying exactly one column or row of \(A\) by a nonzero real number \(c\) (leaving all other columns or rows of \(A\) unchanged), then \(\det(A')=c\det(A)\).
- For any nonzero \(c\in{\bf R}\), we identify the sign of \(c\) (positive or negative) with the corresponding real number \(+1\) or \(-1\). (Of course, "+1" can be written simply as "1".) This enables us to write equations involving multiplication by signs, e.g. "\(c={\rm sign}(c)\,|c|\)."
Every ordered basis \(\beta\) of \({\bf R}^n\) has a well-defined sign associated with it, called the orientation of \(\beta\), defined as follows:
If \(\beta=\{v_1, v_2, \dots, v_n\}\) is an ordered basis of \({\bf R}^n\), where we view elements of \({\bf R}^n\) as column vectors, let \(A_{(\beta)} =\left( \begin{array} {c|c|c|c} v_1 & v_2 & \dots & v_n \end{array} \right) \), the \(n\times n\) matrix whose \(i^{\rm th}\) column is \(v_i\), \(1\leq i\leq n\). (The notation \(A_{(\beta)}\) is introduced here just for this discussion; it is not permanent or standard.) Then \(A_{(\beta)}\) is invertible, so \(\det(A_{(\beta)})\) is not zero, hence is either positive or negative. We define the orientation of \(\beta\) (denoted \({\mathcal O}(\beta)\) in our textbook) to be \({\rm sign}(\det(A_{(\beta)}))\in \{+1,-1\}.\) Correspondingly, we say that the basis \(\beta\) is positively or negatively oriented. For example, the standard basis of \({\bf R}^n\) is positively oriented (the corresponding matrix \(A_{(\beta)}\) is the identity matrix).
With \(\beta\) as above, let \(\beta'=\{-v_1, v_2, v_3, \dots, v_n\}\), the ordered set obtained from \(\beta\) by replacing \(v_1\) with \(-v_1\), leaving the other vectors unchanged. Then \(\beta'\) is also a basis of \({\bf R}^n\), and clearly \({\mathcal O}(\beta') =-{\mathcal O}(\beta)\).
Thus there is a one-to-one correspondence (i.e. a bijection) between the set of positively oriented bases of \({\bf R}^n\) and the set of negatively oriented bases of \({\bf R}^n\). ("Change \(v_1\) to \(-v_1\)" is not the only one-to-one correspondence between these sets of bases. Think of some more.) In this sense, "exactly half" the bases of \({\bf R}^n\) are positively oriented, and "exactly half" are negatively oriented. (A term like "in this sense" is needed here since the phrase "exactly half of an infinite set" has no clear meaning.)
If we treat elements of \({\bf R}^n\) as row vectors, and define \(A^{(\beta)}\) to be the matrix whose \(i^{\rm th}\) row is \(v_i\), then \(A^{(\beta)}\) is the transpose of \(A_{(\beta)}\). Hence, because of the general fact "\(\det(A^t)=\det(A)\)," we obtain exactly the same orientation for every basis as we did by treating elements of \({\bf R}^n\) as column vectors.
- Determinants and geometry. There is a notion of \(n\)-dimensional (Euclidean) volume in \({\bf R}^n\) (let's just call this "\(n\)-volume") with the property that the \(n\)-volume of a rectangular box is the product of the \(n\) edge-lengths. The precise definition of \(n\)-volume for more-general subsets of \({\bf R}^n\) would require a very long digression, but for \(n=1, 2\) or 3 it coincides, respectively, with length, area, and what we are accustomed to calling volume.
In exercise 12 of the "Convex Sets" notes, (closed) parallelepiped in \({\bf R}^n\) was defined. For \(n=1\), a parallelepiped is an interval of the form \([a,b]\) (where \(a\leq b\)); for \(n=2\), a parallelepiped is a parallelogram (allowed to be "degenerate"; see the Convex Sets notes or the textbook); for \(n=3\), a parallelepiped is what you were taught it was in Calculus 3 (but allowed to be degenerate).
For an ordered \(n\)-tuple of vectors \(\alpha=({\bf a}_1, \dots, {\bf a}_n)\) in \({\bf R}^n\), let \(A_{(\alpha)} =\left( \begin{array} {c|c|c|c} {\bf a}_1 & {\bf a}_2 & \dots & {\bf a}_n \end{array} \right) \). (The only difference between this and our earlier \(A_{(\beta)}\) is that we are not requiring the vectors \({\bf a}_i\) to be distinct, or the set \( \{ {\bf a}_1, \dots, {\bf a}_n\}\) to be linearly independent.) For the parallelepiped \(P=P_{(\alpha)}\) in exercise 12 of the "Convex Sets" notes, with what we may call "edge vectors" \({\bf a}_1, \dots, {\bf a}_n\), the determinant of \(A_{(\alpha)}\) and the volume of \(P_{(\alpha)}\) coincide up to sign. More specifically:
- If \(\alpha\) is linearly independent, then \(\det(A_{(\alpha)})= {\mathcal O}(\alpha)\times\) (\(n\)-volume of \(P_{(\alpha)}\)).
- If \(\alpha\) is linearly dependent, then \(\det(A_{(\alpha)})= 0 =\) \(n\)-volume of \(P_{(\alpha)}\).
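Here is the numerical sanity-check sketch promised above (mine, and entirely optional); it spot-checks the product and transpose rules, the rank/determinant equivalence, and the orientation and area statements for \(n=2\).

import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[0., 1.], [1., 1.]])

# det(AB) = det(A)det(B) and det(A^t) = det(A):
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))

# det(A) != 0 exactly when rank(A) = n:
assert (abs(np.linalg.det(A)) > 1e-12) == (np.linalg.matrix_rank(A) == 2)

# Orientation: the standard basis gives the identity matrix (det = 1 > 0),
# and replacing v_1 by -v_1 flips the sign of the determinant.
v1, v2 = np.array([1., 0.]), np.array([0., 1.])
print(np.sign(np.linalg.det(np.column_stack([v1, v2]))))   #  1.0
print(np.sign(np.linalg.det(np.column_stack([-v1, v2]))))  # -1.0

# Geometry, n = 2: |det| of the matrix with edge vectors as columns is the
# area of the parallelogram they span; edges (2,0) and (1,3) give area 6.
a1, a2 = np.array([2., 0.]), np.array([1., 3.])
print(abs(np.linalg.det(np.column_stack([a1, a2]))))       # 6.0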
4.4/ 1, 4ag.
If I were asked to do 4g, I would probably not choose to expand along the second row or 4th column. Do you see why? If you were asked to compute \(\left| \begin{array}{ccc} 1 & 2 & 3\\ 0& 0 & 4 \\ 5&6&7\end{array}\right|, \) which method would you use? Read Section 5.1 before the Mon. Dec. 5 class. No homework will be collected on Tues. Dec. 6. I've been told that this is a "crunch week" for many students.
--------------------------------
The following is NOT HOMEWORK. It is enrichment for students who know some abstract algebra and have a genuine interest in mathematics.
There is a non-recursive, explicit formula for \(n\times n\) determinants. To understand the formula, you need to know (i) what the symmetric group (or permutation group) \(S_n\) is, and (ii) what the sign of a permutation is.
The formula is this: if \(A\) is an \(n\times n\) matrix, and \(a_{i,j}\) denotes the entry of \(A\) in the \(i^{\rm th}\) row and \(j^{\rm th}\) column, then $$ \det(A)=\sum_{\pi\in S_n} {\rm sign}(\pi)\ a_{1, \pi(1)}\, a_{2,\pi(2)}\, \dots\, a_{n, \pi(n)} \ \ \ \ (*) $$ (a sum with \(n!\) terms, each of which is a product of \(n\) entries of \(A\) and a sign). (You're forbidden from using formula (*) on graded work in this class, since we're not proving it. The fact that it's true is just an "FYI" for interested students.)
To use formula (*) to prove certain properties of the determinant, you need to know a little group theory (not much) and the fact that the map \({\rm sign}: S_n\to \{\pm 1\}\) is multiplicative (\({\rm sign}(\sigma\circ \pi)={\rm sign}(\sigma)\,{\rm sign}(\pi)\ \) ). With that much knowledge, you can use formula (*) to give proofs of various other facts by more-direct means than are in our textbook. For example, when proving that \(\det(A)=\det(A^t)\) or that \(\det(AB)=\det(A)\det(B)\), there's no need to use one argument for invertible matrices and another non-invertible matrices. Of course, formula (*) itself needs proof first!
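For the curious, formula (*) is short to implement (my own sketch; and remember, (*) remains off-limits on graded work):

import itertools
import numpy as np

def sign(pi):
    """Sign of the permutation pi (a tuple of 0,...,n-1), via inversion count."""
    n = len(pi)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n) if pi[i] > pi[j])
    return -1 if inversions % 2 else 1

def det_by_permutations(A):
    """det(A) computed as the n!-term sum in formula (*)."""
    n = A.shape[0]
    return sum(sign(pi) * np.prod([A[i, pi[i]] for i in range(n)])
               for pi in itertools.permutations(range(n)))

A = np.array([[1., 2., 3.], [0., 0., 4.], [5., 6., 7.]])
print(det_by_permutations(A), np.linalg.det(A))   # both 16.0 (up to rounding)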
There are even better proofs that \(\det(AB)=\det(A)\det(B)\), but they require far more advanced tools.
By the final exam Assignment 16 THERE IS NOTHING TO HAND IN FOR THIS ASSIGNMENT. The purpose of this assignment is to give you practice with the newest material before the final exam (Wed. Dec. 14 at 10:00 a.m.).
5.1/ 1, 2, 3abc, 4abd, 10, 12, 15, 16, 18, 20. You may realize, either before or after finishing 18(a), that it's related to 16(a). However, you can do 18(a) even if its relation to 16(a) doesn't occur to you. In Section 5.2, read up through Example 4 before the Wed. Dec. 7 class. In Section 5.2, also read the subsection "Test for Diagonalizability" (pp. 268–271), minus Example 6. (See note below.) 5.2/ 1a–e,g, 2, 7, 11, 13
----------------
Note about the "test for diagonalizability"
Below, "polynomial" always means "polynomial of degree \(\geq 1\).
The "test for diagonalizability", stated in the first paragraph of the subsection of the same name, is a essentially a combination of Theorem 5.6 and Theorem 5.8.
Because the book's treatment of "eigen-stuff" is very general, applying to vector spaces over an arbitrary field, this treatment omits some facts that are special to real vector spaces and complex vector spaces (i.e., vector spaces over \({\bf R}\) or \({\bf C}\), respectively).
The Fundamental Theorem of Algebra ("FTA" below) asserts that every polynomial with complex coefficients splits over \({\bf C}\). ("Complex" does not mean "not real"; \({\bf R}\) is a subset of \({\bf C}\).) Hence, in the setting of complex vector spaces, the first of the two conditions in the diagonalizability test (the characteristic polynomial splits) is superfluous; the test boils down to whether, for each eigenvalue, the geometric multiplicity coincides with the algebraic multiplicity. I'm not requiring you to know anything about complex vector spaces, but I'd be remiss if I didn't at least expose you to the FTA. (Side note: Despite its name, the Fundamental Theorem of Algebra has no purely algebraic proof! Every proof involves some analysis or topology.)
Since every real number is also a complex number, a corollary of the FTA is that every polynomial with real coefficients splits over \({\bf C}\). Some of these polynomials split over \({\bf R}\); some do not. Thus the characteristic polynomials of some real matrices split over \({\bf R}\), and some do not. Those that don't split over \({\bf R}\) have at least one factor of the form \(t-\lambda\) where \(\lambda\) is a non-real complex number.
For real \(n\times n\) matrices (or linear transformations between real \(n\)-dimensional vector spaces), many people use the word "eigenvalue" to mean "root of the characteristic polynomial" (whether or not the root is real). This is a matter of convenience: with this expanded definition, the first criterion in the test for diagonalizability of real matrices (or linear transformations) can be rewritten simply as, "All eigenvalues are real."
If you use this expanded definition of eigenvalue, keep in mind that, in the real-vector-space setting, only real eigenvalues are "true" eigenvalues, corresponding to at least one eigenvector in \({\bf R}^n\).
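A rough numerical version of the two-part test (my own sketch; floating-point arithmetic makes the tolerances crude, so treat this as illustration only): the characteristic polynomial splits over \({\bf R}\) iff all eigenvalues are real, and then each eigenvalue's geometric multiplicity, \(n-{\rm rank}(A-\lambda I)\), must match its algebraic multiplicity.

import numpy as np

def is_diagonalizable_over_R(A, tol=1e-6):
    eigenvalues = np.linalg.eigvals(A)
    if np.any(np.abs(eigenvalues.imag) > tol):
        return False                      # char. polynomial doesn't split over R
    real_parts, n = eigenvalues.real, A.shape[0]
    for lam in np.unique(np.round(real_parts, 6)):
        algebraic = int(np.sum(np.abs(real_parts - lam) < tol))
        # geometric multiplicity = dim of eigenspace = n - rank(A - lam*I)
        geometric = n - np.linalg.matrix_rank(A - lam * np.eye(n), tol=tol)
        if geometric != algebraic:
            return False
    return True

print(is_diagonalizable_over_R(np.array([[2., 0.], [0., 3.]])))   # True
print(is_diagonalizable_over_R(np.array([[1., 1.], [0., 1.]])))   # False (Jordan block)
print(is_diagonalizable_over_R(np.array([[0., -1.], [1., 0.]])))  # False (non-real eigenvalues)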
The remainder of this note is enrichment only; you are not required to read it.
Although not every polynomial \(p(t)\) with real coefficients splits over \({\bf R}\), the fact that such a polynomial splits over \({\bf C}\) has implications for the roots of \(p(t)\) and their multiplicities. Specifically, it can be shown that if \(p(t)\) is a polynomial with real coefficients, then for every non-real root \(z\), the complex conjugate \(\bar{z}\) is a root with the same multiplicity as \(z\). (If \(z=x+yi\), where \(x\) and \(y\) are real, then the complex conjugate of \(z\) is the complex number \(\bar{z}:=x-yi\). A real number is its own conjugate.) Hence if \(p(t)\) is a real polynomial of degree \(n\geq 1\), then the non-real roots of \(p(t)\) occur in conjugate pairs, with the two elements of the pair having the same multiplicity. Thus if \(p(t)\) has \(j\) distinct real roots \(r_1, \dots, r_j\), and \(k\) distinct pairs of conjugate non-real roots \( \{z_1, \overline{z_1}\}, \dots, \{z_k, \overline{z_k}\}\), then the complete factorization (splitting) of \(p(t)\) over \({\bf C}\) can be written in the form $$ p(t)= a(t-r_1)^{m_1}\dots (t-r_j)^{m_j} \ \ (t-z_1)^{m'_1}(t-\overline{z_1})^{m'_1}\dots (t-z_k)^{m'_k}(t-\overline{z_k})^{m'_k} \ \ \ \ \ \ (*) $$ where \(m_l\) and \(m'_l\) are the multiplicities of \(r_l\) and \(z_l\), respectively. (If \(p(t)\) has no real roots, then omit all the real-root factors in (*); if \(p(t)\) has no non-real roots, then omit all the non-real-root factors in (*).) The multiplicities satisfy \(m_1+\dots+m_j + 2(m'_1+\dots+ m'_k)=n\), of course.
Note that for the conjugate pair \(\{z_l, \overline{z_l}\} =\{\alpha_l+\beta_l\, i,\ \alpha_l-\beta_l\, i\}\), $$ (t-z_l)^{m'_l}(t-\overline{z_l})^{m'_l} = [(t-z_l)(t-\overline{z_l})]^{m'_l} =(t^2+a_l t +b_l)^{m'_l} $$ where \(a_l=-2\alpha_l\) and \(b_l= \alpha_l^2+\beta_l^2\). Thus $$ p(t)=a(t-r_1)^{m_1}\dots (t-r_j)^{m_j}\ \ q_1(t)^{m'_1} \dots q_k(t)^{m'_k}, \ \ \ \ \ \ \ (**) $$ where \(q_l(t)=t^2+a_l t +b_l\), an irreducible quadratic factor of \(p(t)\). (Here "irreducible" means "having no real roots"; the roots of \(q_l\) are the non-real conjugate pair \(\{\alpha_l\pm \beta_l\, i\} \).) The factorization (**) should be familiar to you as the starting point of the partial fractions decomposition of a general rational function.
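A concrete instance (my own illustration): \(p(t)=t^3-t^2+t-1\) has the single real root \(r_1=1\) and the conjugate pair \(z_1=i,\ \overline{z_1}=-i\) (so \(\alpha_1=0\), \(\beta_1=1\), hence \(a_1=0\), \(b_1=1\)), and indeed $$ t^3-t^2+t-1 \;=\; (t-1)(t-i)(t+i) \;=\; (t-1)(t^2+1), $$ which is (**) with \(q_1(t)=t^2+1\) and \(m_1=m'_1=1\).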
W 12/14/22
Final Exam. The final exam will be given on Wednesday, December 14, starting at 10:00 a.m., in our usual classroom.