In the table below, "NSS" stands for our textbook. Exercises are
from NSS unless otherwise specified.
| Date due |
Section # / problem #'s |
| M 8/25/25 |
Read
the class home page and
syllabus webpages.
Go to the Miscellaneous Handouts page (linked to the
class home page) and read the web handouts
"Taking and Using
Notes in a College Math Class," "Sets and Functions," and
"What is a solution?"
Never treat any reading portion
of any assignment as optional, or as something you're sure
you already know, or as something you can postpone
(unless I tell you otherwise)! I can pretty much guarantee that
every one of my handouts has something in it that you don't know,
no matter how low-level the handout may appear to be at
first.
Read Section 1.1 and do problems 1.1/ 1–16.
Since not everyone may have access to the
textbook yet, here is a
scan of the first 15 pages (Sections
1.1–1.2, including all the exercises).
Do non-book problem
1.
(This link takes you to a page with all the non-book
problems that I expect to assign eventually; your current
assignment includes only the first of these problems.)
In my notes on first-order
ODEs (also linked to the Miscellaneous Handouts page), read
the first three paragraphs of the introduction, all of Section
3.1, and Section 3.2.1 through Definition 3.1. In
all readings I assign from these notes,
you should skip anything labeled "Note(s) to instructors".
Whenever I update these notes (whether
substantively or just to
fix typos), I update the
version-date line on p. 1. Each time
you're going to look at the notes,
re-load them to make sure that you're looking at
the latest version.
|
| W 8/27/25 |
1.2/ 1, 3–6, 17, 19–22.
Whenever you
see the term "explicit solution" in the book, you should
(mentally) delete the word "explicit".
(Until the third author was added to later
editions of the textbook, what NSS now calls an explicit
solution is exactly what it had previously called, simply and
correctly, a solution. The authors tried to "improve" the
completely standard meaning of "solution of a DE". They did not
succeed.)
See Notes on some book
problems for additional corrections to the wording of several
of the Section 1.2 problems.
  In #17, don't worry if you're unsure
what "one-parameter family of solutions" means; I don't address it
till Section 3.2.4 of my notes (and you don't need to know what it
means to do the exercise). If you roughly understand the
terminology now, great; if not,
make a note to yourself to
re-read this problem once we've covered that terminology. The book
uses the terminology incorrectly in many places, but the usage in
1.2/17 is correct.
Note: The exercise portions of many
(probably most) of your homework assignments will be a lot more
time-consuming than in the assignments to date; I want to give you
fair warning of this before the end of Drop/Add.
However, since my posted notes are only
on first-order ODEs, the reading portions of the
assignments will become much lighter once we're finished with
first-order equations (which will take the first month or so of
the semester).
In
my notes, read from where you left off in the last assignment
through Example 3.11 (p. 15).
Make sure you are keeping up with the reading as I assign
it (i.e. if the assignment with due-date X says "Read up through
[this item]," then read at least that far by date X), even if it
seems to be ahead of, or not connected to, what I've covered in class
yet. There isn't enough time to cover everything in my notes in
class—among other things, my notes incorporate a lot of
material that should have been included in your prerequisite courses,
or even in high school, but probably wasn't—and if you wait
until you think my class lectures have fully prepared you for a
given reading assignment, you'll have far more to read than you
can possibly absorb in a few days. Setting aside (only) one day a
week as your "differential equations day" will not serve you well in
this course.
You may not understand everything
the first time you read it. That's OK. Your brain needs time to
process new concepts, and a lot of that processing takes place
unconsciously. Ever wake up and suddenly understand something that you
didn't understand the day before?
|
| F 8/29/25 |
In the textbook, read the
first page of Section 2.2, minus the last sentence.
(We will discuss how to solve separable
equations after we've finished discussing linear equations, the topic
of Section 2.3. The only reason I'm
having you read the first page of Section 2.2 now is so that you can
do the first few exercises of Section 2.3. But as a "bonus", you'll also
be able to do the exercises in Section 2.2 assigned
below.)
2.2/ 1–4, 6
2.3/ 1–6
In
my notes, read from where you left off in the last assignment
through the one-sentence paragraph after Definition 3.19.
|
| W 9/3/25 |
In Section 2.3, read up through p. 50, but mentally
make some modifications:
- Replace the book's transition from equation (6) to
equation (7) by what I said in class: that \(\mu'/\mu
=\frac{d}{dx} \ln |\mu|.\) (The book's reference to
separable equations is unnecessary, and does not
lead directly to equation (7); it leads to a similar
equation but with \(|\mu(x)|\) on the left-hand side. In
class, I'll go over why we can get rid of the absolute-value
symbols in this setting; see also the short sketch after these
bullet points. Equation (7) itself is fine, modulo the
meaning of indefinite-integral notation; it's only the
book's derivation that has problems.)
-
Remember that whenever you see an indefinite
integral in the book, e.g. \(\int f(x)\, dx,\) the meaning is
my "\(\int_{\rm spec} f(x)\, dx\)." If you'd like a review of
what I said about notation for indefinite integrals, go to
my Spring 2024 homework page,
locate the assignment that was due 1/19/24, and in the first
bullet-point, read from the beginning of the second sentence ("Remember
...") to the end of the green text.
In
my notes, read from where you left off in the last assignment
through the end of Section 3.2.4 (the middle of p. 27).
In Friday's class I didn't get quite far enough to
write a clean summary of the method we had effectively
derived, but the box on p. 50 serves that purpose (modulo the
indefinite-integral notation). Armed with this, you should
be able to do most of the exercises I'll be assigning from
Section 2.3, but I'm putting these exercises into
the next assignment since I didn't get quite far
enough. However, I recommend
that,
by Wednesday
9/3, you do as many of these exercises as you can, so that
the next assignment isn't extra-long.
If you want to read more examples
before next class, and before starting on the exercises, it's
okay to look at Section 2.3's Examples 1–3, but be warned:
Examples 1 and 2 have some extremely poor writing that you
probably won't realize is poor, and that
reinforces certain bad habits that
most students have but that few are aware of. (Example 3 is
better written, but shouldn't be read before the other two.)
Specific
problems with Example 2 (one of which also occurs in Example
1) are discussed in the same
Spring 2024 assignment mentioned above. The most pervasive of these
is the one in the last small-font paragraph in that assignment.
|
| F 9/5/25 |
2.3/ 7–9,
12–15 (note which variable is which in #13!),
17–20
When you apply the integrating-factor
method, don't forget
the first step: writing the equation in "standard linear form",
equation (15) in the book. (If the original DE had an \(a_1(x)\)
multiplying \(\frac{dy}{dx}\) — even
a constant function other than 1—you have to divide
through by \(a_1(x)\) before you can use the formula for \(\mu(x)\) in
the box on p. 50; otherwise the method doesn't work.) Be
especially careful to identify the function \(P\) correctly; its
sign is
very important. For example, in 2.3/17,
\(P(x)= -\frac{1}{x}\), not just \(\frac{1}{x}\).
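As a quick illustration of both points (my own example, not one of the assigned
exercises): to solve \(x\frac{dy}{dx}-y=x^2\) on an interval where \(x>0\), first
divide by \(a_1(x)=x\) to put the equation in standard linear form, and note the
sign of \(P\):
$$\frac{dy}{dx}-\frac{1}{x}y=x,\qquad P(x)=-\frac{1}{x},\qquad
\mu(x)=e^{\int -\frac{1}{x}\,dx}=e^{-\ln x}=\frac{1}{x},$$
$$\frac{d}{dx}\!\left(\frac{y}{x}\right)=1,\qquad \frac{y}{x}=x+C,\qquad y=x^2+Cx.$$
Skipping the division by \(x\), or taking \(P(x)=+\frac{1}{x}\), produces the wrong
\(\mu\) and hence a wrong answer.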
2.3 (continued)/ 22, 23, 25a, 27a, 28, 31, 33, 35
See my Spring 2024 homework
page, assignment due 1/22/24, for corrections to some of the
Section 2.3 exercises. Also, in that same assignment read the three
paragraphs at the bottom of the assignment.
(The reason I'm not simply recopying such items into this semester's
homework page is that some of my 2023-2024 students said that,
although my comments and corrections were intended to be helpful, they
made those assignments look
overwhelming.)
2.2/ 34. Although this exercise is in the section on
"Separable Equations" (which we haven't discussed yet), the DE
happens to be linear as well as separable, so you're equipped to
solve it. For solving this equation, the "linear equations
method" is actually simpler than—I would even say
better than—the (not yet discussed) "separable
equations method".
(The
same is true of Section 2.1's equation (1), which the book solves
by the "separable equations method"—and makes two
mistakes in the sentence containing equation (4). This is why
I did not assign you to read Section 2.1.)
Do non-book problem
2.
In
my notes, read from the
beginning of Section 3.2.5 (p. 27) through the end of
Definition 3.23 (p. 33), plus the paragraph after that definition.
(All of this is needed for a
proper understanding of the word "determines" in the book's Definition
2 in its Section 1.2! [I still haven't defined "implicit solution of a
DE" yet; the above reading is needed just to understand the single
word "determines" in that definition.] This is one of the biggest
reasons I didn't assign you to read Section 1.2.)
Then
do the exercise that's shortly after Definition 3.23 in my notes.
|
| M 9/8/25 |
In
my notes:
- Read the remainder of Section 3.2.5 (pp. 33–34).
- Read Section 5.5 (The Implicit Function
Theorem).
- In Section 3.2.6, read up through the end of Example 3.27
(pp. 37–38).
After you've done that reading,
do the following exercises from the
textbook: 1.2/ 2, 9–12, 30. In #30, ignore the
book's statement of the Implicit Function Theorem; use the
statement in my notes. The theorem stated in problem 30
is much weaker than the Implicit Function Theorem, and should
not be called by that name. In fact, problem 30 cannot even be done using
the book's theorem, because of the words "near the point (0,1)"
at the end of the problem.
|
| W 9/10/25 |
Re-do 1.2/ 9–12 without the book's instruction
to assume that "the relationship does define \(y\) as a function
of \(x\)."
In my notes, read
Sections 5.2 and 5.4.
(If you have any uncertainty
about what an interval is, read Section 5.1 as well.
I already covered
Section 5.3—a review of the Fundamental Theorem of
Calculus—in class, so this section is optional reading for you.)
My notes' Theorem
5.8, the "FTODE", is what the textbook's Theorem 1 on
p. 11 should have said (modulo my having used
"open set"
in the FTODE instead of the book's "open rectangle").
1.2/ 18, 23–28, 31. Do not do these until
after you've read Section 5.4 in my notes.
Anywhere that the book asks you whether its Theorem 1 implies
something, replace that Theorem 1 with the FTODE stated in my notes.
  See my Fall 2024 homework
page, assignment due 9/11/24, for corrections to some of these
exercises, and some other brief comments.
In my notes, continue reading
Section 3.2.6 (from where you left off) up through the
paragraph before Example
3.31 on p. 43, and read Sections 3.2.8 and 3.2.9. (The latter two
sections should be easy and can be read before Section 3.2.6; I've
already discussed a lot of their content in class.)
Reminder: reading my notes is not optional
(except for portions that I [or the notes] say you may
skip, and the footnotes or parenthetic comments that say "Note to
instructor(s)").
You should do
your best to complete each reading assignment by the due date I
give you. If you let yourself fall
significantly behind, planning to catch up later, you will
have far too much to absorb in too little
time. What I've put in the notes are things that are
not adequately covered in our textbook (or any current textbook
that I know of). Unfortunately there isn't enough time to go
over most of these carefully in class; we would not get through
all the topics we're supposed to cover.
|
| F 9/12/25 |
Skim Section 2.2 in the textbook, up through Example 3.
I'm always uneasy about having my
students read this section. The book's explanations and
definitions in this section say many of the
right things, but don't hold up under scrutiny, and there's
a lot of poor writing that I hate exposing you to. Furthermore,
the most prominent item in the section—the box on
p. 42—is misleading. The correct "method for solving
separable DEs" has two parts, one of which is
the (not quite finished) mechanical method in the box.
The correct
name for the method in the box on p. 42 is
separation of variables.
Also, this PART of the method for solving
separable DEs has (potentially) one
more step: solving equation (3) explicitly for \(y\) in terms
of \(x\) when possible.
There is still some
conceptual material, absent from the book, that we haven't covered yet;
until we do, doing the exercises in Section 2.2 will amount to little more than
pushing the symbols around the page a certain way. However,
you do need to start getting some practice with the mechanical
(what I called the "brain off")
separation-of-variables method; otherwise you'll have too much to
do in too short a time. So I've assigned some exercises from
Section 2.2 below, for you to attempt based on Wednesday's class
and your reading, but
with special temporary instructions.
2.2/ 7–14. For now (with the Friday 9/12
due date), all I want you to do in these exercises is
(a) to achieve
an answer of the form of equation (3) in the box on
p. 42—without worrying about intervals, regions, or exactly what
an equation of this form has to do with (properly
defined) solutions of a DE, and (b) to find all the
constant
solutions,
if there are any.
Save your work, so that when I re-assign these exercises later,
at which time your goal will be to get a complete answer that you fully
understand, you won't have to re-do this part of the work.
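To illustrate just these temporary instructions (with an equation of my own, not
one of exercises 7–14): for \(\frac{dy}{dx}=\frac{x+1}{y^2}\), (a) separating
variables and integrating gives an equation of the same general form as equation (3),
$$y^2\,dy=(x+1)\,dx \quad\Longrightarrow\quad \frac{y^3}{3}=\frac{x^2}{2}+x+C;$$
and (b) writing the right-hand side of the DE as \(g(x)p(y)\) with \(g(x)=x+1\) and
\(p(y)=1/y^2\), we see that \(p\) is never zero, so this particular DE has no
constant solutions.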
In my notes, finish
reading Section 3.2.6.
I've been making some (relatively minor)
revisions to my notes. Near the top of p. 1 is a line that gives
the version date. Any page-references in a homework
assignment reflect the then-current version of the notes, but
some in past assignments may be slightly off from the current
version. To make sure you're looking at the current
version, it's best to at least refresh your browser's view (if not
re-download a fresh copy) each
time
you look at the notes.
|
| M 9/15/25 |
In my notes:
- Read Section 3.2.7 up through the paragraph after
Definition 3.35 ("As mentioned earlier ...").
- Read
Section 3.2.10 up through the paragraph after the statement of Theorem
3.45. (This one-sentence paragraph
explains the notation in equation (3.103) in Theorem
3.45.) This theorem assures us that, when its
hypotheses are met, every solution of \(\frac{dy}{dx}=g(x)p(y)\) in
the indicated region \(R\) is either a constant solution or can be
found, at least in implicit form, by separation of variables (the
"brain-off" method in the box on p. 42 of the textbook).
On my Spring 2024 homework
page, go to the assignment that was due 1/26/24, and read the
(whole) second bullet-point (which continues until the end of that
assignment). This details several of the items that are misleading or
just plain wrong in the book's Section 2.2. In the last
non-parenthetic sentence of that assignment, "the method we've
studied" is the method that
we may still be in the
process of studying this semester (the method summarized by Theorem
3.45 in my notes, and justified by the proof of that theorem a few
pages later). In the blue portion of this bullet-point,
"Theorems 3.44 and 3.46" are now Theorems 3.45 and 3.48, and
"Example 3.47" is now Example 3.49.
Return to exercises 2.2/ 7–14 that I had you partially
do in the previous assignment. Using Theorem 3.45 in my notes,
this time find all the maximal solutions. Don't worry
about graphing the solution-curves for any of the exercises in
the current assignment; that's more than the exercises are asking
for, and would take more time than it's worth.
The exercises in this assignment (above and below) are geared
towards giving you practice with the two-part
procedure for solving separable DEs (one part
being separation of variables [the box on NSS p. 42],
the other being finding any constant solutions the DE may have
[it may not have any]). Although I haven't discussed various
subtleties yet in class, or finished justifying the
two-part procedure yet (we almost finished on Friday,
but not quite), the procedure
does find all the solutions of the
DEs in this assignment; you
may assume this when doing the exercises.
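Here is a small illustration of that two-part procedure (my own example, not one of
exercises 7–14), in a case where the hypotheses of Theorem 3.45 hold on the whole
plane: for \(\frac{dy}{dx}=2xy\),
- constant solutions: \(p(y)=y\) is zero only at \(y=0\), so \(y\equiv 0\) is the
only constant solution;
- separation of variables (for \(y\neq 0\)):
$$\int\frac{dy}{y}=\int 2x\,dx \quad\Longrightarrow\quad \ln|y|=x^2+C
\quad\Longrightarrow\quad y=Ae^{x^2},\ \ A\neq 0.$$
Putting the two parts together, the maximal solutions are exactly the functions
\(y=Ae^{x^2}\) with \(A\in\bfr\), each defined on all of \(\bfr\); the constant
solution is the case \(A=0\), which separation of variables alone would have missed.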
Do non-book problems
3–5 .
Answers to these
non-book problems are posted on the
"Miscellaneous handouts" page.
General comment. In doing the
exercises from Section 2.2, or the non-book problems, you may
find that, often, the hardest part of doing
such problems
is doing the integrals. I
intentionally assign problems that require you to refresh most of your
basic integration techniques (not all of which are adequately
refreshed by the book's problems).
If you need to review the method of partial fractions,
you can undoubtedly find it online somewhere, but our textbook has
its own review on pp. 370–374. This
review is interspersed with examples related to the topic of
Chapter 7, Laplace Transforms, which we are a long way from
starting to cover. For purposes of simply reviewing
partial fractions, ignore everything in Examples 5, 6, and 7
on these pages except for the partial fractions
computations. (For example, ignore any equation that has a curly
"L" in it.)
2.2/ 17–19, 21, 24
The book's IVP exercises are not rich
enough, by a long shot, to illustrate the dangers
of keeping your brain turned off after you've separated
variables (putting all \(y\)'s on one side of the equation and all
\(x\)'s on the other, if these are the variable-names) and done
the relevant integrals. Non-book problems 7 and 8, which will be
in an upcoming assignment, were constructed to remedy this
poverty. Feel free to tackle these before they're assigned.
|
| W 9/17/25 |
2.2/ 27abc
Do non-book problems
6–8.
Re-do 2.2/ 18 with the initial condition \(y(5)=1.\)
In my notes:
- Read the remainder of Section 3.2.7.
- In Section 3.2.10, starting where you left off,
read up through at least the portion of the proof of Theorem
3.45 that ends with statement (3.109).
This reading now (as of 9/15, with a further update
9/16) includes a new "Remark 3.46" (currently on
pp. 62–63) that may not have existed the last time you
looked at the notes. Make sure you read the new remark.
Inclusion of the new Remark 3.46
affected the numbering of all subsequent remarks, theorems,
examples, etc. In the assignment that was due 9/15, I've
retroactively updated the (small number of) references to
items numbered 3.46 or higher.
My 9/15 and 9/16 updates also
included some minor changes made on a few earlier pages,
mostly for notational consistency with later pages. These
changes affected the page-breaks (but not any
item-numbering) starting around p. 48. In case you're
looking for something that's not where you remember it
being, on the Miscellaneous Handouts page I've put a link to
the notes as they existed 9/14/2025.
The new Remark 3.46 is about what I call
"semi-arbitrary constants" (read the new Remark, then return
here). In notation of the form "\( \{\mbox{[equation involving
$C$]} : C\in \bfr\}\)", \(C\) is an arbitrary constant,
meaning that \(C\) could be any real number. As I've said before,
a convention I allow in this class is that if we simply omit the
"\(\in \bfr\)", we also mean that \(C\) is an arbitrary constant.
When we write down families of solutions, or implicit solutions,
of some DEs, it may not be obvious whether the \(C\)'s (or
whatever letter we're using for the same purpose) should be
arbitrary, or should have some restrictions, and, in the latter
case, what these restrictions should be. But unless I
specify otherwise, I don't want you to write "\(C\in {\mathcal
K}\)", with or without defining \({\mathcal K}\); that
was just something I required of myself because I wanted
Theorem 3.45 to be precise and completely correct (even at the
cost of some ease-of-understanding). What I'll want
you to write on an exam—unless I specify
otherwise—is one of the following (which may not be equally
acceptable, depending on the problem; see below):
- "\( \{\mbox{[equation involving
$C$]}\}\), where \(C\) is an arbitrary or semi-arbitrary
constant." This answer is suitable if it's not obvious to you
whether there need to be any restrictions on \(C\). Usually, your
time would be better spent on other problems (or problem-parts)
than on trying to be more precise about this \(C\).
- "\( \{\mbox{[equation involving $C$]}\}\), where \(C\) is
a semi-arbitrary constant." This answer is suitable if you can
tell that there need to be some restrictions on \(C\), but
it's not obvious to you exactly what these restrictions should be.
Again, usually, your time would be better spent on other
problems (or problem-parts) than on trying to be more precise
about \(C\).
- "\( \{\mbox{[equation involving
$C$]}\}\), where \(C\) is an arbitrary
constant" or "\( \{\mbox{[equation involving
$C$]}: C\in \bfr\}\), where \(C\) is an arbitrary
constant" or "\( \{\mbox{[equation involving
$C$]}\}\)." (The first two of these mean the same thing.
In this class, we're allowing the third of these to be
short-hand for the first two.) This answer is suitable if you can
tell that there are no restrictions on \(C\).
In an exam problem, for the number of points I'm allotting for
finding a correct family of implicit solutions (whether or not that
family is an implicit form of the whole general solution), you'd get
the vast majority of points regardless of which of the three above
answers you gave (assuming your equations were correct). In some
cases, I may think that you should have been able to tell whether
your \(C\) should be arbitrary or only semi-arbitrary, or to be able
to tell explicitly what restrictions on \(C\) are needed. For
example, if you come up with \(\{x^2+y^2=C\}\) as a family of
implicit solutions of \(y\frac{dy}{dx}+x=0\), I would expect you to
notice that the restriction "\(C>0\)" is needed, and if you didn't
state this restriction I might take
off a point or two—which is a lot fewer points
than you'd lose for not having time to do some other problem.
The same principle applies when I think you
shouldn't find it difficult to see that no
restrictions on \(C\) are needed, and applies also if you say "\(C\)
is semi-arbitrary" when "\(C\) is arbitrary" is correct or
vice-versa.
In instances in which \(C\) is semi-arbitrary, and figuring out
the precise restrictions on \(C\) is non-trivial, but you succeed
in correctly figuring them out, I might give you a few points of
extra credit—but again, not enough to make up for what you'd
lose by not getting to another problem on the exam.
On my exams, bad time-management
generally costs students more points than anything else! In all the
cases above, figuring out something about \(C\)
that isn't immediately obvious to you is best postponed until
after you've finished the other problems.
|
| F 9/19/25 |
In my notes:
- Read the remainder of Section 3.2.10.
- Read Section 3.3.1. (Much of
pp. 74–77, and roughly the first half of p. 78, repeats
material I covered in Wednesday's class. Something important on
these pages that I did not get to is Definition 3.52. I
also didn't mention anything like Example 3.55 on
Wednesday.) With the exception of the definition
of the differential \(dF\) of a two-variable function \(F\), the
material in Section 3.3.1 of my notes is basically not discussed
in the book at all, even though differential-form DEs appear in
(not-yet-assigned) exercises for the book's Section 2.2 and in
all remaining sections of Chapter 2. (Except for "Exact
equations"—Section 3.3.6 of my notes—hardly anything
in Section 3.3 of my notes [First-order equations in
differential form] is discussed in the book at all.)
In the textbook, read Section 2.4 up through the boxed
definition "Exact Differential Form" on p. 59. Also, on
my Spring 2024 homework page,
go to the assignment that was due 2/7/24, and read
"Comments, part 1" and "Comments, part 2."
|
| M 9/22/25 |
In my notes:
- Read
Sections 3.3.2 and 3.3.3. You may skip the portions labeled
"optional reading".
When reading anything in Section 3.3
(all of the "3.3.x" subsections) and Sections 3.4–3.6,
remember that Section 3.7 summarizes all the definitions
and results in those sections. To avoid getting lost in the weeds,
refer to this summary as often as you need; that's the
whole reason for Section 3.7's existence.
- In Section 3.3.5, read up through Example 3.72.
Section 3.3.5 essentially addresses: what
constitutes a possible answer to various questions, based on the
type of DE (derivative-form or differential-form) you're being asked
to solve? A proper answer to this question requires taking into
account some important facts omitted from the textbook (e.g. the fact
that DEs in derivative form and DEs in differential form
are not "essentially the same thing").
2.2 (not 2.3 or 2.4)/ 5, 15, 16.
(I did not assign these when we were
covering Section 2.2 because we had not yet discussed
"differential form".)
Previously, we defined what "separable" means
only for a DE in derivative form. An equation in differential
form is called separable if, in some region of the
\(xy\) plane (not necessarily the whole region on which the given DE
makes sense), the given DE is algebraically equivalent to an equation
of the form \(h(y)dy=g(x)dx\) (assuming the variables are \(x\) and
\(y\)). This is equivalent to the condition that the derivative-form
equation obtained by
formally dividing the original equation by
\(dx\) or \(dy\) is separable.
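For example (my own equation, not one of the exercises above): the differential-form
equation \(x\,dy+2y\,dx=0\) is separable, since in any region where \(x\neq 0\) and
\(y\neq 0\) it is algebraically equivalent to
$$\frac{1}{y}\,dy=-\frac{2}{x}\,dx,$$
which has the form \(h(y)\,dy=g(x)\,dx\). Equivalently, formally dividing the
original equation by \(dx\) gives the separable derivative-form equation
\(\frac{dy}{dx}=-\frac{2y}{x}\).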
As for how to solve these equations: you will
probably be able to guess the correct mechanical procedure. A natural
question is: how can you be sure that these mechanical procedures give
you a completely correct answer? That question is, essentially, what Sections
3.4–3.6 of my notes
are devoted to.
Warning. For
questions answered in the back of the book: not all answers there are
correct
(that's a general statement; I haven't done a separate
check for the exercises in this assignment)
and some may be misleading. But most are either correct, or
pretty close.
|
| W 9/24/25 |
In the textbook, continue reading Section 2.4, up through Example
3. Then do the following exercises:
2.4/ 1–8.
Note: For differential-form DEs, there is no
such thing as a linear equation. In these problems, the book
means for you
to classify an equation in differential form as linear if
at least one of the associated derivative-form equations (the ones
you get by formally dividing through by \(dx\) and \(dy\),
as if they were numbers) is linear. It is possible for one of
these derivative-form equations to be linear while the other is
nonlinear; this happens in several of these exercises.
For example, the associated derivative-form
DE for \(y(x)\) may be linear even though the associated derivative-form DE for
\(x(y)\) is not (see the illustration below).
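Concretely (with an equation of my own, not necessarily one of exercises 1–8): for
\((y-x^2)\,dx+x\,dy=0\), formally dividing by \(dx\) gives \(x\frac{dy}{dx}+y=x^2\),
i.e. (where \(x\neq 0\)) \(\frac{dy}{dx}+\frac{1}{x}y=x\), which is linear in \(y\);
but formally dividing by \(dy\) gives \((y-x^2)\frac{dx}{dy}+x=0\), which is not
linear in \(x\) because of the \(x^2\) multiplying \(\frac{dx}{dy}\).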
In my notes,
read the remainder of Section 3.3.5, and read Section 3.3.6
up through Example 3.76. (The remainder of Section 3.3.6
is optional reading.)
If I have not yet gone through the "exact equation method" in class,
read the rest of NSS Section 2.4 to see the mechanics of solving an
exact DE. (Just don't trust any "justifications" or
terminology in this section.) This should be enough to enable you to do the
exercises below, though not necessarily with confidence if I
haven't gone through this in class yet.
Don't invent a different method for solving
exact equations, or use a different method you may have
seen before. (See next bullet-point.)
Please do not ask me about any different
method until you have completed reading the "A terrible way
..." handout in the next bullet-point. I
guarantee you that if you've invented, or have
ever been shown, an alternative to the method that's shown in the
book (and that I'll go over in class), your alternative
method is exactly the "terrible method" laid out at
the beginning of the handout. Every year a student who hasn't yet
read the handout comes up to me after class and asks, "But how
about this method I saw (or was shown) for solving exact
equations?" It's always exactly the method that I'm
calling the "terrible method". ALWAYS. WITHOUT EXCEPTION.
You may have thought this method was good in the past. That's the
fault of whoever taught it to you (or simply let you use it, if you
re-invented the method yourself) and designed the examples you saw.
Read the handout
A terrible way to solve exact
equations. (Note: The "(we
proved it!)" in the handout may not become true till after the
due-date of this assignment.)
The example in
this version of the handout is rather complicated; feel free to read
the simpler example in the
original version
instead.
For additional comments on this handout and the terrible method, see
my Spring 2024 homework page,
assignment due 2/12/24.
If you still have questions about an alternative
method AFTER you've read the handout and we've shown in class why the correct
method works, I'm happy to discuss those questions with
you in office hours.
2.4/ 9, 11–14, 16, 17, 19, 20
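In case it helps you get started, here is a minimal sketch of the mechanics on an
equation of my own (not, as far as I know, one of the assigned exercises); treat it
as an illustration of the book's boxed procedure, not a substitute for the reading.
For \((2xy+1)\,dx+(x^2+3y^2)\,dy=0\), with \(M=2xy+1\) and \(N=x^2+3y^2\),
$$\frac{\partial M}{\partial y}=2x=\frac{\partial N}{\partial x},$$
so the differential form is exact. Then
$$F(x,y)=\int(2xy+1)\,dx=x^2y+x+g(y),$$
and setting \(\frac{\partial F}{\partial y}=x^2+g'(y)\) equal to \(N=x^2+3y^2\)
gives \(g'(y)=3y^2\), so we may take \(g(y)=y^3\). The solution curves therefore lie
in the level sets
$$x^2y+x+y^3=C.$$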
|
| F 9/26/25 |
2.2 (not 2.3 or 2.4)/ 22.
Note that although the differential
equation doesn't specify independent and dependent variables, the
initial condition does. Thus your goal in this exercise is to
produce a solution "\(y(x)= ...\)".
This exercise, as written, is an
example of what I call a "schizophrenic" IVP.
If
what you're after are solutions with independent variable \(x\) and dependent
variable \(y\) (which is what an initial condition of the form
"\(y(x_0)=y_0\)'' indicates), then the differential equation you were
interested in at the start was one in derivative form
(which in exercise 22 would be \(x^2 +2y \frac{dy}{dx}=0\), or an
algebraically equivalent version), not one in differential
form. Putting the DE into differential form is often a useful
intermediate step for solving such a problem, but differential form is
not the natural starting point. On the other hand, if what you are
interested in from the start is a solution to a
differential-form DE, then it's illogical to express a preference for
one variable over the other by asking for a solution that satisfies a
condition of the form "\(y(x_0)=y_0\)'' or "\(x(y_0)=x_0\)''. What's
logical to ask for is a solution whose graph passes through the
point \((x_0,y_0)\), which in exercise 22 would be the point
(0,2). (That's how the exercise should have been written.)
2.4/ 21, 22 (note that
#22 is the same DE as #16, so you don't have to solve a new DE; you
just have to incorporate the initial condition into your answer
to #16.)
Note that exercises
21–26 are what I termed "schizophrenic" IVPs.
Your goal in these problems is to find an
explicit formula for a solution, one expressing the dependent
variable explicitly as a function of the independent variable
—if algebraically possible—with the choice of
independent/dependent variables indicated by the initial condition.
However, for
these schizophrenic IVPs, if the algebraic equation ''\(F({\rm variable}_1, {\rm
variable}_2)=0\)'' that you get via the exact-equation method
can't be solved explicitly for the
dependent variable in terms of the independent variable, you have to
settle for an implicit solution.
2.4/ 29, modified as below.
- In part (b), after the word "exact", insert "on some regions
in \({\bf R}^2\)." What regions are these?
- In part (c), the answer in the back of the book is missing a solution
other than the one in part (d). What is this extra missing
solution?
- In part (c), the exact-equation method gives an answer of the
form \(F(x,y)=C\). The book's answer is what you get if you try
to solve for \(y\) in terms of \(x\). Because the equation you
were asked to solve was in differential form, there
is no reason to solve for \(y\) in terms of \(x\), any more
than there is a reason to solve for \(x\) in terms of \(y\).
As my notes say (currently on p. 78):
For any differential-form DE, if
you reverse the variable names you should get the same set of
solutions, just with the variables reversed in all your
equations. This will not be the case if you do what the book did
to get its answer to 29(c), treating your new \(x\) (old
\(y\)) as an independent variable.
In my notes, read Sections 3.3.4
and 3.4. (Section 3.4 answers questions that
several of you have already anticipated!) Remember that
you're allowed to skip anything labeled "optional reading",
which accounts for about half the length of Section 3.4.
|
| M 9/29/25 |
In my notes:
- Skim
Section 3.3.7 up through the boldfaced statement (3.151). Read
statement (3.151) itself.
- Read Sections 3.4, 3.5, and 3.6. (Remember that the most important
conclusions—the ones displayed in boldface—are summarized
in Section 3.7. It's OK to read the summary first, and do a more
careful reading when you have more time.)
Do non-book problem
10. You may not get completely correct answers to parts of
problem 10 if you haven't read Sections 3.4–3.6 of my
notes.
|
| W 10/1/25 |
Do non-book problems 9 and
11. Note: There was a typo in the
original version of problem 11c: in the last of the four
identities,
there was a "\(+2\pi\)"
that should have been "\(-2\pi\)". This has now been
fixed.
Read Section 4.1 of the textbook.
(We're skipping Sections 2.5 and 2.6, and all of Chapter 3.)
We will be covering the
material in Sections 4.1–4.7 in an order that's different from the
book's.
|
| F 10/3/25 |
As part of your preparation for next week's exam,
read The Math
Commandments.
4.7 (yes, 4.7) / 30.
(This exercise does not
require you to have read anything in Sections 4.1–4.7.)
Read Section 4.2 up through the bottom of p. 161. Some
corrections and comments:
- On p. 157, between the next-to-last line and the last line,
insert the words "which we may rewrite as".
(The book's " ... we obtain [equation 1], [equation 2]" is a
run-on sentence, the last part of which (equation 2) is a
non-sequitur, since there are no words saying how this equation is
related to what came before.
Writing [equation] [equation] ... [equation], on successive lines,
with no words or logical connectors in between, is a very common
bad habit among students, and is tolerable from students at
the level of MAP2302; they haven't had much opportunity to learn
any better yet. However, tolerating a bad habit until students can
be trained out of it is one thing; reinforcing that bad habit
[as an author or teacher]
is another.
In older math textbooks, you would rarely if ever see this
writing mistake; in our edition of NSS, it's all over the
place.)
- On p. 158, the authors say that equation (3) is called the
auxiliary equation and say, parenthetically, that it is also known
as the characteristic equation.
This is true, but a more accurate depiction of reality would
be to say that equation (3) is called the
characteristic equation and to say, parenthetically, that it
is also known as the auxiliary equation.
"Characteristic equation" is more common, and that's the term
I'll be using.
- The second paragraph on p. 160 should say: "The proof of the
uniqueness statement in Theorem 1 is beyond the scope of a first
course in differential equations; in this text we defer that proof
to chapter 13.\(^\dagger\) However, in the present section and the
next, we will explicitly construct solutions to (10) for all
constants \(a\, (\neq 0),\ b,\) and \(c,\) and all
initial values \(Y_0, Y_1\), thereby proving directly
the existence of at least one solution to (10). For purposes of
an introductory course, we will simply take it on faith that the
uniqueness statement in Theorem 1 is true as well."
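For a quick example of the pattern covered in this reading (my own equation, not one
of the book's examples or exercises): for \(y''-3y'+2y=0\), the characteristic
equation is
$$r^2-3r+2=(r-1)(r-2)=0,$$
with roots \(r=1\) and \(r=2\), so the general solution is \(y(t)=c_1e^{t}+c_2e^{2t}\).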
|
| M 10/6/25 |
4.7 (yes, 4.7 again) / 1–8
  These exercises do not require anything from
Section 4.7 that we
won't have covered in class
before the due-date of this assignment. "Theorem 5" (p. 192),
referred to in the instructions for exercises 1–8, is simply the
2nd-order case of the "Fundamental Theorem of Linear ODEs" that
I'll have stated in class.
4.2/ 1, 3, 4, 7, 8, 10, 12, 13–16, 18,
27–32,
46ab.
Relatively few of Section 4.2's exercises are
doable until the whole section has been covered. Above, I've
selected
ones that are doable based on the reading
due Friday 10/3.
In #46, the instructions should say that the
hyperbolic cosine and hyperbolic sine functions can be
defined as the solutions of the indicated IVPs, not that
they are defined this way. The customary definitions are
more direct: \(\cosh t=(e^t+e^{-t})/2\) (this is what you're
expected to use in 35(d))
and \( \sinh t= (e^t-e^{-t})/2\). Part of what you're doing in
46(a) is showing that the definitions in problem 46 are equivalent
to the customary ones. One reason that these functions have
"cosine" and "sine" as part of their names is that the ordinary
cosine and sine functions are the solutions of the DE \(y''+y=0\)
(note the plus sign) with the same initial conditions at \(t=0\)
that are satisfied by \(\cosh\) and \(\sinh\) respectively. Note
what an enormous difference the sign-change makes for the
solutions of \(y''-y=0\) compared to the solutions of \(y''+y=0\).
For the latter, all the nontrivial solutions (i.e. those that are
not identically zero) are periodic and oscillatory; for the
former, none of them are periodic or oscillatory, and all of them
grow without bound either as \(t\to\infty\), as \(t\to -\infty\),
or in both directions.
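A quick computational check of this contrast (you can verify each claim just by
differentiating): \(\frac{d^2}{dt^2}\cosh t=\cosh t\) and
\(\frac{d^2}{dt^2}\sinh t=\sinh t\), so both solve \(y''-y=0\), and \(|\cosh t|\) and
\(|\sinh t|\) both grow without bound as \(t\to\pm\infty\); by contrast,
\(\frac{d^2}{dt^2}\cos t=-\cos t\) and \(\frac{d^2}{dt^2}\sin t=-\sin t\), so both
solve \(y''+y=0\), and both are bounded and \(2\pi\)-periodic.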
  Note: "\(\cosh\)" is
pronounced the way it's spelled; "\(\sinh\)" is pronounced "cinch".
|
| W 10/8/25 |
First midterm exam
(assignment is to study for it).
Time: 6:30 pm. We will also
have class at the usual time this day.
Location: LIT 305
On Canvas, under Files, I've posted my
Spring 2025 first midterm (problems only). I've also posted there a
sample cover-page for the exam-booklet.
Familiarize yourself with the instructions on this page;
your instructions will be similar or identical.
Reminder: As
the syllabus says, "[U]nless I say
otherwise, you are responsible for knowing any material I
cover in class, any subject covered in homework, and all the
material in the textbook chapters we are studying." I have
not "said otherwise." The homework has included readings from
my notes (not optional!)
as well as doing book and
non-book exercises. The textbook chapters/sections we'll have
covered before the exam are 1.1, 1.2, 2.2, 2.3, 2.4,
and possibly parts of sections 4.1 and 4.2.
In case you'd like additional
exercises to practice
with:
If
you've done all your homework, you should be able to do all the review
problems on p. 79 except #s 8,
9, 11, 12, 15, 18, 19,
20, 22, 25, 27, 28, 29, 32, 35, 37, and the last part
of 41. A good feature of the book's "review problems" is that, unlike
the exercises after each section, the location gives you no clue as to
what method(s) is/are likely to work. You will have no such clues on
exams either. Even if you don't have time to work through the
problems on p. 79, they're good practice for figuring out what the
appropriate methods are.
A negative feature of the book's exercises
(including the review problems) is that they
don't give you enough practice with a few important integration
skills. This is why I created
(and assigned) several of my non-book problems.
|
| F 10/10/25 |
No new homework.
|
| M 10/13/25 |
4.7 (yes, 4.7)/ 25, 26.
Note: To compute \(\frac{d}{dt} |t^3|\) at \(t=0\),
use the definition of derivative (\(f'(t_0)=\lim_{t\to t_0}
\frac{f(t)-f(t_0)}{t-t_0}\)).
4.2/ 35, 36. (Don't look up and use \(3\times 3\)
Wronskians. They're not covered in Section 4.2, and aren't
needed for these problems; they'd actually distract you
from what you're supposed to be seeing.)
4.2/ Skim the remainder of Section 4.2.
4.2/ 2, 5, 9, 11, 17, 19, 20, 26.
When combined with the exercises assigned earlier (in a previous
assignment and earlier in the current one),
the list of exercises assigned from this
section is now:
4.2/ 1–20, 26–32, 35, 36, 46ab.
|
| W 10/15/25 |
In Section 4.3, read the box "Complex Conjugate Roots"
(p. 168) and Example 2. This should be enough for you to be able
to do the exercises from Section 4.3 in this assignment (using
also the Section 4.2 reading and what we've done in class so
far). Enabling you to start on these exercises is the only reason
I'm assigning this Section 4.3 reading now; see bullet-point after
these exercises.
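For instance (an equation of my own; I haven't checked whether it coincides with one
of the book's exercises): for \(y''+4y'+13y=0\), the characteristic equation
\(r^2+4r+13=0\) has roots
$$r=\frac{-4\pm\sqrt{16-52}}{2}=-2\pm 3i,$$
so, per the box on p. 168, the general solution is
\(y(t)=e^{-2t}\big(c_1\cos 3t+c_2\sin 3t\big).\)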
4.3/ 1–18, 21–26. These
exercises are numerous, but you should find 1–18 very short.
However, if you can't finish them all by Wednesday, that's okay; add
the unfinished ones to the next assignment.
Reading Section 4.3 is optional. As with most sections of the
book, there are many correct statements, but they're intertwined
with many incorrect (or incomplete) statements and/or
explanations. In class, I'll go over the complex-exponential
material done correctly.
If you do read Section 4.3:
-
See my
my Spring 2024 homework page,
assignment due 3/4/24, for several comments and corrections.
- The book's solution of Example 4 starts with
"Equation (14) is a minor alteration of equation (12) in Example
3." This is true in the
same sense that the word "spit" is a minor alteration of the word
"suit". Changing one letter can radically alter the meaning of a
word. Any of the numerous words obtainable from "suit" by changing
the second letter has its own meaning, all very different from the
others.
It's true that the only difference between
the DEs in Examples 3 and 4 is the sign of the \(y'\)
coefficient, and that the only difference between equation (15) (the
general solution in Example 4) and equation (13) (the general
solution in Example 3) is that equation (15) has an \(e^{t/6}\)
where equation (13) has an \(e^{-t/6}\). But for modeling a
physical system, these differences are enormous; the
solutions are drastically different. Example 4 models a
system that does not exist, naturally, in our universe.
(More precisely: there could be a
real-life physical system that (for example) could be
modeled approximately by equation (14) for a short enough
period of time. But the physical conditions that were used as
assumptions to model the system this way would break down after a
while, after which the system could no longer be modeled by the same
DE.) In this system, the amplitude of the
oscillations grows exponentially, without bound. This is
displayed in Figure 4.7 (except for the "without bound" part).
Example 3, by contrast,
models a realistic mass/spring system, one that could
actually exist in our universe. All the solutions exhibit
damped oscillation. Every solution \(y\) in Example 3 has the
property that \(\lim_{t\to\infty} y(t)=0\); the oscillations die
out. For a picture of this—which the book should have
provided either in place of the less-important Figure 4.7 or
alongside it—draw a companion diagram that corresponds to
replacing Figure 4.7's \(e^{t/6}\) with \(e^{-t/6}\). If you
take away the dotted lines, your companion diagram should look
something like Figure 4.3(a) on p. 154, modulo how many wiggles you
draw.
When working with any linear,
constant-coefficient DE, it is crucial that you make NO
mistake in identifying the characteristic polynomial and its
roots. The most common result of misidentifying the characteristic
roots is to completely change the nature of the solutions.
|
| M 10/20/25 |
4.3/ 28, 32, 33 (students in
electrical engineering may do #34 instead of #33).
Before
doing problems 32 and 33/34, see Examples 3 and 4 in Section 4.3.
Read Section 4.4 up through Example 3.
Read Section 4.5 up through Example 2.
We will be covering Sections 4.4 and 4.5 simultaneously, more or
less, rather than one after the other. What most mathematicians
(including me) call "the Method of Undetermined Coefficients" is what
the book calls "the Method of Undetermined Coefficients plus
superposition." You should think of Section 4.5
as completing the (second-order case of) the Method of
Undetermined Coefficients, whose presentation is begun in Section 4.4.
2.4/ 32, 33.
(These are the exercises on first-order DEs that I should
have assigned a week before the first exam.) See comments
below.
2.4/ enhanced generalized version of 33b: Show that, for every
positive integer \(n\), the set of orthogonal trajectories to
the family of curves \(\{y=kx^n\}\) is a family of ellipses,
all centered at the origin and having their
axes along the \(x\)- and \(y\)-axes, and all having
the same value for the ratio
\(\frac{\mbox{length of semi-major axis}}
{\mbox{length of semi-minor axis}}\)
(with the ratio being determined by
\(n\)). This ratio tells us the "shape" of
an ellipse. Thus, for fixed \(n\), all ellipses in this family
all have the same shape; they simply have
different sizes. As \(n\to\infty\), what happens
to the shapes of these ellipses?
(Green text in
paragraph above was added after the due-date. It's not needed
for the problem; it's just supplementary information.)
Comments concerning orthogonal trajectories to a family of
curves \(\{F(x,y)=k\}\):
- The \(F\)'s of interest are continuously differentiable on
some open region \(R\) in \(\bfr^2\), often the whole
\(xy\)-plane. If \(k\) does not lie in the range of \(F\) on
\(R\), then the graph of \(F(x,y)=k\) in \(R\) is the
empty set, hence contains no curves. Thus, unless the range of
\(F\) on \(R\) is the whole real line, the parameter \(k\) in
"\(\{F(x,y)=k\}\)" is a
"semi-arbitrary" constant.
For simplicity, below we omit explicit references to
\(R\), except when unavoidable.
- Recall that a critical point of \(F\) is a point
\((x_0,y_0)\) at which \(\partial F/\partial x\) and \(\partial
F/\partial y\) are both 0. If \((x_0,y_0)\) is a critical point
of \(F\), we call the number \(F(x_0,y_0)\) a
critical value of \(F\).
- In the intro to problem 32, the equation just
before part (a) assumes that there are no points at which
\(\partial F/\partial y =0.\) The first sentence of part (a)
tacitly assumes
that there are also no points at which
\(\partial F/\partial x =0.\) Often, these assumptions are not
satisfied, but they are also not necessary; the set-up is just worded
imprecisely. Because the objects of
interest when considering orthogonal trajectories are smooth
curves in \(\bfr^2\), not functions of a preferred independent
variable, the DEs that are most naturally suited to this topic are
differential-form DEs, not derivative-form DEs. While
the book's equation before part (a) provides good motivation,
the orthogonality condition can be stated without any reference to
\(\frac{dy}{dx}\), or any choice of independent variable.
Specifically, given \(F\), the two differential-form DEs relevant to
this topic are $$
\frac{\partial F}{\partial x}(x,y)\, dx + \frac{\partial F}{\partial
y}(x,y)\, dy=0\ \ \ \ \ \ \ \ \ (1)$$ and $$ \frac{\partial
F}{\partial y}(x,y)\, dx - \frac{\partial F}{\partial x}(x,y)\,
dy=0\ \ \ \ \ \ \ \ \ (2)$$
(or, instead of (2), the equivalent equation
\( -\frac{\partial F}{\partial y}(x,y)\, dx +
\frac{\partial F}{\partial x}(x,y)\, dy=0)\).
Suppose the point \((x_0,y_0)\) is not a critical point of
\(F\). Then the vector
\(a\,\vi +b\,\vj =\frac{\partial F}{\partial x}(x_0,y_0)\ \vi
+
\frac{\partial F}{\partial y}(x_0,y_0)\ \vj\) and the vector
\(b\ \vi-a\ \vj=
\frac{\partial F}{\partial y}(x_0,y_0)\ \vi -
\frac{\partial F}{\partial x}(x_0,y_0)\ \vj\) are both
nonzero, and are mutually perpendicular since their
dot-product is zero.
(The fact that, for arbitrary \((a,b)\neq
(0,0)\), the nonzero vectors \(a\,\vi+b\,\vj\) and
\(b\,\vi-a\,\vj\) are mutually perpendicular, is the source of the
rule that "perpendicular lines have negative-reciprocal slopes."
When \(a\)
and \(b\) both happen to be nonzero, the "slopes" of the vectors
\(a\,\vi+b\,\vj\) and \(b\,\vi-a\,\vj\), i.e. \(\frac{b}{a}\) and
\(\frac{-a}{b}\), are negative reciprocals of each
other.)
Recall from Section 3.3.3 of my
notes that the condition for a regular
(i.e. continuously
differentiable and non-stop)
curve-parametrization \(\g\) to satisfy
\(M(x,y)\,dx +N(x,y)\, dy =0\) at the point \((x_0,y_0)\) can
be written as \( \left( M(x_0,y_0)\,\vi + N(x_0,y_0)\,\vj\right)
\cdot \vv =0\), where \(\vv\) is the velocity vector of \(\g\) at
\((x_0,y_0)\)
(i.e. \(\vv=\g'(t_0)\), where \(t_0\) is
such that \(\g(t_0)=(x_0,y_0)\)). Hence, with \(a, b,
\mbox{and} \ (x_0,y_0)\) as in the preceding paragraph: if
\(\calc_1\) and \(\calc_2\) are solution curves of equations (1) and
(2) respectively, both passing through
\((x_0, y_0)\), and \(\g_1\) and \(\g_2\) are regular
parametrizations of \(\calc_1\) and \(\calc_2\) respectively, and
\(\vv_1\) and \(\vv_2\) are the respective velocity vectors of
\(\g_1\) and \(\g_2\) at \((x_0, y_0)\), then
$$\vv_1 \perp (a\,\vi+b\,\vj) \ \ \ \mbox{and}\ \ \
\vv_2 \perp (b\,\vi-a\,\vj),$$
which implies \(\vv_1\perp\vv_2\) since the nonzero vectors
\(a\,\vi+b\,\vj\) and
\(b\,\vi-a\,\vj\) are perpendicular to each other.
Since \((x_0,y_0)\) was an arbitrary non-critical point of \(F\),
it follows that wherever a solution curve of DE (2) intersects a
solution curve of DE (1), the curves intersect orthogonally, provided
that the point of intersection is not a critical point of \(F\). (A
concrete illustration is given after these comments.)
- At a critical point \((x_0,y_0)\) of \(F\), equations (1) and
(2) put no restrictions on what velocity vectors a
parametrized curve may have when it passes through
\((x_0,y_0)\). Letting \(k=F(x_0,y_0)\) (the
corresponding critical value of \(F\)), the graph of
\(F(x,y)=k\) may not be a smooth curve. This graph may
contain no smooth curves passing through \((x_0,y_0)\), or a
smooth curve passing through \((x_0,y_0)\) that's unique in a
small enough "window" containing this point, or several (even
infinitely many) non-overlapping smooth curves in any "window"
containing \((x_0, y_0)\). Even in the best possible case—in
which the graph of \(F(x,y)=k\) is a
single, smooth curve—equation (2) at \((x_0,y_0)\) still
puts no restriction on velocity vectors of parametrized solutions
at this point. Hence, at critical points of \(F\), solution curves
of equations (1) and (2) need not intersect orthogonally, in which
case "orthogonal trajectories" becomes a bit of a misnomer. There
are a few work-arounds for this annoyance:
- Alternative 1: For simplicity's sake, just agree to call
solution curves of equation (2) orthogonal trajectories to the solution
curves of (1), despite the possible exceptions to orthogonality at
critical points of \(F\).
- Alternative 2: Instead of considering the whole
region \(R\) on which \(F\) is continuously differentiable,
remove all the critical points of \(F\) from \(R\) (typically
there are only finitely many), and confine our attention to the
(slightly smaller) resulting sub-region.
- Alternative 3: In the family of curves
\(F(x,y)=k\), take the set of allowed values of \(k\) to be
the set of non-critical values of \(F\) on \(R\).
This is a more "extreme" version of Alternative 2: by removing all
critical values of \(F\) from consideration, we
are automatically removing all critical points of
\(F\) in \(R\), but we may be removing some non-critical
points as well. (There may be a critical point \((x_0,y_0)\) and
a non-critical point \((x_1,y_1)\) for which
\(F(x_1,y_1)=F(x_0,y_0)\).)
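A concrete illustration of the comments above (my own example, not from the book):
take \(F(x,y)=xy\), whose only critical point is the origin. Equation (1) is
\(y\,dx+x\,dy=0\), whose solution curves lie in the level sets \(xy=k\); equation (2)
is \(x\,dx-y\,dy=0\), whose solution curves lie in the sets \(x^2-y^2=c\). Away from
the origin, wherever a curve of one family meets a curve of the other, they intersect
orthogonally. At the critical point itself, orthogonality fails: the level set
\(xy=0\) is the union of the two coordinate axes, the set \(x^2-y^2=0\) is the union
of the lines \(y=\pm x\), and these cross at the origin at \(45^\circ\) angles, not
right angles.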
|
| W 10/22/25 |
Finish reading Sections 4.4 and 4.5.
4.4/ 9, 10, 11, 14,
15, 18, 19, 21–23, 28,
29, 32
Add parts (b) and (c) to 4.4/ 9–11, 14, 18 as follows:
- (b) Find the general solution of the DE in each problem.
- (c) Find the solution of the initial-value problem for the DE in each
problem, with the following initial conditions:
- In 9, 10, and 14: \(y(0)=0=y'(0)\).
- In 11 and 18: \(y(0)=1, y'(0)=2\).
Note: Anywhere that the book says
"form of a particular solution," such as in exercises
4.4/ 27–32, it should be "MUC form of a
particular solution." The terms "a solution" (as defined
in the first lecture or two of this course), "one
solution", and "particular solution",
are synonymous. Each of these terms stands in contrast
to general solution, which means the set of all
solutions (of a given DE). Said another way, the general
solution is the set of all particular solutions (for a given
DE). Every solution of an initial-value problem for a DE is also
a particular solution of that DE.
The Method of Undetermined Coefficients, when applicable,
simply produces a particular solution
of a very specific form, "MUC form". (There is
an underlying theorem that guarantees that when the MUC
is applicable, there is a unique solution of that form.
Time permitting, later in the course I'll show you why the
theorem is true.)
|
| F 10/24/25 |
No new homework. (A new non-book problem I wanted to assign is
still under construction. When I'm done writing it, I may add it to
the next assignment or to one soon after that.)
|
| M 10/27/25 |
Read through the set-up for
non-book problem 13, and do
parts (d)–(g). This problem 13 is new (the old #13 is now #14,
etc.), so make sure you've refreshed or re-downloaded the
non-book-problems page.
4.4/ 1–8, 12, 16, 17, 20, 24, 30, 31
Note that the MUC is not needed to do exercises 1–8,
since (modulo having to use superposition in some cases) the
\(y_p\)'s are handed to you on a silver platter. All that's needed
is the "general solution is \(y_p+y_h\)" principle derived in class
(or soon to be derived)
for any linear DE, plus superposition (problem 4.7/ 30,
previously assigned) in certain problems, plus your knowledge (from
Sections 4.2 and 4.3) of \(y_h\)
for all the DEs in these problems.
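A tiny illustration of that principle (my own equation, not one of the exercises):
for \(y''-y=3\), the constant function \(y_p\equiv -3\) is one particular solution
(check: \(y_p''-y_p=0-(-3)=3\)), and \(y_h=c_1e^{t}+c_2e^{-t}\) is the general
solution of the associated homogeneous equation \(y''-y=0\); so the general solution
of \(y''-y=3\) is \(y=-3+c_1e^{t}+c_2e^{-t}\).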
Problem 12 can also be done by Chapter 2
methods. The purpose of this exercise in Chapter 4 is to see that
it also can be done using the Method of Undetermined Coefficients,
so make sure you do it the latter way.
4.5/ 1–8, 24–26, 28. (More in next assignment.)
Why so many exercises?
The "secret" to learning math skills
in a way that you won't forget them
is repetition. Repetition builds retention.
Virtually nothing else does (at least not for basic skills).
Some notes:
- In class I used (or will soon have used) the
term multiplicity of a root of the characteristic polynomial.
This is the integer \(s\) in the box on
p. 178. (The book eventually uses the term
"multiplicity", but not till Chapter 6; see the box on p. 337. On
p. 337, the linear constant-coefficient operators are allowed to have
any order, so multiplicities greater than 2 can occur—but not in
Chapter 4, where we are now.)
In that box on p. 178, replace the \(r\) by
the letter \(\alpha\), so that the right-hand side of the first
equation in the box is written as \(Ct^m e^{\alpha t}\). In order to
restate cleanly what I said (or will be saying soon) in class about
multiplicity, it is imperative not to use the identical letter \(r\)
in "\(t^me^{rt}\)" as in the characteristic polynomial
\(p_L(r)=ar^2+br+c\) and the characteristic equation
\(ar^2+br+c=0\).
Note that if \(p_L\) has a non-real root
\(r_1=\a+i\b\), then it has such a root with \(\b>0\). The relevant
multiplicity is the number of times \(r-(\a+i\b)\) appears in a
factorization of \(p_L(r)=ar^2+br+c\) into degree-one factors. For a
quadratic polynomial, this can only be 0 or 1, since if \(r-(\a+i\b)\)
appears, then so does \(r-(\a-i\b)\); the factorization of \(p_L(r)\)
is \(a\big(r-(\a+i\b)\big)\big(r-(\a-i\b)\big)\). We can define
\(s\) as the multiplicity of \(\a+i\b\) OR the
multiplicity of \(\a-i\b\), but not both at the same time.
(I.e. we count the multiplicity of only one of these conjugate
roots.) These two multiplicities are always equal (even for
higher-degree polynomials with real coefficients), so for simplicity's
sake, in the conjugate pair of roots \(\a\pm i\b\), we may confine
ourselves to considering only the "\(\a+i\b\)" for which \(\b>0\).
(A small worked illustration of how the multiplicity \(s\) enters
the MUC form is given after these notes.)
- It's important to remember that the MUC works only for
constant-coefficient linear differential operators \(L\)
(and even then, only for certain functions \(g\) in
"\(L[y]=g\)"). That can be easy to forget when doing Chapter 4
exercises, since virtually all the DEs in these exercises are
constant-coefficient. (Remember that a linear DE
\(L[y]=g\) is called a constant-coefficient equation
if \(L\) is a constant-coefficient operator; the function \(g\) is
irrelevant to the constant/non-constant-coefficient
classification.)
-
In class, for the sake of simplicity and
time-savings, for second-order equations
I've consistently been using the letter \(t\) for the
independent variable and the letter \(y\) for the dependent
variable in linear DE's. The book generally does this in Chapter 4
discussion as well, but not always in
the exercises—as I'm sure you've noticed. For each DE
in the book's exercises, you can still easily tell which variable is
which: the variable being differentiated (usually indicated with
"prime" notation) is the dependent variable.
While you're learning methods, it's
perfectly fine as an intermediate step to replace
variable-names with the letters you're most used to, as long as,
when writing your final answer, you remember to switch your
variable-names back to what they were in the problem you were
given. On exams, some past students have simply written a note
telling me how to interpret their new variable-names. No.
[Not if you want 100% credit for an otherwise correct answer,
that is. That translation is your job, not mine. Writing your
answer in terms of the given variables accounts for part of the
point-value and time I've budgeted for.]
Do these non-book exercises on the
Method of Undetermined Coefficients. The answers to these
exercises are here. (These links
are also on the Miscellaneous Handouts page.)
|
| W 10/29/25 |
Do
non-book problem 13, parts
(a)–(c).
On the Miscellaneous Handouts page, under the "Method of
Undetermined Coefficients" bullet-point, there are several
handouts (the last two of which were linked to the previous assignment).
To get a more complete picture of some things I said in class on
Monday,
view the "granddaddy" file and read the accompanying "Read Me"
file, which is essentially a long caption for the diagram in
the "granddaddy file".
4.7/ 29, 31, 34a. (These could have been assigned a few
lectures ago.) In #29, assume that the functions \(p\) and
\(q\) are continuous on \( (a,b)\). In
#34, assume that the interval of interest is the whole real
line.
  For the above Section 4.7 exercises, you don't have to
have read Section 4.7; we've covered everything necessary in class.
4.5/ 9–12, 14–23, 27, 29, 31, 32, 34–36.
In #23,
the same comment as for 4.4/12 applies.
Problem 42b (if done correctly) shows
that the particular solution of the DE in part (a) produced by the
Method of Undetermined Coefficients actually has physical
significance.
4.5 (continued)/ 37–40.
In these, note that you are
not being asked for the general solution (for which you'd need
to be able to solve a third- or fourth-order homogeneous linear
DE, which we haven't yet discussed explicitly—although you would
likely be able to guess correctly how to do it for
the DEs in exercises 37–40). Some tips for 38 and 40 are
given below.
In a
constant-coefficient differential equation \(L[y]=g\), the functions
\(g\) to which the MUC applies are the same regardless of the order
of the DE, and, for a given \(g\), the MUC form of a particular
solution is also the same regardless of the order of the DE.
(We will see why at another time.) The
degree of the characteristic polynomial is the same as the order of
the DE (since we can get the characteristic polynomial by just replacing each
derivative appearing in \(L[y]\) by the corresponding power of
\(r\), remembering that the "zeroeth" derivative—\(y\)
itself—corresponds to \(r^0\) [i.e. to 1, not to \(r\)].)
However, a polynomial of degree greater than 2 can have roots of
multiplicity greater than 2. The possibilities for the exponent
"\(s\)" in the general MUC formula (for functions of "MUC type" with
a single associated "\(\alpha + i\beta\)") range from 0 up to the
largest multiplicity in the factorization of \(p_L(r)\).
Thus the only real difficulty in applying the
MUC when \(L\) has order greater than 2 is that you may have to
factor a polynomial of degree at least 3, in order to correctly
identify root-multiplicities. Explicit factorizations are possible
only for some such
polynomials. (However, depending on the
function \(g\), you may not have to factor \(p_L(r)\) at all. For an
"MUC type" function \(g\) whose corresponding complex number is
\(\alpha +i \beta\), if \(p_L(\alpha +i \beta)\neq 0\), then
\(\alpha +i \beta\) is not a characteristic root, so the
corresponding "\(s\)" is zero.) Every cubic or
higher-degree characteristic polynomial arising in this textbook is
one of these special, explicitly factorable polynomials (and even
among these special types of polynomials, the ones arising in the
book are very simple):
- In all the problems in this textbook in which
you have to solve a constant-coefficient, linear DE of order
greater than two, the corresponding characteristic polynomial
has at least one root that is an integer of small absolute
value (usually 0 or 1). For any cubic polynomial \(p(r)\),
if you are able to guess even one root, you can factor the whole
polynomial. (If the root you know is \(r_1\), divide \(p(r)\) by
\(r-r_1\), yielding a quadratic polynomial \(q(r)\). Then
\(p(r)=(r-r_1)q(r)\), so to complete the factorization of
\(p(r)\) you just need to factor \(q(r)\). You already know how
to factor any quadratic polynomial, whether or not it has
easy-to-guess roots, using the quadratic formula.)
From the book's examples and exercises, you
might get the impression that plugging-in integers, or perhaps
just plugging-in \(0\), \(1\), and \(-1\), is the only tool for
trying to guess a root of a polynomial of degree greater than 2.
If you were a math-team person in high school, you should know
that this is not the case. If you know the
Rational Root Theorem, then for all the cubic characteristic
polynomials arising in this textbook, you'll be able to guess an
integer root quickly. If you do not know the Rational
Root Theorem, you will still be able to guess an integer
root quickly, but perhaps slightly less quickly.
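For instance (an illustrative cubic of my own, not one from the book): for
\(p(r)=r^3-r^2-4r+4\), guessing \(r_1=1\) works since \(p(1)=0\); dividing by
\(r-1\) gives \(q(r)=r^2-4\), so \(p(r)=(r-1)(r-2)(r+2)\).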
- For problem 38, note that if all terms in a polynomial
\(p(r)\) have even degree, then effectively \(p(r)\) can be treated
as a polynomial in the quantity
\(r^2\). (This enormously simplifying
observation is worth remembering!! It comes up in many
contexts, e.g. partial fractions, and you shouldn't need to be
prompted more than once in your life—not just once in
each context— to notice it. If you've never noticed this
simplification, this is that one time! Overlooking this
simplification often leads students to do a lot of extra work, or to
be unable to do problems that they ought to be able to
do.) Hence, a polynomial of the form \(r^4+cr^2+d\) can be
factored into the form \((r^2-a)(r^2-b)\), where \(a\) and \(b\)
either are both real or are complex-conjugates of each other. You
can then factor \(r^2-a\) and \(r^2-b\) to get a complete
factorization of \(p(r)\). (If \(a\) and \(b\) are not real, you may
not have learned yet how to compute their square roots, but in
problem 38 you'll find that \(a\) and \(b\) are real.)
You can also do problem 38 by extending the
method mentioned above for cubic polynomials. Start by guessing one
root \(r_1\) of the fourth-degree characteristic polynomial \(p(r)\).
(Again, the authors apparently want you to think that the way to find
roots of higher-degree polynomials is to plug in integers, starting
with those of smallest absolute value, until you find one that works.
In real life, this rarely works—but it does work in all the
higher-degree polynomials that you need to factor in this
book; they're misleadingly fine-tuned.)
Then
\(p(r)=(r-r_1)q_3(r)\), where \(q_3(r)\) is a cubic polynomial that you
can compute by dividing \(p(r)\) by \(r-r_1\). Because of the
authors' choices, this \(q_3(r)\) has a root \(r_2\) that you should be
able to guess easily. Then divide \(q_3(r)\) by \(r-r_2\) to get a
quadratic polynomial \(q_2(r)\)—and, as mentioned above, you
already know how to factor any quadratic polynomial.
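As an illustration of the "polynomial in \(r^2\)" idea (again my own example, not the
polynomial in problem 38): \(r^4-5r^2+4=(r^2-1)(r^2-4)=(r-1)(r+1)(r-2)(r+2)\).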
- For
problem 40, you should be able to recognize that \(p_L(r)\) is \(r\)
times a cubic polynomial, and then factor the cubic polynomial by
the guess-method mentioned above (or, better still, recognize that
this cubic polynomial is actually a perfect cube).
|
| F 10/31/25 |
Do
non-book problem 13, parts
(h)–(k).
4.5 / 41, 42, 45.
Exercise 45 is a nice (but
long) problem that requires you to combine several
things you've learned. The strategy is similar to the
approach outlined in Exercise 41. Because of the
"piecewise-expressed" nature of the right-hand side of
the DE, there is a sub-problem on each of three
intervals: \(I_{\rm left}= (-\infty, -\frac{L}{2V}\,]
\), \(I_{\rm mid} = [-\frac{L}{2V}, \frac{L}{2V}] \),
\(I_{\rm right}= [\frac{L}{2V}, \infty) \). The solution
\(y(t)\) defined on the whole real line restricts to
solutions \(y_{\rm left}, y_{\rm mid}, y_{\rm right}\)
on these intervals.
You are given that \(y_{\rm left}\)
is identically zero. Use the
terminal values \(y_{\rm left}(- \frac{L}{2V}), {y_{\rm
left}}'(- \frac{L}{2V})\), as the initial values \(y_{\rm
mid}(- \frac{L}{2V}), {y_{\rm mid}}'(- \frac{L}{2V})\). You then have
an IVP to solve on \(I_{\rm mid}\). For this, first find a
"particular" solution on this interval using the MUC. Then, use this
to obtain the general solution of the DE on this interval; this will
involve constants \( c_1, c_2\). Using the IC's at \(t=-
\frac{L}{2V}\), you obtain specific values for \(c_1\) and \(c_2\),
and plugging these back into the general solution gives you the
solution \(y_{\rm mid}\) of the relevant IVP on \(I_{\rm mid}\).
Now compute the terminal values
\(y_{\rm mid}(\frac{L}{2V}), {y_{\rm
mid}}'(\frac{L}{2V})\), and use them as the initial
values
\(y_{\rm right}(\frac{L}{2V}), {y_{\rm
right}}'(\frac{L}{2V})\). You then have a new IVP to
solve on \(I_{\rm right}\). The solution,
\(y_{\rm right}\), is what you're looking for in part (a) of the
problem.
If you do everything correctly (which may
involve some trig identities, depending on how you do certain steps),
under the book's simplifying assumptions \(m=k=F_0=1\) and \(L=\pi\),
you will end up with just what the book says: \(y_{\rm right}(t) =
A\sin t\), where \(A=A(V)\) is a \(V\)-dependent constant
(i.e. constant as far as \(t\) is concerned, but a function of the
car's speed \(V\)). If you get the formula right, you'll see that
\(A(V)=0\) for \(V=\frac{1}{3}, \frac{1}{5}, \frac{1}{7}, \dots\) (the
reciprocal of any odd integer \(m\geq 3\)), but not for any \(V\geq
1\).
In part (b) of the problem you are interested
in the function \(|A(V)|\), which you may use a graphing calculator or
computer to plot. The graph is very interesting. Keeping in mind
that both the car and the speed-bump in this problem are
pretend-models, not something whose predictions you should check:
the graph shows that if you drive over the bump slowly enough, the car
will not shake too much. If we imagine that the speed-bump's maximum
speed warning is 15 mph, and that the shaking at this speed is
about 1/3 of the maximum this bump can deliver to this car (the
"most violent shaking of the vehicle"), then the amplitude increases
rapidly with speed up till about 30 mph. At higher speeds, the
amplitude actually decreases with speed, but very
slowly. If you could test this pretend-model on a
race-track, where you could drive like a bat out of hell
(DON'T TRY THIS!!!), then at fast enough speeds the car would
barely be affected by the bump. However (with
this crude model of a speed-bump, and with the parameters
\(m,k,L\), and \(F_0\) given unrealistic values in order to simplify
computations), you'd have to drive faster than 150 mph or so
for the shaking to be as minimal as it was at 15 mph.
Note: When using MUC to find a
particular solution on \(I_{\rm mid}\), you have to handle the cases
\(V\neq 1\) and \(V = 1\) separately. (If we were not making the
simplifying assumptions \(m = k = 1\) and \(L=\pi\), these two cases
would be \(\frac{\pi V}{L}\neq \sqrt{\frac{k}{m}}\) and \(\frac{\pi
V}{L}= \sqrt{\frac{k}{m}}\), respectively.) Using \(s\) for the
multiplicity of a certain number as a root of the characteristic
polynomial, \(V\neq 1\) puts you in the \(s= 0\) case, while \(V = 1\)
puts you in the \(s= 1\) case.
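(Generic illustration of the two cases, with a made-up DE rather than the one in this
problem: for \(y''+y=\cos(\omega t)\), if \(\omega\neq 1\) then \(i\omega\) is not a
characteristic root, so \(s=0\) and you'd try \(y_p=A\cos(\omega t)+B\sin(\omega t)\);
if \(\omega=1\) then \(i\) is a root of multiplicity 1, so \(s=1\) and you'd try
\(y_p=t\,(A\cos t+B\sin t)\).)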
Warning: As evening approaches, small humanoids will be
roaming the streets. Allow their approach at your own peril!
|
| M 11/3/25 |
Do the multi-part
non-book problem 14 (revised
11/2/25; view/download a fresh copy). If you can't get through all
of it before the Monday 11/3 class, finish it as part of the next
assignment.
Read or skim Section 4.7 up to, but not including, Theorem 7
(Variation of Parameters). The only part of this that we have
not already covered in class is the part that starts
with Definition 2 and ends with Example 3.
- Reminder about some terminology. As I've
said in class, "Characteristic equation" and "characteristic
polynomial" are things that exist only for constant-coefficient
DEs. This terminology should be avoided in the setting of
Cauchy-Euler DEs (and
was avoided for these DEs in early editions of our
textbook). The term I will be using in class for
equation (7) on p. 194, "indicial equation", is what's used in
most textbooks I've seen, and really is better
terminology—you (meaning the book's authors) invite
confusion when you choose to give two different meanings to the
same terminology.
In our textbook, p. 194's equation (7) is actually introduced
twice for
Cauchy-Euler DEs, the second time as Equation (4) in Section
8.5.
For some reason, the authors
give the terminology "indicial equation" only in Section 8.5.
- Correction to book.
On p. 194, the sentence "If
\(r\) is complex ..." falsely implies that the identity
\(t=e^{\ln t}\) (for \(t>0\))
and the definition \(e^{i\th}=\cos\th +
i\sin\th\) (for \(\th\in\bfr\)), taken together, are all that's
needed for the sequence of equations displayed misleadingly as
a derivation of the formula \(t^{\a
+i\b}=t^\a\big(\cos(\b\ln t)+i\sin(\b\ln t)\big)\).
Sorry, no. The very first equation in this
"derivation", \(t^{\a+i\b} =t^\a
t^{i\b}\), assumes that the not-yet-defined
"complex exponential with real, positive base \(t\)" has this
property, just because the formula is true for real
exponents. There is no such thing as "proof by notation".
One correct version of the book's
presentation is to start by defining
\(t^{\a+i\b}\) to be \(e^{(\a+i\b)\ln t}\) for
(real) \(t>0\) and \(\a,\b \in \bfr\). (This definition is
suggested by the fact that "\(\ t^r = e^{r \ln t}\ \) " is the
correct definition of \(t^r\) for real \(t>0\) and [possibly
irrational] real \(r\). We are simply extending this definition
to complex exponents; if \(\b=0\) we recover the definition of \(t^r\)
for real exponents \(r\).) Using this definition, we then
have
$$ \begin{array}{rclll} t^{\a+i\b} &\ =\ & e^{(\a+i\b)\ln t} & =&
e^{\a\ln t +i\b\ln t} \\ &&& \ =\ & \ e^{\a\ln t} \ e^{i\b\ln t} \ \ \
\mbox{(by definition of $e^z$ for complex $z$)}. \end{array} $$
We also have \( e^{\a\ln t} = t^\a\) (by definition) and \(e^{i\b\ln
t} =t^{i\b}\) (using the definition of
\(t^{\a+i\b}\) with \(\a=0\)). Combining these yields
\(t^{\a+i\b} =t^\a t^{i\b}\). Furthermore,
\(t^{i\b}=e^{i\b\ln t} = \cos(\b\ln t)+i\sin(\b\ln t)\) by definition of
\(e^{i\th}\) for real \(\th\). Combining this with the "\(t^{\a+i\b}
=t^\a t^{i\b}\)" that we've just derived, not assumed,
we
now have
$$t^{\a+i\b}=t^\a t^{i\b} = t^\a \big(\cos(\b\ln t)+i\sin(\b\ln
t)\big).$$
Thus, via correct definitions and non-circular logic,
we arrive at the asserted equality "\(t^{\a+i\b}=t^\a t^{i\b} = t^\a
\big(\cos(\b\ln t)+i\sin(\b\ln t)\big)\)."
FYI: Although I inserted
the parenthetic "(real)" once in "for (real) \(t>0\)" above, this
insertion (with or without parentheses) is not necessary. For
non-real complex numbers, there is no such thing as a
"greater than" or "less than" relation. Thus, in a setting where
we're talking about complex numbers, we do not need to specify
explicitly that (say) \(t\) is real when we write (say) "\(t>0\)";
the fact that the "greater than" symbol appears next to \(t\) tells
us implicitly that \(t\) is assumed to be real.
|
| W 11/5/25 |
Finish non-book problem
14,
if you haven't already.
(The accidental duplicate-parts have now been
removed. I've also fixed some minor typos and made some minor
wording-changes.)
Using the definition given in the last
assignment (and in Monday's class) of \(t^z\) for general \(z\in
\bfc\) and \(t>0\), show that (for \(t>0\)) the familiar
real-exponent relation $$t^r\, t^s = t^{r+s} \ \ \ (*)$$
holds true for all complex exponents \(r,s\) as well. I
neglected to show this in Monday's class, but used the \(s=-1\)
case of the relation (*) when I wrote
"\(t^r\,\frac{1}{t}=t^{r-1}\) " in my derivation of
"\(\frac{d}{dt}t^r = rt^{r-1}\)." The positive-integer-\(s\) case
of this relation is also needed for deriving the formula
"\(L[t^r]=q_L(r)\,t^r\ \mbox{on}\ (0,\infty)\)" (for Cauchy-Euler
operators \(L\) and complex exponents \(r\)): in
the \(j^{\rm th}\)-derivative term of \(L[t^r]\), we need to know
that \(t^j\, t^{r-j} = t^r\).
4.7 (continued)/ 9–14, 19, 20
Read (or at least skim) Section 4.6, but without the (implicit)
assumption in equation (1), p. 187, that the linear DE has constant
coefficients. Replace that assumption with: the coefficients are
continuous functions on an interval \(I\), on which \(a(t)\) is
nowhere 0. The method (and the argument that it works) is no
different in this more general situation.
When I present the method in class, my starting-point is a
DE that's already in standard linear form (\(y''+py'+qy=g\));
i.e. I've already divided through by the "\(a\)" (not necessarily
constant) in equation (1). For me, that dividing-through is Step
0. So, in place of the second equation in (9), I'll have one
whose RHS is simply \(g\).
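For reference, here is that set-up explicitly (nothing new, just the standard
Variation of Parameters system written with the DE already in standard form
\(y''+py'+qy=g\) and with \(\{y_1,y_2\}\) a FSS of the associated homogeneous
equation): seeking \(y_p=v_1y_1+v_2y_2\), the unknowns \(v_1', v_2'\) satisfy
$$v_1'y_1+v_2'y_2=0, \qquad v_1'y_1'+v_2'y_2'=g.$$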
|
| F 11/7/25 |
Do non-book problem
15, before doing exercises 15–18 below.
Refresh/re-download the non-book-problems page; I revised
non-book problem 15 on 11/6/2025.
4.7 (continued)/ 15–18. Also do 23, as modified
below.
- Ignore the first sentence ("To justify ...").
- Understand why 23a is equivalent to what I did in class on
Wednesday (in the Cauchy-Euler part of the lecture).
- Do 23b, but additionally observe that the
change-of-independent-variable idea in 23b is essentially the
same as in the blue note to non-book problem 15a. (All that's
different are the specific substitutions and intervals that
are involved.)
Reminder of some things I said in class:
Problem 23b, with \(f=0\), shows that the
indicial equation for the Cauchy-Euler DE is
the same as
the characteristic equation for the
associated constant-coefficient DE obtained by the
Cauchy-Euler substitution \(t=e^x\). (That's if \(t\) is the
independent variable in the given Cauchy-Euler equation; the
substitution leads to a constant-coefficient equation with
independent variable \(x\).) This is one of the reasons for
keeping
the terminology "indicial equation" and "characteristic
equation" distinct.
In my experience it's unusual to hybridize the
terminology and call the book's Equation (7) the characteristic
equation for the Cauchy-Euler DE, but you'll need to be
aware that that's what the book does.
- Instead of 23c, which I did in class, check directly
(i.e. without using complex-valued functions) that if the indicial
equation for a second-order homogeneous Cauchy-Euler DE
\(at^2y''+bty'+cy=0\) has complex roots \(\alpha \pm i\beta,\)
with \(\beta\neq 0\), then the functions
\(y_1(t)=t^{\alpha}\cos(\beta \ln t)\) and
\(y_2(t)=t^{\alpha}\sin(\beta \ln t)\) are solutions of the DE on the
interval \( (0,\infty) \).
Optional: for a review of what I did in class with complex power
functions \(t\mapsto t^r, \ t>0\), and the facts that needed to be
checked in order to use these to analyze Cauchy-Euler DEs, go to my
Spring 2025 homework page,
assignment due 3/31/25, and look at (i) the blue note after the
"Check directly ..." bullet-point, and (ii) the "Power functions
with positive base and complex exponent" bullet-point.
  (This review is optional as
homework; you're still responsible for everything
that's in the review, since we covered all of it in class and
earlier homework.)
|
| M 11/10/25 |
4.6/ 1–12, 15, 17, 18, 19 (first sentence
only).
Remember that to apply Variation of
Parameters as presented in class, you must first put the DE in
"standard linear form", with the coefficient of the second-derivative
term being 1 (so, divide by the coefficient of this term, if the
coefficient isn't 1 to begin with). NSS's approach to remembering
this is to cast the two-equations-in-two-unknowns system as (9) on
p. 188 (with their \(\frac{f}{a}\) being my \(g\)).
This is fine, but my personal preference is to put
the DE in standard form from the start, in which case the "\(a\)" in
the book's pair-of-equations (9) disappears.
Reminder: Final answers,
to any type of problem, should always be simplified
whenever possible. (This is an instruction on all my exams!)
There isn't always a unique, objectively simplest way to
write an answer, but there are often ways that are objectively
simpler than others. Neither of \(t\ln |t|-t\) and \(t(\ln|t|-1)\)
is objectively simpler than the other, but \(4t+e^t\) is objectively
simpler than \(3t + e^t +t\).
This issue often comes up in Variation of
Parameters problems, because in "\(v_1y_1+v_2y_2\)" or
"\(v_1y_1+v_2y_2+c_1y_1+c_2y_2\)", in specific examples, there are
often expressions that can and should be combined.
For example,
\(e^{5t}(-\frac{1}{4}t^4)+te^{5t}(\frac{1}{3}t^3)\) could never be a
completely acceptable final answer, since \(\frac{1}{12} t^4 e^{5t}\)
is an objectively simpler way of writing the same expression. (So is
\(\frac{1}{12} e^{5t}t^4\); neither \(t^4 e^{5t}\) nor
\(e^{5t}t^4\) is objectively simpler than the other.) Similarly, "\(y=te^t(\ln|t|-1)+c_1e^t+c_2te^t\),"
where \(c_1\) and \(c_2\) are arbitrary constants, cannot be
a completely acceptable way of writing the general solution
of whatever DE, since this family of functions can be written
objectively more simply as
"\(y=te^t\ln|t|+c_1e^t+c_2te^t\) " (The "\(-te^t\) "
obtained when we multiply-out \(te^t(\ln|t|-1)\) can be
absorbed into the \(c_2te^t\); we just rename \(c_2-1\), which can
be any real number, to a new arbitrary constant \(c_2\).)
One good piece of advice in the book is the sentence after
the box on p. 189: "Of course, in step (b) one could use the
formulas in (10), but [in examples] \(v_1(t)\) and
\(v_2(t)\) are so easy to derive that you are advised not to
memorize them." (This advice applies even if you've put the DE
into standard linear form, so that the coefficient-function \(a\) in
equation (10) is 1.)
Incorrectly memorized formulas are worthless. If you attempt
to memorize a formula instead of learning the underlying method, and
your formula is wrong in any way (e.g. a sign is wrong), or
you misuse the correct formula in any way, don't expect
to get much partial credit on an exam problem.
4.7/ 24cd, 37–40. Some comments on these exercises:
- In #37 and #39, the presence of the expression \(\ln
t\) in the given equation means that, automatically, we're restricted
to considering only the domain-interval \( (0,\infty) \). In #40,
presence of \(t^{5/2}\) has the same effect, but the instructions
explicitly say, anyway, to restrict attention to the positive
\(t\)\interval. But in #38, there is no need to restrict attention to
\( (0,\infty) \); you should solve on the negative-\(t\) interval as
well as the positive-\(t\) interval.
- On \((0, \infty)\),
the DEs in all these exercises can be solved either by
using the Cauchy-Euler
substitution "\(t=e^x\)," or
by
first using the indicial equation
to find a FSS for the associated homogeneous DE and then
using Variation of Parameters for the non-homogeneous DE. Both methods
work. I've deliberately assigned exercises that have you solving some
of these equations by one method and some by the other, so that you
get practice with both approaches. Neither is automatically faster
or "better" than the other.
- Regarding #38: as noted
after non-book problem 15(a),
if a
function
\(y\) is a solution of a non-homogeneous
DE on \( (0, \infty) \), then the function
\(\tilde{y}\) on \( (-\infty,0) \) defined by \(\tilde{y}(t)
=y(-t)\) need not be a solution of the same non-homogeneous
DE. So in #38 you'll need to do something a little different to
get a solution to the non-homogeneous equation on \(
(-\infty,0) \).
- In #40, to apply Variation of Parameters as I
presented it in class, don't forget to put the DE into standard form
first!
But after you've done the problem
correctly, I recommend going back and seeing what happens if you
forget to divide by the coefficient of \(y''\). Go as far as seeing
what integrals you'd need to do to get \(v_1'\) and \(v_2'\). You
should see that if you were to do these (wrong) integrals, you'd be
putting in a lot of extra work (compared to doing the right
integrals), all to get the wrong answer in the end. I've made this
mistake on this specific problem several times in the
past!
Redo 4.7/40 by starting with the substitution
\(y(t)=t^{-1/2}u(t)\)
and seeing where
that takes you.
(This should
answer the question, "How did anyone ever figure out, or guess,
a FSS for the homogeneous DE in this problem?" Most, if not all,
of the homogeneous linear DEs for which anyone has ever figured
out a completely explicit FSS, are DEs that can be
"turned into" constant-coefficient DEs by some clever
substitution! Some substitutions change the independent variable
[e.g. the Cauchy-Euler substitution in 4.7/23]; some change the
dependent variable [e.g. the one I just gave you for
4.7/40].)
|
| W 11/12/25 |
Skim Section 6.1, a
lot of which is review of material we've covered already. Assigned
exercises from this section are at the end of this assignment.
I'm
not fond of the way the section is organized or the material is
presented. Among other things:
- There is too much emphasis on the Wronskian,
especially since most students in
their first DE course haven't yet learned how to compute (or define) a
determinant that isn't \(2\times 2\) or \(3\times 3\). "Fundamental set of solutions" (or "fundamental
solution set") should not be defined using the
Wronskian.
- Linear dependence/independence of functions should
be introduced sooner, definitely before the Wronskian.
For easy reference: a set of functions
\(\{f_1, f_2, \dots, f_m\}\) on an interval \(I\) is:
- linearly dependent (on \(I\))
if there are constants \(c_1, c_2, \dots, c_m\), not all
zero, such that \(c_1f_1+c_2f_2+\dots +c_mf_m =0\) (the
constant function 0 on \(I\)); equivalently, if at least one
of the functions \(f_i\) is a linear combination of the
others.
- linearly independent (on \(I\))
otherwise (i.e. if the only constants \(c_i\) for which
\(c_1f_1+c_2f_2+\dots +c_mf_m\) is identically 0 on
\(I\) are \(c_1=c_2=\dots = c_m=0\); equivalently,
if no \(f_i\) is a linear combination of the
others).
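A quick illustration: on any interval, \(\{1,\ t,\ 2+3t\}\) is linearly
dependent, since \(2\cdot 1+3\cdot t-1\cdot(2+3t)=0\); while
\(\{1,\ t,\ t^2\}\) is linearly independent, since \(c_1+c_2t+c_3t^2\) is
identically zero only if \(c_1=c_2=c_3=0\).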
Here is how the material in Section 6.1 should be organized
(I suggest using this outline to guide your
thinking about the material in this section):
- Immediately after the "As
a consequence ..." sentence near the bottom of p. 320, before
anything else is said (or the book's "Is
it true ...?" question is asked), the term fundamental set
of solutions (FSS) should be defined. Specifically,
for
a homogeneous linear DE
\(L[y]=0\) on an interval \(I\), a fundamental set
of solutions (FSS) should be defined in one of the
following equivalent ways.
(i) A finite set of
functions \( \{y_1, \dots, y_m\} \) on \(I\)
for which the general solution of \( L[y]=0\) on \(I\)
is the set of linear combinations \( \{c_1y_1+ \dots
+c_m y_m\} \), and for which \(m\) is as small as
possible among all such sets of
functions.
(ii) A finite, linearly independent set
of solutions \( \{y_1, \dots, y_m\} \) of \(
L[y]=0\) on \(I\) such that every solution of
\(L[y]=0\) on \(I\) is a linear combination of
\( \{y_1, \dots, y_m\}. \)
(iii) A finite, linearly independent set of solutions \(
\{y_1, \dots, y_m\} \) of \( L[y]=0\) on \(I\) such that
the general solution is the set of linear combinations
\( \{c_1y_1+ \dots +c_m y_m\} \).
As discussed in class several weeks ago, in
definition (i), a consequence of "\(m\) is as small as possible" is
that \( \{y_1, \dots, y_m\} \) is linearly independent. (Why?) Thus,
whichever of (i), (ii), or (iii) is used, a FSS
is automatically linearly
independent.
The concept of
"FSS" really has nothing to do with differential equations,
intrinsically; it is a concept that comes straight from linear
algebra. In linear algebra, given a homogeneous linear equation
\(L[y]=0\) (where \(L\) is a linear operator on the "space of inputs
\(y\)"), what we are calling "fundamental set of solutions" would be
called "basis of the solution space, provided that the solution space
is finite-dimensional". For a homogeneous linear equation,
"solution space" means the same thing as
"solution set"—the set of all solutions; equivalently,
the general solution—but with an added reminder that this set is
"closed under taking linear combinations", meaning that any linear
combination of solutions is a solution (of the same equation).
In the DE setting, the Wronskian is
an interesting function and a useful
tool
for proving various theorems, but, conceptually and logically,
it absolutely does not belong in
the definition of "FSS"; putting it there obscures the "basis of
the solution-space" concept.
- Questions that should then be asked are (1) whether a
linear, homogeneous DE always has a FSS, and (2) if/when
such a DE has a FSS, whether the number of functions (the
\(m\) above) is always the same as the order of the operator.
(Question 1 amounts to: do there always
exist finitely many solutions \(y_1, \dots, y_m\) of
\(L[y]=0\) on \(I\) such that every solution of \(L[y]=0\)
on \(I\) is a linear combination of \(\{y_1, \dots, y_m\}\)? If
there is any such set of solutions, then there is a smallest \(m\)
for which there is such a set.)
- As a (partial, but very important) answer to questions (1) and
(2) above, a theorem should then be stated that asserts
that, for an \(n^{\rm th}\)-order homogeneous linear DE
\(L[y]=0\) in standard form, with coefficient-functions continuous
on an interval \(I\):
(1) a
FSS of \(L[y]=0\) on \(I\) exists (in
fact, infinitely many FSS's of this DE on \(I\)
exist);
(2) any such FSS has exactly \(n\) functions; and
(3) a set of solutions \( \{y_1, \dots, y_n\} \)
of \( L[y]=0\) on \(I\) is a FSS if and only if this set of
functions is linearly independent on \(I\).
(This is what the book's
Theorems 2 and 3, combined, should have said.)
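(A quick illustration, my own rather than the book's: for \(y''+y=0\) on
\( (-\infty,\infty) \), the set \(\{\cos t,\ \sin t\}\) is a FSS, and so is
\(\{\cos t,\ \cos t+\sin t\}\); every FSS of this DE has exactly 2 functions; and the
general solution is the set of linear combinations \(c_1\cos t+c_2\sin t\).)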
-
The Wronskian should
then be introduced (and a reference for the definition and
properties of \(n\times n\) determinants for general \(n\) should be
given), and used as a tool for proving this theorem
and for checking whether a set of solutions of \(L[y]=0\) is
linearly independent. (Again:
a tool, not part of a definition of anything
important. Introducing the Wronskian any other way distracts from
concepts that are actually important.)
- Notation such as "\(y_h\)" should be introduced for the
general solution of the associated homogeneous equation. The
general solution is best treated as the set of all
solutions, not as a typical element of this
set. (The book does the opposite after
Theorem 2, as do many other books—generally, the same ones
that use indefinite-integral notation for an arbitrary
but specific antiderivative, rather than as the set
of all antiderivatives. Such a definition is defensible, but
misguided [in my opinion, of course], and should have
been retired by the 1960s if not earlier.)
- Theorem 4 should be stated and proved. But after equation (28),
before the next sentence, something like the following should be
inserted: "Then the general solution of (27) on \((a,b)\) is
\(y=y_p+y_h.\)" Then the book's next sentence (the one concluding
with
equation (29)) should be given, with "Then" replaced by "Thus".
- 6.1/ 1–6, 7–14, 19, 20, 23.
Do
7–14 without using Wronskians.
The sets of
functions in these problems are so simple that, if you know
your basic functions
(see The Math
Commandments), Wronskians will only increase the
amount of work you have to do. Furthermore, in these
problems, if you find that
the Wronskian is zero then you can't conclude anything (from
that alone) about
linear dependence/independence. If you do not know your basic
functions, then Wronskians will not be of much help.
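(For the record, a standard example of why a zero Wronskian proves nothing by
itself: on the interval \( (-1,1) \), the functions \(f_1(t)=t^2\) and
\(f_2(t)=t|t|\) have Wronskian identically zero, yet they are linearly independent
on \( (-1,1) \). This can't happen for two solutions of a second-order homogeneous
linear DE in standard form with continuous coefficients, but it can happen for
arbitrary pairs of functions.)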
|
Thursday 11/13/25
(exam info, not homework)
|
Second midterm exam
Time: 7:30 p.m.
Location: LIT 305 (same room we used for the first midterm)
Coverage: everything we've covered up
through the Monday 11/10 lecture (including homework assigned with
due-dates up through 11/10).
My second midterm from last semester is now posted on Canvas,
under Files. That exam was given three lectures earlier than
yours will be, so there's likely to be more material that's fair
game for
your exam than there was for that one.
In case you'd like additional
exercises to practice
with,
you should be able to do review-problems 1–36 on
p. 231. In the third-order constant-coefficient problems, the
coefficients have been fine-tuned to ensure that the characteristic
polynomial has at least one root that's an integer of small absolute
value.
|
| F 11/14/25 |
Read Section 6.2.
6.2/ 1, 9, 11, 13, 15–18. The characteristic polynomial for #9
is a perfect cube (i.e. \( (r-r_1)^3\) for some \(r_1\)); for #11 it's
a perfect fourth power.
For some of these problems and ones later in Section 6.3, it may help you
to first review my
comments about factoring
in the assignment due 3/24/25.
Read Section 6.3.
6.3/ 1–4, 29, 32. In #29, ignore the instruction to use the
annihilator method;
just
use
MUC and superposition.
|
| M 11/17/25 |
Read Section 7.1.
In Section 7.2:
- Read Examples 1–4 and the box,
"Linearity of the Transform".
- Skim Table 7.1 (p. 356).
- Read the definitions of "piecewise continuity" (p. 357).
- On p. 359, read the box "Exponential Order \(\a\)",
the box "Conditions for Existence of the transform", and the
material in between.
In Section 7.3, read the boxes with Theorems 3, 4, and 5. Skim
the box with Table 7.2 to familiarize yourself with it.
In Section 7.4, read the boxes "Inverse Laplace Transform"
and "Linearity of the Inverse Transform". On p. 370, read the
paragraph that starts with "Given the choice ..." to make yourself
aware that the inverse transform often requires you to do a
partial-fractions decomposition of some rational function of
\(s\).
In Section 7.5, skim from the beginning up through the end of
Example 1, just to get a rough idea of how Laplace Transforms
are going to be used to solve (certain)
IVPs.
However, don't
think for a minute that what you see after the line beginning
"Substituting these expressions ..." is acceptable writing for a
math textbook or a math instructor. "Equation equation equation
equation", three non-sequiturs in a row, can be accepted
from students on exams, but not from anyone who purports
to be teaching. There are supposed to be words
between the equations, words that make clear how each equation
is related to the next one. Teachers are supposed to help students
get rid of bad habits, not reinforce them.
Time permitting, look at Section 7.6. This is the first
place in which the Laplace Transform starts to be
useful. But all the build-up in the earlier
sections is needed.
|
| W 11/19/25 |
Look again at Table 7.1, p. 356. The restrictions on \(s\)
(e.g. \(s>0\) or \(s>a\)) come from the definition of the
transform, not the "implied domain" of the formula. For any
Laplace-transformable function \(f\), the domain of the Laplace
Transform \(F\) is always one of the following \(s\)-intervals:
\((s_0,\infty)\) or \([s_0,\infty)\) for some \(s_0\in \bfr\), or
\((-\infty,\infty)\). Thus, in all of these cases, \(F(s)\) is always
defined at least on some interval of the form \((s_0,\infty)\),
i.e. for all \(s\) greater than some \(s_0\). We state this
qualitatively by saying that \(F(s)\) is defined for (all) \(s\)
sufficiently large. Table 7.1 tells you how large "sufficiently
large" is for the functions in the table, but this information turns
out not to matter, so don't focus on it (or get distracted by it).
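As an illustration of where such restrictions come from:
$${\mathcal L}\{e^{at}\}(s)=\int_0^\infty e^{-st}e^{at}\,dt
=\int_0^\infty e^{-(s-a)t}\,dt=\frac{1}{s-a}\quad\mbox{for } s>a,$$
and the defining integral diverges for \(s\leq a\); the restriction "\(s>a\)" in the
table is exactly the set of \(s\) for which this integral converges.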
On your
final exam, you'll be given
this Laplace
Transform table. Familiarize yourself with where the entries
of Table 7.1 (p. 356) are located in this longer table. This
longer table comes from an older edition of your textbook that I
photocopied way back when, but is
very similar to one you can still find on the inside front cover
or inside back cover of hard-copies of the current edition, and
somewhere in the e-book (search there on "A Table of Laplace
Transforms").
Warning: On line 8 of this table, "\( (f*g)(t)\)"
is not \(f(t)g(t)\); the symbol "\(*\)" in this line denotes an
operation called convolution
(defined in Section 7.8 of the
book, which I doubt we'll get to), not simple multiplication.
For the ordinary product \(fg\) of functions \(f\)
and \(g\), there is no simple formula that expresses
\({\mathcal L}\{fg\}\) in terms of \({\mathcal L}\{f\}\) and
\({\mathcal L}\{g\}\).
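(A one-line illustration: taking \(f=g=1\), we have
\({\mathcal L}\{1\cdot 1\}(s)=\frac{1}{s}\), whereas
\({\mathcal L}\{1\}(s)\,{\mathcal L}\{1\}(s)=\frac{1}{s^2}\).)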
Read Section 7.6. Note: my name
and notation (which I'll be using) for the book's "rectangular window
function \(\Pi_{a,b}\)" are gate function
\(\mbox{gate}_{a,b}\), which comes from the terminology
"logic gate" used in digital circuitry.
I've been using this name since before
the book's authors chose their own name and notation for these
functions (the first several editions of the book
had no name or notation for these functions).
7.2/ 1–4,
10, 12,
13–20, 21–23.
In the instructions for
1–12, "Use
Definition 1" means "Use Definition 1", NOT
any table
of Laplace Transforms.
But for 13–20,
do use Table 7.1 on p. 356 (as the instructions say to do),
even if we haven't derived
the formulas there, or
discussed linearity of the Laplace Transform (Theorem 1 on p. 355)
yet.
7.3/ 1–6
7.4/ 11, 13, 14, 16, 20
7.6/ 1–10
|
| F 11/21/25 |
7.3/ 31
7.4/ 1–10, 21–24, 26, 27,
31. Normally, I would not assign these until after talking
about the inverse Laplace transform in class, but time is
short. If you are unable to do these based on your reading, it's
okay to wait, but then you'll have a much longer assignment due
the Monday after Thanksgiving. I'm trying to spread out
the
homework problems over enough days that you'll have time to do
all the problems.
To learn some shortcuts for the partial-fractions work that's
typically needed to invert the Laplace Transform, you may want
first to read the web handout
"Partial fractions and
Laplace Transform problems".
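A tiny illustration of the kind of step involved (my own example, not one of the
assigned problems):
$$\frac{1}{s(s+1)}=\frac{1}{s}-\frac{1}{s+1}, \qquad \mbox{so} \qquad
{\mathcal L}^{-1}\left\{\frac{1}{s(s+1)}\right\}(t)=1-e^{-t}.$$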
7.5/ 15, 17, 18, 21, 22. Note that in these problems, you're being
asked only to find \(Y(s)\), not \(y(t)\). (I.e. there are no
inverse transforms involved in these problems.)
Theorem 5 (p. 363) is the basic property of the Laplace transform that
lets you transform a constant-coefficient
\(n^{\rm th}\)-order linear IVP
\(L[y]=g, \ \ y(0)=\mbox{something}, y'(0)=\mbox{something},
\dots \) into an algebraic equation of the form that I wrote
in class as "\(p_L(s)Y(s) +q_{n-1}(s) = G(s)\)."
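Illustration (my own example, not one of the assigned problems): for
\(y''+3y'+2y=g(t)\), \(y(0)=1\), \(y'(0)=0\), taking transforms gives
\(\big(s^2Y(s)-s\big)+\big(3sY(s)-3\big)+2Y(s)=G(s)\), i.e.
\((s^2+3s+2)Y(s)-(s+3)=G(s)\), which has exactly the form
\(p_L(s)Y(s)+q_1(s)=G(s)\), with \(q_1(s)=-(s+3)\) a polynomial of degree
\(n-1=1\) built from the initial conditions.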
7.5/1–8, 10, 29. These do require inverse
transforms, and are the first exercises in which you'll
actually use Laplace Transforms to solve any IVPs.
However, we have simpler ways of
solving these specific, very simple IVPs; the only reason to solve
them via Laplace Transforms is to get practice with the
Laplace Transform method. We don't start solving DEs
for which the Laplace Transform is really useful until Section 7.6.
7.6/ 11–18
|
| M 12/1/25 |
7.6/ 19–32, 36ac. In 21–24, you
may skip the "Sketch the graph" part of the exercises.
For all of the above problems (or
those of a similar type) in which you solve an IVP, write your
final answer in "tabular form", by which I mean an expression
like the one given for \(f(t)\) in Example 1, equation (4),
p. 385. Do not leave your final answer in the form of
equation (5) in that example. On an exam, I would treat the
book's answer to exercises 19–33 as incomplete, and would
deduct several points. The unit step-functions and "window
functions" (or "gate functions", as I call them) should be
viewed as convenient gadgets to use in intermediate
steps, or in writing down certain differential equations (the
DEs themselves, not their solutions). The purpose of these
special functions is to help us solve certain IVPs
efficiently; they do not promote understanding of solutions.
In fact, when writing a formula for a solution of a DE, the use
of unit step-functions and window-functions
often obscures understanding of how the solution behaves
(e.g. what its graph looks like).
For example, with the least
amount of simplification I would consider acceptable, the
answer to problem 23 can be written as
$$ y(t)=\left\{\begin{array}{ll} t, & 0\leq t\leq 2, \\
4+ \sin(t-2)-2\cos(t-2), & t\geq 2.\end{array}\right.
\hspace{1in} (*)$$
The book's way of writing the answer obscures the fact that the
"\(t\)" on the first line disappears on the second
line—i.e. that for \(t\geq 2\), the solution is purely
oscillatory (oscillating around the value 4); its magnitude does
not grow forever.
Note. In equation (*), observe that
I overdefined \(y(2),\) giving it a value on the first
line and then again on the second. The only reason this is
okay is that both lines give the same value for \(y(2)\), a
reflection of the fact that \(y(t)\) is continuous.
Since solutions \(y(t)\) of differential equations are always
continuous, we are guaranteed that if our tabular form for
a piecewise-expressed solution \(y(t)\) of a DE (or IVP) is
correct, then at any "break-point" \(t_1\) we will have
\(\lim_{t\to t_1-} y(t) = y(t_1) = \lim_{t\to t_1+} y(t),\) so
we can "overdefine" \(y(t_1)\) as in equation (*) without fear
of contradicting ourselves. This provides a useful
consistency-check on our tabular-form answer: At a "break
point" \(t_1\), if overdefining \(y(t_1)\) leads to two
different values of \(y(t_1)\) on the two lines on which
\(y(t_1)\) is defined, then our answer cannot be
correct (and we should go back and find our
mistake(s)). This consistency-check is very easy to do,
so we should always do it.
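In equation (*), for instance, the check is: the first line gives \(y(2)=2\), and
the second line gives \(y(2)=4+\sin 0-2\cos 0=4+0-2=2\); the two values agree, as
they must.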
In exercise 23, using trig identities the
formula for \(t\geq 2\) can be further simplified to several
different expressions, one of which is \(4+
\sqrt{5}\sin(t-2-t_0)\), where \(t_0=\cos^{-1}(\frac{1}{\sqrt{5}}) =
\sin^{-1}(\frac{2}{\sqrt{5}})\). (Thus, for \(t\geq 2\),
the solution \(y(t)\)
oscillates between a minimum value of \(4-\sqrt{5}\) and a maximum
value of \(4+\sqrt{5}\).) This latter type of simplification is important
in physics and electrical engineering (especially for electrical
circuits). However, I would not expect you to do this further
simplification on an exam in MAP 2302.
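For reference, the identity behind this kind of simplification is
$$a\sin x+b\cos x=\sqrt{a^2+b^2}\,\sin(x+\varphi),\qquad
\cos\varphi=\frac{a}{\sqrt{a^2+b^2}},\ \ \sin\varphi=\frac{b}{\sqrt{a^2+b^2}};$$
here \(a=1\), \(b=-2\), \(x=t-2\), and \(\varphi=-t_0\).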
|
| W 12/3/25 |
|
| |
|