In the table below, "NSS" stands for our textbook. Exercises are
from NSS unless otherwise specified.
Date due |
Section # / problem #'s |
W 1/15/25 |
Read
the class home page and
syllabus webpages.
Go to the Miscellaneous Handouts page (linked to the
class home page) and read the web handouts
"Taking and Using
Notes in a College Math Class," "Sets and Functions," and
"What is a solution?"
Never treat any reading portion
of any assignment as optional, or as something you're sure
you already know, or as something you can postpone
(unless I tell you otherwise)! I can pretty much guarantee that
every one of my handouts has something in it that you don't know,
no matter how low-level the handout may appear to be at
first.
Read Section 1.1 and do problems 1.1/ 1–16.
Since not everyone may have access to the
textbook yet, here is a
scan of the first 15 pages (Sections
1.1–1.2, including all the exercises).
Do non-book problem 1.
In my notes on first-order
ODEs (also linked to the Miscellaneous Handouts page), read
the first three paragraphs of the introduction, all of Section
3.1, and Section 3.2.1 through Definition 3.1. In
all readings I assign from these notes,
you should skip anything labeled "Note(s) to instructors".
Whenever I update these notes (whether
substantively or just to
fix typos), I update the
version-date line on p. 1. Each time
you're going to look at the notes,
re-load them to make sure that you're looking at
the latest version.
|
F 1/17/25 |
1.2/ 1, 3–6, 17, 19–22.
Something I didn't have time to talk
about in Wednesday's class: whenever you
see the term "explicit solution" in the book, you should
(mentally) delete the word "explicit".
(Until the third author was added to later
editions of the textbook, what NSS now calls an explicit
solution is exactly what it had previously called, simply and
correctly, a solution. The authors tried to "improve" the
completely standard meaning of "solution of a DE". They did not
succeed.)
See Notes on some book
problems for additional corrections to the wording of several
of the Section 1.2 problems.
Note: The exercise portions of many
(probably most) of your homework assignments will be a lot more
time-consuming than in the assignments to date; I want to give you
fair warning of this before the end of Drop/Add.
However, since my posted notes are only
on first-order ODEs, the reading portions of the
assignments will become much lighter once we're finished with
first-order equations (which will take the first month or so of
the semester).
In
my notes, read from where you left off in the last assignment
through Example 3.11 (p. 15).
|
W 1/22/25 |
In the textbook, read the
first page of Section 2.2, minus the last sentence.
(We will discuss how to solve separable
equations after we've finished discussing linear equations, the topic
of Section 2.3. The only reason I'm
having you read the first page of Section 2.2 now is so that you can
do the first few exercises of Section 2.3. But as a "bonus", you'll also
be able to do the exercises in Section 2.2 assigned
below.)
2.2/ 1–4, 6
2.3/ 1–6
In
my notes, read from where you left off in the last assignment
through the one-sentence paragraph after Definition 3.19. (Exception:
you may treat the blue Remark 3.16 as optional reading.)
|
F 1/24/25 |
In Section 2.3, read up through p. 50, but mentally
make some modifications:
- Replace the book's transition from
equation (6) to equation (7) by what I said in class: that
\(\mu'/\mu
=\frac{d}{dx} \ln |\mu|.\) (The book's reference to separable
equations is unnecessary, and does not lead directly to equation
(7); it leads to a similar equation but with \(|\mu(x)|\) on the
left-hand side, just as my argument on
Wednesday 1/22 did. On Friday 1/24, I'll go over why we can get rid of
the absolute-value symbols in this setting. Equation (7) itself
is fine, modulo the meaning of indefinite-integral notation;
it's only the book's derivation that has problems.) A short worked
illustration of this derivation appears after these two bullet points.
-
Remember that whenever you see an indefinite
integral in the book, e.g. \(\int f(x)\, dx,\) the meaning is
my "\(\int_{\rm spec} f(x)\, dx\)." To review
what I said about notation for indefinite integrals, go to
my Spring 2024 homework page,
locate the assignment that was due 1/19/24, and in the first
bullet-point, read from the beginning of the second sentence ("Remember
...") to the end of the green text.
In
my notes, read from where you left off in the last assignment
through the end of Section 3.2.4 (the middle of p. 27). For a while,
the readings I'm assigning will be beyond what we've gotten to yet in
class. On any of the topics in my notes, there's simply too much to
read for me to assign it all at once when we get to that topic in
class. There is also simply too much there to talk about all of it
in class. Much of it is material that could and should
have been taught in Calculus 1 (and that used to be
taught there).
In Wednesday's class I didn't get quite far enough to
write a clean summary of the method we had effectively
derived, but the box on p. 50 serves that purpose (modulo the
indefinite-integral notation). Armed with this, you should
be able to do most of the exercises I'll be assigning from
Section 2.3, but I'm putting these exercises into
the next assignment. However, I recommend
that,
by Friday
1/24, you do as many of these exercises as you can, so that
the next assignment isn't extra-long.
If you want to read more examples
before starting on the exercises, it's
okay to look at Section 2.3's Examples 1–3, but be warned:
Examples 1 and 2 have some extremely poor writing that you
probably won't realize is poor, and that
reinforces certain bad habits that
most students have, but few are aware of. (Example 3 is
better written, but shouldn't be read before the other two.)
Specific
problems with Example 2 (one of which also occurs in Example
1) are discussed in the same
Spring 2024 assignment as above. The most pervasive of these
is the one in the last small-font paragraph in that assignment.
|
M 1/27/25 |
2.3/ 7–9,
12–15 (note which variable is which in #13!),
17–20
When you apply the integrating-factor
method,
don't forget
the first step: writing the equation in "standard linear form",
equation (15) in the book. (If the original DE had an \(a_1(x)\)
multiplying \(\frac{dy}{dx}\) — even
a constant function other than 1—you have to divide
through by \(a_1(x)\) before you can use the formula for \(\mu(x)\) in
the box on p. 50; otherwise the method doesn't work.) Be
especially careful to identify the function \(P\) correctly; its
sign is
very important. For example, in 2.3/17,
\(P(x)= -\frac{1}{x}\), not just \(\frac{1}{x}\).
One thing I didn't get to say in Friday's
lecture:
The general solution of any
derivative-form
DE—the set of all maximal solutions—is the same
as the set of maximal solutions of all possible IVP's for that DE.
Hence, by figuring out all solutions to all IVPs for the DE, we are
simultaneously figuring out the general solution of the DE.
2.3 (continued)/ 22, 23, 25a, 27a, 28, 31, 33, 35
See my Spring 2024 homework
page, assignment due 1/22/24, for corrections to some of the
Section 2.3 exercises. Also, in that same assignment read the three
paragraphs at the bottom of the assignment.
(The reason I'm not simply recopying such items into this semester's
homework page is that some of my 2023-2024 students said that,
although my comments and corrections were intended to be helpful, they
made the look of those assignments
overwhelming.)
2.2/ 34. Although this exercise is in the section on
"Separable Equations" (which we haven't discussed yet), the DE
happens to be linear as well as separable, so you're equipped to
solve it. For solving this equation, the "linear equations
method" is actually simpler than—I would even say
better than—the (not yet discussed) "separable
equations method".
(The
same is true of Section 2.1's equation (1), which the book solves
by the "separable equations method"—and makes two
mistakes in the sentence containing equation (4). This is why
I did not assign you to read Section 2.1.)
Do non-book problem
2.
In
my notes, read from the
beginning of Section 3.2.5 (p. 27) through the end of
Definition 3.23 (p. 33), plus the paragraph after that definition.
(All of this is needed for a
proper understanding of the word "determines" in the book's Definition
2 in its Section 1.2! [I still haven't defined "implicit solution of a
DE" yet; the above reading is needed just to understand the single
word "determines" in that definition.] This is one of the biggest
reasons I didn't assign you to read Section 1.2.)
Then
do the exercise that's shortly after Definition 3.23 in my notes.
|
W 1/29/25 |
In
my notes:
- Read the remainder of Section 3.2.5 (pp. 33–34).
- Read Section 5.5 (The Implicit Function
Theorem).
- In Section 3.2.6, read up through the end of Example 3.27
(pp. 37–38).
After you've done that reading,
do the following exercises from the
textbook: 1.2/ 2, 9–12, 30. In #30, ignore the
book's statement of the Implicit Function Theorem; use the
statement in my notes. The theorem stated in problem 30
is much weaker than the Implicit Function Theorem, and should
not be called by that name. In fact, problem 30 cannot even be done using
the book's theorem, because of the words "near the point (0,1)"
at the end of the problem.
Note that the next assignment includes a more
challenging version of problems 9–12. Since the reading
portion of the current assignment is substantial (if you take the
time
needed to understand what you're reading), I didn't want to
make the exercise portion too time-consuming.
However, if you already feel up to tackling the more-challenging
versions of these problems, then, by all means, get started on them early!
|
F 1/31/25 |
Re-do 1.2/ 9–12 without the book's instruction
to assume that "the relationship does define \(y\) as a function
of \(x\)."
In my notes, read
Sections 5.2 and 5.4.
(If you have any uncertainty
about what an interval is, read Section 5.1 as well.
If you need to review
anything about the Fundamental Theorem of Calculus, read Section 5.3.)
My notes' Theorem
5.8, the "FTODE", is what the textbook's Theorem 1 on
p. 11 should have said (modulo my having used
"open set"
in the FTODE instead of the book's "open rectangle").
1.2/ 18, 23–28, 31. Do not do these until
after you've read Section 5.4 in my notes.
Anywhere that the book asks you whether its Theorem 1 implies
something, replace that Theorem 1 with the FTODE stated in my notes.
  See my Fall 2024 homework
page, assignment due 9/11/24, for corrections to some of these
exercises, and some other brief comments.
In my notes, read
Section 3.2.6 up through the
paragraph before Example 3.31 on p. 42.
Reminder: reading my notes is not optional
(except for portions that I [or the notes] say you may
skip, and the footnotes or parenthetic comments that say "Note to
instructor(s)").
You should do
your best to complete each reading assignment by the due date I
give you. If you let yourself fall
significantly behind, planning to catch up later, you will
have far too much to absorb in too little
time. What I've put in the notes are things that are
not adequately covered in our textbook (or any current textbook
that I know of). Unfortunately there isn't enough time to go
over most of these carefully in class; we would not get through
all the topics we're supposed to cover.
|
M 2/3/25 |
Skim Section 2.2 in the textbook, up through Example 3.
I'm always uneasy about having my
students read this section. The book's explanations and
definitions in this section say many of the right
things, but don't hold up under scrutiny, and there's a lot of
poor writing that I hate exposing you to. Furthermore, the
most prominent item in the section—the box on
p. 42—is misleading. The correct "method for solving
separable DEs" has two parts, one of which is
the (not quite finished) mechanical method in the box, the
"brain off" method that I illustrated in class. The correct
name for the method in the box on p. 42 is
separation of variables.
Furthermore, this PART of the method for solving
separable DEs has (potentially) one
more step: solving equation (3) explicitly for \(y\) in terms
of \(x\) when possible (as it was in the example I did
in Friday's class, \(\frac{dy}{dx}= x(y-1)^2\)).
We still have a partial lecture's worth of
conceptual material that's absent from the book; without it,
doing the exercises in Section 2.2 would amount to little more than
pushing the symbols around the page a certain way. However,
you do need to start getting some practice with the mechanical
separation-of-variables method; otherwise you'll have too much to
do in too short a time. So I've assigned some exercises from
Section 2.2 below, for you to attempt based on your reading, but
with special temporary instructions.
2.2/ 7–14. For now (with the Monday 2/3
due date), all I want you to do in these exercises is
to achieve
an answer of the form of equation (3) in the box on
p. 42—without worrying about intervals, regions, or exactly what
an equation of this form has to do with (properly
defined) solutions of a DE.
Save your work, so that when I re-assign these exercises later,
at which time your goal will be to get a complete answer that you fully
understand, you won't have to re-do this part of the work.
In my notes, finish
reading Section 3.2.6.
|
W 2/5/25 |
In my notes:
- Read Section 3.2.10 up through the paragraph after the
statement of Theorem 3.45. (This
one-sentence paragraph explains the notation in equation (3.103)
in Theorem 3.45.) This theorem assures us that,
when its hypotheses are met, every solution of
\(\frac{dy}{dx}=g(x)p(y)\) in the indicated region \(R\) is either
a constant solution or can be found, at least in implicit form, by
separation of variables (the "brain-off" method in the box on
p. 42 of the textbook).
On my Spring 2024 homework
page, go to the assignment that was due 1/26/24, and read the
(whole) second bullet-point (which continues until the end of that
assignment). This details several of the items that are misleading or
just plain wrong in the book's Section 2.2. In the last
non-parenthetic sentence of that assignment, "the method we've
studied" is the method that we've just begun to study this
semester (the method summarized by Theorem 3.45 in my notes, and
justified by the proof of that theorem a few pages later).
Return to exercises 2.2/ 7–14 that I had you partially
do in the previous assignment. Using Theorem 3.45 in my notes,
this time find all the maximal solutions. Don't worry
about graphing the solution-curves for any of the exercises in
the current assignment; that's more than the exercises are asking
for, and would take more time than it's worth.
You are not yet expected to understand
why the two-part method given by my notes' Theorem
3.45 works, or to fully understand "implicit solutions". For now, you are just getting practice with
the two-part procedure for solving
separable DEs
(one part being separation
of variables [the box on NSS p. 42], the other being
finding any constant solutions the DE may have [it may not
have any]).
Do non-book problems
3–5. Although I haven't finished discussing various
subtleties, or justified the separation-of-variables technique yet,
the two-part procedure mentioned above does
find all the solutions of the DEs in this
assignment.
Answers to these
non-book problems are posted on the
"Miscellaneous handouts" page.
General comment. In doing the
exercises from Section 2.2 or the non-book problems you may
find that, often, the hardest part of doing
such problems
is doing the integrals. I
intentionally assign problems that require you to refresh most of your
basic integration techniques (not all of which are adequately
refreshed by the book's problems).
If you need to review the method of partial fractions,
you can undoubtedly find it online somewhere, but our textbook has
its own review on pp. 370–374. This
review is interspersed with examples related to the topic of
Chapter 7, Laplace Transforms, which we are a long way from
starting to cover. For purposes of simply reviewing
partial fractions, ignore everything in Examples 5, 6, and 7
on these pages except for the partial fractions
computations. (For example, ignore any equation that has a curly
"L" in it.)
2.2/ 17–19, 21, 24
The book's IVP exercises are not rich
enough, by a long shot, to illustrate the dangers
of keeping your brain turned off after you've separated
variables (putting all \(y\)'s on one side of the equation and all
\(x\)'s on the other, if these are the variable-names) and done
the relevant integrals. Non-book problems 7 and 8, which will be
in an upcoming assignment, were constructed to remedy this
poverty. Feel free to tackle these before they're assigned.
|
F 2/7/25 |
2.2/ 27abc
Do non-book problems
6–8.
Re-do 2.2/ 18 with the initial condition \(y(5)=1.\)
In my notes:
- Read the remainder of Section 3.2.7.
- Read Sections 3.2.8 and 3.2.9. It's okay if you read
one of these sections as part of this assignment, and the
other as part of the next assignment.
- In Section 3.2.10, starting where you left off,
read up through at least the portion of the proof of Theorem
3.45 that ends with statement (3.109).
|
M 2/10/25 |
In my notes:
- Read the remainder of Section 3.2.10.
- Read Section 3.3.1. With the exception of
the definition of the differential \(dF\) of a two-variable function
\(F\), the material in Section 3.3.1 of my notes
is basically not discussed in the book at all, even though
differential-form DEs appear in (not-yet-assigned) exercises for the
book's Section 2.2 and in all remaining sections of Chapter 2. (Except
for "Exact equations"—Section 3.3.6 of my notes—hardly
anything in Section 3.3 of my notes [First-order equations in
differential form] is discussed in the book at all.)
In the textbook, read Section 2.4 up through the boxed
definition "Exact Differential Form" on p. 59. Also, on
my Spring 2024 homework page,
go to the assignment that was due 2/7/24, and read
"Comments, part 1" and "Comments, part 2."
|
W 2/12/25 |
In my notes:
- Read
Section 3.3.2 and 3.3.3. You may skip the portions labeled
"optional reading".
- In Section 3.3.5, read up through Example 3.71.
Section 3.3.5 essentially addresses: what
constitutes a possible answer to various questions, based on the
type of DE (derivative-form or differential-form) you're being asked
to solve? A proper answer to this question requires taking into
account some important facts omitted from the textbook (e.g. the fact
that DEs in derivative form and DEs in differential form
are not "essentially the same thing").
2.2 (not 2.3 or 2.4)/ 5, 15, 16.
(I did not assign these when we were
covering Section 2.2 because we had not yet discussed
"differential form".)
Previously, we defined what "separable" means
only for a DE in derivative form. An equation in differential
form is called separable if, in some region of the
\(xy\) plane (not necessarily the whole region on which the given DE
is defined), the given DE is algebraically equivalent to an equation
of the form \(h(y)dy=g(x)dx\) (assuming the variables are \(x\) and
\(y\)). This is equivalent to the condition that the derivative-form
equation obtained by
formally dividing the original equation by
\(dx\) or \(dy\) is separable.
As for how to solve these equations: you will
probably be able to guess the correct mechanical procedure. A natural
question is: how can you be sure that these mechanical procedures give
you a completely correct answer? That question is, essentially, what Sections
3.4–3.6 of my notes
are devoted to.
Warning. For
questions answered in the back of the book: not all answers there are
correct
(that's a general statement; I haven't done a separate
check for the exercises in this assignment)
and some may be misleading. But most are either correct, or
pretty close.
|
F 2/14/25 |
In the textbook, continue reading Section 2.4, up through Example
3. Then do the next set of exercises:
2.4/ 1–8.
Note: For differential-form DEs, there is no
such thing as a linear equation. In these problems, the book
means for you
to classify an equation in differential form as linear if
at least one of the associated derivative-form equations (the ones
you get by formally dividing through by \(dx\) and \(dy\),
as if they were numbers) is linear. It is possible for one of
these derivative-form equations to be linear while the other is
nonlinear. This happens in several of these exercises.
For example, in some of these exercises the associated derivative-form
DE for \(y(x)\) is linear, while the associated derivative-form DE for
\(x(y)\) is not (or vice versa).
In my notes,
read the remainder of Section 3.3.5, and read Section 3.3.6
up through Example 3.76. (The remainder of Section 3.3.6
is optional reading.)
When reading anything in Sections 3.3
(all of the "3.3.x" subsections) and Sections 3.4–3.6,
remember that Section 3.7 summarizes all the definitions
and results in those sections. To avoid getting lost in the weeds,
refer to this summary as often as you need; that's the
whole reason for Section 3.7's existence.
|
M 2/17/25 |
If I have not yet gone through the "exact equation method" in class,
read the rest of NSS Section 2.4 to see the mechanics of solving an
exact DE. (Just don't trust any "justifications" or
terminology in this section.) This should be enough to enable you to do the
exercises below, though not necessarily with confidence if I
haven't gone through this in class yet.
Don't invent a different method for solving
exact equations, or use a different method you may have
seen before. (See next bullet-point.)
Please do not ask me
about any different method until you have completed reading
the "A terrible way ..." handout below. I
guarantee you that if you've
invented, or have ever been shown, an alternative to the method
that's shown in the book (and that I'll go over in class),
your alternative method is exactly the
"terrible method" laid out at the beginning of the handout.
Every year a student who hasn't yet read the handout comes up to
me after class and asks, "But how about this method I saw (or
was shown) for solving exact equations?" It's always exactly
the method that I'm calling the "terrible
method". ALWAYS. WITHOUT EXCEPTION. You may have thought this
method was good in the past. That's the fault of whoever taught
it to you and designed the examples you saw.
Read the handout
A terrible way to solve exact
equations. (Note: The "(we
proved it!)" in the handout wasn't yet true the day I assigned this
reading, but became true a few days later in the 2/19/25
class.)
The example in
this version of the handout is rather complicated; feel free to read
the simpler example in the
original version
instead.
For additional comments on this handout and the terrible method, see
my Spring 2024 homework page,
assignment due 2/12/24.
If you still have
questions about an alternative method AFTER you've read
the handout, and after we've shown in class why the correct
method works, I'm happy to discuss those questions with you in
office hours.
2.4/ 9, 11–14, 16, 17, 19,
20
In my notes, read
Section 3.3.4.
2.2 (not 2.3 or 2.4)/ 22.
Note that although the differential
equation doesn't specify independent and dependent variables, the
initial condition does. Thus your goal in this exercise is to
produce a solution "\(y(x)= ...\)". This exercise, as written, is an
example of what I call a "schizophrenic" IVP.
If
what you're after are solutions with independent variable \(x\) and dependent
variable \(y\) (which is what an initial condition of the form
"\(y(x_0)=y_0\)'' indicates), then the differential equation you were
interested in at the start was one in derivative form
(which in exercise 22 would be \(x^2 +2y \frac{dy}{dx}=0\), or an
algebraically equivalent version), not one in differential
form. Putting the DE into differential form is often a useful
intermediate step for solving such a problem, but differential form is
not the natural starting point. On the other hand, if what you are
interested in from the start is a solution to a
differential-form DE, then it's illogical to express a preference for
one variable over the other by asking for a solution that satisfies a
condition of the form "\(y(x_0)=y_0\)'' or "\(x(y_0)=x_0\)''. What's
logical to ask for is a solution whose graph passes through the
point \((x_0,y_0)\), which in exercise 22 would be the point
(0,2). (That's how the exercise should have been written.)
2.4/ 21, 22 (note that
#22 is the same DE as #16, so you don't have to solve a new DE; you
just have to incorporate the initial condition into your answer
to #16).
Note that exercises
21–26 are what I termed "schizophrenic" IVPs.
Your goal in these problems is to find an
explicit formula for a solution, one expressing the dependent
variable explicitly as a function of the independent variable
—if algebraically possible—with the choice of
independent/dependent variables indicated by the initial condition.
However, for
these schizophrenic IVPs, if the algebraic equation ''\(F({\rm variable}_1, {\rm
variable}_2)=0\)'' that you get via the exact-equation method
can't be solved explicitly for the
dependent variable in terms of the independent variable, you have to
settle for an implicit solution.
|
W 2/19/25 |
2.4/ 29, modified as below.
- In part (b), after the word "exact", insert "on some regions
in \({\bf R}^2\)." What regions are these?
- In part (c), the answer in the back of the book is missing a solution
other than the one in part (d). What is this extra missing
solution?
- In part (c), the exact-equation method gives an answer of the
form \(F(x,y)=C\). The book's answer is what you get if you try
to solve for \(y\) in terms of \(x\). Because the equation you
were asked to solve was in differential form, there
is no reason to solve for \(y\) in terms of \(x\), any more
than there is a reason to solve for \(x\) in terms of \(y\).
As my notes say (currently on p. 77),
For any differential-form DE, if
you reverse the variable names you should get the same set of
solutions, just with the variables reversed in all your
equations. This will not be the case if you do what the book did
to get its answer to 29(c), treating your new \(x\) (old
\(y\)) as an independent variable.
In my notes:
- Skim
Section 3.3.7 up through the boldfaced statement (3.151). Read
statement (3.151) itself.
-
Read Example 3.79.
- Read Sections 3.4, 3.5, and 3.6. (Remember that the most important
conclusions—the ones displayed in boldface—are summarized
in Section 3.7. It's OK to read the summary first, and do a more
careful reading when you have more time.)
Do non-book problem
10. You may not get completely correct answers to parts of
problem 10 if you haven't read Sections 3.4–3.6 of my
notes.
|
F 2/21/25 |
Do non-book problems 9 and
11.
|
M 2/24/25 |
Read The Math
Commandments.
Read Section 4.1 of the textbook.
(We're skipping Sections 2.5 and 2.6, and all of Chapter 3.)
We will be covering the
material in Sections 4.1–4.7 in an order that's different from the
book's.
|
W 2/26/25 |
First midterm exam
On Canvas, under Files, I've posted my
Fall 2024 first midterm (problems only). I've also posted there a
sample cover-page for the exam-booklet.
Familiarize yourself with the instructions on this page;
your instructions will be similar or identical.
Reminder: As
the syllabus says, "[U]nless I say
otherwise, you are responsible for knowing any material I
cover in class, any subject covered in homework, and all the
material in the textbook chapters we are studying." I have
not "said otherwise." The homework has included readings from
my notes ( not optional!)
as well as doing book and
non-book exercises. The textbook chapters/sections we'll have
covered before the exam are 1.1, 1.2, 2.2, 2.3, 2.4,
and possibly parts of sections 4.1 and 4.2.
In case you'd like additional
exercises to practice
with:
If you have done all your homework,
you should
be able to do all the review problems on p. 79 except #s
8, 9, 11, 12, 15,
18, 19, 22, 25, 27, 28, 29, 32, 35, 37, and the last part of 41. A
good feature of the book's "review problems" is that, unlike the
exercises after each section, the location gives you no clue as to
what method(s) is/are likely to work. You will have no such
clues on exams either. Even if you don't have time to work
through the problems on p. 79, they're good practice for figuring
out what the appropriate methods are.
A negative feature of the book's exercises
(including the review problems) is that they
don't give you enough practice with a few important integration
skills. This is why I assigned several of my non-book problems.
|
F 2/28/25 |
4.7 (yes, 4.7) / 30.
(This exercise does not
require you to have read anything in Sections 4.1–4.7.)
Read Section 4.2 up through the bottom of p. 161. Some
corrections and comments:
- On p. 157, between the next-to-last line and the last line,
insert the words "which we may rewrite as".
(The book's " ... we obtain [equation 1], [equation 2]"
is a run-on sentence, the last part of which (equation 2) is a
non-sequitur, since there are no words saying how this
equation is related to what came before.
Writing [equation] [equation]
... [equation],
on successive lines, with no words or logical connectors
in between, is a very common bad habit among students,
and is tolerable from students at the level of MAP2302; they
haven't had much opportunity to learn better yet.
However, tolerating a bad habit until students can be trained
out of it is one thing; reinforcing that bad habit is
another.
In older math textbooks, you would rarely if ever see this
writing mistake; in our edition of NSS, it's all over the
place.)
- On p. 158, the authors say that equation (3) is called the
auxiliary equation and say, parenthetically, that it is also known
as the characteristic equation.
While this is literally true, a more accurate depiction of reality would
be to say that equation (3) is called the
characteristic equation and to say, parenthetically, that it
is also known as the auxiliary equation.
"Characteristic equation" is more common, and that's the term
I'll be using.
- The second paragraph on p. 160 should say: "The proof of the
uniqueness statement in Theorem 1 is beyond the scope of a first
course in differential equations; in this text we defer that proof
to chapter 13.\(^\dagger\) However, in the present section and the
next, we will explicitly construct solutions to (10) for all
constants \(a\, (\neq 0),\ b,\) and \(c,\) and all
initial values \(Y_0, Y_1\), thereby proving directly
the existence of at least one solution to (10). For purposes of
an introductory course, we will simply take it on faith that the
uniqueness statement in Theorem 1 is true as well."
|
M 3/3/25 |
4.7 (yes, 4.7) / 1–8
  These exercises do not require anything from
Section 4.7 that we haven't covered in class already. "Theorem 5" (p.
192), referred to in the instructions for exercises 1–8, is
simply the 2nd-order case of the "Fundamental Theorem of Linear
ODEs" that I stated in class.
4.2/ 1, 3, 4, 7, 8, 10, 12, 13–16, 18,
27–32,
46ab.
Relatively few of Section 4.2's exercises are
doable until the whole section has been covered. Above, I've
selected
ones that are doable based on the reading that was due Friday 2/28.
In #46, the instructions should say that the
hyperbolic cosine and hyperbolic sine functions can be
defined as the solutions of the indicated IVPs, not that
they are defined this way. The customary definitions are
more direct: \(\cosh t=(e^t+e^{-t})/2\) (this is what you're
expected to use in 35(d))
and \( \sinh t= (e^t-e^{-t})/2\). Part of what you're doing in
46(a) is showing that the definitions in problem 46 are equivalent
to the customary ones. One reason that these functions have
"cosine" and "sine" as part of their names is that the ordinary
cosine and sine functions are the solutions of the DE \(y''+y=0\)
(note the plus sign) with the same initial conditions at \(t=0\)
that are satisfied by \(\cosh\) and \(\sinh\) respectively. Note
what an enormous difference the sign-change makes for the
solutions of \(y''-y=0\) compared to the solutions of \(y''+y=0\).
For the latter, all the nontrivial solutions (i.e. those that are
not identically zero) are periodic and oscillatory; for the
former, none of them are periodic or oscillatory, and all of them
grow without bound either as \(t\to\infty\), as \(t\to -\infty\),
or in both directions.
  Note: "\(\cosh\)" is
pronounced the way it's spelled; "\(\sinh\)" is pronounced "cinch".
|
W 3/5/25 |
4.7/ 25
|
F 3/7/25 |
4.7/ 26. Note: To compute \(\frac{d}{dt} |t^3|\) at \(t=0\),
use the definition of derivative (\(f'(t_0)=\lim_{t\to t_0}
\frac{f(t)-f(t_0)}{t-t_0}\)).
4.2/ 2, 5, 9, 11, 17, 19, 20, 26.
When combined with what was
in the previous assignment, the list of exercises assigned from this
section is:
4.2/ 1–20, 26–32, 35, 46ab.
4.3/ 1–18, 21–26. These
exercises are numerous, but you should find 1–18 very short.
However, if you can't finish them all by Friday, that's okay; add
the unfinished ones to the next assignment.
Reading Section 4.3 is optional. As with most sections of the
book, there are many correct statements, but they're intertwined
with many incorrect (or incomplete) statements and/or
explanations. In class, I'll go over the complex-exponential
material done correctly.
If you do read Section 4.3:
-
See my
Spring 2024 homework page,
assignment due 3/4/24, for several comments and corrections.
- The book's solution of Example 4 starts with
"Equation (14) is a minor alteration of equation (12) in Example
3." This is true in the
same sense that the word "spit" is a minor alteration of the word
"suit". Changing one letter can radically alter the meaning of a
word. Any of the numerous words obtainable from "suit" by changing
the second letter has its own meaning, all very different from the
others.
It's true that the only difference between
the DEs in Examples 3 and 4 is the sign of the \(y'\)
coefficient, and that the only difference between equation (15) (the
general solution in Example 4) and equation (13) (the general
solution in Example 3) is that equation (15) has an \(e^{t/6}\)
where equation (13) has an \(e^{-t/6}\). But for modeling a
physical system, these differences are enormous; the
solutions are drastically different. Example 4 models a
system that does not exist, naturally, in our universe.
(More precisely: there could be a
real-life physical system that (for example) could be
modeled approximately by equation (14) for a short enough
period of time. But the physical conditions that were used as
assumptions when modeling the system would break down after a while,
after which the system could no longer be modeled by the same
DE.) In this system, the amplitude of the
oscillations grows exponentially, without bound. This is
displayed in Figure 4.7 (except for the "without bound" part).
Example 3, by contrast,
models a realistic mass/spring system, one that could
actually exist in our universe. All the solutions exhibit
damped oscillation. Every solution \(y\) in Example 3 has the
property that \(\lim_{t\to\infty} y(t)=0\); the oscillations die
out. For a picture of this—which the book should have
provided either in place of the less-important Figure 4.7 or
alongside it—draw a companion diagram that corresponds to
replacing Figure 4.7's \(e^{t/6}\) with \(e^{-t/6}\). If you
take away the dotted lines, your companion diagram should look
something like Figure 4.3(a) on p. 154, modulo how many wiggles you
draw.
When working with any linear,
constant-coefficient DE, it is crucial that you make NO
mistake in identifying the characteristic polynomial and its
roots. The most common result of misidentifying the characteristic
roots is to completely change the nature of the solutions.
|
M 3/10/25 |
4.3/ 28, 32, 33 (students in
electrical engineering may do #34 instead of #33).
Before
doing problems 32 and 33/34, see Examples 3 and 4 in Section 4.3.
Do non-book problem 12.
Read Section 4.4 up through Example 3.
Read Section 4.5 up through Example 2.
We will be covering Sections 4.4 and 4.5 simultaneously, more or
less, rather than one after the other. What most mathematicians
(including me) call "the Method of Undetermined Coefficients" is what
the book calls "the Method of Undetermined Coefficients plus
superposition." You should think of Section 4.5
as completing the (second-order case of) the Method of
Undetermined Coefficients, whose presentation is begun in Section 4.4.
|
W 3/12/25 |
Finish reading Sections 4.4 and 4.5.
4.4/ 9, 10, 11, 14,
15, 18, 19, 21–23, 28,
29, 32; more TBA either for this assignment or as part of the next.
Add parts (b) and (c) to 4.4/ 9–11, 14, 18 as follows:
- (b) Find the general solution of the DE in each problem.
- (c) Find the solution of the initial-value problem for the DE in each
problem, with the following initial conditions:
- In 9, 10, and 14: \(y(0)=0=y'(0)\).
- In 11 and 18: \(y(0)=1, y'(0)=2\).
Note: Anywhere that the book says
"form of a particular solution," such as in exercises
4.4/ 27–32, it should be "MUC form of a
particular solution." The terms "a solution" (as defined
in the first lecture or two of this course), "one
solution", and "particular solution",
are synonymous. Each of these terms stands in contrast
to general solution, which means the set of all
solutions (of a given DE). Said another way, the general
solution is the set of all particular solutions (for a given
DE). Every solution of an initial-value problem for a DE is also
a particular solution of that DE.
The Method of Undetermined Coefficients, when applicable,
simply produces a particular solution
of a very specific form, "MUC form". (There is
an underlying theorem that guarantees that when the MUC
is applicable, there is a unique solution of that form.
Time permitting, later in the course I'll show you why the
theorem is true.)
|
F 3/14/25 |
4.4/ 1–8, 12, 16, 17, 20, 24, 30, 31
Note that the MUC is not needed to do exercises 1–8,
since (modulo having to use superposition in some cases) the
\(y_p\)'s are handed to you on a silver platter. All that's needed
is the "general solution is \(y_p+y_h\)" principle derived in class
for any linear DE, plus superposition (problem 4.7/ 30,
previously assigned) in certain problems, plus your knowledge (from
Sections 4.2 and 4.3) of \(y_h\)
for all the DEs in these problems.
Problem 12 can also be done by Chapter 2
methods. The purpose of this exercise in Chapter 4 is to see that
it also can be done using the Method of Undetermined Coefficients,
so make sure you do it the latter way.
4.5/ 1–8, 24–26, 28. (More in next assignment.)
Why so many exercises?
The "secret" to learning math skills
in a way that you won't forget them
is repetition. Repetition builds retention.
Virtually nothing else does (at least not for basic skills).
Some notes:
- In class I used the
term multiplicity of a root of the characteristic polynomial.
This is the integer \(s\) in the box on
p. 178. (The book eventually uses the term
"multiplicity", but not till Chapter 6; see the box on p. 337. On
p. 337, the linear constant-coefficient operators are allowed to have
any order, so multiplicities greater than 2 can occur—but not in
Chapter 4, where we are now.)
In the box on p. 178, replace the \(r\) in the first equation by
the letter \(\alpha\), so that the right-hand side of that
equation is written as \(Ct^m e^{\alpha t}\). In order to
restate cleanly what I said in class about
multiplicity, it is imperative not to use the identical letter \(r\)
in "\(t^me^{rt}\)" as in the characteristic polynomial
\(p_L(r)=ar^2+br+c\) and the characteristic equation
\(ar^2+br+c=0\).
Note that if \(p_L\) has a non-real root
\(r_1=\alpha+i\beta\), then it has such a root with \(\beta>0\). The relevant
multiplicity is the number of times \(r-(\alpha+i\beta)\) appears in a
factorization of \(p_L(r)=ar^2+br+c\) into degree-one factors. For a
quadratic polynomial, this can only be 0 or 1, since if \(r-(\alpha+i\beta)\)
appears, then so does \(r-(\alpha-i\beta)\); the factorization of \(p_L(r)\)
is \(a\big(r-(\alpha+i\beta)\big)\big(r-(\alpha-i\beta)\big)\). We can define
\(s\) as the multiplicity of \(\alpha+i\beta\) OR the
multiplicity of \(\alpha-i\beta\), but not both at the same time.
(I.e. we count the multiplicity of only one of these conjugate
roots.) These two multiplicities are always equal (even for
higher-degree polynomials with real coefficients), so for simplicity's
sake, in the conjugate pair of roots \(\alpha\pm i\beta\), we may confine
ourselves to considering only the "\(\alpha+i\beta\)" for which \(\beta>0\).
(A short worked illustration of how multiplicity affects the MUC form
appears after these notes.)
- It's important to remember that the MUC works only for
constant-coefficient linear differential operators \(L\)
(and even then, only for certain functions \(g\) in
"\(L[y]=g\)"). That can be easy to forget when doing Chapter 4
exercises, since virtually all the DEs in these exercises are
constant-coefficient. (Remember that a linear DE
\(L[y]=g\) is called a constant-coefficient equation
if \(L\) is a constant-coefficient operator; the function \(g\) is
irrelevant to the constant/non-constant-coefficient
classification.)
-
In class, for the sake of simplicity and
time-savings, for second-order equations
I've consistently been using the letter \(t\) for the
independent variable and the letter \(y\) for the dependent
variable in linear DE's. The book generally does this in Chapter 4
discussion as well, but not always in
the exercises—as I'm sure you've noticed. For each DE
in the book's exercises, you can still easily tell which variable is
which: the variable being differentiated (usually indicated with
"prime" notation) is the dependent variable.
While you're learning methods, it's
perfectly fine as an intermediate step to replace
variable-names with the letters you're most used to, as long as,
when writing your final answer, you remember to switch your
variable-names back to what they were in the problem you were
given. On exams, some past students have simply written a note
telling me how to interpret their new variable-names. No.
[Not if you want 100% credit for an otherwise correct answer.
That translation is your job, not mine. Writing your
answer in terms of the given variables accounts for part of the
point-value and time I've budgeted for.]
Do these non-book exercises on the
Method of Undetermined Coefficients. The answers to these
exercises are here. (These links
are also on the Miscellaneous Handouts page.)
|
M 3/24/25 |
4.5/ 9–12, 14–23, 27, 29, 31, 32, 34–36.
In #23,
the same comment as for 4.4/12 applies.
Problem 42b (if done correctly) shows
that the particular solution of the DE in part (a) produced by the
Method of Undetermined Coefficients actually has physical
significance.
4.5 (continued)/ 37–40.
In these, note that you are
not being asked for the general solution (for which you'd need
to be able to solve a third- or fourth-order homogeneous linear
DE, which we haven't yet discussed explicitly—although you would
likely be able to guess correctly how to do it for
the DEs in exercises 37–40). Some tips for 38 and 40 are
given below.
As mentioned in class (with less precision),
in a
constant-coefficient differential equation \(L[y]=g\), the functions
\(g\) to which the MUC applies are the same regardless of the order
of the DE, and, for a given \(g\), the MUC form of a particular
solution is also the same regardless of the order of the DE. The
degree of the characteristic polynomial is the same as the order of
the DE (to get the characteristic polynomial, just replace each
derivative appearing in \(L[y]\) by the corresponding power of
\(r\), remembering that the "zeroeth" derivative—\(y\)
itself—corresponds to \(r^0\), i.e. to 1, not to \(r\).)
However, a polynomial of degree greater than 2 can have roots of
multiplicity greater than 2. The possibilities for the exponent
"\(s\)" in the general MUC formula (for functions of "MUC type" with
a single associated "\(\alpha + i\beta\)") range from 0 up to the
largest multiplicity in the factorization of \(p_L(r)\).
Thus the only real difficulty in applying the
MUC when \(L\) has order greater than 2 is that you may have to
factor a polynomial of degree at least 3, in order to correctly
identify root-multiplicities. Explicit factorizations are possible
only for some such
polynomials. (However, depending on the
function \(g\), you may not have to factor \(p_L(r)\) at all. For an
"MUC type" function \(g\) whose corresponding complex number is
\(\alpha +i \beta\), if \(p_L(\alpha +i \beta)\neq 0\), then
\(\alpha +i \beta\) is not a characteristic root, so the
corresponding "\(s\)" is zero.) Every cubic or
higher-degree characteristic polynomial arising in this textbook is
one of these special, explicitly factorable polynomials (and even
among these special types of polynomials, the ones arising in the
book are very simple):
- In all the problems in this textbook in which
you have to solve a constant-coefficient, linear DE of order
greater than two, the corresponding characteristic polynomial
has at least one root that is an integer of small absolute
value (usually 0 or 1). For any cubic polynomial \(p(r)\),
if you are able to guess even one root, you can factor the whole
polynomial. (If the root you know is \(r_1\), divide \(p(r)\) by
\(r-r_1\), yielding a quadratic polynomial \(q(r)\). Then
\(p(r)=(r-r_1)q(r)\), so to complete the factorization of
\(p(r)\) you just need to factor \(q(r)\). You already know how
to factor any quadratic polynomial, whether or not it has
easy-to-guess roots, using the quadratic formula.)
From the book's examples and exercises, you
might get the impression that plugging-in integers, or perhaps
just plugging-in \(0\), \(1\), and \(-1\), is the only tool for
trying to guess a root of a polynomial of degree greater than 2.
If you were a math-team person in high school, you should know
that this is not the case. If you know the
Rational Root Theorem then for all the cubic characteristic
polynomials arising in this textbook, you'll be able to guess an
integer root quickly. If you do not know the Rational
Root Theorem, you will still be able to guess an integer
root quickly, but perhaps slightly less quickly. (A worked cubic
factorization appears after this list of tips.)
- For problem
38, note that if all terms in a polynomial \(p(r)\)
have even degree, then effectively \(p(r)\) can be treated as a
polynomial in the quantity \(r^2\). Hence, a polynomial of the form
\(r^4+cr^2+d\) can be factored into the form \((r^2-a)(r^2-b)\),
where \(a\) and \(b\) either are both real or are complex-conjugates
of each other. You can then factor \(r^2-a\) and \(r^2-b\) to get a
complete factorization of \(p(r)\). (If \(a\) and \(b\) are not real,
you may not have learned yet how to compute their square roots, but
in problem 38 you'll find that \(a\) and \(b\) are real.)
You can also do problem 38 by extending the
method mentioned above for cubic polynomials. Start by guessing one
root \(r_1\) of the fourth-degree characteristic polynomial \(p(r)\).
(Again, the authors apparently want you to think that the way to find
roots of higher-degree polynomials is to plug in integers, starting
with those of smallest absolute value, until you find one that works.
In real life, this rarely works—but it does work in all the
higher-degree polynomials that you need to factor in this
book; they're misleadingly fine-tuned.)
Then
\(p(r)=(r-r_1)q_3(r)\), where \(q_3(r)\) is a cubic polynomial that you
can compute by dividing \(p(r)\) by \(r-r_1\). Because of the
authors' choices, this \(q_3(r)\) has a root \(r_2\) that you should be
able to guess easily. Then divide \(q_3(r)\) by \(r-r_2\) to get a
quadratic polynomial \(q_2(r)\)—and, as mentioned above, you
already know how to factor any quadratic polynomial.
- For
problem 40, you should be able to recognize that \(p_L(r)\) is \(r\)
times a cubic polynomial, and then factor the cubic polynomial by
the guess-method mentioned above (or, better still, recognize that
this cubic polynomial is actually a perfect cube).
4.5 (continued)/ 41, 42, 45.
Exercise 45 is a nice (but
long)
problem that requires you to combine several things
you've learned. The strategy is similar to the approach
outlined in Exercise 41. Because of the "piecewise-expressed" nature of the
right-hand side of the DE, there is a sub-problem on
each of three intervals: \(I_{\rm left}= (-\infty,
-\frac{L}{2V}\,] \), \(I_{\rm mid} = [-\frac{L}{2V},
\frac{L}{2V}] \), \(I_{\rm right}= [\frac{L}{2V},
\infty) \). The solution \(y(t)\) defined on the whole
real line restricts to solutions \(y_{\rm left}, y_{\rm
mid}, y_{\rm right}\) on these intervals.
You are given that \(y_{\rm left}\)
is identically zero. Use the
terminal values \(y_{\rm left}(- \frac{L}{2V}), {y_{\rm
left}}'(- \frac{L}{2V})\), as the initial values \(y_{\rm
mid}(- \frac{L}{2V}), {y_{\rm mid}}'(- \frac{L}{2V})\). You then have
an IVP to solve on \(I_{\rm mid}\). For this, first find a
"particular" solution on this interval using the Method of
Undetermined Coefficients (MUC). Then, use this to obtain the general
solution of the DE on this interval; this will involve constants \(
c_1, c_2\). Using the IC's at \(t=- \frac{L}{2V}\), you obtain specific
values for \(c_1\) and \(c_2\), and plugging these back into the general
solution gives you the solution \(y_{\rm mid}\) of the relevant IVP on
\(I_{\rm mid}\).
Now compute the terminal values
\(y_{\rm mid}(\frac{L}{2V}), {y_{\rm
mid}}'(\frac{L}{2V})\), and use them as the initial
values
\(y_{\rm right}(\frac{L}{2V}), {y_{\rm
right}}'(\frac{L}{2V})\). You then have a new IVP to
solve on \(I_{\rm right}\). The solution,
\(y_{\rm right}\), is what you're looking for in part (a) of the
problem.
If you do everything correctly (which may
involve some trig identities, depending on how you do certain steps),
under the book's simplifying assumptions \(m=k=F_0=1\) and \(L=\pi\),
you will end up with just what the book says: \(y_{\rm right}(t) =
A\sin t\), where \(A=A(V)\) is a \(V\)-dependent constant
(i.e. constant as far as \(t\) is concerned, but a function
of the car's speed \(V\)). In part (b) of the problem you are interested in the
function \(|A(V)|\), which you may use a graphing calculator or
computer to plot. The graph is very interesting.
Note: When using MUC to find a
particular solution on \(I_{\rm mid}\), you have to handle the cases
\(V\neq 1\) and \(V = 1\) separately. (If we were not making the
simplifying assumptions \(m = k = 1\) and \(L=\pi\), these two cases
would be \(\frac{\pi V}{L}\neq \sqrt{\frac{k}{m}}\) and \(\frac{\pi
V}{L}= \sqrt{\frac{k}{m}}\), respectively.) Using \(s\) for the
multiplicity of a certain number as a root of the characteristic
polynomial, \(V\neq 1\) puts you in the \(s= 0\) case, while \(V = 1\)
puts you in the \(s= 1\) case.
|
W 3/26/25 |
No new homework. Just make sure you're caught up on the old!
|
F 3/28/25 |
Do the (newly added)
non-book problem 13.
(This is a multi-part problem.)
I'm posting this too late for the 3/28 due-date
to be realistic. But if you can read through the problem
before the Friday class, that would be helpful.
Exam 2 will be either Wednesday or Friday next
week. I'll decide as soon as possible after the Friday lecture, and
try to announce it by Friday evening. The cutoff on material is
still TBD, but you can start studying without knowing
the last thing you'll have to study.
|
M 3/31/25 |
(This should have been in a
pre-Spring-Break
assignment.) On the Miscellaneous Handouts page, there's a section with
several MUC-related handouts. Look at the "granddaddy" file and
read the accompanying "Read Me" file, which is essentially a long
caption for the diagram in the "granddaddy file".
4.7/ 29, 31, 34a. (In #29, assume that the functions \(p\) and
\(q\) are linearly independent on the interval \( (a,b)\) . In
#34, assume that the interval of interest is the whole real
line.)
  For the above exercises, you don't have to
have read Section 4.7; we've covered everything necessary in class.
For 34a, we did that work
several weeks ago, but that makes the exercise good review for an
upcoming exam.
Read or skim Section 4.7 up to, but not including, Theorem 7
(Variation of Parameters). The only part of this that we have
not already covered in class is the part that starts after
Definition 2 and ends with Example 3.
Read "Correction to
book" at the end of this assignment.
4.7 (continued)/ 9–14, 19, 20
Reminder about some terminology. As I've
said in class, "Characteristic equation" and "characteristic
polynomial" are things that exist only for constant-coefficient
DEs. This terminology should be avoided in the setting of
Cauchy-Euler DEs (and
was avoided for these DEs in early editions of our
textbook). The term I used in class for
equation (7) on p. 194, "indicial equation", is what's used in
most textbooks I've seen, and really is better
terminology—you (meaning the book's authors) invite
confusion when you choose to give two different meanings to the
same terminology.
In our textbook, p. 194's equation (7) is actually introduced
twice for
Cauchy-Euler DEs, the second time as Equation (4) in Section
8.5.
For some reason, the authors
give the terminology "indicial equation" only in Section 8.5.
Do non-book problem
14. (You'll need this before trying the exercises below.)
4.7 (continued)/ 15–18, 23ab.
In #23, ignore the
first sentence ("To justify ...").
Problem 23b, with \(f=0\), shows that the
indicial equation for the Cauchy-Euler DE is the same as
the characteristic equation for the associated
constant-coefficient DE obtained by the Cauchy-Euler
substitution \(t=e^x\). (That's if \(t\) is the independent
variable in the given Cauchy-Euler equation; the substitution
leads to a constant-coefficient equation with independent
variable \(x\).) This is one of the reasons for keeping
the terminology "indicial equation" and "characteristic
equation" distinct.
In my experience it's unusual to hybridize the
terminology and call the book's Equation (7) the characteristic
equation for the Cauchy-Euler DE, but you'll need to be
aware that that's what the book does.
Check directly (i.e. without using complex-valued functions)
that if the indicial equation for a
second-order homogeneous Cauchy-Euler DE
\(at^2y''+bty'+cy=0\) has complex roots \(\alpha \pm
i\beta,\) with \(\beta\neq 0\), then the functions
\(y_1(t)=t^{\alpha}\cos(\beta \ln t)\) and
\(y_2(t)=t^{\alpha}\sin(\beta \ln t)\) are solutions of the DE
on the interval \( (0,\infty) \).
In class, when we substituted "\(y=t^r\)"
into a homogeneous Cauchy-Euler DE \(L[y]=0\) on the interval \(
(0,\infty) \), in the hope that some \(r\)'s might yield solutions,
we assumed that \(r\) was real and used the "power-differentiation
rule" $$\frac{d}{dt} t^r=rt^{r-1},$$ together with the
"addition-of-exponents rule" \(t^a t^b =t^{a+b}\),
and found that \(L[t^r]=
q_L(r)\, t^r\), where \(q_L(r)\) is the indicial polynomial. This is how
we deduced that if \(r_1\) is a real root of \(q_L\), then
\(L[t^{r_1}]=0\).
If \(r_1\) is a non-real root, we would like to be
able to conclude that \(y=t^{r_1}\) is a complex solution
of \(L[y]=0\), and therefore that the real and imaginary parts of
\(t^{r_1}\) are real solutions of \(L[y]=0.\) But both the
power-differentiation rule and the addition-of-exponents rule
are rules whose validity we know, so far, only when \(r\) is
real, so our calculation of \(L[t^r]\) does not yet justify
concluding that if \(r_1=\alpha+i\beta\) is a non-real root of \(q_L\),
then \(t^{r_1}\), or its real and imaginary parts, are solutions
of \(L[y]=0\).
There are two ways of remedying this ignorance. One way is
what you did in the previous problem: two functions
\(y=t^\alpha \cos(\beta\ln t)\) and \(y=t^\alpha\sin(\beta\ln t)\) (which happen
to be the real and imaginary parts of \(t^{\alpha+i\beta}\)) either
satisfy \(L[y]=0\), or they don't. The DE doesn't care
whether we
got these functions by Vulcan logic or in fevered
hallucinations; it cares only that they work.
The second way, of course, is to show that the
differentiation and addition-of-exponents rules that we used
are valid even for complex exponents. That is the next
exercise below.
Power functions with positive base and complex exponent
- Using the definition \(t^r:=e^{r\ln t}\), where \(t\in
(0,\infty)\) and \(r\) is a complex number (possibly real), show that
$$t^{r+s}
= t^r t^s\ \ \ \ \ \ (*)$$ for any complex numbers \(r\) and \(s\),
and any real \(t>0\).
Here and below, remember that there is no
such thing as "proof by notation". Even for arbitrary real
exponents \(r\) and \(s\), without the definition of "\(t\) to an
arbitrary real exponent" in terms of the exponential and natural log
functions,
equation (*) is by no means obvious when \(r\) and \(s\) are
not integers.
Choosing the same notation for "\(t\) to a power"
whether or not the exponent is an integer, cannot imply any
algebraic rules for non-integer exponents. The fact that the
integer-exponent rules extend to more general exponents is beautiful
and very convenient, but it's something we have to derive; the
choice of
notation can't make something true or false. The notation
\(t^r\) we're using was chosen to
reflect and remind us
of various properties we're familiar with for integer exponents.
The properties drive the choice of notation, not the other way
around.
-
Check that when \(r=-1\)
our new, general definition of \(t^r\) for complex \(r\) and real
\(t>0\)
is consistent with the integer-exponent definition "\(t^{-1}=1/t\)"
for real \(t\neq 0\).
- As we showed in class several weeks ago,
the Chain Rule is valid for
functions of the form \(t\mapsto f(h(t))\), where \(t\) is a real
variable, and \(h\) and \(f\) are, respectively, a real-valued and
a complex-valued differentiable function of a real
variable.
I.e., we
showed
that for \(h\)
and \(f\) as above, the function \(t\mapsto f(h(t))\) is
differentiable, and
$$\frac{d}{dt}\big(f(h(t))\big)=h'(t)\, f'(h(t)).$$
Observe that our
definition of \(t^r\)
can be expressed as \(f(h(t))\), with \(f(t)=e^{rt}\) and \(h(t)=\ln
t\).
Use the Chain Rule and (*) to show that for any complex number \(r\),
the function \(t\mapsto t^r\) is differentiable on the
interval \( (0,\infty)\), and $$\frac{d}{dt}
t^r=rt^{r-1}.$$
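If you'd like a purely numerical sanity check of the two rules above before proving them (this is only a check, not a substitute for the proofs; the snippet below is my own illustration, and the names in it are made up), something along these lines in Python should do it:

    # Numerical sanity check (NOT a proof) of the rules for t^r with complex r,
    # using the definition t^r := exp(r * ln t) for real t > 0.
    import numpy as np

    def power(t, r):
        # t^r defined as e^{r ln t}; valid for real t > 0 and complex r
        return np.exp(r * np.log(t))

    r = 2.0 + 3.0j      # an arbitrary non-real exponent
    s = -0.5 + 1.0j     # another arbitrary exponent
    t = 1.7             # an arbitrary point in (0, infinity)
    h = 1e-6            # step size for a central difference quotient

    # Check t^(r+s) = t^r * t^s:
    print(power(t, r + s), power(t, r) * power(t, s))

    # Check d/dt t^r = r * t^(r-1) by comparing a difference quotient
    # with the claimed derivative:
    approx = (power(t + h, r) - power(t - h, r)) / (2 * h)
    claimed = r * power(t, r - 1)
    print(approx, claimed)   # should agree to roughly 6 significant digits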
Correction to book.
On p. 194, the sentence "If
\(r\) is complex ..." falsely implies that the identity
\(t=e^{\ln t}\), together with the definition \(e^{i\th}=\cos\th
+ i\sin\th\), is all that's needed for the sequence of
equations displayed misleadingly as a derivation of the
formula \(t^{\a +i\b}=t^\a(\cos(\b\ln t)+i\sin(\b\ln t))\).
Sorry, no. The very first equation in this
"derivation", \(t^{\a+i\b} =t^\a
t^{i\b}\), assumes that the not-yet-defined
"complex exponential with real, positive base \(t\)" has this
property, just because the formula is true for real
exponents. There is no such thing as "proof by notation".
One correct version of the book's presentation is to start
by defining \(t^{\a+i\b}\) to be
\(e^{(\a+i\b)\ln t}\) (a definition suggested by
the fact that "\(\ t^\a = e^{\a \ln t}\ \) "
is the correct definition of \(t^\a\) for
real \(t>0\) and [possibly irrational] real \(\a\)).
Using this definition, we then have
$$ \begin{array}{rclll} t^{\a+i\b} &\ =\ & e^{(\a+i\b)\ln t} & =&
e^{\a\ln t +i\b\ln t} \\ &&& \ =\ & \ e^{\a\ln t} \ e^{i\b\ln t} \ \ \
\mbox{(by definition of $e^z$ for complex $z$)}, \end{array} $$ as
well as \( e^{\a\ln t} = t^\a\) (by definition) and \(e^{i\b\ln
t} =t^{i\b}\) (using the definition of
\(t^{\a+i\b}\) with \(\a=0\)). Combining these yields
\(t^{\a+i\b} =t^\a t^{i\b}\). Furthermore,
\(t^{i\b}=e^{i\b\ln t} = \cos(\b\ln t)+i\sin(\b\ln t)\) by definition of
\(e^{i\th}\) when \(\th\) is real, and therefore
$$t^{\a+i\b}=t^\a t^{i\b} = t^\a (\cos(\b\ln t)+i\sin(\b\ln t)).$$
|
W 4/2/25
|
Second midterm exam
Fair-game material is everything we've covered up through the Monday
3/31 lecture (including homework assigned with due-dates up through
3/31).
My second midterm from last semester is now posted on Canvas,
under Files. There's a little more material that's fair game for
your exam than there was for that one.
We'll use the same location and time as for the first exam.
|
F 4/4/25
|
Read (or at least skim) Section 4.6, but without the (implicit)
assumption in equation (1), p. 187, that the linear DE has constant
coefficients. Replace that assumption with: the coefficients are
continuous functions on an interval \(I\), on which \(a(t)\) is
nowhere 0. The method (and the argument that it works) is no
different in this more general situation.
When I present the method in class, my starting-point is a
DE that's already in standard linear form: \(y''+py'+qy=g\);
i.e. I've already divided through by the "\(a\)" (not necessarily
constant) in equation (1). For me, that dividing-through is Step
0. So, in place of the second equation in (9), I'll have one
whose RHS is simply \(g\).
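For reference (this is the standard-form version of the setup; it is what the book's pair of equations reduces to once the "\(a\)" is divided out): with the DE in standard form \(y''+py'+qy=g\), and with \(\{y_1,y_2\}\) a FSS of the associated homogeneous equation, we look for a particular solution \(y_p=v_1y_1+v_2y_2\), where the functions \(v_1,v_2\) are determined (up to constants of integration) by
$$\begin{array}{rcl} v_1'y_1+v_2'y_2 &=& 0,\\ v_1'y_1'+v_2'y_2' &=& g. \end{array}$$
Solving this pair of equations algebraically for \(v_1'\) and \(v_2'\), and then integrating, gives \(v_1\) and \(v_2\).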
Get started on the exercises due Monday, based on your
reading, if possible.
|
M 4/7/25 |
4.6/ 2, 5–8, 9, 10, 11, 12, 15, 17, 19 (first sentence only).
Remember that to apply Variation of
Parameters as presented in class, you must first put the DE in
"standard linear form", with the coefficient of the second-derivative
term being 1 (so, divide by the coefficient of this term, if the
coefficient isn't 1 to begin with). The book's approach to remembering
this is to cast the two-equations-in-two-unknowns system as (9) on
p. 188.
This is fine, but my personal preference is to put
the DE in standard form from the start, in which case the "\(a\)" in
the book's pair-of-equations (9) disappears.
One good piece of advice in the book is the sentence after
the box on p. 189: "Of course, in step (b) one could use the
formulas in (10), but [in examples] \(v_1(t)\) and
\(v_2(t)\) are so easy to derive that you are advised not to
memorize them." (This advice applies even if you've put the DE
into standard linear form, so that the coefficient-function \(a\) in
equation (10) is 1.)
Incorrectly memorized formulas are worthless. If you attempt
to memorize a formula instead of learning the underlying method, and
your formula is wrong in any way (e.g. a sign is wrong), or
you misuse the correct formula in any way, don't expect
to get much partial credit on an exam problem.
4.7/ 24cd, 37–40. Some comments on these exercises:
- In #37 and #39, the presence of the expression \(\ln
t\) in the given equation means that, automatically, we're restricted
to considering only the domain-interval \( (0,\infty) \). In #40, the
presence of \(t^{5/2}\) has the same effect, but the instructions
explicitly say, anyway, to restrict attention to the positive
\(t\)-interval. But in #38, there is no need to restrict attention to
\( (0,\infty) \); you should solve on the negative-\(t\) interval as
well as the positive-\(t\) interval.
- On \((0, \infty)\),
the DEs in all these exercises can be solved either by
using the Cauchy-Euler
substitution "\(t=e^x\)" or
by
first using the indicial equation
just to find a FSS for the associated homogeneous DE and then
using Variation of Parameters for the non-homogeneous DE. Both methods
work. I've deliberately assigned exercises that have you solving some
of these equations by one method and some by the other, so that you
get practice with both approaches. Neither is automatically faster
or "better" than the other.
- Regarding #38: in contrast to the situation you saw for homogeneous
Cauchy-Euler DEs in non-book homework problem 14,
if a function
\(y\) is a solution to a non-homogeneous
DE on \( (0, \infty) \), then the function
\(\tilde{y}\) on \( (-\infty,0) \) defined by \(\tilde{y}(t)
=y(-t)\) need not be a solution of the same non-homogeneous
DE. So in #38 you'll need to do something a little different to
get a solution to the non-homogeneous equation on \(
(-\infty,0) \).
- In #40, to apply Variation of Parameters as I
presented it in class, don't forget to put the DE into standard form
first!
But after you've done the problem
correctly, I recommend going back and seeing what happens if you
forget to divide by the coefficient of \(y''\). Go as far as seeing
what integrals you'd need to do to get \(v_1'\) and \(v_2'\). You
should see that if you were to do these (wrong) integrals, you'd be
putting in a lot of extra work (compared to doing the right
integrals), all to get the wrong answer in the end. I've made this
mistake on this specific problem several times in the
past!
Redo 4.7/40 by starting with the substitution
\(y(t)=t^{-1/2}u(t)\)
and seeing where
that takes you.
(This should
answer the question, "How did anyone ever figure out, or guess,
a FSS for the homogeneous DE in this problem?" Most, if not all,
of the homogeneous linear DEs for which anyone has ever figured
out a completely explicit FSS, are DEs that can be
"turned into" constant-coefficient DEs by some clever
substitution! Some substitutions change the independent variable
[e.g. the Cauchy-Euler substitution in 4.7/23]; some change the
dependent variable [e.g. the one I just gave you for
4.7/40].)
|
W 4/9/25 and
F 4/11/25 |
This is being posted too late for
you to get much of it done before Wednesday's class. Do as much
as you can, as soon as you can, and do the rest by the time
of Friday's class. There are some additional exercises
on Variation of Parameters that I plan to add to this assignment,
so check this page for updates several times
over the next few days.
Skim Section 6.1, a
lot of which is review of material we've covered already.
I'm
not fond of the way the section is organized or the material is
presented. Among other things:
- There is too much emphasis on the Wronskian,
especially since most students in
their first DE course haven't yet learned how to compute (or define) a
determinant that isn't \(2\times 2\) or \(3\times 3\). "Fundamental set of solutions" (or "fundamental
solution set") should not be defined using the
Wronskian.
- Linear dependence/independence of functions should
be introduced sooner, definitely before the Wronskian.
For easy reference: a set of functions
\(\{f_1, f_2, \dots, f_m\}\) on an interval \(I\) is:
- linearly dependent (on \(I\))
if there are constants \(c_1, c_2, \dots, c_m\), not all
zero, such that \(c_1f_1+c_2f_2+\dots +c_mf_m =0\) (the
constant function 0 on \(I\)); equivalently, if at least one
of the functions \(f_i\) is a linear combination of the
others.
- linearly independent (on \(I\))
otherwise (i.e. if the only constants \(c_i\) for which
\(c_1f_1+c_2f_2+\dots +c_mf_m\) is identically 0 on
\(I\) are \(c_1=c_2=\dots = c_m=0\); equivalently,
if no \(f_i\) is a linear combination of the
others). (A quick illustration of both notions appears just after this list.)
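A quick illustration of both notions (my examples, not the book's): on any interval \(I\), the set \(\{e^t,\ e^{-t},\ \cosh t\}\) is linearly dependent, since \(\cosh t=\frac{1}{2}e^t+\frac{1}{2}e^{-t}\), i.e. \(\frac{1}{2}e^t+\frac{1}{2}e^{-t}+(-1)\cosh t\) is the constant function 0 on \(I\). By contrast, \(\{1,\ t,\ t^2\}\) is linearly independent on any interval, since the only way a polynomial \(c_1+c_2t+c_3t^2\) can be identically zero on an interval is to have \(c_1=c_2=c_3=0\).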
Here is how the material in Section 6.1 should be organized
(I suggest using this outline to guide your
thinking about the material in this section):
- Immediately after the "As
a consequence ..." sentence near the bottom of p. 320, before
anything else is said (or the book's "Is
it true ...?" question is asked), the term fundamental set
of solutions (FSS) should be defined. Specifically,
for
a homogeneous linear DE
\(L[y]=0\) on an interval \(I\), a fundamental set
of solutions (FSS) should be defined in one of the
following equivalent ways.
(i) A finite set of
functions \( \{y_1, \dots, y_m\} \) on \(I\)
for which the general solution of \( L[y]=0\) on \(I\)
is the set of linear combinations \( \{c_1y_1+ \dots
+c_m y_m\} \), and for which \(m\) is as small as
possible among all such sets of
functions.
(ii) A finite, linearly independent set
of solutions \( \{y_1, \dots, y_m\} \) of \(
L[y]=0\) on \(I\) such that every solution of
\(L[y]=0\) on \(I\) is a linear combination of
\( \{y_1, \dots, y_m\}. \)
(iii) A finite, linearly independent set of solutions \(
\{y_1, \dots, y_m\} \) of \( L[y]=0\) on \(I\) such that
the general solution is the set of linear combinations
\( \{c_1y_1+ \dots +c_m y_m\} \).
Note that in
definition (i), a consequence of "\(m\) is as small as possible" is
that \( \{y_1, \dots, y_m\} \) is linearly independent. (Why?) Thus,
whichever of (i), (ii), or (iii) is used, a FSS
is automatically linearly
independent.
The concept of
"FSS" really has nothing to do with differential equations,
intrinsically; it is a concept that comes straight from linear
algebra. In linear algebra, given a homogeneous linear equation
\(L[y]=0\) (where \(L\) is a linear operator on the "space of inputs
\(y\)"), what we are calling "fundamental set of solutions" would be
called "basis of the solution space, provided that the solution space
is finite-dimensional". For a homogeneous linear equation,
"solution space" means the same thing as
"solution set"—the set of all solutions; equivalently,
the general solution—but with an added reminder that this set is
"closed under taking linear combinations", meaning that any linear
combination of solutions is a solution (of the same equation).
In the DE setting, the Wronskian is
an interesting function and a useful
tool
for proving various theorems, but, conceptually and logically,
it absolutely does not belong in
the definition of "FSS"; putting it there obscures the "basis of
the solution-space" concept.
- Questions that should then be asked are (1) whether a
linear, homogeneous DE always has a FSS, and (2) if/when
such a DE has a FSS, whether the number of functions (the
\(m\) above) is always the same as the order of the operator, as
we have observed it to be in the second-order, constant-coefficient
case. (Question 1 amounts to: do there always
exist finitely many solutions \(y_1, \dots, y_m\) of
\(L[y]=0\) on \(I\) such that every solution of \(L[y]=0\)
on \(I\) is a linear combination of \(\{y_1, \dots, y_m\}\)? If
there is any such set of solutions, then there is a smallest \(m\)
for which there is such a set.)
- As a (partial, but very important) answer to questions (1) and
(2) above, a theorem should then be stated asserting
that, for an \(n^{\rm th}\)-order homogeneous linear DE
\(L[y]=0\) in standard form on an interval \(I\), with continuous
coefficient-functions:
(1) a FSS of \(L[y]=0\) on \(I\) exists (in
fact, infinitely many FSS's of this DE on \(I\)
exist);
(2) any such FSS has exactly \(n\) functions; and
(3) a set of solutions \( \{y_1, \dots, y_n\} \)
of \( L[y]=0\) on \(I\) is a FSS if and only if this set of
functions is linearly independent on \(I\).
(This is what the book's
Theorems 2 and 3, combined, should have said. For a concrete
illustration of the FSS definition and of this theorem, see the
small example just after this outline.)
-
The Wronskian should
then be introduced (and a reference for the definition and
properties of \(n\times n\) determinants for general \(n\) should be
given), and used as a tool for proving this theorem
and for checking whether a set of solutions of \(L[y]=0\) is
linearly independent. (Again:
a tool, not part of a definition of anything
important. Introducing the Wronskian any other way distracts from
concepts that are actually important.)
- Notation such as "\(y_h\)" should be introduced for the
general solution of the associated homogeneous equation. The
general solution is best treated as the set of all
solutions, not as a typical element of this
set. (The book does the opposite after
Theorem 2, as do many other books—generally, the same ones
that use indefinite-integral notation for an arbitrary
but specific antiderivative, rather than as the set
of all antiderivatives. Such a definition is defensible, but
misguided [in my opinion, of course], and should have
been retired by the 1960s if not earlier.)
- Theorem 4 should be stated and proved. But after equation (28),
before the next sentence, something like the following should be
inserted: "Then the general solution of (27) on \((a,b)\) is
\(y=y_p+y_h.\)" Then the book's next sentence (the one concluding
with
equation (29)) should be given, with "Then" replaced by "Thus".
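A small example illustrating the outline above (again, my example, not the book's): for the DE \(y''+y=0\) on \(I=(-\infty,\infty)\), the set \(\{\cos t,\ \sin t\}\) is a FSS: it is linearly independent, and the general solution is the set of linear combinations \(\{c_1\cos t+c_2\sin t\}\). The larger set \(\{\cos t,\ \sin t,\ \cos t+\sin t\}\) also has the property that every solution is a linear combination of its elements, but it is linearly dependent, so it is not a FSS (it fails each of the equivalent definitions (i)–(iii)). Consistent with the theorem above, every FSS of this second-order DE has exactly \(n=2\) elements.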
6.1/ 1–6, 7–14, 19, 20, 23.
Do
7–14 without using Wronskians.
The sets of
functions in these problems are so simple that, if you know
your basic functions
(see The Math
Commandments), Wronskians will only increase the
amount of work you have to do. Furthermore, in these
problems, if you find that
the Wronskian is zero then you can't conclude anything (from
that alone) about
linear dependence/independence. If you do not know your basic
functions, then Wronskians will not be of much help.
Read Section 6.2.
6.2/ 1, 9, 11, 13, 15–18. The characteristic polynomial for #9
is a perfect cube (i.e. \( (r-r_1)^3\) for some \(r_1\)); for #11 it's
a perfect fourth power.
For some of these problems and ones later in Section 6.3, it may help you
to first review my
comments about factoring
in the assignment due 3/24/25.
Read Section 6.3.
6.3/ 1–4, 29, 32. In #29, ignore the instruction to use the
annihilator method;
just
use
MUC and superposition.
|
F 4/11/25
|
No new homework
|
M 4/14/25
|
Read Section 7.1.
In Section 7.2:
- Read Examples 1–4 and the box
"Linearity of the Transform". (A sample computation of the type
those examples illustrate appears just after this list.)
- Skim Table 7.1 (p. 356).
- Read the definitions of "piecewise continuity" (p. 357).
- On p. 359, read the box "Exponential Order \(\a\)",
the box "Conditions for Existence of the transform", and the
material in between.
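As a preview of the sort of computation done in those examples (this particular computation is completely standard): for \(s>a\),
$${\mathcal L}\{e^{at}\}(s)=\int_0^\infty e^{-st}e^{at}\,dt=\int_0^\infty e^{-(s-a)t}\,dt=\frac{1}{s-a},$$
while for \(s\leq a\) the improper integral diverges, which is where the restriction "\(s>a\)" in Table 7.1 comes from.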
In Section 7.3, read the boxes with Theorems 3, 4, and 5. Skim
the box with Table 7.2 to familiarize yourself with it.
In Section 7.4, read the boxes "Inverse Laplace Transform"
and "Linearity of the Inverse Transform". On p. 370, read the
paragraph that starts with "Given the choice ..." to make yourself
aware that the inverse transform often requires you to do a
partial-fractions decomposition of some rational function of
\(s\). If you need to review the method of partial fractions,
now would be a good time to do so.
In Section 7.5, skim from the beginning up through the end of
Example 1, just to get a rough idea of how Laplace Transforms
are going to be used to solve (certain)
IVPs.
However, don't
think for a minute that what you see after the line beginning
"Substituting these expressions ..." is acceptable writing for a
math textbook or a math instructor. "Equation equation equation
equation", three non-sequiturs in a row, can be accepted
from students on exams, but not from anyone who purports
to be teaching. There are supposed to be words
between the equations, words that make clear how each equation
is related to the next one. Teachers are supposed to help students
get rid of bad habits, not reinforce them.
Time permitting, look at Section 7.6. This is the first
place in which the Laplace Transform starts to be
useful. But all the build-up in the earlier
sections is needed.
|
W 4/16/25
|
Look again at Table 7.1, p. 356. The restrictions on \(s\)
(e.g. \(s>0\) or \(s>a\)) come from the definition of the
transform, not the "implied domain" of the formula. For any
Laplace-transformable function \(f\), the domain of the Laplace
Transform \(F\) is always one of the following \(s\)-intervals:
\((s_0,\infty)\) or \([s_0,\infty)\) for some \(s_0\in \bfr\), or
\((-\infty,\infty)\). Thus, in all of these cases, \(F(s)\) is always
defined at least on some interval of the form \((s_0,\infty)\),
i.e. for all \(s\) greater than some \(s_0\). We state this
qualitatively by saying that \(F(s)\) is defined for (all) \(s\)
sufficiently large. Table 7.1 tells you how large "sufficiently
large" is for the functions in the table, but this information turns
out not to matter, so don't focus on (or get distracted by) it.
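For example, \({\mathcal L}\{e^{at}\}(s)=\frac{1}{s-a}\) has domain \((a,\infty)\) (so \(s_0=a\)), and \({\mathcal L}\{1\}(s)=\frac{1}{s}\) has domain \((0,\infty)\) (so \(s_0=0\)); in both cases \(F(s)\) is defined for all \(s\) sufficiently large.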
On your
final exam, you'll be given
this Laplace
Transform table. Familiarize yourself with where the entries
of Table 7.1 (p. 356) are located in this longer table. This
longer table comes from an older edition of your textbook that I
photocopied way back when, but is
very similar to one you can still find on the inside front cover
or inside back cover of hard-copies of the current edition, and
somewhere in the e-book (search there on "A Table of Laplace
Transforms").
Warning: On line 8 of this table, "\( (f*g)(t)\)"
is not \(f(t)g(t)\); the symbol "\(*\)" in this line denotes an
operation called convolution
(defined in Section 7.8 of the
book, which I doubt we'll get to), not simple multiplication.
For the ordinary product \(fg\) of functions \(f\)
and \(g\), there is no simple formula that expresses
\({\mathcal L}\{fg\}\) in terms of \({\mathcal L}\{f\}\) and
\({\mathcal L}\{g\}\).
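In case you're curious, the definition of convolution (from Section 7.8, up to the usual cosmetic choice of the variable of integration) is
$$(f*g)(t)=\int_0^t f(v)\,g(t-v)\,dv,$$
and the point of line 8 of the table is that, for this "product" of \(f\) and \(g\) (unlike the ordinary pointwise product), the transform of the "product" is the product of the transforms: \({\mathcal L}\{f*g\}={\mathcal L}\{f\}\,{\mathcal L}\{g\}\).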
Read Section 7.6. Note: my name
and notation (which I'll be using) for the book's "rectangular window
function \(\Pi_{a,b}\)" are gate function
\(\mbox{gate}_{a,b}\), which comes from the terminology
"logic gate" used in digital circuitry.
I've been using this name since before
the book's authors chose their own name and notation for these
functions (the first several editions of the book had
no name or notation for these functions).
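If it helps to have a formula in front of you: in terms of the unit step function \(u\), the gate function can be written as \(\mbox{gate}_{a,b}(t)=u(t-a)-u(t-b)\), which equals 1 for \(a<t<b\) and 0 for \(t<a\) and for \(t>b\). (The values at the endpoints \(t=a\) and \(t=b\) are a matter of convention, and never affect a Laplace transform; check this formula against the book's definition of \(\Pi_{a,b}\).)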
7.2/ 1–4,
10, 12,
13–20, 21–23.
In the instructions for
1–12, "Use
Definition 1" means "Use Definition 1", NOT
any
of Laplace Transforms.
But for 13–20,
do use Table 7.1 on p. 356 (as the instructions say to do),
even though we haven't derived
the formulas there, or
discussed linearity of the Laplace Transform (Theorem 1 on p. 355)
yet.
7.3/ 1–6
7.4/ 11, 13, 14, 16, 20
7.6/ 1–10
|
F 4/18/25
|
7.3/ 31
7.4/ 1–10, 21–24, 26, 27,
31. Normally, I would not assign these until after talking
about the inverse Laplace transform in class, but time is
short. If you are unable to do these based on your reading, it's
okay to wait, but then you'll have a much longer assignment due on
Monday. I'm trying to spread the homework problems out over
enough days that you'll have time to do all the problems.
To learn some shortcuts for the partial-fractions work that's
typically needed to invert the Laplace Transform, you may want
first to read the web handout
"Partial fractions and
Laplace Transform problems".
7.5/ 15, 17, 18, 21, 22. Note that in these problems, you're being
asked only to find \(Y(s)\), not \(y(t)\). (I.e. there are no
inverse transforms involved in these problems.)
Theorem 5 (p. 363) is the basic property of the Laplace transform that
lets you transform a constant-coefficient
\(n^{\rm th}\)-order linear IVP
\(L[y]=g, \ \ y(0)=\mbox{something}, y'(0)=\mbox{something},
\dots \) into an algebraic equation of the form that I wrote
in class as "\(p_L(s)Y(s) +q_{n-1}(s) = G(s)\)."
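For instance (my illustration, with made-up numbers): transforming the IVP \(y''+3y'+2y=g(t)\), \(y(0)=1\), \(y'(0)=0\), using \({\mathcal L}\{y'\}(s)=sY(s)-y(0)\) and \({\mathcal L}\{y''\}(s)=s^2Y(s)-sy(0)-y'(0)\), gives
$$(s^2+3s+2)\,Y(s)-(s+3)=G(s),$$
which has the form above with \(p_L(s)=s^2+3s+2\) (the characteristic polynomial of \(L\)) and \(q_1(s)=-(s+3)\), a polynomial of degree at most \(n-1=1\) built out of the initial data. (Depending on exactly how the formula was written in class, the sign conventions on \(q_{n-1}\) may differ; the algebra is the same either way.)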
7.5/1–8, 10, 29. These do require inverse
transforms, and are the first exercises in which you'll
actually use Laplace Transforms to solve any IVPs.
However, we have simpler ways of
solving these specific, very simple IVPs; the only reason to solve
them via Laplace Transforms is to get practice with the
Laplace Transform method. We don't start solving DEs
for which the Laplace Transform is really useful until Section 7.6.
7.6/ 11–18
|
M 4/21/25
|
7.6/ 19–32, 36ac. In 21–24, you
may skip the "Sketch the graph" part of the exercises.
For all of the above problems (or
those of a similar type) in which you solve an IVP, write your
final answer in "tabular form", by which I mean an expression
like the one given for \(f(t)\) in Example 1, equation (4),
p. 385. Do not leave your final answer in the form of
equation (5) in that example. On an exam, I would treat the
book's answer to exercises 19–33 as incomplete, and would
deduct several points. The unit step-functions and "window
functions" (or "gate functions", as I call them) should be
viewed as convenient gadgets to use in intermediate
steps, or in writing down certain differential equations (the
DEs themselves, not their solutions). The purpose of these
special functions is to help us solve certain IVPs
efficiently; they do not promote understanding of solutions.
In fact, when writing a formula for a solution of a DE, the use
of unit step-functions and window-functions
often obscures understanding of how the solution behaves
(e.g. what its graph looks like).
For example, with the least
amount of simplification I would consider acceptable, the
answer to problem 23 can be written as
$$ y(t)=\left\{\begin{array}{ll} t, & 0\leq t\leq 2, \\
4+ \sin(t-2)-2\cos(t-2), & t\geq 2.\end{array}\right.
\hspace{1in} (*)$$
The book's way of writing the answer obscures the fact that the
"\(t\)" on the first line disappears on the second
line—i.e. that for \(t\geq 2\), the solution is purely
oscillatory (oscillating around the value 4); its magnitude does
not grow forever.
Note. In equation (*), observe that
I overdefined \(y(2),\) giving it a value on the first
line and then again on the second. The only reason this is
okay is that both lines give the same value for \(y(2)\), a
reflection of the fact that \(y(t)\) is continuous.
Since solutions \(y(t)\) of differential equations are always
continuous, we are guaranteed that if our tabular form for
a piecewise-expressed solution \(y(t)\) of a DE (or IVP) is
correct, then at any "break-point" \(t_1\) we will have
\(\lim_{t\to t_1-} y(t) = y(t_1) = \lim_{t\to t_1+} y(t),\) so
we can "overdefine" \(y(t_1)\) as in equation (*) without fear
of contradicting ourselves. This provides a useful
consistency-check on our tabular-form answer: At a "break
point" \(t_1\), if overdefining \(y(t_1)\) leads to two
different values of \(y(t_1)\) on the two lines on which
\(y(t_1)\) is defined, then our answer cannot be
correct (and we should go back and find our
mistake(s)). This consistency-check is very easy to do,
so we should always do it.
In exercise 23, using trig identities the
formula for \(t\geq 2\) can be further simplified to several
different expressions, one of which is \(4+
\sqrt{5}\sin(t-2-t_0)\), where \(t_0=\cos^{-1}(\frac{1}{\sqrt{5}}) =
\sin^{-1}(\frac{2}{\sqrt{5}})\). (Thus, for \(t\geq 2\),
the solution \(y(t)\)
oscillates between a minimum value of \(4-\sqrt{5}\) and a maximum
value of \(4+\sqrt{5}\).) This latter type of simplification is important
in physics and electrical engineering (especially for electrical
circuits). However, I would not expect you to do this further
simplification on an exam in MAP 2302.
|
W 4/23/25
|
(No new homework anticipated as of Sunday night.)
|
Wednesday 4/30/25 |
Final Exam
Location: Our usual classroom
Starting time: 3:00 p.m.
As I mentioned in a recent email, the exam-date/time info on One.UF
has reverted to being wrong. (This has still not
been re-corrected.) IGNORE ONE.UF for
exam-date/time info for this class. The correct date and starting time,
Wednesday Apr. 30 at 3:00 p.m.,
have always been the ones in the syllabus.
|