u_xx + u_yy = 0
in the rectangle 0 < x < 1, 0 < y < 1, satisfying the boundary conditions
u(0,y) = 0
u(1,y) = 0
u(x,0) = f(x)
u(x,1) = 0
Am I right to assume that u(x,y) = X(x)Y(y)?
If I am, then let
X'' + p*X = 0
Y'' - p*Y = 0.
Case I: Let p = 0.
X''(x) = 0
X(x) = c1x + c2
X(0) = c2 = 0
X(1) = c1 = 0
There are only trivial solutions for this case.
Case II: Let p = -k^2.
X''(x) = k^2 * X(x)
X(x) = c1sinh(kx) + c2cosh(kx).
X(0) = 0 so c2 = 0.
X(1) = 0 so c1*sinh(k) = 0, but sinh(k) = 0 only when k = 0, so this case gives only trivial solutions too.
Okay, if those cases aren't possible, there is only one case left:
Case III: Let p = k^2.
X''(x) = -k^2 * X(x)
X(x) = c1sin(kx) + c2cos(kx)
X(0) = c2 = 0
X(1) = c1*sin(k) = 0, so for a nontrivial solution sin(k) = 0, giving k = n*Pi
Y''(y) = k^2 * Y(y)
Y(y) = c3sinh(ky) + c4cosh(ky)
Then Y(y) = c3sinh(n*Pi*y) + c4cosh(n*Pi*y)
Tell me how I would use the conditions Y(0) = f(x) and Y(1) = 0 to solve this problem!
Because Y(1) = c3sinh(n*Pi) + c4cosh(n*Pi) doesn't seem to ever equal zero unless Y(y) = 0. It seems like this problem doesn't have any solutions!
-
So far so good. The superposition principle tells us that the solution u(x, y) may be expressed as a sum of all such product functions:
u(x, y) = Σ_{n=1}^∞ (a_n cosh(nπy) + b_n sinh(nπy)) sin(nπx)
I labelled the coefficients a_n and b_n as opposed to c3 and c4 since the coefficients may be different for each different eigenvalue n²π².
Now you can apply the boundary conditions pertaining to the variable y. Notice that these are not conditions on Y alone, but rather on the whole solution u(x, y). When y = 0, we see that u is
u(x, 0) = Σ_{n=1}^∞ a_n sin(nπx) = f(x)
I've used the facts that cosh(0) = 1 and sinh(0) = 0. This tells us that the coefficients a_n must be the half-range Fourier sine coefficients of the function f(x) on the interval (0, 1). Hence
a_n = 2 ∫_0^1 f(x) sin(nπx) dx    (the factor of 2 is due to this being a half-range series)
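As a quick sanity check, this coefficient formula is easy to evaluate numerically. Here is a minimal sketch, assuming the example boundary data f(x) = x(1 - x) (not from the problem above, just a convenient test function whose sine coefficients have the known closed form 8/(nπ)³ for odd n and 0 for even n):

```python
import numpy as np

def sine_coeff(f, n, num=200_000):
    # a_n = 2 * integral_0^1 f(x) sin(n*pi*x) dx, via the midpoint rule
    x = (np.arange(num) + 0.5) / num          # midpoints of num subintervals of (0, 1)
    return 2.0 * np.mean(f(x) * np.sin(n * np.pi * x))

f = lambda x: x * (1.0 - x)                   # hypothetical boundary data
for n in range(1, 6):
    closed_form = 8.0 / (n * np.pi) ** 3 if n % 2 else 0.0
    print(n, sine_coeff(f, n), closed_form)   # the two columns should agree
```

The even coefficients vanish because x(1 - x) is symmetric about x = 1/2 while sin(nπx) is antisymmetric there for even n.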
At this stage, the a_n's are determined. Now apply the boundary condition on u at y = 1. The problem at hand is quite simple, but the approach applies even in the case that u(x, 1) = g(x) where g is not the zero function. You get
u(x, 1) = Σ_{n=1}^∞ (a_n cosh(nπ) + b_n sinh(nπ)) sin(nπx) = g(x) = 0.
The numbers cosh(nπ) and sinh(nπ) are just constants: they are known, or can at least be computed with a calculator or a table. This equation tells us that the combinations
a_n cosh(nπ) + b_n sinh(nπ)
are the half-range sine series coefficients of g(x). Since g = 0 here, they must all be zero. Recalling that the a_n's are known, you get
b_n = - coth(nπ) a_n.
All the coefficients are known. The solution is completely determined.
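To see this work end to end, one can evaluate a partial sum of the series and check both boundary conditions numerically. A sketch, again assuming the hypothetical boundary data f(x) = x(1 - x), whose sine coefficients are a_n = 8/(nπ)³ for odd n and 0 for even n. One practical note: for moderately large n, the direct combination a_n cosh(nπy) + b_n sinh(nπy) subtracts two huge, nearly equal numbers; the algebraically identical form a_n sinh(nπ(1 - y))/sinh(nπ) avoids that.

```python
import numpy as np

def a(n):
    # sine coefficients of the hypothetical f(x) = x(1 - x) on (0, 1)
    return 8.0 / (n * np.pi) ** 3 if n % 2 else 0.0

def u(x, y, N=60):
    # Partial sum of the series solution. The y-factor
    # a_n*cosh(n*pi*y) + b_n*sinh(n*pi*y), with b_n = -coth(n*pi)*a_n,
    # is rewritten as a_n*sinh(n*pi*(1 - y))/sinh(n*pi) for numerical stability.
    total = 0.0
    for n in range(1, N + 1):
        total += (a(n) * np.sinh(n * np.pi * (1.0 - y)) / np.sinh(n * np.pi)
                  * np.sin(n * np.pi * x))
    return total

print(u(0.3, 0.0))   # should be close to f(0.3) = 0.21
print(u(0.3, 1.0))   # should be 0 (the top boundary condition)
```

At y = 0 the ratio sinh(nπ(1 - y))/sinh(nπ) equals 1 and the sum reduces to the sine series for f; at y = 1 it vanishes term by term, which is exactly how the condition the original post was worried about gets satisfied.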