Classical Electromagnetism - Fitzpatrick
Classical Electromagnetism
an upper-division undergraduate level lecture course given by
Richard Fitzpatrick
Assistant Professor of Physics
The University of Texas at Austin
Fall 1997
Email: rfitzp@farside.ph.utexas.edu, Tel.: 512-471-9439
Homepage: http://farside.ph.utexas.edu/em1/em.html
1 Introduction

1.1 Major sources
The textbooks which I have consulted most frequently whilst developing course
material are:
Introduction to electrodynamics: D.J. Griffiths, 2nd edition (Prentice Hall,
Englewood Cliffs NJ, 1989).
Electromagnetism: I.S. Grant and W.R. Phillips (John Wiley & Sons, Chichester, 1975).
Classical electromagnetic radiation: M.A. Heald and J.B. Marion, 3rd edition (Saunders College Publishing, Fort Worth TX, 1995).
The Feynman lectures on physics: R.P. Feynman, R.B. Leighton, and M.
Sands, Vol. II (Addison-Wesley, Reading MA, 1964).
1.2 Outline of course
The main topic of this course is Maxwell's equations. These are a set of eight first-order partial differential equations which constitute a complete description of electric and magnetic phenomena. To be more exact, Maxwell's equations constitute a complete description of the behaviour of electric and magnetic fields.
You are all, no doubt, quite familiar with the concepts of electric and magnetic
fields, but I wonder how many of you can answer the following question. Do
electric and magnetic fields have a real physical existence or are they just theoretical constructs which we use to calculate the electric and magnetic forces
exerted by charged particles on one another? In trying to formulate an answer
to this question we shall, hopefully, come to a better understanding of the nature
of electric and magnetic fields and the reasons why it is necessary to use these
concepts in order to fully describe electric and magnetic phenomena.
At any given point in space an electric or magnetic field possesses two properties, a magnitude and a direction. In general, these properties vary from point to
point. It is conventional to represent such a field in terms of its components measured with respect to some conveniently chosen set of Cartesian axes (i.e., x, y,
and z axes). Of course, the orientation of these axes is arbitrary. In other words,
different observers may well choose different coordinate axes to describe the same
field. Consequently, electric and magnetic fields may have different components
according to different observers. We can see that any description of electric and
magnetic fields is going to depend on two different things. Firstly, the nature of
the fields themselves and, secondly, our arbitrary choice of the coordinate axes
with respect to which we measure these fields. Likewise, Maxwell's equations, the equations which describe the behaviour of electric and magnetic fields, depend on two different things. Firstly, the fundamental laws of physics which govern the behaviour of electric and magnetic fields and, secondly, our arbitrary choice of coordinate axes. It would be nice if we could easily distinguish those elements of Maxwell's equations which depend on physics from those which only depend on coordinates. In fact, we can achieve this using what mathematicians call vector field theory. This enables us to write Maxwell's equations in a manner which is completely independent of our choice of coordinate axes. As an added bonus, Maxwell's equations look a lot simpler when written in a coordinate-free manner. In fact, instead of eight first-order partial differential equations, we only require four such equations using vector field theory. It should be clear, by now, that we
are going to be using a lot of vector field theory in this course. In order to help
you with this, I have decided to devote the first few lectures of this course to a
review of the basic results of vector field theory. I know that most of you have
already taken a course on this topic. However, that course was taught by somebody from the mathematics department. Mathematicians have their own agenda
when it comes to discussing vectors. They like to think of vector operations as a
sort of algebra which takes place in an abstract vector space. This is all very
well, but it is not always particularly useful. So, when I come to review this topic
I shall emphasize those aspects of vectors which make them of particular interest
to physicists; namely, the fact that we can use them to write the laws of physics
in a coordinate free fashion.
Traditionally, an upper division college level course on electromagnetic theory
is organized as follows. First, there is a lengthy discussion of electrostatics (i.e.,
electric fields generated by stationary charge distributions) and all of its applications. Next, there is a discussion of magnetostatics (i.e., magnetic fields generated
by steady current distributions) and all of its applications. At this point, there is
usually some mention of the interaction of steady electric and magnetic fields with
matter. Next, there is an investigation of induction (i.e., electric and magnetic
fields generated by time varying magnetic and electric fields, respectively) and its
many applications. Only at this rather late stage in the course is it possible to write down the full set of Maxwell's equations. The course ends with a discussion
of electromagnetic waves.
The organization of my course is somewhat different to that described above.
There are two reasons for this. Firstly, I do not think that the traditional course
emphasizes Maxwell's equations sufficiently. After all, they are only written down in their full glory more than three-quarters of the way through the course. I find this a problem because, as I have already mentioned, I think that Maxwell's equations should be the principal topic of an upper-division course on electromagnetic
theory. Secondly, in the traditional course it is very easy for the lecturer to fall
into the trap of dwelling too long on the relatively uninteresting subject matter at
the beginning of the course (i.e., electrostatics and magnetostatics) at the expense
of the really interesting material towards the end of the course (i.e., induction, Maxwell's equations, and electromagnetic waves).
2 Vectors

2.1 Vector algebra
Some physical quantities, denoted scalars, are represented by real numbers. Others, denoted vectors, are represented by directed line elements in space: e.g. PQ. Note that line elements (and therefore vectors) are movable and do not carry intrinsic position information. In fact, vectors just possess a magnitude and a direction, whereas scalars possess a magnitude but no direction. By convention, vector quantities are denoted by bold-faced characters (e.g. a) in typeset documents and by underlined characters in long-hand. Vectors can be added together, but the same units must be used, as in scalar addition. Vector addition can be represented using a parallelogram. It is also possible to write a vector in terms of its Cartesian components:
  a ≡ (a_x, a_y, a_z).  (2.1)
Here, a_x is the x-coordinate of the head of the vector minus the x-coordinate of its tail, and so on. If a ≡ (a_x, a_y, a_z) and b ≡ (b_x, b_y, b_z) then vector addition is defined
  a + b ≡ (a_x + b_x, a_y + b_y, a_z + b_z).  (2.2)
The time derivative of a displacement r(t) is defined
  dr/dt = lim_{δt→0} [r(t + δt) − r(t)]/δt.  (2.6)
Suppose that we transform to a new orthonormal basis which is rotated through an angle θ about the z-axis. The components of the displacement r with respect to the new basis are (x′, y′, z′). These coordinates are related to the previous coordinates via
  x′ = x cos θ + y sin θ,
  y′ = −x sin θ + y cos θ,  (2.7)
  z′ = z.
We do not need to change our notation for the displacement in the new basis. It
is still denoted r. The reason for this is that the magnitude and direction of r
are independent of the choice of basis vectors. The coordinates of r do depend on
the choice of basis vectors. However, they must depend in a very specific manner
[i.e., Eq. (2.7) ] which preserves the magnitude and direction of r.
Since any vector can be represented as a displacement from an origin (this is just a special case of a directed line element), it follows that the components of a general vector a must transform in an analogous manner under rotation of the coordinate axes about the z-axis:
  a_x′ = a_x cos θ + a_y sin θ,
  a_y′ = −a_x sin θ + a_y cos θ,  (2.8)
  a_z′ = a_z,
with similar transformation rules for rotation about the x- and y-axes. In the
coordinate approach Eq. (2.8) is the definition of a vector. The three quantities
(ax , ay , az ) are the components of a vector provided that they transform under
rotation like Eq. (2.8). Conversely, (ax , ay , az ) cannot be the components of
a vector if they do not transform like Eq. (2.8). Scalar quantities are invariant
under transformation. Thus, the individual components of a vector (a_x, say) are real numbers, but they are not scalars. Displacement vectors and all vectors
derived from displacements automatically satisfy Eq. (2.8). There are, however,
other physical quantities which have both magnitude and direction but which are
not obviously related to displacements. We need to check carefully to see whether
these quantities are vectors.
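The transformation rule (2.8) is easy to check numerically. The short sketch below (Python; the function name `rotate_z` is my own, not from the notes) rotates the components of a vector about the z-axis and confirms that, although the individual components change, the magnitude is invariant, as must be the case for a proper vector.

```python
import math

def rotate_z(a, theta):
    """Components of vector a in a basis rotated by theta about the z-axis,
    following the transformation rule of Eq. (2.8)."""
    ax, ay, az = a
    return (ax * math.cos(theta) + ay * math.sin(theta),
            -ax * math.sin(theta) + ay * math.cos(theta),
            az)

a = (1.0, 2.0, 3.0)
b = rotate_z(a, math.radians(30))

# The individual components change under the rotation ...
print(b)
# ... but the magnitude (a scalar) is the same in both bases.
print(math.hypot(*a), math.hypot(*b))
```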
2.2 Vector areas

Suppose that we have a planar surface of scalar area S. We can define a vector area S whose magnitude is S and whose direction is perpendicular to the plane, in the sense determined by the right-hand grip rule on the rim. This quantity clearly possesses both magnitude and direction. But is it a true vector? We know that if the normal to the surface makes an angle α_x with the x-axis then the area seen in the x-direction is S cos α_x. This is the x-component of S. If we rotate the basis about the z-axis by an angle θ then, by resolving the projected areas,
  S_x′ = S_x cos θ + S_y sin θ,  (2.9)
which is the correct transformation rule for the x-component of a vector. The
other components transform correctly as well. This proves that a vector area is
a true vector.
According to the vector addition theorem, the projected area of two plane surfaces, joined together at a line, in the x-direction (say) is the x-component of the sum of the vector areas. Likewise, for many joined-up plane areas the projected area in the x-direction, which is the same as the projected area of the rim in the x-direction, is the x-component of the resultant of all the vector areas:
  S = Σ_i S_i.  (2.10)
If we approach a limit, by letting the number of plane facets increase and their areas reduce, then we obtain a continuous surface denoted by the resultant vector area
  S = Σ_i δS_i.  (2.11)
It is clear that the projected area of the rim in the x-direction is just S_x. Note that the rim of the surface determines the vector area rather than the nature of the surface. So, two different surfaces sharing the same rim both possess the same vector area.
In conclusion, a loop (not all in one plane) has a vector area S which is the
resultant of the vector areas of any surface ending on the loop. The components
of S are the projected areas of the loop in the directions of the basis vectors. As
a corollary, a closed surface has S = 0 since it does not possess a rim.
2.3 The scalar product

A scalar quantity is invariant under all possible rotational transformations, whereas the individual components of a vector are not. Can we construct a scalar out of the components of two general vectors a and b? Suppose that we were to define the "ampersand" product
  a & b = a_x b_y + a_y b_z + a_z b_x = scalar number.  (2.12)
Is a & b invariant under transformation, as must be the case if it is a scalar number? Consider an example: a = (1, 0, 0) and b = (0, 1, 0), so a & b = 1. Let us now rotate the basis through 45° about the z-axis. In the new basis, a = (1/√2, −1/√2, 0) and b = (1/√2, 1/√2, 0), giving a & b = 1/2. Clearly, a & b is not invariant under rotational transformation, so the above definition is a bad one.
Consider, now, the dot product or scalar product:
  a · b = a_x b_x + a_y b_y + a_z b_z = scalar number.  (2.13)
Let us rotate the basis through θ degrees about the z-axis. According to Eq. (2.8), in the new basis a · b takes the form
  a · b = (a_x cos θ + a_y sin θ)(b_x cos θ + b_y sin θ)
          + (−a_x sin θ + a_y cos θ)(−b_x sin θ + b_y cos θ) + a_z b_z
        = a_x b_x + a_y b_y + a_z b_z.  (2.14)
Thus, a · b is invariant under rotation about the z-axis. It can easily be shown that it is also invariant under rotation about the x- and y-axes. Clearly, a · b is a true scalar, so the above definition is a good one. Incidentally, a · b is the only simple combination of the components of two vectors which transforms like a scalar. It is easily shown that the dot product is commutative and distributive:
  a · b = b · a,
  a · (b + c) = a · b + a · c.  (2.15)
The associative property is meaningless for the dot product, because we cannot have (a · b) · c, since a · b is a scalar.
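Since the dot product is built from components that individually change under rotation, its invariance is worth verifying numerically. A quick sketch (Python; the helper names are mine), reusing the transformation rule of Eq. (2.8):

```python
import math

def rotate_z(v, theta):
    """Apply the component transformation of Eq. (2.8) for a z-axis rotation."""
    x, y, z = v
    c, s = math.cos(theta), math.sin(theta)
    return (x * c + y * s, -x * s + y * c, z)

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

a, b = (1.0, 2.0, 3.0), (-4.0, 0.5, 2.0)
theta = math.radians(55)

# a . b evaluated in the original basis and in the rotated basis agree:
print(dot(a, b), dot(rotate_z(a, theta), rotate_z(b, theta)))
```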
We can use the dot product to find the angle subtended between two vectors. Consider the triangle with sides a, b, and b − a. Expanding (b − a) · (b − a) = |a|² + |b|² − 2 a · b, and comparing with the cosine rule |b − a|² = |a|² + |b|² − 2 |a||b| cos θ, gives
  a · b = |a||b| cos θ.  (2.20)
Clearly, if a · b = 0 then either |a| = 0, |b| = 0, or the vectors a and b are perpendicular. The angle subtended between two vectors can easily be obtained from the dot product:
  cos θ = a · b / (|a||b|).  (2.21)
The work W performed by a constant force F moving an object through a displacement r is the product of the magnitude of the force times the displacement in the direction of the force:
  W = F · r.  (2.22)
The rate of flow of liquid of constant velocity v through a loop of vector area S is the product of the magnitude of the area times the component of the velocity perpendicular to the loop. Thus,
  Rate of flow = v · S.  (2.23)

2.4 The vector product
We have discovered how to construct a scalar from the components of two general vectors a and b. Can we also construct a vector which is not just a linear combination of a and b? Consider the following definition:
  a x b ≡ (a_x b_x, a_y b_y, a_z b_z).  (2.24)
Is a x b a proper vector? Suppose that a = (1, 0, 0) and b = (0, 1, 0). Clearly, a x b = 0. However, if we rotate the basis through 45° about the z-axis then a = (1/√2, −1/√2, 0), b = (1/√2, 1/√2, 0), and a x b = (1/2, −1/2, 0). Thus, a x b does not transform like a vector, because its magnitude depends on the choice of axes. So, the above definition is a bad one.
Consider, now, the cross product or vector product:
  a × b = (a_y b_z − a_z b_y, a_z b_x − a_x b_z, a_x b_y − a_y b_x) = c.  (2.25)
Does this rather unlikely combination transform like a vector? Let us try rotating the basis through θ degrees about the z-axis using Eq. (2.8). In the new basis,
  c_x′ = a_y′ b_z′ − a_z′ b_y′
       = (−a_x sin θ + a_y cos θ) b_z − a_z (−b_x sin θ + b_y cos θ)
       = c_x cos θ + c_y sin θ.  (2.26)
Thus, the x-component of a × b transforms correctly. It can easily be shown that the other components transform correctly as well. So, a × b is a proper vector. The cross product is anticommutative,
  a × b = −b × a,  (2.27)
distributive:
  a × (b + c) = a × b + a × c,  (2.28)
but is not associative:
  a × (b × c) ≠ (a × b) × c.  (2.29)
The cross product transforms like a vector, which means that it must have a well-defined direction and magnitude. We can show that a × b is perpendicular to both a and b. Consider a · a × b. If this is zero then the cross product must be perpendicular to a. Now,
  a · a × b = a_x (a_y b_z − a_z b_y) + a_y (a_z b_x − a_x b_z) + a_z (a_x b_y − a_y b_x) = 0.  (2.30)
Therefore, a × b is perpendicular to a. Likewise, it can be demonstrated that a × b is perpendicular to b. The vectors a, b, and a × b form a right-handed set: the index finger, middle finger, and thumb of the right hand indicate the directions of a, b, and a × b, respectively. For the magnitude, it is easily demonstrated that
  |a × b|² = |a|²|b|² − (a · b)² = |a|²|b|² (1 − cos²θ).  (2.31)
Thus,
  |a × b| = |a||b| sin θ.  (2.32)
Clearly, a × a = 0 for any vector, since θ is always zero in this case. Also, if a × b = 0 then either |a| = 0, |b| = 0, or b is parallel (or antiparallel) to a.
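Both properties just derived, the perpendicularity of Eq. (2.30) and the magnitude formula (2.32), can be checked directly from the component definition (2.25). A sketch in Python (the helper functions are mine):

```python
import math

def cross(a, b):
    """Component definition of the cross product, Eq. (2.25)."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

a, b = (1.0, 2.0, 3.0), (4.0, -1.0, 0.5)
c = cross(a, b)

print(dot(a, c), dot(b, c))           # both zero: c is perpendicular to a and b

# |a x b| = |a||b| sin(theta), with theta taken from the dot product, Eq. (2.21)
theta = math.acos(dot(a, b) / (math.hypot(*a) * math.hypot(*b)))
print(math.hypot(*c), math.hypot(*a) * math.hypot(*b) * math.sin(theta))
```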
Consider the parallelogram defined by vectors a and b. The scalar area is ab sin θ. The vector area has the magnitude of the scalar area and is normal to the plane of the parallelogram, which means that it is perpendicular to both a and b. Clearly, the vector area is given by
  S = a × b,  (2.33)
with the sense obtained from the right-hand grip rule by rotating a on to b.
Suppose that a force F is applied at position r. The moment about the origin O is the product of the magnitude of the force and the length of the lever arm OQ. Thus, the magnitude of the moment is |F||r| sin θ. The direction of the moment is conventionally the direction of the axis through O about which the force tries to rotate objects, in the sense determined by the right-hand grip rule. It follows that the vector moment is given by
  M = r × F.  (2.34)

2.5 Rotation
Let us try to define a rotation vector whose magnitude is the angle of the rotation, θ, and whose direction is the axis of the rotation, in the sense determined by the right-hand grip rule. Is this a good vector? The short answer is, no.
The problem is that the addition of rotations is not commutative, whereas vector
addition is. The diagram shows the effect of applying two successive 90° rotations, one about the x-axis, and the other about the z-axis, to a six-sided die. In the left-hand case the z-rotation is applied before the x-rotation, and vice versa in the right-hand case. It can be seen that the die ends up in two completely different states. Clearly, the z-rotation plus the x-rotation does not equal the x-rotation plus the z-rotation. This non-commuting algebra cannot be represented by vectors. So, although rotations have a well-defined magnitude and direction, they are not vector quantities.
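The non-commutativity of finite rotations is easy to demonstrate without a die. In the sketch below (Python; the helper functions are my own choice), applying 90° rotations about the z- and x-axes to the same starting vector in the two possible orders gives two different results.

```python
def rot_x(v):
    """Rotate v by 90 degrees about the x-axis (right-hand sense)."""
    x, y, z = v
    return (x, -z, y)

def rot_z(v):
    """Rotate v by 90 degrees about the z-axis (right-hand sense)."""
    x, y, z = v
    return (-y, x, z)

v = (1, 2, 3)
print(rot_x(rot_z(v)))  # z-rotation first, then x-rotation
print(rot_z(rot_x(v)))  # x-rotation first, then z-rotation: a different vector
```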
But, this is not quite the end of the story. Suppose that we take a general vector a and rotate it about the z-axis by a small angle δθ_z. This is equivalent to rotating the basis about the z-axis by −δθ_z. According to Eq. (2.8), we have
  a′ ≃ a + δθ_z k × a,  (2.35)
where use has been made of the small-angle expansions sin θ ≃ θ and cos θ ≃ 1. The above equation can easily be generalized to allow small rotations about the x- and y-axes by δθ_x and δθ_y, respectively. We find that
  a′ ≃ a + δθ × a,  (2.36)
where
  δθ = δθ_x i + δθ_y j + δθ_z k.  (2.37)
Clearly, we can define a rotation vector δθ, but it only works for small-angle rotations (i.e., sufficiently small that the small-angle expansions of sine and cosine are good). According to the above equation, a small z-rotation plus a small x-rotation is (approximately) equal to the two rotations applied in the opposite order. The fact that infinitesimal rotation is a vector implies that angular velocity,
  ω = lim_{δt→0} δθ/δt,  (2.38)
must be a vector as well.
2.6 The scalar triple product

Consider three vectors a, b, and c. The scalar triple product is defined a · b × c. Now, b × c is the vector area of the parallelogram defined by b and c. So, a · b × c is the scalar area of this parallelogram times the component of a in the direction of its normal. It follows that a · b × c is the volume of the parallelepiped defined by vectors a, b, and c. This volume is independent of how the triple product is formed from a, b, and c, except that
  a · b × c = −a · c × b.  (2.40)
So, the volume is positive if a, b, and c form a right-handed set (i.e., if a lies
above the plane of b and c, in the sense determined from the right-hand grip rule
by rotating b on to c) and negative if they form a left-handed set. The triple
product is unchanged if the dot and cross product operators are interchanged:
  a · b × c = a × b · c.  (2.41)
The triple product is also invariant under any cyclic permutation of a, b, and c,
  a · b × c = b · c × a = c · a × b,  (2.42)
but any anticyclic permutation causes it to change sign,
  a · b × c = −b · a × c.  (2.43)
The scalar triple product is zero if any two of a, b, and c are parallel, or if a, b, and c are coplanar.
If a, b, and c are non-coplanar, then any vector r can be written in terms of them:
  r = α a + β b + γ c.  (2.44)
Forming the dot product of this equation with b × c, we obtain
  r · b × c = α a · b × c,  (2.45)
so
  α = (r · b × c)/(a · b × c).  (2.46)
Analogous expressions can be written for β and γ. The parameters α, β, and γ are uniquely determined provided a · b × c ≠ 0; i.e., provided that the three basis vectors are not coplanar.
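Equation (2.46) gives a practical recipe for expanding an arbitrary vector in a non-orthogonal basis. The following sketch (Python; function names and the analogous formulae for β and γ, obtained by dotting with c × a and a × b, are mine) computes the three coefficients and confirms that they reconstruct r.

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def triple(a, b, c):
    """Scalar triple product a . b x c."""
    return dot(a, cross(b, c))

# Three non-coplanar (but non-orthogonal) basis vectors, and a target r.
a, b, c = (1.0, 0.0, 1.0), (0.0, 2.0, 0.0), (1.0, 1.0, 3.0)
r = (2.0, -1.0, 4.0)

# Eq. (2.46) and its cyclic analogues:
alpha = triple(r, b, c) / triple(a, b, c)
beta  = triple(r, c, a) / triple(a, b, c)
gamma = triple(r, a, b) / triple(a, b, c)

rebuilt = tuple(alpha * p + beta * q + gamma * s for p, q, s in zip(a, b, c))
print(alpha, beta, gamma, rebuilt)   # rebuilt should equal r
```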
2.7 The vector triple product

For three vectors a, b, and c, the vector triple product is defined a × (b × c). The brackets are important, because a × (b × c) ≠ (a × b) × c. In fact, it can be demonstrated that
  a × (b × c) ≡ (a · c) b − (a · b) c  (2.47)
and
  (a × b) × c ≡ (a · c) b − (b · c) a.  (2.48)
Let us try to prove the first of the above theorems. The left-hand side and the right-hand side are both proper vectors, so if we can prove this result in one particular coordinate system then it must be true in general. Let us take convenient axes such that the x-axis lies along b, and c lies in the x-y plane. It follows that b = (b_x, 0, 0), c = (c_x, c_y, 0), and a = (a_x, a_y, a_z). The vector b × c is directed along the z-axis: b × c = (0, 0, b_x c_y). It follows that a × (b × c) lies in the x-y plane: a × (b × c) = (a_y b_x c_y, −a_x b_x c_y, 0). This is the left-hand side of Eq. (2.47) in our convenient axes. To evaluate the right-hand side we need a · c = a_x c_x + a_y c_y and a · b = a_x b_x. It follows that the right-hand side is
  RHS = ((a_x c_x + a_y c_y) b_x, 0, 0) − (a_x b_x c_x, a_x b_x c_y, 0)
      = (a_y c_y b_x, −a_x b_x c_y, 0) = LHS,  (2.49)
which proves the theorem.
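Since both sides of Eq. (2.47) are proper vectors, a numerical spot check in any one coordinate system is also a meaningful test. A sketch in Python (helpers are mine), using arbitrary vectors rather than the convenient axes of the proof:

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

a, b, c = (1.0, -2.0, 0.5), (3.0, 1.0, 2.0), (-1.0, 4.0, 1.5)

lhs = cross(a, cross(b, c))
# The "BAC minus CAB" rule, Eq. (2.47): a x (b x c) = (a.c) b - (a.b) c
rhs = tuple(dot(a, c) * bi - dot(a, b) * ci for bi, ci in zip(b, c))
print(lhs, rhs)   # the two triples agree
```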
2.8 Vector calculus
Suppose that vector a varies with time, so that a = a(t). The time derivative of the vector is defined
  da/dt = lim_{δt→0} [a(t + δt) − a(t)]/δt.  (2.50)
When written out in component form this becomes
  da/dt = (da_x/dt, da_y/dt, da_z/dt).  (2.51)
Suppose that a is, in fact, the product of a scalar φ(t) and another vector b(t). In component form,
  da_x/dt = (dφ/dt) b_x + φ (db_x/dt),  (2.52)
which implies that
  da/dt = (dφ/dt) b + φ (db/dt).  (2.53)
Likewise, it is easily demonstrated that
  d(a · b)/dt = (da/dt) · b + a · (db/dt),  (2.54)
and
  d(a × b)/dt = (da/dt) × b + a × (db/dt).  (2.55)
It can be seen that the laws of vector differentiation are analogous to those in conventional calculus.
2.9 Line integrals

Consider a two-dimensional function f(x, y) which is defined for all x and y. What is meant by the integral of f along a given curve from P to Q in the x-y plane? We first draw out f as a function of length l along the path. The integral is then simply given by
  ∫_P^Q f(x, y) dl = area under the curve.  (2.56)
As an example of this, consider the integral of f(x, y) = xy between P = (0, 0) and Q = (1, 1), along two different routes: route 1, the straight line y = x; and route 2, along the x-axis from (0, 0) to (1, 0), and then along the line x = 1 from (1, 0) to (1, 1). Along route 1 we have x = y, so dl = √2 dx. Thus,
  ∫_P^Q xy dl = ∫_0^1 x² √2 dx = √2/3.  (2.57)
The integration along route 2 gives
  ∫_P^Q xy dl = ∫_0^1 xy dx |_{y=0} + ∫_0^1 xy dy |_{x=1} = 0 + ∫_0^1 y dy = 1/2.  (2.58)
Note that the integral depends on the route taken between the initial and final
points.
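The two route values, √2/3 ≈ 0.4714 and 1/2, can be reproduced by brute-force numerical integration, which makes the path dependence concrete. A sketch (Python; the polyline discretization is my own choice):

```python
import math

def line_integral(f, points):
    """Approximate the integral of f(x, y) dl along a polyline, sampling f
    at the midpoint of each segment."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dl = math.hypot(x1 - x0, y1 - y0)
        total += f((x0 + x1) / 2, (y0 + y1) / 2) * dl
    return total

f = lambda x, y: x * y
n = 2000
route1 = [(t / n, t / n) for t in range(n + 1)]           # straight line y = x
route2 = ([(t / n, 0.0) for t in range(n + 1)]
          + [(1.0, t / n) for t in range(n + 1)])         # along the x-axis, then up x = 1

print(line_integral(f, route1))   # close to sqrt(2)/3
print(line_integral(f, route2))   # close to 1/2
```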
The most common type of line integral is one in which the contributions from dx and dy are evaluated separately, rather than through the path length dl:
  ∫_P^Q [f(x, y) dx + g(x, y) dy].  (2.59)
As an example of this, consider the integral
  ∫_P^Q (y³ dx + x dy)  (2.60)
between P = (1, 0) and Q = (2, 1), along two routes: route 1, the straight line joining P to Q; and route 2, along the x-axis from (1, 0) to (2, 0), and then along the line x = 2 from (2, 0) to (2, 1). Along route 1 we have x = y + 1 and dx = dy, so
  ∫_P^Q (y³ dx + x dy) = ∫_0^1 [y³ + (y + 1)] dy = 7/4.  (2.61)
Along route 2,
  ∫_P^Q (y³ dx + x dy) = ∫_1^2 y³ dx |_{y=0} + ∫_0^1 x dy |_{x=2} = 2.  (2.62)
Suppose that we have a line integral which does not depend on the path of integration. It follows that
  ∫_P^Q (f dx + g dy) = F(Q) − F(P)  (2.63)
for some function F. Given F(P) for one point P in the x-y plane, then
  F(Q) = F(P) + ∫_P^Q (f dx + g dy)  (2.64)
defines F(Q) for all other points in the plane. We can then draw a contour map of F(x, y). The line integral between points P and Q is simply the change in height in the contour map between these two points:
  ∫_P^Q (f dx + g dy) = ∫_P^Q dF(x, y) = F(Q) − F(P).  (2.65)
Thus,
  dF(x, y) = f(x, y) dx + g(x, y) dy.  (2.66)
For instance, if F = xy³ then dF = y³ dx + 3xy² dy, and
  ∫_P^Q (y³ dx + 3xy² dy) = [xy³]_P^Q  (2.67)
is independent of the path of integration.
It is clear that there are two distinct types of line integral. Those that depend
only on their endpoints and not on the path of integration, and those which
depend both on their endpoints and the integration path. Later on, we shall
learn how to distinguish between these two types.
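The distinction can be seen numerically with the exact differential of Eq. (2.66): integrating y³ dx + 3xy² dy along two quite different routes between the same endpoints gives the same answer, F(Q) − F(P) with F = xy³. A sketch (Python; the discretization and endpoint choice are mine):

```python
def integrate_fdx_gdy(f, g, points):
    """Approximate the line integral of f dx + g dy along a polyline
    (midpoint sampling on each segment)."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
        total += f(xm, ym) * (x1 - x0) + g(xm, ym) * (y1 - y0)
    return total

f = lambda x, y: y ** 3           # coefficient of dx
g = lambda x, y: 3 * x * y ** 2   # coefficient of dy

n = 2000
P, Q = (0.0, 0.0), (1.0, 2.0)
straight = [(t / n, 2 * t / n) for t in range(n + 1)]
dogleg = ([(t / n, 0.0) for t in range(n + 1)]
          + [(1.0, 2 * t / n) for t in range(n + 1)])

F = lambda x, y: x * y ** 3
# Both routes agree with F(Q) - F(P) = 8, unlike the path-dependent example.
print(integrate_fdx_gdy(f, g, straight), integrate_fdx_gdy(f, g, dogleg), F(*Q) - F(*P))
```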
2.10 Vector line integrals

A vector field is defined as a set of vectors associated with each point in space.
For instance, the velocity v(r) in a moving liquid (e.g., a whirlpool) constitutes
a vector field. By analogy, a scalar field is a set of scalars associated with each
point in space. An example of a scalar field is the temperature distribution T (r)
in a furnace.
Consider a general vector field A(r). Let dl = (dx, dy, dz) be the vector element of line length. Vector line integrals often arise as
  ∫_P^Q A · dl = ∫_P^Q (A_x dx + A_y dy + A_z dz).  (2.68)
For instance, if A is a force then the line integral is the work done in going from
P to Q.
As an example, consider the work done in a repulsive, inverse-square-law, central field F = r/|r|³. The element of work done is dW = F · dl. Take P = (∞, 0, 0) and Q = (a, 0, 0). Route 1 is along the x-axis, so
  W = ∫_∞^a (1/x²) dx = [−1/x]_∞^a = −1/a.  (2.69)
The second route is, firstly, around a large circle (r = constant) to the point (a, ∞, 0), and then parallel to the y-axis. In the first part no work is done, since F is perpendicular to dl. In the second part,
  W = ∫_∞^0 y dy/(a² + y²)^{3/2} = [−1/(y² + a²)^{1/2}]_∞^0 = −1/a.  (2.70)
In this case the integral is independent of path (which is just as well!).
2.11 Surface integrals

Let us take a surface S, which is not necessarily coplanar, and divide it up into (scalar) elements δS_i. Then
  ∫∫_S f(x, y, z) dS = lim_{δS_i→0} Σ_i f(x, y, z) δS_i  (2.71)
is a surface integral. For instance, the volume of water in a lake of depth D(x, y) is
  V = ∫∫ D(x, y) dS.  (2.72)
To evaluate this integral we must split the calculation into two ordinary integrals. Consider the horizontal strip of lake lying between y and y + dy; its contribution to the volume is
  (∫_{x_1}^{x_2} D(x, y) dx) dy.  (2.73)
Note that the limits x_1 and x_2 depend on y. The total volume is the sum over all strips:
  V = ∫_{y_1}^{y_2} dy (∫_{x_1(y)}^{x_2(y)} D(x, y) dx) ≡ ∫∫_S D(x, y) dx dy.  (2.74)
Of course, the integral can be evaluated by taking the strips the other way around:
  V = ∫_{x_1}^{x_2} dx ∫_{y_1(x)}^{y_2(x)} D(x, y) dy.  (2.75)
Interchanging the order of integration is a very powerful and useful trick. But
great care must be taken when evaluating the limits.
As an example, consider
  ∫_S x² y dx dy,  (2.76)
where S is the triangle bounded by the x-axis, the y-axis, and the line x = 1 − y. Suppose that we evaluate the x integral first:
  dy (∫_0^{1−y} x² y dx) = y dy [x³/3]_0^{1−y} = (y/3)(1 − y)³ dy.  (2.77)
Let us now evaluate the y integral:
  ∫_0^1 (y/3 − y² + y³ − y⁴/3) dy = 1/60.  (2.78)
The same result is obtained by interchanging the order of integration and evaluating the y integral first:
  ∫_0^1 x² dx ∫_0^{1−x} y dy = ∫_0^1 (x²/2)(1 − x)² dx = 1/60.  (2.79)
In some cases a surface integral is just the product of two separate one-dimensional integrals. For instance, if S is the unit square 0 ≤ x, y ≤ 1, then
  ∫_0^1 dx ∫_0^1 x² y dy = (∫_0^1 x² dx)(∫_0^1 y dy) = (1/3)(1/2) = 1/6,  (2.81)
since the limits of the inner integral are independent of the outer variable.
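The triangle integral above is a good test case for order-of-integration care. In the sketch below (Python; `midpoint` is my own helper), the inner integral is done analytically in each order, leaving two different one-dimensional integrals which must agree, both recovering 1/60 ≈ 0.01667.

```python
def midpoint(f, a, b, n=10000):
    """Composite midpoint rule for a 1-D integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# x integral first, as in Eqs. (2.77)-(2.78): inner integral gives y (1-y)^3 / 3.
x_first = midpoint(lambda y: y * (1 - y) ** 3 / 3, 0.0, 1.0)

# y integral first: inner integral of y over 0 <= y <= 1 - x gives (1 - x)^2 / 2.
y_first = midpoint(lambda x: x * x * (1 - x) ** 2 / 2, 0.0, 1.0)

print(x_first, y_first, 1 / 60)   # all three agree
```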
2.12 Vector surface integrals
Surface integrals often occur during vector analysis. For instance, the rate of flow of a liquid of velocity v through an infinitesimal surface of vector area dS is v · dS. The net rate of flow through a surface S made up of lots of infinitesimal surfaces is
  ∫∫_S v · dS = lim_{dS→0} [Σ v cos θ dS],  (2.82)
where θ is the angle subtended between the normal to the surface and the flow velocity.
As with line integrals, most surface integrals depend both on the surface and
the rim. But some (very important) integrals depend only on the rim, and not
on the nature of the surface which spans it. As an example of this, consider
incompressible fluid flow between two surfaces S1 and S2 which end on the same
rim. The volume between the surfaces is constant, so what goes in must come
out, and
  ∫∫_{S_1} v · dS = ∫∫_{S_2} v · dS.  (2.83)
It follows that
  ∫∫ v · dS  (2.84)
depends only on the rim, and not on the form of surfaces S_1 and S_2.
2.13 Volume integrals

A volume integral takes the form
  ∫∫∫_V f(x, y, z) dV,  (2.85)
where V is some volume, and dV = dx dy dz is a small volume element. As an example, let us evaluate the height of the centre of gravity of a solid hemisphere of radius a (centred on the origin, with its flat face in the x-y plane). This height is given by
  z̄ = ∫∫∫ z dV / ∫∫∫ dV.  (2.86)
The bottom integral is simply the volume of the hemisphere, which is 2πa³/3. The top integral is most easily evaluated in spherical polar coordinates, for which z = r cos θ and dV = r² sin θ dr dθ dφ. Thus,
  ∫∫∫ z dV = ∫_0^a dr ∫_0^{π/2} dθ ∫_0^{2π} dφ (r cos θ) r² sin θ
           = ∫_0^a r³ dr ∫_0^{π/2} sin θ cos θ dθ ∫_0^{2π} dφ = πa⁴/4,  (2.87)
giving
  z̄ = (πa⁴/4) (3/(2πa³)) = 3a/8.  (2.88)
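The result z̄ = 3a/8 of Eq. (2.88) is easy to confirm with a crude numerical integration in spherical polar coordinates. A sketch (Python; the midpoint discretization is my own choice), taking a = 1:

```python
import math

def hemisphere_moments(nr=200, nt=200):
    """Midpoint-rule sums of z dV and dV over the unit upper hemisphere.

    Uses z = r cos(theta) and dV = r^2 sin(theta) dr dtheta dphi; the phi
    integral just contributes a factor of 2*pi."""
    dr, dt = 1.0 / nr, (math.pi / 2) / nt
    z_dv = dv = 0.0
    for i in range(nr):
        r = (i + 0.5) * dr
        for j in range(nt):
            t = (j + 0.5) * dt
            w = 2 * math.pi * r * r * math.sin(t) * dr * dt   # volume element
            dv += w
            z_dv += r * math.cos(t) * w
    return z_dv, dv

z_dv, dv = hemisphere_moments()
print(z_dv / dv)    # close to 3/8, in agreement with Eq. (2.88)
print(dv)           # close to the hemisphere volume 2*pi/3
```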
2.14 Gradient
A one-dimensional function f(x) has a gradient df/dx, which is defined as the slope of the tangent to the curve at x. We wish to extend this idea to cover scalar fields in two and three dimensions. Consider a two-dimensional scalar field h(x, y), which is (say) the height of a hill, and let dl = (dx, dy) be an element of horizontal distance. Consider dh/dl, where dh is the change in height after moving an infinitesimal distance dl. This quantity is somewhat like the one-dimensional gradient, except that dh depends on the direction of dl, as well as its magnitude. In the immediate vicinity of some
point P the slope reduces to an inclined plane. The largest value of dh/dl is
straight up the slope. For any other direction
  dh/dl = (dh/dl)_max cos θ,  (2.89)
where θ is the angle between dl and the direction of steepest ascent. Let us define a two-dimensional vector grad h, called the gradient of h, whose magnitude is (dh/dl)_max and whose direction is the direction of the steepest slope. Because of the cos θ property, the component of grad h in any direction equals dh/dl for that direction. [The argument, here, is analogous to that used for vector areas in Section 2.2. See, in particular, Eq. (2.9).]
The component of dh/dl in the x-direction can be obtained by plotting out the profile of h at constant y, and then finding the slope of the tangent to the curve at given x. This quantity is known as the partial derivative of h with respect to x at constant y, and is denoted (∂h/∂x)_y. Likewise, the gradient of the profile at constant x is written (∂h/∂y)_x. Note that the subscripts denoting constant-x and constant-y are usually omitted, unless there is any ambiguity. It follows that in component form
  grad h = (∂h/∂x, ∂h/∂y).  (2.90)
The equation of the tangent plane at P = (x_0, y_0) is
  h_T(x, y) = h(x_0, y_0) + α (x − x_0) + β (y − y_0).  (2.91)
This plane has the same local gradients as h, so
  α = ∂h/∂x,   β = ∂h/∂y.  (2.92)
For small dx = x − x_0 and dy = y − y_0, the function h is coincident with its tangent plane, so
  dh = (∂h/∂x) dx + (∂h/∂y) dy.  (2.93)
But grad h = (∂h/∂x, ∂h/∂y) and dl = (dx, dy), so
  dh = grad h · dl.  (2.94)
The above methodology can be generalized to three dimensions. Consider a temperature distribution T(x, y, z) in (say) a reaction vessel. The gradient of T is written in component form
  grad T = (∂T/∂x, ∂T/∂y, ∂T/∂z).  (2.95)
Here, ∂T/∂x ≡ (∂T/∂x)_{y,z} is the gradient of the one-dimensional temperature profile at constant y and z. The change in T in going from point P to a neighbouring point offset by dl = (dx, dy, dz) is
  dT = (∂T/∂x) dx + (∂T/∂y) dy + (∂T/∂z) dz.  (2.96)
In vector form this becomes
  dT = grad T · dl.  (2.97)
Suppose that dl lies within a surface of constant temperature (an isotherm). Then dT = 0, so dl must be perpendicular to grad T. In other words, grad T is everywhere normal to the local isotherm.
where V(Q) is a well-defined function, due to the path-independent nature of the line integral. Consider moving the position of the end point by an infinitesimal amount dx in the x-direction. We have
  V(Q + dx) = V(Q) + ∫_Q^{Q+dx} A · dl = V(Q) + A_x dx.  (2.101)
Hence,
  ∂V/∂x = A_x,  (2.102)
with analogous relations for the other components of A. It follows that
  A = grad V.  (2.103)
For a conservative field,
  ∮ A · dl = 0,  (2.104)
where ∮ corresponds to the line integral around some closed loop. The fact that zero net work is done in going around a closed loop is equivalent to the conservation of energy (this is why conservative fields are called "conservative"). A good example of a non-conservative field is the force due to friction. Clearly, a frictional system loses energy in going around a closed cycle, so ∮ A · dl ≠ 0.
It is useful to define the vector operator
  ∇ ≡ (∂/∂x, ∂/∂y, ∂/∂z),  (2.105)
which is usually called the grad or del operator. This operator acts on everything to its right in an expression, until the end of the expression or a closing bracket is reached. For instance,
  grad f = ∇f = (∂f/∂x, ∂f/∂y, ∂f/∂z).  (2.106)
Suppose that we rotate the basis about the z-axis by θ degrees. By analogy with Eq. (2.7), the old coordinates (x, y, z) are related to the new ones (x′, y′, z′) via
  x = x′ cos θ − y′ sin θ,
  y = x′ sin θ + y′ cos θ,  (2.109)
  z = z′.
Now, by the chain rule,
  ∂/∂x′ = (∂x/∂x′)_{y′,z′} ∂/∂x + (∂y/∂x′)_{y′,z′} ∂/∂y + (∂z/∂x′)_{y′,z′} ∂/∂z,  (2.110)
giving
  ∂/∂x′ = cos θ (∂/∂x) + sin θ (∂/∂y),  (2.111)
and
  ∇_{x′} = cos θ ∇_x + sin θ ∇_y.  (2.112)
It can be seen that the differential operator ∇ transforms like a proper vector, according to Eq. (2.8). This is another proof that ∇f is a good vector.
2.15 Divergence

Let us start with a vector field A. Consider ∮_S A · dS over some closed surface S, where dS denotes an outward-pointing surface element. This surface integral is usually called the flux of A out of S. If A is the velocity of some fluid, then ∮_S A · dS is the rate of flow of material out of S.
If A is constant in space then it is easily demonstrated that the net flux out of S is zero:
  ∮ A · dS = A · ∮ dS = A · S = 0,  (2.113)
since the vector area S of a closed surface is zero.
Suppose, now, that A is not uniform in space. Consider a very small rectangular volume over which A hardly varies. The contribution to ∮ A · dS from the two faces normal to the x-axis is
  A_x(x + dx) dy dz − A_x(x) dy dz = (∂A_x/∂x) dx dy dz = (∂A_x/∂x) dV,  (2.114)
where dV = dx dy dz is the volume element. There are analogous contributions from the sides normal to the y- and z-axes, so the total of all the contributions is
  ∮ A · dS = (∂A_x/∂x + ∂A_y/∂y + ∂A_z/∂z) dV.  (2.115)
The divergence of a vector field is defined
  div A = ∇ · A = ∂A_x/∂x + ∂A_y/∂y + ∂A_z/∂z.  (2.116)
The divergence theorem states that
  ∮_S A · dS = ∫_V div A dV  (2.117)
for any volume V bounded by the closed surface S. The proof is straightforward. We divide up the volume into lots of very small cubes, and sum ∮ A · dS over all of the surfaces. The contributions from the interior surfaces cancel out, leaving just the contribution from the outer surface. We can use Eq. (2.115) for each cube individually. This tells us that the summation is equivalent to ∫ div A dV over the whole volume. Thus, the integral of A · dS over the outer surface is equal to the integral of div A over the whole volume, which proves the divergence theorem.
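The divergence theorem can be spot-checked numerically on a unit cube. In the sketch below (Python; the smooth test field A = (x², y z, x z) is an arbitrary choice of mine, not from the notes), the outward flux through the six faces is compared with the volume integral of div A.

```python
def A(x, y, z):
    # An arbitrary smooth test field (my choice): A = (x^2, y z, x z).
    return (x * x, y * z, x * z)

def div_A(x, y, z):
    return 2 * x + z + x          # dAx/dx + dAy/dy + dAz/dz

def midpoints(n):
    return [(i + 0.5) / n for i in range(n)]

n = 50
h = 1.0 / n

# Volume integral of div A over the unit cube (midpoint Riemann sum).
volume = sum(div_A(x, y, z) for x in midpoints(n)
             for y in midpoints(n) for z in midpoints(n)) * h ** 3

# Outward flux through the six faces of the cube.
flux = 0.0
for u in midpoints(n):
    for v in midpoints(n):
        flux += (A(1.0, u, v)[0] - A(0.0, u, v)[0]) * h ** 2   # x = 1 and x = 0 faces
        flux += (A(u, 1.0, v)[1] - A(u, 0.0, v)[1]) * h ** 2   # y faces
        flux += (A(u, v, 1.0)[2] - A(u, v, 0.0)[2]) * h ** 2   # z faces

print(volume, flux)   # both close to 2
```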
Now, for a vector field with div A = 0,
  ∮_S A · dS = 0  (2.119)
for any closed surface S. So, for two surfaces on the same rim,
  ∫∫_{S_1} A · dS = ∫∫_{S_2} A · dS.  (2.120)
Thus, if div A = 0 then the surface integral depends on the rim but not the nature of the surface which spans it. On the other hand, if div A ≠ 0 then the integral depends on both the rim and the surface.
Consider, now, a compressible fluid of density ρ and velocity v. The surface integral ∮_S ρ v · dS is the net rate of mass flow out of the closed surface S. This must be equal to the rate of decrease of mass inside the volume V enclosed by S, which is written −(∂/∂t)(∫_V ρ dV). Thus,
  ∮_S ρ v · dS = −(∂/∂t) ∫_V ρ dV  (2.122)
for any volume. It follows from the divergence theorem that
  div (ρ v) = −∂ρ/∂t.  (2.123)
This is called the equation of continuity of the fluid, since it ensures that fluid is neither created nor destroyed as it flows from place to place. If ρ is constant then the equation of continuity reduces to the previous incompressible result, div v = 0.
It is sometimes helpful to represent a vector field A by lines of force or
field lines. The direction of a line of force at any point is the same as the
direction of A. The density of lines (i.e., the number of lines crossing a unit
surface perpendicular to A) is equal to |A|. In the diagram, |A| is larger at point 1 than at point 2. If div A = 0 then there is no net flux of lines out of any surface, which means that the lines of force must form closed loops. Such a field is called a solenoidal vector field.
2.16 The Laplacian

So far we have encountered
  grad φ = (∂φ/∂x, ∂φ/∂y, ∂φ/∂z),  (2.125)
which is a vector field formed from a scalar field, and
  div A = ∂A_x/∂x + ∂A_y/∂y + ∂A_z/∂z,  (2.126)
which is a scalar field formed from a vector field. There are two ways in which we can combine grad and div. We can either form the vector field grad(div A) or the scalar field div(grad φ). The former is not particularly interesting, but the scalar field div(grad φ) turns up in a great many physics problems and is, therefore, worthy of discussion.
Let us introduce the heat flow vector h, which is the rate of flow of heat energy per unit area across a surface perpendicular to the direction of h. In many substances heat flows directly down the temperature gradient, so that we can write
  h = −κ grad T,  (2.127)
where κ is the thermal conductivity. The net rate of heat flow ∮_S h · dS out of some closed surface S must be equal to the rate of decrease of heat energy in the volume V enclosed by S. Thus, we can write
  ∮_S h · dS = −(∂/∂t) ∫ c T dV,  (2.128)
where c is the specific heat. It follows from the divergence theorem that
  div h = −c ∂T/∂t.  (2.129)
Taking the divergence of both sides of Eq. (2.127), and making use of Eq. (2.129), we obtain
  div (κ grad T) = c ∂T/∂t,  (2.130)
or
  ∇ · (κ ∇T) = c ∂T/∂t.  (2.131)
If κ is constant then the above equation can be written
  div (grad T) = (c/κ) ∂T/∂t.  (2.132)
The scalar field div(grad T) takes the form
  div (grad T) = ∂/∂x (∂T/∂x) + ∂/∂y (∂T/∂y) + ∂/∂z (∂T/∂z)
               = ∂²T/∂x² + ∂²T/∂y² + ∂²T/∂z² ≡ ∇²T.  (2.133)
Here, the scalar differential operator
  ∇² ≡ ∂²/∂x² + ∂²/∂y² + ∂²/∂z²  (2.134)
is called the Laplacian. The Laplacian is a good scalar operator (i.e., it is coordinate independent) because it is formed from a combination of div (another good scalar operator) and grad (a good vector operator).
What is the physical significance of the Laplacian? In one dimension, ∇²T reduces to ∂²T/∂x². Now, ∂²T/∂x² is positive if T(x) is concave (from above) and negative if it is convex. So, if T is less than the average of T in its surroundings then ∇²T is positive, and vice versa.
In two dimensions
    ∇²T = ∂²T/∂x² + ∂²T/∂y².    (2.135)
According to Eq. (2.132), the heat conduction equation takes the form
    ∇²T = (c/κ) ∂T/∂t.    (2.136)
It is clear that if ∇²T is positive then T is locally less than the average value, so
∂T/∂t > 0; i.e., the region heats up. Likewise, if ∇²T is negative then T is locally
greater than the average value and heat flows out of the region; i.e., ∂T/∂t < 0.
Thus, the above heat conduction equation makes physical sense.
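As a sanity check, the heat conduction equation can be integrated numerically. The sketch below (grid, time step, and diffusivity D = κ/c are all illustrative choices, not values from the text) puts a cold dip in an otherwise uniform one-dimensional temperature profile and steps the equation forward; at the bottom of the dip ∇²T > 0, and the point duly heats up.

```python
import math

# One-dimensional heat equation dT/dt = (kappa/c) d^2T/dx^2, advanced
# with an explicit finite-difference step.
N, dx, dt, D = 101, 1.0, 0.2, 1.0   # illustrative parameters; D = kappa/c
x = [i * dx for i in range(N)]
# Initial profile: a cold dip in an otherwise uniform temperature.
T = [1.0 - math.exp(-((xi - 50.0) ** 2) / 20.0) for xi in x]

def step(T):
    """Advance one time step; the endpoints are held fixed."""
    lap = [0.0] * len(T)
    for i in range(1, len(T) - 1):
        lap[i] = (T[i - 1] - 2.0 * T[i] + T[i + 1]) / dx ** 2
    return [T[i] + dt * D * lap[i] for i in range(len(T))]

T_mid_before = T[50]   # bottom of the dip, where the Laplacian is positive
for _ in range(100):
    T = step(T)
# The cold spot is below the average of its surroundings, so it warms up.
assert T[50] > T_mid_before
```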
2.17 Curl

Consider a vector field A, and a small plane loop. The circulation of A around
the loop, I = ∮ A·dl, depends on the orientation of the loop: if θ is the angle
between the normal to the loop and the orientation giving the maximum circulation
Imax, then I = Imax cos θ. Let us introduce the vector field curl A, whose magnitude is
    |curl A| = lim_{dS→0} ∮ A·dl / dS    (2.137)
for the orientation giving Imax . Here, dS is the area of the loop. The direction
of curl A is perpendicular to the plane of the loop, when it is in the orientation
giving Imax , with the sense given by the right-hand grip rule assuming that the
loop is right-handed.
Let us now express curl A in terms of the components of A. First, we shall
evaluate ∮ A·dl around a small rectangle in the y-z plane, of sides dy and dz.
[Diagram: a small rectangle in the y-z plane, extending from y to y + dy and from z to z + dz, with sides labelled 1 to 4.]
The contribution from sides 1 and 3 is
    Az(y + dy) dz − Az(y) dz = (∂Az/∂y) dy dz.    (2.138)
The contribution from sides 2 and 4 is
    Ay(z) dy − Ay(z + dz) dy = −(∂Ay/∂z) dy dz.    (2.139)
So, the total of all contributions gives
    ∮ A·dl = (∂Az/∂y − ∂Ay/∂z) dS,    (2.140)
where dS = dy dz is the area of the loop.
    ∮ A·dl = (∂Az/∂y − ∂Ay/∂z) dSx    (2.141)
is valid for a small loop dS = (dSx, 0, 0) of any shape in the y-z plane. Likewise,
we can show that if the loop is in the x-z plane then dS = (0, dSy, 0) and
    ∮ A·dl = (∂Ax/∂z − ∂Az/∂x) dSy.    (2.142)
Finally, if the loop is in the x-y plane then dS = (0, 0, dSz) and
    ∮ A·dl = (∂Ay/∂x − ∂Ax/∂y) dSz.    (2.143)
Imagine an arbitrary loop of vector area dS = (dSx, dSy, dSz). We can construct
this out of three loops in the x, y, and z directions, as indicated in the diagram
below.
[Diagram: three small loops, labelled 1 to 3, making up a loop of vector area dS.]
If we form the line integral around all three loops then the interior
contributions cancel and we are left with the line integral around the original
loop. Thus,
    ∮ A·dl = ∮ A·dl1 + ∮ A·dl2 + ∮ A·dl3,    (2.144)
giving
    ∮ A·dl = curl A · dS,    (2.145)
where
    curl A = (∂Az/∂y − ∂Ay/∂z, ∂Ax/∂z − ∂Az/∂x, ∂Ay/∂x − ∂Ax/∂y).    (2.146)
Note that
curl A = A.
(2.147)
This demonstrates that curl A is a good vector field, since it is the cross product
of the operator (a good vector operator) and the vector field A.
Consider a solid body rotating about the z-axis. The angular velocity is given
by ω = (0, 0, ω), so the rotation velocity at position r is
    v = ω × r = (−ω y, ω x, 0).    (2.148)
On the axis of rotation, Eq. (2.146) gives
    curl v = (0, 0, 2ω).    (2.149)
At a general point r0, we can write v = ω × (r − r0) + ω × r0.
The first part has the same curl as the velocity field on the axis, and the second
part has zero curl, since it is constant. Thus, curl v = (0, 0, 2ω) everywhere in the
body. This allows us to form a physical picture of curl A. If we imagine A as the
velocity field of some fluid then curl A at any given point is equal to twice the
local angular rotation velocity, i.e., 2ω. Hence, a vector field with curl A = 0
everywhere is said to be irrotational.
Another important result of vector field theory is the curl theorem or Stokes'
theorem:
    ∮_C A·dl = ∫_S curl A · dS,    (2.150)
for some (non-planar) surface S bounded by a rim C. This theorem can easily be
proved by splitting the loop up into many small rectangular loops and forming
the integral around all of the resultant loops. All of the contributions from the
interior loops cancel, leaving just the contribution from the outer rim. Making
use of Eq. (2.145) for each of the small loops, we can see that the contribution
from all of the loops is also equal to the integral of curl A dS across the whole
surface. This proves the theorem.
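Stokes' theorem is easy to verify numerically for a simple field. The sketch below (the field and the loop are chosen purely for illustration) compares the circulation of A = (−y, x, 0) around the unit square with the surface integral of its curl, which is 2 × area = 2.

```python
def A(x, y):
    """Test field A = (-y, x, 0); its curl is (0, 0, 2) everywhere."""
    return (-y, x)

def line_integral(n=4000):
    """Midpoint-rule circulation of A around the unit square, anticlockwise."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        ax, ay = A(t, 0.0);        total += ax * h    # bottom edge, dl = (h, 0)
        ax, ay = A(1.0, t);        total += ay * h    # right edge,  dl = (0, h)
        ax, ay = A(1.0 - t, 1.0);  total -= ax * h    # top edge,    dl = (-h, 0)
        ax, ay = A(0.0, 1.0 - t);  total -= ay * h    # left edge,   dl = (0, -h)
    return total

# Stokes' theorem: the circulation equals the flux of curl A through
# the square, namely 2 * (unit area) = 2.
assert abs(line_integral() - 2.0) < 1e-9
```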
One immediate consequence of Stokes' theorem is that curl A is incompressible.
Consider two surfaces, S1 and S2, which share the same rim. It is clear
from Stokes' theorem that ∫ curl A · dS is the same for both surfaces. Thus, it
follows that ∮ curl A · dS = 0 for any closed surface. However, we have from the
divergence theorem that ∮ curl A · dS = ∫ div(curl A) dV = 0 for any volume.
Hence,
    div(curl A) ≡ 0.    (2.151)
So, the field-lines of curl A never begin or end. In other words, curl A is a
solenoidal field.
We have seen that for a conservative field ∮ A·dl = 0 for any loop. This
is entirely equivalent to A = grad φ. However, the magnitude of curl A is
lim_{dS→0} ∮ A·dl / dS for some particular loop. It is clear then that curl A = 0
for a conservative field. In other words,
    curl(grad φ) ≡ 0.    (2.152)
We can also combine curl with itself to form
    curl(curl A) = grad(div A) − ∇²A,    (2.153)
where
    ∇²A = (∇²Ax, ∇²Ay, ∇²Az).    (2.154)
It should be emphasized, however, that the above result is only valid in Cartesian
coordinates.
2.18 Summary

Vector addition:
    a + b ≡ (ax + bx, ay + by, az + bz)

Vector multiplication:
    n a ≡ (n ax, n ay, n az)

Scalar product:
    a · b = ax bx + ay by + az bz

Vector product:
    a × b = (ay bz − az by, az bx − ax bz, ax by − ay bx)

Scalar triple product:
    a · b × c = a × b · c = b · c × a = −b · a × c

Gradient:
    grad φ = (∂φ/∂x, ∂φ/∂y, ∂φ/∂z)
Divergence:
    div A = ∂Ax/∂x + ∂Ay/∂y + ∂Az/∂z

Curl:
    curl A = (∂Az/∂y − ∂Ay/∂z, ∂Ax/∂z − ∂Az/∂x, ∂Ay/∂x − ∂Ax/∂y)

Gauss' theorem:
    ∮_S A·dS = ∫_V div A dV

Stokes' theorem:
    ∮_C A·dl = ∫_S curl A·dS

Del operator:
    ∇ = (∂/∂x, ∂/∂y, ∂/∂z)

    grad φ = ∇φ
    div A = ∇·A
    curl A = ∇×A
Vector identities:
    ∇·∇φ = ∇²φ = ∂²φ/∂x² + ∂²φ/∂y² + ∂²φ/∂z²
    ∇·∇×A = 0
    ∇×∇φ = 0
    ∇²A = ∇(∇·A) − ∇×∇×A

Other vector identities:
    ∇(φψ) = φ∇(ψ) + ψ∇(φ)
    ∇·(φA) = φ∇·A + A·∇φ
    ∇×(φA) = φ∇×A + ∇φ×A
    ∇·(A×B) = B·∇×A − A·∇×B
    ∇×(A×B) = A(∇·B) − B(∇·A) + (B·∇)A − (A·∇)B
    ∇(A·B) = A×(∇×B) + B×(∇×A) + (A·∇)B + (B·∇)A
Acknowledgment
This section is almost entirely based on my undergraduate notes taken during a
course of lectures given by Dr. Steven Gull of the Cavendish Laboratory, Cambridge.
3 Maxwell's equations

3.1 Coulomb's law
Between 1785 and 1787 the French physicist Charles Augustin de Coulomb performed a series of experiments involving electric charges and eventually established what is nowadays known as Coulomb's law. According to this law the
force acting between two charges is radial, inverse-square, and proportional to the
product of the charges. Two like charges repel one another whereas two unlike
charges attract. Suppose that two charges, q1 and q2 , are located at position
vectors r1 and r2 . The electrical force acting on the second charge is written
    f2 = (q1 q2 / 4πε0) (r2 − r1) / |r2 − r1|³    (3.1)
in vector notation. An equal and opposite force acts on the first charge, in
accordance with Newton's third law of motion:
    f1 = −f2.    (3.2)
[Diagram: charges q1 and q2 at position vectors r1 and r2, showing the separation r2 − r1 and the forces f1 and f2.]
The SI unit of electric charge is the coulomb (C).
Coulomb's law has the same mathematical form as Newton's law of gravity.
Suppose that two masses, m1 and m2, are located at position vectors r1 and r2.
The gravitational force acting on the second mass is written
    f2 = −G m1 m2 (r2 − r1) / |r2 − r1|³    (3.3)
in vector notation. Both forces are central,
    f2 ∝ (r2 − r1),    (3.4)
and inverse-square,
    |f2| ∝ 1 / |r2 − r1|².    (3.5)
However, they differ in two crucial respects. Firstly, the force due to gravity
is always attractive (there is no such thing as a negative mass!). Secondly, the
magnitudes of the two forces are vastly different. Consider the ratio of the electrical and gravitational forces acting on two particles. This ratio is a constant,
independent of the relative positions of the particles, and is given by
    |f_electrical| / |f_gravitational| = (q1 q2 / m1 m2) (1 / 4πε0 G).    (3.6)
For two electrons (q1 = q2 = e, m1 = m2 = m_e), this ratio works out to be
    |f_electrical| / |f_gravitational| ≈ 4 × 10⁴².    (3.7)
This is a colossal number! Suppose you had a homework problem involving the
motion of particles in a box under the action of two forces with the same range
but differing in magnitude by a factor 10⁴². I think that most people would write
on line one something like "it is a good approximation to neglect the weaker force
in favour of the stronger one". In fact, most people would write this even if the
forces differed in magnitude by a factor 10! Applying this reasoning to the motion
of particles in the universe we would expect the universe to be governed entirely
by electrical forces. However, this is not the case. The force which holds us to
the surface of the Earth, and prevents us from floating off into space, is gravity.
The force which causes the Earth to orbit the Sun is also gravity. In fact, on
astronomical length-scales gravity is the dominant force and electrical forces are
largely irrelevant. The key to understanding this paradox is that there are both
positive and negative electric charges whereas there are only positive gravitational
charges. This means that gravitational forces are always cumulative whereas
electrical forces can cancel one another out. Suppose, for the sake of argument,
that the universe starts out with randomly distributed electric charges. Initially,
we expect electrical forces to completely dominate gravity. These forces try to
make every positive charge get as far away as possible from other positive charges
and as close as possible to other negative charges. After a bit we expect the
positive and negative charges to form close pairs. Just how close is determined
by quantum mechanics but, in general, it is pretty close; i.e., about 10⁻¹⁰ m.
The electrical forces due to the charges in each pair effectively cancel one another
out on length-scales much larger than the mutual spacing of the pair. It is only
possible for gravity to be the dominant long-range force if the number of positive
charges in the universe is almost equal to the number of negative charges. In this
situation every positive charge can find a negative charge to team up with and
there are virtually no charges left over. In order for the cancellation of long-range
electrical forces to be effective the relative difference in the number of positive
and negative charges in the universe must be incredibly small. In fact, positive
and negative charges have to cancel each other out to such accuracy that most
physicists believe that the net charge of the universe is exactly zero. But, it is
not enough for the universe to start out with zero charge. Suppose there were
some elementary particle process which did not conserve electric charge. Even
if this were to go on at a very low rate it would not take long before the fine
balance between positive and negative charges in the universe were wrecked. So,
it is important that electric charge is a conserved quantity (i.e., the charge of the
universe can neither increase nor decrease). As far as we know, this is the case.
To date no elementary particle reactions have been discovered which create or
destroy net electric charge.
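The ratio in Eq. (3.6) can be evaluated directly for two electrons; the constants below are standard values quoted to four significant figures.

```python
import math

# Evaluating Eq. (3.6) for two electrons (q1 = q2 = e, m1 = m2 = m_e).
e    = 1.602e-19   # elementary charge (C)
m_e  = 9.109e-31   # electron mass (kg)
eps0 = 8.854e-12   # permittivity of free space (F/m)
G    = 6.674e-11   # gravitational constant (N m^2 kg^-2)

ratio = e**2 / (m_e**2 * 4.0 * math.pi * eps0 * G)
assert 1e42 < ratio < 1e43   # about 4 x 10^42, as quoted in the text
```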
In summary, there are two long-range forces in the universe, electromagnetism
and gravity. The former is enormously stronger than the latter, but is usually
hidden away inside neutral atoms. The fine balance of forces due to negative
and positive electric charges starts to break down on atomic scales. In fact, interatomic and intermolecular forces are electrical in nature. So, electrical forces are
basically what prevents us from falling through the floor. But, this is electromagnetism
acting on particles are not quite equal and opposite, momentum is still conserved.
We can bypass some of the problematic aspects of action at a distance by only
considering steady-state situations. For the moment, this is how we shall proceed.
Consider N charges, q1 through qN, which are located at position vectors r1
through rN . Electrical forces obey what is known as the principle of superposition.
The electrical force acting on a test charge q at position vector r is simply the
vector sum of all of the Coulomb law forces from each of the N charges taken in
isolation. In other words, the electrical force exerted by the ith charge (say) on
the test charge is the same as if all the other charges were not there. Thus, the
force acting on the test charge is given by
    f(r) = (q / 4πε0) Σ_{i=1}^{N} qᵢ (r − rᵢ) / |r − rᵢ|³.    (3.8)
It is helpful to define a vector field E(r), called the electric field, which is the
force exerted on a unit test charge located at position vector r. So, the force on
a test charge is written
f = q E,
(3.9)
and the electric field is given by
    E(r) = (1 / 4πε0) Σ_{i=1}^{N} qᵢ (r − rᵢ) / |r − rᵢ|³.    (3.10)
At this point, we have no reason to believe that the electric field has any real
existence; it is just a useful device for calculating the force which acts on test
charges placed at various locations.
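Equation (3.10) translates directly into a short routine. The sketch below (charges and positions are illustrative) sums the Coulomb fields of a set of point charges; for a dipole, the field components transverse to the dipole axis cancel on the perpendicular bisector, as expected.

```python
import math

eps0 = 8.854e-12  # permittivity of free space (F/m)

def electric_field(r, charges):
    """Superposition, Eq. (3.10): E(r) = (1/4 pi eps0) sum_i q_i (r - r_i)/|r - r_i|^3."""
    Ex = Ey = Ez = 0.0
    for q, (xi, yi, zi) in charges:
        dx, dy, dz = r[0] - xi, r[1] - yi, r[2] - zi
        d3 = (dx * dx + dy * dy + dz * dz) ** 1.5
        k = q / (4.0 * math.pi * eps0 * d3)
        Ex += k * dx; Ey += k * dy; Ez += k * dz
    return (Ex, Ey, Ez)

# Illustrative dipole: +q and -q on the x-axis, field evaluated on the
# perpendicular bisector, where the transverse components cancel.
q = 1e-9
E = electric_field((0.0, 1.0, 0.0), [(q, (-0.5, 0.0, 0.0)), (-q, (0.5, 0.0, 0.0))])
assert E[0] > 0.0            # field points from the + charge toward the - charge
assert abs(E[1]) < 1e-6      # transverse component cancels by symmetry
```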
The electric field from a single charge q located at the origin is purely radial,
points outwards if the charge is positive, inwards if it is negative, and has
magnitude
    E_r(r) = q / (4πε0 r²),    (3.11)
where r = |r|.
3.2 The electric scalar potential

Observe that, since
    ∂/∂x [ (x − x')² + (y − y')² + (z − z')² ]^(−1/2) = −(x − x') / |r − r'|³,    (3.14)
and similarly for the y- and z-derivatives, we can write
    ∇ (1/|r − r'|) = −(r − r') / |r − r'|³,    (3.15)
where ∇ ≡ (∂/∂x, ∂/∂y, ∂/∂z) is a differential operator which involves the components of r but not those of r'. It follows from Eq. (3.12) that
    E = −∇φ,    (3.16)
where
    φ(r) = (1/4πε0) ∫ ρ(r') / |r − r'| d³r'.    (3.17)
Thus, the electric field generated by a collection of fixed charges can be written
as the gradient of a scalar potential, and this potential can be expressed as a
simple volume integral involving the charge distribution.
The scalar potential generated by a charge q located at the origin is
    φ(r) = q / (4πε0 r).    (3.18)
Similarly, the scalar potential generated by a set of N discrete charges is
    φ(r) = Σ_{i=1}^{N} φᵢ(r),    (3.19)
where
    φᵢ(r) = qᵢ / (4πε0 |r − rᵢ|).    (3.20)
Thus, the scalar potential is just the sum of the potentials generated by each of
the charges taken in isolation.
Suppose that a particle of charge q is taken along some path from point P to
point Q. The net work done on the particle by electrical forces is
    W = ∫_P^Q f·dl,    (3.21)
where f is the electrical force and dl is a line element along the path. Making
use of Eqs. (3.9) and (3.16) we obtain
    W = q ∫_P^Q E·dl = −q ∫_P^Q ∇φ·dl = −q ( φ(Q) − φ(P) ).    (3.22)
Thus, the work done on the particle is simply minus its charge times the difference in electric potential between the end point and the beginning point. This
quantity is clearly independent of the path taken from P to Q. So, an electric
field generated by stationary charges is an example of a conservative field. In
fact, this result follows immediately from vector field theory once we are told, in
Eq. (3.16), that the electric field is the gradient of a scalar potential. The work
done on the particle when it is taken around a closed path is zero, so
    ∮_C E·dl = 0    (3.23)
for any closed loop C. This implies from Stokes' theorem that
    ∇ × E = 0    (3.24)
for any electric field generated by stationary charges. Equation (3.24) also follows
directly from Eq. (3.16), since ∇ × ∇φ = 0 for any scalar potential φ.
The SI unit of electric potential is the volt, which is equivalent to a joule per
coulomb. Thus, according to Eq. (3.22) the electrical work done on a particle
when it is taken between two points is the product of its charge and the voltage
difference between the points.
We are familiar with the idea that a particle moving in a gravitational field
possesses potential energy as well as kinetic energy. If the particle moves from
point P to a lower point Q then the gravitational field does work on the particle causing its kinetic energy to increase. The increase in kinetic energy of
the particle is balanced by an equal decrease in its potential energy so that the
overall energy of the particle is a conserved quantity. Therefore, the work done
on the particle as it moves from P to Q is minus the difference in its gravitational potential energy between points Q and P . Of course, it only makes
sense to talk about gravitational potential energy because the gravitational field
is conservative. Thus, the work done in taking a particle between two points is
path independent and, therefore, well defined. This means that the difference
in potential energy of the particle between the beginning and end points is also
well defined. We have already seen that an electric field generated by stationary
charges is a conservative field. It follows that we can define an electrical potential
energy of a particle moving in such a field. By analogy with gravitational fields,
the work done in taking a particle from point P to point Q is equal to minus the
difference in potential energy of the particle between points Q and P . It follows
from Eq. (3.22) that the potential energy of the particle at a general point Q,
relative to some reference point P , is given by
    E(Q) = q φ(Q).    (3.25)
Free particles try to move down gradients of potential energy in order to attain a
minimum potential energy state. Thus, free particles in the Earth's gravitational
field tend to fall downwards. Likewise, positive charges moving in an electric field
tend to migrate towards regions with the most negative voltage and vice versa
for negative charges.
The scalar electric potential is undefined to an additive constant. So, the
transformation
    φ(r) → φ(r) + c    (3.26)
leaves the electric field unchanged according to Eq. (3.16). The potential can
be fixed unambiguously by specifying its value at a single point. The usual
convention is to say that the potential is zero at infinity. This convention is
implicit in Eq. (3.17), where it can be seen that φ → 0 as |r| → ∞, provided that
the total charge ∫ ρ(r') d³r' is finite.
3.3 Gauss' law
Consider a single charge located at the origin. The electric field generated by
such a charge is given by Eq. (3.11). Suppose that we surround the charge by a
concentric spherical surface S of radius r. The flux of the electric field through
this surface is given by
    ∮_S E·dS = ∮_S E_r dS_r = E_r(r) 4πr² = [q/(4πε0 r²)] 4πr² = q/ε0,    (3.27)
since the normal to the surface is always parallel to the local electric field. However, we also know from Gauss' theorem that
    ∮_S E·dS = ∫_V ∇·E d³r,    (3.28)
where V is the volume enclosed by the surface S. Let us evaluate ∇·E directly. In
Cartesian coordinates, the field of the point charge has E_x = (q/4πε0) x/r³, with
r² = x² + y² + z². Thus,
    ∂E_x/∂x = (q/4πε0) [ 1/r³ − (3x/r⁴)(∂r/∂x) ] = (q/4πε0) (r² − 3x²)/r⁵,    (3.30)
since
    ∂r/∂x = x/r.    (3.31)
Formulae analogous to Eq. (3.30) can be obtained for ∂E_y/∂y and ∂E_z/∂z. The
divergence of the field is given by
    ∇·E = ∂E_x/∂x + ∂E_y/∂y + ∂E_z/∂z = (q/4πε0) (3r² − 3x² − 3y² − 3z²)/r⁵ = 0.    (3.32)
This is a puzzling result! We have from Eqs. (3.27) and (3.28) that
    ∫_V ∇·E d³r = q/ε0,    (3.33)
and yet we have just proved that ∇·E = 0. This paradox can be resolved after a
close examination of Eq. (3.32). At the origin (r = 0) we find that ∇·E = 0/0,
which means that ∇·E can take any value at this point. Thus, Eqs. (3.32) and
(3.33) can be reconciled if ∇·E is some sort of "spike" function; i.e., it is zero
everywhere except arbitrarily close to the origin, where it becomes very large.
This must occur in such a manner that the volume integral over the spike is
finite.
Let us examine how we might construct a one-dimensional spike function.
Consider the box-car function
    g(x, a) = 1/a    for |x| < a/2,
    g(x, a) = 0      otherwise.    (3.34)
[Graph: the box-car function g(x, a), of height 1/a between x = −a/2 and x = +a/2.]
It is clear that
    ∫_{−∞}^{+∞} g(x, a) dx = 1.    (3.35)
Now consider the function
    δ(x) = lim_{a→0} g(x, a).    (3.36)
This function is zero everywhere except x = 0, where it becomes infinite, yet it preserves unit area:
    ∫_{−∞}^{+∞} δ(x) dx = 1.    (3.37)
Thus, δ(x) has all of the required properties of a spike function. The one-dimensional spike function δ(x) is called the Dirac delta-function after the
Cambridge physicist Paul Dirac who invented it in 1927 while investigating quantum mechanics. The delta-function is an example of what mathematicians call a
generalized function; it is not well-defined at x = 0, but its integral is nevertheless well-defined. Consider the integral
    ∫_{−∞}^{+∞} f(x) δ(x) dx,    (3.38)
Since δ(x) is only non-zero infinitesimally close to x = 0, we can safely replace f(x) by f(0) in the above integral, giving
    ∫_{−∞}^{+∞} f(x) δ(x) dx = f(0) ∫_{−∞}^{+∞} δ(x) dx = f(0),    (3.39)
where use has been made of Eq. (3.37). The above equation, which is valid for
any well-behaved function f(x), is effectively the definition of a delta-function.
A simple change of variables allows us to define δ(x − x0), which is a spike
function centred on x = x0. Equation (3.39) gives
    ∫_{−∞}^{+∞} f(x) δ(x − x0) dx = f(x0).    (3.40)
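The limiting behaviour of the box-car function can be seen numerically. The sketch below (the test function f(x) = cos x is chosen purely for illustration) evaluates ∫ f(x) g(x, a) dx for a shrinking width a; the result approaches f(0) = 1.

```python
import math

# Numerical look at the spike limit: integrate f(x) g(x, a) for the
# box-car g of Eq. (3.34), with the illustrative choice f(x) = cos x.
def integral_f_times_g(a, n=20001):
    """Midpoint rule for the integral of cos(x) * g(x, a); g = 1/a on |x| < a/2."""
    h = a / n
    total = 0.0
    for i in range(n):
        x = -a / 2.0 + (i + 0.5) * h
        total += math.cos(x) * (1.0 / a) * h
    return total

# As a -> 0 the result approaches f(0) = 1, as Eq. (3.39) requires.
assert abs(integral_f_times_g(1.0) - 1.0) > abs(integral_f_times_g(0.01) - 1.0)
assert abs(integral_f_times_g(0.01) - 1.0) < 1e-4
```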
The integral can be turned into an integral over all space by taking the limit
a → ∞. However, we know that for one-dimensional delta-functions ∫ δ(x) dx = 1,
so it follows from the above equation that
    ∫ δ(r) d³r = 1,    (3.43)
which is the desired result. A simple generalization of previous arguments yields
    ∫ f(r) δ(r) d³r = f(0),    (3.44)
where f(r) is any well-behaved scalar field. Finally, we can change variables and
write
    δ(r − r') = δ(x − x')δ(y − y')δ(z − z'),    (3.45)
which is a spike function centred on r = r'. The electric field generated by a point
charge q located at the origin has ∇·E = 0 everywhere apart from the origin, and
also satisfies
    ∫_V ∇·E d³r = q/ε0    (3.47)
for a spherical volume V centered on the origin. These two facts imply that
    ∇·E = (q/ε0) δ(r),    (3.48)
This follows from Gauss' theorem, plus Eq. (3.48). From these, it is clear that the flux of E
out of S is q/ε0 for a spherical surface displaced from the origin. However, the
flux becomes zero when the displacement is sufficiently large that the origin is
no longer enclosed by the surface. Superposition of the fields of N discrete charges
qᵢ, located at rᵢ, gives
    ∇·E = (1/ε0) Σ_{i=1}^{N} qᵢ δ(r − rᵢ).    (3.50)
Integrating this expression over an arbitrary volume V bounded by a surface S, and
making use of Gauss' theorem, we obtain
    ∮_S E·dS = Q/ε0,    (3.51)
where Q is the total charge enclosed by the surface S. This result is called Gauss'
law, and does not depend on the shape of the surface.
Suppose, finally, that instead of having a set of discrete charges we have a
continuous charge distribution described by a charge density ρ(r). The charge
contained in a small rectangular volume of dimensions dx, dy, and dz, located at
position r is Q = ρ(r) dx dy dz. However, if we integrate ∇·E over this volume
element we obtain
    ∇·E dx dy dz = Q/ε0 = ρ dx dy dz / ε0,    (3.52)
where use has been made of Eq. (3.51). Here, the volume element is assumed to
be sufficiently small that ∇·E does not vary significantly across it. Thus, we
obtain
    ∇·E = ρ/ε0.    (3.53)
This is the first of four field equations, called Maxwell's equations, which together
form a complete description of electromagnetism. Of course, our derivation of
Eq. (3.53) is only valid for electric fields generated by stationary charge distributions. In principle, additional terms might be required to describe fields generated
by moving charge distributions. However, it turns out that this is not the case
and that Eq. (3.53) is universally valid.
Equation (3.53) is a differential equation describing the electric field generated
by a set of charges. We already know the solution to this equation when the
charges are stationary; it is given by Eq. (3.12):
    E(r) = (1/4πε0) ∫ ρ(r') (r − r') / |r − r'|³ d³r'.    (3.54)
Equations (3.53) and (3.54) can be reconciled provided
    ∇·[ (r − r') / |r − r'|³ ] = −∇² (1/|r − r'|) = 4π δ(r − r'),    (3.55)
where use has been made of Eq. (3.15). It follows that
    ∇·E(r) = (1/4πε0) ∫ ρ(r') ∇·[ (r − r') / |r − r'|³ ] d³r'
           = ∫ [ρ(r')/ε0] δ(r − r') d³r' = ρ(r)/ε0,    (3.56)
which is the desired result. The most general form of Gauss' law, Eq. (3.51), is
obtained by integrating Eq. (3.53) over a volume V surrounded by a surface S
and making use of Gauss' theorem:
    ∮_S E·dS = (1/ε0) ∫_V ρ(r) d³r.    (3.57)
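Gauss' law can be checked by brute force for a single point charge. The sketch below (the charge and radius are arbitrary) sums E·dS over a spherical mesh, as in Eq. (3.27), and recovers q/ε0 independently of the radius.

```python
import math

eps0 = 8.854e-12
q, R = 1e-9, 2.0   # illustrative charge (C) and sphere radius (m)

def flux(n_theta=200, n_phi=400):
    """Sum E . dS over a theta-phi mesh on the sphere of radius R."""
    dth = math.pi / n_theta
    dph = 2.0 * math.pi / n_phi
    Er = q / (4.0 * math.pi * eps0 * R * R)  # radial field on the sphere
    total = 0.0
    for i in range(n_theta):
        th = (i + 0.5) * dth
        dS = R * R * math.sin(th) * dth * dph   # area element
        # E is parallel to the outward normal, so E . dS = Er dS;
        # the phi sum just multiplies by n_phi.
        total += n_phi * Er * dS
    return total

assert abs(flux() - q / eps0) < 1e-3 * (q / eps0)
```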
3.4 Poisson's equation
We have seen that the electric field generated by a set of stationary charges can
be written as the gradient of a scalar potential, so that
    E = −∇φ.    (3.58)
This equation can be combined with the field equation (3.53) to give a partial
differential equation for the scalar potential:
    ∇²φ = −ρ/ε0.    (3.59)
This is an example of a very famous type of partial differential equation known
as Poisson's equation. The solutions to Poisson's equation are superposable: if φ1
is the potential generated by a source ρ1, and φ2 is the potential generated by a
source ρ2, then φ1 + φ2 is the potential generated by ρ1 + ρ2.
Poisson's equation has this property because it is linear in both the potential and
the source term.
The fact that the solutions to Poisson's equation are superposable suggests
a general method for solving this equation. Suppose that we could construct
all of the solutions generated by point sources. Of course, these solutions must
satisfy the appropriate boundary conditions. Any general source function can be
built up out of a set of suitably weighted point sources, so the general solution of
Poisson's equation must be expressible as a weighted sum over the point source
solutions. Thus, once we know all of the point source solutions we can construct
any other solution. In mathematical terminology, we require the solution to
    ∇² G(r, r') = δ(r − r')    (3.63)
which goes to zero as |r| → ∞. The function G(r, r') is the solution generated by
a point source located at position r'. In mathematical terminology this function
is known as a Green's function. The solution generated by a general source
function v(r) is simply the appropriately weighted sum of all of the Green's
function solutions:
    u(r) = ∫ G(r, r') v(r') d³r'.    (3.64)
We can easily demonstrate that this is the correct solution:
    ∇²u(r) = ∫ [∇² G(r, r')] v(r') d³r' = ∫ δ(r − r') v(r') d³r' = v(r).    (3.65)
Let us return to the problem in hand: Poisson's equation for the scalar potential,
    ∇²φ = −ρ/ε0.    (3.66)
The Green's function for this equation satisfies Eq. (3.63) with |G| → 0 as
|r| → ∞. It follows from Eq. (3.55) that
    G(r, r') = −(1/4π) 1/|r − r'|.    (3.67)
Note from Eq. (3.20) that the Green's function has the same form as the potential
generated by a point charge. This is hardly surprising given the definition of a
Green's function. It follows from Eqs. (3.64) and (3.67) that the general solution
to Poisson's equation (3.66) is written
    φ(r) = (1/4πε0) ∫ ρ(r') / |r − r'| d³r'.    (3.68)
In fact, we have already obtained this solution by another method [see Eq. (3.17) ].
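Away from the source point, the Green's function (3.67) should satisfy Laplace's equation. The sketch below checks this with a central finite difference (the evaluation point and step size are arbitrary choices).

```python
import math

def G(x, y, z):
    """Green's function of Eq. (3.67) with the source at the origin."""
    return -1.0 / (4.0 * math.pi * math.sqrt(x * x + y * y + z * z))

def laplacian(f, x, y, z, h=1e-3):
    """Second-order central-difference Laplacian."""
    return (f(x + h, y, z) + f(x - h, y, z) +
            f(x, y + h, z) + f(x, y - h, z) +
            f(x, y, z + h) + f(x, y, z - h) - 6.0 * f(x, y, z)) / h**2

# Away from the source, G satisfies Laplace's equation, consistent
# with the delta-function source in Eq. (3.63).
assert abs(laplacian(G, 1.0, 0.5, -0.7)) < 1e-6
```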
3.5 Ampère's experiments
In 1820 the Danish physicist Hans Christian Ørsted was giving a lecture demonstration of various electrical and magnetic effects. Suddenly, much to his surprise,
he noticed that the needle of a compass he was holding was deflected when he
moved it close to a current carrying wire. Up until then magnetism had been
thought of as solely a property of some rather unusual rocks called lodestones.
Word of this discovery spread quickly along the scientific grapevine, and the
French physicist André Marie Ampère immediately decided to investigate further. Ampère's apparatus consisted (essentially) of a long straight wire carrying
an electric current I. Ampère quickly discovered that the needle of a small
compass maps out a series of concentric circular loops in the plane perpendicular
to a current carrying wire. The direction of circulation around these magnetic
loops is conventionally taken to be the direction in which the north pole of the
compass needle points. Using this convention, the circulation of the loops is given
by a right-hand rule: if the thumb of the right-hand points along the direction of
the current then the fingers of the right-hand circulate in the same sense as the
magnetic loops.
Ampère's next series of experiments involved bringing a short test wire, carrying a current I', close to the original wire and investigating the force exerted on
the test wire. This experiment is not quite as clear cut as Coulomb's experiment
[Diagram: the central wire carrying current I, with a current-carrying test wire placed nearby.]
because, unlike electric charges, electric currents cannot exist as point entities;
they have to flow in complete circuits. We must imagine that the circuit which
connects with the central wire is sufficiently far away that it has no appreciable
influence on the outcome of the experiment. The circuit which connects with the
test wire is more problematic. Fortunately, if the feed wires are twisted around
each other, as indicated in the diagram, then they effectively cancel one another
out. Ampère discovered that the force per unit length acting on the test wire is
    F = I' × B,    (3.69)
where B is the magnetic field, and I' is a vector whose direction and magnitude are the same as those of the
test current. Incidentally, the SI unit of electric current is the ampere (A), which
is the same as a coulomb per second. The SI unit of magnetic field strength
is the tesla (T), which is the same as a newton per ampere per meter. The
variation of the force per unit length acting on a test wire with the strength of
the central current and the perpendicular distance r to the central wire is summed
up in the formula
    B = μ0 I / (2π r),    (3.70)
where
    μ0 = 4π × 10⁻⁷ N A⁻²    (3.71)
is the permeability of free space.
The concept of a magnetic field allows the calculation of the force on a test
wire to be conveniently split into two parts. In the first part, we calculate the
magnetic field generated by the current flowing in the central wire. This field
circulates in the plane normal to the wire; its magnitude is proportional to the
central current and inversely proportional to the perpendicular distance from the
wire. In the second part, we use Eq. (3.69) to calculate the force per unit length
acting on a short current carrying wire located in the magnetic field generated
by the central current. This force is perpendicular to both the magnetic field and
the direction of the test current. Note that, at this stage, we have no reason to
suppose that the magnetic field has any real existence. It is introduced merely to
facilitate the calculation of the force exerted on the test wire by the central wire.
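The two-step calculation described above is easily sketched in code (the currents and spacing below are illustrative): first compute the field of the central wire from Eq. (3.70), then the force per unit length on a perpendicular test current from Eq. (3.69).

```python
import math

mu0 = 4.0e-7 * math.pi   # permeability of free space (N A^-2), Eq. (3.71)

# Illustrative currents (A) and perpendicular spacing (m).
I_central, I_test, r = 10.0, 2.0, 0.05

# Step 1: field of the central wire at the test wire, Eq. (3.70).
B = mu0 * I_central / (2.0 * math.pi * r)
# Step 2: force per unit length on the test wire, F = I' x B, Eq. (3.69);
# the test current is perpendicular to B, so |F| = I' B.
F = I_test * B

assert abs(F - mu0 * I_central * I_test / (2.0 * math.pi * r)) < 1e-12
```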
3.6 The Lorentz force
The flow of an electric current down a conducting wire is ultimately due to the
motion of electrically charged particles (in most cases, electrons) through the
conducting medium. It seems reasonable, therefore, that the force exerted on
the wire when it is placed in a magnetic field is really the resultant of the forces
exerted on these moving charges. Let us suppose that this is the case.
Let A be the (uniform) cross-sectional area of the wire, and let n be the
number density of mobile charges in the conductor. Suppose that the mobile
charges each have charge q and velocity v. We must assume that the conductor
also contains stationary charges, of charge −q and number density n, say, so that
the net charge density in the wire is zero. In most conductors the mobile charges
are electrons and the stationary charges are atomic nuclei. The magnitude of
the electric current flowing through the wire is simply the number of coulombs
per second which flow past a given point. In one second a mobile charge moves
a distance v, so all of the charges contained in a cylinder of cross-sectional area
A and length v flow past a given point. Thus, the magnitude of the current is
q nA v. The direction of the current is the same as the direction of motion of the
charges, so the vector current is I' = q n A v. According to Eq. (3.69) the force
per unit length acting on the wire is
    F = q n A v × B.    (3.72)
However, a unit length of the wire contains nA moving charges. So, assuming
that each charge is subject to an equal force from the magnetic field (we have no
reason to suppose otherwise), the force acting on an individual charge is
    f = q v × B.    (3.73)
We can combine this with Eq. (3.9) to give the force acting on a charge q moving
with velocity v in an electric field E and a magnetic field B:
    f = q E + q v × B.    (3.74)
This is called the Lorentz force law after the Dutch physicist Hendrik Antoon
Lorentz who first formulated it. The electric force on a charged particle is parallel
to the local electric field. The magnetic force, however, is perpendicular to both
the local magnetic field and the particle's direction of motion. No magnetic force
is exerted on a stationary charged particle.
The equation of motion of a free particle of charge q and mass m moving in
electric and magnetic fields is
    m dv/dt = q E + q v × B,    (3.75)
according to the Lorentz force law. This equation of motion was verified in a
famous experiment carried out by the Cambridge physicist J.J. Thomson in
1897. Thomson was investigating cathode rays, a then mysterious form of
radiation emitted by a heated metal element held at a large negative voltage (i.e.
a cathode) with respect to another metal element (i.e., an anode) in an evacuated
tube. German physicists held that cathode rays were a form of electromagnetic
radiation, whilst British and French physicists suspected that they were, in reality,
a stream of charged particles. Thomson was able to demonstrate that the latter
view was correct. In Thomson's experiment the cathode rays passed through
a region of crossed electric and magnetic fields (still in vacuum). The fields
were perpendicular to the original trajectory of the rays and were also mutually
perpendicular.
Let us analyze Thomson's experiment. Suppose that the rays are originally
traveling in the x-direction, and are subject to a uniform electric field E in the
z-direction and a uniform magnetic field B in the y-direction. Let us assume, as
Thomson did, that cathode rays are a stream of particles of mass m and charge
q. The equation of motion of the particles in the z-direction is
    m d²z/dt² = q (E − v B),    (3.76)
where v is the velocity of the particles in the x-direction. Thomson started off his
experiment by only turning on the electric field in his apparatus and measuring
the deflection d of the ray in the z-direction after it had traveled a distance l
through the electric field. It is clear from the equation of motion that
    d = (q/m) E t² / 2 = (q/m) E l² / (2 v²),    (3.77)
where the time of flight t is replaced by l/v. This formula is only valid if d ≪ l,
which is assumed to be the case. Next, Thomson turned on the magnetic field
in his apparatus and adjusted it so that the cathode ray was no longer deflected.
The lack of deflection implies that the net force on the particles in the z-direction
is zero. In other words, the electric and magnetic forces balance exactly. It follows
from Eq. (3.76) that with a properly adjusted magnetic field strength
    v = E/B.    (3.78)
Thus, Eqs. (3.77) and (3.78) can be combined and rearranged to give the
charge to mass ratio of the particles:
    q/m = 2 d E / (l² B²).    (3.79)
Using this method Thomson inferred that cathode rays were made up of negatively charged particles (the sign of the charge is obvious from the direction of the
deflection in the electric field) with a charge to mass ratio of 1.7 × 10¹¹ C/kg.
A decade later in 1908 the American Robert Millikan performed his famous oil
drop experiment and discovered that mobile electric charges are quantized in
units of 1.6 × 10⁻¹⁹ C. Assuming that mobile electric charges and the particles
which make up cathode rays are one and the same thing, Thomson's and
Millikan's experiments imply that the mass of these particles is 9.4 × 10⁻³¹ kg.
Of course, this is the mass of an electron (the modern value is 9.1 × 10⁻³¹ kg),
and 1.6 × 10⁻¹⁹ C is the charge of an electron. Thus, cathode rays are, in fact,
streams of electrons which are emitted from a heated cathode and then accelerated
because of the large voltage difference between the cathode and anode.
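As a quick sanity check, the arithmetic linking Thomson's charge-to-mass ratio to Millikan's charge quantum can be reproduced in a few lines (a sketch using the historical values quoted above):

```python
# Sketch: reproduce the mass estimate from Thomson's q/m and Millikan's
# charge quantum (historical values as quoted in the text).
q_over_m = 1.7e11   # C/kg, Thomson's charge-to-mass ratio for cathode rays
e = 1.6e-19         # C, Millikan's quantum of mobile charge

m = e / q_over_m    # implied mass of the cathode-ray particles
print(f"implied mass = {m:.2e} kg")   # ~9.4e-31 kg, close to the electron mass
```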
If a particle is subject to a force f and moves a distance δr in a time interval
δt then the work done on the particle by the force is

δW = f · δr.                                                    (3.80)

The power input to the particle is therefore

P = lim_{δt→0} δW/δt = f · v,                                   (3.81)
where v is the particle's velocity. It follows from the Lorentz force law, Eq. (3.74),
that the power input to a particle moving in electric and magnetic fields is
P = q v · E.                                                    (3.82)
Note that a charged particle can gain (or lose) energy from an electric field but not
from a magnetic field. This is because the magnetic force is always perpendicular
to the particle's direction of motion and, therefore, does no work on the particle
[see Eq. (3.80)]. Thus, in particle accelerators magnetic fields are often used to
guide particle motion (e.g., in a circle) but the actual acceleration is performed
by electric fields.
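The claim that a magnetic field does no work can be illustrated numerically. The following sketch (arbitrary units, a simple second-order midpoint integrator; all values are assumed for illustration) pushes a charged particle around in a uniform magnetic field and confirms that its speed is unchanged:

```python
import math

# Sketch (arbitrary units): a charged particle in a uniform magnetic field
# B = B z-hat, integrated with a second-order midpoint scheme. The magnetic
# force q v x B is always perpendicular to v, so the speed should not change.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

q, m, B = 1.0, 1.0, 2.0          # charge, mass, field strength (assumed values)
v = (1.0, 0.0, 0.5)              # initial velocity
dt, steps = 1e-4, 20000

def accel(v):
    fx, fy, fz = cross(v, (0.0, 0.0, B))   # f = q v x B (E = 0 here)
    return (q*fx/m, q*fy/m, q*fz/m)

speed0 = math.sqrt(sum(c*c for c in v))
for _ in range(steps):
    a1 = accel(v)
    vmid = tuple(v[i] + 0.5*dt*a1[i] for i in range(3))
    a2 = accel(vmid)
    v = tuple(v[i] + dt*a2[i] for i in range(3))
speed1 = math.sqrt(sum(c*c for c in v))

print(speed0, speed1)   # the two speeds agree closely: B does no work
```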
3.7
Ampère's law
Magnetic fields, like electric fields, are completely superposable. So, if a field
B1 is generated by a current I1 flowing through some circuit, and a field B2 is
generated by a current I2 flowing through another circuit, then when the currents
I1 and I2 flow through both circuits simultaneously the generated magnetic field
is B1 + B2 .
[Figure: two parallel circuits carrying currents I1 and I2, generating fields B1 and B2; F denotes the force between them.]
Consider two parallel wires separated by a perpendicular distance r and carrying
electric currents I1 and I2, respectively. The magnetic field strength at the
second wire due to the current flowing in the first wire is B = μ₀ I1 / 2πr. This
field is orientated at right angles to the second wire, so the force per unit length
exerted on the second wire is

F = μ₀ I1 I2 / 2πr.                                             (3.83)
This follows from Eq. (3.69), which is valid for continuous wires as well as short
test wires. The force acting on the second wire is directed radially inwards
towards the first wire. The magnetic field strength at the first wire due to the
current flowing in the second wire is B = μ₀ I2 / 2πr. This field is orientated at
right angles to the first wire, so the force per unit length acting on the first wire
is equal and opposite to that acting on the second wire, according to Eq. (3.69).
Equation (3.83) is sometimes called Ampère's law and is clearly another example
of an action at a distance law; i.e., if the current in the first wire is suddenly
changed then the force on the second wire immediately adjusts, whilst in reality
there should be a short time delay, at least as long as the propagation time for a
light signal between the two wires. Clearly, Ampère's law is not strictly correct.
However, as long as we restrict our investigations to steady currents it is perfectly
adequate.
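Equation (3.83) is easy to evaluate numerically. As a sketch, the configuration historically used to define the ampere (two wires 1 m apart, each carrying 1 A) gives a force per unit length of 2 × 10⁻⁷ N/m:

```python
import math

# Sketch: force per unit length between parallel wires, Eq. (3.83).
mu0 = 4e-7 * math.pi   # permeability of free space, N/A^2

def force_per_length(I1, I2, r):
    """Attractive force per metre between parallel currents I1, I2 at separation r."""
    return mu0 * I1 * I2 / (2.0 * math.pi * r)

# Two wires 1 m apart, each carrying 1 A: the historical definition of the ampere.
print(force_per_length(1.0, 1.0, 1.0))   # 2e-7 N/m
```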
3.8
Magnetic monopoles?
Suppose that we have an infinite straight wire carrying an electric current I. Let
the wire be aligned along the z-axis. The magnetic field generated by such a wire
is written
B = (μ₀ I / 2π) (−y/r², x/r², 0)                                (3.84)

in Cartesian coordinates, where r = √(x² + y²). The divergence of this field is

∇·B = (μ₀ I / 2π) (2yx/r⁴ − 2xy/r⁴) = 0,                        (3.85)

where use has been made of ∂r/∂x = x/r, etc. We saw in Section 3.3 that the
divergence of the electric field appeared, at first sight, to be zero, but, in reality,
it was a delta-function because the volume integral of ∇·E was non-zero. Does
the same sort of thing happen for the divergence of the magnetic field? Well,
if we could find a closed surface S for which ∮_S B·dS ≠ 0 then, according to
Gauss' theorem, ∫_V ∇·B dV ≠ 0, where V is the volume enclosed by S. This
would certainly imply that ∇·B is some sort of delta-function. So, can we find
such a surface? The short answer is, no. Consider a cylindrical surface aligned
with the wire. The magnetic field is everywhere tangential to the outward surface
element, so this surface certainly has zero magnetic flux coming out of it. In fact,
it is impossible to invent any closed surface for which ∮_S B·dS ≠ 0 with B given
by Eq. (3.84) (if you do not believe me, try it yourselves!). This suggests that
the divergence of a magnetic field generated by steady electric currents really is
zero. Admittedly, we have only proved this for infinite straight currents, but, as
will be demonstrated presently, it is true in general.
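The vanishing of ∇·B for the wire field (3.84) can also be spot-checked numerically with centred finite differences (a sketch; the constant μ₀I/2π is set to unity, since it cancels anyway):

```python
# Sketch: centred-finite-difference check that the wire field (3.84) is
# divergence-free away from the z-axis (mu0*I/2pi set to 1; it cancels anyway).

def B(x, y):
    r2 = x*x + y*y
    return (-y/r2, x/r2)          # (Bx, By); Bz = 0 everywhere

def div_B(x, y, h=1e-5):
    dBx_dx = (B(x + h, y)[0] - B(x - h, y)[0]) / (2*h)
    dBy_dy = (B(x, y + h)[1] - B(x, y - h)[1]) / (2*h)
    return dBx_dx + dBy_dy        # dBz/dz = 0 trivially

for p in [(1.0, 0.3), (-0.7, 2.1), (0.4, -0.4)]:
    print(div_B(*p))              # all effectively zero
```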
If ∇·B = 0 then B is a solenoidal vector field. In other words, field lines
of B never begin or end; instead, they form closed loops. This is certainly the
case in Eq. (3.84), where the field lines are a set of concentric circles centred
on the z-axis. In fact, the magnetic field lines generated by any set of electric
currents form closed loops, as can easily be checked by tracking the magnetic
lines of force using a small compass. What about magnetic fields generated by
permanent magnets (the modern equivalent of lodestones)? Do they also always
form closed loops? Well, we know that a conventional bar magnet has both a
north and south magnetic pole (like the Earth). If we track the magnetic field
lines with a small compass they all emanate from the north pole, spread out, and
eventually reconverge on the south pole. It appears likely (but we cannot prove
it with a compass) that the field lines inside the magnet connect from the south
to the north pole so as to form closed loops.
To date, nobody has ever observed an isolated magnetic pole; if magnetic monopoles
exist then they are certainly doing a good job of hiding from us! We know that if we try to make a magnetic monopole by snapping
a bar magnet in two then we just end up with two smaller bar magnets. If we
snap one of these smaller magnets in two then we end up with two even smaller
bar magnets. We can continue this process down to the atomic level without ever
producing a magnetic monopole. In fact, permanent magnetism is generated by
electric currents circulating on the atomic scale, so this type of magnetism is not
fundamentally different to the magnetism generated by macroscopic currents.
In conclusion, all steady magnetic fields in the universe are generated by
circulating electric currents of some description. Such fields are solenoidal; that
is, they form closed loops and satisfy the field equation

∇·B = 0.                                                        (3.86)
3.9
Ampère's other law
Consider, again, an infinite straight wire aligned along the z-axis and carrying a
current I. The field generated by such a wire is written
B_θ = μ₀ I / 2πr                                                (3.87)
in cylindrical polar coordinates. Consider a circular loop C in the x-y plane which
is centred on the wire. Suppose that the radius of this loop is r. Let us evaluate
the line integral ∮_C B·dl. This integral is easy to perform because the magnetic
field is always parallel to the line element. We have

∮_C B·dl = B_θ ∮_C r dθ = 2πr B_θ = μ₀ I.                       (3.88)
According to Stokes' theorem,

∮_C B·dl = ∫_S ∇×B · dS,                                        (3.89)

where S is any surface attached to the loop C. Let us evaluate ∇×B directly,
using the field (3.84):

(∇×B)_x = ∂B_z/∂y − ∂B_y/∂z = 0,

(∇×B)_y = ∂B_x/∂z − ∂B_z/∂x = 0,                                (3.90)

(∇×B)_z = ∂B_y/∂x − ∂B_x/∂y = (μ₀ I/2π) (1/r² − 2x²/r⁴ + 1/r² − 2y²/r⁴) = 0,

where use has been made of ∂r/∂x = x/r, etc. We now have a problem. Equations
(3.88) and (3.89) imply that

∫_S ∇×B · dS = μ₀ I;                                            (3.91)

but we have just demonstrated that ∇×B = 0 everywhere. The resolution is the
same as for the divergence of the electric field of a point charge: the curl is really
a delta-function,

∇×B = μ₀ I δ(x) δ(y) ẑ,                                         (3.92)

where ẑ is a unit vector along the z-axis. Let us integrate the z-component of this
expression over the surface S spanned by C, where the integration is performed
over the region √(x² + y²) ≤ r. However, since the only part of S which actually
contributes to the surface integral is the bit which lies infinitesimally close to the
z-axis, we can integrate over all x and y without changing the result. Thus, we
obtain

∫_S ∇×B · dS = μ₀ I ∫_{−∞}^{∞} δ(x) dx ∫_{−∞}^{∞} δ(y) dy = μ₀ I,   (3.94)

which is in agreement with Eq. (3.91). The above loop C was circular and
concentric with the wire, but this turns out not to be an essential feature of the
calculation. For instance, suppose that we distort our simple circular loop C so that it
is no longer circular or even lies in one plane. What now is the line integral of B
around the loop? This is no longer a simple problem for conventional analysis,
because the magnetic field is not parallel to the line element of the loop. However,
according to Stokes' theorem,

∮_C B·dl = ∫_S ∇×B · dS,                                        (3.95)

with ∇×B given by Eq. (3.92). Note that the only part of S which contributes
to the surface integral is an infinitesimal region centered on the z-axis. So, as
long as S actually intersects the z-axis it does not matter what shape the rest
of the surface is; we always get the same answer for the surface integral, namely

∮_C B·dl = ∫_S ∇×B · dS = μ₀ I.                                 (3.96)
Thus, provided the curve C circulates the z-axis, and therefore any surface S
attached to C intersects the z-axis, the line integral ∮_C B·dl is equal to μ₀ I.
Of course, if C does not circulate the z-axis then an attached surface S does
not intersect the z-axis, and ∮_C B·dl is zero. There is one more proviso. The
line integral ∮_C B·dl is +μ₀ I for a loop which circulates the z-axis in a clockwise
direction (looking up the z-axis). However, if the loop circulates in an anticlockwise
direction then the integral is −μ₀ I. This follows because in the latter
case the z-component of the surface element dS is oppositely directed to the
current flow at the point where the surface intersects the wire.
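The loop-shape independence asserted above is easy to verify numerically: the sketch below integrates the wire field (3.84) around a deliberately non-circular loop encircling the z-axis and recovers μ₀I (working in units where μ₀I = 1):

```python
import math

# Sketch: line integral of the wire field around a non-circular loop that
# encircles the z-axis, in units where mu0*I = 1. Expect exactly 1.

def B(x, y):
    r2 = x*x + y*y
    c = 1.0 / (2.0 * math.pi)     # mu0*I/2pi with mu0*I = 1
    return (-c*y/r2, c*x/r2)

def point(t):
    # a deliberately distorted loop: radius varies strongly with angle
    r = 2.0 + 0.8*math.sin(3*t)
    return (r*math.cos(t), r*math.sin(t))

def loop_integral(n=100000):
    total = 0.0
    for k in range(n):
        x0, y0 = point(2*math.pi*k/n)
        x1, y1 = point(2*math.pi*(k + 1)/n)
        Bx, By = B(0.5*(x0 + x1), 0.5*(y0 + y1))   # midpoint rule
        total += Bx*(x1 - x0) + By*(y1 - y0)
    return total

print(loop_integral())   # ~1.0, i.e. mu0*I, independent of the loop's shape
```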
Let us now consider N wires directed along the z-axis, with coordinates (x_i,
y_i) in the x-y plane, each carrying a current I_i in the positive z-direction. It is
fairly obvious that Eq. (3.92) generalizes to

∇×B = μ₀ Σ_{i=1}^{N} I_i δ(x − x_i) δ(y − y_i) ẑ.               (3.97)
If we integrate the magnetic field around some closed curve C, which can have
any shape and does not necessarily lie in one plane, then Stokes' theorem and the
above equation imply that

∮_C B·dl = ∫_S ∇×B · dS = μ₀ I,                                 (3.98)
where I is the total current enclosed by the curve. Again, if the curve circulates
the ith wire in a clockwise direction (looking down the direction of current flow)
then the wire contributes +I_i to the aggregate current I. On the other hand, if
the curve circulates in an anti-clockwise direction then the wire contributes −I_i.
Finally, if the curve does not circulate the wire at all then the wire contributes
nothing to I.
Equation (3.97) is a field equation describing how a set of z-directed current
carrying wires generate a magnetic field. These wires have zero-thickness, which
implies that we are trying to squeeze a finite amount of current into an infinitesimal region. This accounts for the delta-functions on the right-hand side of the
equation. Likewise, we obtained delta-functions in Section 3.3 because we were
dealing with point charges. Let us now generalize to the more realistic case of
diffuse currents. Suppose that the z-current flowing through a small rectangle
in the x-y plane, centred on coordinates (x, y) and of dimensions dx and dy, is
jz (x, y) dx dy. Here, jz is termed the current density in the z-direction. Let us
integrate (∇×B)_z over this rectangle. The rectangle is assumed to be sufficiently
small that (∇×B)_z does not vary appreciably across it. According to Eq. (3.98)
this integral is equal to μ₀ times the total z-current flowing through the rectangle.
Thus,

(∇×B)_z dx dy = μ₀ j_z dx dy,                                   (3.99)

which implies that

(∇×B)_z = μ₀ j_z.                                               (3.100)
Of course, there is nothing special about the z-axis. Suppose we have a set of
diffuse currents flowing in the x-direction. The current flowing through a small
rectangle in the y-z plane, centred on coordinates (y, z) and of dimensions dy and
dz, is given by jx (y, z) dy dz, where jx is the current density in the x-direction.
It is fairly obvious that we can write

(∇×B)_x = μ₀ j_x,                                               (3.101)

with a similar equation for diffuse currents flowing along the y-axis. We can
combine these equations with Eq. (3.100) to form a single vector field equation
which describes how electric currents generate magnetic fields:

∇×B = μ₀ j,                                                     (3.102)
where j = (j_x, j_y, j_z) is the vector current density. This is the third Maxwell
equation. The electric current flowing through a small area dS located at position r is
j(r) · dS. Suppose that space is filled with particles of charge q, number density
n(r), and velocity v(r). The charge density is given by ρ(r) = q n. The current
density is given by j(r) = q n v and is obviously a proper vector field (velocities
are proper vectors since they are ultimately derived from displacements).
If we form the line integral of B around some general closed curve C, making
use of Stokes' theorem and the field equation (3.102), then we obtain

∮_C B·dl = μ₀ ∫_S j · dS.                                       (3.103)
In other words, the line integral of the magnetic field around any closed loop C is
equal to μ₀ times the flux of the current density through C. This result is called
Ampère's (other) law. If the currents flow in zero-thickness wires then Ampère's
law reduces to Eq. (3.98).
The flux of the current density through C is evaluated by integrating j · dS
over any surface S attached to C. Suppose that we take two different surfaces
S1 and S2. It is clear that if Ampère's law is to make any sense then the surface
integral ∫_{S1} j · dS had better equal the integral ∫_{S2} j · dS. That is, when we work
out the flux of the current through C using two different attached surfaces then
we had better get the same answer, otherwise Eq. (3.103) is wrong. We saw in
Section 2 that if the integral of a vector field A over some surface attached to a
loop depends only on the loop, and is independent of the surface which spans it,
then this implies that ∇·A = 0. Applied to Eq. (3.103), this tells us that Ampère's
law only makes sense if ∇·j = 0, which is indeed the case for steady currents. So
far, then, we have established that the fields generated by steady z-directed
currents satisfy

∇·B = 0,                                                        (3.105a)

∇×B = μ₀ j.                                                     (3.105b)
We should now go back and repeat the process for general currents. In fact, if
we did this we would find that the above field equations still hold (provided that
the currents are steady). Unfortunately, this demonstration is rather messy and
extremely tedious. There is a better approach. Let us assume that the above field
equations are valid for any set of steady currents. We can then, with relatively
little effort, use these equations to generate the correct formula for the magnetic
field induced by a general set of steady currents, thus proving that our assumption
is correct. More of this later.
3.10
Helmholtz's theorem

We have seen that the electric fields generated by stationary charges obey

∇·E = ρ/ε₀,
∇×E = 0,                                                        (3.106)

and that the magnetic fields generated by steady currents obey

∇·B = 0,
∇×B = μ₀ j.                                                     (3.107)

There are no other field equations. This strongly suggests that
if we know the divergence and the curl of a vector field then we know everything
there is to know about the field. In fact, this is the case. There is a mathematical
theorem which sums this up. It is called Helmholtz's theorem, after the German
polymath Hermann Ludwig Ferdinand von Helmholtz.
Let us start with scalar fields. Field equations are a type of differential equation; i.e., they deal with the infinitesimal differences in quantities between neighbouring points. The question is, what differential equation completely specifies
a scalar field? This is easy. Suppose that we have a scalar field and a field
83
equation which tells us the gradient of this field at all points: something like

∇φ = A,                                                         (3.108)

where A(r) is a vector field. Note that we need ∇×A = 0 for self consistency,
since the curl of a gradient is automatically zero. The above equation completely
specifies φ once we are given the value of the field at a single point, P say. Thus,

φ(Q) = φ(P) + ∫_P^Q ∇φ · dl = φ(P) + ∫_P^Q A · dl,              (3.109)

where Q is a general point.
Consider, next, a vector field F whose divergence and curl are specified:

∇·F = D,                                                        (3.110a)

∇×F = C,                                                        (3.110b)

where D is a scalar field and C is a vector field. Note that we require

∇·C = 0,                                                        (3.111)
since the divergence of a curl is automatically zero. The question is, do these
two field equations plus some suitable boundary conditions completely specify
F ? Suppose that we write

F = −∇U + ∇×W.                                                  (3.112)

In other words, we are saying that a general field F is the sum of a conservative
field, −∇U, and a solenoidal field, ∇×W. This sounds plausible, but it remains
to be proved. Let us start by taking the divergence of the above equation and
making use of Eq. (3.110a). We get

∇²U = −D.                                                       (3.113)
Note that the vector field W does not figure in this equation because the divergence
of a curl is automatically zero. Let us now take the curl of Eq. (3.112):

∇×F = ∇×∇×W = ∇(∇·W) − ∇²W = −∇²W.                              (3.114)

Here, we assume that the divergence of W is zero. This is another thing which
remains to be proved. Note that the scalar field U does not figure in this equation
because the curl of a gradient is automatically zero. Using Eq. (3.110b) we get

∇²W_x = −C_x,
∇²W_y = −C_y,                                                   (3.115)
∇²W_z = −C_z.

So, we have transformed our problem into four differential equations, Eq. (3.113)
and Eqs. (3.115), which we need to solve. Let us look at these equations. We
immediately notice that they all have exactly the same form. In fact, they are all
versions of Poisson's equation. We can now make use of a principle made famous
by Richard P. Feynman: the same equations have the same solutions. Recall
that earlier on we came across the following equation:

∇²φ = −ρ/ε₀,                                                    (3.116)
where φ is the electrostatic potential and ρ is the charge density. We proved that
the solution to this equation, with the boundary condition that φ goes to zero at
infinity, is

φ(r) = (1/4πε₀) ∫ ρ(r′)/|r − r′| d³r′.                          (3.117)
Well, if the same equations have the same solutions, and Eq. (3.117) is the solution
to Eq. (3.116), then we can immediately write down the solutions to Eq. (3.113)
and Eqs. (3.115). We get

U(r) = (1/4π) ∫ D(r′)/|r − r′| d³r′,                            (3.118)

and

W_x(r) = (1/4π) ∫ C_x(r′)/|r − r′| d³r′,

W_y(r) = (1/4π) ∫ C_y(r′)/|r − r′| d³r′,                        (3.119)

W_z(r) = (1/4π) ∫ C_z(r′)/|r − r′| d³r′.

The last three equations can be combined to form a single vector equation:

W(r) = (1/4π) ∫ C(r′)/|r − r′| d³r′.                            (3.120)
We assumed earlier that ∇·W = 0. Let us check to see if this is true. Note
that

∂/∂x (1/|r − r′|) = −(x − x′)/|r − r′|³ = (x′ − x)/|r − r′|³ = −∂/∂x′ (1/|r − r′|),   (3.121)

which implies that

∇(1/|r − r′|) = −∇′(1/|r − r′|),                                (3.122)

where ∇′ denotes differentiation with respect to the primed coordinates.
Thus,

∇·W = (1/4π) ∫ C(r′) · ∇(1/|r − r′|) d³r′ = −(1/4π) ∫ C(r′) · ∇′(1/|r − r′|) d³r′.   (3.123)

Now

∫ f (∂g/∂x) dx = [f g] − ∫ g (∂f/∂x) dx.                        (3.124)
Integration by parts, together with the fact that ∇′·C = 0 and that C goes to
zero at infinity, then gives

∇·W = 0,                                                        (3.126)

as assumed. We have now found a solution of Eqs. (3.110). Is it the only
solution? To answer this, we must ask whether there are any solutions of

∇²U = 0,                                                        (3.127)

∇²W_i = 0,                                                      (3.128)

where i denotes x, y, or z, which are bounded at infinity. If there are then we are
in trouble, because we can take our solution and add to it an arbitrary amount
of a vector field with zero divergence and zero curl and thereby obtain another
solution which also satisfies physical boundary conditions. This would imply that
our solution is not unique. In other words, it is not possible to unambiguously
reconstruct a vector field given its divergence, its curl, and physical boundary
conditions. Fortunately, the equation

∇²φ = 0,                                                        (3.129)
which is called Laplace's equation, has a very nice property: its solutions are
unique. That is, if we can find a solution to Laplace's equation which satisfies
the boundary conditions then we are guaranteed that this is the only solution.
We shall prove this later on in the course. Well, let us invent some solutions to
Eqs. (3.128) which are bounded at infinity. How about
U = Wi = 0?
(3.130)
These solutions certainly satisfy Laplace's equation and are well-behaved at
infinity. Because the solutions to Laplace's equation are unique, we know that
Eqs. (3.130) are the only solutions to Eqs. (3.128). This means that there is
no vector field which satisfies physical boundary conditions at infinity and has
zero divergence and zero curl. In other words, our solution to Eqs. (3.110) is
the only solution. Thus, we have unambiguously reconstructed the vector field F
given its divergence, its curl, and sensible boundary conditions at infinity. This
is Helmholtz's theorem.
We have just proved a number of very useful, and also very important, points.
First, according to Eq. (3.112), a general vector field can be written as the sum
of a conservative field and a solenoidal field. Thus, we ought to be able to write
electric and magnetic fields in this form. Second, a general vector field which is
zero at infinity is completely specified once its divergence and its curl are given.
Thus, we can guess that the laws of electromagnetism can be written as four field
equations,
∇·E = something,

∇×E = something,                                                (3.131)

∇·B = something,

∇×B = something,
without knowing the first thing about electromagnetism (other than the fact that
it deals with two vector fields). Of course, Eqs. (3.106) and (3.107) are of exactly
this form. We also know that there are only four field equations, since the above
equations are sufficient to completely reconstruct both E and B. Furthermore,
we know that we can solve the field equations without even knowing what the
right-hand sides look like. After all, we solved Eqs. (3.110) for completely general
right-hand sides. (Actually, the right-hand sides have to go to zero at infinity
otherwise integrals like Eq. (3.118) blow up.) We also know that any solutions
we find are unique. In other words, there is only one possible steady electric
and magnetic field which can be generated by a given set of stationary charges
and steady currents. The third thing which we proved was that if the right-hand
sides of the above field equations are all zero then the only physical solution is
E = B = 0. This implies that steady electric and magnetic fields cannot generate
themselves, instead they have to be generated by stationary charges and steady
currents. So, if we come across a steady electric field we know that if we trace
the field lines back we shall eventually find a charge. Likewise, a steady magnetic
field implies that there is a steady current flowing somewhere. All of these results
follow from vector field theory, i.e., from the general properties of fields in three
dimensional space, prior to any investigation of electromagnetism.
3.11
The magnetic vector potential

Electric fields generated by stationary charges obey

∇×E = 0.                                                        (3.132)

This immediately allows us to write

E = −∇φ,                                                        (3.133)
since the curl of a gradient is automatically zero. In fact, whenever we come across
an irrotational vector field in physics we can always write it as the gradient of
some scalar field. This is clearly a useful thing to do since it enables us to replace
a vector field by a much simpler scalar field. The quantity φ in the above equation
is known as the electric scalar potential.
Magnetic fields generated by steady currents (and unsteady currents, for that
matter) satisfy

∇·B = 0.                                                        (3.134)

This immediately allows us to write

B = ∇×A,                                                        (3.135)

since the divergence of a curl is automatically zero. Here, A is called the magnetic
vector potential. The transformation

A → A + ∇ψ                                                      (3.136)

leaves the magnetic field invariant in Eq. (3.135), since the curl of a gradient is
automatically zero, just as the transformation

φ → φ + c                                                       (3.137)

leaves the electric field invariant in Eq. (3.133). The transformations (3.136) and
(3.137) are examples of what mathematicians call gauge transformations. The
choice of a particular function ψ or a particular constant c is referred to as a
choice of the gauge. We are free to fix the gauge to be whatever we like. The
most sensible choice is the one which makes our equations as simple as possible.
The usual gauge for the scalar potential φ is such that φ → 0 at infinity. The
usual gauge for A is such that

∇·A = 0.                                                        (3.138)
Suppose that we have found a vector potential A whose divergence is non-zero.
We can transform to the Coulomb gauge (3.138) by adding the gradient of a
scalar ψ chosen such that

∇²ψ = −∇·A.                                                     (3.140)

But this is just Poisson's equation (again!). We know that we can always find a
unique solution of this equation (see Section 3.10). This proves that, in practice,
we can always set the divergence of A equal to zero.
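The gauge invariance of B can be checked symbolically. The sketch below (using sympy, with an arbitrarily chosen A and gauge function ψ) confirms that adding ∇ψ to the vector potential leaves the curl unchanged:

```python
import sympy as sp

# Sketch (sympy): adding the gradient of an arbitrary scalar psi to the
# vector potential A leaves B = curl A unchanged, because curl grad = 0.
x, y, z = sp.symbols('x y z', real=True)

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

A = sp.Matrix([y*z, -x*z, x*y**2])   # an arbitrarily chosen vector potential
psi = sp.sin(x*y) + z**3             # an arbitrarily chosen gauge function
grad_psi = sp.Matrix([sp.diff(psi, v) for v in (x, y, z)])

B1 = curl(A)
B2 = curl(A + grad_psi)
print(sp.simplify(B1 - B2))          # zero vector: B is gauge invariant
```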
Let us consider again an infinite straight wire directed along the z-axis and
carrying a current I. The magnetic field generated by such a wire is written
B = (μ₀ I / 2π) (−y/r², x/r², 0).                               (3.141)
We wish to find a vector potential A whose curl is equal to the above magnetic
field and whose divergence is zero. It is not difficult to see that
A = −(μ₀ I / 4π) (0, 0, ln(x² + y²))                            (3.142)
fits the bill. Note that the vector potential is parallel to the direction of the
current. This would seem to suggest that there is a more direct relationship
between the vector potential and the current than there is between the magnetic
field and the current. The potential is not very well behaved on the z-axis, but
this is just because we are dealing with an infinitely thin current.
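That Eq. (3.142) "fits the bill" can be verified symbolically (a sympy sketch; the overall sign of A is taken such that its curl reproduces the field (3.141)):

```python
import sympy as sp

# Sketch (sympy): verify that the vector potential of Eq. (3.142), with the
# overall sign chosen so that its curl reproduces Eq. (3.141), also satisfies
# the Coulomb gauge condition div A = 0.
x, y, z = sp.symbols('x y z', real=True)
mu0, I = sp.symbols('mu_0 I', positive=True)

r2 = x**2 + y**2
A = sp.Matrix([0, 0, -mu0*I/(4*sp.pi) * sp.log(r2)])

curl_A = sp.Matrix([
    sp.diff(A[2], y) - sp.diff(A[1], z),
    sp.diff(A[0], z) - sp.diff(A[2], x),
    sp.diff(A[1], x) - sp.diff(A[0], y),
])
div_A = sp.diff(A[0], x) + sp.diff(A[1], y) + sp.diff(A[2], z)

B_wire = mu0*I/(2*sp.pi) * sp.Matrix([-y/r2, x/r2, 0])
print(sp.simplify(curl_A - B_wire))   # zero vector: curl A matches (3.141)
print(sp.simplify(div_A))             # 0: Coulomb gauge holds
```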
Let us take the curl of Eq. (3.135). We find that

∇×B = ∇×∇×A = ∇(∇·A) − ∇²A = −∇²A,                              (3.143)

where use has been made of the Coulomb gauge condition (3.138). We can
combine the above relation with the field equation (3.102) to give

∇²A = −μ₀ j.                                                    (3.144)
Writing this in component form, we obtain

∇²A_x = −μ₀ j_x,
∇²A_y = −μ₀ j_y,                                                (3.145)
∇²A_z = −μ₀ j_z.

But, this is just Poisson's equation three times over. We can immediately write
the unique solutions to the above equations:

A_x(r) = (μ₀/4π) ∫ j_x(r′)/|r − r′| d³r′,

A_y(r) = (μ₀/4π) ∫ j_y(r′)/|r − r′| d³r′,                       (3.146)

A_z(r) = (μ₀/4π) ∫ j_z(r′)/|r − r′| d³r′.
These solutions can be recombined to form a single vector solution:

A(r) = (μ₀/4π) ∫ j(r′)/|r − r′| d³r′.                           (3.147)

Of course, we have already obtained the corresponding solution for the scalar
potential:

φ(r) = (1/4πε₀) ∫ ρ(r′)/|r − r′| d³r′.                          (3.148)
Equations (3.147) and (3.148) are the unique solutions (given the arbitrary choice
of gauge) to the field equations (3.106) and (3.107); they specify the magnetic
vector and electric scalar potentials generated by a set of stationary charges,
of charge density (r), and a set of steady currents, of current density j(r).
Incidentally, we can prove that Eq. (3.147) satisfies the gauge condition ∇·A = 0
by repeating the analysis of Eqs. (3.121)–(3.127) (with W → A and C → μ₀ j)
and using the fact that ∇·j = 0 for steady currents.
3.12
The Biot-Savart law

According to Eq. (3.133) we can obtain an expression for the electric field generated
by stationary charges by taking minus the gradient of Eq. (3.148). This
yields

E(r) = (1/4πε₀) ∫ ρ(r′) (r − r′)/|r − r′|³ d³r′,                (3.149)

which is, of course, Coulomb's law. A similar manipulation of Eq. (3.147) yields
the magnetic field generated by steady currents; for a line current I running
along a curve with line element dl at position r′ this takes the form of the
Biot-Savart law,

B(r) = (μ₀/4π) ∮ I dl × (r − r′)/|r − r′|³.                     (3.152)
[Figure: a current element I(r′) dl at position r′, with displacement r − r′ to the measurement point.]

This prescription also works for current distributions which extend to infinity
(e.g., the fields generated by infinite, straight wires). Note that both Coulomb's law
and the Biot-Savart law are gauge independent; i.e., they do not depend on
the particular choice of gauge.
Consider (for the last time!) an infinite, straight wire directed along the z-axis
and carrying a current I. Let us reconstruct the magnetic field generated by

[Figure: the wire along the z-axis, a current element I dl at position r′, and the displacement r − r′ to the field point P.]
the wire at point P using the Biot-Savart law. Suppose that the perpendicular
distance to the wire is ρ. It is easily seen that

|dl × (r − r′)| = ρ dl,
l = ρ tan ψ,
dl = ρ dψ / cos² ψ,                                             (3.153)
|r − r′| = ρ / cos ψ,

where l measures position along the wire and ψ is the angle subtended between
the perpendicular from P to the wire and the line joining P to the current
element. Thus, according to Eq. (3.152) we have

B = (μ₀ I / 4π) ∫_{−π/2}^{π/2} ρ (ρ³/cos³ψ)⁻¹ (ρ/cos²ψ) dψ

  = (μ₀ I / 4πρ) ∫_{−π/2}^{π/2} cos ψ dψ = (μ₀ I / 4πρ) [sin ψ]_{−π/2}^{π/2},   (3.154)

which reduces to

B = μ₀ I / 2πρ.                                                 (3.155)
So, we have come full circle in our investigation of magnetic fields. Note that
the simple result (3.155) can only be obtained from the Biot-Savart law after
some non-trivial algebra. Examination of more complicated current distributions
using this law invariably leads to lengthy, involved, and extremely unpleasant
calculations.
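The "non-trivial algebra" can be sidestepped numerically: a sketch which chops a long, but finite, wire into small elements and sums their Biot-Savart contributions reproduces B = μ₀I/2πρ to good accuracy (the wire half-length L and the resolution n are arbitrary numerical choices):

```python
import math

# Sketch: direct numerical Biot-Savart sum for a long straight wire. The wire
# runs along z from -L to L (L and n are arbitrary numerical choices); the
# field point sits at perpendicular distance rho. Each element contributes
# mu0 I dl rho / (4 pi |r - r'|^3) in the azimuthal direction.
mu0, I, rho = 4e-7*math.pi, 1.0, 0.5

def biot_savart_B(L=2000.0, n=200000):
    dz = 2.0*L/n
    total = 0.0
    for k in range(n):
        zp = -L + (k + 0.5)*dz            # midpoint of the k-th element
        total += dz * rho / (rho*rho + zp*zp)**1.5
    return mu0*I/(4.0*math.pi) * total

exact = mu0*I/(2.0*math.pi*rho)           # Eq. (3.155)
print(biot_savart_B(), exact)             # agree to several significant figures
```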
3.13
Electrostatics and magnetostatics
We have now completed our theoretical investigation of electrostatics and magnetostatics. Our next task is to incorporate time variation into our analysis.
However, before we start this let us briefly review our progress so far. We have
found that the electric fields generated by stationary charges and the magnetic
fields generated by steady currents are describable in terms of four field equations:
∇·E = ρ/ε₀,                                                     (3.156a)

∇×E = 0,                                                        (3.156b)

∇·B = 0,                                                        (3.156c)

∇×B = μ₀ j.                                                     (3.156d)
The boundary conditions are that the fields are zero at infinity, assuming that the
generating charges and currents are localized to some region in space. According
to Helmholtzs theorem the above field equations, plus the boundary conditions,
are sufficient to uniquely specify the electric and magnetic fields. The physical
significance of this is that divergence and curl are the only rotationally invariant
differential properties of a general vector field; i.e., the only quantities which
do not change when the axes are rotated. Since physics does not depend on the
orientation of the axes (which is, after all, quite arbitrary) divergence and curl are
the only quantities which can appear in field equations which claim to describe
physical phenomena.
The field equations can be integrated to give:
∮_S E · dS = (1/ε₀) ∫_V ρ dV,                                   (3.157a)

∮_C E · dl = 0,                                                 (3.157b)

∮_S B · dS = 0,                                                 (3.157c)

∮_C B · dl = μ₀ ∫_{S′} j · dS.                                  (3.157d)
We have also seen that the fields can be written in terms of potentials:

E = −∇φ,                                                        (3.158)

B = ∇×A.                                                        (3.159)

Here, φ is the electric scalar potential and A is the magnetic vector potential.
The electric field is clearly unchanged if we add a constant to the scalar potential:

E → E as φ → φ + c.                                             (3.160)
The magnetic field is similarly unchanged if we add the gradient of a scalar field
to the vector potential:
B → B as A → A + ∇ψ.                                            (3.161)
The above transformations, which leave the E and B fields invariant, are called
gauge transformations. We are free to choose c and ψ to be whatever we like;
i.e., we are free to choose the gauge. The most sensible gauge is the one which
makes our equations as simple and symmetric as possible. This corresponds to
the choices

φ(r) → 0 as |r| → ∞,                                            (3.162)

and

∇·A = 0.                                                        (3.163)
The latter choice is known as the Coulomb gauge. The field equations (3.156)
then reduce to

∇²φ = −ρ/ε₀,                                                    (3.164a)

∇²A = −μ₀ j.                                                    (3.164b)
Poisson's equation is just about the simplest rotationally invariant partial
differential equation it is possible to write. Note that ∇² is clearly rotationally
invariant, since it is the divergence of a gradient, and both divergence and gradient
are rotationally invariant. We can always construct the solution to Poisson's
equation, given the boundary conditions. Furthermore, we have a uniqueness theorem which tells us that our solution is the only possible solution. Physically, this
means that there is only one electric and magnetic field which is consistent with
a given set of stationary charges and steady currents. This sounds like an obvious, almost trivial, statement. But there are many areas of physics (for instance,
fluid mechanics and plasma physics) where we also believe, for physical reasons,
that for a given set of boundary conditions the solution should be unique. The
problem is that in most cases when we reduce the problem to a partial differential
97
equation we end up with something far nastier than Poisson's equation. In general,
we cannot solve this equation. In fact, we usually cannot even prove that it
possesses a solution for general boundary conditions, let alone that the solution is
unique. So, we are very fortunate indeed that in electrostatics and magnetostatics the problem boils down to solving a nice partial differential equation. When
you hear people say things like "electromagnetism is the best understood theory
in physics" what they are really saying is that the partial differential equations
which crop up in this theory are soluble and have nice properties.
Poisson's equation

∇²u = v                                                         (3.165)
is linear, which means that its solutions are superposable. We can exploit this
fact to construct a general solution to this equation. Suppose that we can find
the solution to

∇²G(r, r′) = δ(r − r′)                                          (3.166)
which satisfies the boundary conditions. This is the solution driven by a unit
amplitude point source located at position vector r′. Since any general source
can be built up out of a weighted sum of point sources it follows that a general
solution to Poissons equation can be built up out of a weighted superposition of
point source solutions. Mathematically, we can write

u(r) = ∫ G(r, r′) v(r′) d³r′.                                   (3.167)
The function G is called the Green's function. The Green's function for Poisson's
equation is

G(r, r′) = −(1/4π) 1/|r − r′|.                                  (3.168)

Note that this Green's function is proportional to the scalar potential of a point
charge located at r′; this is hardly surprising, given the definition of a Green's
function and Eq. (3.164a).
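Away from the source point, the Green's function (3.168) must satisfy Laplace's equation, since the delta-function lives entirely at r = r′. A quick symbolic check (a sympy sketch, with the source placed at the origin):

```python
import sympy as sp

# Sketch (sympy): the Green's function (3.168) with source at the origin
# satisfies Laplace's equation everywhere except at the source point.
x, y, z = sp.symbols('x y z', real=True, positive=True)

G = -1/(4*sp.pi*sp.sqrt(x**2 + y**2 + z**2))
lap_G = sum(sp.diff(G, v, 2) for v in (x, y, z))
print(sp.simplify(lap_G))   # 0 (for r != 0)
```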
According to Eqs. (3.164), (3.165), (3.167), and (3.168), the scalar and vector
potentials generated by a set of stationary charges and steady currents take the
form
φ(r) = (1/4πε₀) ∫ ρ(r′)/|r − r′| d³r′,                          (3.169a)

A(r) = (μ₀/4π) ∫ j(r′)/|r − r′| d³r′.                           (3.169b)
Making use of Eqs. (3.158) and (3.159) we obtain the fundamental force laws for
electric and magnetic fields: Coulomb's law,

E(r) = (1/4πε₀) ∫ ρ(r′) (r − r′)/|r − r′|³ d³r′,                (3.170)

and the Biot-Savart law,

B(r) = (μ₀/4π) ∫ j(r′) × (r − r′)/|r − r′|³ d³r′.               (3.171)
Of course, both of these laws are examples of action at a distance laws and,
therefore, violate relativity. However, this is not a problem as long as we restrict
ourselves to fields generated by time independent charge and current distributions.
The question, now, is how badly is this scheme we have just worked out
going to be disrupted when we take time variation into account. The answer,
somewhat surprisingly, is by very little indeed. So, in Eqs. (3.156)(3.171) we can
already discern the basic outline of classical electromagnetism. Let us continue
our investigation.
3.14
Faraday's law
Apart from gravity, all of the forces of nature can be described in terms of three
fundamental forces: the electromagnetic force, the weak nuclear force, and the
strong force. One of the main goals of modern physics is to find some way of
combining these three forces so that all of physics can be described in terms of a
single unified force. This, essentially, is the purpose of super-symmetry theories.
The first great synthesis of ideas in physics took place in 1666 when Isaac
Newton realised that the force which causes apples to fall downwards is the same
as the force which maintains the planets in elliptical orbits around the Sun. The
second great synthesis, which we are about to study in more detail, took place
in 1830 when Michael Faraday discovered that electricity and magnetism are two
aspects of the same thing, usually referred to as electromagnetism. The third
great synthesis, which we shall discuss presently, took place in 1873 when James
Clerk Maxwell demonstrated that light and electromagnetism are intimately
related. The last (but, hopefully, not the final) great synthesis took place in 1967
when Steven Weinberg and Abdus Salam showed that the electromagnetic force
and the weak nuclear force (i.e., the force responsible for β-decays) can
be combined to give the electroweak force. Unfortunately, Weinberg's work lies
beyond the scope of this lecture course.
Let us now consider Faraday's experiments, having put them in their proper historical context. Prior to 1830 the only known way to make an electric current flow through a conducting wire was to connect the ends of the wire to the positive and negative terminals of a battery. We measure a battery's ability to push current down a wire in terms of its voltage, by which we mean the voltage difference between its positive and negative terminals. What does voltage correspond to in physics? Well, volts are the units used to measure electric scalar potential, so when we talk about a 6 V battery what we are really saying is that the difference in electric scalar potential between its positive and negative terminals is six volts. This insight allows us to write
V = \phi(\oplus) - \phi(\ominus) = -\int_{\ominus}^{\oplus} \nabla\phi\cdot d\mathbf{l} = \int_{\ominus}^{\oplus} \mathbf{E}\cdot d\mathbf{l},   (3.172)
where V is the battery voltage, \oplus denotes the positive terminal, \ominus the negative terminal, and d\mathbf{l} is an element of length along the wire. Of course, the above equation is a direct consequence of \mathbf{E} = -\nabla\phi. Clearly, a voltage difference between two ends of a wire attached to a battery implies the presence of an electric field which pushes charges through the wire. This field is directed from the positive terminal of the battery to the negative terminal and is, therefore, such as to force electrons to flow through the wire from the negative to the positive terminal. As expected, this means that a net positive current flows from the positive to the negative terminal. The fact that \mathbf{E} is a conservative field ensures that the voltage difference V is independent of the path of the wire. In other words, two different wires attached to the same battery develop identical voltage differences. This is just as well. The quantity V is usually called the electromotive force, or e.m.f. for short. "Electromotive force" is a bit of a misnomer. The e.m.f. is certainly what causes current to flow through a wire, so it is electromotive (i.e., it causes electrons to move), but it is not a force. In fact, it is a difference in electric scalar potential.
Let us now consider a closed loop of wire (with no battery). The electromotive force around such a loop is
V = \oint \mathbf{E}\cdot d\mathbf{l} = 0.   (3.173)
This is a direct consequence of the field equation \nabla\times\mathbf{E} = 0. So, since \mathbf{E} is a conservative field, the electromotive force around a closed loop of wire is automatically zero and no current flows around the wire. This all seems to make sense. However, Michael Faraday is about to throw a spanner in our works! He discovered in 1830 that a changing magnetic field can cause a current to flow around a closed loop of wire (in the absence of a battery). Well, if current flows through a wire then there must be an electromotive force. So,
V = \oint \mathbf{E}\cdot d\mathbf{l} \neq 0,   (3.174)
which immediately implies that \mathbf{E} is not a conservative field, and that \nabla\times\mathbf{E} \neq 0.
Clearly, we are going to have to modify some of our ideas regarding electric fields!
Faraday continued his experiments and found that another way of generating an electromotive force around a loop of wire is to keep the magnetic field constant and move the loop. Eventually, Faraday was able to formulate a law which accounted for all of his experiments. The e.m.f. generated around a loop of wire in a magnetic field is proportional to the rate of change of the flux of the magnetic field through the loop. So, if the loop is denoted C, and S is some surface attached to the loop, then
V = \oint_C \mathbf{E}\cdot d\mathbf{l} = A\,\frac{\partial}{\partial t}\int_S \mathbf{B}\cdot d\mathbf{S},   (3.175)
where A is a constant of proportionality. If the magnetic field \mathbf{B} is increasing and the current I circulates clockwise (as seen from above) then it generates a field \mathbf{B}' which opposes the increase in magnetic flux through the loop, in accordance with Lenz's law. The direction of the current is opposite to the sense of the current loop C (assuming that the flux of \mathbf{B} through the loop is positive), so this implies that A = -1 in Eq. (3.175). Thus, Faraday's law takes the form
\oint_C \mathbf{E}\cdot d\mathbf{l} = -\frac{\partial}{\partial t}\int_S \mathbf{B}\cdot d\mathbf{S}.   (3.176)
Experimentally, Faraday's law is found to correctly predict the e.m.f. (i.e., \oint \mathbf{E}\cdot d\mathbf{l}) generated in any wire loop, irrespective of the position or shape of the loop. It is reasonable to assume that the same e.m.f. would be generated in the absence of the wire (of course, no current would flow in this case). Thus, Eq. (3.176) is valid for any closed loop C. If Faraday's law is to make any sense it must also be
true for any surface S attached to the loop C. Clearly, if the flux of the magnetic field through the loop depends on the surface upon which it is evaluated then Faraday's law is going to predict different e.m.f.s for different surfaces. Since there is no preferred surface for a general non-coplanar loop, this would not make very much sense. The condition for the flux of the magnetic field, \int_S \mathbf{B}\cdot d\mathbf{S}, to depend only on the loop C to which the surface S is attached, and not on the nature of the surface itself, is
\oint_{S'} \mathbf{B}\cdot d\mathbf{S}' = 0   (3.177)
for any closed surface S'. Making use of Stokes' theorem, Faraday's law (3.176), which holds for any loop C and any surface S spanning it, is equivalent to the differential field equation
\nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t},   (3.178)
while Gauss' theorem reduces Eq. (3.177) to
\nabla\cdot\mathbf{B} = 0.   (3.179)
This ensures that the magnetic flux through a loop is a well defined quantity.
The divergence of Eq. (3.178) yields
\frac{\partial(\nabla\cdot\mathbf{B})}{\partial t} = 0.   (3.180)
Thus, the field equation (3.178) actually demands that the divergence of the magnetic field be constant in time for self-consistency (this means that the flux of the magnetic field through a loop need not be a well defined quantity, as long as its time derivative is well defined). However, a constant non-solenoidal magnetic
field can only be generated by magnetic monopoles, and magnetic monopoles do
not exist (as far as we are aware). Hence, \nabla\cdot\mathbf{B} = 0. The absence of magnetic monopoles is an observational fact: it cannot be predicted by any theory. If magnetic monopoles were discovered tomorrow this would not cause physicists any problems. We know how to generalize Maxwell's equations to include both magnetic monopoles and currents of magnetic monopoles. In this generalized formalism Maxwell's equations are completely symmetric with respect to electric and magnetic fields, and \nabla\cdot\mathbf{B} \neq 0. However, an extra term (involving the current of magnetic monopoles) must be added to the right-hand side of Eq. (3.178) in order to make it self-consistent.
3.15
We now have a problem. We can only write the electric field in terms of a scalar potential (i.e., \mathbf{E} = -\nabla\phi) provided that \nabla\times\mathbf{E} = 0. However, we have just found that in the presence of a changing magnetic field the curl of the electric field is non-zero. In other words, \mathbf{E} is not, in general, a conservative field. Does this mean that we have to abandon the concept of electric scalar potential? Fortunately, no. It is still possible to define a scalar potential which is physically meaningful. Let us start from the equation
\nabla\cdot\mathbf{B} = 0,   (3.181)
which is valid for both time varying and non time varying magnetic fields. Since the magnetic field is solenoidal we can write it as the curl of a vector potential:
\mathbf{B} = \nabla\times\mathbf{A}.   (3.182)
So, there is no problem with the vector potential in the presence of time varying fields. Let us substitute Eq. (3.182) into the field equation (3.178). We obtain
\nabla\times\mathbf{E} = -\frac{\partial(\nabla\times\mathbf{A})}{\partial t},   (3.183)
which can be written
\nabla\times\left(\mathbf{E} + \frac{\partial\mathbf{A}}{\partial t}\right) = 0.   (3.184)
We know that a curl free vector field can always be expressed as the gradient of a scalar potential, so let us write
\mathbf{E} + \frac{\partial\mathbf{A}}{\partial t} = -\nabla\phi,   (3.185)
or
\mathbf{E} = -\nabla\phi - \frac{\partial\mathbf{A}}{\partial t}.   (3.186)
This is a very nice equation! It tells us that the scalar potential only describes the conservative electric field generated by electric charges. The electric field induced by time varying magnetic fields is non-conservative, and is described by the magnetic vector potential.
3.16
Gauge transformations
Electric and magnetic fields can be written in terms of scalar and vector potentials, as follows:
\mathbf{E} = -\nabla\phi - \frac{\partial\mathbf{A}}{\partial t},
\mathbf{B} = \nabla\times\mathbf{A}.   (3.187)
However, this prescription is not unique. There are many different potentials which generate the same fields. We have come across this problem before. It is called gauge invariance. The most general transformation which leaves the \mathbf{E} and \mathbf{B} fields unchanged in Eqs. (3.187) is
\phi \to \phi + \frac{\partial\psi}{\partial t},   (3.188)
\mathbf{A} \to \mathbf{A} - \nabla\psi.   (3.189)
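The invariance of the fields under this transformation can be checked numerically. The sketch below (plain Python; the particular \phi, A, and gauge function \psi are arbitrary smooth choices of mine, not from the text) works in one dimension and evaluates E = -\partial\phi/\partial x - \partial A/\partial t by central differences, before and after applying the transformation (3.188)-(3.189):

```python
import math

h = 1e-4  # step for central differences

# Arbitrary smooth potentials and gauge function (illustrative choices):
phi = lambda x, t: math.sin(x) * math.exp(-t)
A   = lambda x, t: math.cos(2.0 * x) * t
psi = lambda x, t: x * x * math.sin(t)

def E(phi_f, A_f, x, t):
    """E = -dphi/dx - dA/dt, via central differences."""
    dphi_dx = (phi_f(x + h, t) - phi_f(x - h, t)) / (2.0 * h)
    dA_dt = (A_f(x, t + h) - A_f(x, t - h)) / (2.0 * h)
    return -dphi_dx - dA_dt

# Gauge-transformed potentials: phi -> phi + dpsi/dt, A -> A - dpsi/dx.
phi_new = lambda x, t: phi(x, t) + (psi(x, t + h) - psi(x, t - h)) / (2.0 * h)
A_new   = lambda x, t: A(x, t) - (psi(x + h, t) - psi(x - h, t)) / (2.0 * h)

x0, t0 = 0.7, 1.3
diff = abs(E(phi, A, x0, t0) - E(phi_new, A_new, x0, t0))
# diff is zero up to finite-difference error: E is gauge invariant
```

The two extra terms introduced by the transformation are the mixed partial derivatives of \psi, taken in opposite order, so they cancel identically.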
Writing the electric field in terms of the potentials,
\mathbf{E} = -\nabla\phi - \frac{\partial\mathbf{A}}{\partial t},   (3.192)
we find that the part which is generated by charges (i.e., the first term on the right-hand side) is conservative, and the part induced by magnetic fields (i.e., the second term on the right-hand side) is purely solenoidal. Earlier on, we proved mathematically that a general vector field can be written as the sum of a conservative field and a solenoidal field (see Section 3.10). Now we are finding that when we split up the electric field in this manner the two fields have different physical origins: the conservative part of the field emanates from electric charges, whereas the solenoidal part is induced by magnetic fields.
Equation (3.192) can be combined with the field equation
\nabla\cdot\mathbf{E} = \frac{\rho}{\epsilon_0}   (3.193)
to give
\nabla^2\phi + \frac{\partial(\nabla\cdot\mathbf{A})}{\partial t} = -\frac{\rho}{\epsilon_0}.   (3.194)
In the Coulomb gauge,
\nabla\cdot\mathbf{A} = 0,   (3.195)
so the above equation reduces to
\nabla^2\phi = -\frac{\rho}{\epsilon_0},
which is just Poisson's equation. Thus, we can immediately write down an expression for the scalar potential generated by non-steady fields. It is exactly the same as our previous expression for the scalar potential generated by steady fields, namely
\phi(\mathbf{r},t) = \frac{1}{4\pi\epsilon_0}\int \frac{\rho(\mathbf{r}',t)}{|\mathbf{r}-\mathbf{r}'|}\,d^3\mathbf{r}'.   (3.196)
However, this apparently simple result is extremely deceptive. Equation (3.196) is a typical action at a distance law. If the charge density changes suddenly at \mathbf{r}' then the potential at \mathbf{r} responds immediately. However, we shall see later that the full time dependent Maxwell's equations only allow information to propagate at the speed of light (i.e., they do not violate relativity). How can these two statements be reconciled? The crucial point is that the scalar potential cannot be measured directly; it can only be inferred from the electric field. In the time dependent case there are two parts to the electric field: that part which comes from the scalar potential, and that part which comes from the vector potential [see Eq. (3.192)]. So, if the scalar potential responds immediately to some distant rearrangement of charge density it does not necessarily follow that the electric field also has an immediate response. What actually happens is that the change in the part of the electric field which comes from the scalar potential is balanced by an equal and opposite change in the part which comes from the vector potential, so that the overall electric field remains unchanged. This state of affairs persists at least until sufficient time has elapsed for a light ray to travel from the distant charges to the region in question. Thus, relativity is not violated, since it is the electric field, and not the scalar potential, which carries physically accessible information.
It is clear that the apparent action at a distance nature of Eq. (3.196) is highly misleading. This suggests, very strongly, that the Coulomb gauge is not
the optimum gauge in the time dependent case. A more sensible choice is the so called Lorentz gauge:
\nabla\cdot\mathbf{A} = -\epsilon_0\mu_0\,\frac{\partial\phi}{\partial t}.   (3.197)
It can be shown, by analogy with earlier arguments (see Section 3.11), that it is always possible to make a gauge transformation, at a given instance in time, such that the above equation is satisfied. Substituting the Lorentz gauge condition into Eq. (3.194), we obtain
\epsilon_0\mu_0\,\frac{\partial^2\phi}{\partial t^2} - \nabla^2\phi = \frac{\rho}{\epsilon_0}.   (3.198)
It turns out that this is a three dimensional wave equation in which information propagates at the speed of light. But, more of this later. Note that the magnetically induced part of the electric field (i.e., -\partial\mathbf{A}/\partial t) is not purely solenoidal in the Lorentz gauge. This is a slight disadvantage of the Lorentz gauge with respect to the Coulomb gauge. However, this disadvantage is more than offset by other advantages which will become apparent presently. Incidentally, the fact that the part of the electric field which we ascribe to magnetic induction changes when we change the gauge suggests that the separation of the field into magnetically induced and charge induced components is not unique in the general time varying case (i.e., it is a convention).
3.17
minus the rate of change of the magnetic flux through the loop. Finally, Ampère's law was that the line integral of the magnetic field around a closed loop equals the total current flowing through the loop times \mu_0. Maxwell's first great achievement was to realize that these laws could be expressed as a set of partial differential equations. Of course, he wrote his equations out in component form, because modern vector notation did not come into vogue until about the time of the First World War. In modern notation, Maxwell first wrote
\nabla\cdot\mathbf{E} = \frac{\rho}{\epsilon_0},   (3.199a)
\nabla\cdot\mathbf{B} = 0,   (3.199b)
\nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t},   (3.199c)
\nabla\times\mathbf{B} = \mu_0\,\mathbf{j}.   (3.199d)
Maxwell's second great achievement was to realize that these equations are wrong.
We can see that there is something slightly unusual about Eqs. (3.199). They are very unfair to electric fields! After all, time varying magnetic fields can induce electric fields, but electric fields apparently cannot affect magnetic fields in any way. However, there is a far more serious problem associated with the above equations, which we alluded to earlier on. Consider the integral form of the last Maxwell equation (i.e., Ampère's law),
\oint_C \mathbf{B}\cdot d\mathbf{l} = \mu_0\int_S \mathbf{j}\cdot d\mathbf{S}.   (3.200)
This says that the line integral of the magnetic field around a closed loop C is equal to \mu_0 times the flux of the current density through the loop. The problem is that the flux of the current density through a loop is not, in general, a well defined quantity. In order for the flux to be well defined, the integral of \mathbf{j}\cdot d\mathbf{S} over some surface S attached to a loop C must depend on C but not on the details of S. This is only the case if
\nabla\cdot\mathbf{j} = 0.   (3.201)
Unfortunately, the above condition is only satisfied for non time varying fields. Charge conservation requires that the flux of the current density out of any closed surface S equal minus the rate of change of the charge enclosed by S:
\oint_S \mathbf{j}\cdot d\mathbf{S} = -\frac{\partial}{\partial t}\int_V \rho\,dV.   (3.202)
Making use of Gauss' theorem, this yields
\nabla\cdot\mathbf{j} = -\frac{\partial\rho}{\partial t}.   (3.203)
Perhaps, by adding a term involving the time derivative of the electric field to the right-hand side of Eq. (3.199d), we can somehow fix up Ampère's law? This is, essentially, how Maxwell reasoned more than one hundred years ago.
Let us try out this scheme. Suppose that we write
\nabla\times\mathbf{B} = \mu_0\,\mathbf{j} + \lambda\,\frac{\partial\mathbf{E}}{\partial t}   (3.204)
instead of Eq. (3.199d). Here, \lambda is some constant. Does this resolve our problem? We want the flux of the right-hand side of the above equation through some loop C to be well defined; i.e., it should only depend on C and not the particular surface S (which spans C) upon which it is evaluated. This is another way of saying that we want the divergence of the right-hand side to be zero. In fact, we can see that this is necessary for self consistency, since the divergence of the left-hand side is automatically zero. So, taking the divergence of Eq. (3.204), we obtain
0 = \mu_0\,\nabla\cdot\mathbf{j} + \lambda\,\frac{\partial(\nabla\cdot\mathbf{E})}{\partial t}.   (3.205)
But, we know that
\nabla\cdot\mathbf{E} = \frac{\rho}{\epsilon_0},   (3.206)
so combining the previous two equations we arrive at
\mu_0\,\nabla\cdot\mathbf{j} + \frac{\lambda}{\epsilon_0}\,\frac{\partial\rho}{\partial t} = 0.   (3.207)
Now, charge conservation tells us that
\nabla\cdot\mathbf{j} + \frac{\partial\rho}{\partial t} = 0.   (3.208)
The previous two equations are consistent provided that \lambda = \epsilon_0\mu_0. So, if we modify the final Maxwell equation such that it reads
\nabla\times\mathbf{B} = \mu_0\,\mathbf{j} + \epsilon_0\mu_0\,\frac{\partial\mathbf{E}}{\partial t},   (3.209)
then we find that the divergence of the right-hand side is zero as a consequence of charge conservation. The extra term is called the displacement current (this name was invented by Maxwell). In summary, we have shown that although the flux of the real current through a loop is not well defined, if we form the sum of the real current and the displacement current then the flux of this new quantity through a loop is well defined.
Of course, the displacement current is not a current at all. It is, in fact, associated with the generation of magnetic fields by time varying electric fields. Maxwell came up with this rather curious name because many of his ideas regarding electric and magnetic fields were completely wrong. For instance, Maxwell believed in the æther, and he thought that electric and magnetic fields were some sort of stresses in this medium. He also thought that the displacement current was associated with displacements of the æther (hence, the name). The reason that these misconceptions did not invalidate his equations is quite simple. Maxwell based his equations on the results of experiments, and he added in his extra term so as to make these equations mathematically self consistent. Both of these steps are valid irrespective of the existence or non-existence of the æther.
But, hang on a minute, you might say, you can't go around adding terms to laws of physics just because you feel like it! The field equations (3.199) are derived directly from the results of famous nineteenth century experiments. If there is a new term involving the time derivative of the electric field which needs to be added into these equations, how come there is no corresponding nineteenth century experiment which demonstrates this? We have Faraday's law, which shows that changing magnetic fields generate electric fields. Why is there no "Joe Bloggs" law that says that changing electric fields generate magnetic fields? This is a perfectly reasonable question. The answer is that the new term describes an effect which is far too small to have been observed in nineteenth century experiments. Let us demonstrate this.
First, we shall show that it is comparatively easy to detect the induction of an electric field by a changing magnetic field in a desktop laboratory experiment. The Earth's magnetic field is about 1 gauss (that is, 10^{-4} tesla). Magnetic
fields generated by electromagnets (which will fit on a laboratory desktop) are typically about one hundred times bigger than this. Let us, therefore, consider a hypothetical experiment in which a 100 gauss magnetic field is switched on suddenly. Suppose that the field ramps up in one tenth of a second. What e.m.f. is generated in a 10 centimeter square loop of wire located in this field? Faraday's law gives
V = \frac{\partial}{\partial t}\int \mathbf{B}\cdot d\mathbf{S} \sim \frac{B\,A}{t},   (3.210)
where B = 0.01 tesla is the field strength, A = 0.01 m^2 is the area of the loop, and t = 0.1 seconds is the ramp time. It follows that V \sim 1 millivolt. Well, one millivolt is easily detectable. In fact, most hand-held laboratory voltmeters are calibrated in millivolts. It is clear that we would have no difficulty whatsoever detecting the magnetic induction of electric fields in a nineteenth century style laboratory experiment.
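The order-of-magnitude estimate (3.210) can be checked with a couple of lines of arithmetic (values as given in the text):

```python
# Order-of-magnitude e.m.f. from Eq. (3.210) for the desktop experiment.
B = 0.01    # field strength (tesla), i.e. 100 gauss
A = 0.01    # loop area (m^2): a 10 cm square loop
t = 0.1     # ramp time (seconds)

V = B * A / t  # ~ 1e-3 volts, i.e. about one millivolt
```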
Let us now consider the electric induction of magnetic fields. Suppose that our electric field is generated by a parallel plate capacitor of spacing one centimeter which is charged up to 100 volts. This gives a field of 10^4 volts per meter. Suppose, further, that the capacitor is discharged in one tenth of a second. The law of electric induction is obtained by integrating Eq. (3.209) and neglecting the first term on the right-hand side. Thus,
\oint \mathbf{B}\cdot d\mathbf{l} = \epsilon_0\mu_0\,\frac{\partial}{\partial t}\int \mathbf{E}\cdot d\mathbf{S}.   (3.211)
Let us consider a loop 10 centimeters square. What is the magnetic field generated around this loop (we could try to measure this with a Hall probe)? Very approximately, we find that
l\,B \sim \epsilon_0\mu_0\,\frac{E\,l^2}{t},   (3.212)
where l = 0.1 meters is the dimension of the loop, B is the magnetic field strength, E = 10^4 volts per meter is the electric field, and t = 0.1 seconds is the decay time of the field. We find that B \sim 10^{-9} gauss. Modern technology is unable to detect such a small magnetic field, so we cannot really blame Faraday for not noticing electric induction in 1830.
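Evaluating the estimate (3.212) numerically (values as in the text):

```python
import math

EPS0 = 8.8542e-12         # permittivity of free space (C^2 N^-1 m^-2)
MU0 = 4 * math.pi * 1e-7  # permeability of free space (N A^-2)

# Order-of-magnitude field from Eq. (3.212) for the discharging capacitor.
E = 1.0e4   # electric field (V/m)
l = 0.1     # loop dimension (m)
t = 0.1     # discharge time (s)

B_tesla = EPS0 * MU0 * E * l / t
B_gauss = B_tesla * 1.0e4  # 1 tesla = 10^4 gauss; ~ 1e-9 gauss
```

The tiny prefactor \epsilon_0\mu_0 = 1/c^2 \sim 10^{-17} is what makes the effect so hard to see.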
So, you might say, why did you bother mentioning this displacement current thing in the first place if it is undetectable? Again, a perfectly fair question. The answer is that the displacement current is detectable in some experiments. Suppose that we take an FM radio signal, amplify it so that its peak voltage is one hundred volts, and then apply it to the parallel plate capacitor in the previous hypothetical experiment. What size of magnetic field would this generate? Well, a typical FM signal oscillates at 10^9 Hz, so t in the previous example changes from 0.1 seconds to 10^{-9} seconds. Thus, the induced magnetic field is about 10^{-1} gauss. This is certainly detectable by modern technology. So, it would seem that if the electric field is oscillating fast then electric induction of magnetic fields is an observable effect. In fact, there is a virtually infallible rule for deciding whether or not the displacement current can be neglected in Eq. (3.209): if electromagnetic radiation is important then the displacement current must be included; on the other hand, if electromagnetic radiation is unimportant then the displacement current can be safely neglected. Clearly, Maxwell's inclusion of the displacement current in Eq. (3.209) was a vital step in his later realization that his equations allowed propagating wave-like solutions. These solutions are, of course, electromagnetic waves. But, more of this later.
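Repeating the previous estimate with the fast FM time scale shows how the effect becomes detectable (same capacitor field and loop as above, values from the text):

```python
import math

EPS0 = 8.8542e-12         # C^2 N^-1 m^-2
MU0 = 4 * math.pi * 1e-7  # N A^-2

E, l = 1.0e4, 0.1  # same capacitor field (V/m) and loop dimension (m)
t_fm = 1.0e-9      # oscillation time scale of a ~1e9 Hz FM signal (s)

# Same order-of-magnitude estimate as Eq. (3.212), with the new time scale:
B_gauss = EPS0 * MU0 * E * l / t_fm * 1.0e4  # ~ 0.1 gauss: detectable
```

The estimate scales as 1/t, so shrinking the time scale by eight orders of magnitude boosts the induced field from ~1e-9 gauss to ~1e-1 gauss.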
We are now in a position to write out Maxwell's equations in all their glory! We get
\nabla\cdot\mathbf{E} = \frac{\rho}{\epsilon_0},   (3.213a)
\nabla\cdot\mathbf{B} = 0,   (3.213b)
\nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t},   (3.213c)
\nabla\times\mathbf{B} = \mu_0\,\mathbf{j} + \epsilon_0\mu_0\,\frac{\partial\mathbf{E}}{\partial t}.   (3.213d)
These equations sum up the experimental results of Coulomb, Ampère, and Faraday very succinctly; they are called Maxwell's equations because James Clerk Maxwell was the first to write them down (in component form). Maxwell also fixed them up so that they made mathematical sense.
3.18
We have seen that Eqs. (3.213b) and (3.213c) are automatically satisfied if we write the electric and magnetic fields in terms of potentials:
\mathbf{E} = -\nabla\phi - \frac{\partial\mathbf{A}}{\partial t},
\mathbf{B} = \nabla\times\mathbf{A}.   (3.214)
This prescription is not unique, but we can make it unique by adopting the following conventions:
\phi(\mathbf{r}) \to 0 \quad \text{as } |\mathbf{r}|\to\infty,   (3.215a)
\nabla\cdot\mathbf{A} = -\epsilon_0\mu_0\,\frac{\partial\phi}{\partial t}.   (3.215b)
The above equations can be combined with Eq. (3.213a) to give
\epsilon_0\mu_0\,\frac{\partial^2\phi}{\partial t^2} - \nabla^2\phi = \frac{\rho}{\epsilon_0}.   (3.216)
Let us now consider Eq. (3.213d). Substitution of Eqs. (3.214) into this formula yields
\nabla\times(\nabla\times\mathbf{A}) = \nabla(\nabla\cdot\mathbf{A}) - \nabla^2\mathbf{A} = \mu_0\,\mathbf{j} - \epsilon_0\mu_0\,\frac{\partial\,\nabla\phi}{\partial t} - \epsilon_0\mu_0\,\frac{\partial^2\mathbf{A}}{\partial t^2},   (3.217)
or
\epsilon_0\mu_0\,\frac{\partial^2\mathbf{A}}{\partial t^2} - \nabla^2\mathbf{A} = \mu_0\,\mathbf{j} - \nabla\!\left(\nabla\cdot\mathbf{A} + \epsilon_0\mu_0\,\frac{\partial\phi}{\partial t}\right).   (3.218)
We can now see quite clearly where the Lorentz gauge condition (3.215b) comes from. The above equation is, in general, very complicated, since it involves both the vector and scalar potentials. But, if we adopt the Lorentz gauge then the last term on the right-hand side becomes zero, and the equation simplifies considerably so that it only involves the vector potential. Thus, we find that Maxwell's equations reduce to the following:
\epsilon_0\mu_0\,\frac{\partial^2\phi}{\partial t^2} - \nabla^2\phi = \frac{\rho}{\epsilon_0},
\epsilon_0\mu_0\,\frac{\partial^2\mathbf{A}}{\partial t^2} - \nabla^2\mathbf{A} = \mu_0\,\mathbf{j}.   (3.219)
This is the same equation written four times over. In steady state (i.e., \partial/\partial t = 0) it reduces to Poisson's equation, which we know how to solve. With the \partial/\partial t terms included, it becomes a slightly more complicated equation (in fact, a driven three dimensional wave equation).
3.19
Electromagnetic waves
In a vacuum (i.e., \rho = 0 and \mathbf{j} = 0) Maxwell's equations reduce to
\nabla\cdot\mathbf{E} = 0,   (3.220a)
\nabla\cdot\mathbf{B} = 0,   (3.220b)
\nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t},   (3.220c)
\nabla\times\mathbf{B} = \epsilon_0\mu_0\,\frac{\partial\mathbf{E}}{\partial t}.   (3.220d)
Note that these equations exhibit a nice symmetry between the electric and magnetic fields.
There is an easy way to show that the above equations possess wave-like solutions, and a hard way. The easy way is to assume that the solutions are going to be wave-like beforehand; i.e., of the form
\mathbf{E} = \mathbf{E}_0 \cos(\mathbf{k}\cdot\mathbf{r} - \omega t),
\mathbf{B} = \mathbf{B}_0 \cos(\mathbf{k}\cdot\mathbf{r} - \omega t + \phi).   (3.221)
Here, \mathbf{E}_0 and \mathbf{B}_0 are constant vectors, \mathbf{k} is called the wave-vector, and \omega is the angular frequency. The frequency in hertz, f, is related to the angular frequency via \omega = 2\pi f. The frequency is conventionally defined to be positive. The quantity \phi is a phase difference between the electric and magnetic fields. It is more convenient to write
\mathbf{E} = \mathbf{E}_0\,e^{\,i(\mathbf{k}\cdot\mathbf{r} - \omega t)},
\mathbf{B} = \mathbf{B}_0\,e^{\,i(\mathbf{k}\cdot\mathbf{r} - \omega t)},   (3.222)
where by convention the physical solution is the real part of the above equations. The phase difference \phi is absorbed into the constant vector \mathbf{B}_0 by allowing it to become complex. Thus, \mathbf{B}_0 \to \mathbf{B}_0\,e^{\,i\phi}. In general, the vector \mathbf{E}_0 is also complex.
A wave maximum of the electric field satisfies
\mathbf{k}\cdot\mathbf{r} = \omega t + n\,2\pi + \phi,   (3.223)
where n is an integer and \phi is some phase angle. The solution to this equation is a set of equally spaced parallel planes (one plane for each possible value of n) whose normals lie in the direction of the wave vector \mathbf{k} and which propagate in this direction with velocity
v = \frac{\omega}{k}.   (3.224)
The spacing between adjacent planes (i.e., the wavelength) is given by
\lambda = \frac{2\pi}{k}.   (3.225)
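Equations (3.224) and (3.225) are easy to evaluate. A quick sketch (plain Python; the 100 MHz frequency and the use of the speed of light are illustrative choices, anticipating the result derived below):

```python
import math

c = 2.998e8   # propagation speed assumed for this example (m/s)
f = 1.0e8     # a 100 MHz radio wave (illustrative)

omega = 2.0 * math.pi * f  # angular frequency (rad/s)
k = omega / c              # wave-number consistent with v = c

v = omega / k                   # Eq. (3.224): phase velocity
wavelength = 2.0 * math.pi / k  # Eq. (3.225): ~3 m for these numbers
```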
Vector field operations are easy to evaluate for such plane wave solutions. Consider a plane wave vector field
\mathbf{A} = \mathbf{A}_0\,e^{\,i(\mathbf{k}\cdot\mathbf{r} - \omega t)}.   (3.226)
The divergence of \mathbf{A} is
\nabla\cdot\mathbf{A} = \frac{\partial A_x}{\partial x} + \frac{\partial A_y}{\partial y} + \frac{\partial A_z}{\partial z} = i\,(A_{0x} k_x + A_{0y} k_y + A_{0z} k_z)\,e^{\,i(\mathbf{k}\cdot\mathbf{r} - \omega t)} = i\,\mathbf{k}\cdot\mathbf{A}.   (3.227)
Similarly, the x-component of the curl of \mathbf{A} is
(\nabla\times\mathbf{A})_x = \frac{\partial A_z}{\partial y} - \frac{\partial A_y}{\partial z} = i\,(k_y A_z - k_z A_y) = i\,(\mathbf{k}\times\mathbf{A})_x,   (3.228)
so that
\nabla\times\mathbf{A} = i\,\mathbf{k}\times\mathbf{A}.   (3.229)
We can see that vector field operations on a plane wave simplify to dot and cross products involving the wave-vector.
The first Maxwell equation (3.220a) reduces to
i\,\mathbf{k}\cdot\mathbf{E}_0 = 0,   (3.230)
using the assumed electric and magnetic fields (3.222), and Eq. (3.227). Thus, the electric field is perpendicular to the direction of propagation of the wave. Likewise, the second Maxwell equation (3.220b) gives
i\,\mathbf{k}\cdot\mathbf{B}_0 = 0,   (3.231)
implying that the magnetic field is also perpendicular to the direction of propagation. Clearly, the wave-like solutions of Maxwell's equations are a type of transverse wave. The third Maxwell equation gives
i\,\mathbf{k}\times\mathbf{E}_0 = i\,\omega\,\mathbf{B}_0,   (3.232)
where use has been made of Eq. (3.229). Dotting this equation with \mathbf{E}_0 yields
\mathbf{E}_0\cdot\mathbf{B}_0 = \frac{\mathbf{E}_0\cdot\mathbf{k}\times\mathbf{E}_0}{\omega} = 0.   (3.233)
Thus, the electric and magnetic fields are mutually perpendicular. Dotting equation (3.232) with \mathbf{B}_0 yields
\mathbf{B}_0\cdot\mathbf{k}\times\mathbf{E}_0 = \omega\,B_0^{\,2} > 0.   (3.234)
Thus, the vectors \mathbf{E}_0, \mathbf{B}_0, and \mathbf{k} are mutually perpendicular and form a right-handed set. The final Maxwell equation gives
i\,\mathbf{k}\times\mathbf{B}_0 = -i\,\epsilon_0\mu_0\,\omega\,\mathbf{E}_0.   (3.235)
Combining Eqs. (3.232) and (3.235), we obtain
\mathbf{k}\times(\mathbf{k}\times\mathbf{E}_0) = (\mathbf{k}\cdot\mathbf{E}_0)\,\mathbf{k} - k^2\,\mathbf{E}_0 = -\epsilon_0\mu_0\,\omega^2\,\mathbf{E}_0,   (3.236)
or
k^2 = \epsilon_0\mu_0\,\omega^2,   (3.237)
where use has been made of Eq. (3.230). However, we know from Eq. (3.224) that the wave-velocity c is related to the magnitude of the wave-vector and the wave frequency via c = \omega/k. Thus, we obtain
c = \frac{1}{\sqrt{\epsilon_0\mu_0}}.   (3.238)
Substituting in the measured values
\epsilon_0 = 8.8542\times 10^{-12}\ \mathrm{C^2\,N^{-1}\,m^{-2}},
\mu_0 = 4\pi\times 10^{-7}\ \mathrm{N\,A^{-2}},   (3.239)
we obtain c = 2.998\times 10^{8}\ \mathrm{m\,s^{-1}}, which is the measured velocity of light in a vacuum.
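Plugging the measured values (3.239) into Eq. (3.238) takes one line:

```python
import math

EPS0 = 8.8542e-12          # permittivity of free space (C^2 N^-1 m^-2)
MU0 = 4 * math.pi * 1e-7   # permeability of free space (N A^-2)

c = 1.0 / math.sqrt(EPS0 * MU0)  # Eq. (3.238): ~2.998e8 m/s
```

Two purely electrostatic and magnetostatic constants combine to give the measured speed of light, which was Maxwell's clue that light is an electromagnetic wave.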
The electromagnetic spectrum is conventionally divided, in order of decreasing frequency, into: gamma rays, X-rays, ultraviolet, visible, infrared, microwave, TV-FM, and radio.
Equation (3.232) implies that the magnitudes of the electric and magnetic fields in an electromagnetic wave are related via
B_0 = \frac{E_0}{c}.   (3.241)
For a charged particle moving with velocity v, the ratio of the magnetic to the electric force exerted on it by the wave is therefore of order
\frac{f_{\rm magnetic}}{f_{\rm electric}} \sim \frac{v\,B_0}{E_0} \sim \frac{v}{c}.   (3.243)
So, unless the charge is relativistic, the electric force greatly exceeds the magnetic force. Clearly, in most terrestrial situations electromagnetic waves are an essentially electric phenomenon (as far as their interaction with matter goes). For this reason, electromagnetic waves are usually characterized by their wave-vector (which specifies the direction of propagation and the wavelength) and the plane of polarization (i.e., the plane of oscillation) of the associated electric field. For a given wave-vector \mathbf{k}, the electric field can have any direction in the plane normal to \mathbf{k}. However, there are only two independent directions in a plane (i.e., we can only define two linearly independent vectors in a plane). This implies that there are only two independent polarizations of an electromagnetic wave, once its direction of propagation is specified.
Let us now derive the velocity of light from Maxwell's equations the hard way. Suppose that we take the curl of the fourth Maxwell equation, Eq. (3.220d). We obtain
\nabla\times(\nabla\times\mathbf{B}) = \nabla(\nabla\cdot\mathbf{B}) - \nabla^2\mathbf{B} = -\nabla^2\mathbf{B} = \epsilon_0\mu_0\,\frac{\partial\,\nabla\times\mathbf{E}}{\partial t}.   (3.244)
Here, we have used the fact that \nabla\cdot\mathbf{B} = 0. The third Maxwell equation, Eq. (3.220c), yields
\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)\mathbf{B} = 0,   (3.245)
where use has been made of Eq. (3.238). A similar equation can be obtained for the electric field by taking the curl of Eq. (3.220c):
\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)\mathbf{E} = 0.   (3.246)
We have found that electric and magnetic fields both satisfy equations of the form
\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)\mathbf{A} = 0   (3.247)
in free space. As is easily verified, the most general solution to this equation
(with a positive frequency) is
A_x = F_x(\mathbf{k}\cdot\mathbf{r} - kct),
A_y = F_y(\mathbf{k}\cdot\mathbf{r} - kct),
A_z = F_z(\mathbf{k}\cdot\mathbf{r} - kct),   (3.248)
where F_x(\varphi), F_y(\varphi), and F_z(\varphi) are one-dimensional scalar functions. Looking along the direction of the wave-vector, so that \mathbf{r} = (r/k)\,\mathbf{k}, we find that
A_x = F_x(k\,(r - ct)),
A_y = F_y(k\,(r - ct)),
A_z = F_z(k\,(r - ct)).   (3.249)
The x-component of this solution is shown schematically below; it clearly propagates in r with velocity c.
[Figure: the pulse F_x(r, t=0) and the same pulse F_x(r, t) at a later time t, displaced a distance ct along the r axis.]
If we look along a direction which is perpendicular to \mathbf{k} then \mathbf{k}\cdot\mathbf{r} = 0, and there is no propagation. Thus, the components of \mathbf{A} are arbitrarily shaped pulses which propagate, without changing shape, along the direction of \mathbf{k} with velocity c. These pulses can be related to the sinusoidal plane wave solutions which we found earlier by Fourier transformation. Thus, any arbitrary shaped pulse propagating in the direction of \mathbf{k} with velocity c can be broken down into lots of sinusoidal oscillations propagating in the same direction with the same velocity.
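The shape-preserving propagation of the solution (3.249) can be verified directly: the value of the pulse at position r + ct and time t equals its value at position r and time 0. A small sketch (the Gaussian pulse shape and the parameter values are illustrative choices):

```python
import math

c = 2.998e8  # propagation speed (m/s)
k = 1.0      # wave-number scale (illustrative)

def F(phi):
    """An arbitrarily shaped pulse (here a Gaussian; any smooth F works)."""
    return math.exp(-phi * phi)

def A_x(r, t):
    """x-component of the solution (3.249): a pulse translating at speed c."""
    return F(k * (r - c * t))

# The pulse at time t, position r + ct equals the pulse at time 0, position r:
t, r = 1.0e-8, 5.0
same = abs(A_x(r + c * t, t) - A_x(r, 0.0))  # zero up to rounding
```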
The operator
\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}   (3.250)
is called the d'Alembertian. It is the four dimensional equivalent of the Laplacian. Recall that the Laplacian is invariant under rotational transformation. The d'Alembertian goes one better than this, because it is both rotationally invariant and Lorentz invariant. The d'Alembertian is conventionally denoted \Box^2. Thus, electromagnetic waves in free space satisfy the wave equations
\Box^2\mathbf{E} = 0,
\Box^2\mathbf{B} = 0.   (3.251)
When written in terms of the vector and scalar potentials, Maxwell's equations reduce to
\Box^2\phi = -\frac{\rho}{\epsilon_0},
\Box^2\mathbf{A} = -\mu_0\,\mathbf{j}.   (3.252)
These are clearly driven wave equations. Our next task is to find the solutions to these equations.
3.20
Green's functions
Earlier on in this lecture course we had to solve Poisson's equation
\nabla^2 u = v,   (3.253)
where v(\mathbf{r}) is denoted the source function. The potential u(\mathbf{r}) satisfies the boundary condition
u(\mathbf{r}) \to 0 \quad \text{as } |\mathbf{r}|\to\infty,   (3.254)
provided that the source function is reasonably localized. The solutions to Poisson's equation are superposable (because the equation is linear). This property is exploited in the Green's function method of solving this equation. The Green's function G(\mathbf{r}, \mathbf{r}') is the potential, which satisfies the appropriate boundary conditions, generated by a unit amplitude point source located at \mathbf{r}'. Thus,
\nabla^2 G(\mathbf{r}, \mathbf{r}') = \delta(\mathbf{r} - \mathbf{r}').   (3.255)
Any source function v(\mathbf{r}) can be represented as a weighted sum of point sources:
v(\mathbf{r}) = \int \delta(\mathbf{r} - \mathbf{r}')\,v(\mathbf{r}')\,d^3\mathbf{r}'.   (3.256)
It follows from superposability that the potential generated by the source v(\mathbf{r}) can be written as the weighted sum of point source driven potentials (i.e., Green's functions):
u(\mathbf{r}) = \int G(\mathbf{r}, \mathbf{r}')\,v(\mathbf{r}')\,d^3\mathbf{r}'.   (3.257)
We found earlier that the Green's function for Poisson's equation is
G(\mathbf{r}, \mathbf{r}') = -\frac{1}{4\pi}\,\frac{1}{|\mathbf{r} - \mathbf{r}'|}.   (3.258)
It follows that
u(\mathbf{r}) = -\frac{1}{4\pi}\int \frac{v(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\,d^3\mathbf{r}'.   (3.259)
Note that the point source driven potential (3.258) is perfectly sensible. It is spherically symmetric about the source, and falls off smoothly with increasing distance from the source.
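Away from the point source, the Green's function (3.258) is harmonic; i.e., its Laplacian vanishes. This can be checked numerically with a central-difference Laplacian (the evaluation point and step size below are illustrative choices):

```python
import math

def G(x, y, z):
    """Green's function (3.258) for a unit point source at the origin."""
    return -1.0 / (4.0 * math.pi * math.sqrt(x * x + y * y + z * z))

def laplacian(f, x, y, z, h=1e-3):
    """Seven-point central-difference approximation to the Laplacian."""
    return (f(x + h, y, z) + f(x - h, y, z)
            + f(x, y + h, z) + f(x, y - h, z)
            + f(x, y, z + h) + f(x, y, z - h)
            - 6.0 * f(x, y, z)) / (h * h)

# Away from the source (r != 0) the potential is harmonic: Laplacian ~ 0.
lap = laplacian(G, 0.5, 0.3, -0.2)
```

The delta function on the right-hand side of Eq. (3.255) lives entirely at the singular point r = r', which a finite-difference stencil away from the origin never sees.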
We now need to solve the wave equation
\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right) u = v,   (3.260)
where v(\mathbf{r}, t) is a time varying source function. The potential u(\mathbf{r}, t) satisfies the boundary conditions
u(\mathbf{r}) \to 0 \quad \text{as } |\mathbf{r}| \to \infty.   (3.261)
The solutions to Eq. (3.260) are superposable (since the equation is linear), so a Green's function method of solution is again appropriate. The Green's function G(\mathbf{r}, \mathbf{r}'; t, t') is the potential generated by a point impulse located at position \mathbf{r}' and applied at time t'. Thus,
\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right) G(\mathbf{r}, \mathbf{r}'; t, t') = \delta(\mathbf{r} - \mathbf{r}')\,\delta(t - t').   (3.262)
Of course, the Green's function must satisfy the correct boundary conditions. A general source v(\mathbf{r}, t) can be built up from a weighted sum of point impulses:
v(\mathbf{r}, t) = \int\!\!\int \delta(\mathbf{r} - \mathbf{r}')\,\delta(t - t')\,v(\mathbf{r}', t')\,d^3\mathbf{r}'\,dt'.   (3.263)
It follows that the potential generated by v(\mathbf{r}, t) can be written as the weighted sum of point impulse driven potentials:
u(\mathbf{r}, t) = \int\!\!\int G(\mathbf{r}, \mathbf{r}'; t, t')\,v(\mathbf{r}', t')\,d^3\mathbf{r}'\,dt'.   (3.264)
So, how do we find the Green's function? Consider
G(\mathbf{r}, \mathbf{r}'; t, t') = \frac{F(t - t' - |\mathbf{r} - \mathbf{r}'|/c)}{|\mathbf{r} - \mathbf{r}'|},   (3.265)
where F(\varphi) is a general scalar function. Let us try to prove the following theorem:
\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right) G = -4\pi\,F(t - t')\,\delta(\mathbf{r} - \mathbf{r}').   (3.266)
At a general point, \mathbf{r} \neq \mathbf{r}', the above expression reduces to
\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right) G = 0.   (3.267)
So, we basically have to show that G is a valid solution of the free space wave equation. We can easily show that
\frac{\partial |\mathbf{r} - \mathbf{r}'|}{\partial x} = \frac{x - x'}{|\mathbf{r} - \mathbf{r}'|}.   (3.268)
It follows that
\frac{\partial^2 G}{\partial x^2} = \frac{3(x - x')^2 - |\mathbf{r} - \mathbf{r}'|^2}{|\mathbf{r} - \mathbf{r}'|^5}\,F + \frac{3(x - x')^2 - |\mathbf{r} - \mathbf{r}'|^2}{c\,|\mathbf{r} - \mathbf{r}'|^4}\,F' + \frac{(x - x')^2}{c^2\,|\mathbf{r} - \mathbf{r}'|^3}\,F'',   (3.269)
with analogous expressions for \partial^2 G/\partial y^2 and \partial^2 G/\partial z^2. Summing these three expressions, the first two sets of terms cancel, and we obtain
\nabla^2 G = \frac{F''}{c^2\,|\mathbf{r} - \mathbf{r}'|} = \frac{1}{c^2}\,\frac{\partial^2 G}{\partial t^2},   (3.270)
giving
\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right) G = 0,   (3.271)
which is the desired result. Consider, now, the region around \mathbf{r} = \mathbf{r}'. It is clear from Eq. (3.269) that the dominant term on the left-hand side as |\mathbf{r} - \mathbf{r}'| \to 0 is the first one, which is essentially F\,\partial^2(|\mathbf{r} - \mathbf{r}'|^{-1})/\partial x^2. It is also clear that (1/c^2)(\partial^2 G/\partial t^2) is negligible compared to this term. Thus, as |\mathbf{r} - \mathbf{r}'| \to 0 we find that
\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right) G \to F(t - t')\,\nabla^2 \frac{1}{|\mathbf{r} - \mathbf{r}'|}.   (3.272)
However, according to Eqs. (3.255) and (3.258),
\nabla^2 \frac{1}{|\mathbf{r} - \mathbf{r}'|} = -4\pi\,\delta(\mathbf{r} - \mathbf{r}').   (3.273)
We conclude that
\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right) G = -4\pi\,F(t - t')\,\delta(\mathbf{r} - \mathbf{r}').   (3.274)
Let us now make the special choice
F(\varphi) = -\frac{\delta(\varphi)}{4\pi}.   (3.275)
It follows from Eq. (3.274) that
\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right) G = \delta(\mathbf{r} - \mathbf{r}')\,\delta(t - t').   (3.276)
Thus,
G(\mathbf{r}, \mathbf{r}'; t, t') = -\frac{1}{4\pi}\,\frac{\delta(t - t' - |\mathbf{r} - \mathbf{r}'|/c)}{|\mathbf{r} - \mathbf{r}'|}   (3.277)
is the Green's function for the driven wave equation (3.260).
The time dependent Green's function (3.277) is the same as the steady state Green's function (3.258), apart from the delta function appearing in the former. What does this delta function do? Well, consider an observer at point \mathbf{r}. Because of the delta function, our observer only measures a non-zero potential at one particular time,
t = t' + \frac{|\mathbf{r} - \mathbf{r}'|}{c}.   (3.278)
It is clear that this is the time the impulse was applied at position \mathbf{r}' (i.e., t') plus the time taken for a light signal to travel between points \mathbf{r}' and \mathbf{r}. At time t > t' the locus of all points at which the potential is non-zero is
|\mathbf{r} - \mathbf{r}'| = c\,(t - t').   (3.279)
3.21
Retarded potentials
In the steady state, Maxwell's equations for the scalar and vector potentials reduce to
\[ \nabla^2 \phi = -\frac{\rho}{\epsilon_0}, \qquad \nabla^2 A = -\mu_0\, j. \tag{3.280} \]
The solutions to these equations are easily found using the Green's function for Poisson's equation (3.258):
\[ \phi(r) = \frac{1}{4\pi\epsilon_0} \int \frac{\rho(r')}{|r-r'|}\, d^3r', \qquad A(r) = \frac{\mu_0}{4\pi} \int \frac{j(r')}{|r-r'|}\, d^3r'. \tag{3.281} \]
The time dependent Maxwell equations reduce to
\[ \Box^2 \phi = -\frac{\rho}{\epsilon_0}, \qquad \Box^2 A = -\mu_0\, j. \tag{3.282} \]
We can solve these equations using the time dependent Green's function (3.277). From Eq. (3.264) we find that
\[ \phi(r,t) = \frac{1}{4\pi\epsilon_0} \int\!\!\int \frac{\delta(t-t'-|r-r'|/c)\, \rho(r',t')}{|r-r'|}\, d^3r'\, dt', \tag{3.283} \]
with a similar equation for $A$. Using the well known property of delta functions, these equations reduce to
\[ \phi(r,t) = \frac{1}{4\pi\epsilon_0} \int \frac{\rho(r',\, t-|r-r'|/c)}{|r-r'|}\, d^3r', \qquad A(r,t) = \frac{\mu_0}{4\pi} \int \frac{j(r',\, t-|r-r'|/c)}{|r-r'|}\, d^3r'. \tag{3.284} \]
These are the general solutions to Maxwell's equations! Note that the time dependent solutions (3.284) are the same as the steady state solutions (3.281), apart from the unusual way in which time appears in the former. According to Eqs. (3.284), if we want to work out the potentials at position $r$ and time $t$ we have to perform integrals of the charge density and current density over all space (just like in the steady state situation). However, when we calculate the contribution of charges and currents at position $r'$ to these integrals we do not use the values at time $t$; instead, we use the values at some earlier time $t - |r-r'|/c$. What is this earlier time? It is simply the latest time at which a light signal emitted from position $r'$ would be received at position $r$ before time $t$. This is called the retarded time. Likewise, the potentials (3.284) are called retarded potentials. It is often useful to adopt the following notation:
\[ A(r',\, t-|r-r'|/c) \equiv \left[A(r',t)\right]. \tag{3.285} \]
The square brackets denote retardation (i.e., using the retarded time instead of the real time). Using this notation, Eqs. (3.284) become
\[ \phi(r) = \frac{1}{4\pi\epsilon_0} \int \frac{[\rho(r')]}{|r-r'|}\, d^3r', \qquad A(r) = \frac{\mu_0}{4\pi} \int \frac{[j(r')]}{|r-r'|}\, d^3r'. \tag{3.286} \]
The time dependence in the above equations is taken as read.
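The retarded prescription of Eqs. (3.284)-(3.286) is easy to evaluate directly for a single point-like source. In the Python sketch below (the charge waveform and all numbers are invented for illustration, not from the text), an observer at distance $r$ sees the charge as it was $r/c$ seconds earlier:

```python
import math

EPS0 = 8.854e-12  # permittivity of free space (F/m)
C = 2.998e8       # speed of light (m/s)

def q(t):
    # Hypothetical time-varying charge, q(t) = q0 sin(omega t), for illustration.
    q0, omega = 1.0e-9, 1.0e6
    return q0 * math.sin(omega * t)

def phi_retarded(r, t):
    """Retarded potential of the point charge, per Eq. (3.284):
    the charge is evaluated at the retarded time t - r/c, not at t."""
    t_ret = t - r / C
    return q(t_ret) / (4.0 * math.pi * EPS0 * r)

# At r = 300 m the potential lags the charge by r/c, about one microsecond.
print(phi_retarded(300.0, 5.0e-6))
```

Replacing `t_ret` by `t` would instead give the (incorrect) instantaneous, action-at-a-distance potential.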
We are now in a position to understand electromagnetism at its most fundamental level. A charge distribution $\rho(r,t)$ can be thought of as built up out of a collection, or series, of charges which instantaneously come into existence, at some point $r'$ and some time $t'$, and then disappear again. Mathematically, this is written
\[ \rho(r,t) = \int\!\!\int \delta(r-r')\,\delta(t-t')\, \rho(r',t')\, d^3r'\, dt'. \tag{3.287} \]
Likewise, we can think of a current distribution j(r, t) as built up out of a collection or series of currents which instantaneously appear and then disappear:
\[ j(r,t) = \int\!\!\int \delta(r-r')\,\delta(t-t')\, j(r',t')\, d^3r'\, dt'. \tag{3.288} \]
Each of these ephemeral charges and currents excites a spherical wave in the
appropriate potential. Thus, the charge density at $r'$ and $t'$ sends out a wave in
the scalar potential:
\[ \phi(r,t) = \frac{\rho(r',t')}{4\pi\epsilon_0}\, \frac{\delta(t-t'-|r-r'|/c)}{|r-r'|}. \tag{3.289} \]
Likewise, the current density at $r'$ and $t'$ sends out a wave in the vector potential:
\[ A(r,t) = \frac{\mu_0\, j(r',t')}{4\pi}\, \frac{\delta(t-t'-|r-r'|/c)}{|r-r'|}. \tag{3.290} \]
These waves can be thought of as little messengers which inform other charges
and currents about the charges and currents present at position $r'$ and time $t'$.
However, the messengers travel at a finite speed; i.e., the speed of light. So, by
the time they reach other charges and currents their message is a little out of date.
Every charge and every current in the universe emits these spherical waves. The
resultant scalar and vector potential fields are given by Eqs. (3.286). Of course,
we can turn these fields into electric and magnetic fields using Eqs. (3.187). We
can then evaluate the force exerted on charges using the Lorentz formula. We can
see that we have now escaped from the apparent action at a distance nature of
Coulombs law and the Biot-Savart law. Electromagnetic information is carried
by spherical waves in the vector and scalar potentials and, therefore, travels at the
velocity of light. Thus, if we change the position of a charge then a distant charge
can only respond after a time delay sufficient for a spherical wave to propagate
from the former to the latter charge.
Let us compare the steady-state law
\[ \phi(r) = \frac{1}{4\pi\epsilon_0} \int \frac{\rho(r')}{|r-r'|}\, d^3r' \tag{3.291} \]
with the corresponding time dependent law
\[ \phi(r) = \frac{1}{4\pi\epsilon_0} \int \frac{[\rho(r')]}{|r-r'|}\, d^3r'. \tag{3.292} \]
These two formulae look very similar indeed, but there is an important difference.
We can imagine (rather pictorially) that every charge in the universe is continuously performing the integral (3.291), and is also performing a similar integral to
find the vector potential. After evaluating both potentials, the charge can calculate the fields and using the Lorentz force law it can then work out its equation of
motion. The problem is that the information the charge receives from the rest of
the universe is carried by our spherical waves, and is always slightly out of date
(because the waves travel at a finite speed). As the charge considers more and
more distant charges or currents its information gets more and more out of date.
(Similarly, when astronomers look out to more and more distant galaxies in the
universe they are also looking backwards in time. In fact, the light we receive
from the most distant observable galaxies was emitted when the universe was
only about a third of its present age.) So, what does our electron do? It simply
uses the most up to date information about distant charges and currents which
it possesses. So, instead of incorporating the charge density (r, t) in its integral
the electron uses the retarded charge density [(r, t)] (i.e., the density evaluated
at the retarded time). This is effectively what Eq. (3.292) says.
As an example, consider a charge $q$, located at position $r_0$, which appears at time $t_1$ and disappears again at time $t_2$. [Figure: three panels showing the situation for $t < t_1$, $t_1 < t < t_2$, and $t > t_2$.] The scalar potential generated by this charge is
\[ \phi(r,t) = \frac{1}{4\pi\epsilon_0}\, \frac{q}{|r-r_0|} \quad \text{for } t_1 \leq t - |r-r_0|/c \leq t_2, \qquad \phi(r,t) = 0 \quad \text{otherwise}, \tag{3.293} \]
and the associated electric field is
\[ E(r,t) = \frac{q}{4\pi\epsilon_0}\, \frac{r-r_0}{|r-r_0|^3} \quad \text{for } t_1 \leq t - |r-r_0|/c \leq t_2, \qquad E(r,t) = 0 \quad \text{otherwise}. \tag{3.294} \]
This solution is shown pictorially above. We can see that the charge effectively
emits a Coulomb electric field which propagates radially away from the charge
at the speed of light. Likewise, it is easy to show that a current carrying wire
effectively emits an Ampèrian magnetic field at the speed of light.
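The causal shell structure of Eq. (3.293) is easy to verify numerically: the potential is non-zero only for observation events whose retarded time falls between the appearance and disappearance of the charge. The Python sketch below uses invented values for the charge and the times:

```python
import math

EPS0 = 8.854e-12  # permittivity of free space (F/m)
C = 2.998e8       # speed of light (m/s)

def phi_transient(q, r_dist, t, t1, t2):
    """Eq. (3.293): potential of a charge q that exists only between t1 and t2.
    Non-zero only when the retarded time t - r/c lies within [t1, t2]."""
    t_ret = t - r_dist / C
    if t1 <= t_ret <= t2:
        return q / (4.0 * math.pi * EPS0 * r_dist)
    return 0.0

q, t1, t2 = 1.0e-9, 0.0, 1.0e-6
# 1) Too far away: news of the charge's appearance has not yet arrived.
# 2) Inside the expanding shell: the ordinary Coulomb potential is seen.
# 3) News of the disappearance has already swept past: zero again.
print(phi_transient(q, 1000.0, 2.0e-6, t1, t2))  # retarded time < t1
print(phi_transient(q, 500.0, 2.0e-6, t1, t2))   # retarded time in [t1, t2]
print(phi_transient(q, 100.0, 2.0e-6, t1, t2))   # retarded time > t2
```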
We can now appreciate the essential difference between time dependent electromagnetism and the action at a distance laws of Coulomb and Biot & Savart.
In the latter theories, the field lines act rather like rigid wires attached to charges
(or circulating around currents). If the charges (or currents) move then so do the
field lines, leading inevitably to unphysical action at a distance type behaviour.
In the time dependent theory charges act rather like water sprinklers; i.e., they
spray out the Coulomb field in all directions at the speed of light. Similarly,
current carrying wires throw out magnetic field loops at the speed of light. If we
move a charge (or current) then field lines emitted beforehand are not affected, so
the field at a distant charge (or current) only responds to the change in position
after a time delay sufficient for the field to propagate between the two charges (or
currents) at the speed of light. In Coulomb's law and the Biot-Savart law it is not
obvious that the electric and magnetic fields have any real existence. The only
measurable quantities are the forces acting between charges and currents. We can
describe the force acting on a given charge or current, due to the other charges
and currents in the universe, in terms of the local electric and magnetic fields,
but we have no way of knowing whether these fields persist when the charge or
current is not present (i.e., we could argue that electric and magnetic fields are
just a convenient way of calculating forces but, in reality, the forces are transmitted directly between charges and currents by some form of magic). However,
it is patently obvious that electric and magnetic fields have a real existence in
the time dependent theory. Consider the following thought experiment. Suppose
that a charge $q_1$ comes into existence for a period of time, emits a Coulomb field, and then disappears. Suppose that a distant charge $q_2$ interacts with this field,
but is sufficiently far from the first charge that by the time the field arrives the
first charge has already disappeared. The force exerted on the second charge
is only ascribable to the electric field; it cannot be ascribed to the first charge
because this charge no longer exists by the time the force is exerted. The electric
field clearly transmits energy and momentum between the two charges. Anything
which possesses energy and momentum is real in a physical sense. Later on in
this course we shall demonstrate that electric and magnetic fields conserve energy
and momentum.
Let us now consider a moving charge. Such a charge is continually emitting
spherical waves in the scalar potential, and the resulting wave front pattern is
sketched below. Clearly, the wavefronts are more closely spaced in front of the
charge than they are behind it, suggesting that the electric field in front is larger
than the field behind. In a medium, such as water or air, where waves travel
at a finite speed c (say) it is possible to get a very interesting effect if the wave
source travels at some velocity v which exceeds the wave speed. This is illustrated below. The locus of the outermost wave front is now a cone instead of a
sphere. The wave intensity on the cone is extremely large: this is a shock wave!
The half-angle of the shock wave cone is simply $\sin^{-1}(c/v)$. In water, shock
waves are produced by fast moving boats. We call these bow waves. In air,
shock waves are produced by speeding bullets and supersonic jets. In the latter
case we call these sonic booms. Is there any such thing as an electromagnetic
shock wave? At first sight, the answer to this question would appear to be, no.
After all, electromagnetic waves travel at the speed of light and no wave source
(i.e., an electrically charged particle) can travel faster than this velocity. This
is a rather disappointing conclusion. However, when an electromagnetic wave
travels through matter a remarkable thing happens. The oscillating electric field
of the wave induces a slight separation of the positive and negative charges in
the atoms which make up the material. We call separated positive and negative
charges an electric dipole. Of course, the atomic dipoles oscillate in sympathy
with the field which induces them. However, an oscillating electric dipole radiates
electromagnetic waves. Amazingly, when we add the original wave to these induced waves it is exactly as if the original wave propagates through the material
in question at a velocity which is slower than the velocity of light in vacuum.
Suppose, now, that we shoot a charged particle through the material faster than
the slowed down velocity of electromagnetic waves. This is possible since the
waves are traveling slower than the velocity of light in vacuum. In practice, the
particle has to be traveling pretty close to the velocity of light in vacuum (i.e., it
has to be relativistic), but modern particle accelerators produce copious amounts
of such particles. Now, we can get an electromagnetic shock wave. We expect an
intense cone of emission, just like the bow wave produced by a fast ship. In fact,
this type of radiation has been observed. It is called Cherenkov radiation, and it
is very useful in high energy physics. Cherenkov radiation is typically produced
by surrounding a particle accelerator with perspex blocks. Relativistic charged
particles emanating from the accelerator pass through the perspex traveling faster
than the local velocity of light, and therefore emit Cherenkov radiation. We know the velocity of light ($c^*$, say) in perspex (this can be worked out from the refractive index), so if we can measure the half-angle $\theta$ of the radiation cone emitted by each particle then we can evaluate the speed of the particle $v$ via the geometric relation $\cos\theta = c^*/v$.
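The relation $\cos\theta = c^*/v$ inverts into a simple particle-speed measurement. The Python sketch below takes the refractive index of perspex as $n \approx 1.49$ (a typical tabulated value); the measured cone half-angle is an invented example:

```python
import math

C = 2.998e8      # speed of light in vacuum (m/s)
n = 1.49         # refractive index of perspex (typical tabulated value)
c_star = C / n   # speed of light in the medium

def particle_speed(theta):
    """Invert cos(theta) = c*/v: particle speed from the measured
    half-angle of the Cherenkov radiation cone."""
    return c_star / math.cos(theta)

theta = math.radians(30.0)   # hypothetical measured half-angle
v = particle_speed(theta)
# Cherenkov emission requires c* < v < c, i.e. 1/n < v/c < 1.
assert c_star < v < C
print(v / C)
```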
3.22
Advanced potentials?
We have defined the retarded time
\[ t_r = t - |r-r'|/c \tag{3.295} \]
as the latest time at which a light signal emitted from position $r'$ would reach position $r$ before time $t$. We have also shown that a solution to Maxwell's equations
can be written in terms of retarded potentials:
\[ \phi(r,t) = \frac{1}{4\pi\epsilon_0} \int \frac{\rho(r',\, t_r)}{|r-r'|}\, d^3r', \tag{3.296} \]
etc. But, is this the most general solution? Suppose that we define the advanced time
\[ t_a = t + |r-r'|/c. \tag{3.297} \]
This is the time a light signal emitted at time $t$ from position $r$ would reach position $r'$. It turns out that we can also write a solution to Maxwell's equations in terms of advanced potentials:
\[ \phi(r,t) = \frac{1}{4\pi\epsilon_0} \int \frac{\rho(r',\, t_a)}{|r-r'|}\, d^3r', \tag{3.298} \]
etc. In fact, this is just as good a solution to Maxwell's equations as the one involving retarded potentials. To get some idea what is going on, let us examine the Green's function corresponding to our retarded potential solution:
\[ \phi(r,t) = \frac{\rho(r',t')}{4\pi\epsilon_0}\, \frac{\delta(t-t'-|r-r'|/c)}{|r-r'|}, \tag{3.299} \]
with a similar equation for the vector potential. This says that the charge density
present at position $r'$ and time $t'$ emits a spherical wave in the scalar potential which propagates forwards in time. The Green's function corresponding to our
advanced potential solution is
\[ \phi(r,t) = \frac{\rho(r',t')}{4\pi\epsilon_0}\, \frac{\delta(t-t'+|r-r'|/c)}{|r-r'|}. \tag{3.300} \]
This says that the charge density present at position $r'$ and time $t'$ emits a
spherical wave in the scalar potential which propagates backwards in time. But,
hang on a minute, you might say, everybody knows that electromagnetic waves
can't travel backwards in time. If they did then causality would be violated. Well, you know that electromagnetic waves do not propagate backwards in time, I know that electromagnetic waves do not propagate backwards in time, but the question is: do Maxwell's equations know this? Consider the wave equation for
the scalar potential:
\[ \left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right) \phi = -\frac{\rho}{\epsilon_0}. \tag{3.301} \]
This equation is manifestly symmetric in time (i.e., it is invariant under the transformation $t \rightarrow -t$). Thus, backward traveling waves are just as good a solution to this equation as forward traveling waves. The equation is also symmetric in space (i.e., it is invariant under the transformation $x \rightarrow -x$). So, why do we adopt the Green's function (3.299), which is symmetric in space (i.e., it is invariant under $x \rightarrow -x$) but asymmetric in time (i.e., it is not invariant under $t \rightarrow -t$)? Would it not be better to use the completely symmetric Green's function
\[ \phi(r,t) = \frac{\rho(r',t')}{4\pi\epsilon_0}\, \frac{1}{2} \left( \frac{\delta(t-t'-|r-r'|/c)}{|r-r'|} + \frac{\delta(t-t'+|r-r'|/c)}{|r-r'|} \right)? \tag{3.302} \]
In other words, a charge emits half of its waves running forwards in time (i.e.,
retarded waves) and the other half running backwards in time (i.e., advanced
waves). This sounds completely crazy! However, in the 1940s Richard P. Feynman and John A. Wheeler pointed out that under certain circumstances this
prescription gives the right answer. Consider a charge interacting with the rest
of the universe, where the rest of the universe denotes all of the distant charges
in the universe and is, by implication, an awful long way away from our original
charge. Suppose that the rest of the universe is a perfect reflector of advanced
waves and a perfect absorber of retarded waves. The waves emitted by the charge
can be written schematically as
\[ F = \frac{1}{2}(\text{retarded}) + \frac{1}{2}(\text{advanced}). \tag{3.303} \]
The response of the rest of the universe is
\[ R = \frac{1}{2}(\text{retarded}) - \frac{1}{2}(\text{advanced}). \tag{3.304} \]
This is illustrated in the space-time diagram below. [Space-time diagram: time versus space; the world line of the charge, with the rest of the universe far away; waves labelled A, R, a, and aa.] Here, A and R denote the
advanced and retarded waves emitted by the charge, respectively. The advanced
wave travels to the rest of the universe and is reflected; i.e., the distant charges
oscillate in response to the advanced wave and emit a retarded wave a, as shown.
The retarded wave a is a spherical wave which converges on the original charge,
passes through the charge, and then diverges again. The divergent wave is denoted aa. Note that a looks like a negative advanced wave emitted by the charge,
whereas aa looks like a positive retarded wave emitted by the charge. This is
essentially what Eq. (3.304) says. The retarded waves R and aa are absorbed by
the rest of the universe.
If we add the waves emitted by the charge to the response of the rest of the
universe we obtain
\[ F' = F + R = (\text{retarded}). \tag{3.305} \]
Thus, charges appear to emit only retarded waves, which agrees with our everyday
experience. Clearly, in this model we have side-stepped the problem of a time
asymmetric Green's function by adopting time asymmetric boundary conditions
to the universe; i.e., the distant charges in the universe absorb retarded waves
and reflect advanced waves. This is possible because the absorption takes place
at the end of the universe (i.e., at the big crunch, or whatever) and the reflection takes place at the beginning of the universe (i.e., at the big bang). It
is quite plausible that the state of the universe (and, hence, its interaction with
electromagnetic waves) is completely different at these two times. It should be
pointed out that the Feynman-Wheeler model runs into trouble when one tries to
combine electromagnetism with quantum mechanics. These difficulties have yet
to be resolved, so at present the status of this model is that it is an interesting
idea but it is still not fully accepted into the canon of physics.
3.23
Retarded fields
We know the solution to Maxwell's equations in terms of retarded potentials. Let us now construct the associated electric and magnetic fields using
\[ E = -\nabla\phi - \frac{\partial A}{\partial t}, \qquad B = \nabla\times A. \tag{3.306} \]
It is helpful to write
\[ \mathbf{R} = r - r', \tag{3.307} \]
where $R = |r-r'|$. The retarded time becomes $t_r = t - R/c$, and a general retarded quantity is written $[F(r,t)] \equiv F(r, t_r)$. Thus, we can write the retarded potential solutions of Maxwell's equations in the especially compact form
\[ \phi = \frac{1}{4\pi\epsilon_0} \int \frac{[\rho]}{R}\, dV', \qquad A = \frac{\mu_0}{4\pi} \int \frac{[j]}{R}\, dV', \tag{3.308} \]
where $dV' \equiv d^3r'$. It is easily seen that
\[ \nabla\phi = \frac{1}{4\pi\epsilon_0} \int \left( [\rho]\, \nabla(R^{-1}) + \frac{[\partial\rho/\partial t]}{R}\, \nabla t_r \right) dV' = -\frac{1}{4\pi\epsilon_0} \int \left( [\rho]\, \frac{\mathbf{R}}{R^3} + \frac{[\partial\rho/\partial t]}{c}\, \frac{\mathbf{R}}{R^2} \right) dV', \tag{3.309} \]
where use has been made of
\[ \nabla R = \frac{\mathbf{R}}{R}, \qquad \nabla(R^{-1}) = -\frac{\mathbf{R}}{R^3}, \qquad \nabla t_r = -\frac{\mathbf{R}}{cR}. \tag{3.310} \]
Likewise,
\[ \nabla\times A = \frac{\mu_0}{4\pi} \int \left( \nabla(R^{-1})\times [j] + \frac{\nabla t_r \times [\partial j/\partial t]}{R} \right) dV' = \frac{\mu_0}{4\pi} \int \left( \frac{[j]\times\mathbf{R}}{R^3} + \frac{[\partial j/\partial t]\times\mathbf{R}}{cR^2} \right) dV'. \tag{3.311} \]
4
Z
R
1
[j/t]
R
[] 3 +
E=
dV 0 ,
2
2
40
R
t cR
c R
which is the time dependent generalization of Coulombs law, and
Z
[j] R [j/t] R
0
B=
+
dV 0 ,
3
2
4
R
cR
140
(3.311)
(3.312)
(3.313)
(3.314)
so the difference between retarded time and standard time is relatively small.
This allows us to expand retarded quantities in a Taylor series. Thus,
\[ [\rho] \simeq \rho + \frac{\partial\rho}{\partial t}\,(t_r - t) + \frac{1}{2}\,\frac{\partial^2\rho}{\partial t^2}\,(t_r - t)^2 + \cdots, \tag{3.315} \]
giving
\[ [\rho] \simeq \rho - \frac{\partial\rho}{\partial t}\,\frac{R}{c} + \frac{1}{2}\,\frac{\partial^2\rho}{\partial t^2}\,\frac{R^2}{c^2} + \cdots. \tag{3.316} \]
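A quick numerical sanity check of the expansion (3.316) is straightforward. Below, the charge density waveform is an invented sinusoid; for $R \ll R_0$ the second-order Taylor approximation matches the exact retarded value to within an error of order $(R/R_0)^3$:

```python
import math

C = 2.998e8                            # speed of light (m/s)
omega = 2.0 * math.pi * 10.0           # illustrative: density oscillates at 10 Hz
t0 = 1.0 / omega                       # variation time-scale
R0 = C * t0                            # near-field scale, roughly 4800 km here

rho = lambda t: math.sin(omega * t)            # invented rho(t)
drho = lambda t: omega * math.cos(omega * t)   # d(rho)/dt
d2rho = lambda t: -omega**2 * math.sin(omega * t)

def rho_taylor(t, R):
    """Second-order expansion (3.316): rho - rho' R/c + (1/2) rho'' (R/c)^2."""
    return rho(t) - drho(t) * R / C + 0.5 * d2rho(t) * (R / C) ** 2

t, R = 0.3, 1.0e4                      # R = 10 km, well inside R0
exact = rho(t - R / C)                 # the true retarded value
approx = rho_taylor(t, R)
assert abs(exact - approx) < (R / R0) ** 3
```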
Expansion of the retarded quantities in the near field region yields
\[ E \simeq \frac{1}{4\pi\epsilon_0} \int \left( \rho\,\frac{\mathbf{R}}{R^3} - \frac{1}{2}\,\frac{\partial^2\rho}{\partial t^2}\,\frac{\mathbf{R}}{c^2 R} - \frac{\partial j/\partial t}{c^2 R} \right) dV' + \cdots, \tag{3.317a} \]
\[ B \simeq \frac{\mu_0}{4\pi} \int \left( \frac{j\times\mathbf{R}}{R^3} - \frac{1}{2}\,\frac{(\partial^2 j/\partial t^2)\times\mathbf{R}}{c^2 R} \right) dV' + \cdots. \tag{3.317b} \]
In Eq. (3.317a) the first term on the right-hand side corresponds to Coulomb's law, the second term is the correction due to retardation effects, and the third term corresponds to Faraday induction. In Eq. (3.317b) the first term on the right-hand side is the Biot-Savart law and the second term is the correction due to retardation effects. Note that the retardation corrections are only of order $(R/R_0)^2$. We might suppose, from looking at Eqs. (3.312) and (3.313), that the corrections should be of order $R/R_0$; however, all of the order $R/R_0$ terms cancelled out in the previous expansion. Suppose, then, that we have a d.c. circuit sitting on a laboratory benchtop. Let the currents in the circuit change on a typical time-scale of one tenth of a second. In this time light can travel about $3\times 10^7$ meters, so $R_0 \sim 30{,}000$ kilometers. The length-scale of the experiment is about one meter, so $R = 1$ meter. Thus, the retardation corrections are of order $(3\times 10^7)^{-2} \sim 10^{-15}$. It is clear that we are fairly safe just using Coulomb's law, Faraday's law, and the Biot-Savart law to analyze the fields generated by this type of circuit.
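The benchtop estimate above can be reproduced in a few lines; the numbers are the ones quoted in the text:

```python
c = 3.0e8          # speed of light (m/s)
t0 = 0.1           # time-scale on which the circuit's currents change (s)
R0 = c * t0        # distance light travels in t0: 3e7 m, i.e. 30,000 km
R = 1.0            # length-scale of the benchtop experiment (m)

correction = (R / R0) ** 2   # retardation corrections are of order (R/R0)^2
print(correction)            # about 1e-15: utterly negligible
```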
In the far field region, $R \gg R_0$, Eqs. (3.312) and (3.313) are dominated by the terms which vary like $R^{-1}$, so
\[ E \simeq -\frac{1}{4\pi\epsilon_0} \int \frac{[\partial j_\perp/\partial t]}{c^2 R}\, dV', \tag{3.318a} \]
\[ B \simeq -\frac{\mu_0}{4\pi} \int \frac{[\partial j_\perp/\partial t]\times\mathbf{R}}{c R^2}\, dV', \tag{3.318b} \]
where
\[ j_\perp \equiv j - \frac{(j\cdot\mathbf{R})}{R^2}\,\mathbf{R}. \tag{3.318c} \]
Here, use has been made of the continuity equation, $[\partial\rho/\partial t] = -[\nabla\cdot j]$, together with $[\nabla\cdot j] = -[\partial j/\partial t]\cdot\mathbf{R}/(cR) + O(1/R^2)$. Suppose that our charges and currents are localized to some region in the vicinity of $r' = r_*$. Let $\mathbf{R}_* = r - r_*$, with $R_* = |r - r_*|$. Suppose that the extent of the current and charge containing region is much less than $R_*$. It follows that retarded quantities can be written
\[ [j(r',t)] \simeq j(r',\, t - R_*/c), \tag{3.319} \]
etc. Thus, for a localized charge and current distribution, Eqs. (3.318a) and (3.318b) reduce to
\[ E \simeq -\frac{1}{4\pi\epsilon_0}\, \frac{\int \partial j_\perp/\partial t\, dV'}{c^2 R_*}, \tag{3.320} \]
\[ B \simeq -\frac{1}{4\pi\epsilon_0}\, \frac{\left(\int \partial j_\perp/\partial t\, dV'\right)\times\mathbf{R}_*}{c^3 R_*^{\,2}}, \tag{3.321} \]
where the integrands are evaluated at the retarded time $t - R_*/c$.
Note that
\[ \frac{E}{B} = c, \tag{3.322} \]
and
\[ E\cdot B = 0. \tag{3.323} \]
This configuration of electric and magnetic fields is characteristic of an electromagnetic wave (see Section 3.19). Thus, Eqs. (3.322) and (3.323) describe
an electromagnetic wave propagating radially away from the charge and current
containing region. Note that the wave is driven by time varying electric currents.
Now, charges moving with a constant velocity constitute a steady current, so a
non-steady current is associated with accelerating charges. We conclude that accelerating electric charges emit electromagnetic waves. The wave fields, (3.320)
and (3.321), fall off like the inverse of the distance from the wave source. This
behaviour should be contrasted with that of Coulomb or Biot-Savart fields which
fall off like the inverse square of the distance from the source. The fact that wave
fields attenuate fairly gently with increasing distance from the source is what
makes astronomy possible. If wave fields obeyed an inverse square law then no
appreciable radiation would reach us from the rest of the universe.
In conclusion, electric and magnetic fields look simple in the near field region
(they are just Coulomb fields, etc.) and also in the far field region (they are just
electromagnetic waves). Only in the intermediate region, $R \sim R_0$, do things start
getting really complicated (so we do not look in this region!).
3.24
Summary
This marks the end of our theoretical investigation of Maxwells equations. Let
us now summarize what we have learned so far. The field equations which govern
electric and magnetic fields are written:
\[ \nabla\cdot E = \frac{\rho}{\epsilon_0}, \tag{3.324a} \]
\[ \nabla\cdot B = 0, \tag{3.324b} \]
\[ \nabla\times E = -\frac{\partial B}{\partial t}, \tag{3.324c} \]
\[ \nabla\times B = \mu_0\, j + \frac{1}{c^2}\,\frac{\partial E}{\partial t}. \tag{3.324d} \]
These equations can also be written in integral form:
\[ \oint_S E\cdot dS = \frac{1}{\epsilon_0} \int_V \rho\, dV, \tag{3.325a} \]
\[ \oint_S B\cdot dS = 0, \tag{3.325b} \]
\[ \oint_C E\cdot dl = -\frac{\partial}{\partial t} \int_S B\cdot dS, \tag{3.325c} \]
\[ \oint_C B\cdot dl = \mu_0 \int_S j\cdot dS + \frac{1}{c^2}\,\frac{\partial}{\partial t} \int_S E\cdot dS. \tag{3.325d} \]
The field equations are automatically satisfied by writing the fields in terms of a scalar potential $\phi$ and a vector potential $A$:
\[ E = -\nabla\phi - \frac{\partial A}{\partial t}, \tag{3.326a} \]
\[ B = \nabla\times A. \tag{3.326b} \]
This prescription is not unique (there are many choices of and A which generate the same fields) but we can make it unique by adopting the following conventions:
\[ \phi(r) \rightarrow 0 \quad \text{as } |r| \rightarrow \infty, \tag{3.327} \]
and
\[ \frac{1}{c^2}\,\frac{\partial\phi}{\partial t} + \nabla\cdot A = 0. \tag{3.328} \]
Equations (3.324a) and (3.324d) reduce to
\[ \Box^2 \phi = -\frac{\rho}{\epsilon_0}, \tag{3.329a} \]
\[ \Box^2 A = -\mu_0\, j, \tag{3.329b} \]
which are driven wave equations of the general form
\[ \Box^2 u \equiv \left(\nabla^2 - \frac{1}{c^2}\,\frac{\partial^2}{\partial t^2}\right) u = v. \tag{3.330} \]
The Green's function for this equation which satisfies the boundary conditions and is consistent with causality is
\[ G(r,r';t,t') = -\frac{1}{4\pi}\,\frac{\delta(t-t'-|r-r'|/c)}{|r-r'|}. \tag{3.331} \]
The time dependent solutions of Maxwell's equations can therefore be written in terms of retarded potentials:
\[ \phi(r,t) = \frac{1}{4\pi\epsilon_0} \int \frac{[\rho]}{R}\, dV', \tag{3.332a} \]
\[ A(r,t) = \frac{\mu_0}{4\pi} \int \frac{[j]}{R}\, dV'. \tag{3.332b} \]
The associated retarded electromagnetic fields are
\[ E(r,t) = \frac{1}{4\pi\epsilon_0} \int \left( [\rho]\,\frac{\mathbf{R}}{R^3} + \frac{[\partial\rho/\partial t]}{c}\,\frac{\mathbf{R}}{R^2} - \frac{[\partial j/\partial t]}{c^2 R} \right) dV', \tag{3.333a} \]
\[ B(r,t) = \frac{\mu_0}{4\pi} \int \left( \frac{[j]\times\mathbf{R}}{R^3} + \frac{[\partial j/\partial t]\times\mathbf{R}}{cR^2} \right) dV'. \tag{3.333b} \]
Equations (3.324)-(3.333) constitute the complete theory of classical electromagnetism. We can express the same information in terms of field equations [Eqs. (3.324)], integrated field equations [Eqs. (3.325)], retarded electromagnetic potentials [Eqs. (3.332)], and retarded electromagnetic fields [Eqs. (3.333)]. Let
us now consider the applications of this theory.
4.1
Electrostatic energy
Consider a collection of $N$ static point charges $q_i$ located at position vectors $r_i$. What is the electrostatic energy stored in such a collection? In other words, how much work would we have to do in order to assemble the charges, starting from an initial state in which they are all at rest and very widely separated? We know that a static electric field is conservative, and can consequently be written in terms of a scalar potential:
\[ E = -\nabla\phi. \tag{4.1} \]
We also know that the electric force on a charge $q$ is written
\[ f = q\, E. \tag{4.2} \]
The work we would have to do against electrical forces in order to move the charge from point $P$ to point $Q$ is simply
\[ W = -\int_P^Q f\cdot dl = -q \int_P^Q E\cdot dl = q \int_P^Q \nabla\phi\cdot dl = q\,\left[\phi(Q) - \phi(P)\right]. \tag{4.3} \]
The negative sign in the above expression comes about because we would have to exert a force $-f$ on the charge in order to counteract the force exerted by the electric field. Recall that the scalar potential generated by a point charge $q'$ at position $r'$ is
\[ \phi(r) = \frac{1}{4\pi\epsilon_0}\, \frac{q'}{|r-r'|}. \tag{4.4} \]
Let us build up our collection of charges one by one. It takes no work to bring
the first charge from infinity, since there is no electric field to fight against. Let
us clamp this charge in position at r1 . In order to bring the second charge into
position at $r_2$ we have to do work against the electric field generated by the first charge. According to Eqs. (4.3) and (4.4), this work is given by
\[ W_2 = \frac{1}{4\pi\epsilon_0}\, \frac{q_1 q_2}{|r_1 - r_2|}. \tag{4.5} \]
Let us now bring the third charge into position. Since electric fields and scalar
potentials are superposable the work done whilst moving the third charge from infinity to r3 is simply the sum of the work done against the electric fields generated
by charges 1 and 2 taken in isolation:
\[ W_3 = \frac{1}{4\pi\epsilon_0} \left( \frac{q_1 q_3}{|r_1 - r_3|} + \frac{q_2 q_3}{|r_2 - r_3|} \right). \tag{4.6} \]
Thus, the total work done in assembling the three charges is given by
\[ W = \frac{1}{4\pi\epsilon_0} \left( \frac{q_1 q_2}{|r_1 - r_2|} + \frac{q_1 q_3}{|r_1 - r_3|} + \frac{q_2 q_3}{|r_2 - r_3|} \right). \tag{4.7} \]
This result can easily be generalized to $N$ charges:
\[ W = \frac{1}{4\pi\epsilon_0} \sum_{i=1}^{N} \sum_{j>i}^{N} \frac{q_i q_j}{|r_i - r_j|}. \tag{4.8} \]
The restriction that j must be greater than i makes the above summation rather
messy. If we were to sum without restriction (other than $j \neq i$) then each pair
of charges would be counted twice. It is convenient to do just this and then to
divide the result by two. Thus,
\[ W = \frac{1}{2}\,\frac{1}{4\pi\epsilon_0} \sum_{i=1}^{N} \sum_{\substack{j=1 \\ j\neq i}}^{N} \frac{q_i q_j}{|r_i - r_j|}. \tag{4.9} \]
This is the potential energy (i.e., the difference between the total energy and the
kinetic energy) of a collection of charges. We can think of this as the work needed
to bring static charges from infinity and assemble them in the required formation.
Alternatively, this is the kinetic energy which would be released if the collection
were dissolved and the charges returned to infinity. But where is this potential
energy stored? Let us investigate further.
Equation (4.9) can be written
\[ W = \frac{1}{2} \sum_{i=1}^{N} q_i\, \phi_i, \tag{4.10} \]
where
\[ \phi_i = \frac{1}{4\pi\epsilon_0} \sum_{\substack{j=1 \\ j\neq i}}^{N} \frac{q_j}{|r_i - r_j|} \tag{4.11} \]
is the scalar potential experienced by the $i$-th charge due to the other charges in the distribution.
Let us now consider the potential energy of a continuous charge distribution. By analogy with Eqs. (4.10) and (4.11), it is tempting to write
\[ W = \frac{1}{2} \int \rho\, \phi\, d^3r. \tag{4.12} \]
Making use of Gauss' law,
\[ \rho = \epsilon_0\, \nabla\cdot E, \tag{4.13} \]
this expression becomes
\[ W = \frac{\epsilon_0}{2} \int (\nabla\cdot E)\, \phi\, d^3r. \tag{4.14} \]
Now, according to the vector identity
\[ \nabla\cdot(E\,\phi) = (\nabla\cdot E)\,\phi + E\cdot\nabla\phi, \tag{4.15} \]
we obtain
\[ W = \frac{\epsilon_0}{2} \left( \int \nabla\cdot(E\,\phi)\, d^3r - \int E\cdot\nabla\phi\, d^3r \right). \tag{4.16} \]
However, $\nabla\phi = -E$, so we obtain
\[ W = \frac{\epsilon_0}{2} \left( \int \nabla\cdot(E\,\phi)\, d^3r + \int E^2\, d^3r \right). \tag{4.17} \]
Application of Gauss' theorem gives
\[ W = \frac{\epsilon_0}{2} \left( \oint_S \phi\, E\cdot dS + \int_V E^2\, dV \right), \tag{4.18} \]
where V is some volume which encloses all of the charges and S is its bounding
surface. Let us assume that V is a sphere, centred on the origin, and let us
take the limit in which the radius $r$ of this sphere goes to infinity. We know that, in general, the electric field at large distances from a bounded charge distribution looks like the field of a point charge and, therefore, falls off like $1/r^2$. Likewise, the potential falls off like $1/r$. However, the surface area of the sphere increases like $r^2$. It is clear that in the limit as $r \rightarrow \infty$ the surface integral in Eq. (4.18) falls off like $1/r$, and is consequently zero. Thus, Eq. (4.18) reduces to
\[ W = \frac{\epsilon_0}{2} \int E^2\, d^3r, \tag{4.19} \]
where the integral is over all space. This is a very nice result! It tells us that
the potential energy of a continuous charge distribution is stored in the electric
field. Of course, we now have to assume that an electric field possesses an energy
density
\[ U = \frac{\epsilon_0}{2}\, E^2. \tag{4.20} \]
We can easily check that Eq. (4.19) is correct. Suppose that we have a charge
Q which is uniformly distributed within a sphere of radius a. Let us imagine
building up this charge distribution from a succession of thin spherical layers of
infinitesimal thickness. At each stage we gather a small amount of charge from
infinity and spread it over the surface of the sphere in a thin layer from r to r +dr.
We continue this process until the final radius of the sphere is a. If q(r) is the
charge in the sphere when it has attained radius r, the work done in bringing a
charge $dq$ to it is
\[ dW = \frac{1}{4\pi\epsilon_0}\, \frac{q(r)\, dq}{r}. \tag{4.21} \]
This follows from Eq. (4.5) since the electric field generated by a spherical charge
distribution (outside itself) is the same as that of a point charge q(r) located at
the origin ($r = 0$) (see later). If the constant charge density in the sphere is $\rho$ then
\[ q(r) = \frac{4}{3}\,\pi r^3 \rho, \tag{4.22} \]
and
\[ dq = 4\pi r^2 \rho\, dr. \tag{4.23} \]
Thus, Eq. (4.21) becomes
\[ dW = \frac{4\pi}{3\,\epsilon_0}\, \rho^2\, r^4\, dr. \tag{4.24} \]
The total work needed to build up the sphere from nothing to radius $a$ is plainly
\[ W = \frac{4\pi}{3\,\epsilon_0}\, \rho^2 \int_0^a r^4\, dr = \frac{4\pi}{15\,\epsilon_0}\, \rho^2\, a^5. \tag{4.25} \]
This can also be written in terms of the total charge $Q = (4/3)\,\pi a^3 \rho$ as
\[ W = \frac{3}{5}\, \frac{Q^2}{4\pi\epsilon_0\, a}. \tag{4.26} \]
Now that we have evaluated the potential energy of a spherical charge distribution by the direct method, let us work it out using Eq. (4.19). We assume that the electric field is radial and spherically symmetric, so $E = E_r(r)\,\hat{r}$. Application of Gauss' law,
\[ \oint_S E\cdot dS = \frac{1}{\epsilon_0} \int_V \rho\, dV, \tag{4.27} \]
yields
\[ E_r(r) = \frac{Q\, r}{4\pi\epsilon_0\, a^3} \tag{4.28} \]
for $r \leq a$, and
\[ E_r(r) = \frac{Q}{4\pi\epsilon_0\, r^2} \tag{4.29} \]
for $r \geq a$. Note that the electric field generated outside the charge distribution is
the same as that of a point charge Q located at the origin, r = 0. Equations (4.19),
(4.28), and (4.29) yield
\[ W = \frac{Q^2}{8\pi\epsilon_0} \left( \frac{1}{a^6} \int_0^a r^4\, dr + \int_a^\infty \frac{dr}{r^2} \right), \tag{4.30} \]
which reduces to
\[ W = \frac{Q^2}{8\pi\epsilon_0\, a} \left( \frac{1}{5} + 1 \right) = \frac{3}{5}\, \frac{Q^2}{4\pi\epsilon_0\, a}. \tag{4.31} \]
The total field energy can be decomposed into the interaction energy of the charges plus the self-energies $W_i$ of the individual charges (i.e., the energy required to assemble each charge $q_i$ as a sphere of small radius $a$):
\[ \frac{\epsilon_0}{2} \int E^2\, d^3r = \frac{1}{2} \sum_{i=1}^{N} q_i\, \phi_i + \sum_{i=1}^{N} W_i, \tag{4.33} \]
which enables us to reconcile Eqs. (4.10) and (4.19). Unfortunately, if our point
charges really are point charges then a 0 and the self-energy of each charge
becomes infinite. Thus, the potential energies predicted by Eqs. (4.10) and (4.19)
differ by an infinite amount. What does this all mean? We have to conclude
that the idea of locating electrostatic potential energy in the electric field is
inconsistent with the assumption of the existence of point charges. One way out
of this difficulty would be to say that all elementary charges, such as electrons, are
not points but instead small distributions of charge. Alternatively, we could say
that our classical theory of electromagnetism breaks down on very small length-scales due to quantum effects. Unfortunately, the quantum mechanical version
of electromagnetism (quantum electrodynamics or QED, for short) suffers from
the same infinities in the self-energies of particles as the classical version. There
is a prescription, called renormalization, for steering round these infinities and
getting finite answers which agree with experiments to extraordinary accuracy.
However, nobody really understands why this prescription works. The problem
of the infinite self-energies of charged particles is still unresolved.
4.2
Ohm's law
Ohm's law relates the current $I$ flowing through a conductor to the voltage drop $V$ across it:
\[ V = I\, R, \tag{4.34} \]
where the constant of proportionality $R$ is called the resistance. For a uniform wire of length $l$ and cross-sectional area $A$, the resistance is
\[ R = \eta\, \frac{l}{A}, \tag{4.35} \]
where $\eta$ is the resistivity of the material. Consider such a wire aligned along the $z$-axis. The voltage drop and current can be written
\[ V = E_z\, l, \qquad I = j_z\, A, \tag{4.36} \]
where $E_z$ is the electric field and $j_z$ the current density inside the wire, so Ohm's law takes the local form
\[ E_z = \eta\, j_z. \tag{4.37} \]
There is nothing special about the $z$-axis (in an isotropic conducting medium), so the previous formula immediately generalizes to
\[ E = \eta\, j. \tag{4.38} \]
The power dissipated in the conductor is
\[ P = V\, I, \tag{4.39} \]
which can be rewritten in the local form
\[ P = j\cdot E = \eta\, j^2, \tag{4.40} \]
where $P$ is now the power dissipated per unit volume in a resistive medium.
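The global relations (4.34)-(4.35) and the local relation (4.40) are consistent: the power density times the wire's volume reproduces the total dissipated power. The Python sketch below uses the tabulated resistivity of copper (about $1.7\times 10^{-8}\ \Omega\,$m); the wire dimensions and current are invented:

```python
eta = 1.7e-8    # resistivity of copper (ohm metres), standard tabulated value
l = 2.0         # wire length (m), illustrative
A = 1.0e-6      # cross-sectional area (m^2), i.e. 1 mm^2
I = 5.0         # current (A), illustrative

R = eta * l / A          # Eq. (4.35): resistance of the wire
V = I * R                # Eq. (4.34): voltage drop
P = V * I                # total dissipated power (W)

# Local form, Eq. (4.40): power per unit volume is eta * j^2.
j = I / A                # current density (A/m^2)
p_vol = eta * j**2
assert abs(p_vol * (A * l) - P) < 1e-9   # power density * volume = total power
```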
4.3
Conductors
Most (but not all) electrical conductors obey Ohms law. Such conductors are
termed ohmic. Suppose that we apply an electric field to an ohmic conductor. What is going to happen? According to Eq. (4.38) the electric field drives
currents. These redistribute the charge inside the conductor until the original
electric field is canceled out. At this point, the currents stop flowing. It might
be objected that the currents could keep flowing in closed loops. According to Ohm's law, this would require a non-zero e.m.f., $\oint E\cdot dl$, acting around each loop (unless the conductor is a superconductor, with $\eta = 0$). However, we know that
in steady-state
\[ \oint_C E\cdot dl = 0 \tag{4.42} \]
around any closed loop $C$. This proves that a steady-state e.m.f. acting around a closed loop inside a conductor is impossible. The only other alternative is
\[ j = E = 0 \tag{4.43} \]
inside a conductor. It immediately follows from Gauss' law, $\nabla\cdot E = \rho/\epsilon_0$, that
\[ \rho = 0. \tag{4.44} \]
So, there are no electric charges in the interior of a conductor. But, how can
a conductor cancel out an applied electric field if it contains no charges? The
answer is that all of the charges reside on the surface of the conductor. In reality,
the charges lie within one or two atomic layers of the surface (see any textbook
on solid-state physics). The difference in scalar potential between two points P
and $Q$ is simply
\[ \phi(Q) - \phi(P) = \int_P^Q \nabla\phi\cdot dl = -\int_P^Q E\cdot dl. \tag{4.45} \]
However, if P and Q lie inside the same conductor then it is clear from Eq. (4.43)
that the potential difference between P and Q is zero. This is true no matter
where P and Q are situated inside the conductor, so we conclude that the scalar
potential must be uniform inside a conductor. A corollary of this is that the
surface of a conductor is an equipotential (i.e., = constant) surface.
Not only is the electric field inside a conductor zero; it is also possible to demonstrate that the field within an empty cavity lying inside a conductor is zero, provided that there are no charges within the cavity. Let us, first of all,
integrate Gauss' law over a surface S which surrounds the cavity but lies wholly in
[Figure: a cavity C inside a conductor; a Gaussian surface S surrounds the cavity, lying wholly within the conducting material, and hypothetical + and − charges sit on the cavity wall.]
the conducting material. Since the electric field is zero in a conductor, it follows
that zero net charge is enclosed by S. This does not preclude the possibility that
there are equal amounts of positive and negative charges distributed on the inner
surface of the conductor. However, we can easily rule out this possibility using
the steady-state relation
∮_C E · dl = 0,    (4.46)
for any closed loop C. If there are any electric field lines inside the cavity then
they must run from the positive to the negative surface charges. Consider a loop
C which straddles the cavity and the conductor, such as the one shown above. In
the presence of field lines it is clear that the line integral of E along that portion
of the loop which lies inside the cavity is non-zero. However, the line integral of
E along that portion of the loop which runs through the conducting material is
obviously zero (since E = 0 inside a conductor). Thus, the line integral of the
field around the closed loop C is non-zero. This clearly contradicts Eq. (4.46).
In fact, this equation implies that the line integral of the electric field along any
path which runs through the cavity, from one point on the interior surface of the
conductor to another, is zero. This can only be the case if the electric field itself
is zero everywhere inside the cavity. There is one proviso to this argument. The
electric field inside a cavity is only zero if the cavity contains no charges. If the
cavity contains charges then our argument fails because it is possible to envisage
that the line integral of the electric field along many different paths across the
cavity could be zero without the fields along these paths necessarily being zero
(this argument is somewhat inexact; we shall improve it later on).
We have shown that if a cavity is completely enclosed by a conductor then no
stationary distribution of charges outside can ever produce any fields inside. So,
we can shield a piece of electrical equipment from stray external electric fields by
placing it inside a metal can. Using similar arguments to those given above, we
can also show that no static distribution of charges inside a closed conductor can
ever produce a field outside it. Clearly, shielding works both ways!
[Figure: a Gaussian pill-box of cross-sectional area A straddling the surface of a conductor, with one end in the vacuum above and the other inside the conductor below.]
Let us consider some small region on the surface of a conductor. Suppose that
the local surface charge density is σ, and that the electric field just outside the
conductor is E. Note that this field must be directed normal to the surface of
the conductor. Any parallel component would be shorted out by surface currents.
Another way of saying this is that the surface of a conductor is an equipotential.
We know that ∇φ is always perpendicular to equipotential surfaces, so E = −∇φ
must be locally perpendicular to a conducting surface. Let us use Gauss' law,
∮_S E · dS = (1/ε₀) ∫_V ρ dV,    (4.47)
where V is a so-called Gaussian pill-box. This is a pill-box shaped volume
whose two ends are aligned normal to the surface of the conductor, with the
surface running between them, and whose sides are tangential to the surface
normal. It is clear that E is perpendicular to the sides of the box, so the sides
make no contribution to the surface integral. The end of the box which lies inside
the conductor also makes no contribution, since E = 0 inside a conductor. Thus,
the only non-zero contribution to the surface integral comes from the end lying
in free space. This contribution is simply E⊥ A, where E⊥ denotes the outward
pointing (from the conductor) normal electric field, and A is the cross-sectional
area of the box. The charge enclosed by the box is simply σ A, from the definition
of a surface charge density. Thus, Gauss' law yields

E⊥ = σ/ε₀    (4.48)

as the relationship between the normal electric field immediately outside a conductor and the surface charge density.

[Figure: a pill-box of cross-sectional area A straddling an isolated charge sheet, with equal and opposite normal fields E above and below the sheet.]
Let us look at the electric field generated by a sheet charge distribution a
little more carefully. Suppose that the charge per unit area is σ. By symmetry,
we expect the field generated below the sheet to be the mirror image of that
above the sheet (at least, locally). Thus, if we integrate Gauss' law over a
pill-box of cross-sectional area A, as shown above, then the two ends both
contribute E_sheet A to the surface integral, where E_sheet is the normal electric field generated
above and below the sheet. The charge enclosed by the pill-box is just σ A. Thus,

E_sheet = +σ/(2ε₀)   above,
E_sheet = −σ/(2ε₀)   below.    (4.49)
So, how do we get the asymmetric electric field of a conducting surface, which
is zero immediately below the surface (i.e., inside the conductor) and non-zero
immediately above it? Clearly, we have to add in an external field (i.e., a field
which is not generated locally by the sheet charge). The requisite field is
E_ext = σ/(2ε₀)    (4.50)
both above and below the charge sheet. The total field is the sum of the field
generated locally by the charge sheet and the external field. Thus, we obtain
E_total = σ/ε₀   above,
E_total = 0      below.    (4.51)
The external field exerts a force on the charge sheet. The field generated locally
by the sheet itself obviously cannot exert a force on the sheet. So, the outward
force per unit area acting on the surface of the conductor is

p = σ E_ext = σ²/(2ε₀),    (4.52)

which, using E = σ/ε₀, can be written p = (ε₀/2) E². This is equal to
the energy density of the electric field immediately outside the conductor. This is
not a coincidence. Suppose that the conductor expands by an average distance dx
due to the electrostatic pressure. The electric field is excluded from the region into
which the conductor expands. The volume of this region is dV = A dx, where A is
the surface area of the conductor. Thus, the energy of the electric field decreases
by an amount dE = U dV = (ε₀/2) E² dV, where U is the energy density of the
field. This decrease in energy can be ascribed to the work which the field does
on the conductor in order to make it expand. This work is dW = p A dx, where
p is the force per unit area the field exerts on the conductor. Thus, dE = dW,
from energy conservation, giving

p = (ε₀/2) E².    (4.54)
This technique for calculating a force given an expression for the energy of a
system as a function of some adjustable parameter is called the principle of virtual
work, and is very useful.
We have seen that an electric field is excluded from the inside of a conductor,
but not from the outside, giving rise to a net outward force. We can account for
this by saying that the field exerts a negative pressure p = −(ε₀/2) E² on the
conductor. We know that if we evacuate a metal can then the pressure difference
between the inside and the outside eventually causes it to implode. Likewise, if
we place the can in a strong electric field then the pressure difference between the
inside and the outside will eventually cause it to explode. How big a field do we
need before the electrostatic pressure difference is the same as that obtained by
evacuating the can? In other words, what field exerts a negative pressure of one
atmosphere (i.e., 10⁵ newtons per meter squared) on conductors? The answer is
a field of strength E ≈ 10⁸ volts per meter. Fortunately, this is a rather large
field, so there is no danger of your car exploding when you turn on the stereo!
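The quoted field strength is easy to reproduce. The sketch below inverts p = (ε₀/2) E² for a pressure of one atmosphere; the numerical values of ε₀ and atmospheric pressure are standard constants, not taken from the text above.

```python
# Electric field strength whose electrostatic pressure p = (eps0/2) E^2
# equals one atmosphere (a quick numerical check of the estimate above).
import math

eps0 = 8.854e-12   # permittivity of free space (F/m)
p_atm = 1.0e5      # one atmosphere, roughly (N/m^2)

# Invert p = (eps0/2) E^2 for E.
E = math.sqrt(2.0 * p_atm / eps0)

print(f"E = {E:.3e} V/m")   # of order 10^8 V/m, as stated
```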
4.4 Boundary conditions on the electric field
What are the most general boundary conditions satisfied by the electric field at
the interface between two mediums; e.g., the interface between a vacuum and a
conductor? Consider an interface P between two mediums A and B. Let us, first
[Figure: the interface P between two media A and B, showing the fields E⊥A, E∥A and E⊥B, E∥B on either side, together with a Gaussian pill-box and a rectangular loop straddling the interface.]

of all, integrate Gauss' law,

∮_S E · dS = (1/ε₀) ∫_V ρ dV,    (4.55)
over a Gaussian pill-box S of cross-sectional area A whose two ends are locally
parallel to the interface. The ends of the box can be made arbitrarily close
together. In this limit, the flux of the electric field out of the sides of the box is
obviously negligible. The only contribution to the flux comes from the two ends.
In fact,

∮_S E · dS = (E⊥ A − E⊥ B) A,    (4.56)

where E⊥ A is the perpendicular (to the interface) electric field in medium A at
the interface, etc. If a charge sheet of density σ lies on the interface then the
charge enclosed by the pill-box is σ A, and Gauss' law yields

E⊥ A − E⊥ B = σ/ε₀    (4.57)
at the interface; i.e., the presence of a charge sheet on an interface causes a
discontinuity in the perpendicular component of the electric field. What about
the parallel electric field? Let us apply Faraday's law,

∮_C E · dl = −(∂/∂t) ∫_S B · dS,    (4.58)
around a rectangular loop C whose long sides, length l, run parallel to the interface. The length of the short sides is assumed to be arbitrarily small. The
dominant contribution to the loop integral comes from the long sides:
∮_C E · dl = (E∥ A − E∥ B) l,    (4.59)
where E∥ A is the parallel (to the interface) electric field in medium A at the
interface, etc. The flux of the magnetic field through the loop is approximately
B⊥ A, where B⊥ is the component of the magnetic field which is normal to the
loop, and A is the area of the loop. But, A → 0 as the short sides of the loop are
shrunk to zero so, unless the magnetic field becomes infinite (we assume that it
does not), the flux also tends to zero. Thus,
E∥ A − E∥ B = 0;    (4.60)
i.e., there can be no discontinuity in the parallel component of the electric field
across an interface.
4.5
Capacitors
Consider two thin, parallel, conducting plates of cross-sectional area A,
separated by a small distance d, carrying uniform surface charge densities +σ
and −σ, respectively. According to Eq. (4.49), the field generated by the
positively charged (upper) plate is

E = +σ/(2ε₀)   above,
E = −σ/(2ε₀)   below,    (4.61)

whereas the field generated by the negatively charged (lower) plate is

E = −σ/(2ε₀)   above,
E = +σ/(2ε₀)   below.    (4.62)
Note that we are neglecting any leakage of the field at the edges of the plates.
This is reasonable if the plates are closely spaced. The total field is the sum of
the two fields generated by the upper and lower plates. Thus, the net field is
normal to the plates and of magnitude
E = σ/ε₀   between,
E = 0      otherwise.    (4.63)
Since the electric field is uniform, the potential difference between the plates is
simply
V = E d = σ d/ε₀.    (4.64)
It is conventional to measure the capacity of a conductor, or set of conductors, to
store charge but generate small electric fields in terms of a parameter called the
capacitance. This is usually denoted C. The capacitance of a charge storing
device is simply the ratio of the charge stored to the potential difference generated
by the charge. Thus,
C = Q/V.    (4.65)
Clearly, a good charge storing device has a high capacitance. Incidentally, capacitance is measured in coulombs per volt, or farads. This is a rather unwieldy
unit since good capacitors typically have capacitances which are only about one
millionth of a farad. For a parallel plate capacitor it is clear that
C = σ A/V = ε₀ A/d.    (4.66)
Note that the capacitance only depends on geometric quantities such as the area
and spacing of the plates. This is a consequence of the superposability of electric fields. If we double the charge on conductors then we double the electric
fields generated around them and we, therefore, double the potential difference
between the conductors. Thus, the potential difference between conductors is always directly proportional to the charge carried; the constant of proportionality
(the inverse of the capacitance) can only depend on geometry.
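To put numbers to the geometric formula (4.66), here is a minimal sketch; the plate area, spacing, and voltage are arbitrary illustrative values, not taken from the text. It also confirms the remark above that practical capacitances are tiny fractions of a farad.

```python
# Evaluate the parallel plate capacitance C = eps0 A / d (Eq. 4.66)
# and the stored charge Q = C V (Eq. 4.65) for illustrative values.
eps0 = 8.854e-12           # permittivity of free space (F/m)

A = 0.01                   # plate area: 10 cm x 10 cm (m^2)
d = 1.0e-4                 # plate spacing: 0.1 mm (m)
V = 100.0                  # potential difference (V)

C = eps0 * A / d           # capacitance (F)
Q = C * V                  # stored charge (C)

print(f"C = {C:.3e} F")    # roughly a nanofarad -- far below one farad
print(f"Q = {Q:.3e} C")
```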
Suppose that the charge Q on each plate is built up gradually by transferring
small amounts of charge from one plate to another. If the instantaneous charge on
the plates is q and an infinitesimal amount of positive charge dq is transferred
from the negatively charged plate to the positively charged plate then the work
done is dW = V dq = q dq/C, where V is the instantaneous voltage difference
between the plates. Note that the voltage difference is such that it opposes any
increase in the charge on either plate. The total work done in charging the
capacitor is
W = (1/C) ∫₀^Q q dq = Q²/(2C) = (1/2) C V²,    (4.67)
where use has been made of Eq. (4.65). The energy stored in the capacitor is the
same as the work required to charge up the capacitor. Thus,
W = (1/2) C V².    (4.68)
The energy of a charged parallel plate capacitor is actually stored in the electric field between the plates. This field is of approximately constant magnitude
E = V /d and occupies a region of volume A d. Thus, given the energy density
of an electric field U = (ε₀/2) E², the energy stored in the electric field is

W = (ε₀/2) (V²/d²) A d = (1/2) C V²,    (4.69)
where use has been made of Eq. (4.66). Note that Eqs. (4.67) and (4.69) agree.
We all know that if we connect a capacitor across the terminals of a battery then
a transient current flows as the capacitor charges up. The capacitor can then
be placed to one side and, some time later, the stored charge can be used, for
instance, to transiently light a bulb in an electrical circuit. What is interesting
here is that the energy stored in the capacitor is stored as an electric field, so
we can think of a capacitor as a device which either stores energy in, or extracts
energy from, an electric field.
The idea, which we discussed earlier, that an electric field exerts a negative
pressure p = −(ε₀/2) E² on conductors immediately suggests that the two plates
in a parallel plate capacitor attract one another with a mutual force

F = (ε₀/2) E² A = (1/2) C V²/d.    (4.70)
It is not necessary to have two oppositely charged conductors in order to make
a capacitor. Consider an isolated conducting sphere of radius a which carries a
charge Q. The radial electric field generated outside the sphere is

E_r = Q/(4πε₀ r²).    (4.71)

The potential difference between the sphere and infinity, or, more realistically,
some large, relatively distant reservoir of charge such as the Earth, is

V = Q/(4πε₀ a).    (4.72)

Thus, the capacitance of the sphere is

C = Q/V = 4πε₀ a.    (4.73)
Suppose that two isolated spheres of radii a and b, respectively, are connected
by a thin conducting wire, so that charge flows between them until they are at
the same potential. If the (conserved) total charge is Q₀ = Q + Q′, where Q
and Q′ are the charges on the two spheres, then it follows from Eq. (4.73) that

Q/Q₀ = a/(a + b),
Q′/Q₀ = b/(a + b).    (4.74)
Note that if one sphere is much smaller than the other one, e.g., b ≪ a, then
the large sphere grabs most of the charge:

Q/Q′ = a/b ≫ 1.    (4.75)
The ratio of the electric fields generated just above the surfaces of the two spheres
follows from Eqs. (4.71) and (4.75):
E_b/E_a = a/b.    (4.76)
If b ≪ a then the field just above the smaller sphere is far bigger than that above
the larger sphere. Equation (4.76) is a simple example of a far more general rule.
The electric field above some point on the surface of a conductor is inversely
proportional to the local radius of curvature of the surface.
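The charge sharing between two connected spheres, Eqs. (4.74)-(4.76), can be illustrated numerically; the radii and total charge below are arbitrary illustrative values.

```python
# Charge sharing between two connected spheres (Eqs. 4.74-4.76).
# Radii and total charge are arbitrary illustrative values.
from math import pi

a, b = 1.0, 0.01        # sphere radii (m), with b << a
Q0 = 1.0e-9             # total charge (C)

Qa = Q0 * a / (a + b)   # charge on the large sphere
Qb = Q0 * b / (a + b)   # charge on the small sphere

eps0 = 8.854e-12
Ea = Qa / (4 * pi * eps0 * a**2)   # field just above the large sphere
Eb = Qb / (4 * pi * eps0 * b**2)   # field just above the small sphere

print(Qa / Qb)   # = a/b = 100: the large sphere grabs the charge ...
print(Eb / Ea)   # = a/b = 100: ... but the field is biggest at the small one
```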
It is clear that if we wish to store significant amounts of charge on a conductor
then the surface of the conductor must be made as smooth as possible. Any sharp
spikes on the surface will inevitably have comparatively small radii of curvature.
Intense local electric fields are generated in these regions. These can easily exceed
the critical field for the breakdown of air, leading to sparking and the eventual
loss of the charge on the conductor. Sparking can also be very destructive because
the associated electric currents flow through very localized regions giving rise to
intense heating.
As a final example, consider two co-axial conducting cylinders of radii a and
b, where a < b. Suppose that the charge per unit length carried by the inner
and outer cylinders is +q and −q, respectively. We can safely assume that
E = E_r(r) r̂, by symmetry (adopting standard cylindrical polar coordinates). Let us
integrate Gauss' law over a cylinder of radius r, co-axial with the conductors, and
of length l. For a < r < b we find that

2π r l E_r(r) = q l/ε₀,    (4.77)

so

E_r = q/(2πε₀ r)    (4.78)
for a < r < b. It is fairly obvious that Er = 0 if r is not in the range a to b. The
potential difference between the inner and outer cylinders is
V = −∫_outer^inner E · dl = ∫_a^b E_r dr = (q/2πε₀) ∫_a^b dr/r,    (4.79)

so

V = (q/2πε₀) ln(b/a).    (4.80)

Thus, the capacitance per unit length of the two cylinders is

C = q/V = 2πε₀/ln(b/a).    (4.81)
This is a particularly useful result which we shall need later on in this course.
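As a quick numerical sketch of Eq. (4.81), with arbitrary illustrative radii:

```python
# Capacitance per unit length of a coaxial cable, C = 2 pi eps0 / ln(b/a)
# (Eq. 4.81). The radii are arbitrary illustrative values.
from math import pi, log

eps0 = 8.854e-12
a = 0.5e-3     # inner conductor radius (m)
b = 2.5e-3     # outer conductor radius (m)

C_per_len = 2 * pi * eps0 / log(b / a)
print(f"C = {C_per_len:.3e} F/m")   # a few tens of picofarads per metre
```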
4.6
Poisson's equation
We know that, in steady-state, the electric field can be written as the gradient
of a scalar potential,

E = −∇φ,    (4.82)

so Gauss' law, ∇ · E = ρ/ε₀, yields Poisson's equation:

∇²φ = −ρ/ε₀.    (4.83)

We even know the general solution to this equation:

φ(r) = (1/4πε₀) ∫ ρ(r′)/|r − r′| d³r′.    (4.84)
So, what else is there to say about Poisson's equation? Well, consider a positive
(say) point charge in the vicinity of an uncharged, insulated, conducting sphere.
The charge attracts negative charges to the near side of the sphere and repels
positive charges to the far side. The surface charge distribution induced on the
sphere is such that it is maintained at a constant electrical potential. We now have
a problem. We cannot use formula (4.84) to work out the potential (r) around
the sphere, since we do not know how the charges induced on the conducting
surface are distributed. The only things which we know about the surface of
the sphere are that it is an equipotential and carries zero net charge. Clearly,
in the presence of conducting surfaces the solution (4.84) to Poisson's equation
is completely useless. Let us now try to develop some techniques for solving
Poisson's equation which allow us to solve real problems (which invariably involve
conductors).
4.7 The uniqueness theorem
We have already seen the great value of the uniqueness theorem for Poisson's
equation (or Laplace's equation) in our discussion of Helmholtz's theorem (see
Section 3.10). Let us now examine this theorem in detail.
Consider a volume V bounded by some surface S. Suppose that we are given
the charge density ρ throughout V and the value of the scalar potential φ_S on S.
Do we have sufficient information to uniquely specify the scalar potential
throughout V? Suppose, for the sake of argument, that the solution is not
unique. Let there be two potentials φ₁ and φ₂ which satisfy

∇²φ₁ = −ρ/ε₀,
∇²φ₂ = −ρ/ε₀    (4.85)

throughout V, and

φ₁ = φ_S,    (4.86)
φ₂ = φ_S    (4.87)

on S. We can form the difference between these two potentials, φ₃ = φ₁ − φ₂.
The potential φ₃ clearly satisfies

∇²φ₃ = 0    (4.88)

throughout V, and

φ₃ = 0    (4.89)

on S. According to vector field theory,

∇ · (φ₃ ∇φ₃) = (∇φ₃)² + φ₃ ∇²φ₃.    (4.90)

Thus, using Gauss' theorem,

∫_V [ (∇φ₃)² + φ₃ ∇²φ₃ ] dV = ∮_S φ₃ ∇φ₃ · dS.    (4.91)

But, ∇²φ₃ = 0 throughout V, and φ₃ = 0 on S, so the above equation reduces to

∫_V (∇φ₃)² dV = 0.    (4.92)
Note that (∇φ₃)² is a positive definite quantity. The only way in which the
volume integral of a positive definite quantity can be zero is if that quantity itself
is zero throughout the volume. This is not necessarily the case for a non-positive
definite quantity; we could have positive and negative contributions from various
regions inside the volume which cancel one another out. Thus, since (∇φ₃)² is
positive definite it follows that
φ₃ = constant    (4.93)

throughout V. However, we know that φ₃ = 0 on S, so the constant must be zero,
and

φ₃ = 0    (4.94)

throughout V and on S. In other words,

φ₁ = φ₂    (4.95)

throughout V and on S. Our initial assumption that the solution is not unique
was, therefore, incorrect: the scalar potential is uniquely determined by the
charge density in V and its value on the bounding surface S.
Suppose, now, that the volume V contains N conductors, and that we are given
the charge Q_i carried by each conductor (whose surface is S_i), together with
the charge density ρ throughout the remainder of V. Is the electric field in V
still uniquely determined? Suppose, for the sake of argument, that there are
two fields E₁ and E₂ which satisfy

∇ · E₁ = ρ/ε₀,    (4.96)
∇ · E₂ = ρ/ε₀    (4.97)

throughout V, with

∮_{S_i} E₁ · dS_i = Q_i/ε₀,    (4.98)
∮_{S_i} E₂ · dS_i = Q_i/ε₀    (4.99)

on the surface of each conductor, and

∮_S E₁ · dS = ∮_S E₂ · dS = (1/ε₀) ( Σ_{i=1}^{N} Q_i + ∫_V ρ dV )    (4.100)

over the bounding surface S. Let us form the difference field

E₃ = E₁ − E₂.    (4.101)

It is clear that

∇ · E₃ = 0    (4.102)

throughout V, and

∮_{S_i} E₃ · dS_i = 0    (4.103)

for all i, with

∮_S E₃ · dS = 0.    (4.104)

Each conducting surface is an equipotential, so if we write

E₃ = −∇φ₃,    (4.105)

then φ₃ is a constant on the surface of each conductor. Vector field theory tells
us that

∇ · (φ₃ E₃) = φ₃ ∇ · E₃ + ∇φ₃ · E₃ = −E₃²,    (4.106)

since ∇ · E₃ = 0 in V. Thus, integrating over V and making use of Gauss'
theorem,

∫_V ∇ · (φ₃ E₃) dV = Σ_{i=1}^{N} ∮_{S_i} φ₃ E₃ · dS_i + ∮_S φ₃ E₃ · dS,    (4.107)

we obtain

∫_V E₃² dV = −Σ_{i=1}^{N} ∮_{S_i} φ₃ E₃ · dS_i − ∮_S φ₃ E₃ · dS.    (4.108)
However, φ₃ is a constant on the surfaces S_i and S. So, making use of Eqs. (4.103)
and (4.104), we obtain
∫_V E₃² dV = 0.    (4.109)
Of course, E₃² is a positive definite quantity, so the above relation implies that

E₃ = 0    (4.110)

throughout V. In other words, E₁ = E₂ throughout V, so the electric field is
indeed uniquely determined by the conductor charges Q_i and the charge density ρ.
4.8 The method of images
Suppose that we have a point charge q held a distance d above an infinite,
grounded, conducting plate which occupies the plane z = 0. What is the scalar
potential above the plate? In the region z > 0 the potential satisfies Poisson's
equation,

∂²φ/∂x² + ∂²φ/∂y² + ∂²φ/∂z² = −ρ/ε₀,    (4.111)

with a charge density corresponding to the single point charge q at (0, 0, d),
subject to the boundary conditions

φ(z = 0) = 0,    (4.112)

imposed by the grounded plate, and

φ → 0    (4.113)

as x² + y² + z² → ∞. Let us forget about the real problem, for a minute, and
concentrate on a slightly different one. We refer to this as the analogue problem.
In the analogue problem we have a charge q located at (0, 0, d) and a charge
−q located at (0, 0, −d), with no conductors present. We can easily find the
scalar potential for this problem, since we know where all the charges are
located. We get

φ_analogue(x, y, z) = (q/4πε₀) [ 1/√(x² + y² + (z − d)²)
                               − 1/√(x² + y² + (z + d)²) ].    (4.114)
Note, however, that

φ_analogue(z = 0) = 0,    (4.115)

and

φ_analogue → 0    (4.116)

as x² + y² + z² → ∞. In addition, φ_analogue satisfies Poisson's equation for a
charge at (0, 0, d), in the region z > 0. Thus, φ_analogue is a solution to the
problem posed earlier, in the region z > 0. Now, the uniqueness theorem tells
us that there is only one solution to Poisson's equation which satisfies a given,
well-posed set of boundary conditions. So, φ_analogue must be the correct potential
in the region z > 0. Of course, φ_analogue is completely wrong in the region z < 0.
We know this because the grounded plate shields the region z < 0 from the point
charge, so that φ = 0 in this region. Note that we are leaning pretty heavily on
the uniqueness theorem here! Without this theorem, it would be hard to convince
a skeptical person that φ = φ_analogue is the correct solution in the region z > 0.
Now that we know the potential in the region z > 0, we can easily work out the
distribution of charges induced on the conducting plate. We already know that
the relation between the electric field immediately above a conducting surface
and the density of charge on the surface is
E⊥ = σ/ε₀.    (4.117)
In this case,
E⊥ = E_z(z = 0⁺) = −∂φ(z = 0⁺)/∂z = −∂φ_analogue(z = 0⁺)/∂z,    (4.118)

so

σ = −ε₀ ∂φ_analogue(z = 0⁺)/∂z.    (4.119)

It follows from Eq. (4.114) that

∂φ_analogue/∂z = (q/4πε₀) { −(z − d)/[x² + y² + (z − d)²]^{3/2}
                          + (z + d)/[x² + y² + (z + d)²]^{3/2} },    (4.120)

so

σ(x, y) = −q d/[2π (x² + y² + d²)^{3/2}].    (4.121)
Clearly, the charge induced on the plate has the opposite sign to the point charge.
The charge density on the plate is also symmetric about the z-axis, and is largest
where the plate is closest to the point charge. The total charge induced on the
plate is
Q = ∫_{x−y plane} σ dS,    (4.122)

which yields

Q = −(q d/2π) ∫₀^∞ 2π r dr/(r² + d²)^{3/2},    (4.123)

where r² = x² + y². Thus,

Q = −(q d/2) ∫₀^∞ dk/(k + d²)^{3/2} = q d [ 1/(k + d²)^{1/2} ]₀^∞ = −q.    (4.124)
So, the total charge induced on the plate is equal and opposite to the point charge
which induces it.
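We can verify the result (4.124) numerically by integrating the induced charge density (4.121) over the plate with a simple trapezoidal rule; the cutoff radius and step count below are arbitrary choices, and units with q = d = 1 are used.

```python
# Numerical check that the surface charge density (4.121) integrates to -q
# (Eq. 4.124). We integrate sigma(r) 2 pi r dr with the trapezoidal rule,
# using q = 1 and d = 1 in arbitrary units.
from math import pi

q, d = 1.0, 1.0

def sigma(r):
    # induced surface charge density, Eq. (4.121), with r^2 = x^2 + y^2
    return -q * d / (2 * pi * (r**2 + d**2) ** 1.5)

# trapezoidal rule on [0, R], with a large (but finite) cutoff R
R, N = 500.0, 500_000
h = R / N
total = 0.0
for i in range(N + 1):
    r = i * h
    w = 0.5 if i in (0, N) else 1.0
    total += w * sigma(r) * 2 * pi * r * h

print(total)   # close to -q = -1 (the small deficit comes from the cutoff R)
```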
Our point charge induces charges of the opposite sign on the conducting plate.
This, presumably, gives rise to a force of attraction between the charge and the
plate. What is this force? Well, since the potential, and, hence, the electric field,
in the vicinity of the point charge is the same as in the analogue problem then
the force on the charge must be the same as well. In the analogue problem there
are two charges ±q a net distance 2d apart. The force on the charge at position
(0, 0, d) (i.e., the real charge) is

F = −(1/4πε₀) q²/(2d)² ẑ.    (4.125)
What, finally, is the potential energy of the system? For the analogue problem
this is just

W_analogue = −(1/4πε₀) q²/(2d).    (4.126)
Note that the fields on opposite sides of the conducting plate are mirror images of
one another in the analogue problem. So are the charges (apart from the change
in sign). This is why the technique of replacing conducting surfaces by imaginary
charges is called the method of images. We know that the potential energy of
a set of charges is equivalent to the energy stored in the electric field. Thus,

W = (ε₀/2) ∫_{all space} E² dV.    (4.127)
In the analogue problem the fields on either side of the x−y plane are mirror
images of one another, so E²(x, y, −z) = E²(x, y, z). It follows that

W_analogue = 2 (ε₀/2) ∫_{z>0} E²_analogue dV.    (4.128)
In the real problem,

E(z > 0) = E_analogue(z > 0),
E(z < 0) = 0.    (4.129)

So,

W = (ε₀/2) ∫_{z>0} E² dV = (ε₀/2) ∫_{z>0} E²_analogue dV = (1/2) W_analogue,    (4.130)

giving

W = −(1/4πε₀) q²/(4d).    (4.131)
There is another method by which we can obtain the above result. Suppose
that the charge is gradually moved towards the plate along the z-axis from infinity
until it reaches position (0, 0, d). How much work is required to achieve this?
We know that the force of attraction acting on the charge is

F_z = −(1/4πε₀) q²/(4z²).    (4.132)

Thus, the work done against this force in moving the charge from z = ∞ to
z = d is

W = −∫_∞^d F_z dz = ∫_∞^d (1/4πε₀) q²/(4z²) dz    (4.133)
  = (1/4πε₀) [ −q²/(4z) ]_∞^d = −(1/4πε₀) q²/(4d),    (4.134)

in agreement with the result (4.131).
4.9
Complex analysis
Let us now investigate another trick for solving Poisson's equation (actually it
only solves Laplace's equation). Unfortunately, this method can only be applied
in two dimensions.
The complex variable is conventionally written

z = x + i y,    (4.135)

where x and y are both real, and are identified with the corresponding Cartesian
coordinates. We can write functions F(z) of the complex variable just like we
would write functions of a real variable; for instance, F(z) = z², or

F(z) = 1/z.    (4.136)

For a given function F(z) we can substitute z = x + i y and write

F(z) = U(x, y) + i V(x, y),    (4.137)

where U and V are two real two-dimensional functions. Thus, if

F(z) = z²,    (4.138)

then

F(x + i y) = (x + i y)² = (x² − y²) + 2 i x y,    (4.139)

giving

U(x, y) = x² − y²,     V(x, y) = 2 x y.    (4.140)
We can define the derivative of a complex function in just the same manner
as we would define the derivative of a real function. Thus,
dF/dz = lim_{|δz|→0} [ F(z + δz) − F(z) ]/δz.    (4.141)
However, we now have a slight problem. If F (z) is a well defined function (we
shall leave it to the mathematicians to specify exactly what being well defined
entails: suffice to say that most functions we can think of are well defined) then it
should not matter from which direction in the complex plane we approach z when
taking the limit in Eq. (4.141). There are, of course, many different directions we
could approach z from, but if we look at a regular complex function, F (z) = z 2 ,
say, then
dF/dz = 2 z    (4.142)
is perfectly well defined and is, therefore, completely independent of the details
of how the limit is taken in Eq. (4.141).
The fact that Eq. (4.141) has to give the same result, no matter which path
we approach z from, means that there are some restrictions on the functions U
and V in Eq. (4.137). Suppose that we approach z along the real axis, so that
δz = δx. Then,

dF/dz = lim_{|δx|→0} [ U(x + δx, y) + i V(x + δx, y) − U(x, y) − i V(x, y) ]/δx
      = ∂U/∂x + i ∂V/∂x.    (4.143)

Suppose that we now approach z along the imaginary axis, so that δz = i δy.
Then,

dF/dz = lim_{|δy|→0} [ U(x, y + δy) + i V(x, y + δy) − U(x, y) − i V(x, y) ]/(i δy)
      = −i ∂U/∂y + ∂V/∂y.    (4.144)
If F(z) is a well defined function then its derivative must also be well defined,
which implies that the above two expressions are equivalent. This requires that

∂U/∂x = ∂V/∂y,     ∂V/∂x = −∂U/∂y.    (4.145)
These are called the Cauchy-Riemann equations and are, in fact, sufficient to
ensure that all possible ways of taking the limit (4.141) give the same answer.
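The Cauchy-Riemann equations are easy to verify numerically for our example F(z) = z²; the sketch below compares central finite-difference derivatives of U and V at an arbitrarily chosen test point.

```python
# Numerical check of the Cauchy-Riemann equations (4.145) for F(z) = z^2,
# i.e. U = x^2 - y^2 and V = 2xy, using central finite differences.
def U(x, y): return x * x - y * y
def V(x, y): return 2 * x * y

def ddx(f, x, y, h=1e-6): return (f(x + h, y) - f(x - h, y)) / (2 * h)
def ddy(f, x, y, h=1e-6): return (f(x, y + h) - f(x, y - h)) / (2 * h)

x0, y0 = 0.7, -1.3   # arbitrary test point

# dU/dx = dV/dy, and dV/dx = -dU/dy
print(ddx(U, x0, y0), ddy(V, x0, y0))    # both equal 2*x0
print(ddx(V, x0, y0), -ddy(U, x0, y0))   # both equal 2*y0
```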
So far, we have found that a general complex function F(z) can be written

F(z) = U(x, y) + i V(x, y),    (4.146)

where z = x + i y, and that the functions U and V automatically satisfy the
Cauchy-Riemann equations

∂U/∂x = ∂V/∂y,     ∂V/∂x = −∂U/∂y.    (4.147)
But, what has all of this got to do with electrostatics? Well, we can combine the
two Cauchy-Riemann relations. We get
∂²U/∂x² = ∂/∂x (∂V/∂y) = ∂/∂y (∂V/∂x) = −∂²U/∂y²,    (4.148)

and

∂²V/∂x² = −∂/∂x (∂U/∂y) = −∂/∂y (∂U/∂x) = −∂²V/∂y²,    (4.149)
which reduce to

∂²U/∂x² + ∂²U/∂y² = 0,
∂²V/∂x² + ∂²V/∂y² = 0.    (4.150)
Thus, both U and V automatically satisfy Laplace's equation in two dimensions.
Consider the two-dimensional gradients of U and V:

∇U = ( ∂U/∂x, ∂U/∂y ),    (4.151)
∇V = ( ∂V/∂x, ∂V/∂y ).    (4.152)

Now,

∇U · ∇V = (∂U/∂x)(∂V/∂x) + (∂U/∂y)(∂V/∂y).

It follows from the Cauchy-Riemann equations (4.147) that

∇U · ∇V = (∂V/∂y)(∂V/∂x) − (∂V/∂x)(∂V/∂y) = 0.    (4.153)

Thus, the contours of U are everywhere perpendicular to the contours of V.
Recall that, for our example function F(z) = z²,

U = x² − y²,
V = 2 x y.    (4.154)
These are, in fact, the equations of two sets of orthogonal hyperboloids. So,
[Figure: the contours of U(x, y) = x² − y² (solid lines, at levels −1, 0, and 1) and of V(x, y) = 2xy (dashed lines) in the x−y plane; the two families of hyperbolae intersect at right angles.]
U(x, y) (the solid lines in the figure) might represent the contours of some scalar
potential and V(x, y) (the dashed lines in the figure) the associated electric field
lines, or vice versa. But, how could we actually generate a hyperboloidal potential? This is easy. Consider the contours of U at level ±1. These could represent
the surfaces of four hyperboloid conductors maintained at potentials ±V. The
scalar potential in the region between these conductors is given by φ(x, y) = V U(x, y),
and the associated electric field lines follow the contours of V(x, y). Note that

E_x = −∂φ/∂x = −V ∂U/∂x = −2 V x.    (4.155)
Thus, the x-component of the electric field is directly proportional to the distance
from the y-axis. Likewise, the y-component of the field is directly proportional
to the distance from the x-axis. This property can be exploited to make devices
(called quadrupole electrostatic lenses) which are useful for focusing particle
beams.
We can think of the set of all possible well defined complex functions as a
reference library of solutions to Laplaces equation in two dimensions. We have
only considered a single example but there are, of course, very many complex
functions which generate interesting potentials. For instance, F(z) = z^{1/2}
generates the potential around a semi-infinite, thin, grounded, conducting plate placed
in an external field, whereas F(z) = z^{3/2} yields the potential outside a grounded
rectangular conducting corner under similar circumstances.
4.10
Separation of variables
The method of images and complex analysis are two rather elegant techniques for
solving Poisson's equation. Unfortunately, they both have an extremely limited
range of application. The final technique we shall discuss in this course, namely,
the separation of variables, is somewhat messy but possesses a far wider range of
application. Let us examine a specific example.
Consider two semi-infinite, grounded, conducting plates lying parallel to the
x-z plane, one at y = 0, and the other at y = π. The left end, at x = 0, is closed
off by an infinite strip insulated from the two plates and maintained at a specified
potential φ₀(y). What is the potential in the region between the plates?
[Figure: two semi-infinite grounded plates at y = 0 and y = π, extending to x → ∞, with the strip at x = 0 held at potential φ₀(y).]

In the region between the plates the charge density is zero, so the potential
satisfies Laplace's equation,

∂²φ/∂x² + ∂²φ/∂y² = 0.    (4.156)

The boundary conditions are

φ(x, 0) = 0,    (4.157a)
φ(x, π) = 0    (4.157b)

for x > 0, and

φ(0, y) = φ₀(y)    (4.158)

for 0 ≤ y ≤ π, and

φ(x, y) → 0    (4.159)
as x → ∞. The latter boundary condition is our usual one for the scalar potential
at infinity.
The central assumption in the method of separation of variables is that a
multi-dimensional potential can be written as the product of one-dimensional
potentials, so that
φ(x, y) = X(x) Y(y).    (4.160)
The above solution is obviously a very special one, and is, therefore, only likely
to satisfy a very small subset of possible boundary conditions. However, it turns
out that by adding together lots of different solutions of this form we can match
to general boundary conditions.
Substituting (4.160) into (4.156), we obtain
Y d²X/dx² + X d²Y/dy² = 0.    (4.161)
Let us now separate the variables; i.e., let us collect all of the x-dependent terms
on one side of the equation, and all of the y-dependent terms on the other side.
Thus,
(1/X) d²X/dx² = −(1/Y) d²Y/dy².    (4.162)
This equation has the form

f(x) = g(y),    (4.163)

where f and g are general functions. The only way in which the above equation
can be satisfied, for general x and y, is if both sides are equal to the same constant.
Thus,

(1/X) d²X/dx² = k² = −(1/Y) d²Y/dy².    (4.164)

The reason why we write k², rather than −k², will become apparent later on.
Equation (4.164) separates into two ordinary differential equations:

d²X/dx² = k² X,
d²Y/dy² = −k² Y.    (4.165)
The general solutions of these equations are

X(x) = A exp(k x) + B exp(−k x),
Y(y) = C sin(k y) + D cos(k y),    (4.166)

giving

φ = [ A exp(k x) + B exp(−k x) ] [ C sin(k y) + D cos(k y) ].    (4.167)

The boundary condition (4.159) requires A = 0, since exp(k x) blows up as
x → ∞ (taking k > 0). The boundary condition (4.157a) yields D = 0, and the
boundary condition (4.157b) then requires sin(k π) = 0, so that k is a positive
integer, n say. Our solution therefore reduces to

φ(x, y) = C exp(−n x) sin(n y),    (4.169)

where B has been absorbed into C. Note that this solution is only able to satisfy
the final boundary condition (4.158) provided φ₀(y) is proportional to sin(n y).
Thus, at first sight, it would appear that the method of separation of variables
only works for a very special subset of boundary conditions. However, this is not
the case.
Now comes the clever bit! Since Poisson's equation is linear, any linear combination of solutions is also a solution. We can therefore form a more general
solution than (4.169) by adding together lots of solutions involving different values of n. Thus,
φ(x, y) = Σ_{n=1}^{∞} Cₙ exp(−n x) sin(n y),    (4.170)
where the Cn are constants. This solution automatically satisfies the boundary
conditions (4.157) and (4.159). The final boundary condition (4.158) reduces to
φ(0, y) = Σ_{n=1}^{∞} Cₙ sin(n y) = φ₀(y).    (4.171)
The question now is what choice of the Cₙ fits an arbitrary function φ₀(y)?
To answer this question we can make use of two very useful properties of the
functions sin ny. Namely, that they are mutually orthogonal and form a complete
set. The orthogonality property of these functions manifests itself through the
relation
∫₀^π sin(n y) sin(n′ y) dy = (π/2) δ_{nn′},    (4.172)

where δ_{nn′} is a Kronecker delta symbol. Thus, multiplying Eq. (4.171) by
sin(n′ y) and integrating over y, we obtain
Σ_{n=1}^{∞} Cₙ (π/2) δ_{nn′} = (π/2) C_{n′} = ∫₀^π φ₀(y) sin(n′ y) dy,    (4.173)
so
Cₙ = (2/π) ∫₀^π φ₀(y) sin(n y) dy.    (4.174)
Thus, we now have a general solution to the problem for any driving potential
φ₀(y).
If the potential φ₀(y) is constant then

Cₙ = (2φ₀/π) ∫₀^π sin(n y) dy = (2φ₀/nπ) (1 − cos nπ),    (4.176)
giving
Cₙ = 0    (4.177)

for even n, and

Cₙ = 4φ₀/(n π)    (4.178)

for odd n.
Thus, the potential between the plates is

φ(x, y) = (4φ₀/π) Σ_{n=1,3,5,...} exp(−n x) sin(n y)/n.    (4.179)

This series can be summed explicitly to give

φ(x, y) = (2φ₀/π) tan⁻¹( sin y / sinh x ).    (4.180)
In this form, it is easy to check that Laplace's equation is obeyed and that all
the boundary conditions are satisfied.
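One such check can be done numerically: the sketch below compares the truncated series (4.179) with the closed form (4.180) at an arbitrarily chosen interior point, taking φ₀ = 1.

```python
# Check that the truncated series (4.179) agrees with the closed form
# (4.180) at a sample interior point, taking phi0 = 1.
from math import pi, sin, sinh, exp, atan

phi0 = 1.0
x, y = 0.5, 1.0   # an arbitrary point with x > 0 and 0 < y < pi

series = (4 * phi0 / pi) * sum(
    exp(-n * x) * sin(n * y) / n for n in range(1, 200, 2)  # odd n only
)
closed = (2 * phi0 / pi) * atan(sin(y) / sinh(x))

print(series, closed)   # the two values agree to many decimal places
```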
In the above problem we wrote the potential as the product of one-dimensional
functions. Some of these functions grow and decay monotonically (i.e., the exponential functions) and the others oscillate (i.e., the sinusoidal functions). The
success of the method depends crucially on the orthogonality and completeness
of the oscillating functions.
4.11
Inductors
Consider two stationary loops of wire, labeled 1 and 2. Let us run a steady
current I₁ around the first loop, so as to produce a magnetic field B₁. Some of
the field lines of B₁ will pass through the second loop. Let Φ₂ be the flux of B₁
through loop 2:

Φ₂ = ∫_{loop 2} B₁ · dS₂,    (4.182)
where dS2 is a surface element of loop 2. This flux is generally quite difficult
to calculate exactly (unless the two loops have a particularly simple geometry).
However, we can infer from the Biot-Savart law,
B₁(r) = (μ₀ I₁/4π) ∮_{loop 1} dl₁ × (r − r′)/|r − r′|³,    (4.183)
that the magnitude of B₁ is proportional to the current I₁. This is ultimately a
consequence of the linearity of Maxwell's equations. Here, dl₁ is a line element
of loop 1 located at position vector r′. It follows that the flux Φ₂ must also be
proportional to I₁. Thus, we can write
Φ₂ = M₂₁ I₁,    (4.184)
where M21 is the constant of proportionality. This constant is called the mutual
inductance of the two loops.
Let us write the field B₁ in terms of a vector potential A₁, so that

B₁ = ∇ × A₁.    (4.185)

It follows from Stokes' theorem that

Φ₂ = ∫_{loop 2} B₁ · dS₂ = ∫_{loop 2} (∇ × A₁) · dS₂ = ∮_{loop 2} A₁ · dl₂,    (4.186)
where dl₂ is a line element of loop 2. However, we also know that

A₁(r) = (μ₀/4π) ∫ j(r′)/|r − r′| d³r′.    (4.187)

For a line current this reduces to

A₁(r) = (μ₀ I₁/4π) ∮_{loop 1} dl₁/|r − r′|,    (4.188)

for j(r′) = dl₁ I₁/(dl₁ dA) and d³r′ = dl₁ dA, where dA is the cross-sectional area
of loop 1. Thus,

Φ₂ = (μ₀ I₁/4π) ∮_{loop 1} ∮_{loop 2} dl₁ · dl₂/|r − r′|,    (4.189)
where r is now the position vector of the line element dl₂ of loop 2. The above
equation implies that

M₂₁ = (μ₀/4π) ∮_{loop 1} ∮_{loop 2} dl₁ · dl₂/|r − r′|.    (4.190)
In fact, mutual inductances are rarely worked out from first principles; it is
usually too difficult. However, the above formula tells us two important things.
Firstly, the mutual inductance of two loops is a purely geometric quantity, having
to do with the sizes, shapes, and relative orientations of the loops. Secondly, the
integral is unchanged if we switch the roles of loops 1 and 2. In other words,

M₂₁ = M₁₂.    (4.191)
In fact, we can drop the subscripts and just call these quantities M . This is a
rather surprising result. It implies that no matter what the shapes and relative
positions of the two loops, the flux through loop 2 when we run a current I
around loop 1 is exactly the same as the flux through loop 1 when we send the
same current around loop 2.
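This reciprocity property is easy to verify numerically. The following short Python script (a minimal sketch; the coaxial-loop geometry and discretization count are illustrative choices, not taken from the text) evaluates the double line integral of Eq. (4.190) for two coaxial circular loops, and confirms that swapping the roles of the loops leaves the answer unchanged.

```python
import math

def mutual_inductance(r1, r2, d, n=200):
    """Evaluate Neumann's formula (4.190) for two coaxial circular
    loops of radii r1 and r2, separated by an axial distance d,
    by discretizing each loop into n line elements."""
    mu0 = 4.0e-7 * math.pi
    dphi = 2.0 * math.pi / n
    total = 0.0
    for i in range(n):
        p1 = (i + 0.5) * dphi
        for j in range(n):
            p2 = (j + 0.5) * dphi
            # dl1 . dl2 for two tangential line elements
            dl_dot = r1 * r2 * math.cos(p1 - p2) * dphi**2
            # distance between the two line elements
            dist = math.sqrt(r1**2 + r2**2
                             - 2.0 * r1 * r2 * math.cos(p1 - p2) + d**2)
            total += dl_dot / dist
    return mu0 / (4.0 * math.pi) * total

M21 = mutual_inductance(0.10, 0.05, 0.02)   # loop 1 linking loop 2
M12 = mutual_inductance(0.05, 0.10, 0.02)   # loop 2 linking loop 1
```

By the symmetry of the integrand, the two calls return the same value, in accordance with Eq. (4.191).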
We have seen that a current I flowing around some loop, 1, generates a magnetic flux linking some other loop, 2. However, flux is also generated through the first loop. As before, the magnetic field, and, therefore, the flux \Phi, is proportional to the current, so we can write

    \Phi = L\,I.    (4.192)
The constant of proportionality L is called the self inductance. Like M, it only depends on the geometry of the loop.

Inductance is measured in S.I. units called henries (H); 1 henry is 1 volt-second per ampere. The henry, like the farad, is a rather unwieldy unit, since most real-life inductors have inductances of order a micro-henry.
Consider a long solenoid of length l and radius r which has N turns per unit length, and carries a current I. The longitudinal (i.e., directed along the axis of the solenoid) magnetic field within the solenoid is approximately uniform, and is given by

    B = \mu_0 N I.    (4.193)
This result is easily obtained by integrating Ampère's law over a rectangular loop whose long sides run parallel to the axis of the solenoid, one inside the solenoid and the other outside, and whose short sides run perpendicular to the axis. The magnetic flux through each turn of the wire is B\,\pi r^2 = \mu_0 N I\,\pi r^2. The total flux through the solenoid wire, which has N l turns, is

    \Phi = N l\,\mu_0 N I\,\pi r^2.    (4.194)

Thus, the self inductance of the solenoid is

    L = \frac{\Phi}{I} = \mu_0 N^2 \pi r^2 l.    (4.195)
Note that the self inductance only depends on geometric quantities such as the
number of turns in the solenoid and the area of the coils.
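As a quick sanity check on the formula, and on the claim that real-life inductors come out in the micro-henry range, here is a one-line evaluation of Eq. (4.195) for a small solenoid (the numbers below are invented for the example):

```python
import math

mu0 = 4.0e-7 * math.pi   # permeability of free space (H/m)

N = 1000.0   # turns per unit length (1/m) -- illustrative value
r = 0.01     # radius (m)
l = 0.1      # length (m)

# Self inductance of a long solenoid, Eq. (4.195)
L = mu0 * N**2 * math.pi * r**2 * l   # about 4e-5 H, i.e. ~40 micro-henries
```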
Suppose that the current I flowing through the solenoid changes. We have to
assume that the change is sufficiently slow that we can neglect the displacement
current and retardation effects in our calculations. This implies that the typical
time-scale of the change must be much longer than the time for a light ray to
traverse the circuit. If this is the case then the above formulae remain valid.
A change in the current implies a change in the magnetic flux linking the solenoid wire, since \Phi = L\,I. According to Faraday's law, this change generates an e.m.f. in the coils. By Lenz's law, the e.m.f. is such as to oppose the change in the current: i.e., it is a back e.m.f. We can write
    V = -\frac{d\Phi}{dt} = -L\,\frac{dI}{dt},    (4.196)
[Circuit diagram: battery of e.m.f. V connected in series with an inductor L and a resistor R; a current I flows around the circuit.]

Consider the circuit shown above. The voltage drop across the resistor is IR, whereas the voltage drop across the inductor (i.e., minus the back e.m.f.) is L\,dI/dt. Here, I is the current flowing through the circuit. It follows that
    V = I R + L\,\frac{dI}{dt}.    (4.197)
This equation can be rearranged to give

    \frac{dI}{dt} = \frac{V - I R}{L},    (4.198)

whose general solution is

    I(t) = \frac{V}{R} + k\,\exp(-R\,t/L).    (4.199)
The constant k is fixed by the boundary conditions. Suppose that the battery is connected at time t = 0, when I = 0. It follows that k = -V/R, so that

    I(t) = \frac{V}{R}\left(1 - \exp(-R\,t/L)\right).    (4.200)
It can be seen from the diagram that, after the battery is connected, the current ramps up and attains its steady-state value V/R (which comes from Ohm's law) on the characteristic time-scale

    \tau = \frac{L}{R}.    (4.201)
[Figure: current I versus time t after the battery is connected, rising from 0 toward the asymptotic value V/R on the time-scale L/R.]
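The exponential rise can be checked by integrating the circuit equation (4.197) directly. The sketch below (with illustrative component values) steps the equation forward with a simple Euler scheme and compares the result with the analytic solution (4.200):

```python
import math

V, R, L = 10.0, 100.0, 1.0e-3   # illustrative values (volts, ohms, henries)
tau = L / R                     # the L/R time of the circuit
dt = tau / 1000.0

# Euler integration of L dI/dt = V - I R
I, t = 0.0, 0.0
while t < 5.0 * tau:
    I += dt * (V - I * R) / L
    t += dt

# analytic solution, Eq. (4.200), at the same time
I_exact = (V / R) * (1.0 - math.exp(-R * t / L))
```

After five L/R times the current has essentially attained its steady-state value V/R.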
This time-scale is sometimes called the time constant of the circuit, or, somewhat unimaginatively, the L/R time of the circuit.
We can now appreciate the significance of self inductance. The back e.m.f.
generated in an inductor, as the current tries to change, effectively prevents the
current from rising (or falling) much faster than the L/R time. This effect is
sometimes advantageous, but often it is a great nuisance. All circuit elements
possess some self inductance, as well as some resistance, so all have a finite L/R
time. This means that when we power up a circuit the current does not jump
up instantaneously to its steady state value. Instead, the rise is spread out over
the L/R time of the circuit. This is a good thing. If the current were to rise
instantaneously then extremely large electric fields would be generated by the
sudden jump in the magnetic field, leading, inevitably, to breakdown and electric
arcing. So, if there were no such thing as self inductance then every time you
switched an electric circuit on or off there would be a big blue flash due to arcing
between conductors. Self inductance can also be a bad thing. Suppose that we
possess a fancy power supply, and we want to use it to send an electric signal down
a wire (or transmission line). Of course, the wire or transmission line will possess
both resistance and inductance, and will, therefore, have some characteristic L/R
time. Suppose that we try to send a square wave signal down the line. Since
the current in the line cannot rise or fall faster than the L/R time, the leading
and trailing edges of the signal get smoothed out over an L/R time. The typical
difference between the signal fed into the wire (upper trace) and that which comes
out of the other end (lower trace) is illustrated in the diagram below. Clearly,
there is little point having a fancy power supply unless you also possess a low
inductance wire or transmission line, so that the signal from the power supply
can be transmitted to some load device without serious distortion.
[Figure: square-wave signal fed into the line (upper trace) and the smoothed signal emerging from the other end (lower trace).]
Consider, now, two long thin solenoids, one wound on top of the other. The length of each solenoid is l, and the common radius is r. Suppose that the bottom coil has N_1 turns per unit length and carries a current I_1. The magnetic flux passing through each turn of the top coil is \mu_0 N_1 I_1\,\pi r^2, and the total flux linking the top coil is therefore \Phi_2 = N_2 l\,\mu_0 N_1 I_1\,\pi r^2, where N_2 is the number of turns per unit length in the top coil. It follows that the mutual inductance of the two coils, defined \Phi_2 = M I_1, is given by

    M = \mu_0 N_1 N_2\,\pi r^2 l.    (4.202)
Likewise, the self inductance of the bottom coil is

    L_1 = \mu_0 N_1^{\,2}\,\pi r^2 l,    (4.203)

and that of the top coil is

    L_2 = \mu_0 N_2^{\,2}\,\pi r^2 l.    (4.204)

It follows that

    M = \sqrt{L_1 L_2}.    (4.205)
Note that this result depends on the assumption that all of the flux produced
by one coil passes through the other coil. In reality, some of the flux leaks
out, so that the mutual inductance is somewhat less than that given in the above
formula. We can write
    M = k\,\sqrt{L_1 L_2},    (4.206)

where the constant k is called the coefficient of coupling, and lies in the range 0 \le k \le 1.
Suppose that the two coils have resistances R_1 and R_2. If the bottom coil has an instantaneous current I_1 flowing through it, and a total voltage drop V_1, then the voltage drop due to its resistance is I_1 R_1. The voltage drop due to the back e.m.f. generated by the self inductance of the coil is L_1\,dI_1/dt. There is also a
back e.m.f. due to the inductive coupling to the top coil. We know that the flux through the bottom coil due to the instantaneous current I_2 flowing in the top coil is

    \Phi_1 = M I_2.    (4.207)
Thus, by Faraday's law and Lenz's law, the back e.m.f. induced in the bottom coil is

    V = -M\,\frac{dI_2}{dt}.    (4.208)
The voltage drop across the bottom coil due to its mutual inductance with the top coil is minus this expression. Thus, the circuit equation for the bottom coil is

    V_1 = R_1 I_1 + L_1\,\frac{dI_1}{dt} + M\,\frac{dI_2}{dt}.    (4.209)
Likewise, the circuit equation for the top coil is

    V_2 = R_2 I_2 + L_2\,\frac{dI_2}{dt} + M\,\frac{dI_1}{dt}.    (4.210)
Suppose that a battery of e.m.f. V_1 is connected across the bottom coil at t = 0, whilst the top coil is left open-circuited, so that I_2 = 0. The bottom-coil circuit equation reduces to

    V_1 = R_1 I_1 + L_1\,\frac{dI_1}{dt},    (4.211)

whose solution (for I_1 = 0 at t = 0) is

    I_1(t) = \frac{V_1}{R_1}\left(1 - \exp(-R_1 t/L_1)\right),    (4.212)

implying that

    \frac{dI_1}{dt} = \frac{V_1}{L_1}\,\exp(-R_1 t/L_1).    (4.213)

The voltage generated across the top coil is

    V_2 = M\,\frac{dI_1}{dt},    (4.214)

giving

    V_2 = V_1\,\frac{M}{L_1}\,\exp(-R_1 t/L_1).

It follows from Eq. (4.206) that

    V_2 = V_1\,k\,\sqrt{\frac{L_2}{L_1}}\,\exp(-R_1 t/L_1),    (4.215)

giving

    V_2 = V_1\,k\,\frac{N_2}{N_1}\,\exp(-R_1 t/L_1).    (4.216)
[Figure: secondary voltage V_2 versus time t, jumping up at t = 0 and decaying on the time-scale L_1/R_1.]
According to the above formula, the secondary voltage V_2 jumps up, at t = 0, on a time-scale similar to the light traverse time across the circuit (i.e., the jump is instantaneous to all intents and purposes, but the displacement current remains finite).
Now,

    \frac{V_2(t=0)}{V_1} = k\,\frac{N_2}{N_1},    (4.217)
so the bigger the turns ratio N_2/N_1, and the closer k is to unity, the bigger the voltage induced in the secondary circuit. Thus, you need a reasonable number of turns in the primary coil in order to localize the induced magnetic field, so that it links effectively with the secondary coil.
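The behaviour of the coupled coils can be illustrated numerically. The sketch below (with invented component values) uses the solenoid formulae (4.202)-(4.203), for which k = 1, and checks that the open-circuit secondary voltage V_2 = M\,dI_1/dt reduces to (N_2/N_1)\,V_1\,\exp(-R_1 t/L_1), as in Eq. (4.216):

```python
import math

mu0 = 4.0e-7 * math.pi
l, r = 0.1, 0.01            # common length and radius (m) -- illustrative
N1, N2 = 1000.0, 5000.0     # turns per unit length of bottom and top coils

L1 = mu0 * N1**2 * math.pi * r**2 * l    # Eq. (4.203)
M = mu0 * N1 * N2 * math.pi * r**2 * l   # Eq. (4.202); here k = 1

V1, R1 = 10.0, 100.0   # battery e.m.f. and resistance of the bottom coil
t = 2.0e-7             # some instant after the battery is connected

# with the top coil open-circuited, I1 = (V1/R1)(1 - exp(-R1 t/L1)),
# so the secondary voltage is V2 = M dI1/dt
V2 = M * (V1 / L1) * math.exp(-R1 * t / L1)

ratio = V2 / (V1 * (N2 / N1) * math.exp(-R1 * t / L1))   # close to 1
```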
4.12
Magnetic energy
Suppose that a battery of e.m.f. V is connected to a circuit containing an inductor L and a resistor R in series, driving a current I. The circuit equation is

    V = L\,\frac{dI}{dt} + R I.    (4.218)
The power output of the battery is V I. [Every charge q that goes around the
circuit falls through a potential difference qV . In order to raise it back to the
starting potential, so that it can perform another circuit, the battery must do
work qV . The work done per unit time (i.e., the power) is nqV , where n is the
number of charges per unit time passing a given point on the circuit. But, I = nq,
so the power output is V I.] The total work done by the battery in raising the
current in the circuit from zero at time t = 0 to I_T at time t = T is

    W = \int_0^T V I\,dt.    (4.219)
Using Eq. (4.218), this becomes

    W = L\int_0^T I\,\frac{dI}{dt}\,dt + R\int_0^T I^2\,dt,    (4.220)

giving

    W = \frac{1}{2}\,L I_T^{\,2} + R\int_0^T I^2\,dt.    (4.221)
The second term on the right-hand side represents the irreversible conversion of
electrical energy into heat energy in the resistor. The first term is the amount
of energy stored in the inductor at time T . This energy can be recovered after the inductor is disconnected from the battery. Suppose that the battery is
disconnected at time t = T. The circuit equation becomes

    0 = L\,\frac{dI}{dt} + R I,    (4.222)
giving

    I(t) = I_T\,\exp\left(-\frac{R}{L}\,(t - T)\right),    (4.223)
where we have made use of the boundary condition I(T) = I_T. Thus, the current
decays away exponentially. The energy stored in the inductor is dissipated as
heat in the resistor. The total heat energy appearing in the resistor after the
battery is disconnected is

    \int_T^{\infty} I^2 R\,dt = \frac{1}{2}\,L I_T^{\,2},    (4.224)
where use has been made of Eq. (4.223). Thus, the heat energy appearing in
the resistor is equal to the energy stored in the inductor. This energy is actually
stored in the magnetic field generated around the inductor.
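The statement that the dissipated heat equals the stored energy, Eq. (4.224), is easily checked numerically (illustrative values below):

```python
import math

R, L, I_T = 50.0, 2.0e-3, 0.3   # illustrative values (ohms, henries, amps)
tau = L / R

# integrate I^2 R dt for the decaying current of Eq. (4.223), taking T = 0
dt = tau / 10000.0
heat, t = 0.0, 0.0
while t < 20.0 * tau:           # twenty decay times is effectively infinity
    I = I_T * math.exp(-t / tau)
    heat += I**2 * R * dt
    t += dt

stored = 0.5 * L * I_T**2       # energy initially stored in the inductor
```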
Consider, again, our circuit with two coils wound on top of one another.
Suppose that each coil is connected to its own battery. The circuit equations are
    V_1 = R_1 I_1 + L_1\,\frac{dI_1}{dt} + M\,\frac{dI_2}{dt},

    V_2 = R_2 I_2 + L_2\,\frac{dI_2}{dt} + M\,\frac{dI_1}{dt},    (4.225)
where V_1 is the e.m.f. of the battery in the first circuit, etc. The work done by the two batteries in increasing the currents in the two circuits from zero at time 0 to I_1 and I_2 at time T, respectively, is

    W = \int_0^T (V_1 I_1 + V_2 I_2)\,dt
      = \int_0^T (R_1 I_1^{\,2} + R_2 I_2^{\,2})\,dt + \frac{1}{2}\,L_1 I_1^{\,2} + \frac{1}{2}\,L_2 I_2^{\,2} + M\int_0^T\left(I_1\,\frac{dI_2}{dt} + I_2\,\frac{dI_1}{dt}\right)dt.    (4.226)
Thus,

    W = \int_0^T (R_1 I_1^{\,2} + R_2 I_2^{\,2})\,dt + \frac{1}{2}\,L_1 I_1^{\,2} + \frac{1}{2}\,L_2 I_2^{\,2} + M I_1 I_2,    (4.227)

since the last integral in Eq. (4.226) is just I_1 I_2 evaluated at time T. The first term on the right-hand side represents the energy dissipated as heat in the two resistors, so the energy stored in the magnetic field of the coils is

    W_B = \frac{1}{2}\,L_1 I_1^{\,2} + \frac{1}{2}\,L_2 I_2^{\,2} + M I_1 I_2.    (4.228)
Note that the mutual inductance term increases the stored magnetic energy if I_1 and I_2 are of the same sign: i.e., if the currents in the two coils flow in the same direction, so that they generate magnetic fields which reinforce one another. Conversely, the mutual inductance term decreases the stored magnetic energy if I_1 and I_2 are of the opposite sign. However, the total stored energy can never be negative, otherwise the coils would constitute a power source (a negative stored energy is equivalent to a positive generated energy). Thus,
    \frac{1}{2}\,L_1 I_1^{\,2} + \frac{1}{2}\,L_2 I_2^{\,2} + M I_1 I_2 \ge 0,    (4.229)

which can be written

    \frac{1}{2}\left(\sqrt{L_1}\,I_1 + \sqrt{L_2}\,I_2\right)^2 - I_1 I_2\left(\sqrt{L_1 L_2} - M\right) \ge 0,    (4.230)

assuming that I_1 I_2 < 0. The first term on the left-hand side can be made to vanish (by choosing \sqrt{L_1}\,I_1 = -\sqrt{L_2}\,I_2), whereas -I_1 I_2 > 0, so the inequality can only hold in general provided

    M \le \sqrt{L_1 L_2}.    (4.231)
The equality sign corresponds to the situation where all of the flux generated by
one coil passes through the other. If some of the flux misses then the inequality
sign is appropriate. In fact, the above formula is valid for any two inductively
coupled circuits.
We intimated previously that the energy stored in an inductor is actually
stored in the surrounding magnetic field. Let us now obtain an explicit formula
for the energy stored in a magnetic field. Consider an ideal solenoid. The energy
stored in the solenoid when a current I flows through it is
    W = \frac{1}{2}\,L I^2,    (4.232)

where the self inductance is given by

    L = \mu_0 N^2 \pi r^2 l,    (4.233)

where N is the number of turns per unit length of the solenoid, r the radius, and l the length. The field inside the solenoid is uniform, with magnitude

    B = \mu_0 N I,    (4.234)

and is zero outside the solenoid. Combining the above expressions, we obtain

    W = \frac{1}{2}\,\mu_0 N^2 \pi r^2 l\,I^2 = \frac{B^2}{2\mu_0}\,\pi r^2 l.    (4.235)

Now, \pi r^2 l is the volume of the field-filled region of the solenoid, so this result implies that the energy stored per unit volume in the magnetic field is

    U = \frac{B^2}{2\mu_0}.    (4.236)
Let us now examine a more general proof of the above formula. Consider a system of N circuits (labeled i = 1 to N), each carrying a current I_i. The magnetic flux through the ith circuit is written [cf., Eq. (4.186)]
    \Phi_i = \int B\cdot dS_i = \oint A\cdot dl_i,    (4.237)
where B = \nabla\times A, and dS_i and dl_i denote a surface element and a line element of this circuit, respectively. The back e.m.f. induced in the ith circuit follows from Faraday's law:
    V_i = -\frac{d\Phi_i}{dt}.    (4.238)
The rate of work of the battery which maintains the current I_i in the ith circuit against this back e.m.f. is

    P_i = I_i\,\frac{d\Phi_i}{dt}.    (4.239)
Thus, the total work required to raise the currents in the N circuits from zero at time 0 to I_{0i} at time T is

    W = \sum_{i=1}^{N}\int_0^T I_i\,\frac{d\Phi_i}{dt}\,dt.    (4.240)
The above expression for the work done is, of course, equivalent to the total
energy stored in the magnetic field surrounding the various circuits. This energy
is independent of the manner in which the currents are set up. Suppose, for the
sake of simplicity, that the currents are ramped up linearly, so that
    I_i = I_{0i}\,\frac{t}{T}.    (4.241)
The fluxes are proportional to the currents, so they must also ramp up linearly:
    \Phi_i = \Phi_{0i}\,\frac{t}{T}.    (4.242)

It follows that

    W = \sum_{i=1}^{N}\int_0^T I_{0i}\,\Phi_{0i}\,\frac{t}{T^2}\,dt,    (4.243)

giving

    W = \frac{1}{2}\sum_{i=1}^{N} I_{0i}\,\Phi_{0i}.    (4.244)
So, if instantaneous currents I_i flow in the N circuits, which link instantaneous fluxes \Phi_i, then the instantaneous stored energy is

    W = \frac{1}{2}\sum_{i=1}^{N} I_i\,\Phi_i.    (4.245)
Equations (4.237) and (4.245) imply that

    W = \frac{1}{2}\sum_{i=1}^{N} I_i\oint A\cdot dl_i.    (4.246)
It is convenient, at this stage, to replace our N line currents by N current distributions of small, but finite, cross-sectional area. Equation (4.246) transforms to

    W = \frac{1}{2}\int_V A\cdot j\,dV,    (4.247)
where V is a volume which contains all of the circuits. Note that for an element of the ith circuit, j = I_i\,dl_i/(dl_i\,A_i) and dV = dl_i\,A_i, where A_i is the cross-sectional area of the circuit. Now, \mu_0 j = \nabla\times B (we are neglecting the displacement current in this calculation), so

    W = \frac{1}{2\mu_0}\int_V A\cdot\nabla\times B\,dV.    (4.248)
According to vector field theory,

    \nabla\cdot(A\times B) = B\cdot\nabla\times A - A\cdot\nabla\times B,    (4.249)

which implies that

    W = \frac{1}{2\mu_0}\int_V \left(-\nabla\cdot(A\times B) + B\cdot\nabla\times A\right) dV.    (4.250)

The divergence term can be converted into a surface integral via Gauss' theorem, and vanishes if V is taken to be all space, since A and B fall off sufficiently fast at large distances. Making use of B = \nabla\times A, we obtain

    W = \frac{1}{2\mu_0}\int B^2\,dV.    (4.251)
Since this expression is valid for any magnetic field whatsoever, we can conclude that the energy density of a general magnetic field is given by

    U = \frac{B^2}{2\mu_0}.    (4.253)

4.13
Energy conservation

We have seen that the energy density of a magnetic field is

    U_B = \frac{B^2}{2\mu_0},    (4.254)

whereas the energy density of an electric field takes the form

    U_E = \frac{\epsilon_0 E^2}{2}.    (4.255)

This suggests that the energy density of a general electromagnetic field is

    U = \frac{\epsilon_0 E^2}{2} + \frac{B^2}{2\mu_0}.    (4.256)
We are now in a position to demonstrate that the classical theory of electromagnetism conserves energy. We have already come across one conservation law in electromagnetism:

    \frac{\partial\rho}{\partial t} + \nabla\cdot j = 0.    (4.257)
This is the equation of charge conservation. Integrating over some volume V, bounded by a surface S, we obtain

    \frac{\partial}{\partial t}\int_V \rho\,dV = -\oint_S j\cdot dS.    (4.258)
In other words, the rate of decrease of the charge contained in volume V equals the net flux of charge across surface S. This suggests that an energy conservation law for electromagnetism should have the form

    \frac{\partial}{\partial t}\int_V U\,dV = -\oint_S u\cdot dS.    (4.259)
Here, U is the energy density of the electromagnetic field and u is the flux of
electromagnetic energy (i.e., energy |u| per unit time, per unit cross-sectional
area, passes a given point in the direction of u). According to the above equation,
the rate of decrease of the electromagnetic energy in volume V equals the net flux
of electromagnetic energy across surface S.
Equation (4.259) is incomplete, because electromagnetic fields can lose or gain energy by interacting with matter. We need to factor this into our analysis. We saw earlier (see Section 4.2) that the rate of heat dissipation per unit volume in a conductor (the so-called ohmic heating rate) is E\cdot j. This energy is extracted from electromagnetic fields, so the rate of energy loss of the fields in volume V due to interaction with matter is \int_V E\cdot j\,dV. Thus, Eq. (4.259) generalizes to

    \frac{\partial}{\partial t}\int_V U\,dV = -\oint_S u\cdot dS - \int_V E\cdot j\,dV.    (4.260)
The above equation is equivalent to

    \frac{\partial U}{\partial t} + \nabla\cdot u = -E\cdot j.    (4.261)
Let us now see if we can derive an expression of this form from Maxwell's equations.
We start from Ampère's law (including the displacement current):

    \nabla\times B = \mu_0 j + \epsilon_0\mu_0\,\frac{\partial E}{\partial t}.    (4.262)
Taking the scalar product with E, and rearranging, we obtain

    E\cdot j = \frac{E\cdot\nabla\times B}{\mu_0} - \epsilon_0\,E\cdot\frac{\partial E}{\partial t},    (4.263)

which can be written

    E\cdot j = \frac{E\cdot\nabla\times B}{\mu_0} - \frac{\partial}{\partial t}\left(\frac{\epsilon_0 E^2}{2}\right).    (4.264)

Now, according to vector field theory,

    \nabla\cdot(E\times B) = B\cdot\nabla\times E - E\cdot\nabla\times B,    (4.265)

so

    E\cdot j = -\frac{\nabla\cdot(E\times B)}{\mu_0} + \frac{B\cdot\nabla\times E}{\mu_0} - \frac{\partial}{\partial t}\left(\frac{\epsilon_0 E^2}{2}\right).    (4.266)

Faraday's law yields

    \nabla\times E = -\frac{\partial B}{\partial t},    (4.267)

so

    E\cdot j = -\frac{\nabla\cdot(E\times B)}{\mu_0} - \frac{1}{\mu_0}\,B\cdot\frac{\partial B}{\partial t} - \frac{\partial}{\partial t}\left(\frac{\epsilon_0 E^2}{2}\right),    (4.268)

giving

    E\cdot j = -\frac{\nabla\cdot(E\times B)}{\mu_0} - \frac{\partial}{\partial t}\left(\frac{\epsilon_0 E^2}{2} + \frac{B^2}{2\mu_0}\right).    (4.269)

This expression can be rewritten in the form

    \frac{\partial U}{\partial t} + \nabla\cdot u = -E\cdot j,    (4.270)

where

    U = \frac{\epsilon_0 E^2}{2} + \frac{B^2}{2\mu_0}    (4.271)

is the electromagnetic energy density, and

    u = \frac{E\times B}{\mu_0}    (4.272)
is the electromagnetic energy flux. The latter quantity is usually called the
Poynting flux after its discoverer.
Let us see whether our expression for the electromagnetic energy flux makes
sense. We all know that if we stand in the sun we get hot (especially in Texas!).
This occurs because we absorb electromagnetic radiation emitted by the Sun.
So, radiation must transport energy. The electric and magnetic fields in electromagnetic radiation are mutually perpendicular, and are also perpendicular to the direction of propagation \hat k (this is a unit vector). Furthermore, B = E/c.
Equation (3.232) can easily be transformed into the following relation between the electric and magnetic fields of an electromagnetic wave:

    E\times B = \frac{E^2}{c}\,\hat k.    (4.273)

Thus,

    u = \frac{E\times B}{\mu_0} = \frac{E^2}{\mu_0 c}\,\hat k = \epsilon_0 c\,E^2\,\hat k.    (4.274)
This expression tells us that electromagnetic waves transport energy along their
direction of propagation, which seems to make sense.
The energy density of electromagnetic radiation is

    U = \frac{\epsilon_0 E^2}{2} + \frac{B^2}{2\mu_0} = \frac{\epsilon_0 E^2}{2} + \frac{E^2}{2\mu_0 c^2} = \epsilon_0 E^2,    (4.275)
using B = E/c. Note that the electric and magnetic fields have equal energy densities. Since electromagnetic waves travel at the speed of light, we would expect the energy flux through one square meter in one second to equal the energy contained in a volume of length c and unit cross-sectional area; i.e., c times the energy density. Thus,

    |u| = c\,U = \epsilon_0 c\,E^2,    (4.276)

in accordance with Eq. (4.274).
4.14
Electromagnetic momentum
We have seen that electromagnetic waves carry energy. It turns out that they also
carry momentum. Consider the following argument, due to Einstein. Suppose
that we have a railroad car of mass M and length L which is free to move in one
dimension. Suppose that electromagnetic radiation of total energy E is emitted
from one end of the car, propagates along the length of the car, and is then
absorbed at the other end.

[Figure: railroad car of mass M; a pulse of radiation of energy E travels from one end of the car to the other.]

The effective mass of this radiation is m = E/c^2 (from Einstein's famous relation E = mc^2). At first sight, the process described
above appears to cause the centre of mass of the system to spontaneously shift.
This violates the law of momentum conservation (assuming the railway car is
subject to no external forces). The only way in which the centre of mass of the
system can remain stationary is if the railway car moves in the opposite direction
to the direction of propagation of the radiation. In fact, if the car moves by a
distance x then the centre of mass of the system is the same before and after the
radiation pulse provided that
    M x = m L = \frac{E}{c^2}\,L.    (4.277)
If the radiation carries momentum p, the car recoils with momentum p as the radiation is emitted; when the radiation is absorbed at the other end, the car acquires momentum p in the opposite direction, which stops the motion. The time of flight
of the radiation is L/c. So, the distance traveled by a mass M with momentum p in this time is

    x = v\,t = \frac{p}{M}\,\frac{L}{c},    (4.278)

giving

    p = M x\,\frac{c}{L} = \frac{E}{c}.    (4.279)
Thus, the momentum carried by electromagnetic radiation equals its energy divided by the speed of light. The same result can be obtained from the well-known relativistic formula

    E^2 = p^2 c^2 + m^2 c^4    (4.280)
relating the energy E, momentum p, and mass m of a particle. According to
quantum theory, electromagnetic radiation is made up of massless particles called
photons. Thus,
    p = \frac{E}{c}    (4.281)

for individual photons, so the same must be true of electromagnetic radiation as a whole. It follows from Eq. (4.281) that the momentum density g of electromagnetic radiation equals its energy density over c, so

    g = \frac{U}{c} = \frac{|u|}{c^2} = \frac{\epsilon_0 E^2}{c}.    (4.282)
It is reasonable to suppose that the momentum points along the direction of the energy flow (this is obviously the case for photons), so the vector momentum density (which gives the direction, as well as the magnitude, of the momentum per unit volume) of electromagnetic radiation is

    g = \frac{u}{c^2}.    (4.283)
Thus, for a sinusoidal electromagnetic wave, the mean energy density, energy flux, and momentum density are

    \langle U\rangle = \frac{\epsilon_0 E_0^{\,2}}{2},
    \langle u\rangle = \frac{c\,\epsilon_0 E_0^{\,2}}{2}\,\hat k = c\,\langle U\rangle\,\hat k,
    \langle g\rangle = \frac{\epsilon_0 E_0^{\,2}}{2c}\,\hat k = \frac{\langle U\rangle}{c}\,\hat k,    (4.284)

where the factor 1/2 comes from averaging \cos^2\omega t. Here, E_0 is the peak amplitude of the electric field associated with the wave.
Since electromagnetic radiation possesses momentum, it must exert a force on bodies which absorb (or emit) radiation. Suppose that a body is placed in a beam of perfectly collimated radiation, which it absorbs completely. The amount of momentum absorbed per unit time, per unit cross-sectional area, is simply the amount of momentum contained in a volume of length c and unit cross-sectional area; i.e., c times the momentum density g. An absorbed momentum per unit time, per unit area, is equivalent to a pressure. In other words, the radiation exerts a pressure c\,g on the body. Thus, the radiation pressure is given by

    p = \frac{\epsilon_0 E_0^{\,2}}{2} = \langle U\rangle.    (4.285)
So, the pressure exerted by collimated electromagnetic radiation is equal to its
average energy density.
Consider a cavity filled with electromagnetic radiation. What is the radiation
pressure exerted on the walls? In this situation the radiation propagates in all
directions with equal probability. Consider radiation propagating at an angle \theta to the local normal to the wall. The amount of such radiation hitting the wall per unit time, per unit area, is proportional to \cos\theta. Moreover, the component of momentum normal to the wall which the radiation carries is also proportional to \cos\theta. Thus, the pressure exerted on the wall is the same as in Eq. (4.285), except that it is weighted by the average of \cos^2\theta over all solid angles, in order to take into account the fact that obliquely propagating radiation exerts a pressure which is \cos^2\theta times that of normal radiation. The average of \cos^2\theta over all solid angles is 1/3, so for isotropic radiation

    p = \frac{\langle U\rangle}{3}.    (4.286)
Clearly, the pressure exerted by isotropic radiation is one third of its average
energy density.
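The solid-angle average quoted above is easy to confirm by direct numerical integration; the short script below (a minimal check) averages \cos^2\theta over the sphere with the weight \sin\theta\,d\theta\,d\phi:

```python
import math

# <cos^2 theta> over all solid angles:
#   (1/4pi) int_0^2pi dphi int_0^pi cos^2(theta) sin(theta) dtheta
n = 100000
total = 0.0
for i in range(n):
    theta = math.pi * (i + 0.5) / n        # midpoint rule in theta
    total += math.cos(theta)**2 * math.sin(theta) * (math.pi / n)

avg = 2.0 * math.pi * total / (4.0 * math.pi)   # close to 1/3
```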
The power incident on the surface of the Earth due to radiation emitted by the Sun is about 1300 W/m^2. So, what is the radiation pressure? Since

    \langle|u|\rangle = c\,\langle U\rangle = 1300\;{\rm W\,m^{-2}},    (4.287)

then

    p = \langle U\rangle \simeq 4\times10^{-6}\;{\rm N\,m^{-2}}.    (4.288)

Here, the radiation is assumed to be perfectly collimated. Thus, the radiation pressure exerted on the Earth is minuscule (one atmosphere equals about 10^5 N/m^2). Nevertheless, this small pressure due to radiation is important in outer
space, since it is responsible for continuously sweeping dust particles out of the
solar system. It is quite common for comets to exhibit two separate tails. One
(called the gas tail) consists of ionized gas, and is swept along by the solar
wind (a stream of charged particles and magnetic field lines emitted by the Sun).
The other (called the dust tail) consists of uncharged dust particles, and is
swept radially outward from the Sun by radiation pressure. Two separate tails
are observed if the local direction of the solar wind is not radially outward from
the Sun (which is quite often the case).
The radiation pressure from sunlight is very weak. However, that produced by laser beams can be enormous (far higher than any conventional pressure which has ever been produced in a laboratory). For instance, the lasers used in Inertial Confinement Fusion (e.g., the NOVA experiment at Lawrence Livermore National Laboratory) typically have energy fluxes of 10^{18} W m^{-2}. This translates to a radiation pressure of about 10^4 atmospheres. Obviously, it would not be a good idea to get in the way of one of these lasers!
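Both pressure estimates quoted in this section follow directly from p = \langle|u|\rangle/c for collimated radiation:

```python
c = 3.0e8   # speed of light (m/s)

# solar radiation pressure at the Earth (perfectly absorbing surface)
solar_flux = 1300.0        # W/m^2
p_sun = solar_flux / c     # ~ 4e-6 N/m^2; one atmosphere is ~ 1e5 N/m^2

# inertial-confinement-fusion laser
laser_flux = 1.0e18        # W/m^2
p_laser = laser_flux / c
atmospheres = p_laser / 1.0e5   # of order 10^4 atmospheres
```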
4.15
The Hertzian dipole

Consider a short antenna in which charge oscillates back and forth between two small conductors connected by a wire, such that the charge on one of the conductors is

    q = q_0\,\sin\omega t.    (4.289)

The current flowing in the wire is therefore

    I = \frac{dq}{dt} = I_0\,\cos\omega t,    (4.290)

where I_0 = \omega q_0.
The retarded vector potential generated by this current distribution is

    A(r,t) = \frac{\mu_0}{4\pi}\int \frac{j(r', t - |r - r'|/c)}{|r - r'|}\,d^3r'.    (4.292)

Suppose that the wire is aligned along the z-axis, and extends from z = -l/2 to z = l/2. For a wire of negligible thickness, we can replace j(r', t - |r - r'|/c)\,d^3r' by I(r', t - |r - r'|/c)\,dz'\,\hat z. Thus, A(r,t) = A_z(r,t)\,\hat z, and

    A_z(r,t) = \frac{\mu_0}{4\pi}\int_{-l/2}^{l/2} \frac{I(z', t - |r - z'\hat z|/c)}{|r - z'\hat z|}\,dz'.    (4.293)

In the region r \gg l,

    |r - z'\hat z| \simeq r    (4.294)
and

    t - |r - z'\hat z|/c \simeq t - r/c.    (4.295)
The maximum error in the latter approximation is \Delta t \sim l/c. This error (which is a time) must be much less than a period of oscillation of the emitted radiation, otherwise the phase of the radiation will be wrong. So

    \frac{l}{c} \ll \frac{2\pi}{\omega},    (4.296)

which implies that l \ll \lambda, where \lambda = 2\pi c/\omega is the wavelength of the emitted radiation. In this limit,

    A_z(r,t) \simeq \frac{\mu_0 l}{4\pi}\,\frac{I(t - r/c)}{r}.    (4.298)

The scalar potential is most conveniently evaluated using the Lorenz gauge condition

    \nabla\cdot A = -\epsilon_0\mu_0\,\frac{\partial\phi}{\partial t}.    (4.299)
Now,
    \nabla\cdot A = \frac{\partial A_z}{\partial z} \simeq -\frac{\mu_0 l}{4\pi}\,\frac{\partial I(t - r/c)}{\partial t}\,\frac{z}{r^2 c} + O\!\left(\frac{1}{r^2}\right)    (4.300)
to leading order in 1/r. Thus,

    \phi(r,t) \simeq \frac{l\,z}{4\pi\epsilon_0 c}\,\frac{I(t - r/c)}{r^2}.    (4.301)
(4.301)
Given the vector and scalar potentials, Eqs. (4.298) and (4.301), respectively,
we can evaluate the associated electric and magnetic fields using
    E = -\nabla\phi - \frac{\partial A}{\partial t},
    B = \nabla\times A.    (4.302)
Note that we are only interested in radiation fields, which fall off like 1/r with increasing distance from the source. It is easily demonstrated that

    E \simeq -\frac{\omega l I_0}{4\pi\epsilon_0 c^2}\,\sin\theta\,\frac{\sin[\omega(t - r/c)]}{r}\,\hat\theta    (4.303)

and

    B \simeq -\frac{\omega l I_0}{4\pi\epsilon_0 c^3}\,\sin\theta\,\frac{\sin[\omega(t - r/c)]}{r}\,\hat\phi.    (4.304)

Here, (r, \theta, \phi) are standard spherical polar coordinates aligned along the z-axis.
The above expressions for the far-field (i.e., r \gg \lambda) electromagnetic fields generated by a localized oscillating current are also easily derived from Eqs. (3.320) and (3.321). Note that the fields are symmetric in the azimuthal angle \phi. There is no radiation along the axis of the oscillating dipole (i.e., \theta = 0), and the maximum emission is in the plane perpendicular to this axis (i.e., \theta = \pi/2).
The average power crossing a spherical surface S (whose radius is much greater than \lambda) is

    P_{rad} = \oint_S \langle u\rangle\cdot dS,    (4.305)
where the average is over a single period of oscillation of the wave, and the
Poynting flux is given by
    u = \frac{E\times B}{\mu_0} = \frac{\omega^2 l^2 I_0^{\,2}}{16\pi^2\epsilon_0 c^3}\,\frac{\sin^2\theta}{r^2}\,\sin^2[\omega(t - r/c)]\,\hat r.    (4.306)

It follows that

    \langle u\rangle = \frac{\omega^2 l^2 I_0^{\,2}}{32\pi^2\epsilon_0 c^3}\,\frac{\sin^2\theta}{r^2}\,\hat r.    (4.307)
Note that the energy flux is radially outwards from the source. The total power flux across S is given by

    P_{rad} = \frac{\omega^2 l^2 I_0^{\,2}}{32\pi^2\epsilon_0 c^3}\int_0^{2\pi} d\phi\int_0^{\pi} \frac{\sin^2\theta}{r^2}\,r^2\sin\theta\,d\theta.    (4.308)
Thus,

    P_{rad} = \frac{\omega^2 l^2 I_0^{\,2}}{12\pi\epsilon_0 c^3}.    (4.309)
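The step from Eq. (4.308) to Eq. (4.309) rests on the angular integral \int d\phi\int\sin^3\theta\,d\theta = 8\pi/3, which converts the 32\pi^2 in the denominator into 12\pi. A quick numerical check:

```python
import math

# evaluate int_0^pi sin^3(theta) dtheta by the midpoint rule
n = 200000
integral = 0.0
for i in range(n):
    theta = math.pi * (i + 0.5) / n
    integral += math.sin(theta)**3 * (math.pi / n)

solid_angle_factor = 2.0 * math.pi * integral   # close to 8 pi / 3
```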
In fact, a Hertzian dipole antenna behaves like a resistor in the circuit which drives it. Recall that the power dissipated in a resistor R carrying a peak current I_0 is

    P = \frac{1}{2}\,I_0^{\,2} R.    (4.310)

The radiation resistance of the antenna is therefore defined

    R_{rad} = \frac{P_{rad}}{I_0^{\,2}/2},    (4.311)

so that

    R_{rad} = \frac{2\pi}{3\epsilon_0 c}\left(\frac{l}{\lambda}\right)^2,    (4.312)

where \lambda = 2\pi c/\omega is the wavelength of the emitted radiation. In fact,

    R_{rad} = 789\left(\frac{l}{\lambda}\right)^2\;{\rm ohms}.    (4.313)
Since l \ll \lambda for a Hertzian dipole, its radiation resistance is small, which makes it a rather inefficient emitter. The simplest practical antenna is the half-wave antenna, for which l = \lambda/2. This can be analyzed as a series of Hertzian dipole antennas stacked on top of one another, each slightly out of phase with its neighbours. The characteristic radiation resistance of a half-wave antenna is

    R_{rad} = \frac{2.44}{4\pi\epsilon_0 c} = 73\;{\rm ohms}.    (4.314)
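The numerical coefficients in Eqs. (4.313) and (4.314) are easily reproduced:

```python
import math

eps0 = 8.854e-12   # permittivity of free space (F/m)
c = 2.998e8        # speed of light (m/s)

# Hertzian dipole coefficient, Eq. (4.312): R_rad = coeff * (l/lambda)^2
coeff_hertz = 2.0 * math.pi / (3.0 * eps0 * c)   # about 789 ohms

# half-wave antenna, Eq. (4.314)
R_half = 2.44 / (4.0 * math.pi * eps0 * c)       # about 73 ohms
```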
Antennas can be used to receive electromagnetic radiation. The incoming
wave induces a voltage in the antenna which can be detected in an electrical circuit
connected to the antenna. In fact, this process is equivalent to the emission of
electromagnetic waves by the antenna viewed in reverse. It is easily demonstrated
that antennas most readily detect electromagnetic radiation incident from those
directions in which they preferentially emit radiation. Thus, a Hertzian dipole
antenna is unable to detect radiation incident along its axis, and most efficiently
detects radiation incident in the plane perpendicular to this axis. In the theory
of electrical circuits, a receiving antenna is represented as an e.m.f. in series with a resistor. The e.m.f., V_0\cos\omega t, represents the voltage induced in the antenna by the incoming wave. The resistor, R_{rad}, represents the power re-radiated by the antenna (here, the real resistance of the antenna is neglected). Let us represent the detector circuit as a single load resistor R_{load} connected in series with the antenna. The question is: how can we choose R_{load} so that the maximum power is extracted from the wave and transmitted to the load resistor? According to Ohm's law,

    V_0\cos\omega t = I_0\cos\omega t\,(R_{rad} + R_{load}),    (4.315)

where I = I_0\cos\omega t is the current induced in the circuit.
The power input to the circuit is

    P_{in} = \langle V I\rangle = \frac{V_0^{\,2}}{2(R_{rad} + R_{load})}.    (4.316)

The power transferred to the load is

    P_{load} = \langle I^2 R_{load}\rangle = \frac{R_{load}\,V_0^{\,2}}{2(R_{rad} + R_{load})^2}.    (4.317)

The power re-radiated by the antenna is

    P_{rad} = \langle I^2 R_{rad}\rangle = \frac{R_{rad}\,V_0^{\,2}}{2(R_{rad} + R_{load})^2}.    (4.318)
Note that P_{in} = P_{load} + P_{rad}. The maximum power transfer to the load occurs when

    \frac{\partial P_{load}}{\partial R_{load}} = \frac{V_0^{\,2}}{2}\,\frac{R_{rad} - R_{load}}{(R_{rad} + R_{load})^3} = 0.    (4.319)

Thus, the maximum transfer rate corresponds to

    R_{load} = R_{rad}.    (4.320)
In other words, the resistance of the load circuit must match the radiation resistance of the antenna. For this optimum case,

    P_{load} = P_{rad} = \frac{P_{in}}{2} = \frac{V_0^{\,2}}{8 R_{rad}}.    (4.321)
So, in the optimum case half of the power absorbed by the antenna is immediately
re-radiated. Clearly, an antenna which is receiving electromagnetic radiation is
also emitting it. This is how the BBC catch people who do not pay their television
license fee in England. They have vans which can detect the radiation emitted by
a TV aerial whilst it is in use (they can even tell which channel you are watching!).
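The matching condition (4.320) can also be verified by scanning Eq. (4.317) over a range of load resistances (the values below are illustrative):

```python
V0, R_rad = 1.0, 73.0   # illustrative peak e.m.f. and radiation resistance

def p_load(R_load):
    # power transferred to the load, Eq. (4.317)
    return R_load * V0**2 / (2.0 * (R_rad + R_load)**2)

loads = [0.5 * i for i in range(1, 400)]   # 0.5 to 199.5 ohms
best = max(loads, key=p_load)              # load resistance maximizing P_load
# best equals R_rad, and p_load(best) equals V0^2/(8 R_rad), as in Eq. (4.321)
```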
For a Hertzian dipole antenna interacting with an incoming wave whose electric field has an amplitude E_0, we expect

    V_0 = E_0\,l.    (4.322)
Here, we have used the fact that the wavelength of the radiation is much longer
than the length of the antenna. We have also assumed that the antenna is properly
aligned (i.e., the radiation is incident perpendicular to the axis of the antenna).
The Poynting flux of the incoming wave is

    \langle u_{in}\rangle = \frac{\epsilon_0 c\,E_0^{\,2}}{2},    (4.323)

whereas the power transferred to a properly matched detector circuit is

    P_{load} = \frac{E_0^{\,2}\,l^2}{8 R_{rad}}.    (4.324)

Let us define the area A_{eff} via

    P_{load} = \langle u_{in}\rangle\,A_{eff}.    (4.325)
The quantity Aeff is called the effective area of the antenna; it is the area of
the idealized antenna which absorbs as much net power from the incoming wave
as the actual antenna. Thus,
    P_{load} = \frac{E_0^{\,2}\,l^2}{8 R_{rad}} = \frac{\epsilon_0 c\,E_0^{\,2}}{2}\,A_{eff},    (4.326)

giving

    A_{eff} = \frac{l^2}{4\epsilon_0 c\,R_{rad}} = \frac{3}{8\pi}\,\lambda^2.    (4.327)
It is clear that the effective area of a Hertzian dipole antenna is of order the
wavelength squared of the incoming radiation.
For a properly aligned half-wave antenna,

    A_{eff} = 0.13\,\lambda^2.    (4.328)
Thus, the antenna, which is essentially one dimensional with length /2, acts as
if it is two dimensional, with width 0.26 , as far as its absorption of incoming
electromagnetic radiation is concerned.
4.16
AC circuits

Alternating current (AC) circuits are made up of e.m.f. sources and three different types of passive element: resistors, inductors, and capacitors. Resistors satisfy Ohm's law:

    V = I R,    (4.329)
where R is the resistance, I is the current flowing through the resistor, and V is the voltage drop across the resistor (in the direction in which the current flows). Inductors satisfy

    V = L\,\frac{dI}{dt},    (4.330)
where L is the inductance. Finally, capacitors obey

    V = \frac{q}{C} = \frac{1}{C}\int_0^t I\,dt',    (4.331)
where C is the capacitance, q is the charge stored on the plate with the more
positive potential, and I = 0 for t < 0. Note that any passive component of a
real electrical circuit can always be represented as a combination of ideal resistors,
inductors, and capacitors.
Let us consider the classic LCR circuit, which consists of an inductor L, a capacitor C, and a resistor R, all connected in series with an e.m.f. source V. The circuit equation is obtained by setting the input voltage V equal to the sum of the voltage drops across the three passive elements in the circuit. Thus,

    V = I R + L\,\frac{dI}{dt} + \frac{1}{C}\int_0^t I\,dt'.    (4.332)
This is an integro-differential equation which, in general, is quite tricky to solve. Suppose, however, that both the voltage and the current oscillate at some angular frequency \omega, so that

    V(t) = V_0\,\exp(i\omega t),
    I(t) = I_0\,\exp(i\omega t),    (4.333)

where the physical solution is understood to be the real part of the above expressions. The assumed behaviour of the voltage and current is clearly relevant to electrical circuits powered by the mains voltage (which oscillates at 60 hertz).
Equations (4.332) and (4.333) yield

    V_0\,\exp(i\omega t) = I_0\,\exp(i\omega t)\,R + L\,i\omega\,I_0\,\exp(i\omega t) + \frac{I_0\,\exp(i\omega t)}{i\omega C},    (4.334)

giving

    V_0 = I_0\left(i\omega L + \frac{1}{i\omega C} + R\right).    (4.335)

The complex quantity relating the voltage to the current is called the impedance, Z:

    Z = \frac{V}{I} = i\omega L + \frac{1}{i\omega C} + R.    (4.336)
The average power absorbed by the circuit is

    P = \langle V I\rangle,    (4.337)

where the average is taken over one period of the oscillation. Let us, first of all,
calculate the power using real (rather than complex) voltages and currents. We can write

    V(t) = V_0\,\cos\omega t,
    I(t) = I_0\,\cos(\omega t - \theta),    (4.338)
where \theta is the phase lag of the current with respect to the voltage. It follows that
    P = V_0 I_0\int_{\omega t=0}^{2\pi} \cos\omega t\,\cos(\omega t - \theta)\,\frac{d(\omega t)}{2\pi}
      = V_0 I_0\int_{\omega t=0}^{2\pi} \cos\omega t\,(\cos\omega t\,\cos\theta + \sin\omega t\,\sin\theta)\,\frac{d(\omega t)}{2\pi},    (4.339)

giving

    P = \frac{1}{2}\,V_0 I_0\,\cos\theta,    (4.340)

since \langle\cos\omega t\,\sin\omega t\rangle = 0 and \langle\cos^2\omega t\rangle = 1/2. In complex representation, the voltage and the current are written

    V(t) = V_0\,\exp(i\omega t),    (4.341)
    I(t) = I_0\,\exp(i\omega t - i\theta).    (4.342)
It follows that

    P = \frac{1}{4}\left(V I^* + V^* I\right) = \frac{1}{2}\,{\rm Re}(V I^*).    (4.343)

Making use of Eq. (4.336), we find that

    P = \frac{1}{2}\,{\rm Re}(Z)\,|I|^2 = \frac{1}{2}\,\frac{{\rm Re}(Z)\,|V|^2}{|Z|^2}.    (4.344)
Note that power dissipation is associated with the real part of the impedance.
For the special case of an LCR circuit,

    P = \frac{1}{2}\,R\,I_0^{\,2}.    (4.345)
It is clear that only the resistor dissipates energy in this circuit. The inductor
and the capacitor both store energy, but they eventually return it to the circuit
without dissipation.
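The complex-impedance formalism is conveniently summarized in a few lines of Python (illustrative component values; Python's built-in complex type does the phasor arithmetic):

```python
import math

R, L, C = 50.0, 1.0e-3, 1.0e-6   # illustrative values (ohms, henries, farads)
omega = 2.0 * math.pi * 60.0     # angular frequency of the mains
V0 = 10.0

# series LCR impedance, Eq. (4.336); note that 1/(i w C) = -i/(w C)
Z = complex(R, omega * L - 1.0 / (omega * C))

I0 = V0 / abs(Z)                 # current amplitude, Eq. (4.346)
P = 0.5 * Z.real * I0**2         # mean dissipated power, Eq. (4.344)
P_check = 0.5 * R * I0**2        # Eq. (4.345): only the resistor dissipates
```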
According to Eq. (4.336), the amplitude of the current which flows in an LCR
circuit for a given amplitude of the input voltage is given by
    I_0 = \frac{V_0}{|Z|} = \frac{V_0}{\sqrt{(\omega L - 1/\omega C)^2 + R^2}}.    (4.346)

The response of the circuit is clearly resonant, peaking at \omega = 1/\sqrt{LC}. The phase lag of the current with respect to the voltage is

    \theta = \arg(Z) = \tan^{-1}\!\left(\frac{\omega L - 1/\omega C}{R}\right).    (4.347)
4.17
Transmission lines
Consider a transmission line consisting of two parallel conductors, with inductance L and capacitance C per unit length.
Consider the voltage difference between two neighbouring points on the line, located at positions x and x + \delta x, respectively. The self-inductance of the portion of the line lying between these two points is L\,\delta x. This small section of the line can be thought of as a conventional inductor, and therefore obeys the well-known equation

    V(x,t) - V(x + \delta x, t) = L\,\delta x\,\frac{\partial I(x,t)}{\partial t},    (4.348)

where V(x,t) is the voltage difference between the two conductors at position x and time t, and I(x,t) is the current flowing in one of the conductors at position x and time t [the current flowing in the other conductor is -I(x,t)]. In the limit \delta x \rightarrow 0, the above equation reduces to

    \frac{\partial V}{\partial x} = -L\,\frac{\partial I}{\partial t}.    (4.349)
Any difference in the current flowing past the two points charges up the section
of line lying between them, which acts like a capacitor of capacitance C δx. Thus,

∫₀ᵗ [I(x, t′) − I(x + δx, t′)] dt′ = C δx V(x, t),   (4.350)

where t = 0 denotes a time at which the charge stored in either of the conductors
in the region x to x + δx is zero. In the limit δx → 0, the above equation yields

∂I/∂x = −C ∂V/∂t.   (4.351)
Equations (4.349) and (4.351) are generally known as the telegrapher's equations, since an old-fashioned telegraph line can be thought of as a primitive
transmission line (telegraph lines consist of a single wire; the other conductor is
the Earth).
Differentiating Eq. (4.349) with respect to x, we obtain

∂²V/∂x² = −L ∂²I/∂x∂t.   (4.352)

Similarly, differentiating Eq. (4.351) with respect to t yields

∂²I/∂x∂t = −C ∂²V/∂t².   (4.353)

The above two equations can be combined to give the wave equation

∂²V/∂x² = LC ∂²V/∂t².   (4.354)

Suppose that the generator which drives the line imposes an oscillating voltage
at x = 0, so that the boundary condition is

V(0, t) = V0 cos ωt.   (4.355)
The solution to the wave equation (4.354), subject to the above boundary condition, is

V(x, t) = V0 cos(ωt − kx),   (4.356)

where k = ω√(LC). This clearly corresponds to a wave which propagates from the
generator towards the resistor. Equations (4.349) and (4.356) yield

I(x, t) = [V0/√(L/C)] cos(ωt − kx).   (4.357)
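As a consistency check, the travelling-wave solution (4.356)–(4.357) should satisfy both telegrapher's equations. A finite-difference sketch in Python, with assumed (illustrative) line parameters:

```python
import math

# Illustrative line parameters (assumptions, not from the text).
L, C = 2.5e-7, 1.0e-10          # H/m and F/m
V0, w = 1.0, 2 * math.pi * 1e8  # volts, rad/s
k = w * math.sqrt(L * C)        # wavenumber, as below Eq. (4.356)
Z0 = math.sqrt(L / C)

def V(x, t):  # Eq. (4.356)
    return V0 * math.cos(w * t - k * x)

def I(x, t):  # Eq. (4.357)
    return (V0 / Z0) * math.cos(w * t - k * x)

# Central finite differences for the partial derivatives.
def d_dx(f, x, t, h=1e-6):
    return (f(x + h, t) - f(x - h, t)) / (2 * h)

def d_dt(f, x, t, h=1e-15):
    return (f(x, t + h) - f(x, t - h)) / (2 * h)

x, t = 0.3, 1.7e-9
# Telegrapher's equations: dV/dx = -L dI/dt and dI/dx = -C dV/dt
assert abs(d_dx(V, x, t) + L * d_dt(I, x, t)) < 1e-3 * abs(d_dx(V, x, t))
assert abs(d_dx(I, x, t) + C * d_dt(V, x, t)) < 1e-3 * abs(d_dx(I, x, t))
```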
For self-consistency, the resistor at the end of the line must have a particular
value:

R = V(l, t)/I(l, t) = √(L/C).   (4.358)
The so-called input impedance of the line is defined

Zin = V(0, t)/I(0, t) = √(L/C).   (4.359)
Thus, a transmission line terminated by a resistor R = √(L/C) acts very much
like a conventional resistor R = Zin in the circuit containing the generator. In
fact, the transmission line could be replaced by an effective resistor R = Zin in
the circuit diagram for the generator circuit. The power loss due to this effective
resistor corresponds to power which is extracted from the circuit, transmitted
down the line, and absorbed by the terminating resistor.
The most commonly occurring type of transmission line is a co-axial cable,
which consists of two co-axial cylindrical conductors of radii a and b (with b > a).
We have already shown that the capacitance per unit length of such a cable is
(see Section 4.5)

C = 2πε₀ / ln(b/a).   (4.360)
Let us now calculate the inductance per unit length. Suppose that the inner
conductor carries a current I. According to Ampère's law, the magnetic field in
the region between the conductors is given by

Bθ = μ₀ I / (2πr).   (4.361)

The flux linking unit length of the cable is

Φ = ∫ₐᵇ Bθ dr = [μ₀ I/(2π)] ln(b/a).   (4.362)

Thus, the self-inductance per unit length is

L = Φ/I = [μ₀/(2π)] ln(b/a).   (4.363)

Note that the speed of propagation of a wave down the cable is

v = 1/√(LC) = 1/√(μ₀ ε₀) = c.   (4.364)
The characteristic impedance of the cable is

Z0 = √(L/C) = [μ₀/(4π²ε₀)]^{1/2} ln(b/a) = 60 ln(b/a) ohms.   (4.365)
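Plugging the capacitance and inductance per unit length into Z0 = √(L/C) reproduces the "60 ln(b/a) ohms" rule of thumb, and 1/√(LC) reproduces the speed of light. A quick numerical check (the ratio b/a = 3.38 used here anticipates the 73 ohm antenna example below):

```python
import math

eps0 = 8.854187817e-12     # permittivity of free space, F/m
mu0 = 4 * math.pi * 1e-7   # permeability of free space, H/m

def coax_Z0(b_over_a):
    """Characteristic impedance of a co-axial cable, Eq. (4.365)."""
    L = mu0 * math.log(b_over_a) / (2 * math.pi)   # Eq. (4.363)
    C = 2 * math.pi * eps0 / math.log(b_over_a)    # Eq. (4.360)
    return math.sqrt(L / C)

# The prefactor sqrt(mu0/eps0)/(2 pi) is very nearly 60 ohms.
prefactor = math.sqrt(mu0 / eps0) / (2 * math.pi)
assert abs(prefactor - 60.0) < 0.1

# Propagation speed 1/sqrt(LC) equals 1/sqrt(mu0 eps0) = c, Eq. (4.364).
L = mu0 * math.log(3.38) / (2 * math.pi)
C = 2 * math.pi * eps0 / math.log(3.38)
assert abs(1.0 / math.sqrt(L * C) - 2.9979e8) < 1e5
```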
This corresponds to a voltage wave of amplitude V0 which travels down the line
and is reflected, with reflection coefficient K, at the end of the line. It is easily
demonstrated from the telegrapher's equations that the corresponding current
waveform is

I(x, t) = (V0/Z0) exp[i(ωt − kx)] − (KV0/Z0) exp[i(ωt + kx)].   (4.367)
Since the line is terminated by a resistance R at x = l, we have, from Ohm's law,

V(l, t)/I(l, t) = R.   (4.368)

This yields an expression for the coefficient of reflection,

K = (R − Z0)/(R + Z0).   (4.369)

The input impedance of the line is given by

Zin = V(0, t)/I(0, t) = Z0 [R cos kl + i Z0 sin kl] / [Z0 cos kl + i R sin kl].   (4.370)
Clearly, if the resistor at the end of the line is properly matched, so that
R = Z0 , then there is no reflection (i.e., K = 0), and the input impedance of
the line is Z0. If the line is short circuited, so that R = 0, then there is total
reflection at the end of the line (i.e., K = −1), and the input impedance becomes
Zin = i Z0 tan kl.
(4.371)
This impedance is purely imaginary, implying that the transmission line absorbs
no net power from the generator circuit. In fact, the line acts rather like a pure
inductor or capacitor in the generator circuit (i.e., it can store, but cannot absorb,
energy). If the line is open circuited, so that R → ∞, then there is again total
reflection at the end of the line (i.e., K = 1), and the input impedance becomes
Zin = i Z0 tan(kl − π/2).
(4.372)
Thus, the open circuited line acts like a short circuited line which is shorter by
one quarter of a wavelength. For the special case where the length of the line is
exactly one quarter of a wavelength (i.e., kl = π/2), we find

Zin = Z0²/R.   (4.373)
Thus, a quarter wave line looks like a pure resistor in the generator circuit.
Finally, if the length of the line is much less than the wavelength (i.e., kl ≪ 1)
then we enter the constant phase regime, and Zin ≃ R (i.e., we can forget about
the transmission line connecting the terminating resistor to the generator circuit).
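The limiting cases quoted above (matched, short circuited, quarter wave) all follow from the general input impedance formula (4.370). A sketch, with arbitrary illustrative choices of Z0, kl, and the terminating resistance:

```python
import math

def Z_in(R, Z0, kl):
    """Input impedance of a lossless line, Eq. (4.370)."""
    num = R * math.cos(kl) + 1j * Z0 * math.sin(kl)
    den = Z0 * math.cos(kl) + 1j * R * math.sin(kl)
    return Z0 * num / den

Z0, kl = 50.0, 1.1   # illustrative values (assumptions)

# Matched line (R = Z0): Zin = Z0, whatever the length.
assert abs(Z_in(Z0, Z0, kl) - Z0) < 1e-9

# Short circuit (R = 0): Zin = i Z0 tan(kl), Eq. (4.371).
assert abs(Z_in(0.0, Z0, kl) - 1j * Z0 * math.tan(kl)) < 1e-9

# Quarter-wave line (kl = pi/2) with R = 75 ohms: Zin = Z0**2/R, Eq. (4.373).
assert abs(Z_in(75.0, Z0, math.pi / 2) - Z0 ** 2 / 75.0) < 1e-6
```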
Suppose that we want to build a radio transmitter. We can use a half wave
antenna to emit the radiation. We know that in electrical circuits such an antenna
acts like a resistor of resistance 73 ohms (it is more usual to say that the antenna
has an impedance of 73 ohms). Suppose that we buy a 500 kW generator to supply
the power to the antenna. How do we transmit the power from the generator to
the antenna? We use a transmission line, of course. (It is clear that if the distance
between the generator and the antenna is of order the dimensions of the antenna
(i.e., λ/2) then the constant phase approximation breaks down, so we have to
use a transmission line.) Since the impedance of the antenna is fixed at 73 ohms
we need to use a 73 ohm transmission line (i.e., Z0 = 73 ohms) to connect the
generator to the antenna, otherwise some of the power we send down the line
is reflected (i.e., not all of the power output of the generator is converted into
radio waves). If we wish to use a co-axial cable to connect the generator to the
antenna, then it is clear from Eq. (4.365) that the radii of the inner and outer
conductors need to be such that b/a = 3.38.
Suppose, finally, that we upgrade our transmitter to use a full wave antenna
(i.e., an antenna whose length equals the wavelength of the emitted radiation).
A full wave antenna has a different impedance than a half wave antenna. Does
this mean that we have to rip out our original co-axial cable and replace it by
one whose impedance matches that of the new antenna? Not necessarily. Let Z0
be the impedance of the co-axial cable, and Z1 the impedance of the antenna.
Suppose that we place a quarter wave transmission line (i.e., one whose length is
one quarter of a wavelength) of characteristic impedance Z_{1/4} = √(Z0 Z1) between
the end of the cable and the antenna. According to Eq. (4.373) (with Z0 → Z_{1/4}
and R → Z1), the input impedance of the quarter wave line is Zin = Z_{1/4}²/Z1 = Z0,
which matches the impedance of the cable, so no power is reflected at the junction.
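The quarter-wave matching trick is easy to confirm numerically. Here Z1 = 100 ohms is a made-up value for the full wave antenna impedance (the text does not quote one):

```python
import math

def Z_in_quarter(Z_line, R_load):
    """Input impedance of a quarter-wave line, Eq. (4.373)."""
    return Z_line ** 2 / R_load

Z0 = 73.0    # impedance of the co-axial cable (half-wave antenna value)
Z1 = 100.0   # assumed impedance of the new full wave antenna (illustrative)

# Quarter-wave matching section with characteristic impedance sqrt(Z0 Z1).
Z_quarter = math.sqrt(Z0 * Z1)

# Looking into the matching section, the cable sees its own impedance Z0,
# so no power is reflected at the junction.
assert abs(Z_in_quarter(Z_quarter, Z1) - Z0) < 1e-9
```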
4.18 Epilogue