The passage of time is probably an illusion.
For one, time is relative. Or perhaps I should say, simultaneity is relative. If I stand midway between two poles, with a right-angled mirror that lets me observe both, and a lightning bolt strikes each pole "simultaneously", I will indeed observe the two strikes as simultaneous: light from each event reaches my eyes at the same time. For another observer, moving rapidly past me just as the lightning strikes, the events will not be simultaneous. Since he's moving towards one pole and away from the other, the light from the pole he's approaching reaches his eyes before the light from the pole he's leaving behind. This is due, of course, to the shorter distance traveled by the light from the pole he's approaching. But neither observer can claim absolute authority as to the simultaneity of the events; neither person is more special than the other. Time, here, is relative. If both observers are asked to signal the arrival of light from the pole the moving observer is approaching, the moving observer will signal before the stationary observer does. The moving observer's "present" is the stationary observer's "future". It therefore makes no sense to confer special status on the present moment, because whose "present" would that moment refer to?
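To see the arithmetic behind this, here's a minimal sketch in Python, worked entirely in the stationary observer's frame; the pole separation and the observer's speed are invented illustrative values, not part of the thought experiment as usually told.

```python
C = 299_792_458.0  # speed of light, m/s

def arrival_times(half_separation_m, observer_speed_ms):
    """Times at which light from each pole reaches the moving observer,
    who is at the midpoint (x = 0) when both bolts strike (t = 0) and
    moves toward the pole at +half_separation_m."""
    L, v = half_separation_m, observer_speed_ms
    t_front = L / (C + v)  # light from the pole being approached
    t_back = L / (C - v)   # light from the pole being left behind
    return t_front, t_back

# Illustrative numbers: poles 600 km apart, observer at 10% of light speed.
t1, t2 = arrival_times(300e3, 0.1 * C)
print(f"front pole: {t1 * 1e3:.4f} ms, back pole: {t2 * 1e3:.4f} ms")
# The stationary observer at the midpoint sees both flashes at
# L/c = 1.0007 ms, so the two observers genuinely disagree.
```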
Another argument against the passage of time is the fact that nothing in known physics corresponds to its passage. The equations upon which physics rests work equally well whether time runs forwards or backwards. The present moment has no special significance. It seems that time is laid out in its entirety, with all times equally real. Our perception of past, present, and future is not a result of time passing over us but of the way our brains work.
Why do we perceive time to move in one direction: from past to present to future? We're confusing the passage of time with the "arrow of time." The arrow of time points towards an asymmetry between past and future. (We, by convention, label the direction in which the arrow points as the "future".) A drinking glass dropped on the floor shatters, but a shattered glass never spontaneously reassembles itself and returns to your hand. If you were to see such a thing in a movie, you'd think the film was being played in reverse. This forward-pointing arrow of time seems to be related to the second law of thermodynamics, which basically states that the entropy (or, roughly, disorder) of a closed system tends to increase over time. A shattered glass is definitely less ordered than an intact glass. But the fact that the arrow is pointing forward does not mean that it is moving forward. Some physicists speculate that the unidirectionality inherent in the formation of memories - new memories add information and raise the entropy of the brain - might lead to our perception of the flow of time. Others speculate that this perception may have something to do with quantum mechanics.
Sean Carroll (2006) proposes that the reason our arrow of time points towards the "future" and not the "past" is just a quirk of chance. That is, there's nothing fundamental about it. It could just as well have pointed in the opposite direction. Our universe just happens to be moving from a low-entropy state to a high-entropy state, but other universes may be moving in the reverse direction. Supposedly, though, on an ultra-large scale, the entirety of all universes would be moving towards increased entropy, for the simple reason that there are more ways to be high entropy (disordered) than low entropy (ordered).
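That last counting claim is easy to make concrete. Here's a toy sketch in Python (the 100-particle box is my own invented model, not Carroll's): if each of N particles can sit in either half of a box, the "ordered" arrangements are vastly outnumbered by the "disordered" ones.

```python
from math import comb

# Toy model of "more ways to be disordered than ordered": N particles,
# each in the left or right half of a box. The number of arrangements
# with k particles on the left is C(N, k).
N = 100
ordered = comb(N, 0) + comb(N, N)  # every particle on one side: 2 ways
disordered = comb(N, N // 2)       # an even 50/50 split
print(f"fully ordered arrangements: {ordered}")
print(f"even-split arrangements:    {disordered:.3e}")  # about 1.009e+29
```

With two ordered arrangements against roughly 10^29 even splits, a system wandering at random overwhelmingly drifts toward high entropy.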
And another thing about time. Recent studies suggest that our perception of the world may not be continuous but might instead be a series of discrete snapshots like frames in a film. Actually, "it seems that each separate neural process that governs our perception might be recorded in its own stream of discrete frames" (Fox, 2009). And these streams (which need not all progress at the same rate) are then fit together in a separate process within the brain that produces a consistent picture of the world.
Not everyone agrees with the ideas presented above.
Sources:
"That Mysterious Flow" by Paul Davies, Scientific American, Volume 16, Number 1, 2006.
"The Time Before Time" by Sean Carroll, Seed, Volume 2, Number 6, September 2006.
"The time machine in your head" by Douglas Fox, NewScientist, Volume 204, Number 2731, 24 October 2009.
Sunday, November 22, 2009
Weightlessness
Are astronauts, in Earth orbit, without weight?
First, I should mention that there are two types of weight: actual weight and apparent weight. An Earth-bound object's actual weight is the downward force exerted upon it by the Earth's gravity. The object's apparent weight is the upward force, typically transmitted through the ground, that opposes gravity and prevents the object from falling through the floor or ground (towards the center of the Earth). When you stand on your bathroom scale, it's measuring your apparent weight (i.e. how hard it has to push up on you to prevent you from accelerating downwards through the scale, crushing it). This doesn't necessarily have to be equal to your actual weight. In your bathroom, your apparent weight is equal to your actual weight, but if you carry the scale into an elevator and weigh yourself while accelerating upwards, the scale will register an apparent weight that is greater than your actual weight. As the cable accelerates the elevator upward, the elevator's floor pushes up on the scale, and the scale, in turn, pushes up on your feet. To accelerate you upwards, against gravity, the scale has to push on you harder than if it (and you) were stationary. Your actual weight doesn't change here, because it depends on your mass (which isn't changing) and your distance from the center of the Earth (which is changing only a negligible amount). But to the scale, it feels like you're growing heavier, because it has to not only support you but also accelerate you upwards. So it registers a heavier "weight", which we now know to be your apparent weight. What about when the elevator accelerates downwards? Your apparent weight becomes less than your actual weight. And now, what if the elevator's cable were to break and the elevator, scale, and you were all to freefall towards the ground below? The scale would be falling at the same rate you were falling, and so it wouldn't be supporting any of your weight. It would indicate a weight of zero. That is, your apparent weight would be zero. This is the definition of weightlessness. Weightlessness means without apparent weight; it has nothing to do with your actual weight. So ... an astronaut in orbit, in a constant state of freefall, kept to a nearly circular path around the Earth by gravity, is weightless, but only in the sense that he or she has no apparent weight. Astronauts most definitely do have actual weight!
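As a sanity check on the elevator story, here's a minimal sketch in Python; the 70 kg rider and the 2 m/s² accelerations are invented round numbers.

```python
G_ACCEL = 9.81  # m/s^2, acceleration due to gravity near Earth's surface

def apparent_weight(mass_kg, elevator_accel_ms2):
    """Scale reading (newtons) in an elevator accelerating upward at the
    given rate (negative = downward). Newton's second law: N - mg = ma."""
    return mass_kg * (G_ACCEL + elevator_accel_ms2)

m = 70.0  # kg
for accel, label in [(0.0, "at rest or constant speed"),
                     (2.0, "accelerating upward"),
                     (-2.0, "accelerating downward"),
                     (-G_ACCEL, "freefall (cable cut)")]:
    print(f"{label:>27}: {apparent_weight(m, accel):6.1f} N")
# In freefall the scale reads 0.0 N -- weightless -- even though the
# actual weight, mg (about 687 N here), hasn't changed at all.
```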
Apparent weight can change fairly easily, we see. All you have to do is take a ride on an elevator, or a rollercoaster, or some other device that accelerates you in a vertical direction. Does actual weight ever change (ignoring the effects of food)? Yes, it does. The force of gravity on you (which determines your actual weight) depends on your mass, the mass of the Earth, the gravitational constant G, and the distance between you and the center of the Earth. Assuming your mass is held constant, you can reduce your actual weight by increasing your distance from the center of the Earth. So astronauts do have actual weight, just slightly less than they have back on Earth. How much less? In orbit around 300 km (about 185 miles) above the surface of the Earth, an astronaut's actual weight will be about 8.8% less than back on Earth.
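That 8.8% figure is a quick inverse-square computation. A small sketch in Python, using Earth's mean radius of 6371 km:

```python
EARTH_RADIUS_KM = 6371.0  # mean radius of the Earth

def weight_fraction(altitude_km):
    """Actual weight at altitude as a fraction of weight at the surface;
    gravity falls off as the inverse square of distance from the center."""
    return (EARTH_RADIUS_KM / (EARTH_RADIUS_KM + altitude_km)) ** 2

frac = weight_fraction(300.0)
print(f"at 300 km up: {frac:.3f} of surface weight "
      f"({(1 - frac) * 100:.1f}% less)")
# Prints about 0.912 of surface weight, i.e. roughly 8.8% less.
```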
What makes you feel weightless when you're falling, even though you still have an actual weight? Or one could ask, what makes you feel heavy (or with weight) when you're standing on the ground? It's not gravity. It's the force of the surface you're standing on, pushing against you. If you're standing on a sidewalk, the concrete is pressing against your feet, which are in turn pressing against your ankles, which are pressing against your lower legs, which are pressing against your upper legs, and on and on. Your feet are supporting your entire weight. Your chest, for example, supports only the weight of the body above it. You don't feel pressure evenly distributed throughout your body. (Well, you're used to the feeling of standing on a surface, and so you may have a hard time sensing this uneven pressure distribution, but it's there.) Your sense of weight also comes from your arms pulling down on your shoulders. When in freefall, this pressure gradient (or change over space) disappears. Each section of your body, each cell, is falling at the same rate. Therefore, your upper body isn't pushing on your lower body. Your ankles aren't pushing on your feet. There is no pushing at all. Neither are your arms pulling down on your shoulders. The absence of these sensations is what we equate with feeling weightless.
How does NASA simulate a weightless environment for astronaut training? They could put their astronauts in an elevator, take it to the top of a tall building, cut the supporting cable, and allow the elevator and its inhabitants to freefall for several seconds. But the impact upon hitting the ground would be extreme and most unpleasant. Instead, NASA sends its astronauts up in an airplane, and the airplane flies in the parabolic trajectories of freely falling objects. Soaring over the Gulf of Mexico, pilots level off at about 26,000 feet. They then shoot the plane upward at about a 45-degree angle. At this point, the apparent weight of the people inside the nearly empty, padded fuselage increases to about 1.8 times their actual weight. Half a minute later, pilots push the aircraft's nose over the top of this "parabola", and the plane falls some 8,000 feet or so until it's pointing downward at about 30 degrees. During this freefall, the aircraft's acceleration matches Earth's acceleration of gravity, making everything inside weightless for 17 to 25 seconds. (Parts of the movie Apollo 13 were filmed on this aircraft.) Over a two-hour flight, the aircraft may fly through some 40 of these parabolas. NASA used two KC-135 Stratotanker aircraft for these sessions from 1973 until 2005, when they were retired and replaced with a McDonnell Douglas C-9. The plane, not too surprisingly, earned the nickname "Vomit Comet."
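For a rough feel for the numbers, here's an assumption-heavy sketch in Python of the zero-g time on one parabola, treating the aircraft as a simple projectile between the pitch angles described above; the 130 m/s horizontal speed is my own invented ballpark figure.

```python
import math

G = 9.81  # m/s^2

def zero_g_seconds(horizontal_speed_ms, entry_deg=45.0, exit_deg=30.0):
    """Time on the ballistic arc, assuming roughly constant horizontal
    speed: vertical velocity swings from +vx*tan(entry) to -vx*tan(exit),
    and gravity bleeds it off at G per second."""
    vx = horizontal_speed_ms
    delta_v = vx * (math.tan(math.radians(entry_deg)) +
                    math.tan(math.radians(exit_deg)))
    return delta_v / G

print(f"about {zero_g_seconds(130.0):.0f} s of weightlessness per parabola")
```

At 130 m/s this comes out near 21 seconds, comfortably inside the 17-to-25-second window quoted above.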
Labels: actual weight, apparent weight, gravity, weight, weightless
Monday, November 2, 2009
Why Do Golf Balls Have Dimples?
Why do golf balls have dimples? The dimples enable the ball to fly much farther through the air. A swing, driving a smooth golf ball 70 yards, could drive a dimpled ball perhaps 250 yards. Why?
First, air pressure is the force exerted by air molecules divided by the area on which the force is exerted. That is, force per unit area. The force comes from the countless collisions of the air molecules (i.e. nitrogen and oxygen molecules, as well as a very small number of carbon dioxide molecules and argon atoms) against the surface in question. Keep in mind that a net force on an object causes that object to accelerate (or decelerate). If the net force acts in the direction of the object's motion, it accelerates the object; acting in the direction opposite the object's motion, it decelerates the object.
Daniel Bernoulli was born in the Dutch Republic (now known as The Netherlands) in the year 1700. He's perhaps best known for discovering a relationship between the pressure, velocity (speed in a certain direction), and height (above some arbitrary reference level) of an incompressible fluid in perfect steady-state flow. Water being pumped through a pipe can fit this description. It's virtually incompressible, and the pump can keep it moving at a steady rate through the pipe. Air, while not incompressible, is close enough to an incompressible fluid in steady-state flow under certain conditions (velocity less than 300 km/h and no pressure differences of more than one tenth of an atmosphere) that we can use Bernoulli's equation to understand its behavior. So what's the relationship? For a fluid as described above, the pressure, plus one-half times the density times the velocity squared, plus the density times the acceleration due to gravity times the height above some arbitrary reference level, is constant. In equation form, P + (1/2)dv² + dgh = constant. So what happens if I increase the velocity (v) of the fluid? Either the pressure (P) must decrease or the height (h) must decrease, so that the left side of the equation remains equal to the constant. You should take from this equation the following: for an incompressible fluid in steady-state flow, of a particular density (d), and at a set height (that doesn't change), pressure and velocity always move in opposite directions. If pressure decreases, velocity increases. If pressure increases, velocity decreases.
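Here's a minimal sketch of the relation in code (Python; the density and speeds are illustrative values of mine), solving for the pressure change when a level flow speeds up:

```python
AIR_DENSITY = 1.2  # kg/m^3, near sea level
G = 9.81           # m/s^2

def pressure_change(v1, v2, h1=0.0, h2=0.0, d=AIR_DENSITY):
    """P2 - P1 (pascals) from Bernoulli: P + (1/2)dv^2 + dgh = constant,
    so P2 - P1 = (1/2)d(v1^2 - v2^2) + dg(h1 - h2)."""
    return 0.5 * d * (v1**2 - v2**2) + d * G * (h1 - h2)

# Level (h unchanged) air flow accelerating from 30 m/s to 40 m/s:
print(f"pressure change: {pressure_change(30.0, 40.0):.0f} Pa")  # -420 Pa
```

Faster flow, lower pressure, exactly as the equation demands.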
When the path of a fluid in steady-state flow bends, the pressure on the outside of the bend is always higher than the pressure on the inside of the bend. It's this pressure imbalance that causes the fluid to bend. This pressure change indicates a change in the fluid's velocity. So does the fluid on the outside of the bend speed up or slow down? It slows down. And the fluid on the inside of the bend? It speeds up, of course.
When a ball is hurtling through the air, the air it encounters is forced to flow around it. Some of the air flows over the top of the ball, some flows beneath the ball, and some flows around each side. Air pressure above, beneath, and beside the ball is not everywhere the same. As the air encounters the front of the ball, it bends away from the ball, moving out of the way. (The ball is on the outside of the bend.) This creates a high-pressure zone in front of the ball. And the air here slows down. The air then curves back towards the ball, on all sides of the ball, hugging its surface as it moves towards the back of the ball. This puts the ball on the inside of many curved paths (or bends). Therefore, the air around the ball's middle is at low pressure and high speed. As the air reaches the back of the ball, it peels away from the ball and straightens back out. This bending of the air away from the ball creates a high-pressure zone behind the ball. Low-speed air. Now you ask, how can the low-pressure air along the sides of the ball move into the high-pressure zone behind the ball? Doesn't air always move from a high-pressure zone into a low-pressure zone? Normally, yes. Here, the low-pressure air is definitely moving against the tide, so to speak. It's fighting its way into the high-pressure zone, slowing down (decelerating) as the high-pressure air pushes on it. But it has enough energy to successfully make the trip. It does reach the back of the ball. Now, these pressure imbalances are symmetric about the ball; they balance one another and produce no net force on the ball. They don't accelerate or decelerate the ball itself. Air resistance does exist, but it's a result of air near the ball's surface rubbing against the surface, producing a type of friction. Viscous drag, it's called. The air resistance is not a result of the pressure variations just described. Okay, now for a qualifier! The behavior of the air about the ball, as described in this paragraph, applies to balls traveling at slow speeds. This is important. The air behaves differently when it encounters a ball moving at high speed.
To describe the path of air flowing around a fast-moving ball, I must introduce the term boundary layer. A thin layer of air moving very close to the surface of the ball is called the boundary layer, and it behaves differently from air farther from the surface. It moves more slowly and has less total energy than the freely flowing air farther out. Why? Because friction with the ball's surface (i.e. viscous drag) slows it down and robs it of energy.
Hmmm. So you're thinking: it's hard for the air along the sides of the ball to push into the high-pressure zone behind the ball. Okay. But it sounds like it can do it anyway. Guess it has enough energy to do so. And that boundary layer. It has less energy than the air just a bit farther out. But, well, it seems that it, too, is able to push into the high-pressure zone. At least when the ball is moving slowly. (Good. You're right so far.) And so does this change when the ball is moving rapidly? Yes. When the ball is moving rapidly, this lower-energy boundary layer of air is no longer able to push into the high-pressure zone behind the ball. In fact, it is pushed back towards the sides of the ball by the adverse pressure gradient, cutting like a wedge between the ball and the freely flowing air outside this boundary layer. No longer does the air curve around behind the ball. This leaves us with an air pocket behind the ball; a turbulent wake, in other words. In this wake, the air pressure is roughly atmospheric. There goes the symmetry of pressure forces on the ball. Now there is no high-pressure zone behind the ball to cancel the high-pressure zone in front of the ball. There is a large pressure drag, a force on the ball in the downwind direction, slowing the ball down. Decelerating it. This pressure drag is what limits the range of a smooth golf ball. Yes, there is also viscous drag, but it's not nearly as significant as the large pressure drag caused by the turbulent wake.
(turbulent wake behind ball, which is moving to the left)
So dimpled golf balls travel farther than smooth golf balls. Do the dimples somehow reduce the size (and severity) of this turbulent wake, reducing the pressure drag on the ball and preventing the ball from slowing so much as it arcs through the air? Yes, indeed. The dimples, or surface irregularities, cause the air in the boundary layer to tumble about. This tumbling gives the boundary-layer air more energy, and more forward momentum. It now has a much better chance of pushing around to the back side of the ball, into the high-pressure zone. Alas, it still doesn't make it, but it comes much closer. It travels partially around the back of the ball before its progress is stopped and it separates from the surface. The air outside the boundary layer, following along, hugs the ball for a longer time as well. It separates from the ball at the same spot where the boundary layer separates, a fair way down the back side of the ball. The result is a smaller air pocket. A smaller turbulent wake. A less dramatic variation in air pressure between the front of the ball and the back of the ball. A more modest force of pressure drag. And this reduction in pressure drag is what enables the dimpled ball to soar some 200 yards farther than a smooth ball.
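To put rough numbers on it, here's a sketch in Python using the standard drag equation F = (1/2)dv²·Cd·A. The drag coefficients are representative values I'm assuming for this regime (roughly 0.5 for a smooth sphere, roughly 0.25 for a dimpled golf ball); they aren't from the discussion above.

```python
import math

AIR_DENSITY = 1.2       # kg/m^3
BALL_DIAMETER = 0.0427  # m, a regulation golf ball
AREA = math.pi * (BALL_DIAMETER / 2) ** 2  # cross-sectional area

def drag_force(speed_ms, drag_coefficient):
    """Drag force in newtons: F = (1/2) d v^2 Cd A."""
    return 0.5 * AIR_DENSITY * speed_ms**2 * drag_coefficient * AREA

v = 70.0  # m/s, ballpark ball speed off a driver
for cd, label in [(0.5, "smooth ball"), (0.25, "dimpled ball")]:
    print(f"{label:>12}: {drag_force(v, cd):.1f} N")
# The dimpled ball sees roughly half the drag at the same speed --
# the quantitative face of the smaller turbulent wake described above.
```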
Labels: air pressure, Bernoulli, golf ball, turbulent wake, viscous drag
Wednesday, October 14, 2009
Superconductivity
When Thomas Edison went about providing electric power to New York City in the late 1800s, he knew that energy was dissipated as heat in the wires that delivered electric current to his customers. This reduced the amount of power that made it to the homes of his customers, and presented Edison with the problem of trying to minimize this power loss. (I talked about this in a previous blog entry.)
The issue faced by Edison was one of electrical resistance, which is a measure of the degree to which an object opposes an electric current through it. When current flows through an object with resistance, electrical energy is converted to heat at a rate equal to the square of the current times the resistance (P = I²R). This rate is a measure of power loss.
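A minimal sketch of that arithmetic in Python (the delivered power, line resistance, and voltages are invented round numbers), including the standard trick for taming the loss:

```python
def line_loss_watts(power_delivered_w, line_voltage_v, line_resistance_ohm):
    """For a given delivered power, line current is I = P / V, and the
    power dissipated in the wires is I^2 * R."""
    current = power_delivered_w / line_voltage_v
    return current**2 * line_resistance_ohm

P, R = 100_000.0, 2.0  # 100 kW delivered over wires totaling 2 ohms
for volts in (1_000.0, 10_000.0, 100_000.0):
    loss = line_loss_watts(P, volts, R)
    print(f"{volts:>9,.0f} V: {loss:>8.1f} W lost ({loss / P:.2%} of P)")
# Raising the voltage tenfold cuts the current tenfold and the I^2 R
# loss a hundredfold -- presumably the sort of technique the earlier
# entry on Edison's problem discussed.
```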
While Edison had means to lessen this loss of power, he couldn't escape it completely. That's because conductors (i.e. materials that conduct electricity) naturally heat up as an electric current moves through them. The electrons that comprise this current, as they snake forward through the material, are constantly bumping into the atoms (ions) of the conductor. At each collision, an electron loses a bit of kinetic energy to an ion, increasing the kinetic energy of the ion, generating heat and increasing the temperature of the conductor. While conductors exhibit less resistance at lower temperatures, ordinary conductors can never be cooled enough to achieve zero resistance.
It was in 1911 that the Dutch physicist Heike Kamerlingh Onnes discovered that certain out-of-the-ordinary conductors, under certain conditions, do possess zero electrical resistance. That is, passing an electric current through these materials does not heat them and, therefore, no power is lost in them. The reason no one had seen such behavior before: it only takes place in certain materials, and these materials have to be unimaginably cold. It was only just prior to 1911 that such cold temperatures were achieved in the laboratory (by Onnes). Onnes had cooled helium gas so far (down to 4.2 degrees above absolute zero) that it condensed into a liquid. Using this liquid helium as a refrigerant, he tested the electrical resistance of mercury and was amazed to find that it actually dropped to ZERO! Such behavior was a completely new phenomenon, never before witnessed. Onnes labeled it "superconductivity."
Onnes didn't understand what was going on inside the superconducting material. How could the electrons avoid bumping into the material's ions and passing kinetic energy to them? Why did such behavior occur only below a certain temperature, labeled the critical temperature? Twenty-two years later, in 1933, the answer was still unknown. But in that year, Walter Meissner and Robert Ochsenfeld made an important new discovery about superconducting materials (which, as a class, had expanded to include materials other than mercury). They found that superconductors expelled applied magnetic fields. Magnetic field lines that passed through a sample of material were, in a sense, pushed out of the material (or, more accurately, cancelled within the material) when the material was cooled below its critical temperature. This finding, now known as the Meissner effect, provided evidence that superconductivity was, most fundamentally, a magnetic phenomenon. Such a finding also changed the mindset that the fundamental property of a superconductor was zero resistance.
A theory explaining the phenomenon of superconductivity was proposed in 1957 by John Bardeen, Leon Cooper, and Robert Schrieffer. It became known as the BCS Theory, after their initials. It had to do with phonons (not photons) and Cooper pairs. Phonons are quantized crystal lattice vibrations. What does this mean? Certain materials exist as crystals, which means "the constituent atoms, molecules or ions [which are atoms or molecules with a net electric charge] are packed in a regularly ordered, repeating pattern in all three spatial dimensions." (Wikipedia) The graphic below is an example of a unit cell, which is periodically repeated in three dimensions to form a crystal. Each sphere represents an atom and the tubes represent bonds between atoms.
(graphic: a crystal unit cell)
A lattice is a sort of framework upon which, at each point, there exists a unit cell like you see pictured above. So the crystal looks the same when viewed from any lattice point. As an electron moves through a crystal, it exerts a force (i.e. it pulls) on the positively charged lattice ions, distorting them towards its (the electron's) path. As the electron then moves away from that point on the lattice, the lattice ions return to their original position. Because all atoms in a crystal are connected, "the displacement of one or more atoms from their equilibrium positions will give rise to a set of vibration waves propagating through the lattice." (Wikipedia) Finally, these vibration waves are quantized, which means they can't possess just any amount of energy but only certain discrete numerical values.
What happens as an electron moves through a crystal, generating a phonon? Let's picture an electric current flowing through the material. One electron after another. An electron zips past a point in the crystal lattice, distorting the lattice through the creation of a phonon. The lattice is pulled inward towards the negatively charged electron, but the electron quickly moves away, faster than the lattice can relax back to its original position. This creates a region of positive charge, as the lattice ions that are pulled inward are positively charged. Here's the cool part. A second electron can be attracted to the region of positive charge along the path of the first electron. And these two electrons, which would normally repel one another (because they are both negatively charged), can become bound to one another. "If this binding energy is higher than the energy provided by kicks from oscillating atoms in the conductor (which is true at low temperatures), then the electron pair will stick together and resist all kicks, thus not experiencing resistance." (Wikipedia) These electron pairs are called Cooper pairs, and they lie at the heart of the BCS Theory. They are what allow for superconductivity; they carry the superconducting current. But, as noted just above, the temperature has to be low. Above a critical temperature, the atoms in the crystal are jostling around too much, bumping into the electron pairs with enough force to knock them apart. This breaking apart of the Cooper pairs destroys superconductivity in the material, and the material becomes "normal." What's the highest temperature at which a known material will superconduct? A special ceramic material composed of several different elements has been observed to superconduct at -135 degrees C. Notice the negative sign. The holy grail of those working in the field is to find a material that superconducts at room temperature. (Obviously, no material yet identified would have helped Edison ... although there are techniques, which I addressed in a previous blog entry, that lessen the problem.)
The material that superconducts at -135 degrees C (or 138 K), like all materials that superconduct above around -243 degrees C (or 30 K), is called a "high-temperature" superconductor. This is obviously a relative term. Such materials are not consistent with the BCS Theory and there is no good theory to describe how these high-temperature superconductors work.
Labels: Cooper pair, Meissner effect, phonon, superconductivity
Sunday, October 4, 2009
Blackbodies
What is a blackbody?
A blackbody is an idealized type of object that absorbs ALL electromagnetic radiation that falls on it. It therefore reflects no light. Now suppose the blackbody is in thermal equilibrium with its surroundings. (This means that it's at the same temperature as its surroundings.) With a bit of physics background, it becomes apparent that the object must not only absorb all radiation incident upon it (which is what makes it a blackbody) but must also emit radiation at an equal rate; otherwise, the net inflow or outflow of radiation would cause its temperature to change. (It should seem reasonable that radiation incident on an object can alter its temperature ... think of a microwave oven.) This emission of radiation may be in the form of visible light, so we acknowledge that even though no light is reflected from the blackbody, the body may still give off light. In other words, it may not actually be black in color.
(If you're familiar with the term "electromagnetic radiation" and you know what wavelength is, you can skip this paragraph.) Electromagnetic radiation is the collective term for radiation in its many guises. Microwaves, radio waves, visible light, X-rays. These are all examples of radiation, which can be viewed as a wave, with electric and magnetic components, propagating through space, carrying along energy. Waves can vary from one another in various ways, a notable example being wavelength, or the distance between two adjacent crests or troughs. (More precisely, wavelength is the distance between adjacent maximums in the oscillating electric field.) Frequency, which is inversely proportional to wavelength, is another distinguishing characteristic of waves. It's a measure of the rate of oscillation of the wave. As a wave passes through a point in space, the shorter the wavelength, the more rapidly crests (or troughs) pass through that point. And vice-versa. This inverse proportionality holds, by the way, because radiation travels at a constant speed -- namely, the speed of light. In what wavelengths can radiation come? Any and all. A millionth of a centimeter, or two centimeters, or two meters, or bigger or smaller. People have arbitrarily divided this wavelength continuum into sections and given names to the different regions. Radiation with a wavelength anywhere between 1 mm and 10 cm is called microwave radiation. Radiation with a wavelength between 400 and 700 nanometers (a nanometer is one billionth of a meter) is called visible light. And, within the range of visible light, wavelength furthermore determines the color. As an example, light at 500 nm is green. You get the idea.
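Since frequency comes up here, a tiny sketch in Python of the relationship f = c/λ, using a few of the named bands just mentioned:

```python
C = 299_792_458.0  # speed of light, m/s

for name, wavelength_m in [("microwave (1 cm)", 1e-2),
                           ("green light (500 nm)", 500e-9),
                           ("X-ray (0.1 nm)", 1e-10)]:
    print(f"{name:>21}: {C / wavelength_m:.3e} Hz")
# Shorter wavelength, higher frequency: 500 nm green light oscillates
# near 6.0e14 Hz, while a 1 cm microwave sits near 3.0e10 Hz.
```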
Funny thing about a blackbody in thermal equilibrium: it will emit a specific radiation spectrum (or a specific distribution of energy spread over all the possible wavelengths of radiation) that is characteristic NOT of the shape or size of the object, or even of what it is made of, but ONLY of the temperature of the object. Therefore, any two objects at 300 K (I'm using the Kelvin temperature scale, where the temperature in kelvin is always 273.15 degrees higher than in Celsius) will have the same radiation spectrum, and two objects at 3000 K will also share the same radiation spectrum, though one that is different from that shared by the objects at 300 K. (The radiation spectrum is generally referred to as a thermal spectrum, but I'll stick with the first term.) Here's a graph of spectra at four different temperatures. Each has a peak, with a rather sharp tapering on the shorter-wavelength side and a more gradual tapering on the longer-wavelength side.
(graph: blackbody radiation spectra at four temperatures)
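The peaks of those curves follow a simple rule, Wien's displacement law (λ_peak = b/T, with b = 2.898e-3 m*K). A short sketch in Python; the example temperatures are ones that come up later in this post:

```python
WIEN_B = 2.898e-3  # m*K, Wien's displacement constant

def peak_wavelength_nm(temperature_k):
    """Wavelength (nm) at which a blackbody's spectrum peaks."""
    return WIEN_B / temperature_k * 1e9

for temp, label in [(310, "human body"), (800, "dull-red stove burner"),
                    (3000, "cool star"), (5800, "Sun's surface")]:
    print(f"{label:>21} ({temp:>4} K): peak near "
          f"{peak_wavelength_nm(temp):,.0f} nm")
# 5800 K peaks around 500 nm, mid-visible; 310 K peaks near 9,300 nm,
# deep in the infrared.
```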
Quick question: How hot should the filament in a light bulb be, so that it will produce the same "white" light spectrum produced by the Sun? Answer: the same temperature as the surface of the Sun, which is about 5800 K. From the graph above (the 6000 K curve is close), we can see that something at this temperature produces most of its radiation in and around the visible portion of the spectrum but also produces X-rays and microwaves and other types of waves in lesser quantities. And because the spectrum's peak at about 5800 K is in the middle of the visible region, we get from the Sun fairly equal amounts of all the different colors of visible light. The colors of the rainbow (in roughly equal amounts) blend to form a nice "white" light. (But, you object, the Sun is yellow! Actually, it only looks yellow from the surface of the Earth because the atmosphere scatters away some of the blue light.) What would we see, however, if the Sun's surface were only 3000 K? Well, the Sun would then emit a lot more red light (which corresponds to the long-wavelength side of the visible band) than blue light (which is nearer the short-wavelength side), and sunlight would have a reddish hue (though I don't think it would be very obvious to the naked eye). No doubt you've seen something glowing "red hot", like, say, the heating element on the stove. The color is an indication that the stove is hot enough to produce a radiation spectrum that allocates a sufficient bit of energy to the visible red region but little or no energy to the other, shorter-wavelength colors of light, which would make the stove look more orange or even white, depending on its exact temperature. For the same stove, guess which section of the radiation spectrum is best represented, so to speak. That would be infrared radiation, which our bodies perceive as heat. (A hot stove burner that is dull red in color is about 800 K; if you can raise the temperature high enough, it will turn orange at about 1150 K.)
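The "redder when cooler" point can be checked directly with Planck's law for the blackbody spectrum; here's a rough sketch in Python comparing emission at a red and a blue wavelength for the two surface temperatures discussed above:

```python
import math

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck(wavelength_m, temperature_k):
    """Planck spectral radiance B(lambda, T) in W / (m^2 * sr * m)."""
    a = 2 * H * C**2 / wavelength_m**5
    x = H * C / (wavelength_m * KB * temperature_k)
    return a / (math.exp(x) - 1)

red, blue = 650e-9, 450e-9  # wavelengths of red and blue light
for temp in (5800, 3000):
    ratio = planck(red, temp) / planck(blue, temp)
    print(f"{temp} K: red/blue emission ratio = {ratio:.1f}")
# About 0.9 at 5800 K (balanced, whitish light) versus about 4.2 at
# 3000 K -- a cooler Sun really would look reddish.
```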

(Another graph to look at.)
Normal incandescent bulbs don't get close to the temperature of the Sun, so they fall short of producing the same pleasant white light that emanates from our star. These bulbs contain tungsten filaments that reach temperatures of about 2500 K, so the light coming from them is redder than that of the Sun. And tungsten already has the highest melting point of all metals (about 3700 K), so to better mimic the color spectrum of the Sun, we can't simply heat a metal filament to 5800 K; any filament would melt well before reaching that temperature. We must turn to a bulb that produces light not by getting hot, but by a different mechanism. Fluorescent bulbs are a case in point.
Is a blackbody black in color? It can be, but it doesn't have to be, as we saw in the first paragraph. The Sun is very nearly a blackbody, meaning it absorbs very nearly all radiation incident upon it. It also remains fairly constant in temperature because, even though it's emitting a lot of radiation, it's creating more of it deep within its core. And it's most definitely not black in color. Actually, to a reasonable approximation, all matter in thermal equilibrium behaves like a blackbody. A book does. A car does. Even a person does. As an example, the actress Halle Berry is a blackbody. Now, if she were a true ideal blackbody, she wouldn't reflect light (and she wouldn't have made the movie Catwoman), so we wouldn't be able to see her in color. She would appear pitch black. She would absorb all light and emit a radiation spectrum characteristic of something at 98.6 F (or 310 K). But she, like all people, only approximates a blackbody. We reflect some of the light falling on us, which makes us visible, and absorb the rest. And we all emit a radiation spectrum that peaks in the middle of the infrared region (like the stove's), producing so little visible light that we don't shine in a dark room (unlike the stove, which is hotter). Want to be able to "see" someone who isn't reflecting visible light? Try infrared goggles. These pick up the infrared radiation produced by the person (and other warm things around the person) and convert it to visible light that your eyes can detect.
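To put a number on the infrared a person gives off, here's a back-of-the-envelope sketch in Python using the Stefan-Boltzmann law, P = εσAT⁴; the skin area, emissivity, and room temperature are assumed round values of mine:

```python
SIGMA = 5.67e-8  # W/(m^2 K^4), Stefan-Boltzmann constant

def radiated_power(area_m2, temp_k, emissivity=0.98):
    """Total power radiated by a surface: P = emissivity * sigma * A * T^4."""
    return emissivity * SIGMA * area_m2 * temp_k**4

area = 1.8      # m^2, a typical adult's skin area
t_body = 310.0  # K, about 98.6 F
t_room = 293.0  # K, about 68 F

emitted = radiated_power(area, t_body)
absorbed = radiated_power(area, t_room)  # roughly what the room sends back
print(f"emitted:  {emitted:.0f} W")
print(f"net loss: {emitted - absorbed:.0f} W")
# A person pours out on the order of 900 W of infrared but absorbs most
# of it back from the room; the net, about 190 W here, is the glow that
# infrared goggles pick out against a cooler background.
```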
A blackbody is an idealized type of object that absorbs ALL electromagnetic radiation that falls on it. It therefore reflects no light. Now let's hold that the blackbody is in thermal equilibrium with its surroundings. (This means that it's at the same temperature as its surroundings.) With a bit of physics background, it becomes apparent that the object must not only absorb all radiation incident upon it (which is what makes it a blackbody) but it must also emit radiation at an equal rate, otherwise the net inflow or outflow of radiation would cause its temperature to change. (It should seem reasonable that radiation incident on an object can alter its temperature ... think of a microwave oven.) This emission of radiation may be in the form of visible light, so we acknowledge that even though no light is reflected from the blackbody, the body may still give off light. In other words, it may not actually be black in color.
(If you're familiar with the term "electromagnetic radiation" and you know what wavelength is, you can skip this paragraph.) Electromagnetic radiation is the collective term for radiation in its many guises. Microwaves, radio waves, visible light, X-rays. These are all examples of radiation, which can be viewed as a wave, with electric and magnetic components, propagating through space, carrying along energy. Waves can vary from one another in various ways, with a notable example being wavelength, or the distance between two adjacent crests or troughs. (More precisely, wavelength is the distance between adjacent maximums in the oscillating electric field.) Frequency, which is inversely proportional to wavelength, is another distinguishing characteristic of waves. It's a measure of the rate of oscillation of the wave. As a wave passes through a point in space, the shorter the wavelength, the more rapidly crests (or troughs) pass through that point. And vice-versa. This inverse proportionality holds, by the way, because radiation travels at a constant speed -- namely, the speed of light. In what wavelengths can radiation come? Any and all. A millionth of a centimeter, or two centimeters, or two meters, or bigger or smaller. People have arbitrarily divided this wavelength continuum into sections and given names to the different regions. Radiation that has a wavelength anywhere between 1 mm and 10 cm is called microwave radiation. Radiation with a wavelength between 400 and 700 nanometers (or one billionth of a meter) is called visible light. And, within the range of visible light, wavelength furthermore determines the color. As an example, light at 500 nm is green. You get the idea.
Funny thing about a blackbody in thermal equilibrium: it will emit a specific radiation spectrum (a specific distribution of energy spread over all the possible wavelengths of radiation) that is characteristic NOT of the shape or size of the object, or even of what it is made of, but ONLY of the temperature of the object. Therefore, any two objects at 300 K (I'm using the Kelvin temperature scale, where the temperature in kelvins is always 273.15 degrees higher than in Celsius) will have the same radiation spectrum, and two objects at 3000 K will also share the same radiation spectrum, though one that is different from that shared by the objects at 300 K. (The radiation spectrum is often referred to as a thermal spectrum, but I'll stick with the first term.) Here's a graph of spectra at four different temperatures. Each has a peak, with a rather sharp tapering on the shorter-wavelength side and a more gradual tapering on the longer-wavelength side.
Quick question: How hot should the filament in a light bulb be so that it will produce the same "white" light spectrum produced by the Sun? Answer: the same temperature as the surface of the Sun, which is about 5800 K. From the graph above, we can see that something at this temperature (the 6000 K curve is the closest) produces most of its radiation in and around the visible portion of the spectrum but also produces X-rays and microwaves and other types of waves in lesser quantities. And because the spectrum's peak at about 5800 K is in the middle of the visible region, we get from the Sun fairly equal amounts of all the different colors of visible light. The colors of the rainbow (in roughly equal amounts) blend to form a nice "white" light. (But, you object, the Sun is yellow! Actually, it only looks yellow from the surface of the Earth because the atmosphere scatters away some of the shorter-wavelength blue light.) What would we see, however, if the Sun's surface were only 3000 K? Well, the Sun would then emit a lot more red light (which corresponds to the right side of the visible band on the graph) than blue light (which is nearer the left side of the band), and sunlight would have a reddish hue (though I don't think it would be very obvious to the naked eye). No doubt you've seen something glowing "red hot", like, say, the heating element on the stove. The color is an indication that the stove is hot enough to produce a radiation spectrum with enough energy allocated to the visible red region but little or no energy allocated to the other, shorter-wavelength colors of light, which would make the stove look orange or even white, depending on its exact temperature. For the same stove, guess which section of the radiation spectrum is best represented, so to speak. That would be infrared radiation, which our bodies perceive as heat. (A hot stove burner that glows dull red is at about 800 K; raise the temperature high enough and it will turn orange, at about 1150 K.)
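The peaks of these curves follow a simple rule known as Wien's displacement law: the peak wavelength equals a constant (about 2.898 x 10^-3 meter-kelvins) divided by the temperature. Here's a quick sketch in Python that runs the temperatures from this post through that rule:

```python
# Wien's displacement law: a blackbody's peak wavelength is inversely
# proportional to its temperature.
WIEN_B = 2.898e-3  # Wien's displacement constant, meter-kelvins

def peak_wavelength_nm(temp_k):
    """Wavelength (nanometers) at which a blackbody at temp_k emits most strongly."""
    return WIEN_B / temp_k * 1e9

for label, temp in [("Sun's surface", 5800), ("hypothetical cooler Sun", 3000),
                    ("orange stove burner", 1150), ("dull red stove burner", 800)]:
    print(f"{label} at {temp} K: peak near {peak_wavelength_nm(temp):.0f} nm")
# The 5800 K peak lands near 500 nm, mid-visible; the cooler objects
# all peak well into the infrared (beyond 700 nm).
```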
(Another graph to look at.)
Normal incandescent bulbs don't get close to the temperature of the Sun, so they fall short of producing the same pleasant white light that emanates from our star. These bulbs contain tungsten filaments that reach temperatures of about 2500 K, so the light coming from them is redder than the Sun's. And we can't fix this simply by running the filament hotter: tungsten has the highest melting point of all metals, at about 3700 K, so any metal filament would melt well before reaching 5800 K. To better mimic the color spectrum of the Sun, we must turn to a bulb that produces light not by getting hot, but by a different mechanism. Fluorescent bulbs are a case in point.
Is a blackbody black in color? It can be, but it doesn't have to be, as we saw in the first paragraph. The Sun is very nearly a blackbody, meaning it absorbs very nearly all radiation incident upon it. It also remains fairly constant in temperature because, even though it's emitting a lot of radiation, it's creating more of it deep within its core. And it's most definitely not black in color. Actually, to a reasonable approximation, all matter in thermal equilibrium behaves like a blackbody. A book does. A car does. Even a person does. As an example, the actress Halle Berry is a blackbody. Now, if she were a true ideal blackbody, she wouldn't reflect light (and she wouldn't have made the movie Catwoman), so we wouldn't be able to see her in color. She would appear pitch black. She would absorb all light and emit a radiation spectrum characteristic of something at 98.6 F (or 310 K). But she, and all people, only approximate blackbodies. We reflect some of the light falling on us, which makes us visible, and absorb the rest. And we all emit a radiation spectrum that peaks in the middle of the infrared region of the spectrum (like the stove), producing so little visible light (as well as certain other wavelengths) that we don't shine in a dark room (unlike the stove, which is hotter). Want to be able to "see" someone who isn't reflecting visible light? Try infrared goggles. These pick up the infrared radiation produced by the person (and other warm things around the person) and convert it to visible light that your eyes can detect.
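If you're curious just how much infrared power a person radiates, the Stefan-Boltzmann law supplies the total: power = emissivity x sigma x area x T^4. Here's a rough sketch in Python; the skin area and emissivity are ballpark assumptions of mine, not measured values:

```python
# Stefan-Boltzmann law: total radiated power scales as temperature to the fourth power.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiated_watts(area_m2, temp_k, emissivity):
    """Total power (watts) radiated by a surface at the given temperature."""
    return emissivity * SIGMA * area_m2 * temp_k**4

AREA = 1.7         # rough skin area of an adult, square meters (assumption)
EMISSIVITY = 0.95  # skin emits almost as well as an ideal blackbody in the infrared (assumption)

emitted = radiated_watts(AREA, 310, EMISSIVITY)   # body at 98.6 F (310 K)
absorbed = radiated_watts(AREA, 293, EMISSIVITY)  # radiation absorbed from 293 K surroundings
print(f"emitted: {emitted:.0f} W; net radiative loss to the room: {emitted - absorbed:.0f} W")
```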
Monday, September 21, 2009
A Bit on Work: Part III of III
Hold a permanent magnet above a paper clip and the paper clip "jumps" up to the magnet. How does this happen given that the magnetic field is not doing any work on the paper clip?
First off, the paper clip is made of steel, which contains a large amount of iron. Iron is a ferromagnetic material, meaning it can become magnetized when placed in a magnetic field and remain magnetized when removed from that field. Non-ferromagnetic materials, on the other hand, would lose their magnetization upon removal of the external magnetic field. (In this case, we're not removing the magnetic field, but it's still nice that we're working with a ferromagnetic material. You'll see why later.) When we place a permanent magnet above a paper clip, the magnetic field produced by the magnet induces magnetism in the paper clip by applying a torque to the magnetic dipoles in the iron, lining them up.
What's a magnetic dipole? A small current loop (say, electrons flowing around a tiny loop of wire) is, more or less, a magnetic dipole. We call the small current loop a magnetic dipole because it produces a magnetic field, at some distance, that is strictly "dipolar" in nature. Not all systems produce a magnetic field that is dipolar in nature. Some systems produce a field that is not dipolar at all, but perhaps "quadrupolar" or "octupolar." Other systems might produce fields that are largely dipolar but a little quadrupolar, too. What does it mean for the field to be strictly "dipolar" in nature? It means that, as you move away from the system, the strength of the magnetic field drops off as one over the distance cubed. There's no component of the field that drops off as one over the distance or one over the distance to the fourth power, etc. Not many systems can actually produce a field that is strictly dipolar: it's easy to produce one that is largely dipolar, but quite difficult to produce one that is purely so. A tiny current loop does the trick, however. But it has to be really tiny, as in infinitesimally small. How do you make such a thing in the lab? You don't. Luckily, I suppose, they already exist in nature as electrons whizzing about nuclei inside of atoms. (No wires necessary.) Previously, I said that the magnetic field of a permanent magnet applies a torque to the magnetic dipoles in a sample of iron, lining them up. Now it should be fairly clear that it is atomic electrons (acting in their capacity as magnetic dipoles) that are doing the lining up. Note: not every electron in a sample of iron experiences this torque. Only the unpaired electrons do. (Each iron atom has 4 unpaired electrons.)
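To see what "drops off as one over the distance cubed" means in practice, here's a toy calculation in Python; I've dropped all the physical constants and kept only the distance dependence:

```python
# A strictly dipolar field falls off as 1/r^3: double the distance,
# and the field strength drops by a factor of eight.
def relative_dipole_field(r):
    """Relative strength of a pure dipole field at distance r (constants omitted)."""
    return 1.0 / r**3

for r in (1.0, 2.0, 4.0, 8.0):
    print(f"r = {r}: relative field strength = {relative_dipole_field(r):.6f}")
# 1.000000, 0.125000, 0.015625, 0.001953 -- each doubling cuts the field by 8x
```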
OK, so how can an electron point in a particular direction? Really, it can't. What I mean is that this whirling electron, this magnetic dipole, points its magnetic dipole moment in a particular direction. This so-called dipole moment is a property of the dipole and can stand in for the physical dipole. It's a vector (with, of course, a magnitude and a direction) that quantifies the contribution of a system's internal magnetism to the external dipolar magnetic field produced by the system. That is, it's a measure of how much what's going on inside the system is affecting the magnetic field observed outside the system. The moment may be non-physical, but it often proves useful to picture an electron as a little vector when doing calculations or thinking through problems like the one we're addressing here. Therefore, to say that dipoles are lined up is to say that their dipole moments are lined up (or parallel to one another).
Magnets come in different strengths, which we quantify through the concept of magnetization. Something with a large magnetization is both strongly affected by external magnetic fields and the source of its own strong magnetic field. We define magnetization as the amount of magnetic dipole moment per unit volume. Therefore, given a unit volume, we perform a vector sum of all the little moments (vectors) in that volume, and we see how strong our magnet is. Two vectors of equal magnitude pointing in opposite directions sum to zero. Likewise, a large number of arbitrarily directed vectors also sums to (nearly) zero. This explains why the paper clip, before being magnetized by the permanent magnet, isn't magnetic. It has plenty of little moments (or vectors) inside, but they are arbitrarily directed (well, sorta), and so the net sum of these moments, per unit volume, is pretty close to zero. Once the permanent magnet acts on the dipole moments in the paper clip and lines them up, the vector sum no longer equals zero. Instead, it is now quite large, and the paper clip has a large magnetization and acts outwardly like a magnet.
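You can watch this cancellation happen numerically. Here's a little sketch in Python, with two-dimensional unit vectors standing in for dipole moments (a simplification on my part; real moments point in three dimensions):

```python
import math
import random

# Vector-sum 100,000 unit "moments": first randomly oriented, then all aligned.
random.seed(1)
n = 100_000

angles = [random.uniform(0, 2 * math.pi) for _ in range(n)]
mx = sum(math.cos(a) for a in angles)
my = sum(math.sin(a) for a in angles)
print("randomly oriented, net moment per dipole:", math.hypot(mx, my) / n)  # close to zero

# All aligned along one direction: the magnitudes simply add.
print("aligned, net moment per dipole:", n / n)  # exactly 1
```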
Electrons, bound to atoms, move in two ways. This leads to two magnetic dipoles or, better put, two contributions to a single magnetic dipole. (It's a simple vector sum.) Firstly, an electron "orbits" the nucleus. Even though it's not accurate, people often picture this motion as being like a planet orbiting the sun. That's a good enough way to think of it for now. Secondly, an electron "spins." You might think of this as an electron spinning about its own axis, just as the Earth spins about its axis once every 24 hours. But this is a rather horrible and misleading analogy, because this "spin" is not really a physical rotation about an axis. The electron is a point particle with no physical size, so there's really nothing off-axis that could move around a central point. For this reason, physicists say the electron has an intrinsic magnetic dipole moment that originates with its spin. It just exists.
Let's step back now and look at what we have. A permanent magnet (which is itself composed of magnetic dipoles, with the moments all pointing in the same direction, hence its large magnetization) produces a magnetic field that exists in the space around the magnet. This space includes the paper clip, sitting on a table. The magnetic field interacts with the electrons in the iron/steel paper clip, changing the magnetic dipole moments of these electrons, and inducing magnetism in the paper clip. How exactly?
The magnetic field produced by the permanent magnet has the property, determined through experimentation, that it can exert a force on a moving charged particle, like our atomic electrons. (This force, called the Lorentz force, was defined in the previous blog entry.) Acting on each magnetic dipole, this force (actually, torque) acts to twist the dipole moments such that they line up parallel to the field. The result: countless magnetic dipole moments in our paper clip are now pointing in the direction of the field. (It is at this point we appreciate the paper clip being made of iron, a ferromagnetic material. If it were not, the magnetic force would have a more difficult time turning all of the dipole moments and would ultimately manage to turn only some of them, diminishing the strength of the magnetization induced in the paper clip.) The paper clip is now a magnet.
How else does the magnetic field interact with the atomic electrons? Surprisingly, it can change the speed with which the electrons orbit their nuclei! Before the magnetic field enters the picture, the electrons are held in their orbits by electrical forces alone. (Unlike charges, i.e. protons and electrons, attract.) When the magnetic field shows up, it produces a force that acts in the direction opposite the electrical force at the location of each orbiting electron, and so it weakens the pull on the electron towards the center of the orbit. The electron no longer needs to travel so quickly to maintain its orbital radius, so it slows down. Now, think back to the previous blog entry and the example of you holding on to a string, at the other end of which is attached a ball. You're whirling this ball about your head. In this case, there is a centripetal force pulling the ball towards the center of its circular path, thereby acting, at all times, perpendicular to the direction of the ball's motion. (This is like the electric force holding the electron in its orbit about the atom's nucleus.) Likewise, the magnetic force, once introduced, acts perpendicular to the circling electron's motion. The magnetic force, however, acts not towards the center of the circle but radially outward, away from the center. As stated before, this diminished centripetal force causes the electron to slow down. We'll soon see that this slowing of the electrons in the atoms of the paper clip is key to the lifting of the paper clip by the permanent magnet.
Now is the time to point out that the magnetic field produced by the permanent magnet is non-uniform. It generally points downwards, assuming the north pole of the magnet is nearer the paper clip than the south pole, but it also flares out. It's the vertical component of the field that acts to slow the electrons, but the horizontal component, lying in the plane of the orbiting electrons, that provides an upward force. Adding together the forces produced by these two field components, we end up with a net force (represented by a vector) pointing up and out (away from the center of the electron's circular orbit). We know this force must be perpendicular to the motion of the electron, and indeed it is, as the electron begins to move upwards along a helical path.
The motion of an electron in its orbit constitutes stored kinetic energy. It's this energy that is tapped to lift the paper clip off the table. As the paper clip (acting as a magnet) rises, the unpaired electrons inside slow down and the stored kinetic energy decreases. The magnetic force redirects this energy into lifting the paper clip/magnet against the force of gravity. The net magnetic force, like the normal force mentioned in the previous blog entry, is responsible for the vertical motion of the object (previously a box and now an electron) despite the fact that it doesn't do work on that object. Both of these forces (the magnetic and the normal) are redirecting work done by another agent. In the case of the box being pushed up the incline, the other agent is a person. And in this case, it's ... Well, who or what is this agent?
Uhhhhhh. Well, the agent is whatever got all those electrons circling around all those nuclei to begin with. Whatever it was, it did work and imparted kinetic energy to each little electron. Trying to trace the formation of these iron atoms back to the original source of energy would lead us back to the Big Bang. So it was God, I guess. "God" lifted the paper clip.
Labels:
ferromagnetism,
Lorentz force,
magnetic dipole,
magnetic force,
magnetism,
paper clip,
work
Wednesday, September 9, 2009
A Bit on Work: Part II of III
This entry logically follows the preceding entry, so you should read that one first unless you already have a good understanding of the physicist's concept of work.
In that preceding entry we spoke of a person pushing a box up an incline, such that the person's arms were always horizontal or parallel to the ground (and not the incline). That is, the force imparted by the person on the box was entirely horizontal. To find the work done by the person on the box, we had to find the component of the force in the direction of the box's motion. That is, we had to break up (mathematically) the force into a component parallel to the incline and a component perpendicular to the incline, and then we took the component parallel to the incline and multiplied that by the distance up the incline that the box moved. We also reasoned that we could just as easily multiply the total horizontal force applied by the person by the horizontal displacement of the box and arrive at the same answer.
But if we take the second approach for calculating the work done on the box, how do we explain the increase in the vertical position of the box? Was work done on the box in moving it higher above the ground? And by what force? (Really, we face the same question in the first approach to calculating work, but it's easier to conceptualize using the second approach.) Indeed, work was done, and it was done by the person. But it was not the force imparted by the person that moved the box higher. Here, the normal force (if you don't know what the normal force is, you should look it up before continuing), while doing no work itself, redirects the efforts of the person from horizontal to vertical. We say the normal force does no work, because the normal force is always perpendicular to the incline and the box never moves off the inclined plane. But the normal force does have a vertical component that lifts the box by redirecting some of the work done by the person. This is a bit tricky, but understanding it will come in handy later.
The question at the end of the last entry was: What force never does any work? And here is the answer: the magnetic force. There is a law, called the Lorentz force law, which tells us how to compute the magnetic force on a charged particle. It says to take the value of the charge (say, the charge associated with a single electron) and to multiply that value by a vector that is perpendicular to both the magnetic field at the location of the particle and the velocity vector describing the instantaneous motion of the particle. The magnitude of this mutually-perpendicular vector is equal to the magnitude of the velocity times the magnitude of the magnetic field times the sine of the angle between the two vectors. This long description can be written simply in mathematical notation: F = q (v x B), where q represents charge and the "x" signifies the cross product between the velocity v and the magnetic field B. (F, v, and B are all vectors here.) If you're not familiar with a "cross product," it's more or less explained in the wordy description above, so just digest that and ignore the formula.
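Still, the formula is a nice thing to play with. Here's a minimal sketch in Python; the velocity and field values are made up for illustration, and the payoff is the last line, where the dot product of force and velocity comes out zero (no component of force along the motion means no work):

```python
# F = q (v x B): the cross product guarantees F is perpendicular to v.
def cross(a, b):
    """Cross product of two 3-vectors (as tuples)."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

q = -1.602e-19           # charge of an electron, coulombs
v = (0.0, 2.0e5, 0.0)    # velocity: toward the back of the desk, m/s (made-up value)
B = (0.0, 0.0, -1.0e-3)  # field: straight down through the desktop, teslas (made-up value)

F = tuple(q * component for component in cross(v, B))
print("force:", F)           # points sideways, perpendicular to the motion
print("F . v =", dot(F, v))  # 0.0 -- no work is done by this force
```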
In summary, the magnetic force on a charged particle is perpendicular to the particle's direction of motion (as well as the direction of the magnetic field). In other words, no component of the force is ever in the same direction as that in which the particle is moving, and therefore no work can ever be done by this force. (This makes perfect sense if you remember the definition of work, which was stated in the previous blog entry.) As an example, if you had an electron (somehow made visible) traveling from the front towards the back of your desk, and you turned on a magnetic field that was uniform and pointing "down", i.e. perpendicular to the desk and straight through the table top, then the electron would immediately get tugged sideways (to the right, as it happens, because of its negative charge) while continuing to move forward, and it would fall into a circular path. Note that as soon as the electron curves a bit along this circular path, the force rotates with it, always pointing from the electron towards the center of the circle. As an analogy, picture yourself whirling a ball tied to a string around your head. You hold on to one end of the string and raise your arm and get the ball swinging about in a circular path. The force here, exerted by the string on the ball, like the magnetic force acting on the electron, is always perpendicular to the direction of the ball's motion. It's a centripetal force, in other words. It alters the direction of the ball, but it certainly doesn't do work on it. The math here is straightforward, but the idea can seem hard to swallow in certain physical examples.
Let's say I take a magnet from the refrigerator and place it over a paper clip lying on my desk. The paper clip "jumps" to the magnet and sticks to it. Is it really true that the magnet's magnetic force did no work in moving the paper clip the few centimeters from the desk top to the magnet in my hand? Yes, because magnetic forces never do work. So then what did the work? What we're going to find is that, like the normal force mentioned earlier, the magnetic force redirects work done by another agent. But what is this other agent and how does the magnetic force redirect its work?
I'm now going to attempt an explanation of what it is that actually does the work in this example. Now, no good teacher would ever introduce a concept and then quickly jump to a difficult and confusing example involving that concept. They would cover some easy examples first and work their way up to a non-intuitive, challenging problem. But I want to jump right into the difficult explanation as to how the paper clip is pulled upwards towards the magnet. I'll begin this explanation in the next blog entry because this one is long enough.
Tuesday, September 1, 2009
A Bit on Work: Part I of III
The concept of work has a specific meaning in the sciences. It is best described by how it is calculated: work is the force applied to an object in the direction of the object's motion, times the distance the object moves. A force is, more or less, a push or pull. Newton stated that force, mass, and acceleration are linked through the equation F = m a. (He stated it in different terms, but this is how it is normally written today.) Some prefer to write it as a = F/m, to emphasize that a force causes acceleration and not the other way around.
If I push on something (say, a piece of paper taped to a wall) and it does not move, am I applying a force to the paper? Yes, but in this case I am not the only thing applying a force. The wall is pushing on the paper just as hard as I am but in the opposite direction. The net force F on the paper is zero and both the left and right sides of the equation a = F/m are zero and all is well.
Am I doing work on the paper? I'm applying a force but there is no displacement of the paper, so the answer is no. If I manage to push the paper through the wall then indeed I have done work (and I will feel very stupid). New example. Let's say a heavy box is at rest on an incline. I push on it with my arms parallel to the ground (not the incline), and it moves a bit up the incline. Let's say I push with a force of 20 newtons. (Newtons are the SI units of force. One newton is equal to the amount of force required to give a mass of one kilogram an acceleration of one meter per second squared.) Let's also say that the box moves up the incline a meter. Is the work done equal to 20 newtons x 1 meter? No, it's not, because we are concerned not with the overall force but with the force in the direction of the object's motion. We therefore need to mathematically subdivide this overall force (exerted by myself) into a component in the direction of the object's motion (which is of interest to us) and one in the direction perpendicular to this motion (which is not of interest to us). We then multiply the force component in the direction of the object's motion (which is the overall force times the cosine of the incline's angle with respect to the ground) by 1 meter to get the amount of work done. Yes, you can also calculate work by taking the total force I exert and multiplying it by the horizontal displacement of the object, which will be less than 1 meter. By the way, the SI unit of work is the joule, which is equal to, of course, a newton times a meter.
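It's quick to check that the two bookkeeping methods agree. A sketch in Python (the 30-degree incline angle is a value I've assumed for illustration; the numbers above don't specify one):

```python
import math

force = 20.0              # my horizontal push, newtons
distance = 1.0            # distance the box moves along the incline, meters
angle = math.radians(30)  # incline angle with respect to the ground (assumed)

# Method 1: the force component along the incline, times the distance along the incline.
work_1 = (force * math.cos(angle)) * distance

# Method 2: the total horizontal force, times the horizontal displacement.
work_2 = force * (distance * math.cos(angle))

print(work_1, work_2)  # both about 17.3 joules
```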
Let's see if I can confuse you. You push a bag of flour across the kitchen counter. According to Newton's third law, the bag of flour pushes back on you with an equal (in magnitude) but oppositely directed force. Why does the bag of flour move if the forces cancel out?
If you need a hint, take another look at the equation a = F/m. The forces exerted by you and by the bag may be equal in magnitude but your masses surely are not. Therefore you should have different accelerations, which indeed you do. This problem is complicated by the force of friction, which is robbing you of your acceleration and reducing the acceleration of the bag of flour. If you could get rid of the friction, you too would accelerate backwards (but not very quickly because of your relatively large mass). Do you see the difference between this example and the paper against the wall?
I want to talk more about work and a special kind of force that, oddly enough, never does any work. There aren't many forces in nature, so perhaps you can figure out which one I'm talking about. I'll discuss it in my next blog entry. Thanks for reading.
Wednesday, August 5, 2009
Intro to Electricity: Part II
(continues from part 1)...
What to do about the need to locate a power plant near the plant's customers, so that a sufficient amount of energy survives the journey by wire and arrives, ready for use, at the customer's house? Who wants to live near a power plant? Thomas Edison, in trying to electrify New York City, faced this problem. As mentioned in the previous entry, Edison knew that the power reaching the homes of his customers was the product of the current and the drop in voltage (or electric potential) experienced by that current. A small current (or flow of electric charges) subject to a large voltage drop could produce the same amount of power as a large current subject to a small voltage drop. Edison also knew that power was lost (as heat) in the wires in proportion to the square of the current, so power was best delivered using a low current (to minimize power loss during transmission) and a large voltage drop. But, as mentioned in the previous entry, large voltage drops were dangerous things. If a person were to come into contact with a large enough voltage, it could propel a current through the person's body and disrupt the heart's rhythm. Thus the dilemma.
Fortunately, a solution to this problem was found. It goes by the name of alternating current, as opposed to the direct current that Edison worked with, and Edison was quite aware of it. First, what is alternating current? Second, how is it a solution to the problem mentioned previously? And third, why didn't Edison embrace alternating current (AC) as the savior it was?
Alternating current is a current (or flow of electric charges) that periodically reverses direction. Electrons rush right, slow down, stop, then rush left, slow down, stop, then rush right, etc. Power plants get current to do this by alternating the voltage drop that propels the electrons on their way. (So, yes, AC is what power plants deliver today.)
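Here's a small sketch in Python of what that back-and-forth looks like at an outlet, assuming the North American standard of 60 cycles per second (a nominal 120-volt outlet actually swings to about 170 volts at its peaks):

```python
import math

FREQ = 60.0     # cycles per second (hertz) in North America
V_PEAK = 170.0  # peak voltage of a nominal 120 V outlet

def outlet_voltage(t_seconds):
    """Instantaneous outlet voltage, modeled as a sine wave."""
    return V_PEAK * math.sin(2 * math.pi * FREQ * t_seconds)

for i in range(5):
    t = i / (4 * FREQ)  # sample every quarter cycle
    print(f"t = {t * 1000:5.2f} ms: {outlet_voltage(t):+8.1f} V")
# The voltage swings 0, +170, ~0, -170, ~0: those zero crossings are the
# instants (120 of them every second) when the current momentarily stops.
```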
Alternating current's saving grace: it makes use of a special property of electricity (and magnetism) that allows for easy transfer of power from one AC circuit to another. This means the power plant and the electrical outlet in a customer's home don't have to be a part of the same circuit. (Direct current requires everything to be a part of the same circuit. This is of fundamental importance.) Here's the benefit: the various circuits that comprise the AC power distribution system can operate at different voltages with different currents. We can divide the distribution system into three circuits: one that originates within the power plant and terminates just outside the facility, a second that starts where the first ends and stretches for miles, terminating outside a customer's home, and a third that picks up where the second ends and carries power inside the customer's home. Maintaining a near constant level of power throughout the system, we can send high-current, low-voltage power through the first circuit, low-current, high-voltage power through the long second circuit, and then switch back to high-current, low-voltage power for the third and final circuit. The low voltages inside the power plant and inside the home are safe to be around, and even though current is high in these two circuits, the circuits are short and so little power is lost to heating the wires. The long second circuit, carried in wires high off the ground (or beneath the ground), can operate at a very high voltage and, therefore, a very low current, and so loses little power to heating the wires. There's no need to locate a power plant near a customer's home if you can locate the plant far away and transmit power to the home without losing much power along the way.
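A quick numerical sketch in Python shows why that long, high-voltage middle circuit matters so much. The megawatt of power and the 5-ohm line resistance are round numbers I've assumed for illustration:

```python
POWER = 1.0e6  # watts we want to push through the long second circuit (assumed)
LINE_R = 5.0   # total resistance of the transmission line, ohms (assumed)

for volts in (2.4e3, 240e3):
    current = POWER / volts     # power = voltage x current, so I = P / V
    loss = current**2 * LINE_R  # heat lost in the wires grows as the current SQUARED
    print(f"{volts / 1e3:6.1f} kV line: {current:7.1f} A, "
          f"loss = {loss / 1e3:6.1f} kW ({100 * loss / POWER:.2f}% of the power)")
# The low-voltage line wastes most of the power as heat; the high-voltage
# line loses a tiny fraction of a percent.
```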
The physical device that is used to link two AC circuits, and that has the ability to alter the voltage (and current) from one circuit to the next, is called a transformer. You see these on the side of the road all the time. Transformers that increase the voltage from one circuit to the next are called step-up transformers; those that decrease the voltage (and so increase the current) are called step-down transformers. I won't go into how a transformer works in detail, but I'll mention that it relies on a fundamental relationship between electricity and magnetism. In short, accelerating electric charges (like electrons) produce magnetic fields that change in time. And magnetic fields that change in time, in turn, produce electric fields that generate currents in wires. Because of the alternating nature of the current in AC, electrons are constantly accelerating. (Not so in direct current.) And as electrons enter a transformer and accelerate through a coil of wire, they create a time-varying magnetic field that propels a current in a nearby coil of wire, which is part of a second circuit that exits the transformer. When the number of turns (of wire) in the secondary coil is greater than the number of turns in the primary coil, the voltage in the second circuit is stepped up (i.e. it becomes higher than that in the first, or primary, circuit). Likewise, if the secondary coil has fewer turns than the primary coil, the voltage will decrease from the primary to the secondary circuit, and we'll have a step-down transformer. (I'll stop there; the finer details get complicated quickly.)
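For an ideal (lossless) transformer, though, the arithmetic is simple: voltage scales with the turns ratio, current scales inversely, and the power passing through is unchanged. A sketch in Python, with made-up turn counts:

```python
def ideal_transformer(v_in, i_in, turns_primary, turns_secondary):
    """Voltage and current delivered by the secondary coil of an ideal transformer."""
    ratio = turns_secondary / turns_primary
    return v_in * ratio, i_in / ratio  # power (voltage x current) is preserved

# Step-up at the power plant: 100 turns in, 10,000 turns out (made-up numbers).
v, i = ideal_transformer(2400.0, 100.0, 100, 10_000)
print(v, i, v * i)  # 240000.0 V, 1.0 A, and the same 240 kW either way

# Step-down near the customer's home: just swap the coil roles.
print(ideal_transformer(v, i, 10_000, 100))  # back to (2400.0, 100.0)
```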
So why didn't Thomas Edison embrace AC, which would have allowed him to distribute power over great distances with little power lost during the transmission? He viewed its fluctuating nature as exotic and dangerous. Furthermore, he noticed that as the current reverses direction, there is a brief moment in time in which it's stopped. That is, there is a brief moment, many times each second, in which there is no power being delivered. (Is it possible to change your car's motion from 5 mph forward to 5 mph reverse without momentarily stopping?) It also came down to money. Not only were AC transmission lines much more expensive than DC lines, but Edison (and his backers) already had a lot of money invested in DC technology. It, therefore, fell to other pioneers of the late 1800s to develop AC power transmission and make it available to customers.
So what of those moments in time in which no current is flowing (and no power is available)? This is a real problem for most electronic and some electric devices. Not only are many devices sensitive to the direction of electron flow, but many devices need constant power and can't handle moments without it. A simple lamp is not one of these devices. It doesn't matter in which direction current flows as it moves through the lamp's filament. And the lamp survives quite well during those moments when it is without power; it just briefly stops producing light. These moments come and go so quickly the human eye can't detect the flickering, and so it's inconsequential. A radio, on the other hand, is different. Its more-sophisticated interior requires constant power and a current that travels in one direction. That is, it needs direct current, which it can obtain from a power adaptor. Most electronics require power adaptors (either internal or external) for this very same reason: the need to change the AC to DC and, usually, to lower the voltage as well (via a transformer).
Labels:
alternating current,
direct current,
edison,
electricity,
power,
voltage
Wednesday, July 29, 2009
Intro to Electricity: Part I
Electricity is defined by the Microsoft Encarta dictionary as (1) the energy created by moving charged particles and (2) electric current. (These two definitions speak of different things, however. Electric current is not energy, although it can be related to energy by way of a potential difference and time.)
Nevertheless, electricity is most commonly experienced as the flow of electrons in a wire (i.e. an electric current). Prior to the discovery of electrons, scientists thought of this mysterious flow of charge as perhaps that of a fluid. Benjamin Franklin, who died around one hundred years prior to the discovery of the electron, thought of electricity as the flow of a single type of fluid. To him, an electrically neutral object contained just the right amount of this fluid, while a "positively" charged object contained an excess of the fluid and a "negatively" charged object contained a deficiency. Franklin himself arbitrarily chose the labels "positive" and "negative". Other scientists adopted this terminology. Unfortunately, Franklin got it backwards. The "positively" charged object is not positive because it has an excess of "fluid", but because it is deficient in negatively-charged electrons. When a conductor with excess "fluid" comes into contact with a "fluid"-deficient conductor, it is not a flow of positive charge that seeks to equalize the charge of the two conductors but is instead the flow of negatively-charged electrons in the opposite direction. Today, we still (by convention) consider current to flow from the positive to the negative terminal of a battery, which assumes a positively charged current. But we know that the electrons, which are the primary charge carriers, actually travel in the reverse direction, from negative to positive.
In the United States, ordinary wall sockets provide a source of power to run our appliances and electronics. We plug a lamp into an outlet and turn the lamp on. A circuit is completed and electrons travel through the lamp and its light bulb. As they do so, they lose energy to the lamp, powering the lamp. (Power is the transfer of energy per unit time.) So what kind of energy are they losing? It's potential energy, or energy of position. And it comes about because, as we all know, like charges repel one another and unlike charges attract one another. Two electrons positioned next to one another try to move so as to distance themselves from one another. There's a certain energy associated with their proximity to one another, and it's this energy that enables them to start moving apart. As they accelerate away from one another, they trade this energy of position for the energy of motion (i.e. kinetic energy). Energy is, of course, conserved. How this creates a current is easy to understand in terms of a battery. A chemical reaction in the battery causes electrons to accumulate on the negative terminal of the battery. Connecting the two terminals with a wire allows the electrons to distance themselves from one another. They rush away from the negative terminal, heading down the wire towards the positive terminal with its deficiency of electrons. This flow is what we call an electric current. As the electrons do this, they lose their energy of position. (In this case, mostly to energy in the form of heat.)
So does the power plant that provides our electricity send these electrons to our wall sockets, where they wait for us? No.
The power plant sells us not electrons but the ability to move electrons. It sells us a force field that pushes electrons along. It sells us energy of position, or electric potential energy. The potential at a point in space is often called voltage. Say we have two points in space, one with a voltage of 10 volts and one with a voltage of 5 volts. An electron situated at the first point will have a different energy of position than an electron situated at the second point. Take away both electrons, then place one at the point with the voltage of 5 volts. It will spontaneously move towards the point with a voltage of 10 volts, just like a ball spontaneously rolls down a hill. (A positive charge would move in the opposite direction.) This voltage difference or gradient has another name: electric field. An electric field is a voltage drop per unit distance. Take a 9 V battery, with a distance of 0.005 m between its positive and negative terminals. It's called a 9 volt battery because the electric potential of one terminal is 9 volts less than that of the other terminal. Dividing the 9 volt voltage drop by the distance between the terminals (0.005 m), we get an electric field of 1800 volts per meter in the space between the terminals. It's this electric field or voltage gradient that we pay the power plant for.
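Here's that last calculation spelled out in a few lines of Python, just to make the arithmetic explicit:

# E = (voltage drop) / (distance), for the 9 V battery described above.
voltage_drop = 9.0      # volts between the terminals
terminal_gap = 0.005    # meters between the terminals
electric_field = voltage_drop / terminal_gap
print(f"{electric_field:.0f} volts per meter")   # 1800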
Back to the lamp. Electrons already exist in the wires that run through the lamp and down its cord. Plugging the lamp into the wall and turning it on simply gets these electrons moving. And not very fast. They drift along at far less than walking speed - typically a small fraction of a millimeter per second - because they keep running into the atoms of the wire. But electrons are small, and lots of them fit into a very small bit of wire. When one amp of current is flowing through a wire, about 6 quintillion (a 6 with 18 zeros behind it) electrons pass a given point in one second. (An amp is the basic unit used in the measurement of current.)
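Where does the 6 quintillion figure come from? One amp is one coulomb of charge flowing past per second, and each electron carries about 1.602 x 10^-19 coulombs. A quick sketch:

ELECTRON_CHARGE = 1.602e-19    # coulombs carried by one electron
current = 1.0                  # one amp = one coulomb per second
electrons_per_second = current / ELECTRON_CHARGE
print(f"{electrons_per_second:.2e} electrons per second")   # ~6.24e18, i.e. ~6 quintillion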
We've talked of voltage gradients (or differences in electric potential energy) and currents. Both are needed to define power. Power is the product of the two: a voltage difference times a current. One amp of current dropping one volt is equal to one watt of power. So we see there are two ways to increase power to a device. We can increase the voltage gradient across it ... or increase the current that flows through it. In other words, we can increase the amount of energy each electron loses as it passes through the device ... or we can increase the number of electrons passing through that device. (Or both.) This leads us to Thomas Edison.
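Before we get to Edison, here's the watt rule as a tiny sketch, showing the two ways to raise power:

def power_watts(voltage_drop, current_amps):
    """Power = voltage drop across the device times current through it."""
    return voltage_drop * current_amps

print(power_watts(1.0, 1.0))   # 1 W: one amp dropping one volt
print(power_watts(2.0, 1.0))   # 2 W: double the voltage drop...
print(power_watts(1.0, 2.0))   # 2 W: ...or double the current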
Edison began to electrify New York City in 1882. He built power plants in the city that would send out a current through one wire and return it to his generators through another. The current flowed in one direction around the loops of copper wire he had laid between his plant and the houses of his customers. That is, it was direct current. But Edison quickly became aware of a fundamental problem with the setup. Power being sent out along the copper wires was being lost as heat (heating the wires), reducing the amount of power reaching the homes of his customers. The longer the wires (i.e. the farther the customer was from the power plant), the more power loss. Edison needed a way to increase the power reaching these distant customers. As we've already seen, he had two options: increase the voltage gradient or increase the current. But both had drawbacks. It was known that the power loss was proportional to the square of the current passing through the wires, so increasing the current caused even more power to be lost. Doubling the current quadrupled the power wasted as heat. Edison did try using thicker wires, which have less resistance and so waste less power at a given current, but he realized that increasing the voltage gradient that pushed the electrons along was a better option. And so he did this. But high voltages were quite dangerous. They tended to create sparks and nasty shocks. So Edison could only raise the voltage gradient to a certain level before it became just too dangerous to send through someone's home. Foiled on both counts, Edison did the best he could: he used thick wires, used the highest voltages that safety would allow, and he built power plants near his customers so the transmission lines wouldn't need to be very long. But who wants to live next to a power plant?
to be continued ...
Sunday, July 19, 2009
Let There Be Light!
[ Note: I use "light" and "electromagnetic radiation" interchangeably. So when I say light, I don't just mean visible light, unless I say I do. =P ]
Light comes in one form, but it interacts with matter in such complex ways that it appears there are different forms. By one form, I mean that light is a self-sustaining fluctuation, or disturbance, in the electric and magnetic fields, together called the electromagnetic field, that is created by a moving charge, like an electron. The disturbance, called an electromagnetic wave, arises when the charge changes how fast it's moving or in what direction it's moving (or both), either case qualifying as an acceleration. (You might picture an electron accelerating upwards within a metal antenna. Such an act would cause both the electric and magnetic fields surrounding the antenna to fluctuate, and this fluctuation or disturbance flows outwards sort of like a water wave expanding out from the point at which a stone is dropped into a pond.) Once the fluctuation starts, it won't stop until it encounters matter that can absorb its energy. It very well could travel through the vacuum of outer space for a billion years. (Remember, it's self-sustaining.) The disturbance, which in some cases appears wavelike and at other times acts as a particle, flies through empty space at a constant speed. In 1983, the meter was redefined such that light travels (in vacuum) exactly 299,792,458 meters in one second. Using more familiar units, this is about 186,000 miles per second. This value - more so, this concept - is special in that it is believed to be constant regardless of place, time, or the motion of the observer. Contrast this with a car traveling a steady 50 mph (as indicated on its speedometer) on the highway. Placed in the car's passenger seat is a suitcase. Relative to a stationary observer on the side of the road, the suitcase is moving 50 mph down the road, but the driver of the car observes the suitcase as stationary. We conclude that the speed of the suitcase (as well as the car) is not constant; i.e. it can vary depending on your frame of reference. Not so with light, which appears to be going the same speed no matter how fast you move or what you're doing or where you are.
When people say that the speed of light is constant, they mean it's constant in vacuum (empty space). Light actually travels at speeds different from 299,792,458 meters per second (represented by the letter c, as in E = mc2) in different materials. For example, visible light slows down when it travels through water or glass. We assign a number to each material which indicates by how much light is slowed when it travels through the material, relative to how fast it travels in vacuum (i.e. relative to c). The number, called the index of refraction (and labeled n), is the ratio of c to the speed v in the material. Glass has an index of refraction of about 1.5, which means that light travels at only 2/3rds of c (i.e. about 124,000 miles per second) within glass. (In special cases, light, though not visible light, can travel faster than c in a material!)
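A quick sketch of the v = c / n relationship, using the rounded 186,000 miles-per-second figure from above. (The water value, n of about 1.33, is a standard figure I'm adding here, not one from the paragraph above.)

C_MILES_PER_SEC = 186_000     # speed of light in vacuum, rounded as in the text

def speed_in_material(n):
    """v = c / n, where n is the material's index of refraction."""
    return C_MILES_PER_SEC / n

print(f"glass (n = 1.5):  {speed_in_material(1.5):,.0f} miles per second")   # ~124,000
print(f"water (n = 1.33): {speed_in_material(1.33):,.0f} miles per second")  # ~139,850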
These fluctuations in the electromagnetic field, which make up a pulse of light traveling through space, can vary in some ways. One of these ways is in how fast the fluctuations fluctuate. Not how fast the whole "packet" is moving, which is c in vacuum, but how fast the electric and magnetic fields grow and shrink as the packet moves along. If they cycle 384,000,000,000,000 times per second, then the light looks red to us. That is, the light interacts with the eye in a way that the brain interprets as "red." If they cycle 520,000,000,000,000 times per second, then the light looks green. If they cycle 10,000,000,000 times per second, then you can't see the light but it can cook your food, as these are microwaves, which are readily absorbed by the water molecules in your food; this is how food is heated in a microwave oven. Radio waves oscillate some 1,000,000 times per second (and at other rates, or frequencies, a bit above and below this number). Other types of light, or electromagnetic radiation, are gamma rays, X rays, ultraviolet light, and infrared light. Visible light falls between ultraviolet and infrared, when the categories are ordered according to frequency, as they are in the preceding list (with gamma rays having the highest frequencies). Microwaves and radio waves follow infrared in this list. When you tune your car's radio to FM 94.7, this means the electromagnetic waves delivering your music are fluctuating at 94,700,000 times per second. (Compared to the frequency of, say, red light, this isn't that fast.) These categories are arbitrary, though, and don't correspond to any natural breakpoints in what is a continuous range of frequencies from the very tiny to the enormous.
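Frequency also fixes a wavelength: in one cycle, the wave travels a distance of c divided by its frequency. Here's a small sketch using the frequencies quoted above:

C = 299_792_458.0   # meters per second (exact, by the 1983 definition above)

def wavelength_m(frequency_hz):
    """How far the wave travels in one cycle: wavelength = c / f."""
    return C / frequency_hz

for name, f in [("red light", 384e12), ("green light", 520e12),
                ("microwaves", 10e9), ("FM 94.7", 94.7e6)]:
    print(f"{name:>12}: {wavelength_m(f):.3g} meters")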
When light travels from one transparent material into a different transparent material (with a different index of refraction), it either slows down or speeds up. We already saw that light slowed down when traveling from air into glass. At the interface between the two materials, light also changes direction. This is called refraction. This is apparent when you place a drinking straw in a glass of water. The portion of the straw beneath the surface of the water does not appear to be aligned with the portion above the surface of the water, when the glass is viewed from certain angles. When you look straight down into a body of water, any object in the water appears at only 3/4ths of its true depth. This, too, is due to refraction.
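The straight-down rule of thumb is that the apparent depth is the true depth divided by the index of refraction; with water's n of about 1.33, dividing by n gives roughly 3/4ths. A sketch, with a made-up 2 meter depth:

N_WATER = 1.33        # index of refraction of water
true_depth = 2.0      # meters (a made-up example)
apparent_depth = true_depth / N_WATER   # straight-down approximation
print(f"{apparent_depth:.2f} m")        # ~1.50 m, about 3/4ths of the true depth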
As mentioned before, light sometimes acts as a collection of particles, called photons. Each photon carries an amount of energy equal to its frequency times a constant called Planck's constant, so green light is more energetic than red light because it has a higher frequency. Blue light is more energetic than red or green light, or orange or yellow, for that matter, because it has a higher frequency than any of these other colors. This plays a role in why the sky is blue. First, remember that sunlight is composed of all different colors of light. (Visible light, together with ultraviolet (UV) and infrared, make up 99% of sunlight.) The molecules of nitrogen and oxygen, etc. that make up the upper atmosphere of the Earth find it much easier to absorb the blue light component of the sunlight at its high frequency (and high energy level) than the other colors of light; the strength of this scattering grows steeply with frequency, roughly as its fourth power. (They actually prefer ultraviolet and violet light, but there's not a lot of violet or UV in sunlight. Fortunately for us, the sunlight that makes it to Earth is only about 6% UV, and ozone absorbs most of it. Some of the UV that does make it down to ground level can cause sunburn and skin cancer.) When one of these air molecules has absorbed a photon of blue light, it then immediately re-emits it in a random direction. So blue light is sucked up by countless air molecules as it streams in from the sun, and then it's spit out in all directions, illuminating the sky. This happens to a lesser degree with the other colors of light, which for the most part pass through the atmosphere unscattered.
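To attach numbers to "more energetic," here's a sketch of E = h x f using Planck's constant. The red and green frequencies are the ones quoted earlier; the blue figure is my own rough value.

PLANCK = 6.626e-34    # Planck's constant, in joule-seconds

def photon_energy(frequency_hz):
    """E = h * f: a photon's energy is Planck's constant times its frequency."""
    return PLANCK * frequency_hz

for color, f in [("red", 384e12), ("green", 520e12), ("blue", 640e12)]:
    print(f"{color:>5}: {photon_energy(f):.2e} joules per photon")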
Tuesday, July 14, 2009
The Misuse of Standardized Tests in America
There seems to be a national focus on using standardized achievement tests to not only compare students' knowledge and/or skills with those of other students across the country, but to judge the quality of education in this country and to label schools as high-performing or low-performing. This is a mistake.
Standardized achievement tests are not accurate measures of educational quality for several reasons. First, there is a rather significant amount of diversity in curricula across the country, and this is based on varying ideas as to what is most important to know. Whether or not this is a good thing is debatable. Nevertheless, the American school system is designed to maximize state and local control of curricula, as opposed to there being a national standard. The problem arises when a single one-size-fits-all test is administered to students (say, 8th graders) nationwide. A single standardized test cannot possibly align with only those topics taught in the classroom, in each classroom in America, as the list of topics varies. (Perhaps it does not vary a great deal, but it certainly varies.) One may argue that there is a core set of ideas that ALL students should know, and a standardized test could test just these ideas. I would agree, but again, there are no national standards as to what students should know, and who is to say that Company X or Company Y should decide what everyone should know and when they should know it? In any case, at present (and likely into the future) different schools have slightly different objectives, and a single standardized test administered nationally is not going to measure whether students have met the objectives decided upon by their local or state administrators. This makes such a test invalid for judging quality of education in specific schools or districts.
Second, standardized achievement tests are designed such that test items (or questions) are answered correctly by about half of test-takers. This is done for statistical reasons; primarily, it spreads out students' test scores, making it easier to rank the students. If a question is answered correctly by most students, the question will likely be dropped from the test, as it doesn't help differentiate between the students. But ... the questions answered correctly by most students nationally generally cover the most important topics, i.e. topics that were stressed by teachers. The company that produces and markets the standardized test has an incentive to use questions concerning less-important concepts. Does such a test truly measure educational quality?
Finally, such tests often, perhaps inadvertently, measure things that students do not learn in school. Questions often test a student's innate intelligence and out-of-school learning. Now, I'm not really comfortable asserting that some people are inherently smarter than others, but it does seem reasonable that not all people are born with the exact same capacity for math, or languages, or art. Should a school be penalized for failing to teach students something that, by definition, cannot be taught? Regarding out-of-school learning, students are born into different socioeconomic classes and are raised by different parents, both of which lead to different life experiences. If a kid has never been taken to the beach before, perhaps for financial reasons, and a test question asks something about ocean waves, he or she may be at a disadvantage compared to other students that have been to the beach. Such questions do exist on standardized tests, and they invalidate the test as a measure of educational quality. Why are they put on the test? Quite likely because it is known that not all students will be able to draw on the same experiences, and this can be exploited to find those ideal test questions that are answered correctly by only one-half of all students.
I don't think we should do away with all standardized testing, but we need to understand what it is that we are testing and not misuse test results. The current system is not working. Test results are being misused. Teachers are feeling pressured to "teach to the test." Schools are forced to hyper-focus on a single, arbitrary measurement of student knowledge for political and financial reasons. Schools shouldn't live or die based on these kinds of test results. This focus on standardized testing is not making American students any smarter. We need to either improve the tests (which cannot be left up to the companies that currently produce them) or change the way we use their results.
(BTW, I found a lot of useful information for this essay in an article written by W. James Popham for the March 1999 issue of Educational Leadership.)
Labels:
learning,
standardized achievement tests,
teaching,
testing
Wednesday, July 8, 2009
Sound Makes Cold
There's a fairly new technology available for cooling a refrigerator or freezer. It uses sound waves to transfer heat from within the device to outside the device!
Today's refrigerators use a compressor that compresses a refrigerant gas (likely HFCs, or hydrofluorocarbons), increasing its pressure and temperature. A fan then blows air over pipes (condenser coils) holding this warm high-pressure gas, and as heat transfers to the outside air the refrigerant condenses into a liquid (and becomes somewhat cooler). The cooler liquid then travels through an expansion valve that allows the liquid to expand and evaporate (as its pressure decreases). During this process of evaporation, the refrigerant absorbs heat from the air inside the refrigerator, cooling it. The refrigerant, now in a low-pressure gas state and somewhat warmer, completes the cycle by flowing back to the compressor.
So how can you cool a refrigerator using sound waves? First, you have to know what sound waves really are. They're pressure waves. In other words, they are variations in pressure in some medium (like air), over some distance (really, volume). They are alternating "bands" of high and low pressure, with lots of air molecules packed together in the high pressure "bands" and relatively few molecules of air in the low pressure "bands". They move through the air like a shock wave moves through a horizontal Slinky. When a sound wave passes through a room, it passes energy along to some air molecules, which then forward the energy to nearby air molecules, which do the same thing, and on and on. (Again, like the Slinky.) Each individual air molecule oscillates over a very short distance and does NOT follow along with the wave. Sound is not a way to transfer individual air molecules from one place to another. It IS a way to transfer a pressure variation (in the medium) from one place to another.
When you hit a drum, the membrane vibrates up and down. It moves up, pushing air molecules forward and out of its way, forcing them closer to the air molecules that were just above them, creating a thin high pressure zone that then propagates forward at some speed characteristic of the medium (in this case, air). Then the membrane moves down, creating a semi-vacuum, opening up a space with few air molecules in it, creating a low pressure zone. Then it moves back up and produces another high pressure zone. This cycle continues as long as the drum's membrane vibrates, and these alternating high and low pressure zones continue to move outwards in basically all directions at a speed of about 760 mph.
Now, when gas is compressed, not only does its pressure rise, but also its temperature. For air compressions associated with normal human speech, the temperature change is minuscule, only about one ten-thousandth of a degree Celsius. The temperature change in something like the HFCs, mentioned earlier, as they move through the refrigerator's compressor, is much greater. (The compressor is obviously much more powerful than our vocal cords.) To make the sound wave useful for transferring a significant amount of heat, it needs to be able to handle a larger temperature span. This can be accomplished in two ways. The first way is to use more intense pressure waves (i.e. crank up the volume). The second way is to put the gas (i.e. the air molecules) in contact with a solid material. If a gas carrying a sound wave is placed near a solid surface, the solid will tend to absorb the heat of compression (i.e. the heat associated with the temperature increase, brought about by the pressure increase), keeping the temperature stable. The opposite is also true: The solid releases heat into the gas when the gas expands, preventing it from cooling down as much as it otherwise would.
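For the curious, the textbook gas-law result behind this (my addition, standard thermodynamics rather than anything specific to refrigeration) says that a quick compression with no heat exchanged raises the temperature as T2 = T1 x (P2/P1)^((gamma - 1)/gamma), with gamma about 1.4 for air. A sketch:

GAMMA = 1.4    # ratio of specific heats for air

def temp_after_compression(t1_kelvin, pressure_ratio):
    """Temperature after a quick (no heat exchanged) compression or expansion:
    T2 = T1 * (P2/P1) ** ((gamma - 1) / gamma)."""
    return t1_kelvin * pressure_ratio ** ((GAMMA - 1) / GAMMA)

room = 293.0   # about 20 degrees Celsius, in kelvins
# Ordinary speech changes the pressure by very roughly one part in a million:
print(f"{temp_after_compression(room, 1.000001) - room:.6f} K")   # ~0.0001 K, tiny
# A compressor raising the pressure fivefold is another story:
print(f"{temp_after_compression(room, 5.0) - room:.0f} K")        # ~170 K hotter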
OK, so picture a long rectangular plate (perhaps metal, perhaps plastic) with an intense sound wave traveling along its surface. (Picture the wave as traveling from left to right.) When the sound wave first reaches the plate, say its phase is just coming off a high pressure zone. The gas near that end of the plate then starts to expand, forming a low pressure zone. As the gas expands, it extracts heat from the solid with which it is in contact. This heat (energy) is then passed forward by the sound wave. The wave then enters a period of high pressure, a bit farther down the plate, and as the air molecules in that region are compressed, they pass on their heat to the solid surface. The wave has now relocated a bit of heat from one end of the plate to the other end (or at least a point a bit farther down the plate).
You should now be asking yourself: won't a high pressure zone follow the initial low pressure zone at the front of the plate, passing on heat to that end of the plate and offsetting the transfer of heat just accomplished? You'd be right if the structure weren't designed to alleviate this problem. Take a look at the following picture:
[Image: a tube with a red curve marking the standing pressure wave inside it; a dashed line marks normal atmospheric pressure.]
Even though it doesn't look like it, let's pretend that the left end of the tube is open and the right end is sealed shut. Now when a sound wave enters the tube from the left, it travels to the closed end and, having nowhere else to go, is reflected back towards the left end. The red line here is a graph of sorts. It marks the magnitude of the pressure above or below the "normal" or atmospheric pressure, which is indicated by the dashed line. The wave, entering the tube, follows the upper red line, and pressure grows until it reaches a maximum at the right end of the tube. The air is piled up at the right end of the tube, hence the high pressure zone. When the air pushes into the wall at the tube's end, the wall pushes back, and the air starts moving back towards the left. It now rushes away, creating a low pressure zone at the tube's end. We now follow the lower red line back towards the left of the tube, as the pressure difference between the wave and the "normal" pressure is reduced until they equal one another at the "node" at the left end of the tube. When successive waves are timed just right so that they always follow this pattern, we obtain a "standing wave." That is, waves reinforce one another and don't cancel out. We say the standing wave is resonant or has a resonant frequency. This property of the system allows our sound-based refrigerator to work. We can alter the frequency of the sound waves in our device until we find one that supports a standing wave, and this way we can control where along a closed tube, as well as our plate, we have a high pressure zone and where we have a low pressure zone. We can ensure that we always have an expanding pressure zone (primed for compression) at the front of the plate. And we can ensure that we always have a fully compressed zone at the end of the plate. The fact that we can change the length and position of the plate, within the tube, makes this task easier. (I'm simplifying a bit here, for brevity.)
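If you want to compute where those resonances fall: for a tube open at one end and closed at the other, the standing waves are the ones where an odd number of quarter wavelengths fits the tube's length. A sketch with a made-up 25 cm tube and the usual speed of sound in room-temperature air:

SPEED_OF_SOUND = 343.0    # meters per second in room-temperature air

def resonant_frequencies(tube_length_m, count=3):
    """Resonances of a tube open at one end and closed at the other:
    only odd multiples of a quarter wavelength fit, so f = (2k - 1) * v / (4 * L)."""
    return [(2 * k - 1) * SPEED_OF_SOUND / (4 * tube_length_m)
            for k in range(1, count + 1)]

for f in resonant_frequencies(0.25):    # a hypothetical 25 cm tube
    print(f"{f:.0f} Hz")                # 343, 1029, 1715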
Let's restrict our focus to a single parcel of gas, situated in an enclosed tube with a plate running along some portion of the tube. Remember, each parcel of gas moves over a very small range, back and forth; parcels do not move along with the wave. As a wave [of energy] approaches, it forces a parcel of gas to expand, lowering the temperature of the gas parcel to something less than the temperature of the plate. Heat then flows into the parcel, in an attempt to equalize the temperature, and this causes further expansion of the parcel. This heat is then carried by the parcel a short distance forwards (perhaps a centimeter) and is passed along to another gas parcel. Like a bucket brigade, heat is passed along until it reaches a point in the standing wave that corresponds to high pressure, where the parcel is compressed, raising its temperature to something above the temperature of the plate at that point. Heat then moves from the gas parcel into the plate, in an attempt to equalize the temperature. This happens again and again, during each cycle of the acoustic wave, creating a cold end of the plate and a hot end of the plate. Heat exchangers can then be placed at each end of the plate. These may be pipes filled with something like antifreeze, which transfer heat into or out of the plate. At the cold end of the plate, the antifreeze is in a pipe that runs through the walls of the refrigerator box, pulling heat from the air in the refrigerator and dumping it into the plate, from which the heat is extracted and carried away by the sound wave. The hot end of the plate is placed alongside a separate pipe (also containing some fluid, such as antifreeze). The fluid here absorbs heat from the plate and carries it away, to a section of pipe that is in contact with the outside air, and over which a fan blows. Heat then flows out of the fluid-filled pipe and into the air in the room, leaving the fluid cooler and ready to return to the plate for another round of heat transfer.
Though somewhat complicated, this design allows for the construction of a refrigerator that has few moving parts and that, perhaps most importantly, doesn't rely on HFCs, which are greenhouse gases (that have the potential to find their way out of the refrigerator's pipes and into the atmosphere). Furthermore, a sound-based refrigerator is unlike a conventional compressor-based refrigerator, in that it can run at full force or less. That is, it can be adjusted in real-time to run at precisely the appropriate level for the desired temperature and heat load. A conventional refrigerator's cooling system is either on (at full-force) or off. The current drawback to thermoacoustic refrigerators (as they're called): they're not very efficient. They use considerably more electricity than conventional models. If they can be made more efficient, we may start to see them in the marketplace. (But don't hold your breath.)
Today's refrigerators use a compressor that condenses some refrigerant (likely HFCs, or hydrofluorocarbons), increasing its pressure and temperature. A fan then blows air over pipes (condenser coils) holding this warm high-pressure gas, and as heat transfers to the outside air the refrigerant condenses into a liquid (and becomes somewhat cooler). The cooler liquid then travels through an expansion valve that allows the liquid to expand and evaporate (as its pressure decreases). During this process of evaporation, the refrigerant absorbs heat from the air inside the refrigerator, cooling it. The refrigerant, now in a low-pressure gas state and somewhat warmer, completes the cycle by flowing back to the compressor.
So how can you cool a refrigerator using sound waves? First, you have to know what sounds waves really are. They're pressure waves. In other words, they are variations in pressure in some medium (like air), over some distance (really, volume). They are alternating "bands" of high and low pressure, with lots of air molecules packed together in the high pressure "bands" and relatively few molecules of air in the low pressure "bands". They move through the air like a shock wave moves through a horizontal Slinky. When a sound wave passes through a room, it passes energy along to some air molecules, which then forward the energy to nearby air molecules, which do the same thing, and on and on. (Again, like the Slinky.) Each individual air molecule oscillates over a very short distance and does NOT follow along with the wave. Sound is not a way to transfer individual air molecules from one place to another. It IS a way to transfer a pressure variation (in the medium) from one place to another.
When you hit a drum, the membrane vibrates up and down. It moves up, pushing air molecules forward and out of its way, forcing them closer to the air molecules that were just above them, creating a thin high pressure zone that then propagates forward at some speed characteristic of the medium (in this case, air). Then the membrane moves down, creating a semi-vacuum, opening up a space with few air molecules in it, creating a low pressure zone. Then it moves back up and produces another high pressure zone. This cycle continues as long as the drum's membrane vibrates, and these alternating high and low pressure zones continue to move outwards in basically all directions at a speed of about 760 mph.
Now, when gas is compressed, not only does its pressure rise, but also its temperature. For air compressions associated with normal human speech, the temperature change is miniscule, only about one ten-thousandth of a degree Celsius. The temperature change in something like the HFCs, mentioned earlier, as they move through the refrigerator's compressor, is much greater. (The compressor is obviously much more powerful than our vocal cords.) To make the sound wave useful for transferring a significant amount of heat, it needs to be able to handle a larger temperature span. This can be accomplished in two ways. The first way is to use more intense pressure waves (i.e. crank up the volume). The second way is by putting it (i.e. the air molecules) in contact with a solid material. If a gas carrying a sound wave is placed near a solid surface, the solid will tend to absorb the heat of compression (i.e. the heat associated with the temperature increase, brought about by the pressure increase), keeping the temperature stable. The opposite is also true: The solid releases heat into the gas when the gas expands, preventing it from cooling down as much as it otherwise would.
OK, so picture a long rectangular plate (perhaps metal, perhaps plastic) with an intense sound wave traveling along its surface. (Picture the wave as traveling from left to right.) When the sound wave first reaches the plate, the phase is, say, coming off a high pressure zone. The air molecules floating near that end of the plate start to expand, forming a low pressure zone. As the gas now expands, it extracts heat from the solid with which it is in contact. This heat (energy) is then passed forward by the sound wave. The wave then enters a period of high pressure, a bit farther down the plate, and as the air molecules in that region are compressed, they pass on their heat to the solid surface. The wave has now relocated a bit of heat from one end of the plate to the other end (or at least a point a bit farther down the plate).
You should now be asking yourself, won't a high pressure zone follow the initial low pressure zone, at the front of the plate, passing on heat to that end of the plate, offsetting the transfer of heat just accomplished. You'd be right if the structure wasn't designed to alleviate this problem. Take a look at the following picture:
Even though it doesn't look like it, let's pretend that the left end of the tube is open and the right end is sealed shut. Now when a sound wave enters the tube from the left, it travels to the closed end and, having nowhere else to go, is reflected back towards the left end. The red line here is a graph of sorts. It marks the magnitude of the pressure above or below the "normal" or atmospheric pressure, which is indicated by the dashed line. The wave, entering the tube, follows the upper red line, and pressure grows until it reaches a maximum at the right end of the tube. The air is piled up at the right end of the tube, hence the high pressure zone. When the air pushes into the wall at the tube's end, the wall pushes back, and the air starts moving back towards the left. It now rushes away, creating a low pressure zone at the tube's end. We now follow the lower red line back towards the left of the tube, as the pressure difference between the wave and the "normal" pressure is reduced until they equal one another at the "node" at the left end of the tube. When successive waves are timed just right so that they always follow this pattern, we obtain a "standing wave." That is, waves reinforce one another and don't cancel out. We say the standing wave is resonant or has a resonant frequency. This property of the system allows our sound-based refrigerator to work. We can alter the frequency of the sound waves in our device until we find one that supports a standing wave, and this way we can control where along a closed tube, as well as our plate, we have a high pressure zone and where we have a low pressure zone. We can ensure that we always have an expanding pressure zone (primed for compression) at the front of the plate. And we can ensure that we always have a fully compressed zone at the end of the plate. The fact that we can change the length and position of the plate, within the tube, makes this task easier. (Im simplifying a bit here, for brevity.)
Let's restrict our focus to a single parcel of gas inside an enclosed tube, with a plate running along some portion of the tube. Remember, each parcel of gas oscillates over a very small range, back and forth; parcels do not travel along with the wave. As a wave [of energy] approaches, it forces a parcel of gas to expand, lowering the parcel's temperature below the temperature of the plate. Heat then flows from the plate into the parcel, in an attempt to equalize the temperatures, and this causes further expansion of the parcel. The parcel carries this heat a short distance forwards (perhaps a centimeter) and passes it along to another gas parcel. Like a bucket brigade, heat is handed along until it reaches a point in the standing wave that corresponds to high pressure, where the parcel is compressed, raising its temperature above the temperature of the plate at that point. Heat then moves from the gas parcel into the plate, again in an attempt to equalize the temperatures. This happens over and over, during each cycle of the acoustic wave, creating a cold end of the plate and a hot end of the plate. Heat exchangers can then be placed at each end of the plate. These may be pipes filled with something like antifreeze, which transfer heat into or out of the plate. At the cold end, the antifreeze runs through a pipe that passes through the walls of the refrigerator box, pulling heat from the air inside the refrigerator and dumping it into the plate, from which it is extracted and carried away by the sound wave. The hot end of the plate sits alongside a separate pipe (also containing some fluid, such as antifreeze). The fluid here absorbs heat from the plate and carries it away to a section of pipe that is in contact with the outside air and over which a fan blows. Heat then flows out of the fluid-filled pipe and into the air in the room, leaving the fluid cooler and ready to return to the plate for another round of heat transfer.
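As a cartoon of that bucket brigade, here's a toy model; this is not a real acoustics simulation, and every number in it is arbitrary. Each acoustic cycle, every parcel takes a little heat from its patch of plate and drops it on the next patch toward the high-pressure end. Interior patches lose and gain equal amounts, but the first patch only gives and the last only receives.

```python
# Toy "bucket brigade": parcels shuttle a fixed dollop of heat one patch
# down the plate per acoustic cycle. Interior patches pass heat straight
# through, so only the two ends drift in temperature.

n_patches = 10
plate = [20.0] * n_patches  # plate temperature along its length, deg C
dq = 0.0005                 # temperature shift per hand-off (arbitrary)

for cycle in range(1000):
    for i in range(n_patches - 1):
        plate[i] -= dq      # parcel absorbs heat from this patch...
        plate[i + 1] += dq  # ...and dumps it on the next one over

print(" ".join(f"{t:.1f}" for t in plate))
# 19.5 20.0 20.0 20.0 20.0 20.0 20.0 20.0 20.0 20.5
```

After a thousand cycles the left end has cooled and the right end has warmed, which is the cold-end/hot-end separation the heat exchangers then tap into.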
Though somewhat complicated, this design allows for the construction of a refrigerator that has few moving parts and that, perhaps most importantly, doesn't rely on HFCs, which are greenhouse gases (with the potential to find their way out of the refrigerator's pipes and into the atmosphere). Furthermore, unlike a conventional compressor-based refrigerator, a sound-based refrigerator doesn't have to run at full force: it can be adjusted in real time to run at precisely the level required for the desired temperature and heat load, whereas a conventional refrigerator's cooling system is either on (at full force) or off. The current drawback to thermoacoustic refrigerators (as they're called) is efficiency: they use more electricity than conventional models for the same amount of cooling. If they can be made more efficient, we may start to see them in the marketplace. (But don't hold your breath.)
Labels: pressure waves, sound, thermoacoustic refrigeration
Thursday, July 2, 2009
Happy July 2nd!
It was on July 2, 1776 that the Continental Congress voted on the resolution of independence and declared the American colonies independent; twelve colonies voted in favor, while New York abstained. John Adams said, "The second day of July 1776 will be the most memorable epocha in the history of America."
Two days later, on the 4th, a second vote was taken, this time approving the text of the Declaration of Independence. Again, twelve colonies voted in favor, while New York abstained. The Declaration itself, the actual document, was at first signed by only the President and Secretary of the Congress. Not until August 2nd did a final, elegant copy of the document receive the signatures of the rest of Congress.
Tuesday, June 30, 2009
Gravity: an Introduction
Gravity is one of the four fundamental forces in nature. Everyone knows that it is a mutual force of attraction between any two bodies that have mass. Yes, there is a gravitational attraction between all macroscopic bodies, including you and the chair you're sitting in, or your computer and its monitor. For such small objects, however, the force of gravity between them is so weak that it produces no noticeable movement. Though much stronger, the gravitational force of the Earth on you and everything in sight is still incredibly weak. If we compare the strength of the gravitational force to one of the other three fundamental forces, say electromagnetism, we find the electromagnetic force to be some 1,000,000,000,000,000,000,000,000,000,000,000,000,000 (10^39) times stronger. If you lined up this many atoms, end to end, they would stretch across the observable universe more than a hundred times. Gravity is so weak that a small kitchen magnet can hold a photo to the refrigerator against the gravitational pull of the entire Earth. (The gravitational force between two objects is proportional to the product of their masses and inversely proportional to the square of the distance between them.)
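To see those proportionalities in action, here's a small sketch of Newton's law of gravitation, F = Gm₁m₂/r². The masses and separation of the two people are illustrative assumptions; the point is the contrast between the feeble pull of everyday objects on each other and the Earth's pull on one of them.

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r**2

G = 6.674e-11       # gravitational constant, N*m^2/kg^2
M_EARTH = 5.972e24  # mass of the Earth, kg
R_EARTH = 6.371e6   # radius of the Earth, m

def grav_force(m1, m2, r):
    """Gravitational attraction (N) between masses m1, m2 (kg) at distance r (m)."""
    return G * m1 * m2 / r**2

# Two 70 kg people sitting one meter apart (illustrative figures):
print(f"person on person: {grav_force(70, 70, 1):.1e} N")             # ~3e-7 N
print(f"Earth on person:  {grav_force(M_EARTH, 70, R_EARTH):.0f} N")  # ~690 N
```

A third of a micronewton versus roughly seven hundred newtons: both gravitational, and both tiny next to the electromagnetic forces holding that kitchen magnet to the refrigerator door.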
Gravity's relative weakness, compared to the other fundamental forces, baffles scientists. Some speculate that part of the gravitational force leaks out into dimensions beyond the four (including time) that we are familiar with. There may be six or seven other dimensions out there, which we are unable to perceive, into which gravity leaks. No one knows for sure whether our universe really has this many dimensions, but it's one of the theories currently circulating. (String theory incorporates this view; its usual formulations call for ten or eleven dimensions in total.)
Actually, perhaps we shouldn't call gravity a force at all. Einstein explained gravity as the effect of warped space-time. Think of the universe as a giant rubber sheet, with different-sized balls (representing planets or other massive bodies) sitting on it. These balls, especially the heavy ones, sink and create depressions in the rubber sheet. Picture a bowling ball on the sheet; this could represent the sun. Now, as a marble, the Earth, passes by, if it gets close enough to the bowling ball (sun), it falls into the depression created by the heavier ball. If the marble suddenly stopped moving forward, it would fall right into the bowling ball. But it has inertia, so it continues to move forward and can try to escape the "pull" of the bowling ball. Moving forward at the right speed, it can fall into an orbit around the bowling ball. Think of a ball spinning around the perimeter of a roulette table. It's initially going fast enough to "orbit" the center of the table; after it loses energy and slows down, it falls toward the center and into a numbered compartment. But if it never slowed down, it would continue "orbiting" indefinitely. This describes the motion of the Earth around the sun. The Earth is moving fast enough to maintain an orbit around the top of the bowl-like depression in space-time created by the massive sun. If it were slowed down enough, it could fall into the sun. If it were sped up enough, it could swing out of the depression and escape the "gravity" of the sun. This explanation of gravity is known as general relativity.
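To attach numbers to "the right speed," here's a quick sketch computing the circular-orbit speed, v = √(GM/r), and the escape speed, v = √(2GM/r), at the Earth's distance from the sun. These are textbook formulas with standard values plugged in.

```python
# Circular-orbit and escape speeds at the Earth's distance from the sun.
import math

G = 6.674e-11     # gravitational constant, N*m^2/kg^2
M_SUN = 1.989e30  # mass of the sun, kg
R = 1.496e11      # mean Earth-sun distance, m (1 AU)

v_orbit = math.sqrt(G * M_SUN / R)  # speed for a circular orbit
v_escape = math.sqrt(2) * v_orbit   # speed needed to leave entirely

print(f"orbital speed: {v_orbit / 1000:.1f} km/s")   # ~29.8 km/s
print(f"escape speed:  {v_escape / 1000:.1f} km/s")  # ~42.1 km/s
```

Slow the Earth well below about 30 km/s and its orbit dips in toward the sun; push it past about 42 km/s and it climbs out of the depression for good, just as the rubber-sheet picture suggests.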
Is the "force" of gravity between two objects instantaneously transmitted? That is, if the moon were quickly moved (by some alien) to a location twice as distant from the Earth, would the force of gravity between the Earth and moon instantaneously decrease or would it take some period of time before the planets "realized" they were now farther apart and stopped pulling on each other so forcefully? Einstein said instantaneous transmission of gravity (or anything) was impossible. It violated one of the most important findings in all of physics: that nothing travels faster than the speed of light. Therefore, gravity can't travel faster than light, which travels at about 186,000 miles per second. (Since gravity is itself without mass, it can and does travel at the speed of light.) If the sun were to disappear, the Earth wouldn't notice for about 8 minutes, because it takes that long for light (and gravity) to travel here from the sun. For those 8 minutes, the Earth would continue in its orbit, completely oblivious to what was about to happen.