Wednesday, July 29, 2009

Intro to Electricity: Part I

Electricity is defined by the Microsoft Encarta dictionary as (1) the energy created by moving charged particles and (2) electric current. (These two definitions speak of different things, however. Electric current is not energy, although it can be related to energy by way of a potential difference and time.)

Nevertheless, electricity is most commonly experienced as the flow of electrons in a wire (i.e. an electric current). Prior to the discovery of electrons, scientists thought of this mysterious flow of charge as perhaps that of a fluid. Benjamin Franklin, who died around one hundred years prior to the discovery of the electron, thought of electricity as the flow of a single type of fluid. To him, an electrically neutral object contained just the right amount of this fluid, while a "positively" charged object contained an excess of the fluid and a "negatively" charged object contained a deficiency. Franklin himself arbitrarily chose the labels "positive" and "negative", and other scientists adopted his terminology. Unfortunately, Franklin got it backwards. The "positively" charged object is not positive because it has an excess of "fluid", but because it is deficient in negatively-charged electrons. When a conductor with excess "fluid" comes into contact with a "fluid"-deficient conductor, what equalizes the charge of the two conductors is not a flow of positive charge but a flow of negatively-charged electrons in the opposite direction. Today, by convention, we still consider current to flow from the positive to the negative terminal of a battery, as if the charge carriers were positive. But we know that the electrons, which are the actual charge carriers in a wire, travel in the reverse direction, from negative to positive.

In the United States, ordinary wall sockets provide a source of power to run our appliances and electronics. We plug a lamp into an outlet and turn the lamp on. A circuit is completed and electrons travel through the lamp and its light bulb. As they do so, they lose energy to the lamp, powering the lamp. (Power is the transfer of energy per unit time.) So what kind of energy are they losing? It's potential energy, or energy of position. And it comes about because, as we all know, like charges repel one another and unlike charges attract one another. Two electrons positioned next to one another try to move so as to distance themselves from one another. There's a certain energy associated with their proximity to one another, and it's this energy that enables them to start moving apart. As they accelerate away from one another, they trade this energy of position for the energy of motion (i.e. kinetic energy). Energy is, of course, conserved. How this creates a current is easy to understand in terms of a battery. A chemical reaction in the battery causes electrons to accumulate on the negative terminal of the battery. Connecting the two terminals with a wire allows the electrons to distance themselves from one another. They rush away from the negative terminal, heading down the wire towards the positive terminal with its deficiency of electrons. This flow is what we call an electric current. As the electrons do this, they lose their energy of position. (In this case, mostly to energy in the form of heat.)

So does the power plant that provides our electricity send these electrons to our wall sockets, where they wait for us? No.

The power plant sells us not electrons but the ability to move electrons. It sells us a force field that pushes electrons along. It sells us energy of position, or electric potential energy. The potential at a point in space is often called voltage. Say we have two points in space, one with a voltage of 10 volts and one with a voltage of 5 volts. An electron situated at the first point will have a different energy of position than an electron situated at the second point. Take away both electrons, then place one at the point in space with a potential of 5 volts. It will spontaneously move towards the point with a voltage of 10 volts, just like a ball spontaneously rolls down a hill. (A positive charge would move in the opposite direction.) This voltage difference or gradient has another name: electric field. An electric field is a voltage drop per unit distance. Take a 9 V battery, with a distance of 0.005 m between its positive and negative terminals. It's called a 9 volt battery because the electric potential of one terminal is 9 volts less than that of the other terminal. Dividing the 9 volt voltage drop by the distance between the terminals (0.005 m), we get an electric field of 1800 volts per meter in the space between the terminals. It's this electric field or voltage gradient that we pay the power plant for.
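That last calculation is simple enough to sketch in code. A minimal Python check (the function name is my own):

```python
def electric_field(voltage_drop_v, distance_m):
    """Average electric field between two points: volts per meter."""
    return voltage_drop_v / distance_m

print(electric_field(9.0, 0.005))  # 1800.0 V/m, the figure from the text
```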

Back to the lamp. Electrons already exist in the wires that run through the lamp and down its cord. Plugging the lamp into the wall and turning it on simply gets these electrons moving. And not very fast. They drift along far slower than walking speed, typically a small fraction of a millimeter per second, because they keep running into the atoms of the wire. But electrons are small, and lots of them fit into a very small bit of wire. When one amp of current is flowing through a wire, about 6 quintillion (a 6 with 18 zeros behind it) electrons pass a given point in one second. (An amp is the basic unit used in the measurement of current.)

We've talked of voltage gradients (or differences in electric potential energy) and currents. Both are needed to define power. Power is the product of the two: a voltage difference times a current. One amp of current dropping one volt is equal to one watt of power. So we see there are two ways to increase power to a device. We can increase the voltage gradient across it ... or increase the current that flows through it. In other words, we can increase the amount of energy each electron loses as it passes through the device ... or we can increase the number of electrons passing through that device. (Or both.) This leads us to Thomas Edison.
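The watt arithmetic above can be sketched the same way. A minimal Python example (the names are mine, and the 120 V / 0.5 A figures are just an illustration, not from the post):

```python
def power_watts(voltage_drop_v, current_amps):
    """Power = voltage drop across a device times current through it."""
    return voltage_drop_v * current_amps

print(power_watts(1.0, 1.0))    # 1.0 W: one amp dropping one volt
print(power_watts(120.0, 0.5))  # 60.0 W (e.g. 0.5 A through an old 60 W bulb)
```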

Edison began to electrify New York City in 1882. He built power plants in the city that would send out a current through one wire and return it to his generators through another. The current flowed in one direction around the loops of copper wire he had laid between his plant and the houses of his customers. That is, it was direct current. But Edison quickly became aware of a fundamental problem with the setup. Power being sent out along the copper wires was being lost as heat (heating the wires), reducing the amount of power reaching the homes of his customers. The longer the wires (i.e. the farther the customer was from the power plant), the more power loss. Edison needed a way to increase the power reaching these distant customers. As we've already seen, he had two options: increase the voltage gradient or increase the current. But both had drawbacks. It was known that the power loss was proportional to the square of the current passing through the wires, so increasing the current caused even more power to be lost. Doubling the current quadrupled the power wasted as heat. Edison did use thicker wires, which lowered their resistance and cut the losses somewhat, but copper was expensive, and he realized that increasing the voltage gradient that pushed the electrons along was a better option. And so he did this. But high voltages were quite dangerous. They tended to create sparks and nasty shocks. So Edison could only raise the voltage gradient to a certain level before it became just too dangerous to send through someone's home. Foiled on both counts, Edison did the best he could: he used thick wires, used the highest voltages that safety would allow, and built power plants near his customers so the transmission lines wouldn't need to be very long. But who wants to live next to a power plant?
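Edison's dilemma is easy to see numerically. Here is a sketch of the current-squared loss rule in Python; the 0.5-ohm line resistance is a number I made up purely for illustration:

```python
def line_loss_watts(current_amps, resistance_ohms=0.5):
    """Resistive heating loss in a wire: P = I^2 * R."""
    return current_amps ** 2 * resistance_ohms

print(line_loss_watts(10))  # 50.0 W
print(line_loss_watts(20))  # 200.0 W: doubling the current quadruples the loss
```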

to be continued ...

Sunday, July 19, 2009

Let There Be Light!

[ Note: I use "light" and "electromagnetic radiation" interchangeably. So when I say light, I don't just mean visible light, unless I say I do. =P ]

Light comes in one form, but it interacts with matter in such complex ways that it appears there are different forms. By one form, I mean that light is a self-sustaining fluctuation, or disturbance, in the electric and magnetic fields, together called the electromagnetic field, that is created by a moving charge, like an electron. The disturbance, called an electromagnetic wave, arises when the charge changes either how fast it's moving and/or in what direction it's moving, both cases qualifying as an acceleration. (You might picture an electron accelerating upwards within a metal antenna. Such an act would cause both the electric and magnetic fields surrounding the antenna to fluctuate, and this fluctuation or disturbance flows outwards sort of like a water wave expanding out from the point at which a stone is dropped into a pond.) Once the fluctuation starts, it won't stop until it can act on matter, i.e. matter can absorb its energy. It very well could travel through the vacuum of outer space for a billion years. (Remember, it's self-sustaining.) The disturbance, which in some cases appears wavelike and at other times acts as a particle, flies through empty space at a constant speed. In 1983, the meter was redefined such that light travels (in vacuum) exactly 299,792,458 meters in one second. Using more familiar units, this is about 186,000 miles per second. This value - more so, this concept - is special in that it is believed to be constant regardless of place, time, or the motion of the observer. Contrast this with a car traveling a steady 50 mph (as indicated on its speedometer) on the highway. Placed in the car's passenger seat is a suitcase. Relative to a stationary observer on the side of the road, the suitcase is moving 50 mph down the road, but the driver of the car observes the suitcase as stationary. We conclude that the speed of the suitcase (as well as the car) is not constant; i.e. it can vary depending on your frame of reference. 
Not so with light, which appears to be going the same speed no matter how fast you move or what you're doing or where you are.

When people say that the speed of light is constant, they mean it's constant in vacuum (empty space). Light actually travels at speeds different from 299,792,458 meters per second (represented by the letter c, as in E = mc²) in different materials. For example, visible light slows down when it travels through water or glass. We assign a number to each material which indicates by how much light is slowed when it travels through the material, relative to how fast it travels in vacuum (i.e. relative to c). The number, called the index of refraction (and labeled n), is the ratio of c to the speed v in the material. Glass has an index of refraction of about 1.5, which means that light travels at only 2/3rds of c (i.e. about 124,000 miles per second) within glass. (In special cases, such as X rays passing through glass, the phase velocity of light can actually exceed c in a material, though no energy or information ever travels faster than c!)
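The index-of-refraction arithmetic can be checked in a few lines of Python (the names are mine):

```python
C = 299_792_458  # speed of light in vacuum, m/s
MILE = 1609.344  # meters per mile

def speed_in_material(n):
    """Speed of light in a material with index of refraction n (v = c / n)."""
    return C / n

v_glass = speed_in_material(1.5)
print(round(v_glass))         # 199861639 m/s, i.e. 2/3rds of c
print(round(v_glass / MILE))  # 124188 miles per second (the post's ~124,000)
```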

These fluctuations in the electromagnetic field, which make up a pulse of light traveling through space, can vary in some ways. One of these ways is in how fast the fluctuations fluctuate. Not how fast the whole "packet" is moving, which is c in vacuum, but how fast the electric and magnetic fields grow and shrink as the packet moves along. If they cycle 384,000,000,000,000 times per second, then the light looks red to us. That is, the light interacts with the eye in a way that the brain interprets as "red." If they cycle 520,000,000,000,000 times per second, then the light looks green. If they cycle about 2,450,000,000 times per second, then you can't see the light, but it can cook your food: these are microwaves, and this is the frequency used in microwave ovens, where the waves are readily absorbed by the water molecules in your food. Radio waves oscillate some 1,000,000 times per second (and at other rates, or frequencies, a bit above and below this number). Other types of light, or electromagnetic radiation, are gamma rays, X rays, ultraviolet light, and infrared light. Visible light falls between ultraviolet and infrared, when the categories are ordered according to frequency, as they are in the preceding list (with gamma rays having the highest frequencies). Microwaves and radio waves follow infrared in this list. When you tune your car's radio to FM 94.7, this means the electromagnetic waves delivering your music are fluctuating 94,700,000 times per second. (Compared to the frequency of, say, red light, this isn't that fast.) These categories are arbitrary, though, and don't correspond to any natural breakpoints in what is a continuous range of frequencies from the very tiny to the enormous.
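Frequency and wavelength are two sides of the same coin: wavelength = c / frequency. A small Python sketch using figures from this post (the function name is mine):

```python
C = 299_792_458  # speed of light in vacuum, m/s

def wavelength_nm(freq_hz):
    """Wavelength in nanometers corresponding to a given frequency."""
    return C / freq_hz * 1e9

print(round(wavelength_nm(384e12)))  # 781 (nm): the red figure from the post
print(round(wavelength_nm(520e12)))  # 577 (nm): the green figure
print(round(C / 94.7e6, 1))          # 3.2 (m): one wave of the FM 94.7 signal
```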

When light travels from one transparent material into a different transparent material (with a different index of refraction), it either slows down or speeds up. We already saw that light slows down when traveling from air into glass. At the interface between the two materials, light also changes direction. This is called refraction. This is apparent when you place a drinking straw in a glass of water. The portion of the straw beneath the surface of the water does not appear to be aligned with the portion above the surface of the water, when the glass is viewed from certain angles. When you look straight down into a body of water, any object in the water appears at only 3/4ths of its true depth. This, too, is due to refraction.
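The 3/4ths figure falls out of the index of refraction of water, which is about 4/3: viewed from straight above, an object appears at its true depth divided by n. A minimal Python sketch (the names are mine):

```python
N_WATER = 4 / 3  # approximate index of refraction of water

def apparent_depth(true_depth_m, n=N_WATER):
    """Apparent depth of a submerged object viewed from directly above."""
    return true_depth_m / n

print(apparent_depth(1.0))  # 0.75: about 3/4 of the true depth
```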

As mentioned before, light sometimes acts as a collection of particles, called photons. Each photon carries an amount of energy equal to its frequency times a constant called Planck's constant, so green light is more energetic than red light because it has a higher frequency. Blue light is more energetic than red or green light, or orange or yellow, for that matter, because it has a higher frequency than any of these other colors. This plays a role in why the sky is blue. First, remember that sunlight is composed of all different colors of light. (Visible light, together with ultraviolet (UV) and infrared, makes up about 99% of sunlight.) The molecules of nitrogen and oxygen, etc. that make up the atmosphere of the Earth scatter the blue component of sunlight, with its high frequency (and high photon energy), far more readily than the other colors of light; the strength of this scattering climbs steeply with frequency. (They scatter ultraviolet and violet light even more readily, but there's not a lot of violet or UV in sunlight. Fortunately for us, the sunlight that makes it to Earth is only about 6% UV, and ozone absorbs most of it. Some of the UV that does make it down to ground level can cause sunburn and skin cancer.) When one of these air molecules has absorbed a photon of blue light, it immediately re-emits it in a random direction. So blue light is sucked up by countless air molecules as it streams in from the sun, and then it's spit out in all directions, illuminating the sky. This happens to a lesser degree with the other colors of light, which for the most part pass through the atmosphere unscattered.
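Planck's relation E = hf makes the energy comparison concrete. A Python sketch; the red and blue frequencies below are typical textbook values, not figures from this post:

```python
PLANCK = 6.626e-34  # Planck's constant, joule-seconds

def photon_energy(freq_hz):
    """Energy of one photon, E = h * f."""
    return PLANCK * freq_hz

red = photon_energy(430e12)   # a typical red-light frequency
blue = photon_energy(650e12)  # a typical blue-light frequency
print(round(blue / red, 2))   # 1.51: a blue photon carries ~50% more energy
```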

Tuesday, July 14, 2009

The Misuse of Standardized Tests in America

There seems to be a national focus on using standardized achievement tests to not only compare students' knowledge and/or skills with those of other students across the country, but to judge the quality of education in this country and to label schools as high-performing or low-performing. This is a mistake.

Standardized achievement tests are not accurate measures of educational quality for several reasons. First, there is a rather significant amount of diversity in curricula across the country, reflecting varying ideas as to what is most important to know. Whether or not this is a good thing is debatable. Nevertheless, the American school system is designed to maximize state and local control of curricula, as opposed to there being a national standard. The problem arises when a single one-size-fits-all test is administered to students (say, 8th graders) nationwide. A single standardized test cannot possibly align with the topics actually taught in every classroom in America, because the list of topics varies from place to place. (Perhaps it does not vary a great deal, but it certainly varies.) One may argue that there is a core set of ideas that ALL students should know, and a standardized test could test just these ideas. I would agree, but again, there are no national standards as to what students should know, and who is to say that Company X or Company Y should decide what everyone should know and when they should know it? In any case, at present (and likely into the future) different schools have slightly different objectives, and a single standardized test administered nationally is not going to measure whether students have met the objectives decided upon by their local or state administrators. This makes such a test invalid for judging quality of education in specific schools or districts.

Second, standardized achievement tests are designed such that test items (or questions) are answered correctly by about half of test-takers. This is done for statistical reasons; primarily, it spreads out students' test scores, making it easier to rank the students. If a question is answered correctly by most students, the question will likely be dropped from the test, as it doesn't help differentiate between the students. But ... the questions answered correctly by most students nationally generally cover the most important topics, i.e. topics that were stressed by teachers. The company that produces and markets the standardized test has an incentive to use questions concerning less-important concepts. Does such a test truly measure educational quality?

Finally, such tests often, perhaps inadvertently, measure things that students do not learn in school. Questions often test a student's innate intelligence and out-of-school learning. Now, I'm not really comfortable asserting that some people are inherently smarter than others, but it does seem reasonable that not all people are born with the exact same capacity for math, or languages, or art. Should a school be penalized for failing to teach students something that, by definition, cannot be taught? Regarding out-of-school learning, students are born into different socioeconomic classes and are raised by different parents, both of which lead to different life experiences. If a kid has never been taken to the beach before, perhaps for financial reasons, and a test question asks something about ocean waves, he or she may be at a disadvantage compared to other students that have been to the beach. Such questions do exist on standardized tests, and they invalidate the test as a measure of educational quality. Why are they put on the test? Quite likely because it is known that not all students will be able to draw on the same experiences, and this can be exploited to find those ideal test questions that are answered correctly by only one-half of all students.

I don't think we should do away with all standardized testing, but we need to understand what it is that we are testing and not misuse test results. The current system is not working. Test results are being misused. Teachers are feeling pressured to "teach to the test." Schools are forced to hyper-focus on a single, arbitrary measurement of student knowledge for political and financial reasons. Schools shouldn't live or die based on these kinds of test results. This focus on standardized testing is not making American students any smarter. We need to either improve the tests (which cannot be left up to the companies that currently produce them) or change the way we use their results.

(BTW, I found a lot of useful information for this essay in an article written by W. James Popham for the March 1999 issue of Educational Leadership.)

Wednesday, July 8, 2009

Sound Makes Cold

There's a fairly new technology available for cooling a refrigerator or freezer. It uses sound waves to transfer heat from within the device to outside the device!

Today's refrigerators use a compressor that compresses a refrigerant gas (likely HFCs, or hydrofluorocarbons), increasing its pressure and temperature. A fan then blows air over pipes (condenser coils) holding this warm high-pressure gas, and as heat transfers to the outside air the refrigerant condenses into a liquid (and becomes somewhat cooler). The cooler liquid then travels through an expansion valve that allows the liquid to expand and evaporate (as its pressure decreases). During this process of evaporation, the refrigerant absorbs heat from the air inside the refrigerator, cooling it. The refrigerant, now in a low-pressure gas state and somewhat warmer, completes the cycle by flowing back to the compressor.

So how can you cool a refrigerator using sound waves? First, you have to know what sound waves really are. They're pressure waves. In other words, they are variations in pressure in some medium (like air), over some distance (really, volume). They are alternating "bands" of high and low pressure, with lots of air molecules packed together in the high pressure "bands" and relatively few molecules of air in the low pressure "bands". They move through the air like a compression pulse moves through a horizontal Slinky. When a sound wave passes through a room, it passes energy along to some air molecules, which then forward the energy to nearby air molecules, which do the same thing, and on and on. (Again, like the Slinky.) Each individual air molecule oscillates over a very short distance and does NOT follow along with the wave. Sound is not a way to transfer individual air molecules from one place to another. It IS a way to transfer a pressure variation (in the medium) from one place to another.

When you hit a drum, the membrane vibrates up and down. It moves up, pushing air molecules forward and out of its way, forcing them closer to the air molecules that were just above them, creating a thin high pressure zone that then propagates forward at some speed characteristic of the medium (in this case, air). Then the membrane moves down, creating a semi-vacuum, opening up a space with few air molecules in it, creating a low pressure zone. Then it moves back up and produces another high pressure zone. This cycle continues as long as the drum's membrane vibrates, and these alternating high and low pressure zones continue to move outwards in basically all directions at a speed of about 760 mph.

Now, when gas is compressed, not only does its pressure rise, but also its temperature. For air compressions associated with normal human speech, the temperature change is minuscule, only about one ten-thousandth of a degree Celsius. The temperature change in something like the HFCs, mentioned earlier, as they move through the refrigerator's compressor, is much greater. (The compressor is obviously much more powerful than our vocal cords.) To make the sound wave useful for transferring a significant amount of heat, it needs to be able to handle a larger temperature span. This can be accomplished in two ways. The first is to use more intense pressure waves (i.e. crank up the volume). The second is to put the gas in contact with a solid material. If a gas carrying a sound wave is placed near a solid surface, the solid will tend to absorb the heat of compression (i.e. the heat associated with the temperature increase, brought about by the pressure increase), keeping the temperature stable. The opposite is also true: the solid releases heat into the gas when the gas expands, preventing it from cooling down as much as it otherwise would.
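For small pressure swings, the adiabatic relation dT/T = ((gamma - 1)/gamma) * (dp/p) gives a rough estimate of that temperature change. A hedged Python sketch; the 0.1 Pa pressure amplitude (roughly conversational speech) and the room-temperature figures are my assumptions:

```python
GAMMA = 1.4        # ratio of specific heats for air
P_ATM = 101_325.0  # atmospheric pressure, Pa
T_ROOM = 293.0     # room temperature, K

def temp_swing(dp_pa):
    """Adiabatic temperature swing in air for a small pressure swing dp_pa."""
    return T_ROOM * ((GAMMA - 1) / GAMMA) * (dp_pa / P_ATM)

print(temp_swing(0.1))  # ~8.3e-05 K, about a ten-thousandth of a degree
```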

OK, so picture a long rectangular plate (perhaps metal, perhaps plastic) with an intense sound wave traveling along its surface. (Picture the wave as traveling from left to right.) When the sound wave first reaches the plate, say its phase is just coming off a high pressure zone. The air near that end of the plate starts to expand, forming a low pressure zone. As the gas expands, it extracts heat from the solid with which it is in contact. This heat (energy) is then passed forward by the sound wave. The wave then enters a period of high pressure, a bit farther down the plate, and as the air molecules in that region are compressed, they pass on their heat to the solid surface. The wave has now relocated a bit of heat from one end of the plate to the other end (or at least a point a bit farther down the plate).

You should now be asking yourself: won't a high pressure zone follow the initial low pressure zone at the front of the plate, passing on heat to that end of the plate and offsetting the transfer of heat just accomplished? You'd be right, if the structure weren't designed to alleviate this problem. Take a look at the following picture:



Even though it doesn't look like it, let's pretend that the left end of the tube is open and the right end is sealed shut. Now when a sound wave enters the tube from the left, it travels to the closed end and, having nowhere else to go, is reflected back towards the left end. The red line here is a graph of sorts. It marks the magnitude of the pressure above or below the "normal" or atmospheric pressure, which is indicated by the dashed line. The wave, entering the tube, follows the upper red line, and pressure grows until it reaches a maximum at the right end of the tube. The air is piled up at the right end of the tube, hence the high pressure zone. When the air pushes into the wall at the tube's end, the wall pushes back, and the air starts moving back towards the left. It now rushes away, creating a low pressure zone at the tube's end. We now follow the lower red line back towards the left of the tube, as the pressure difference between the wave and the "normal" pressure is reduced until they equal one another at the "node" at the left end of the tube. When successive waves are timed just right so that they always follow this pattern, we obtain a "standing wave." That is, waves reinforce one another and don't cancel out. We say the standing wave is resonant or has a resonant frequency. This property of the system allows our sound-based refrigerator to work. We can alter the frequency of the sound waves in our device until we find one that supports a standing wave, and this way we can control where along a closed tube, as well as our plate, we have a high pressure zone and where we have a low pressure zone. We can ensure that we always have an expanding pressure zone (primed for compression) at the front of the plate. And we can ensure that we always have a fully compressed zone at the end of the plate. The fact that we can change the length and position of the plate, within the tube, makes this task easier. (I'm simplifying a bit here, for brevity.)
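For a tube open at one end and closed at the other, like the one described above, the lowest standing-wave (resonant) frequency occurs when the tube is a quarter of a wavelength long: f = v / (4L). A Python sketch; the 25 cm tube length is a made-up example:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def quarter_wave_frequency(tube_length_m):
    """Lowest resonant frequency of a tube open at one end, closed at the other."""
    return SPEED_OF_SOUND / (4 * tube_length_m)

print(quarter_wave_frequency(0.25))  # 343.0 Hz for a 25 cm tube
```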

Let's restrict our focus to a single parcel of gas, situated in an enclosed tube with a plate running along some portion of the tube. Remember, each parcel of gas moves over a very small range, back and forth; parcels do not move along with the wave. As a wave [of energy] approaches, it forces a parcel of gas to expand, lowering the temperature of the gas parcel to something less than the temperature of the plate. Heat then flows into the parcel, in an attempt to equalize the temperature, and this causes further expansion of the parcel. This heat is then carried by the parcel a short distance forwards (perhaps a centimeter) and is passed along to another gas parcel. Like a bucket brigade, heat is passed along until it reaches a point in the standing wave that corresponds to high pressure, where the parcel is compressed, raising its temperature to something above the temperature of the plate at that point. Heat then moves from the gas parcel into the plate, in an attempt to equalize the temperature. This happens again and again, during each cycle of the acoustic wave, creating a cold end of the plate and a hot end of the plate. Heat exchangers can then be placed at each end of the plate. These may be pipes filled with something like antifreeze, which transfer heat into or out of the plate. At the cold end of the plate, the antifreeze is in a pipe that runs through the walls of the refrigerator box, pulling heat from the air in the refrigerator and dumping it into the plate, from which the heat is extracted and carried away by the sound wave. The hot end of the plate is placed alongside a separate pipe (also containing some fluid, such as antifreeze). The fluid here absorbs heat from the plate and carries it away, to a section of pipe that is in contact with the outside air, and over which a fan blows. Heat then flows out of the fluid-filled pipe and into the air in the room, leaving the fluid cooler and ready to return to the plate for another round of heat transfer.

Though somewhat complicated, this design allows for the construction of a refrigerator that has few moving parts and that, perhaps most importantly, doesn't rely on HFCs, which are greenhouse gases (that have the potential to find their way out of the refrigerator's pipes and into the atmosphere). Furthermore, unlike a conventional compressor-based refrigerator, a sound-based refrigerator can run at any level between off and full force. That is, it can be adjusted in real time to run at precisely the appropriate level for the desired temperature and heat load. A conventional refrigerator's cooling system is either on (at full force) or off. The current drawback to thermoacoustic refrigerators (as they're called): they're not very efficient. They use considerably more electricity than conventional models. If they can be made more efficient, we may start to see them in the marketplace. (But don't hold your breath.)

Thursday, July 2, 2009

Happy July 2nd!

It was on July 2, 1776 that the Continental Congress voted on the issue and declared the American colonies independent. John Adams said, "The second day of July 1776 will be the most memorable epocha in the history of America."

Two days later, on the 4th, a second vote was taken. Again, twelve colonies voted for independence, while New York abstained. The Declaration of Independence, the actual document, was signed by only the President and Secretary of the Congress. Not until August 2nd did a final, elegant copy of the document receive the signatures of the remainder of Congress.