
September 30, 2008

Now, That's a Number!

Just writing a number with more than ten million digits would be hard enough, but finding such a number with a specific property is quite an accomplishment. The Great Internet Mersenne Prime Search (GIMPS) announced on September 15, 2008, that two ten-million-digit Mersenne primes had been discovered by a group of computers running collaborative software that breaks a large problem into smaller pieces [1-3]. The GIMPS virtual supercomputer on which these numbers were found is a network of 100,000 computers with a combined throughput of nearly 30 teraflops [4-5]. The larger of these two numbers, which are 11,185,272 and 12,978,189 digits long, qualifies for a $100,000 award posted by the Electronic Frontier Foundation (EFF), of which I am a member. The EFF encourages collaborative activities in our internet age, and it advocates privacy and fair use rights in this age of easy electronic communication.

Mersenne primes are numbers M that are prime and satisfy the following condition,

M = 2^n - 1

in which n is an integer. Not all Mersenne numbers M are prime, as the simple example of (2^4 - 1) = 15 proves. The two large Mersenne primes just found, (2^37,156,667 - 1) and (2^43,112,609 - 1), are tentatively the 45th and 46th Mersenne primes. All Mersenne primes from the 35th onward have been discovered by GIMPS, which has been running since 1996. Since the GIMPS search is not sequential, there may be undiscovered Mersenne primes between those already known. In fact, the lesser of the two numbers was discovered after the larger. The last Mersenne prime to be found without the use of an electronic computer was (2^127 - 1), discovered in 1876 by the mathematician, Édouard Lucas. Lucas worked for nineteen years on this task, aided by some essential mathematical tricks, some of which he devised. Lucas began his test of this Mersenne prime at age fifteen! It wasn't until 1952 that the next Mersenne prime was discovered, this one on a computer.
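
For readers who want to experiment, here's a minimal sketch in Python of the Lucas-Lehmer test, the Mersenne primality test that GIMPS runs in highly optimized, FFT-based form. This naive version is practical only for small exponents.

  def is_mersenne_prime(p):
      """Lucas-Lehmer test: for an odd prime p, 2^p - 1 is prime exactly
      when the recurrence s -> s*s - 2 (starting at s = 4), iterated
      p - 2 times, ends at zero modulo 2^p - 1."""
      m = (1 << p) - 1          # the Mersenne number 2^p - 1
      s = 4
      for _ in range(p - 2):
          s = (s * s - 2) % m   # Lucas-Lehmer recurrence
      return s == 0

  # Recovers the exponents of the first few Mersenne primes: [3, 5, 7, 13, 17, 19]
  print([p for p in (3, 5, 7, 11, 13, 17, 19, 23) if is_mersenne_prime(p)])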

The 12,978,189 digit prime was discovered by a computer at UCLA, so UCLA will receive half of the $100,000 prize. $25,000 of the remainder will be given to charity, and the other $25,000 will be divided among the GIMPS collaborators. How hard will it be to find a 100,000,000 digit prime? The award in that case will be $150,000, but I think a cent per digit (a million dollars) sounds like a fairer amount.

References:
1. J.R. Minkel, "World record ($100,000) prime number found?" (Scientific American Blog, August 28, 2008).
2. Huge new prime number discovered (BBC News, September 28, 2008).
3. Jenny Huntington, "Largest Mersenne Prime Number Discovered" (eFluxMedia, September 28, 2008).
4. The Great Internet Mersenne Prime Search (GIMPS).
5. Great Internet Mersenne Prime Search (Wikipedia).

September 29, 2008

The Anniversary of a Famous Equation

Saturday, September 27, 2008, marked an important anniversary in the history of physics. On that date in 1905, the German physics journal, Annalen der Physik, received Albert Einstein's most famous paper. This is the E = mc² paper, entitled "Does the Inertia of a Body Depend Upon Its Energy Content?" (vol. 18, pp. 639-641). Einstein published three other papers in this same journal in 1905, and their impact on modern physics was so great that these are called the Annus Mirabilis papers. Their topics are listed below.

Photoelectric effect (vol. 17, pp. 132-148). [1]
Brownian motion (vol. 17, pp. 549-560). [2]
Mass-energy equivalence (vol. 18, pp. 639-641, the E = mc² paper). [3]
Special relativity (vol. 17, pp. 891-921). [4]

In 1901, Einstein had published an earlier paper in the Annalen on capillary action (vol. 4, pp. 513 ff.). He later published three more papers in the Annalen on these topics:

• The heat capacities of quantum solids (vol. 22, pp. 180-190 and pp. 800 ff., 1907).
• Molecular motion near absolute zero (co-authored with Otto Stern, vol. 40, pp. 551-560, 1913).
• General Relativity (vol. 49, pp. 769-822, 1916).

The equation, E = mc², does not actually appear in Einstein's paper. Although not written explicitly in this form, the equation would have been L = mV², since Einstein used L as his symbol for energy and V for the speed of light. A symbol for mass does not appear in any equation; instead, the German word Masse is used.

References:
1. Albert Einstein, "Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt," Annalen der Physik, vol. 17, pp. 132-148 (1905) (PDF File).
2. Albert Einstein, "Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen," Annalen der Physik, vol. 17, pp. 549-560 (1905) (PDF File).
3. Albert Einstein, "Ist die Trägheit eines Körpers von seinem Energieinhalt abhängig?," Annalen der Physik, vol. 18, pp. 639-641 (1905) (PDF File).
4. Albert Einstein, "Zur Elektrodynamik bewegter Körper," Annalen der Physik, vol. 17, pp. 891-921 (1905) (PDF File).

September 26, 2008

Galileo's Estimate of Stellar Distance

How do we visually estimate the distance to an object? If the object is recognizable, such as a deer in the middle of the highway, our brain compares its subtended angle with what we've observed in the past. If the deer in the roadway subtends an angle similar to the ones we've seen at the petting zoo, it's time to put on the brakes! Scientifically, we would measure the subtended angle of an object carefully and calculate the distance from the object's known size using the tangent function. This task is observationally harder, but mathematically easier, when the object is far away. When the angle is small, we can approximate the tangent by the angle itself, as can be seen from the series expansion for the tangent that I reviewed in a previous article (Infinite Series Expressions, August 14, 2008). This is the method Galileo, armed with one of the first telescopes, used when he estimated the distances to the stars. His assumption, of course, was that the stars were similar to the Sun, a great leap of science in itself. Using this technique, Galileo was also able to make a linear plot of stellar luminosity as a function of distance, supporting his theory that all stars were the same [1]. Unfortunately, his estimated distances were about a thousand times too short. Where did Galileo, one of the world's first physicists, and perhaps the greatest, go wrong?

The telescope was invented by Hans Lippershey, but Galileo perfected the instrument and made his astronomical observations just a few years after Lippershey's invention. The telescope predates the science of physical optics, so the basic wave-optical principles we know today were not known to Galileo. Among these principles is the diffraction of light.

Christopher Graney, a physicist at Jefferson Community and Technical College (Louisville, Kentucky), does physics without the well-equipped laboratories common at major universities. He does this by using other people's data. One of Graney's papers used acceleration data from automotive magazines as a teaching tool [2]. Graney did an analysis of Galileo's observations, and a preprint of his paper appears online [3]. Graney shows that diffraction in Galileo's telescope caused his errors. What Galileo thought was the disk of a star was actually a diffraction artifact called an Airy disk, and he was comparing the size of the Airy disk to the half-degree diameter of the Sun.

An Airy disk arises when a point source is imaged through an aperture. The aperture in this case is the diameter of the objective lens of Galileo's telescope. For an aperture D and a wavelength of light λ, the angle subtended by the disk is given by the equation

sin(θ) = 1.22 λ/D

Airy diffraction produces additional, faint, concentric rings around the Airy disk, but these would have been too faint for Galileo to see.

Astronomers now measure the distance to nearby stars using the parallax shift of their position when observed from opposite sides of the Earth's orbit. The German astronomer Friedrich Bessel, famous for the eponymous Bessel functions, made the first such measurement in 1838. As often happens in science and mathematics, the "Bessel" functions were actually introduced by Daniel Bernoulli; they are named for Bessel, who generalized and popularized them.
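
Graney's full analysis is in his paper, but a rough calculation shows the scale of the effect. Here's a minimal sketch in Python; the aperture and wavelength are my own assumptions (a roughly 2 cm objective and green light), not figures from his paper.

  import math

  WAVELENGTH = 550e-9   # green light, in meters (assumed)
  APERTURE = 0.02       # ~2 cm objective, plausible for an early telescope (assumed)

  theta = math.asin(1.22 * WAVELENGTH / APERTURE)  # angular radius of the Airy disk
  print(math.degrees(theta) * 3600)                # about 7 arcseconds

  # If that disk is mistaken for the physical disk of a sun-like star, the
  # small-angle relation puts the star only (sun angle)/(theta) astronomical
  # units away: a few hundred AU, roughly a thousand times too close.
  sun_angle = math.radians(0.5)                    # the Sun subtends about half a degree
  print(sun_angle / theta)                         # roughly 260 AU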

In the ancient days of astronomy, when astronomical objects were imaged on photographic film (actually, glass plates with a similar photosensitive emulsion), dim objects required very long exposures. Before space-based telescopes, stars could be photographed only at night, but some photographs were made with nearly a hundred hours of exposure. This confused me when I first read about it in elementary school, since it seemed that an eight or ten hour exposure is all you could get in a night. I was right: such exposures were accumulated over several nights on the same plate.

References:
1. Galileo Galilei, "Dialogue Concerning the Two Chief World Systems (Dialogo sopra i due massimi sistemi del mondo)," (Florence, 1632).
2. Christopher M. Graney, "A Treasure Trove of Physics from a Common Source - Automobile Acceleration Data," The Physics Teacher, vol. 43, no. 8 (November 2005), pp. 506-509.
3. Christopher M. Graney, "Objects In Telescope Are Farther Than They Appear: How diffraction tricked Galileo into mismeasuring the distances to the stars" (arXiv preprint, August 25, 2008).
4. Katharine Sanderson, "Galileo duped by diffraction," (Nature Online, September 2, 2008).

September 25, 2008

Genius Grants

The John D. and Catherine T. MacArthur Foundation announced its 2008 Fellowship awards this week. These awards, $500,000 to each of twenty-five individuals, are commonly called "genius grants." The Foundation, which has a $7 billion endowment and supports scientific and cultural activities, has presented 781 such awards since the start of the program in 1981. These Foundation Fellows, who are selected for their creativity and potential for future contributions in their chosen profession, must be US citizens, and they receive $100,000 per year over a five-year period. The awards are unrestricted, and they pay for any activities each Fellow may choose. The money is usually well spent on things such as sabbaticals to write books or to complete projects. By comparison, a Nobel Prize is worth about $1.5 million when awarded to a single recipient.

This year's MacArthur Fellows Class is the usual eclectic mix that includes an anthropologist specializing in Maya culture, a woman from South Carolina who weaves grass sculptures, and another woman who once created an art object from two million plastic drinking straws. Some physical scientists and life scientists are included, as well. These are Sally Temple, a neuroscientist; Susan Mango, a developmental biologist; Rachel Wilson, an experimental neurobiologist; Kirsten Bomblies, an evolutionary plant geneticist; Marin Soljacic, an MIT physicist specializing in the wireless transfer of power; Alexei Kitaev, a physicist working in the field of quantum computation; Andrea Ghez, an astrophysicist searching for black holes; and Adam Riess, an astrophysicist specializing in the elucidation of dark energy. Riess is accustomed to big money prizes. In 2006, Riess shared the million dollar Shaw Prize with Saul Perlmutter and Brian P. Schmidt.

The names of quite a few MacArthur Fellows are familiar to me.

J. Roger Angel (1996), an astronomer who designs innovative telescopes.

Timothy Berners-Lee (1998), the computer scientist who invented the World Wide Web.

Mitchell J. Feigenbaum (1984), a mathematical physicist who became interested in chaos theory and discovered the Feigenbaum constants.

Margaret Joan Geller (1990), an astrophysicist and co-discoverer of the Great Wall.

Paul Ginsparg (2002), a physicist who started the arXiv physics preprint archive.

Claire Gmachl (2005), an optical engineer working on the development of quantum cascade lasers.

John J. Hopfield (1983), a physicist best known for the Hopfield Neural Network.

Julia Robinson (1983), a mathematician known for her work on Hilbert's Tenth Problem, which involves Diophantine equations.

Richard M. Stallman (1990), a computer scientist who started the free software movement. He created the GNU Project and the Free Software Foundation.

Andrew J. Wiles (1997), a mathematician who proved Fermat's Last Theorem.

Frank Wilczek (1982), a physicist who elucidated some properties of the strong nuclear force. Wilczek shared the 2004 Nobel Prize in Physics.

Edward Witten (1982), a physicist working on String Theory. Witten was awarded the Fields Medal, the mathematics equivalent of a Nobel Prize, in 1990.

Stephen Wolfram (1981), a computer scientist and physicist famous for his work on cellular automata and creation of the Mathematica computer program [3].

Both you and I have been doing some creative things (at least as creative as weaving grass), so how do we apply for our MacArthur Fellowships? Unfortunately, we can't just apply. Hundreds of official nominators in many fields make the nominations, and these nominations are passed along to a twelve-member selection committee that makes recommendations to the Foundation's Board of Directors [4].

References:
1. Patricia Cohen, "Twenty-Five Receive $500,000 Genius Fellowships," (New York Times Online, September 23, 2008).
2. Richard Leiby, "For Hopkins Astronomer And 24 Others, The Stars Align - MacArthur Grants Bring $500,000 to High Achievers" (Washington Post Online, September 23, 2008).
3. There are free programs with many of Mathematica's capabilities. Check out Maxima, Sage, and many other programs.
4. Twenty-Five New MacArthur Fellows Announced (MacArthur Foundation Press Release, September 23, 2008).

September 24, 2008

Synthetic Xylem

It's been known for several hundred years that suction pumps can't raise water much more than 30 feet. This raises the problem of how trees can grow as high as 120 meters, since water must somehow reach their tops. Another physical principle, capillarity, may help. Because of capillarity, water will rise in a tube of inside radius r to an approximate height h given by

h = (1.4 x 10⁻⁵)/r

where h and r are in meters. Thus, a tube of one micrometer radius will raise water to a height of about 14 meters. If there are tube-like structures in a tree of 100 nm dimension, capillarity may be what's driving water to such heights, but there's another problem. Can a water column hold itself together for such a great height?
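
The constant in the equation above comes from Jurin's law, h = 2γ/(ρgr), for a fully wetting liquid. Here's a minimal sketch in Python, assuming room-temperature water and a contact angle of zero:

  GAMMA = 0.0728   # surface tension of water at 20 °C, N/m
  RHO = 998.0      # density of water, kg/m^3
  G = 9.81         # standard gravity, m/s^2

  def capillary_rise(radius):
      """Jurin's law: height (m) of water in a tube of the given inside radius (m)."""
      return 2.0 * GAMMA / (RHO * G * radius)

  print(capillary_rise(1.0e-6))    # ~14.9 m for a one-micrometer radius
  print(capillary_rise(100.0e-9))  # ~149 m for a 100-nm radius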

In two previous articles [1-2], I reviewed the unusual properties of water when it's confined to nano-dimensions. When water is confined to nano-dimensions, its mechanical properties differ from those of the normal, bulk liquid. Perhaps this change in water is enough to allow the tree heights we see, but one good experiment is worth a hundred theoretical papers. For that reason, Abraham Stroock and Tobias Wheeler, two chemical engineers from Cornell University, have fabricated a synthetic microfluidic system that mimics the characteristics of the water-transporting vessels of trees, called xylem [3-5]. They fabricated water channels from a poly-(hydroxyethyl-methacrylate) hydrogel containing small, homogeneous pores. This structure, which mimics the actual structure of xylem, stabilizes the water column against bubble formation. Bubbles destroy capillarity. Even with the nano-scale mechanism of the xylem in place, trees cannot grow higher than about 125 meters [6-8].

While useful for trees, can this mechanism be useful also in technology? The Cornell engineers speculate that wicking devices, such as heat pipes, may benefit from this research [5].

References:
1. Room Temperature Ice (This Blog, September 10, 2008)
2. Nanoviscosity (This Blog, May 19, 2008).
3. T. D. Wheeler and A.D. Stroock, Nature, vol. 455, no. 7210 (September 11, 2008), pp. 209-212.
4. Heidi Ledford, "Tiny synthetic tree pumps water" (Nature Online, September 10, 2008).
5. "Editor's Summary: Transpiration: the pulling power of a synthetic tree," Nature, vol. 455, no. 7210 (September 11, 2008), p. ix.
6. Keay Davidson, "Redwoods: How tall can they grow?" (San Francisco Chronicle Online, April 26, 2004).
7. George W. Koch, Stephen C. Sillett, Gregory M. Jennings and Stephen D. Davis, "The limits to tree height," Nature, vol. 428, no. 6985 (April 22, 2004), pp. 851-854.
8. George Koch, Stephen Sillett, Gregg Jennings and Stephen Davis, "How Water Climbs to the Top of a 112 Meter-Tall Tree" (Plant Physiology Net).

September 23, 2008

Video Elements

Nothing is more fundamental to the practice of chemistry than the Periodic Table of the Elements. Dmitri Mendeleev's discovery that the elements, when ordered by atomic mass, fall into groups with similar chemical properties set in motion the research programs that led to the electron orbital picture of the elements and Quantum Mechanics. The Periodic Table has been represented in many ways. There's the popular rectangular chart we're all familiar with, there are spiral versions, and many others [1]. Wikipedia has a Periodic Table that links to a good description of each element, and there's the popular WebElements web site.

The Periodic Table first reached popular video during Jacob Bronowski's television series, The Ascent of Man. Bronowski's explanation of the Periodic Table is available on YouTube [2]. Now, a group of five chemists from the University of Nottingham has teamed with a BBC video journalist to make the Periodic Table interesting to the video game generation [3-4]. They've launched a web site containing a video demonstration for each element, and these are also accessible from YouTube. Quite a few video elements have been posted already, and their videos have had nearly 2.5 million viewings. They intend to include all 118 elements [5].

The project team has decided to revise their web site on a continuing basis. For example, they showcased gold, silver and bronze during the Beijing Olympics. Of course, bronze is not an element, so they've extended their original concept into the province of materials science. As flashier demonstrations are devised, they will revise the videos already posted.

References:
1. Mark R. Leach, Database of Periodic Tables.
2. Jacob Bronowski, The Periodic Table (1973, via YouTube, and blocked by Honeywell).
3. Sandra Porter, "It's elemental: Chemistry movies on YouTube" (Science Blogs, August 22, 2008).
4. Martyn Poliakoff, Pete Licence, Stephen Liddle, Debbie Kays, Neil Barnes and Brady Haran, "Periodic Table of Videos". If YouTube access is blocked, the videos are available here.
5. Brady Haran - How the project started.

September 22, 2008

Graphene Ultracapacitors

Few electrical engineers have heard of the Dutch physicist, Pieter van Musschenbroek (1692-1761). Van Musschenbroek was the inventor of the Leyden jar, the first practical capacitor. The capacitor is the electrical circuit element that stores charge, and it facilitated the first studies of electrical phenomena. In electronic circuitry, capacitors in combination with inductors act as resonators; and capacitors in combination with resistors or inductors act as signal filters. Their fundamental property of charge storage makes capacitors useful as a battery replacement in many applications, such as a back-up power supply for computer memory. Capacitors may even serve to power automobiles.

There are two equations that summarize the energy storage ability of capacitors. The first relates the capacity for charge storage, simply called the capacitance, to the dimensions and materials of construction of a parallel plate capacitor.

C = κ ε0 (A/d),

where C is the capacitance, κ is the dielectric constant of the material between the parallel plates, ε0 is the permittivity of free space (8.854 picofarads per meter), A is the area of the plates, and d is their separation. It can be seen that the key to high capacitance is a large plate area and a small separation between plates. More on this, later.

The second equation relates the stored energy of a capacitor to its capacitance and its voltage

W = (1/2) C V²,

where W is the energy stored in joules and V is the voltage. High capacitance and high voltage lead to a large energy storage, but there's a limit to how high the voltage can be. Dielectric materials have a characteristic breakdown field, called the dielectric strength and typically expressed in kV/mm, at which the insulating property of the dielectric is destroyed. Air has a dielectric strength of only 3 kV/mm, but the dielectric strength of Teflon is 60 kV/mm.
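
Putting the two equations together, here's a back-of-the-envelope sketch in Python; the geometry is hypothetical, chosen only to exercise the formulas:

  EPS0 = 8.854e-12  # permittivity of free space, F/m

  def capacitance(kappa, area, gap):
      """Parallel-plate capacitance: C = kappa * eps0 * A / d."""
      return kappa * EPS0 * area / gap

  def stored_energy(c, volts):
      """Energy stored in a capacitor: W = (1/2) C V^2, in joules."""
      return 0.5 * c * volts ** 2

  # One square meter of plates separated by a 10-micrometer Teflon film
  # (kappa ~ 2.1), charged to half of Teflon's ~600 V breakdown voltage
  # at that thickness (all values assumed for illustration)
  c = capacitance(2.1, 1.0, 10e-6)
  print(c)                        # ~1.9 microfarads
  print(stored_energy(c, 300.0))  # ~0.08 joules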

A team of scientists at the University of Texas has increased the capacitance of parallel plate capacitors by increasing the effective area of the capacitor plates through use of graphene as the plate material [1]. Graphene is a crystalline form of carbon, an extremely light element. The use of graphene allowed a high specific plate area of more than 7,500 square meters per gram. Their graphene was modified to exhibit isolated six-membered rings linked by poly-p-orbital bonds. This increased the surface area from the standard graphene value of 2,630 m²/g [2]. Whether such thin sheets will have sufficient mechanical strength and electrical conductivity for practical application remains to be seen.

This work is scheduled for publication in the October 8, 2008, issue of Nano Letters [3].

References:
1. Daniel Vargas, "New Carbon Material Shows Promise of Storing Large Quantities of Renewable Electrical Energy" (University of Texas Press Release, September 16, 2008).
2. Hee K. Chae, Diana Y. Siberio-Perez, Jaheon Kim, YongBok Go, Mohamed Eddaoudi, Adam J. Matzger, Michael O'Keeffe and Omar M. Yaghi, "A route to high surface area, porosity and inclusion of large molecules in crystals," Nature, vol. 427, no. 6974 (February 5, 2004), pp. 523-527.
3. Meryl D. Stoller, Sungjin Park, Yanwu Zhu, Jinho An, and Rodney S. Ruoff, "Graphene-Based Ultracapacitors," Nano Lett. (To Be Published, 10.1021/nl802558y).

September 19, 2008

Subatomic Free Will

Possibly the only experience most people have with scholastic philosophy is the question of how many angels can dance on the head of a pin. Framed as it is, the question seems silly, but that's the idea the opponents of scholastic philosophy wanted to convey. The actual question relates to whether or not angels are material beings (taking up space). Of course, the question can be recast into a modern day secular equivalent of how many sub-atomic particles dance on the head of a pin. The number is large, but it's also indeterminate, since some particles are "virtual particles" that zap in and out of existence on a short time scale. These particles are both there, and not there, much like Alice's Cheshire Cat. There's as much philosophy in the secular pin head as in the theological one, but even I was surprised to find an article that examines the free will of subatomic particles [1].

Free will, as defined by philosophers, is the ability of "rational agents" to exercise internal control over their decisions and actions based on those decisions. Physicists are not philosophers, so they bin things according to whether the outcome of an operation depends on some initial physical conditions (predestined) or doesn't (free will). Rational agents are simply objects that obey the laws of physics. If we live in a strict "clockwork universe" in which everything follows an initial plan, these objects don't have free will. In 2006, John H. Conway and Simon Kochen, two mathematicians from Princeton University, published a paper in the journal, Foundations of Physics, that muddies the distinction between these two definitions of free will [2]. They offered a proof that our having free will implies that a type of elementary particle must have free will also. Not surprisingly, quantum mechanics comes into play.

Conway and Kochen's proof of this Free Will Theorem depends on the finite speed of information transfer (the speed of light), the behavior of spin-one elementary particles, and the idea of quantum entanglement of particles. I reviewed quantum entanglement, what Einstein called "spooky action at a distance," in a previous article (Quantum Weirdness, August 13, 2008). The essence of their argument is that we only learn the spin state of a particle by measuring it. Once we measure it, we know the spin state of its entangled partner, since that state will be identical because of entanglement. According to the Schroedinger Cat model, the particle had all possible spins before the measurement, and our measurement collapsed the wave function to give the measured spin state. Conway and Kochen use a purely mathematical argument that our free choice of orienting our laboratory apparatus implies that the spin orientation of the particle cannot be predetermined by its history; ergo, the particle has free will in the physical sense of the term. If this short summary is not very satisfying, realize that their paper is thirty-one pages long [2]. Conway and Kochen have gone a step further with publication of the "Strong Free Will Theorem," in which they modified their starting axioms using concepts from Special Relativity [3].

What is my opinion of free will? I believe that humans may be free to choose a certain path at the moment, but their supposedly free choices are severely constrained by even minor things that have happened in the past. As I wrote in a previous article (Contingency, June 4, 2008), something as simple as a sneeze has tremendous consequences for the future.

References:
1. Julie Rehmeyer, "Do subatomic particles have free will?" (Science News Online, August 15, 2008).
2. J. Conway and S. Kochen, "The Free Will Theorem," Found. Phys. vol. 36 (2006), 1441-1473; available as a PDF file here.
3. John Conway and Simon Kochen, "The Strong Free Will Theorem" (arXiv preprint, July 21, 2008).
4. Free Will Theorem (Wikipedia).

September 18, 2008

Mothers and Math

What's the best predictor of a child's mathematical achievement at the end of elementary school? It's the mother's education. A recently published study [1-2] on the mathematics performance of children in England revealed that an educated mother is twice as important as an educated father as a predictor of a child's mathematics performance at age ten. This is possibly because a mother spends more time with the children than the father does. The study examined a random sample of 2,558 children drawn from a population of 540,000 students; the sample excluded children who did not attend a preschool. These students attended 141 preschools and 960 elementary schools. There were too few students in the sample to assess differences due to ethnicity, but quite a few other factors were shown to be important, as listed below with their effect sizes.

• Mother's Education (0.50)
• Home Learning Environment (0.40)
• Primary School Effectiveness (0.39)
• Socioeconomic Status (0.36)
• Family Income (0.30)
• Preschool Effectiveness (0.26)
• Father's Education (0.23)
• Birth Weight (0.12)
• Gender (0.08)

Some general factors for a child's progress in numeracy were noted [2].

• Girls made more progress than boys.
• Children older at school entry assessment made more progress.
• Children with low birth weight made less progress than other children.
• Children with reported developmental problems in the first 3 years made less progress.
• Children whose mothers have academic qualifications, especially at age 16 and 18, made more progress than children whose mothers have no qualifications.
• Children from families where the parents' highest job title is classified as either skilled non-manual or semi-skilled manual made less progress than children from professional class families.
• Children with a higher home learning environment score show more progress.
• Children who spent more months in preschool made more progress.
• Children attending pre-school centers where there were a higher proportion of children whose mothers have degrees, higher degrees or other professional qualifications made more progress.

This research was funded by the UK Department for Children, Schools and Families [3].

References:
1. Edward C. Melhuish, Kathy Sylva, Pam Sammons, Iram Siraj-Blatchford, Brenda Taggart, Mai B. Phan, and Antero Malin, "THE EARLY YEARS: Preschool Influences on Mathematics Achievement," Science, vol. 321, no. 5893 (August 29, 2008), pp. 1161f.
2. Supporting Online Material for Reference 1.
3. Department of Early Childhood and Primary Education.

September 17, 2008

How Many Astronauts Does it Take...

...To replace a light bulb? Probably just one. The problem is that a replacement bulb must be available. The Japanese science module of the International Space Station, called the "Kibo Module," has twenty-one fluorescent lamps, much like the "shop lights" you might have in your own home, and half of these have burned out. Just one of the four bulbs in a storage room is working, and eleven of the seventeen in the laboratory area are still lighted. There are no spares available on the ISS, giving new meaning to the phrase, "the darkness of space."

Tetsuro Yokoyama, a spokesman for the Japan Aerospace Exploration Agency (JAXA), thinks that the vacuum inside the bulbs has deteriorated, which is surprising. Fluorescent bulbs, unlike incandescent bulbs, are not vacuum bulbs. They are filled with an ionizing gas, albeit at a small percentage of atmospheric pressure. These gases, typically argon with additions of xenon, neon, or krypton, are not that prone to leakage.

The module has been in place for less than six months, but the fluorescent lamps were produced in 1997 by a US company, so shelf life may be an issue. JAXA is working on light-emitting diode replacements for the bulbs, but these won't be ready until 2010, or later. The problem is not that it's hard to replace such bulbs with LEDs; in fact, you wonder why LEDs weren't used in the first place. The problem is space-qualification of such hardware. Every new system must be checked for numerous failure modes, such as thermal and vibration failures and flammability, and for problems, such as out-gassing, that may affect other systems. A Space Shuttle flight is scheduled for November, and six replacement bulbs will be included in the manifest.

It's perhaps not surprising that the topic, lightbulb jokes, has its own Wikipedia page.

References:
1. Number of burned-out lamps on Japanese space lab nears half (Mainichi Daily News, September 17, 2008).
2. Loretta Hidalgo Whitesides, "Darkness Threatens Japanese Space Lab Science" (Wired, September 15, 2008).

September 16, 2008

Three Quarks for Muster Mark

The first evidence that the motion of electrons causes magnetism was the discovery by Hans Christian Oersted that a current-carrying wire deflects a compass needle. Of course, Oersted didn't know that this effect was caused by electrons, since the discovery of the electron by J. J. Thomson did not occur until 1897, about fifty years after Oersted's death. The magnetism of materials such as iron was known to the ancients, and it took a little time after Thomson to realize that the motion of electrons causes this magnetism, too. The electron motion in this case is two-fold: the electrons spin, and they orbit the nuclei of their atoms. The orbital motion of electrons around the nucleus results in a net angular momentum that gives rise to a magnetic moment in addition to that from spin.

Protons have spin, but the magnitude of the proton magnetic moment from spin (1.410606633 x 10⁻²⁶ J/T) is considerably less than that for the spin component of the electron (-928.476 x 10⁻²⁶ J/T). Proton spin is important to such analytical tools as nuclear magnetic resonance (NMR) spectroscopy, but it's been a mystery as to how proton spin arises. Electrons are not composite particles, so their spin is an intrinsic property, but protons are composed of three bound particles known as quarks. Any property of the proton must arise from properties of the quarks, so the spin of a proton might arise from the net spin of its quarks, or their net angular momentum.

The proton is one of about two hundred heavy particles (hadrons) composed of different types of quarks bound together. If the name, "quark," sounds funny, it's because it was intended to be funny. Physicists are possibly the most playful scientists, and Murray Gell-Mann, who first postulated the quark model of hadrons, said the word was the "sound atomic ducks make." The spelling came from the novel, Finnegans Wake, by James Joyce, in which the phrase, "Three quarks for Muster Mark!," appears (Book 2, Episode 4) [1]. Since three quarks make a proton, the appearance of "three" in that phrase probably helped. Gell-Mann was awarded the 1969 Nobel Prize in Physics for his work on the classification of elementary particles.

Experiments since the 1980s indicate that quark spin can explain only a third of the spin of a proton. Now, a scientist from the Thomas Jefferson National Accelerator Facility of the US Department of Energy has developed a proton model in which more than half of the spin of the proton arises from the motion of its quarks [2]. This model by Anthony Thomas builds on previous work of Thomas and Fred Myhrer of the University of South Carolina that suggested that the angular momentum of quark motion could explain the missing spin [3]. Thomas' present work, published in Physical Review Letters [4], explains the mechanism for generation of this angular momentum by the three proton quarks. The model agrees with recent experimental evidence and supercomputer calculations.

References:
1. Glenn Elert, "Quantum Chromodynamics-The Physics Hypertextbook" (1998-2008).
2. Kandice Carter, "Moving Quarks Help Solve Proton Spin Puzzle" (Thomas Jefferson National Accelerator Facility Press Release, September 11, 2008).
3. F. Myhrer and A.W. Thomas, "Resolution of the Proton Spin Problem" (arXiv preprint, September 26, 2007).
4. Anthony W. Thomas, "Interplay of Spin and Orbital Angular Momentum in the Proton," Phys. Rev. Lett. vol. 101 (September 5, 2008), 102003.

September 15, 2008

The Integrated Circuit at Fifty Years

Perhaps I shouldn't admit it, but most of my first circuit designs used vacuum tubes. Of course, I was twelve years old at the time, so I'm not as ancient as that sounds. I built my first transistor circuit at about the same time, but it didn't perform well. However, transistor circuits, which required low voltage battery power, were much safer than tube circuits with their 400 volt "B" supply. Last week marked the fiftieth anniversary of the first demonstration of the integrated circuit, the ultimate expression of transistor technology. Next month will mark the fiftieth anniversary of NASA. The evolution of NASA and the integrated circuit coincide by more than just chance, since the integrated circuit was a significant enabler for spaceflight.

It might have been possible to fly to the moon via vacuum tube if the transistor had not been invented. At the time that simple integrated circuits were selling for a hundred dollars each, RCA had made quite a few advances in miniaturizing the vacuum tube. Their nuvistor, a pencil-eraser-sized vacuum tube, was used for a time in hybrid vacuum tube-transistor circuitry, since it had some advantage over the transistor in terms of input impedance and high frequency operation. If the same R&D dollars had been put into vacuum tube development as transistor development, electronics might have gone down a different path. Miniature cold-cathode vacuum tubes could have replaced transistors in many applications. The Spindt cathode might have allowed a vacuum tube analog of the integrated circuit, although the density of present day integrated circuits might not have been possible.

Jack Kilby built the first integrated circuit shortly after joining Texas Instruments. As the story goes, Kilby didn't have any vacation time, since he was newly hired, so he was able to work in the laboratory for an extended period without interruption. Although Kilby demonstrated the first IC, the idea of the integrated circuit was first published in 1952 by Geoffrey W.A. Dummer, who was working on radar for the British Ministry of Defence. This was just five years after the invention of the transistor. Dummer attempted to build an integrated circuit in 1956, but he was unsuccessful. Kilby demonstrated the first integrated circuit, using germanium, on September 12, 1958. Robert Noyce of Fairchild Semiconductor built the first silicon integrated circuit shortly thereafter, not knowing of Kilby's work. Kilby won the 2000 Nobel Prize in Physics for the integrated circuit, ten years after Noyce's death. The electrical engineering community has consistently recognized the IC as an independent discovery by both Kilby and Noyce, and both men were comfortable with the idea of co-inventor status.

Kilby's circuit was a phase-shift oscillator using a single transistor, resistors formed from the bulk resistance of germanium, and a few capacitors made from reverse-biased p-n junctions. The entire circuit was on a 7/16-inch by 1/16-inch germanium strip mounted on a glass slide. As is typical of laboratory experiments, the chip looked like a mess, but the oscillator gave a 1.3 MHz signal, which was enough to prove the concept.

References:
1. Ben Hardwidge, "Integrated circuit is 50 years old today" (Custompc.co.uk, September 12, 2008).
2. Ary Stuifbergen, "What If He Had Gone on Vacation," (Ary Stuifbergen's blog, September 1, 2008).

September 12, 2008

Micro-Microscope

In the early days of fracture mechanics (which coincided with my graduate education in the mechanics of materials), research sometimes involved use of a streak camera. A mechanical streak camera overcomes the problem that it's not possible to move and stop film fast enough to generate snapshot images at high speed. What's done instead is to move the film at a continuous high rate and form the images by using a rotating mirror to expose thin slits of the image at the proper places on the filmstrip. Mechanical streak cameras have been replaced by their electronic counterparts, but this imaging technique has been used to good advantage by a research team from the California Institute of Technology and the École Polytechnique Fédérale de Lausanne in a high-resolution, chip-scale microscope for viewing cells.

The lensless microscope, which can reportedly be built for about ten dollars, is formed from a CCD imaging sensor placed behind an array of micrometer-sized holes in a membrane. As particles in a liquid flow past this assembly, they block light at the holes. As a result, the particles are scanned, line by line, just as in a streak camera. It takes several hundred scans to image a human cell, and all these partial images are combined to yield a composite image with one micrometer resolution. The research team has tried two techniques to shuttle the cells past their microscope: gravity and electrophoresis. They've demonstrated the ability of their microscope to image objects larger than cells by imaging Caenorhabditis elegans, a millimeter-long roundworm.
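
The line-by-line reconstruction is easy to mimic in a few lines of Python. In this toy model (idealized geometry and made-up numbers, not details from the actual device), each pinhole records a time trace of transmitted light as an opaque "cell" drifts past, and undoing each pinhole's offset along the flow direction reassembles the image:

  import numpy as np

  rows, cols = 12, 20
  obj = np.ones((rows, cols))   # a bright field...
  obj[3:9, 6:14] = 0.3          # ...partly blocked by an opaque "cell"

  # Pinhole r sits r steps downstream, so its time trace is row r of the
  # object, delayed by r time steps
  trace = np.ones((rows, cols + rows))
  for r in range(rows):
      trace[r, r:r + cols] = obj[r]

  # Reconstruct by shifting each pinhole's trace back into register
  image = np.stack([trace[r, r:r + cols] for r in range(rows)])
  assert np.allclose(image, obj)   # the scanned image matches the object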

The microscope needs a computer to process and display the image, but the processing and display abilities of modern cellphones and PDAs are sufficient. The low cost of such a combined microscope-computer system is important for one foreseen application, diagnosis of blood parasites causing sleeping sickness (trypanosomiasis) and malaria in sub-Saharan Africa. It has the potential also to transform diagnostics of cell response to pharmaceuticals from a batch process to a flow process.

Caltech is negotiating with semiconductor companies for production of the sensor, which now takes a student two days to fabricate. Further research is underway to make a color microscope (the original is just monochrome), which will allow fluorescence imaging. There is also work on automated identification of cells and on increasing the image resolution. Although the wavelength of visible light is a limiting factor of resolution in a conventional microscope, software tricks invoking sub-pixel resolution will work with this imaging technique. The team is hopeful that a commercial version will become available in about five years.

References:
1. Ira Flatow, "Micro Microscope" (Science Friday, August 1, 2008).
2. J. R. Minkel, "Lensless On-Chip Microscope Inspired by "Floaters" in the Eye" (Scientific American Online, July 29, 2008).
3. Alexis Madrigal, "Mini-Microscope Could Lead to Cell-Sorting Implants" (Wired, July 28, 2008).
4. Katherine Bourzac, "Tiny $10 Microscope" (Technology Review, July 30, 2008).
5. Xiquan Cui, Lap Man Lee, Xin Heng, Weiwei Zhong, Paul W. Sternberg, Demetri Psaltis, and Changhuei Yang, "Lensless high-resolution on-chip optofluidic microscopes for Caenorhabditis elegans and cell imaging," Proc. Natl. Acad. Sci., vol. 105, no. 31 (August 5, 2008), pp. 10670-10675.

September 11, 2008

LHC Launch

My daily commute to work is about eighteen miles, which is very close to the circumference of the Large Hadron Collider (LHC), a particle accelerator located near Geneva, Switzerland, and managed by the European Organization for Nuclear Research (CERN). As I mentioned in a previous article (Traffic Modeling, April 7, 2008), my mean commute time is 38.7 minutes with a standard deviation of 4.6 minutes. Protons accelerated in the LHC move at nearly the speed of light, so they span seventeen miles in about 90 microseconds.
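
That timing is a one-line calculation, using the published 26.7 km circumference:

  CIRCUMFERENCE = 26_659.0        # LHC ring circumference, m (about 16.6 miles)
  LIGHTSPEED = 299_792_458.0      # speed of light, m/s

  print(CIRCUMFERENCE / LIGHTSPEED * 1e6)   # ~88.9 microseconds per revolution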

The Large Hadron Collider accelerated its first beam of protons in the early morning hours of Wednesday, September 10, 2008 [1-4]. As is typical, the physicists pulled an "all-nighter." The LHC is still many weeks away from its principal objective of colliding heavy subatomic particles (hadrons) at these great speeds, so there are still a few anxious people who worry about possible unintended consequences of our meddling with Nature, an idea I outlined in a recent article (The Large Hadron Collider, July 14, 2008). It's possible that the LHC will create particle collisions that haven't been seen in the universe since the Big Bang.

Everything about the Large Hadron Collider is big. The LHC is capable of colliding proton beams with an energy of 14 TeV (tera-electron volts, or 10¹² eV). Not limited to just protons, the LHC can collide heavier Pb ions at up to 2.8 TeV per nucleon [5]. The accelerator ring incorporates 1,600 superconducting magnets, some as heavy as 27 tons, to steer the beam in a circle. These superconducting magnets require almost a hundred tons of liquid helium for operation.

The cost of construction for the LHC was $8 billion spent over fourteen years. The United States joined the consortium of about twenty nations building the LHC by contributing $531 million. The LHC will make the Tevatron at Fermilab, the main US particle accelerator, essentially obsolete. The US attempted to build its own next-generation accelerator, the Superconducting Supercollider (SSC), for about $11 billion. The SSC would have been operational in 2001, but it was cut from the government budget in 1993, a victim of lingering Reaganomics. Particle physics in the US has not been the same, and String Theory, for which no hardware is required, has now run rampant.

Accelerating particles is one thing, but detecting the debris of the collisions is another. The LHC has four principal, house-sized detectors.

ATLAS, short for "a toroidal LHC apparatus," is a general purpose detector that will look for signs of anything new. As they say, if we knew what we would find, we wouldn't need to do the research.

CMS, the "Compact Muon Solenoid," is another general purpose detector designed to search for the Higgs boson, the elementary particle thought to give all other particles their mass.

ALICE, "A Large Ion Collider Experiment," is a detector designed to investigate the quark-gluon plasma that existed at a very early stage in our universe.

LHCb, "Large Hadron Collider beauty," is a detector that will try to discover why the universe has such a large ratio of matter to antimatter. In this case, "beauty" refers to a property of the bottom quark.

Want to build your own Large Hadron Collider? Scientists have learned to document their experiments in detail, and this particle accelerator is no exception. Complete plans are available, at no charge, on the internet [6-7]. Since commissioning of the LHC is an important cultural event, PhD Comics has a five-part series devoted to the LHC [8].

References:
1. Alexander G. Higgins, "Largest particle collider conducts successful test" (Washington Post Online, September 10, 2008).
2. Dennis Overbye, "Scientists Launch Particle Collider" (New York Times Online, September 11, 2008).
3. Paul Rincon, "Big Bang experiment starts well" (BBC News, September 10, 2008).
4. Jeff Sherwood, "First beam for Large Hadron Collider" (US DOE Press Release, September 10, 2008).
5. Lyndon Evans and Philip Bryant, "LHC Machine - The CERN Large Hadron Collider: Accelerator and Experiments" (August 14, 2008).
6. Glennda Chui, "Free online: Full documentation for the Large Hadron Collider," Symmetry Magazine, vol. 5, no. 3, (August 25, 2008).
7. The CERN Large Hadron Collider: Accelerator and Experiments.
8. PhD Comics five-part series on the LHC.

September 10, 2008

Room Temperature Ice

The conventional way to change most liquids to solids at temperatures slightly above their melting point is to apply pressure. This trick doesn't work with water, since water ice has a larger volume than liquid water. A quick look at the phase diagram of water shows that application of pressure will depress the melting point. I mentioned ice-nine in a previous article (Ice-Nine and Metallic Water, October 18, 2006). Ice-nine, not to be confused with an actual phase of water called ice IX, is a fictional phase of ice in Kurt Vonnegut's novel, Cat's Cradle. In Vonnegut's novel, crystals of ice-nine, which melt at 45.8 °C, act as seed crystals to spontaneously turn all of Earth's water solid. In contrast, ice IX is a dense solid (1.16 g/cc) that exists only at very low temperatures (below 140 K) at pressures from 200-400 MPa.

There have been some indications that water confined to nano-dimensions starts to behave less like a liquid and more like a solid. In 2006, a team of scientists from the University of Minnesota and Sandia National Laboratories reported that water confined at room temperature between hydrophilic surfaces at nanometer spacing had a viscosity seven orders of magnitude greater than that of bulk water [1]. They attributed this result to a tetrahedral arrangement of the atoms of the water molecules, and hydrogen bonding to the surfaces. I reported on research on nano-confined water at the Georgia Institute of Technology in a previous article (Nanoviscosity, May 19, 2008). The Georgia Tech experiments showed that the viscoelastic relaxation time for confined water layers three molecules thick was comparable to that of supercooled water at 170-210 K [2].

Now, two scientists from the Kamerlingh Onnes Laboratory at Leiden University (The Netherlands) have reported the formation of ice at room temperature in thin films of water on graphite [3]. In their experiments, they scanned the sharp tungsten tip of an atomic force microscope across graphite in a humid atmosphere. At slow scanning speeds of the tip, they observed a stick-slip motion with a period of 0.38 nm, much different from the lattice periodicity of graphite. Their analysis of the data indicates a repeated fracture and healing of an ice condensate that forms between the tungsten tip and the graphite surface.

All this talk about the unusual properties of water in nano-confinement reminds me of the "discovery" of polywater in the late 1960s. Experiments showed that water condensed in thin capillary tubes had a lower freezing point, a higher boiling point, and unusually high density and viscosity. Further investigation showed that polywater was just water with unintentional impurities. When great care was taken to eliminate impurities from the water and experimental apparatus, polywater could not be formed [4].

References:
1. R. C. Major, J. E. Houston, M. J. McGrath, J. I. Siepmann, and X.-Y. Zhu, "Viscous Water Meniscus under Nanoconfinement," Phys. Rev. Lett., vol. 96 (May 5, 2006), 177803.
2. Tai-De Li and Elisa Riedo, "Nonlinear Viscoelastic Dynamics of Nanoconfined Wetting Liquids," Phys. Rev. Lett. vol. 100 (March 13, 2008), 106102.
3. K. B. Jinesh and J. W. M. Frenken, "Experimental Evidence for Ice Formation at Room Temperature," Phys. Rev. Lett., vol. 101 (July 15, 2008), 036101.
4. Polywater (Wikipedia).

September 09, 2008

Hot, Flat, and Crowded

Thomas Friedman, a three-time winner of the Pulitzer Prize and a columnist for the New York Times, has just released a new book [1], "Hot, Flat, and Crowded: Why We Need a Green Revolution--and How It Can Renew America." Friedman's book is about the troubles that the United States faces, and how many problems can be solved by the simple expedient of mobilizing our technical resources to achieve a sustainable energy economy. The troubles, as named in the book's title, are global warming (Hot); globalization of industry and finance and the rise of a world-wide middle-class (Flat); and rapid population growth (Crowded). Friedman is also concerned that these troubles extend beyond the first-order effects of species extinction, loss of biodiversity, energy poverty and climate change, to second-order political effects, including the strengthening of what he calls "petrodictatorships." Petrodictatorships, among which are Russia, Venezuela and Iran, are countries that use oil revenues to actively work against US interests [2-3].

I agree with much of what Friedman says, principally because I've said these same things repeatedly since the "oil crisis" of 1973. At that time, the US decided to put some of its technical expertise into achieving energy independence, although the focus was largely on securing the oil supply. Thus, we were able to achieve a two-fold increase in automobile mileage efficiency, and we started to develop other oil sources, such as oil shale and tar sands. Unfortunately, when the price of oil declined again, all this research was turned off. Ronald Reagan went so far as to have Jimmy Carter's solar panels removed from the White House roof. Now, we're left with the chant at the Republican National Convention, as led by Rudy Giuliani, "Drill, Baby, Drill." This is certainly no way to cast off our reliance on oil.

Friedman believes, as we used to say, that there are no problems, only opportunities. In a recent interview on National Public Radio [2], Friedman says that tax incentives should be put into place to encourage technical innovation in energy. He would like to see 100,000 inventors in their garage companies creating a hundred good ideas that would lead to a few great ideas for achieving energy independence. I mentioned the innovative and highly successful photovoltaic company, First Solar, in a previous article (First Solar, August 18, 2008). Friedman points out that First Solar, based in Phoenix, Arizona, decided to expand its operations, including 300 new engineering jobs, in Germany. There was no incentive for First Solar to expand in the US.

Friedman doesn't beat the usual global warming drum. He prefers, instead, to talk about "global weirding," which includes the heavier rains and heavier snowfalls caused by global warming. Citing species extinction rates a thousand times larger than those of the pre-industrial era, he says we are entering an age when we should "think like Noah" by preserving the last remaining representatives of the world's species. It's not just about energy anymore.

Lev Grossman, writing in Time Magazine, had this interesting observation [4] - "Friedman has finally decided to confront the global environmental crisis brought about by the deforestation that was necessary to print the zillions of copies of his bestselling paean to globalization, The World is Flat."

References:
1. Thomas L. Friedman, "Hot, Flat, and Crowded: Why We Need a Green Revolution--and How It Can Renew America" (Farrar, Straus and Giroux, September 8, 2008), 448 pages, ISBN-13: 978-0374166854.
2. Fresh Air from WHYY, "Thomas Friedman's Argument For Geo-Greenism" (NPR, September 8, 2008).
3. Danny Shea, "Tom Friedman Calls For Green Revolution" (Huffington Post, July 4, 2008).
4. Lev Grossman, "Hot, Flat and Crowded: Why We Need a Green Revolution — and How It Can Renew America" (Time Magazine, August 19, 2008).

September 08, 2008

Ceramic Microwave Absorber

As everyone knows, you shouldn't put metal objects into a microwave oven, since they are too efficient absorbers of microwave energy. The microwave popcorn manufacturers bend this rule somewhat by coating their popcorn bags with a thin layer of aluminum that allows additional heating in the bag to melt the butter and condiments [1]. The majority of the energy is absorbed by the corn kernels themselves, since they have a water content of slightly over ten percent. Water is an excellent absorber of microwave radiation, as many viewers of direct-broadcast satellite television have discovered during rainstorms.

The frequency of an oven's microwave radiation is 2.45 GHz, giving a wavelength of 12.24 cm. Spoons and forks have a perfect dimension to act as antennas at this wavelength. There will be a high electric field concentration at any sharp points on the metal, and these could cause sparking. The action is much like that of a lightning rod. The sparks you see are the dielectric breakdown of air, which has a dielectric strength of 30 kV/cm. Yes, we're talking about thousands of volts here.
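
The numbers work out as below; the half-wave resonance length is my own illustrative addition, not a claim about any particular utensil.

  LIGHTSPEED = 299_792_458.0   # speed of light, m/s
  FREQUENCY = 2.45e9           # oven magnetron frequency, Hz

  wavelength = LIGHTSPEED / FREQUENCY
  print(wavelength)        # ~0.1224 m, i.e., 12.24 cm
  print(wavelength / 2)    # ~6.1 cm, the half-wave resonant length of a simple antenna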

You don't need an electrical conductor to absorb microwaves. Magnetic ceramics, such as ferrites, are efficient microwave absorbers. For many years I worked with a microwave ceramic material, yttrium iron garnet (YIG, Y₃Fe₅O₁₂), preparing thin single crystals of this material for magnetostatic wave devices [2]. Sridhar Komarneni, a professor at Penn State, has teamed with ceramists at the Saga Ceramic Research Laboratory (Saga, Japan) to bring similar materials into the kitchen as microwave cookware [3-4]. They developed a ceramic variant of the popcorn bag principle to make microwave cookware that absorbs just the right amount of energy for heating.

Their material is formed from petalite (LiAlSi₄O₁₀) and magnetite (Fe₃O₄). Petalite is an appropriate material for this application, since it has a very low thermal expansion coefficient, so it is resistant to thermal shock. After heating, the magnetite converts to hematite (Fe₂O₃). Rice cooked twice as fast in microwave cookware made from this material, and since the cookware itself heats, the rice is kept warm for a longer time.

This material can be used as a susceptor, or microwave energy absorber, in other processes, such as decomposition of toxic organic chemicals into safer materials. The research team demonstrated this by decomposing cooking oil. Such environmental remediation can be done at a lower energy cost than conventional heating processes. Saga is commercializing this cookware.

References:
1. Charles H. Turpin, "Microwave popcorn popping bag," US Patent No. 5,302,790 (Apr 12, 1994).
2. D.J. Halchin, D.D. Stancil, D.M. Gualtieri, and P.F. Tumelty, "Magnetostatic Wave Propagation Losses in Thorium-Substituted YIG," J. Appl. Phys., vol. 57, no. 1 (1985), pp. 3724-3726.
3. A'ndrea Elyse Messer, "Ceramic material revs up microwaving" (Penn State Press Release, August 28, 2008).
4. Hiroaki Katsuki, Nobuaki Kamochi and Sridhar Komarneni, "Novel Energy-Saving Materials for Microwave Heating," Chem. Mater., vol. 20, no. 15 (2008), pp. 4803-4807. Available as a PDF file here.

September 05, 2008

Rules of Thumb

All scientists and engineers use rules of thumb when estimating. My favorite rule of thumb is that the density of almost all inorganic materials is five grams/cc. A close second is that the density of all room temperature liquids is one gram/cc. The first rule is easy to argue, since the mean density of the Earth is 5.515 g/cc. The composition of the Earth is dominated by the presence of the low atomic number elements aluminum (2.700) and silicon (2.330) in the crust, and iron (7.874) in the core. On average, you get a density near five. Water has a density of one, and other hydrogen-bonded liquids have a slightly lesser density, but still close to one. For example, octanoic acid has a density of about 0.91, and ethanol has a density of 0.789. I use many more rules of thumb in my specialty areas, such as Trouton's rule, which states that the entropy of vaporization (the enthalpy of vaporization divided by the normal boiling point) is about 85 J/mole-K; the Wiedemann-Franz Law, that the ratio of the thermal conductivity to the electrical conductivity of a metal is proportional to the temperature; the Dulong and Petit Law, that the specific heat of a material is always three times the gas constant (R) divided by the molar mass; and the general rule of thumb for loudness, that sound power must be increased by a factor of ten to get a sound that's twice as loud.

There's a web site devoted to rules of thumb for all topic areas, rulesofthumb.org. This web site has a database of about four thousand rules, and visitors are encouraged to submit their own. Registered visitors vote on whether or not a rule is added to the database. An example rule on the lead page is one I learned in my childhood - the temperature in Fahrenheit is 37 plus the number of chirps a cricket [1] makes in fifteen seconds (this rule appears in the short code sketch after the list below). Of course, the accuracy of this rule is limited by the season and the conditions under which a cricket will chirp. While on the topic of crickets, there's another rule that a quart jar will hold 1,000 crickets. Here are a few science and math rules from this web site:

• The shorter the life of an elementary particle, the more it costs to make (Rule 1966).

• Plan on a 10 percent loss of material for each step in a sequential chemical reaction (Rule 2014).

• If you can't repeat your own experiment, you are probably not doing science. If no one else can repeat your experiment, you are probably lying (Rule 2038).

• Among related species of mammals, the larger species will occur closer to the poles, and the smaller species will occur closer to the equator (Rule 2158).

• If you want to control something, measure it first (Rule 2857).

• For a small angle x measured in radians, sin x ≈ x (Rule 1662). [2]

• For mathematics professors, each published math paper is worth $10,000 in salary (Rule 1686).

• If you can't explain a mathematical theorem to a ten-year-old, you don't understand it yourself (Rule 1710). [3]

• If you're using people as subjects of experiments, use at least 30 to be statistically significant (Rule 1758). [4]

• In the field of mathematics, if you haven't made an important discovery by the time you're 22, you probably never will (Rule 1734).

• A whole number is divisible by 3 if the sum of its digits is divisible by 3 (Rule 3636, demonstrated in the sketch below). [5]
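Two of these rules are simple enough to run. Here's a minimal Python sketch of the cricket thermometer quoted above and the digit-sum test of Rule 3636; the chirp constants are the ones given in this post, and other versions of the cricket rule use slightly different numbers.

# Cricket thermometer: Fahrenheit temperature is roughly 37 plus the
# number of chirps counted in fifteen seconds (as quoted above).
def cricket_fahrenheit(chirps_in_15_seconds):
    return 37 + chirps_in_15_seconds

# Rule 3636: a whole number is divisible by 3 exactly when the sum
# of its decimal digits is divisible by 3.
def divisible_by_3(n):
    return sum(int(d) for d in str(abs(n))) % 3 == 0

print(cricket_fahrenheit(35))               # 72 degrees Fahrenheit
print(divisible_by_3(3636), 3636 % 3 == 0)  # True True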

On a similar theme is a new book, "Guesstimation: Solving the World's Problems on the Back of a Cocktail Napkin" [6], which was reviewed in a recent issue of Nature Physics [7]. This book is a tutorial in order-of-magnitude estimation of just about anything from first principles, although the reviewer feels that many of the physics examples have been "dumbed-down" to appeal to a wider audience.
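In the book's spirit, here's the classic Fermi estimate, piano tuners in Chicago, as a back-of-the-napkin Python sketch. Every input is a deliberately round guess; the method only promises the right order of magnitude.

# A classic order-of-magnitude (Fermi) estimate: piano tuners in Chicago.
population       = 3e6    # people in Chicago (a round guess)
people_per_home  = 3
homes_with_piano = 0.1    # one home in ten has a piano
tunings_per_year = 1      # each piano tuned about once a year
tunings_per_day  = 4      # a tuner services about four pianos a day
work_days        = 250    # working days per year

pianos = population / people_per_home * homes_with_piano
tunings_needed = pianos * tunings_per_year
tuner_capacity = tunings_per_day * work_days
print(round(tunings_needed / tuner_capacity))  # ~100 tuners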

References:
1. Orthoptera Ensifera Grylloidea Gryllidae. This is not the cricket copyrighted in near perpetuity by Disney, Orthoptera Ensifera Grylloidea Jiminy.
2. This is true for the tangent, also, since cosines near zero are one. This rule derives from the series expansion of the trigonometric function.
3. This is similar to a quotation from Bohr, "if a theory were any good it ought to be possible to explain it to a barmaid" (found in Barbara Lovett Cline, "Men Who Made a New Physics: Physicists and the Quantum Theory" (University of Chicago Press, 1987, ISBN 0226110273), p. 105).
4. As any Six Sigma practitioner knows, this applies to all objects, not just people.
5. It's fitting that the rule number, 3636, is itself divisible by three (3+6+3+6 = 18; 18/3 = 6; 3636/3 = 1212).
6. Lawrence Weinstein and John A. Adam, "Guesstimation: Solving the World's Problems on the Back of a Cocktail Napkin" (Princeton University Press, April 21, 2008).
7. Christoph Bergemann reviews Guesstimation: Solving the World's Problems on the Back of a Cocktail Napkin by Lawrence Weinstein and John A. Adam, Nature Physics, vol. 4, no. 8 (August, 2008), p. 585.

September 04, 2008

Radioactive Decay Surprise

Floyd Richtmyer, a Cornell University physicist prominent in the early twentieth century, made the following often quoted remark - "The whole history of physics proves that a new discovery is quite likely lurking at the next decimal place." A research team from Purdue University has analyzed the raw data on the radioactive decay rates of silicon-32 (32Si) and radium-226 (226Ra) obtained during extended measurements in the 1980s by physicists at Brookhaven National Laboratory and the Physikalisch-Technische Bundesanstalt, and they've found a periodic variation at the 0.1% level [1-2]. The surprising thing is that the period is one year. This appears to be the season for surprises in the study of radioactive decay. I reported last month in a previous article (Nonexponential Decay, August 21, 2008) the discovery of a mysterious oscillation in the radioactive decay of the promethium isotopes, promethium-140 (140Pm) and promethium-142 (142Pm), contained as ions in a storage ring. Such observations could presage the development of a new theory of radioactive decay.
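To give a feel for how small a 0.1% annual wobble is, and how it can still be pulled out of long-running count data, here's a minimal Python sketch with synthetic data; the amplitude, noise level, and fitting approach are my illustrative assumptions, not the Purdue team's actual analysis.

import numpy as np

# Synthetic decay-rate data: a flat rate with a 0.1% annual modulation
# plus measurement noise, sampled weekly for a decade.
rng = np.random.default_rng(0)
t = np.arange(0, 10 * 365, 7.0)                      # days
rate = 1.0 + 0.001 * np.cos(2 * np.pi * t / 365.25)  # 0.1% yearly wobble
data = rate + rng.normal(0, 0.0005, t.size)          # noisy measurements

# Least-squares fit of a one-year sinusoid: solve for the cosine and
# sine amplitudes, then report the recovered fractional amplitude.
w = 2 * np.pi / 365.25
A = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
coef, *_ = np.linalg.lstsq(A, data, rcond=None)
amplitude = np.hypot(coef[0], coef[1])
print(f"recovered modulation: {100 * amplitude:.3f}%")  # ~0.1%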

One member of the Purdue team that found this correlation is Ephraim Fischbach, whom I mentioned in two previous articles [3-4], one of which described his correlation of a possible gravitational anomaly called the "Fifth Force" with a material property known as hypercharge [5]. Their current finding seems to indicate that our distance from the Sun may be important to the radioactive decay process. In fact, one reason for the Purdue team's interest in this question is their discovery that a solar flare on December 13, 2006, significantly decreased the decay rate of manganese-54 (54Mn) in their laboratory [1]. They cite as additional evidence that measurements of the half-lives of titanium-44 (44Ti), silicon-32 (32Si) and cesium-137 (137Cs) done by various laboratories don't agree, perhaps because the measurements were done at different times of the year.

The Purdue team proposes two possible mechanisms [2]. The Sun may produce a field that changes the value of the fine structure constant with distance. This seems unlikely to me, since independent measurements of the fine structure constant agree at the parts-per-billion level. The other mechanism supposes an interaction of the Sun's neutrino flux with the radioactive decay process. A strong coupling of solar neutrinos and atomic nuclei would also explain the mysterious oscillations in the radioactive decay of the promethium isotopes.

There are still a few problems with any solar mechanism. The data are periodic with the time of year, but the maxima and minima are phase-shifted from the times of closest and farthest approach to the Sun. Also, the Earth's distance from the Sun varies by only a few percent, but the effect is seen at the 0.1% level. It will take much more experimental evidence to overturn the testimony of the Crocodile (Rutherford) that "the rate of transformation of an element has been found to be a constant under all conditions." [6]
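To put numbers on the distance argument above, a quick sketch, assuming Earth's orbital eccentricity of 0.0167 and a simple inverse-square law for any solar influence:

# Earth's orbital distance varies by a factor of (1+e)/(1-e) between
# aphelion and perihelion; an inverse-square solar flux then varies by
# the square of that ratio.
e = 0.0167                               # Earth's orbital eccentricity
distance_swing = (1 + e) / (1 - e) - 1   # ~3.4% peak to peak
flux_swing = ((1 + e) / (1 - e))**2 - 1  # ~6.9% peak to peak
print(f"distance variation: {100 * distance_swing:.1f}%")
print(f"inverse-square flux variation: {100 * flux_swing:.1f}%")
# Yet the decay-rate effect is only ~0.1%, and its phase doesn't match.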

References:
1. Do nuclear decay rates depend on our distance from the sun? (arXiv Blog, August 29, 2008).
2. Jere H. Jenkins, Ephraim Fischbach, John B. Buncher, John T. Gruenwald, Dennis E. Krause and Joshua J. Mattes, "Evidence for Correlations Between Nuclear Decay Rates and Earth-Sun Distance" (arXiv Preprint, August 25, 2008).
3. My Cellphone has a Sensor in It (January 31, 2008).
4. A Pound of Feathers (May 10, 2007).
5. Ephraim Fischbach, Daniel Sudarsky, Aaron Szafer, Carrick Talmadge, and S. H. Aronson, "Reanalysis of the Eötvös experiment," Phys. Rev. Lett., vol. 56, no. 1 (1986), pp. 3-6.
6. E. Rutherford, J. Chadwick, and C. D. Ellis, "Radiations from Radioactive Substances" (Cambridge University Press, 1930).

September 03, 2008

The North Side of the Cow

I was a member of the Boy Scouts of America when I was a child, somewhat in the tradition of the physicist, Werner Heisenberg, who was a member of the Young Bavarian League [1]. I became experienced with fire starting using flint and steel, and I learned a few woodsman tips, such as moss always growing on the north side of a tree. The reason, of course, is that in the Northern Hemisphere this is the dampest part of a tree. Nowadays, people look for satellite antennas, which point south, or use GPS. We now have a new direction finder made possible by satellite technology, but not in the way you might imagine. An analysis of Google Earth images shows that cows point north, or maybe south, but not in between [2-3].

Sabine Begall and Hynek Burda of the University of Duisburg-Essen, Germany, and a research team from the Czech Republic reviewed Google Earth images of 308 pastures in which they found 8,510 cattle, both grazing and resting [4]. They admit that finding cows was not easy, and the resolution of the images did not allow them to tell the head from the tail end of the cows, but the cattle aligned themselves mostly (60-70%) in a north-south direction. Critical to their observations was their finding that the alignment of cattle in the Southern Hemisphere was slightly canted towards east and west. This indicated that the alignment was towards magnetic north, rather than geographic north. Nearly three thousand wild deer were also observed on the ground in the Czech Republic. About two-thirds of these oriented their heads north while resting or grazing, and the rest faced south. The study was careful to rule out the effect of the Sun's position [4].
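For the statistically curious, alignment data like these are axial (a cow pointing north looks the same as one pointing south from a satellite), and a standard trick is to double the angles before testing for a preferred direction. Here's a minimal Python sketch of a Rayleigh-style test under that convention; this is a common approach for axial data, not necessarily the exact method of the paper.

import numpy as np

def axial_rayleigh(angles_deg):
    """Rayleigh test for axial data: double the angles so that body
    axes of 10 and 190 degrees count as the same orientation."""
    theta = 2 * np.deg2rad(np.asarray(angles_deg))
    n = theta.size
    r = abs(np.exp(1j * theta).mean())  # mean resultant length
    p = np.exp(-n * r**2)               # large-n approximation
    return r, p

# Simulated herd: 70 animals within ~20 degrees of the N-S axis,
# 30 oriented at random.
rng = np.random.default_rng(1)
herd = np.concatenate([rng.normal(0, 20, 70), rng.uniform(0, 180, 30)])
r, p = axial_rayleigh(herd)
print(f"mean resultant length r = {r:.2f}, p ~ {p:.1e}")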

Sensitivity to the Earth's magnetic field has been detected in bats, bacteria, bees, pigeons, sharks and whales, among others, and it may be ubiquitous among animals, although varying in intensity. Kenneth J. Lohmann, a biologist at the University of North Carolina, cautions that what's reported is a correlation. "Nothing like this has been observed before in cattle or in any large animal... To demonstrate conclusively that cattle have a magnetic sense, some kind of experimental manipulation will eventually be needed." Others have noted that pasture fencing may affect cattle orientation. My own home, for example, is laid out in a north-south direction. One Internet commentator said that these must have been spherical cows of uniform density at STP.

References:
1. J. J. O'Connor and E. F. Robertson, "Werner Karl Heisenberg" (St. Andrews University, UK).
2. Elizabeth Mitchell, "Cattle shown to align north-south" (BBC News, August 25, 2008).
3. Randolph E. Schmid, "Cows seem to know which way is north" (Boston.com, August 25, 2008).
4. Sabine Begall, Jaroslav Cerveny, Julia Neef, Oldrich Vojtech, and Hynek Burda, "Magnetic alignment in grazing and resting cattle and deer," Proc. Natl. Acad. Sci. USA (online, August 25, 2008, doi: 10.1073/pnas.0803650105).

September 02, 2008

Down and Out in Murray Hill

Murray Hill, New Jersey, is the headquarters of Bell Laboratories, now owned by Alcatel-Lucent, and previously a part of AT&T and Bell Telephone. There is a sign at the entrance to this laboratory that reads, "Lucent Technologies - Bell Labs Innovations." Someone will need to buy a can of spray paint and hide the "Innovations" part of the sign. Alcatel-Lucent has decided to withdraw from basic scientific research in materials science, physics and semiconductors. Bell Labs research will instead be narrowly focused on product areas with immediate application, such as software, computer networking, and wireless technologies [1].

When I first started the practice of science, Bell Labs was the premier industrial research laboratory in the United States. I interviewed at Bell Labs in 1977 during a protracted hiring freeze. My handler told me that I was a test case for breaking the hiring freeze. In any event, that didn't happen, so perhaps I should be thankful; but because of my research concentration, physical proximity to Bell Labs, and a brother who worked there for a time, I know quite a few Bell alumni. Most of these are retired or have joined other companies. It's reported that only four scientists remain in Bell's fundamental physics department.

The American Physical Society, for one, thinks the Alcatel-Lucent decision is very shortsighted, responding that "fundamental physics is absolutely crucial to computing" [1]. I can give my own example here. Towards the end of his life, the Nobel Physics Laureate, Richard Feynman, was a consultant for Thinking Machines Corporation, an MIT spin-off building a massively parallel computer called the Connection Machine. It was thought that seven units of an expensive and critical piece of machine hardware were required to make this machine work. The number was just a guess, since the machine architecture was a twenty-dimensional hypercube, and the computer scientists didn't see a way to calculate how many were really needed. Feynman did a calculation that proved that only five were needed, thereby reducing the computer's cost and complexity [3].
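The source reports that the hardware in question was a set of message buffers in the machine's packet router [3], but it doesn't reproduce Feynman's actual calculation. The flavor of such a buffer-sizing estimate is easy to convey, though. Here's a toy Python sketch with entirely made-up traffic parameters: if a router chip sees n possible incoming messages per cycle, each arriving with probability p, the chance of overflowing k buffers is a binomial tail.

from math import comb

def overflow_probability(n, p, k):
    """Probability that more than k of n independent messages arrive
    in one cycle (a toy stand-in for buffer overflow)."""
    return sum(comb(n, m) * p**m * (1 - p)**(n - m)
               for m in range(k + 1, n + 1))

# Illustrative numbers only: 20 wires, 10% chance of a message on each.
for k in (5, 7):
    print(f"P(more than {k} buffers needed) = "
          f"{overflow_probability(20, 0.1, k):.2e}")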

The list of Bell Labs accomplishments since its founding in 1925 is formidable. Foremost is the slew of Nobel Prizes awarded to its scientists.

• 1937 - Clinton Davisson for demonstrating the wave nature of matter

• 1956 - John Bardeen, Walter Brattain and William Shockley for inventing the transistor

• 1977 - Philip W. Anderson for theories of the electronic structure of glasses and magnetic materials

• 1978 - Arno Penzias and Robert W. Wilson for the discovery of cosmic microwave background radiation

• 1997 - Steven Chu for laser cooling of atoms

• 1998 - Horst Störmer, Robert B. Laughlin and Daniel Tsui for the discovery of the fractional quantum Hall effect

A few other accomplishments, not quite worthy of a Nobel Prize, are as follows:

• 1920 - The one-time pad cipher was invented by Gilbert Vernam. Claude Shannon later proved that it is unbreakable. (A short sketch of the cipher appears after this list.)

• 1931 - Karl Jansky became the father of radio astronomy.

• 1969-72 - Ken Thompson and Dennis Ritchie created the UNIX operating system, and Ritchie developed the C programming language.

• 1980 - Various Bell Labs researchers developed and patented TDMA and CDMA, essential for cellphone communications.
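Since the one-time pad is the rare cipher that fits in a few lines, here's a minimal Python sketch. It's a toy for illustration; the unbreakability Shannon proved holds only if the key is truly random, as long as the message, and never reused.

import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    """XOR one-time pad: encryption and decryption are the same
    operation, since XORing with the key twice restores the data."""
    assert len(key) >= len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"IDEA FACTORY"
key = secrets.token_bytes(len(message))  # fresh random pad
ciphertext = otp_xor(message, key)
print(otp_xor(ciphertext, key))          # b'IDEA FACTORY'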

It's been argued that Bell Labs was able to accomplish so much merely because it was funded by a huge monopoly, and such an industrial research organization is impossible today. Fundamental research is supposedly conducted more properly in government-funded academic laboratories and the few government laboratories that exist today. However, government funding of research has been sparse in recent years, and it appears that applied research takes precedence in the funds allocation. One Internet commentator remarked that the Alcatel-Lucent decision to stop fundamental research at Bell Labs was a typical MBA decision. "I once had a conversation with an MBA type in which he argued that food was not economically important because it only made up 3% of the Gross Domestic Product. I'd like to see what would happen if he reduced his food budget to zero."

References:
1. Priya Ganapati, "Bell Labs Kills Fundamental Physics Research" (Wired, August 27, 2008).
2. Geoff Brumfiel, "Bell Labs bottoms out," Nature, vol. 454, no. 7207 (August 21, 2008), p. 927.
3. W. Daniel Hillis, "Richard Feynman and The Connection Machine" (Physics Today, February, 1989, via KurzweilAI.net).
4. Bell Labs (Wikipedia).