
August 31, 2007

Labor Day

Monday, September 3, 2007, is the Labor Day holiday in the United States. Labor Day occurs on the first Monday in September, and it's been a US holiday since an Act of Congress in 1894. Although there isn't a physical definition of "labor," there's a physical definition of work. It's the product of the force applied to a mass and the distance the mass travels

W = F•d

when the force is in the direction of the motion, and it has the units of energy. Gaspard-Gustave Coriolis (the same Coriolis as in the Coriolis Effect) established this formal definition of work in 1835, although the principle was stated by Laplace in his description of tidal forces in 1778. Of course, real life is not as simple as W = F•d, since forces are not always in line with the resultant motion, so slightly more complicated equations apply. Note that motion is required for work, so you can push on a wall all you want, but no real work is being done, unless you push the wall over.
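When the force acts at an angle to the displacement, only the component of the force along the motion does work, so W = F•d•cos(θ). Here's a minimal Python sketch of this (the numbers are arbitrary examples):

import math

def work(force_newtons, distance_meters, angle_degrees=0.0):
    """Work done by a constant force at an angle to the motion:
    W = F * d * cos(theta), reducing to W = F * d when aligned."""
    return force_newtons * distance_meters * math.cos(math.radians(angle_degrees))

print(work(50, 10))       # force along the motion: 500.0 joules
print(work(50, 10, 60))   # at 60 degrees, half the work: ~250.0 joules
print(work(500, 0))       # pushing on an unmoving wall: 0.0 joules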

I'm taking a break from holding up my office wall, and I'll be away from my office all of Labor Day week. My next blog will be posted on Monday, September 10, 2007. Until that time, enjoy the following quotations about work.

Danilo Dolci - It's important to know that words don't move mountains†. Work, exacting work moves mountains.

†...or PowerPoint slides

Thomas Alva Edison - Genius is one percent inspiration and ninety-nine percent perspiration.

Thomas Alva Edison - Opportunity is missed by most because it is dressed in overalls and looks like work.

Barbara Ehrenreich - Personally, I have nothing against work, particularly when performed, quietly and unobtrusively, by someone else. I just don't happen to think it's an appropriate subject for an "ethic."

Ralph Waldo Emerson - Don't waste life in doubts and fears; spend yourself on the work before you, well assured that the right performance of this hour's duties will be the best preparation for the hours and ages that will follow it.

John W. Gardner - The society which scorns excellence in plumbing because plumbing is a humble activity, and tolerates shoddiness in philosophy because philosophy is an exalted activity, will have neither good plumbing nor good philosophy. Neither its pipes nor its theories will hold water.

Victor Hugo - A man is not idle because he is absorbed in thought. There is a visible labor and there is an invisible labor.

Thomas Jefferson - I'm a great believer in luck and I find the harder I work, the more I have of it.

Helen Keller - The world is moved along, not only by the mighty shoves of its heroes, but also by the aggregate of the tiny pushes of each honest worker.

Lane Kirkland - If hard work were such a wonderful thing, surely the rich would have kept it all to themselves.

Vince Lombardi - The dictionary is the only place that success comes before work. Hard work is the price we must pay for success.

Henry Wadsworth Longfellow - The heights by great men reached and kept / Were not attained by sudden flight, / But they, while their companions slept, / Were toiling upward in the night.

Francoise de Motteville - The true way to render ourselves happy is to love our work and find in it our pleasure.

Mark Twain - What work I have done I have done because it has been play. If it had been work I shouldn't have done it.

References:
1. Quotations in various categories are available at wisdomquotes.com.
2. Wikiquote is another collection of quotations, including the following by Leonard Bernstein, "To achieve great things, two things are needed: a plan, and not quite enough time."

August 30, 2007

The Essence of Materials Science

Chemists are typically concerned with little more than the composition of matter in the solid state, although the structure of matter on a molecular scale (e.g., enantiomers) sometimes enters into their discussions. This is a different paradigm from that of materials science. Materials science, in its usual definition, investigates the "relationship between the structure of materials and their properties." [1]

In a recent article [2] in Materials Today, Markus Buehler and Theodor Ackbarow of MIT present a nice summary table of the various defects that appear in materials and their consequences. Here's a slightly edited version of this table.

Crack
• Definition: Void-like inclusion or surface crack; region with reduced traction across a molecular plane
• Found in: Many crystals (metals, ceramics)
• Mechanical Consequences: Location of stress concentration; applied stress is multiplied many times at a crack tip; can lead to breaking or shearing of atomic bonds
• Comments: Cracks at various scales largely control the strength of crystalline materials

Dislocation
• Definition: Localized shear displacement in crystals
• Found in: Predominantly metals and other crystalline materials
• Mechanical Consequences: Mediates plastic deformation
• Comments: Nucleation of dislocations often competes with crack extension; determines whether a material is brittle or ductile

Grain Boundary
• Definition: Interface between two differently oriented crystals
• Found in: Crystalline materials (metals, ceramics)
• Mechanical Consequences: Mediates deformation, e.g. by grain boundary sliding, grain boundary diffusion at high temperatures
• Comments: Grain boundaries are particularly important at small grain sizes, e.g. nanocrystalline metals

Vacancy
• Definition: Missing atom in crystal lattice
• Found in: Crystalline materials
• Mechanical Consequences: Enhances diffusive material transport, which mediates deformation
• Comments: Critical for high temperature material behavior (e.g., creep)

Craze
• Definition: Region of localized yielding, formation of microvoids and fibrils
• Found in: Polymers, plastics
• Mechanical Consequences: Process prior to cracking, dissipates energy
• Comments: Increases toughness of plastics

Shear Band
• Definition: Small region inside material in which localized shear has occurred
• Found in: Polymers, metallic glasses
• Mechanical Consequences: Reduction of the material strength
• Comments: Mediates plasticity

References:
1. Definition of materials science (Wikipedia).
2. Markus J. Buehler and Theodor Ackbarow, "Fracture mechanics of protein materials," Materials Today, vol. 10, no. 9 (September, 2007), pp. 46-58.
3. Materials Today is available free of charge to qualified individuals. See its web site for subscription information.

August 29, 2007

My Universe has a Hole in It

In a previous article, I mentioned how astronomers have discovered that the universe, far from being uniformly filled with matter, has huge voids devoid of galaxies. In one case, two of these voids near Earth are separated by what's called the Great Wall, a collection of galaxies about 500 million light years long, by 300 million light years wide, but only 15 million light years thick [1]. These voids, however, pale in comparison to a recently found hole about 5-10 billion light years from Earth [2-3]. This void is a billion light years in diameter in the region of the constellation Eridanus. More surprising than the lack of galaxies in this region is the absence of dark matter, the mysterious substance proffered by cosmologists as the answer to all universal mysteries.

This huge cosmic void was discovered by Lawrence Rudnick, a professor of astronomy at the University of Minnesota, while he was analyzing data collected at the National Radio Astronomy Observatory. These data were part of a sky survey. This survey, the NRAO VLA Sky Survey, was a quick look at the accessible sky at radio frequencies without looking for anything in particular. The survey was conducted with the Very Large Array (VLA), the array of radio telescope dishes shown in an early scene in the film 2010, and also in the film Contact. Rudnick's void is a thousand times larger than any void discovered before. Cross-checking with other data shows that there is diminished cosmic background radiation in that region. It's a cold spot in the universe.

Although there is a chance that this void could be just a statistical fluctuation of early galaxy formation, it's more likely that the feature was caused by gravitational forces. Other areas of greater mass have pulled mass away from the void area over the span of billions of years. Rudnick and colleagues have submitted a paper on their research to the Astrophysical Journal [4].

On the subject of voids, I should mention the hollow earth theory, which states that the Earth is hollow with entrances at both the north and south poles. Not only that, but the Earth's interior is populated by an advanced civilization who occasionally visits the surface in their flying saucers. The existence of an astronomical observatory at the south pole doesn't seem to be sufficient evidence against this theory to its proponents.

References:
1. Margaret J. Geller and John P. Huchra, "Mapping the Universe," Science, vol. 246, no. 4932 (17 November 1989), pp. 897-903.
2. Seth Borenstein, "Astronomers Find a Hole in the Universe" (Associated Press, August 24, 2007).
3. Dave Finley, "Astronomers Find Enormous Hole in the Universe" (NRAO Press Release, August 23, 2007).
4. Lawrence Rudnick, Shea Brown, and Liliya R. Williams, "Extragalactic Radio Sources and the WMAP Cold Spot" (Preprint).

August 28, 2007

Knallquecksilber

Whether or not you speak German, most of you noticed the word "Quecksilber" (quicksilver) in the title and realized that this article is about mercury. That's not surprising, since much of the English language evolved from Anglo-Saxon, a Germanic language, and quite a few words are similar in German and English. Much of our present day chemistry derives from the wonderful properties of mercury, although mercury has gotten some bad press because of the toxicity of some of its compounds. Not all compounds of mercury are toxic. Mercuric sulfide, for example, is not, since it is insoluble in water, but the soluble compounds mercuric chloride and methyl mercury are very toxic. Elemental mercury is also toxic, since it will break apart into small droplets when spilled, and the resulting high surface area of exposed mercury gives an unacceptably high mercury vapor concentration in the air. One of the most useful properties of mercury is that it's a liquid metal at room temperature, so it was used extensively to aid electrical contacting in electrical relays. Other metals that are liquid at or near room temperature are cesium, francium, and gallium. Bromine, a non-metal, is also liquid at room temperature, and rubidium would be liquid in a very hot room (39.31 oC).
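Handbook melting points make the comparison easy, so here's a small Python sketch sorting out which of these elements are liquid at, or just above, room temperature (francium's melting point is an estimate, since no weighable quantity of it has ever existed):

# Approximate melting points in degrees Celsius.
melting_points_c = {
    "mercury": -38.8,
    "bromine": -7.2,
    "francium": 27.0,   # estimated value
    "cesium": 28.5,
    "gallium": 29.8,
    "rubidium": 39.3,
}

room_temp_c = 25.0
for element, mp in sorted(melting_points_c.items(), key=lambda kv: kv[1]):
    state = "liquid" if mp < room_temp_c else "solid"
    print(f"{element:9s} melts at {mp:6.1f} oC -> {state} at {room_temp_c:.0f} oC")

Cesium, gallium, and (presumably) francium melt just a few degrees above 25 oC, so a warm room would liquefy them, too.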

One of the most important compounds of mercury in the history of chemistry is mercury fulminate (a.k.a., mercury (II) oxidoazaniumylidynemethane, Hg(ONC)2, CAS number 628-86-4). Mercury fulminate, in German, "Knallquecksilber," is an explosive. Since it is sensitive to shock and friction, it was used in blasting caps to trigger other explosives. Its autoignition temperature is 150 oC, so it's relatively safe from temperature effects. Mercury fulminate was first isolated by Charles Howard around 1800, but it was synthesized many years before that by alchemists. Its synthesis is quite simple - mercury is dissolved in nitric acid (aqua fortis), and ethanol (spiritus vini) is added to the solution. Silver fulminate, which is even more unstable than mercury fulminate, is prepared the same way.

Not surprisingly, chemists were reluctant to undertake scientific investigations of mercury fulminate. Now, hundreds of years after its first synthesis, a group of chemists from the Department Chemie und Biochemie, Ludwig-Maximilians-Universität München, has determined the crystal and molecular structure of mercury fulminate [1]. They describe their work in a recent issue of Zeitschrift für anorganische und allgemeine Chemie [2]. A first attempt at x-ray crystallography of mercury fulminate was made in 1931, quite a while after Max von Laue invented x-ray crystallography, but structure analysis takes extensive computation available only in the last few decades [3]. The crystal structure is orthorhombic, and it was found that the individual fulminate molecules in the crystal are nearly linear in form, with the mercury atom bracketed by carbon atoms on each side. The nitrogen and oxygen atoms fill the ends of the linear chain; i.e., O-N≡C-Hg-C≡N-O. What is important about this research is that it corrects a long-standing misconception, promulgated in the chemical literature, that the mercury atoms are bonded to oxygen.

References:
1. Wolfgang Beck, "Explosive crystal" (Press Release, John Wiley & Sons, Inc., August 24, 2007).
2. Wolfgang Beck, Jürgen Evers, Michael Göbel, Gilbert Oehlinger, and Thomas M. Klapötke, "The Crystal and Molecular Structure of Mercury Fulminate (Knallquecksilber)," Zeitschrift für anorganische und allgemeine Chemie, vol. 633, no. 9 (2007), pp. 1417-1422.
3. I knew a chemist who did his dissertation many years ago on a crystal structure. He labored over a mechanical calculator for many months, and in later years he lamented the fact that the essence of his dissertation is now done automatically in a few hours on a machine sometimes used by undergraduates.

August 27, 2007

Nano-Germicide

When carbon nanotubes were first produced, there was no information about their toxicity, so scientists working with them were especially careful. As toxicity tests were conducted, nanotubes were found to be toxic in some ways, but not others [1, 2], so there are presently no clear-cut guidelines. However, it's generally agreed that breathing carbon nanotubes is not a good thing, and further caution is advised [3, 4]. As they say, there are no problems, only opportunities, so it's not surprising that scientists would find an application for the toxic effects of carbon nanotubes. A team led by Menachem Elimelech, Chair of the Department of Chemical Engineering and the Director of the Environmental Engineering Program at Yale University, showed that surfaces coated with carbon nanotubes have a germicidal effect, since the nanotubes penetrate the cell wall and kill the bacteria upon contact [5]. Their work was reported at the 234th National Meeting & Exposition (August 19-23, 2007) of the American Chemical Society, and will appear as an article in Langmuir [6].

The Yale team investigated single-walled carbon nanotubes, the type with the smallest diameter, about one nanometer. In order to assess the action of the carbon nanotubes themselves, great care was taken to purify the samples, especially removing toxic metals. Escherichia coli, the "fruit fly" of most bacterial studies, was used. The carbon nanotubes were introduced both in solution and affixed to filter paper. It appears that all contacting bacterial cells were ruptured, since analysis showed large quantities of free DNA and RNA left in solution. Multi-walled carbon nanotubes, which are many times larger in diameter than single-walled carbon nanotubes, are much less toxic, so it appears that the principal effect is puncturing of the cell wall.

Joseph Blake Hughes, Chair of the Civil & Environmental Engineering Department at the Georgia Institute of Technology, sees a downside to using carbon nanotubes in this way, since we rely on many "good" bacteria in the environment. Says Hughes [5], "Microbial function is critical in ecosystem sustainability and we rely on microbes to detoxify wastes in environmental systems. If they are impaired by nanotubes, or other materials, it is the cause for significant concern."

References:
1. Michael Berger, "The ongoing challenge of determining carbon nanotube toxicity" (Nanowerk LLC, March 12, 2007).
2. Karen Schmidt, "The great nanotech gamble" (New Scientist Online, July 14, 2007).
3. Nano Safety, Risk & Regulation Archive.
4. Nanotechnology White Paper by Nanotechnology Workgroup, U.S. Environmental Protection Agency (EPA 100/B-07/001, February 2007).
5. Mason Inman, "Bug-popping nanotubes promise clean surfaces" (New Scientist Online, August 22, 2007).
6. S. Kang, M. Pinault, L.D. Pfefferle and M. Elimelech, "Single-Walled Carbon Nanotubes Exhibit Strong Antimicrobial Activity," Langmuir, vol. 23, no. 17 (2007), pp. 8670-8673.

August 24, 2007

Diamonds are (Almost) Forever

The Earth is about 4.5 billion years old. This is young by cosmic standards, since the universe is estimated to be about 13.7 billion years old, give or take about 200 million years. During much of its early existence, the Earth was a molten ball of undifferentiated material. The crust was thought to have formed about 4.3 billion years ago, so it was a surprise when scientists discovered diamonds in older rocks, since the diamond allotrope of carbon forms only under pressure [1]. A report of this discovery appears in a recent issue of Nature [2]. The geological period concerned is called the Hadean, with obvious reference to Hades, the Greek word for hell.

The diamonds were discovered in Western Australia by a team of geologists from the Institut für Mineralogie, Westfälische Wilhelms-Universität, Münster, Germany, and the Department of Applied Geology, Western Australian School of Mines, Bentley, Australia. The discovery was totally unexpected. The fact that diamond only forms under pressure implies that large crustal plates had formed and were bouncing against each other very early in Earth's history. Geologists had thought that the Earth was completely molten for the first 500 million years of its existence, but in the 1980s, 4.5 billion year-old zircon crystals were discovered. While analyzing these zircon crystals, the German-Australian team discovered diamonds about 70 micrometers in size in fissures in the zircon. There's always the possibility that carbon had infiltrated the zircon fissures and was converted to diamond at a later time, so more analysis is in order.

Diamond Music is the name of a musical suite composed by the Welsh composer, Karl Jenkins [3]. It was written for a series of DeBeers television commercials in its "A Diamond is Forever" advertising campaign in the mid 1990s.

References:
1. Ewen Callaway, "Diamonds found in Earth's oldest crystals" (Nature Online, August 22, 2007).
2. Martina Menneken, Alexander A. Nemchin, Thorsten Geisler, Robert T. Pidgeon & Simon A. Wilde, "Hadean diamonds in zircon from Jack Hills, Western Australia," Nature, vol. 448 (23 August 2007), pp. 917-920.
3. Karl Jenkins, "Diamond Music," Sony Classical, 1996 (Amazon).

August 23, 2007

Technology Hackers

The expression, "to hack," has taken on several meanings over the years. Most people think of hacking in the pejorative sense of breaking into computers. Real hackers call this activity "cracking," not "hacking." Generally, the term describes a quick and dirty solution to a technical problem, and computer programmers use it to describe some of their activities. Applying a small correction to an existing program, what is called a software "patch," is also known as a "hack." Many Y2K corrections fell into this category. Larger programs written inelegantly for just one purpose, and generally just a single use, were also called "hacks." Although software hacks are very common, there are also technology hacks, in which a hacker modifies a hardware component to perform outside its intended purpose, or combines various inexpensive hardware components to create a novel machine. One example is using pieces of inexpensive single-use cameras, such as batteries, flash units, lenses, and memory chips, to produce other circuits, perhaps even another camera. Make Magazine is filled with many such hacks, and it's given me many interesting ideas.

The philosophy of activities such as these was the topic of a book by Robert Pirsig, Zen and the Art of Motorcycle Maintenance, published in 1974. Pirsig's book, which is actually a veiled discourse in philosophy, included such things as descriptions of using bits of aluminum cans to repair motorcycles. It is reportedly the most widely read philosophy book of all time, possibly because its readership found the presentation to be a refreshing change from the usual Plato.

Experimental scientists have always been hackers. This evolved from our graduate school training, in which we always needed to make silk purses out of sows' ears by building apparatus and doing experiments until we finally got our degrees. Recently, I transformed a laboratory hot plate into an oven to test some small components, thereby saving part of our capital budget for other items. However, my small hack doesn't come nearly to the level of a recent hack by Chris Anderson, the editor-in-chief of Wired magazine. Anderson built an unmanned aerial vehicle (UAV) from common components, including a Lego Mindstorms kit [1, 2].

Anderson has published a book entitled, "The Long Tail," in which he argues that the traditional economy of selling a few products to many people is being transformed by automated manufacturing practice to a new economy that caters to niche markets in which a large variety of products is sold to small numbers of people. One of my brothers is on the long tail, selling vacuum tube products to audiophiles. Anderson's UAV, which he built with his eight-year-old son, is hacked together from the Mindstorms computer, a gyroscope, a GPS unit, and a camera system, all mounted on a large R/C airplane. The data are communicated by cell phone, and the total cost of the components is about a thousand dollars. The video resolution is two centimeters. Anderson's first "mission" was to see (with Google's permission) whether "Google" was painted at the bottom of a swimming pool at the company's site, as it is rumored to be. It isn't.

References:
1. Patrick Mannion, "'Long Tail' author claims theory could transform design" (EETimes Online, 08/13/2007)
2. Patrick Mannion, "'Long Tail' scribe says theory can transform design" (EETimes Online, 08/20/2007).
3. Chris Anderson, "The Long Tail: Why the Future of Business is Selling Less of More," (Hyperion, 2006), ISBN-10: 1401302378, ISBN-13: 978-1401302375.
4. Hack Definition (Wiktionary).

August 22, 2007

Simulation

Many of us use computer simulations to test the waters before we do actual experiments. In the past, before ubiquitous computing, these simulations were just back-of-the-envelope calculations that would indicate whether or not something was possible, usually within a rough order of magnitude. Often these calculations were used to analyze data from experiments in order to devise modifications to subsequent experiments. Enrico Fermi, the Italian-American physicist who was awarded the 1938 Nobel Prize in Physics for his work on induced radioactivity, was a prime proponent of such calculations. Not only did he use the backs of envelopes for calculations, he actually used bits of paper from an envelope in a measurement of the yield of the first atomic bomb [1]. He did this by measuring how far the bits of paper were blown by the pressure wave, and he had a fairly accurate estimate long before the instrumental data were analyzed. Today, very complex processes are simulated by computer, and computers are getting faster all the time. There are even purpose-built computers, usually assembled from field-programmable gate arrays, which are much faster at solving their specific problem than a general-purpose computer. Today's fastest general purpose supercomputer, the IBM Blue Gene/L at Lawrence Livermore National Laboratory, runs at a peak speed of about 280 teraFLOPS, and the US Defense Advanced Research Projects Agency (DARPA) is sponsoring a program to bring computing up to the petaFLOP level. Some scientists believe that computers will become sentient somewhere between the petaFLOP and exaFLOP (1,000 petaFLOPS) level. By present trends, this will happen very soon. Is nothing impossible with computers?
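For a rough feel of "very soon," here's a back-of-the-envelope Python sketch that extrapolates from the 280-teraFLOPS figure above, assuming peak speed doubles every 1.5 years (an assumed Moore's-law-like rate, not a prediction):

import math

peak_flops_2007 = 280e12     # IBM Blue Gene/L peak speed, in FLOPS
doubling_time_years = 1.5    # assumed doubling time

def year_reached(target_flops):
    """Year a target speed is reached under exponential doubling."""
    doublings = math.log2(target_flops / peak_flops_2007)
    return 2007 + doublings * doubling_time_years

print(f"petaFLOP (1e15 FLOPS): ~{year_reached(1e15):.0f}")   # ~2010
print(f"exaFLOP  (1e18 FLOPS): ~{year_reached(1e18):.0f}")   # ~2025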

In a recent New York Times article [2], John Tierney reported on a speculation by the Oxford University philosopher Nick Bostrom that our existence itself may be a simulation [3]. Bostrom, who has a background in physics, mathematics, computer science and artificial intelligence, is Director of the Future of Humanity Institute at Oxford. Progress in computing has been rapid to date, but imagine the progress that will occur when computers are able to design their own progeny. This happens presently, with humans as the middle-men, using computers to design computers. Bostrom believes that eventually a super-intelligent species with extremely powerful computers will create simulated universes like our own, either for research, or as a hobby. If intelligent life were to reach this "trans-human" state of development, the number of such simulations could be huge. As a consequence, the probability that we're living in a simulation becomes quite large, and the statistics are much like those of the Doomsday argument of Princeton University's Richard Gott, which I outlined in a previous article.

Of course, it may be that intelligent life never attains such advanced computer technology (see the Doomsday argument); or, they might not be interested in doing such simulations. Bostrom himself puts the probability that we're living in a simulation at only twenty percent. If we are simulated, perhaps next time they could run the simulation without cholesterol!

References:
1. Fermi used some sort of paper, not necessarily an envelope, but the story is so much better this way.
2. John Tierney, "Our Lives, Controlled From Some Guy's Couch" (New York Times Online, August 14, 2007).
3. Nick Bostrom, "Are You Living in a Computer Simulation?," Philosophical Quarterly, vol. 53, no. 211 (2003), pp. 243-255 (pdf and html online versions).

August 21, 2007

Nanotube-Paper Battery

In a previous article, I described a new type of "carbon paper" made from graphene, single-atom-thick sheets of graphitic carbon. Stacked layers of oxidized graphene have been made into free-standing, mechanically strong sheets of graphene "paper." A previously developed type of carbon paper, Bucky paper, is formed from carbon nanotubes. Now, we can add another member to this growing list of carbon papers. This time it's a composite of cellulose and carbon nanotubes developed by scientists in the Materials Science and Engineering Department, and other departments, at Rensselaer Polytechnic Institute. This achievement is not as simple as it sounds, since cellulose has limited solubility in nearly every liquid. The RPI team found that an ionic liquid of the type used as a supercapacitor electrolyte dissolves cellulose and allows its impregnation with multiwalled carbon nanotubes. When the ionic liquid is removed, lithium metal can be deposited on the resulting paper to form a type of lithium ion battery. If the ionic liquid is left intact, the battery stack will act as a supercapacitor. The devices are flexible, like conventional paper.

Pulickel M. Ajayan, the principal investigator of this project, says that these nanotube paper batteries can combine the rapid discharge capability of supercapacitors with the long term power supply of batteries. These batteries are said to work over a wide temperature range, and they won't deform when frozen. At this point, the energy density is a paltry thirteen watt-hours per kilogram, but this is twice the energy density of a supercapacitor. Time will tell whether these will be useful devices. My personal opinion is that the internal cell resistance will be too large for most applications, but these devices may be useful for low power sensors, and for energy-harvesting storage.
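At thirteen watt-hours per kilogram, the low-power-sensor niche suggested above seems plausible. Here's a rough Python estimate (the sensor power and battery mass are made-up numbers for illustration):

energy_density_wh_per_kg = 13.0   # reported for the nanotube paper cell

# A hypothetical wireless sensor node drawing 100 microwatts on average,
# powered by 10 grams of the paper battery:
sensor_power_w = 100e-6
battery_mass_kg = 0.010

energy_wh = energy_density_wh_per_kg * battery_mass_kg   # 0.13 Wh
hours = energy_wh / sensor_power_w                       # 1300 hours
print(f"{energy_wh:.2f} Wh -> about {hours / 24:.0f} days of operation")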

References:
1. Katharine Sanderson, "Nanotubes plus paper make for flexible batteries" (Nature News Online, August 13, 2007).
2. Victor L. Pushparaj, Manikoth M. Shaijumon, Ashavani Kumar, Saravanababu Murugesan, Lijie Ci, Robert Vajtai, Robert J. Linhardt, Omkaram Nalamasu, and Pulickel M. Ajayan, "Flexible energy storage devices based on nanocomposite paper," Proc. Natl. Acad. Sci. USA, vol. 104 (2007), pp. 13574-13577.

August 20, 2007

Bringing Home the Bacon

I've mentioned Francis Bacon in previous blogs. Francis Bacon (1561-1626) was an English statesman who had a lifelong interest in science and technology. He is credited with the popularization of the Scientific Method, if not its invention in modern form. Since he was especially interested in the application of science to improve living conditions, he is often credited with the invention of industrial research, a topic to which we can all relate. There's another Bacon known to science, but not quite in the same way. Kevin Bacon is an American film actor who appeared in such memorable films as Animal House (1978, John Landis, Director) [1]. His link to science comes from the trivia game, The Six Degrees of Kevin Bacon. Bacon has acted in many films since Animal House, his first, so it's usually possible to link any actor to Bacon through a series of films; e.g., that actor was in a film with another actor who had been in a film with another actor who had been in a film with Kevin Bacon. The number of film links it takes gives you the "Bacon Number," which is somewhat like the Erdős Number. Nearly every actor who comes to mind has a Bacon Number of at most two; that is, they acted with someone who had acted with Kevin Bacon [2].
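Computing a Bacon Number is a textbook shortest-path problem: treat actors as nodes, link two actors when they share a film, and breadth-first search outward from Kevin Bacon. A minimal Python sketch, with a made-up toy cast list:

from collections import deque

# Toy data (hypothetical films and actors, for illustration only).
films = {
    "Film A": ["Kevin Bacon", "Actor X"],
    "Film B": ["Actor X", "Actor Y"],
    "Film C": ["Actor Y", "Actor Z"],
}

# Build the co-star graph: an edge links actors who share a film.
graph = {}
for cast in films.values():
    for actor in cast:
        graph.setdefault(actor, set()).update(c for c in cast if c != actor)

def bacon_number(actor, source="Kevin Bacon"):
    """Breadth-first search gives the fewest co-star links to the source."""
    distance, queue = {source: 0}, deque([source])
    while queue:
        current = queue.popleft()
        if current == actor:
            return distance[current]
        for costar in graph.get(current, ()):
            if costar not in distance:
                distance[costar] = distance[current] + 1
                queue.append(costar)
    return None   # not connected to the source at all

print(bacon_number("Actor Z"))   # 3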

Six Degrees of Kevin Bacon is adapted from the premise of a play by John Guare, Six Degrees of Separation, which explores the idea that each person can be connected to any other by a chain of no more than six acquaintances. The original concept is derived from an experiment, often called the "Small World Experiment," by Harvard psychologist, Stanley Milgram. Milgram had random individuals attempt to forward a letter to someone in Boston by forwarding it to someone they knew who might know that person, etc. In the sixty-four cases for which the letter actually did reach the intended person (many people in a chain refused to forward the letter), the average path length was about 5.5, thus the six degrees concept.

Large companies like Honeywell are communities with connections, like the world at large. The connectivity of companies and its relationship to their creativity was the topic of some recent research by business analysts at the University of Washington and New York University. The principal investigators of the study were Corey Phelps, an assistant professor in the Management and Organization Department at the University of Washington Business School, and Melissa Schilling, an associate professor at the Stern School of Business at New York University who has an interest in technology and innovation management. As a first example of connectivity and creativity, Phelps is an alumnus of the NYU business school.

Schilling and Phelps examined innovation over a six-year period in 1,106 companies in eleven different industrial sectors. The number of patents was used as a measure of creativity. Not surprisingly, Schilling and Phelps found that companies whose employees were linked by fewer degrees of separation were more creative. They found that although employees in most departments were generally isolated from their companies and other companies at large, a few individuals in each department had good connectivity with others, thereby increasing the overall connectivity. The isolation of most employees actually works to an advantage, since it implements the "small world" aspect of clustering. Clustering enables more efficient information flow, since the information is filtered between effective nodes. Schilling and Phelps' recommendation is to ensure that there are opportunities for some employees in every department to create and maintain connections both within the company and with others outside the company. One example of the latter is attendance at industry conferences and symposiums. Says Phelps,

"Our results are particularly important because in today's knowledge economy, innovation is king. Without the ability to continually create and commercialize new products and services, companies often wither and die. This study helps us understand how large-scale alliance networks influence innovation. It improves our understanding of why some industries and regions are more innovative than others."

The National Science Foundation provided funding for this study, which appears in Management Science [4].
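The "small world" behavior described above, with mostly local clustering bridged by a few well-connected individuals, is captured by the Watts-Strogatz network model. Here's a minimal Python sketch (assuming the networkx library is available); rewiring only a few links sharply shortens the average path while leaving clustering nearly intact:

import networkx as nx

n, k = 200, 6   # 200 people, each starting with links to 6 near neighbors

# Rewiring probability p: 0 = orderly clusters, 1 = fully random network.
for p in (0.0, 0.05, 1.0):
    g = nx.connected_watts_strogatz_graph(n, k, p, seed=42)
    print(f"p={p:4.2f}  mean path length={nx.average_shortest_path_length(g):5.2f}"
          f"  clustering={nx.average_clustering(g):4.2f}")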

References:
1. Animal House on the Internet Movie Database.
2. The Oracle of Bacon (Bacon calculator at the University of Virginia).
3. Nancy Gardner, "Fewer degrees of separation make companies more innovative, creative" (Press Release, University of Washington, August 16, 2007).
4. Melissa A. Schilling, Corey C. Phelps, "Interfirm Collaboration Networks: The Impact of Large-Scale Network Structure on Firm Innovation," Management Science, vol. 53, no. 7 (July 2007), pp. 1113 ff.
5. Melissa A. Schilling, "A small-world network model of cognitive insight," Creativity Research Journal, vol. 17, nos. 2-3 (2005), pp. 131-154.

August 17, 2007

Green, Green, Flying Machine (Part II)

In yesterday's posting, I reviewed some environmental problems facing the aerospace industry. These were the use of chromium as an alloying element, the use of lead in solder, and the problem with fossil fuels. About 20 billion gallons of fuel are used annually by US airlines alone [1].

Nature has published an article recently about green technologies in civil aviation [2]. Civil aviation accounts for about 3.5% of greenhouse gas emissions. As terrestrial technologies cut emissions, this percentage share will increase. One way to increase fuel efficiency is to replace turbojet propulsion engines with turbofan engines. Turbofan engines, which use a forward fan as the main thruster rather than the rearward gas jet, have the fringe benefit that they are quieter than turbojets. Just as for automobile engines, gas turbine engines operate most efficiently at a particular rotation rate, so geared turbofans can be used to keep each engine component operating at its most efficient speed. Pratt & Whitney is developing a turbofan engine geared to have the fan rotate at a third the speed of the gas turbine, with a reported 15% increase in efficiency.

An obvious way to increase fuel efficiency is making aircraft lighter. As the typical US passenger has increased in weight over the years [3], there have been efforts to at least reduce airframe weight by the same amount. The Boeing 787, which will be in commercial service in 2008, uses a considerable quantity of lightweight composite materials. Whereas the venerable 777 used 12% composites and 50% aluminum, the 787 uses 50% composites and 20% aluminum, with titanium at an impressive 15%. Only 10% of the 787 is fabricated from steel.

The shape of aircraft is critical to efficiency, since energy is lost from drag. Computer simulations of novel shapes are being run to predict the shape of future airliners. Not unexpectedly, a blended wing and body, such as on the B-2 stealth bomber, is a front-runner. The problem here is that only the pilot will have a window seat, and passengers near the wing tips will get a free amusement park ride for the price of their transportation ticket.

Alternative fuels would seem to be the easiest technology to implement. Ethanol has a lower energy density (26.4 MJ/kg, or 21.2 MJ/liter) than kerosene (43.8 MJ/kg, or 35.1 MJ/liter) [4-6], so biodiesel is the most likely "conventional" fuel. Virgin Airlines is working with Boeing and General Electric towards a test flight with biofuels in 2008. Of course, liquid hydrogen is another alternative fuel, as discussed in yesterday's posting.
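Using the figures quoted above, plus approximate handbook values of 120 MJ/kg and 8.5 MJ/liter for liquid hydrogen (consistent with the 2.8-times-kerosene weight figure in yesterday's posting), a few lines of Python make the mass-versus-volume trade-off explicit:

# Energy densities as (MJ/kg, MJ/liter); hydrogen values are approximate.
fuels = {
    "kerosene": (43.8, 35.1),
    "ethanol": (26.4, 21.2),
    "liquid hydrogen": (120.0, 8.5),
}

energy_needed_mj = 1.0e6   # a hypothetical flight's total energy budget

for fuel, (per_kg, per_liter) in fuels.items():
    tonnes = energy_needed_mj / per_kg / 1000.0
    cubic_meters = energy_needed_mj / per_liter / 1000.0
    print(f"{fuel:16s} {tonnes:6.1f} tonnes  {cubic_meters:7.1f} cubic meters")

Hydrogen wins handily on weight, but the tankage volume is the showstopper.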

References:
1. Jet Fuel Cost and Consumption Report (Airlines.org).
2. Kurt Kleiner, "Civil aviation faces green challenge," Nature, vol. 448, no. 7150 (12 July 2007), pp. 120-121.
3. Prevalence of Overweight and Obesity Among Adults: United States, 1999-2002 (US Center for Disease Control).
4. Bioethanol (European Biomass Industry Association).
5. Fuel Properties.
6. Aviation Fuel (Wikipedia).

August 16, 2007

Green, Green, Flying Machine

This week, Bob Smith, the Honeywell Aerospace Vice President for Advanced Technology, gave a presentation on "Green Technologies" from Morristown. He reviewed aerospace technology relating to air traffic management, combustion technology, noise reduction, fuel efficiency, fuel cells, and "more electric aircraft." Bob's topic shows that environmental concerns are becoming important to the aerospace industry. Chromium is an excellent alloying element used extensively in high temperature engine components, but hexavalent chromium is toxic, so there's pressure to remove as much chromium as possible from aircraft. It goes without saying that substitution for alloys presently used in gas turbine engines requires a lot of research and qualification testing. Lead is another toxic element that has widespread use in the solders that affix components to printed wiring boards, and today's aircraft have a lot of electronics. There are several lead-free solders available, most of which incorporate tin, silver and copper. One favorable composition is SnAg3.5Cu0.9, which, like the traditional Sn-63%/Pb-37% lead-tin solder, is a true eutectic. However, lead-tin solder melts at 183 oC, and SnAg3.5Cu0.9 melts at 217 oC, so there's a problem of overheating sensitive electronic components. Not only that, but the lead-free solder compositions have mechanical properties that differ from those of lead-tin solder. They tend to be harder, so they crack instead of undergoing plastic deformation like lead-tin, and they have considerably lower ductility at low temperatures. Other problems with lead-free solders include reduced wettability, more rapid oxidation requiring more aggressive fluxes, and some stranger materials issues, such as growth of tin "whiskers," growth of the Ag3Sn intermetallic phase, and growth of Kirkendall voids at the interface to the copper conductors. Sounds like issues you would need a materials scientist to sort through (he said as he walked to the bank).

Of course, the major environmental concern for aircraft is fossil fuels. About 20 billion gallons of fuel are used annually by US airlines alone [1]. As the seasoned traveler will attest, the background scent that lingers in airport waiting rooms and onboard most aircraft confirms that the combustion process is not perfectly efficient. The combusted fuel releases environmental pollutants such as nitrogen oxides, sulfur oxides, carbon dioxide and carbon soot. Subsonic aircraft emit almost 3% of the world's total combustion carbon dioxide. Water is also a product of combustion, and that, too, has an effect. The water vapor from high-flying aircraft exists as contrails (short for "condensation trails"), which are the ice crystals formed from the water of combustion. During the shutdown of air travel in the days following 9/11/2001, the average diurnal temperature range (the difference between the high and low temperatures in a day) in the US was a degree Celsius higher [2]. It is thought that contrails trap infrared radiation and have a warming effect, but this idea is still controversial.

One far-horizon idea is to replace fossil fuels with cleaner burning hydrogen. Hydrogen can be produced from renewable energy sources through electrolysis, but it can't be used in its compressed gas form as an aviation fuel because of the weight of the containment vessel. However, it can be stored as a cryogenic liquid, and the storage techniques for liquid hydrogen have been well developed for terrestrial applications. Liquid hydrogen has 2.8 times the energy density by weight of kerosene, and the cryogenic insulation needed would not be too heavy, but on a volume basis, the energy density is much less than that of kerosene. A major problem is that storage in wing tanks is not possible, since cryogenic vessels are best made in spherical or cylindrical form. The tanks would need to be located as large pods under the wings, atop the passenger cabin, or in the front and rear fuselage of widebody aircraft. The European Union launched a "Cryoplane" project in 2002 to examine the possibility of using cryogenic hydrogen as an aviation fuel [3-5]. A team of 35 partners led by Airbus collaborated on this 26-month research effort, but there hasn't been much activity since that time.

References:
1. Jet Fuel Cost and Consumption Report (Airlines.org).
2. September's Science: Shutdown of airlines aided contrail studies (Science News, Vol. 161, No. 19 (May 11, 2002), p. 291).
3. Project Cryoplane (Aviation Today).
4. Is there a cryoplane in our future? (Safran Group, June 6, 2002).
5. Meeting the challenges in aircraft emissions: Commission looks into clean alternatives to fossil fuel (European Union Press Release).

August 15, 2007

Transatlantic Communications

In the era of instant transatlantic communications, it's hard to imagine a time when a letter carried by ship was the only way to communicate between the US and Europe. In the 1830s, the time was ripe for invention, and the telegraph was simultaneously invented by Samuel Morse and many others. Soon thereafter, in 1861, the US was linked from coast to coast by the first transcontinental telegraph. This was actually a few years after the first transatlantic telegraph cables, which were established in 1857 and 1858 with disappointing results. Unlike air, water has a very high dielectric constant, with the result that the poorly engineered undersea cables had a low transmission rate and would function only when driven by extremely high currents. The high current was the reason that the lines failed. The physicist Oliver Heaviside explained all this by a simplification of Maxwell's equations, and he devised a means to solve the problem - distributed inductance along the transmission line. This is another example of how a small investment in physics can solve industrial problems and save considerable money.
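Heaviside's fix can be stated compactly. For a line with series resistance R, series inductance L, shunt conductance G, and shunt capacitance C (all per unit length), signals travel without distortion when R/L = G/C, so a high-capacitance cable needs added series inductance, L = RC/G. A short Python sketch with made-up cable constants (illustrative only, not data for any real cable):

def loading_inductance(R, G, C):
    """Inductance per unit length satisfying Heaviside's distortionless
    condition R/L = G/C, i.e., L = R * C / G."""
    return R * C / G

R = 2.0      # ohms per km
C = 0.3e-6   # farads per km (high, due to the seawater dielectric)
G = 1.0e-6   # siemens per km
L = 1.0e-3   # henries per km for the bare, unloaded cable

print(f"bare cable: R/L = {R / L:.2e}, G/C = {G / C:.2e}")
print(f"loading coils should raise L to about {loading_inductance(R, G, C):.2e} H/km")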

Most transatlantic telegraphy was by radio in the mid-twentieth century, but the advent of high quality long distance telephony within continents created a demand for a transatlantic telephone cable. This was put into service on September 25, 1956, but it only carried 36 voice channels. The Space Age opened a way for larger bandwidth communications at a potentially lower cost. The first idea for a satellite connection across the Atlantic was a passive reflector, fittingly called Echo. Echo 1 was a hundred-foot (30.5 meter) metallized mylar balloon placed into orbit on May 13, 1960. The mylar, at 0.005 inch, was considerably thicker than that of grocery store balloons. This satellite was followed by a larger satellite, Echo 2, a 41 meter PET film balloon, on January 25, 1964. Because of their large cross section and low Earth orbit, these passive reflectors remained in orbit for only about five years.

Of course, nothing beats an active radio relay station, and this was Telstar, placed into orbit on July 10, 1962. Telstar was a joint development effort of Bell Labs, NASA, AT&T, and the British and French Post Offices. Telstar was capable of relaying television signals, and the satellite relay concept was so successful that Telstar 18 was launched into geosynchronous orbit in 2004 and is expected to perform for thirteen years. Unfortunately, Telstar 1 was a victim of the Cold War, since it was launched the day after the explosion of a high-altitude US nuclear device, Starfish Prime. Starfish Prime injected considerable radiation into the Van Allen Belt, which fried most of the Telstar electronics by year's end. Engineers were able to revive Telstar in January, 1963, using back-up systems, but the radiation was too much for it, and it finally succumbed a few weeks later. Because of its high orbit, Telstar 1 still orbits the Earth.

Today, optical fibers span the Atlantic. They carry communication signals of all kinds, including the internet, at tremendous bandwidth. Although the actual bandwidth figures are kept confidential by carriers, the capacity is conjectured to be in the multi-terabit per second range.

References:
1. Tony Long, "July 23, 1962: Telstar Provides First-Ever TV Link Between U.S., Europe" (Wired News, July 23, 2007).
2. Happy 45th Anniversary, Telstar 1! (Telstar Logistics).

August 14, 2007

Glass

Glass is a very useful material. Many mixtures of inorganic materials, typically oxides for practical use, or chalcogenides for more technical applications, form a glass when cooled. The cooling must often be quick, because the object is to bring the solid below its glass transition temperature before the atoms are able to arrange themselves into an ordered crystalline structure. The glass transition temperature, usually symbolized as Tg, is the temperature below which atoms or molecules of a material are essentially immobile. Although common glass is weak under tension, it can withstand considerable compressive loading. Glass can accommodate a wide variety of substituents, since it isn't necessary to fit atoms into fixed-size crystal matrix locations. Examples of this are the dopants used to add color to ornamental glass.

Common glass is very inexpensive, principally because its major constituent, silica (silicon dioxide, SiO2), is plentiful in the Earth's crust. Common glass (soda-lime glass) is typically 70-72 weight percent silica, with sodium oxide and calcium oxide as the remainder. Leaded glass has about 25 weight percent lead oxide, and borosilicate glass (Pyrex) has about ten weight percent boric oxide. For extremely high temperature applications, pure silica glass (quartz glass) can be used, since it has a glass transition temperature of 1175 oC.

Glass has been used since ancient times, and it was a valued material. Pliny gives the following short story in his Natural History:

"The tale is told that, during the reign of Tiberius, a glass was devised, so compounded as to be flexible, and that the workshop of the inventor was utterly destroyed, lest there should be a decline in the value of copper, silver, and gold." [1]

Another version of this same story has the inventor beheaded. Now there's an inventor's reward program!

Recently, scientists at Emory University have been experimenting with a model system for glass to investigate the mechanics of the glass transition. They published a summary of their research in Physical Review Letters [2]. Their model system is a colloidal suspension of micrometer-sized plastic beads in water, the beads being surrogates for atoms or molecules in a solid. The model system behaves as a glass beyond a critical particle concentration. In their experiments, the colloidal mixture was confined within a wedge-shaped vessel fabricated from microscope slides, and the movement of the plastic beads in three dimensions was recorded using confocal microscopy; specifically, confocal laser scanning microscopy. Near the apex of the wedge, where the particles are more confined, the particles moved more slowly and were less glass-like; they behaved more like a solid. At larger particle concentrations, this same solid-like behavior happened in less confined regions of the vessel. The confinement dimension at which solid-like behavior occurred was about twenty particle dimensions.

Eric R. Weeks, an associate professor of physics at Emory and one of the paper's authors, theorizes that there are ordered structures within the liquid phase of a glassy material that move cooperatively. If you confine the glass to too small a dimension, it will no longer behave as a glass. This research has implications for nano-scaling of materials, and the phenomenon may already have been observed. Nanomachines appear to be more fragile than expected, and this may be the result of glassy materials behaving more like rigid solids.

References:
1. Pliny, Nat. Hist., bk. 36, para. 195 (Ferunt Tiberio principe excogitato vitri temperamento, ut flexile esset, totam officinam artificis eius abolitam, ne aeris, argenti, auri metallis pretia detraherentur, eaque fama crebrior diu quam certior fuit).
2. Carolyn R. Nugent, Kazem V. Edmond, Hetal N. Patel, and Eric R. Weeks, "Colloidal Glass Transition Observed in Confinement," Phys. Rev. Lett., vol. 99 (13 July 2007).
3. Beverly Clark, "Emory physicist opens new window on glass puzzle" (Emory University Press Release, August 9, 2007).
4. glassproperties.com, an excellent source of glass data.

August 13, 2007

Scientific Publishing

The publication of scientific books and papers has gone through many transformations. Although the Egyptians were especially concerned with land surveying and left many texts relating to trigonometry, most of the earliest scientific writings were those of the ancient Greeks. These were manuscripts of authors such as Euclid and Aristotle, laboriously copied by hand, and so precious that they were passed down intact through many centuries. The main problem here was that publication was so expensive that very few ideas were published, the existing texts took on an aura of infallibility, and many erroneous statements by these published authors were never corrected.

All this was changed through the invention of metal movable type by Johannes Gutenberg (c.1400-1468). There is historical evidence that movable type had originated in China and had existed for quite a while before Gutenberg, but Gutenberg went the extra measure by using it to print books in quantity. His first publication, the 1455 Bible, made use of a number of his ancillary inventions, including a suitable oil-based ink for printing, and a typeface design. The typeface design is quite significant, since it made the leap from script characters to a character set that could be understood after printing. Gutenberg's Bible was less expensive than manuscript versions, but it still cost the equivalent of several years' wages for the average clerk.

Gutenberg's advance in publishing technology allowed a greater number of scientific works to be published. Some notable examples are De Revolutionibus Orbium Coelestium (On the Revolutions of the Heavenly Spheres) by Copernicus in 1543 in which he presents his heliocentric model of the solar system [1]; and Sidereus Nuncius (The Starry Messenger), published by Galileo in 1610. Sidereus Nuncius reports on the first astronomical observations made through a telescope.

In the four-and-a-half centuries since Galileo's time, not much had changed in scientific publication. Then, a few decades ago, the internet was launched, and this changed the publication landscape precipitously. The peer-reviewed journals now have online versions of their publications, but the internet allows anyone to publish anything, so there's a background of "crackpot" scientific information online [2]. Such publications, many of which purport to disprove Relativity or Quantum Mechanics, were relegated in the past to the "Vanity Press," and they had limited circulation, so there wasn't much harm. Umberto Eco's "Foucault's Pendulum" gives an interesting account of a vanity publisher. The book is worthwhile reading on its own account, as is another of his books, "The Name of the Rose."

For the author who still prefers a hardbound printed copy of his thoughts, a publishing machine has been introduced by On Demand Books [3]. The Espresso Book Machine, as it's called, is not just another high speed electronic printer. It prints and binds a book from a digital file within minutes. The first application is the publication of public domain classics, and a demonstration unit has been installed at the Science, Industry and Business Library of the New York Public Library system. On Demand Books has been working with The Open Content Alliance, a non-profit organization receiving funding from the Alfred P. Sloan Foundation. The Open Content Alliance has already amassed electronic versions of 200,000 titles, and it intends to provide free content to US libraries. This publication machine is claimed to be affordable and require minimal human intervention. Says the machine's developer, Jason Epstein [4], "Printed books are one of history's greatest and most enduring inventions, and after centuries, their form needs no improvement. What does need to change is the outdated way that books reach readers."

References:
1. De Revolutionibus Orbium Coelestium (Electronic Version at Harvard University).
2. Some people have even been derisive of Wikipedia, which I have always found to be accurate and encyclopedic in the fields of my own expertise. I've contributed an article to Wikipedia.
3. First Espresso Book Machine Installed and Demonstrated at New York Public Library's Science, Industry and Business Library (Press Release, June 21, 2007)
4. Jason Epstein, "The Future of Books" (Technology Review, January 2005).

August 10, 2007

Milestone - One Year Anniversary

This blog was started on August 10, 2006. In the course of one year, 240 articles have been published.

Here's a list of things that happen over the course of one year.

• About one billion tons of steel are produced.

• About 400 million tons of steel are produced from recycled material.

• 31 billion barrels of oil are produced.

• The United States spends $330 billion on R&D; China spends $136 billion; Japan spends $130 billion.

• The US government spends $2.8 trillion; $699 billion of this are for defense.

• About 1200 Physics PhDs are awarded; about 185 of these (15%) are awarded to women.

• About 175,000 US patents are granted; half of these are awarded to foreign persons.

• About 6678 pages are published in Nature.

• South America and Africa move apart 5.7 cm because of continental drift. This is about the growth of a fingernail in one year.

• A person blinks about ten million times.

• A man's heart beats about 36 million times; a woman's heart beats about 39 million times.

• The Sun irradiates each square meter of the Earth's surface with 1095.75 - 3287.25 kWh of energy.

• The US population increases by about 2,700,000.

• 8.56 x 10^27 electrons flow through a household using an average 5 kW of electric power (see the sketch after this list).

• The earth moves 924,375,700 km in its path around the sun.

• The farthest known galaxy (Abell 1835 IR1916) recedes from us about 2.8 x 10^26 kilometers because of universal expansion.

• The fastest supercomputer performs 1.14 x 10^22 floating point operations.
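The electron item above is easy to verify. A quick Python check (the 115-volt service voltage is my assumption; only the 5 kW average draw is given):

seconds_per_year = 365.25 * 24 * 3600   # about 3.156 x 10^7 seconds
power_watts = 5000.0          # average household draw assumed in the list
line_voltage = 115.0          # assumed US service voltage
electron_charge = 1.602e-19   # coulombs per electron

current_amps = power_watts / line_voltage
electrons_per_year = current_amps * seconds_per_year / electron_charge
print(f"{electrons_per_year:.2e} electrons per year")   # ~8.56e+27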

August 09, 2007

Efficient Solar Cells

Solar energy is certainly abundant. The solar energy incident on the Earth in one minute is the equivalent of the energy from fossil fuel use in one year. The scientific measure of solar energy is insolation, which is the amount of solar energy received per unit area. Insolation is usually recorded as watts per square meter (W/m2), or as its value over time, such as kilowatt-hours per square meter (kW-h/m2). At the top of Earth's atmosphere, the insolation over all regions of the electromagnetic spectrum (called the solar constant) is a phenomenal 1366 watts per square meter. Of course, half the Earth is in darkness, and there's the added effect that most of the surface area of the Earth receives this energy obliquely, so a time average over the course of a day is about 342 W/m2. Areas in polar regions are always oblique to the sun, and other areas have a changing obliquity over the course of a day. That's why many solar installations have movable panels that track the sun. The radiation we receive at the Earth's surface is somewhat attenuated, but it peaks at nearly a thousand watts per square meter when the collection area is pointed at the sun. In North America, averaging over day-night and all weather conditions gives an available insolation of between 125 and 375 W/m2.
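The 342 W/m2 figure is just geometry, and the delivered-energy estimate in the next paragraph follows directly from the surface figures above. A quick Python check:

solar_constant = 1366.0   # W/m^2 at the top of the atmosphere

# A sphere intercepts sunlight over its cross section (pi r^2) but has
# surface area 4 pi r^2, so the global time average is one quarter:
print(f"global average: {solar_constant / 4:.0f} W/m^2")   # ~342

# Delivered energy for a fixed panel of 15% efficiency in the US:
for surface_w_m2 in (125.0, 375.0):
    kwh_per_day = surface_w_m2 * 0.15 * 24 / 1000.0
    print(f"{surface_w_m2:.0f} W/m^2 -> {kwh_per_day:.2f} kWh/m^2/day")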

All these numbers are for available solar energy. Conversion of the available energy to useful electrical power comes at a high cost because of the inefficiencies of photovoltaic conversion. Photovoltaics used presently have only 15% efficiency, so a fixed solar panel sited in the US will deliver about 0.45 - 1.35 kW-h/m2/day. Since the available solar energy is fixed, advances in photovoltaic technology must come from increased solar cell efficiency. A team of scientists working on a DARPA program has just reported a solar cell efficiency of 42.8% [2]. This effort, led by the University of Delaware, involved a huge number of university and corporate team members, including the following:

The National Renewable Energy Laboratory
The University of Rochester
The Georgia Institute of Technology
Purdue University
The University of California-Santa Barbara
The Massachusetts Institute of Technology
Harvard University
The University of New South Wales
Yale University
Carnegie Mellon University
DuPont
BP Solar
Corning Inc.
Blue Square Energy
LightSpin Technologies

DARPA funding was allocated at $33.6 million, with another $19.3 million pledged by corporate sponsors. The team is led by Allen Barnett, a professor of Electrical and Computer Engineering at the University of Delaware. The goal of the DARPA project is 50% efficiency. The previous record of 40.7% was achieved by Boeing's Spectrolab, but in a bulky device almost a foot thick. The Delaware cell is roughly a centimeter thick, and it uses lateral concentrating optics to split incoming light into three spectral bands. Light from each spectral band is directed to the photovoltaic material most efficient at those wavelengths. The solar panel incorporates an optical concentrator that accepts light from a wide range of angles, so the panel can remain stationary.

This research success has triggered the next round of DARPA funding. A University of Delaware - DuPont consortium will transfer the technology into a production environment over the course of three years. The additional funding, government and industry combined, will be about $100 million. The goal is a production solar energy system with 50% conversion efficiency by 2010.

References:
1. National Renewable Energy Laboratory Insolation Map (JPEG Image).
2. University of Delaware-led team sets solar cell record, joins DuPont on $100 million project (Press Release, July 30, 2007, RenewableEnergyAccess.com).
3. Largest solar cell research effort launched (Associated Press, Nov 4, 2005).

August 08, 2007

A New Computer

I've just migrated to a new office computer (a Dell Precision 390 running Windows XP Professional). My previous computer was a Dell running Windows 2000 Professional. Some of my legacy applications won't run in Windows XP Professional, so I'll need to find replacements. As you can imagine, my applications are not those found on a typical office computer. Quite surprisingly, Visual Basic 6, a Microsoft product beloved by many programmers, is one of the applications that won't install. The most frustrating thing is that my Palm Pilot software won't run, so if I'm late for my next meeting, it's not my fault!

I was satisfied with the performance of Windows 2000 Professional, which may be surprising coming from someone who runs Linux exclusively at home. The new computer seems a bit faster, and the graphical user interface for Windows XP is quite good.

August 07, 2007

Computer Science

Computer science has its roots in mathematics. The first computer scientists (Alan Turing, for example) were mathematicians. Somewhat later, physicists, many of whom are mathematicians in disguise, entered the field. Recently, it's been noted that a majority of computer scientists are actually computer programmers, and the traditional computer science curriculum does not prepare them adequately for programming [1]. Karl Fant, CEO of Theseus Research, a computer company working with real-time image processing systems, has published a book, "Computer Science Reconsidered: The Invocation Model of Process Expression" (Wiley, 2007) [2], which argues that the field's early mathematical roots have placed computer science on the wrong development path. Writes Fant,

"Mathematicians and computer scientists are pursuing fundamentally different aims, and the mathematician's tools are not as appropriate as was once supposed to the questions of the computer scientist. The primary questions of computer science are not of computational possibilities but of expressional possibilities. Computer science does not need a theory of computation; it needs a comprehensive theory of process expression... The notion of the algorithm simply does not provide conceptual enlightenment for the questions that most computer scientists are concerned with."

Of course, Fant's point of view is not without its detractors, as a recent deluge of postings on Slashdot will attest [3]. Some liken the difference between a "real" computer scientist and Fant's vision of a computer scientist to the difference between an automobile designer and an automotive mechanic.

Google, one of the most successful computer companies, still believes that its employees should be well versed in mathematics. When Google launched its initial public offering, Eric Schmidt, the Google CEO, stated that the company intended to raise $2,718,281,828 [4]. As most of you realize, these are the first few digits of e, the base of natural logarithms. In its attempt to lure mathematically-oriented computer scientists to its fold, Google placed billboards in Silicon Valley and in other computer scientist haunts with the cryptic internet address, "{first 10-digit prime found in consecutive digits of e}.com." [5] This prime number, 7427466391, appears starting at the 99th digit of e after the decimal point, giving the internet address 7427466391.com (now inactive). Arriving at this web site gave you the opportunity to solve a more difficult mathematical puzzle. If you solved it, you were invited to submit your CV.
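
For the curious, here is a sketch of the billboard puzzle itself; it assumes the mpmath and sympy Python libraries for the digits of e and the primality test.

    from mpmath import mp
    from sympy import isprime

    mp.dps = 150                      # work with 150 significant digits of e
    digits = mp.nstr(mp.e, 150)[2:]   # the decimal digits, "2." stripped off

    for i in range(len(digits) - 10):
        candidate = int(digits[i:i + 10])
        if candidate >= 10**9 and isprime(candidate):  # require a full 10 digits
            print(candidate, "starting at decimal digit", i + 1)
            break   # prints: 7427466391 starting at decimal digit 99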

References:
1. Stuart Corner, "Want to be a computer scientist? Forget maths" (IT Wire).
2. Karl M. Fant, "Computer Science Reconsidered: The Invocation Model of Process Expression," Wiley-Interscience (June 29, 2007), 269 pages (ISBN-10: 0471798142, ISBN-13: 978-0471798149).
3. Forget Math to Become a Great Computer Scientist? (Slashdot).
4. Richard Elwes, "e: the mystery number" (New Scientist Online, July 18, 2007). This article is listed as being 2,693 words long. It should have been 2,718.
5. Google Entices Job-Searchers with Math Puzzle (NPR).

August 06, 2007

Betavoltaic Cells

Spacecraft in the television series Star Trek are powered by a nuclear reaction mediated by dilithium crystals. The supposed mechanism has matter and antimatter combining in these crystals to produce a plasma power source. This is a far more efficient way to generate power from a nuclear reaction than the simple heat engines used in today's nuclear reactors. Of course, you do need the antimatter and a source of dilithium crystals. Betavoltaics is another way to extract electrical energy from a nuclear reaction without an intermediate heat engine. The betavoltaic concept was invented a half century ago, but its low efficiency and small power output relegated it to footnotes until the advent of low-power electronics and the desire to power sensors in inaccessible places.

Beta radiation is the emission of high-energy electrons caused by the decay of neutrons into protons. As an example, the lithium isotope ^8Li decays to an isotope of beryllium (emitting an electron and an antineutrino) in the following reaction

^8Li -> ^8Be + β^- + ν̄

Beta particles are easier to shield than other forms of radiation, so devices that incorporate beta sources are not particularly dangerous. One implementation of a betavoltaic source uses the beta radiation to generate electron-hole pairs at a semiconductor junction. The device operation is the same as for a photovoltaic cell, but the beta electrons substitute for the photons in generating the electron-hole pairs. A typical beta source is tritium, which has a half-life of 12.32 years. This means that half your power source is gone after twelve years, but this isn't a problem in most applications. Betavoltaics were proposed as a power source for medical pacemakers as early as 1973 [1].
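
The half-life arithmetic is simple enough to sketch. Here is an illustrative Python fragment using the tritium half-life quoted above.

    HALF_LIFE = 12.32                           # tritium half-life, in years

    for years in (5, 12.32, 25, 50):
        fraction = 0.5 ** (years / HALF_LIFE)   # fraction of activity remaining
        print("after %5.2f years: %4.1f%% of initial power"
              % (years, 100 * fraction))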

A fundamental problem with betavoltaic energy sources is that only a small fraction of the beta particles deposit their energy at the voltage-generating junction. If a conventional photovoltaic structure is used, the betavoltaic efficiency is a percent or less, and the device power output is in the nanowatt range. In 2005, porous silicon structures were fabricated to distribute the tritium source throughout the device and increase the fraction of beta radiation that provides useful energy. A team from Cornell University has improved on this concept using a MEMS approach [2]. In a DARPA-funded program (Contract No W314P46-04-1-R002), they fabricated photovoltaic junctions as high aspect ratio pillars on a silicon carbide substrate. The tritium gas is able to circulate between the pillars, increasing device efficiency. More details are available in their patent application, now available online [3].

There is another type of betavoltaic device based on a cantilever beam. Since beta particles are electrons, an irradiated cantilever beam can be charged by a solid beta radiator, such as nickel-63. Since charge is conserved, the beta radiator develops an equal and opposite charge, so the cantilever beam is attracted to it. When the beam touches the radiation source, the charges are neutralized, and the cantilever beam springs back to its original state, ready for another cycle. The energy from bending is converted to a voltage by a piezoelectric material. Such devices have been produced with 7% efficiency and a cycle rate of up to 120 Hz, although a few cycles per minute, or per hour, are more common. Since nickel-63 has a half-life of more than 100 years, a very long-lived power source can be made.

References:
1. Betavoltaics (Wikipedia).
2. Justin Mullins, "Invention: Reinventing radioactive batteries" (New Scientist Online, July 9, 2007).
3. M.V.S. Chandrashekhar, Christopher Ian Thomas, and Michael G. Spencer, "Betavoltaic cell" (United States Patent Application 20070080605, April 12, 2007).
4. Mark Paulson, "Nano-Nuclear Batteries".

August 03, 2007

US Science in Definite Decline

US scientists have realized that US scientific competitiveness has been in decline for the past decade or more. There have been a number of formal studies to confirm this observation, the most recent of which is the report [1] "Changing U.S. Output of Scientific Articles: 1988 - 2003," from the National Science Foundation. The report finds that the number of articles published yearly by US scientists in major journals has been essentially static since the 1990s. Significantly, one finding of this study is that the repair of the US science machine will take time. Despite recent increases in US R&D funding, US scientific research output, as measured by publications, has not increased, and this is true across all research disciplines and types of institutions.

The report tries to put a positive spin on these data. While it's true that the percentage of scientific publications by US authors has decreased, the reason is that the research output of the rest of the world has ramped up. Publications from Asia have increased, but the European Union numbers have increased also, so it's not just the Third World playing catch-up. The European Union has had a 2.8% annual growth rate of published scientific articles, which is four times the US growth rate. Furthermore, Japan's article output has increased at a rate that's five times that of the US. South Korea, Singapore, Taiwan and China have seen an average annual growth rate of publications of 15.9%.
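
To put these growth rates in perspective, here is an illustrative Python fragment that computes how long each region would take to double its publication output; the 0.7% US rate is inferred from the statement that the EU rate is four times the US rate.

    import math

    annual_growth = {
        "United States (inferred)": 0.007,
        "European Union": 0.028,
        "S. Korea, Singapore, Taiwan, China": 0.159,
    }

    for region, rate in annual_growth.items():
        doubling = math.log(2) / math.log(1 + rate)   # years to double output
        print("%s: output doubles in about %.0f years" % (region, doubling))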

James Gentile, president of Research Corporation, a private foundation that provides funding for scientific research, warned in a recent letter to Science [3] that private philanthropy cannot mitigate the current dearth of federal funding. The top fifty private research foundations contribute less than half a billion dollars yearly to research funding. This is only one percent of US government money allocated to research.

Of course, when you don't make the numbers, you try to assert quality, and that's what the NSF report tries to do by quoting the number of citations that US articles receive. The US still leads all nations in the total number of scientific articles published, but its scientific leadership has begun to wane. The NSF will publish another report, "U.S. Academic Scientific Publishing," later in the year.

References:
1. Changing U.S. Output of Scientific Articles: 1988-2003 (HTML Version); PDF Version.
2. Number of Published Science and Engineering Articles Flattens, But U.S. Influence Remains Strong (NSF Press Release No. 07-082).
3. James M. Gentile, "Keeping the U.S. a World Leader in Science," Science, vol. 317, issue 5835 (July 13, 2007), p. 194.

August 02, 2007

Gel Filtration

Gels, from the Latin "gelatus" (frozen), are a very common arrangement of matter. One example known to all is flavored edible gelatin. Gels are usually colloids in which interconnected nanoparticles "float" in a liquid medium. In the case of edible gelatin, the particles are proteins, and the liquid is water. The edible gelatin product is very similar in form and composition to the collagen present in the human body. The density of gels is about that of most liquids, but their mechanical properties are more like those of solids.

Although edible gelatin is an organic material, there are inorganic gels as well. My first experience with an inorganic gel happened when I was etching YAG (Yttrium Aluminum Garnet, Y3Al5O12). This material, used in powerful lasers, is chemically very stable, and it resists acids. The only effective etchant for YAG was a mixture of phosphoric acid and ferrous chloride (FeCl2) heated to 180°C (face-shield, heavy gloves, and chemical apron required!). The phosphoric acid you buy in the bottle has a considerable amount of water in it (about 15 - 20% by volume). If you just hold the phosphoric acid mixture at these high temperatures, the water will escape, and the phosphate molecules will link to form a gel. I think this may be how the common rust remover, naval jelly, is made. To prevent this, we needed to inject steam into the phosphoric acid-ferrous chloride solution to stabilize the liquid.

Gels are well known to molecular biologists, who use them in an analytical technique called gel electrophoresis. This technique uses a voltage applied to the ends of a tubular gel made of a structurally-optimized crosslinked polymer to separate large molecules, usually DNA or RNA. The electric field induced by the applied voltage causes the molecules to migrate through the gel at different rates, which allows identification and separation of the molecular species.

If the liquid phase of a gel is replaced by a gas, aerogels are formed. These extremely light materials are thermal insulators, so they are suitable for many aerospace applications. Recently, a multi-institution team from Illinois has discovered a type of aerogel that's useful as a filter for removal of toxic heavy metals [1]. As reported in Science [2], they created aerogels from chalcogenides doped with platinum salts in aqueous solution. The chalcogenides, which were sulphides and selenides of elements such as germanium, were linked together by the platinum salts to form a gel. After drying with supercritical carbon dioxide, aerogels were produced with high internal surface area and controllable pore size. The measured surface areas were as large as 327 m^2/g, and the aerogels had a strong affinity for heavy metals, such as mercury. Placing just 10 milligrams of the gel into water contaminated with 645 ppm of mercury reduced the mercury concentration to 40 ppb; that is, 99.994% of the mercury was removed. The team's further research will concentrate on how to reproduce the aerogels without using platinum. If an inexpensive analogue can be produced, it would be useful for environmental remediation.
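
The removal figure is easy to verify. Here is the arithmetic as a quick Python check.

    initial = 645e-6    # 645 ppm, expressed as a mass fraction
    final = 40e-9       # 40 ppb
    print("%.3f%% of the mercury removed" % (100 * (1 - final / initial)))
    # prints: 99.994% of the mercury removed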

References:
1. Kurt Kleiner, "Novel gel soaks up heavy metal pollution" (New Scientist Online, 26 July 2007).
2. Santanu Bag, Pantelis N. Trikalitis, Peter J. Chupas, Gerasimos S. Armatas, and Mercouri G. Kanatzidis, "Porous Semiconducting Gels and Aerogels from Chalcogenide Clusters," Science, vol. 317. no. 5837 (July 27, 2007), pp. 490-493.

August 01, 2007

The Allen Array

Paul Allen is principally known as the co-founder, with Bill Gates, of Microsoft. He's one of the ten richest Americans, with an estimated worth of eighteen billion dollars. With that amount of money, you can do quite a few things. Allen owns the Seattle Seahawks football team and the Portland Trail Blazers basketball team. It's quite a career rise from his inauspicious start of dropping out of college after two years to work as a computer programmer for Honeywell in Boston. There, he was near his friend Bill Gates, whom he convinced to drop out of Harvard to create Microsoft.

The Paul G. Allen Family Foundation gives financial aid to organizations for the advancement of science and technology. Allen supported the Scaled Composites SpaceShipOne suborbital commercial spacecraft, the first to put a civilian into suborbital space and win the Ansari X Prize. The Allen Foundation also supports the SETI Institute, which is devoted to the search for signals from extraterrestrial intelligence. The Foundation donated an initial $11.5 million to the SETI Institute for the research and development phase of a sensitive radio telescope to search for signals from an extraterrestrial civilization. Allen donated an additional $13.5 million for the construction phase, and the telescope was named the Allen Telescope Array in his honor. Nathan Myhrvold, the former CTO of Microsoft, has donated an additional million dollars.

Prior to the development of computers and complex electronic circuitry, radio telescopes were huge, dish-like structures. The large dish surface focused radio signals onto a single receiver. The telescopes at Jodrell Bank Observatory are examples of this early type of radio telescope. Instead of a single large dish, the Allen Telescope Array will have 350 smaller dish antennas whose signals are combined electronically to simulate a single large dish. The telescopes are designed to cover a huge range of frequencies, from 0.5 - 11.2 GHz. The receivers are fed by log-periodic antennas at their foci, the only type of antenna structure that accommodates this four-and-a-half octave bandwidth. The synthetic aperture nature of the array allows it to cover a large area of the sky during observations. Each of the 350 antennas will cost about $100,000.
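
The bandwidth claim is easy to check. Here is the octave arithmetic as a short Python fragment, using the design frequency range quoted above.

    import math

    f_low, f_high = 0.5e9, 11.2e9          # Hz
    octaves = math.log2(f_high / f_low)    # octaves = log2 of frequency ratio
    print("%.1f octaves" % octaves)        # about 4.5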

The array is designed to scan the radio spectrum in narrow slices. Signals from an extraterrestrial civilization would necessarily be confined to a small bandwidth, since concentrating radio power into a bandwidth of a few hertz makes a signal detectable at much greater distances. If signals are detected, Allen will be one of the first to know.

Reference:
1. David Pescovitz, "Sharing the sky: An engineer's quiet search for extraterrestrial intelligent life," Forefront (Spring 2007, University of California, Berkeley).