« December 2007 | Main | February 2008 »

January 31, 2008

My Cellphone has a Sensor in It

By now, everyone realizes that cellular telephones (a.k.a. cellphones) have locator capability. This capability, known as Enhanced 911, is a consequence of the Wireless Communications and Public Safety Act of 1999. Cellphone location can be done using radio direction-finding from two or more cellphone towers, but some cellphones now have a GPS receiver built in. Some of these GPS units use assisted GPS, in which some location information is derived from proximity to a particular cellphone tower. Cellular companies weren't too reluctant to provide the Enhanced 911 feature, since location information can be used for profit; e.g., pumping ads for local restaurants to a cellphone screen. Since technology can be used for both good and evil, I keep my cellphone powered down most of the time, and I'm a member of the Electronic Frontier Foundation.

Ephraim Fischbach, a Fellow of the American Physical Society, Professor of Physics at Purdue University, and author of an interesting paper on testing random numbers [1], has proposed a novel enhancement for cellphones [2]. If radiation detectors can be built into cellphones, their data, combined with GPS data, can show the location and movement of radioactive materials. Radiation shielding is not 100% effective, so even when dangerous levels of radiation are prevented, there's still detectable radiation above the usual background level.

Andrew Longman, a Purdue alumnus who is working on instrumentation for the project, explains the rationale for the project quite simply. "The likely targets of a potential terrorist attack would be big cities with concentrated populations, and a system like this would make it very difficult for someone to go undetected with a radiological dirty bomb in such an area." AT&T is providing free data air time for the project which is funded by the Indiana Department of Transportation.

Testing was accomplished by installing a (safe) radiation source, detectable at a fifteen-foot range, on the Purdue campus, and having people walk around the campus randomly. This source was much smaller than an expected "dirty bomb" source, and its position was determined easily. The system could be used to detect radioactive spills and other accidental radiation releases. There will, of course, be some false alarms, such as radioactive bananas. Bananas contain nearly half a gram of potassium. One isotope of potassium, potassium-40 (40K), has a natural abundance of 0.012% and a half-life of a little more than a billion years. 40K doesn't sound that active, but half a gram of potassium is still more than 10^22 atoms, of which about 10^18 are radioactive. This results in about a billion decays in a year, or about thirty decays per second from your luncheon banana.
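The banana estimate is easy to check with a few lines of arithmetic. This is a rough sketch: the molar mass and half-life are standard tabulated values, and the result lands in the tens of decays per second, the same ballpark as the estimate above.

```python
import math

# Order-of-magnitude check of the banana activity estimate.
AVOGADRO = 6.022e23              # atoms per mole
K_MOLAR_MASS = 39.1              # g/mol, natural potassium
mass_k = 0.5                     # grams of potassium in a banana
abundance_40k = 0.00012          # 0.012% natural abundance of 40K
half_life_s = 1.25e9 * 3.156e7   # 40K half-life, converted to seconds

atoms_k = mass_k / K_MOLAR_MASS * AVOGADRO   # total potassium atoms
atoms_40k = atoms_k * abundance_40k          # radioactive 40K atoms
decay_const = math.log(2) / half_life_s      # decay probability per second
activity = atoms_40k * decay_const           # decays per second (becquerels)

print(f"{atoms_k:.1e} K atoms, {atoms_40k:.1e} of them 40K")
print(f"Activity: about {activity:.0f} decays per second")
```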

1. S.J. Tu and E. Fischbach, "Geometric Random Inner Products: A Family of Tests for Random Number Generators," Phys. Rev. E, vol. 67 (2003), 016113.
2. Emil Venere and Elizabeth K. Gardner, "Cell phone sensors detect radiation to thwart nuclear terrorism" (Purdue University Press Release, January 22, 2008).

January 30, 2008

Rain Power

It's raining this morning in Morristown. Most homes are equipped with rain gutters, which collect rain water from roof tops and direct it to convenient discharge points. This prevents erosion of the soil near the foundation of a house and prevents possible leakage of water through the foundation walls into the house. All this is good in theory, but it's a nuisance to homeowners who have deciduous (falling leaf) trees near the house. Leaves are swept off the roof into the gutter, where they clog the downspouts and prevent water flow. During one excursion in the pouring rain to clear a clogged downspout at my own house, I noticed the tremendous flow of water that occurred when I removed the clog. Having had energy-harvesting devices on my mind at the time, I reasoned that I could harvest this energy quite easily with a turbine generator, but a quick calculation I did while drying out showed that not much energy could be obtained from the area of my roof. Furthermore, this would be transient energy, and there was the practical problem of leaves clogging my proposed turbine generator. A further problem was what would happen to my power plant in freezing weather. All this dissuaded me from experimenting with a rain water energy-harvesting system.
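The discouraging back-of-the-envelope calculation is easy to reproduce. The roof area and annual rainfall below are my own assumed numbers for illustration; the drop height is an assumed gutter-to-ground fall of about twelve feet.

```python
# Potential energy available from a year of roof runoff falling
# through a gutter downspout. All inputs are illustrative assumptions.
RHO_WATER = 1000.0   # kg/m^3
G = 9.81             # m/s^2
roof_area = 150.0    # m^2, assumed roof footprint
annual_rain = 1.2    # m of rainfall per year, assumed
drop_height = 3.66   # m, roughly a twelve-foot fall

water_mass = RHO_WATER * roof_area * annual_rain   # kg of water per year
energy_j = water_mass * G * drop_height            # potential energy, joules
energy_kwh = energy_j / 3.6e6

print(f"about {energy_kwh:.1f} kWh per year")
```

With these assumptions the whole roof yields only a couple of kilowatt-hours per year, hardly worth a turbine.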

A team of scientists at the French Atomic Energy Commission (Commissariat à l'énergie atomique), Grenoble, France, weren't so easily dissuaded, and they found a way to extract more energy from raindrops than the twelve-foot fall from my roof would provide [1-3]. The team, led by Jean-Jacques Chaillout, decided to harvest the full energy of raindrops as they fall. The pitter-patter of rain drops on a roof is caused by a transfer of energy from the falling rain drop to the roof, so they decided to harvest this energy using piezoelectric materials. They found that a one millimeter diameter rain drop, of the type found in drizzle, has an impact energy of two microjoules. Really heavy rain, with droplets about five millimeters in size, has about a millijoule of energy per droplet. Some rain droplets have more than ten millijoules of energy. Energy-harvesting is aided by the fact that most rain drops impact inelastically; that is, they don't bounce.
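The droplet energies quoted above are just kinetic energy at terminal velocity. A sketch, using rough literature values for terminal velocity (the velocities here are my assumptions, not figures from the paper), reproduces both the microjoule and millijoule figures:

```python
import math

RHO_WATER = 1000.0   # kg/m^3

def drop_energy(diameter_m, terminal_v):
    """Impact kinetic energy, in joules, of a spherical water drop."""
    volume = math.pi / 6.0 * diameter_m ** 3   # sphere volume
    mass = RHO_WATER * volume
    return 0.5 * mass * terminal_v ** 2

drizzle = drop_energy(1e-3, 3.0)    # 1 mm drop at ~3 m/s
downpour = drop_energy(5e-3, 9.0)   # 5 mm drop at ~9 m/s
print(f"drizzle: {drizzle*1e6:.1f} uJ, downpour: {downpour*1e3:.1f} mJ")
```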

The French scientists used 25 micrometer polyvinylidene fluoride polymer sheets as their piezoelectric energy-harvester, and they were able to generate about 1 microwatt of power for the smallest droplets, and more than 10 milliwatts for the largest. They calculate that the rainfall common in France would produce an average energy of about one watt-hour per square meter each year. This is rather small, but the system might be good for remote instrumentation in tropical rain forests. Their research has been published in Smart Materials and Structures [4].

1. Paul Marks, "Pitter-patter of raindrops could power devices" (New Scientist Online, January 24, 2008).
2. Lisa Zyga, "Rain Power: Harvesting Energy from the Sky" (PhysOrg, January 22, 2008).
3. Strom aus Regentropfen (Welt der Technik, January 24, 2008).
4. Romain Guigon, Jean-Jacques Chaillout, Thomas Jager and Ghislain Despesse, "Harvesting raindrop energy: experimental study," Smart Mater. Struct. vol. 17 (2008) 015038-9.

January 29, 2008

Communicating Astronomy

One way to ensure government funding for science is to generate public interest. The Mars Pathfinder Mission, with its robotic Sojourner vehicle, did more to stimulate interest in Mars in its three months of operation than the prior three centuries of fuzzy telescope images. Although the subsequent Spirit and Opportunity Mars Exploration Rovers had a much greater scientific worth, the little-robot-car-that-could captured the public imagination. Stunning images from the Hubble Space Telescope have also generated public interest. The United Nations has designated 2009 to be The International Year of Astronomy [1], so we can expect to be bombarded with many astronomy-themed television programs and print articles in the next two years.

There have been attempts to generate interesting astronomy content on the web. The US-based Astronomy Education Review is one such site, and it's supported or endorsed by all the principal astronomy organizations in the United States: the National Optical Astronomy Observatory, NASA, the American Astronomical Society, and the Astronomical Society of the Pacific (of which I was a member early in my career). As its name implies, this web site is intended for science educators. It suffers from a major problem - it's boring!

Fortunately, the International Astronomical Union (IAU) has taken a different approach in launching its new online, free-access journal, "Communicating Astronomy with the Public," known simply as the CAPjournal. The first issue of the CAPjournal was published in October, 2007 [2]. Although it's available freely on the web, there's a companion print edition sponsored by the Hubble Space Telescope Group of the European Space Agency. Only 1,500 copies of the first issue were printed, so its major impact will be through the web-hosted electronic version.

The first issue has a list of the "Top Ten Astronomical Breakthroughs of the 20th Century," as follows:

• The expanding universe
• The multitude of galaxies
• The cosmic microwave background
• Exotic cosmic structures (quasars and active galactic nuclei)
• Stellar energy sources and stellar evolution
• The Hertzsprung-Russell diagram and stellar diversity
• Extrasolar planets
• Stellar chemical composition
• Dark matter
• Galaxy mapping and structure

To quote from the CAPjournal web site, "... as a tool to communicate science, astronomy possesses almost magical powers. Astronomy touches on the largest philosophical questions facing the human race: Where do we come from? Where will we end? How did life arise? Is there life elsewhere in the Universe? Space is one of the greatest adventures in the history of mankind: an all-action, violent arena with exotic phenomena that are counter-intuitive, spectacular, mystifying, intriguing and fascinating." I concur.

1. The UN declares 2009 the International Year of Astronomy (Astronomy2009.org).
2. CAPjournal, Issue 1 (October 2007).
3. David W. Hughes and Richard de Grijs, "The Top Ten Astronomical Breakthroughs of the 20th Century," CAPjournal, Issue 1 (October 2007), pp. 11-17.

January 28, 2008

Printing Nanoelectronic Circuitry

In a recent article (Electrospray, January 23, 2008), I described the electrospray process and the efforts by a team of chemical engineers at Purdue University to develop a greater understanding of this versatile process [1-2]. Now, another group of chemical engineers, from Princeton University, has published its method for printing extremely fine lines using large spray nozzles and electric fields. In fact, they were able to use a 500 micrometer nozzle to produce 100 nanometer lines [3-4]. This is an order of magnitude better resolution than ink-jet printing, and their process is much simpler than other nanoprinting techniques.

The essential feature of the process is an electrohydrodynamic jet, a stream of liquid accelerated by a strong electric field. The electrohydrodynamic jet was discovered nearly a hundred years ago [5], and it has been useful in many processes, but it has a fundamental problem - the liquid stream becomes unstable. The jet whips around uncontrollably, or it breaks apart into droplets. Of course, the droplet feature has been used to advantage in electrospray. A key finding by the Princeton team is that the liquid jet transfers some electrical charge to the surrounding gas. They found that they could control the jet by controlling the surrounding gas. For aqueous-based inks, this involved changing the humidity. They were eventually able to obtain extremely straight and stable jets projecting large distances from the nozzle. Some research is necessarily long-term, and the Princeton group has been studying electrohydrodynamic jet instability for several years [6-7].

As if in confirmation of the proverb, "The acorn does not fall far from the tree," [8] Princeton has licensed its patent on this discovery to Vorbeck Materials Corporation, a company founded by one of its chemical engineering alumni. The major present interest of Vorbeck, a privately-held specialty materials company located in Maryland, is graphene nanomaterials.

1. Emil Venere, "Electrospray droplet research yields surprising, practical results" (Purdue University Press Release, January 7, 2008).
2. Robert T. Collins, Jeremy J. Jones, Michael T. Harris and Osman A. Basaran, "Electrohydrodynamic tip streaming and emission of charged drops from liquid cones," Nature Physics (to appear).
3. Steven Schultz, "Fine print: New technique allows fast printing of microscopic electronics" (Princeton University Press Release, January 24, 2008).
4. Sibel Korkut, Dudley A. Saville, and Ilhan A. Aksay, "Enhanced Stability of Electrohydrodynamic Jets through Gas Ionization," Phys. Rev. Lett., vol. 100 (2008), 034503.
5. John Zeleny, "Instability of Electrified Liquid Surfaces," Phys. Rev., vol. 10 (1917), pp. 1-6.
6. C.-H. Chen, D. A. Saville, and I. A. Aksay, "Electrohydrodynamic "drop-and-place" particle deployment," Appl. Phys. Lett., vol. 88 (2006), 154104.
7. C.-H. Chen, D. A. Saville, and I. A. Aksay, "Scaling laws for pulsed electrohydrodynamic drop formation," Appl. Phys. Lett. vol. 89 (2006), 124103.
8. The acorn never falls far from the tree; Francophones may prefer Les chiens ne font pas des chats ("dogs don't make cats").

January 25, 2008

US Science Scorecard 2008

In a previous article (US Science in Definite Decline, August 3, 2007) I wrote about a National Science Foundation report showing that the number of articles published yearly by US scientists in major journals has been essentially static since the 1990s [1-2]. The rest of the world is overtaking the US in the number of scientific publications. While US output has been static, the European Union has had a 2.8% annual growth rate of published scientific articles, four times the US growth rate. Furthermore, Japan's publication output has increased at a rate five times that of the US. Asian countries, including South Korea, Singapore, Taiwan and China, have seen an average annual growth rate of publications of 15.9%. The just-released biennial report on the state of science and engineering research and education in the United States by the National Science Board [3-5] confirms that the decline in scientific publications reflects a broader decline in US science and engineering.
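Compound growth makes these seemingly small rate differences dramatic over even a decade. A quick sketch, normalizing each region's output to 100 and taking the US rate to be one quarter of the EU's 2.8%, as the text implies:

```python
# Compounding the annual publication growth rates over a decade.
# Rates are from the text; starting counts are normalized to 100.
def grow(start, rate, years):
    """Compound growth: start * (1 + rate)^years."""
    return start * (1.0 + rate) ** years

years = 10
us = grow(100, 0.007, years)    # US: roughly a quarter of the EU rate
eu = grow(100, 0.028, years)    # European Union: 2.8% per year
asia = grow(100, 0.159, years)  # Asian countries: 15.9% per year
print(f"after {years} years: US {us:.0f}, EU {eu:.0f}, Asia {asia:.0f}")
```

At these rates, Asia's output more than quadruples in ten years while US output barely moves.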

As usual, this government report attempts to show that things are still good. For example, US spending on research and development, at $340 billion, is still the largest in the world, and the US still leads in the number of patents issued to its citizens. The dark side, as everyone with school-aged children will agree, is that US schools lag behind the rest of the world in science and math education. Education is among the thirteen Science and Engineering Indicators listed in the report, as follows:

• US primary school students continue to lag behind the rest of the world in science and math, although mathematics skill appears to be improving.

• Secondary school completion and college enrollment have increased, but there are still differences across socioeconomic groups.

• The number of baccalaureate degrees awarded in the US declined from one third of those worldwide in 1980, to just a quarter in 2000; however, the total number of degrees awarded worldwide increased from 73 million to 194 million in the same period.

• US firms increased research and development employment outside the US by 76% from 1994 to 2004. This is somewhat balanced by the fact that foreign firms increased their US research and development employment by 18%.

• Over the past two years, funding for basic research was $62 billion (18%); for applied research, $75 billion (22%); and development, $203 billion (60%). The federal government supplies nearly two-thirds of research funding and industry about 17%. Private foundations and other sources supply the rest.

• When inflation is taken into account, government funding of research has declined since 2004, and this decline is expected to continue. It is important to note that this would be the first multiyear decline since 1982.

• Not surprisingly, Asian countries, led by China, have rapidly increased their global market share in high technology manufacturing and software.

• The US still leads the world in patents.

• US exports of technology products have eroded, and we are becoming a net importer of technology products.

• US citizens still support government funding for basic research. In fact, public support of science has increased significantly.

• US citizens have great confidence in leaders of the scientific community. White coats still sell. The public looks for advice from the scientific community on policy issues such as global warming and genetically modified foods.

• The number of PhD women and minorities working at universities has increased substantially from 1973 to 2006 (women from 9% to 33% of the total workforce, and minorities from 2% to 8%).

• Federal support of science and engineering PhDs employed by universities has remained steady in the past twenty years.

The Board made the following recommendations:

• The federal government should enhance funding for basic research.

• Cooperation between industry and universities should be encouraged, and industrial researchers should be encouraged to publish the results of their research.

• New data are needed on the implications of globalization of the high technology industry for the US economy.

1. Changing U.S. Output of Scientific Articles: 1988-2003 (HTML Version); PDF Version.
2. Number of Published Science and Engineering Articles Flattens, But U.S. Influence Remains Strong (NSF Press Release No. 07-082).
3. Press Release 08-005, "National Science Board Releases Science and Engineering Indicators 2008" (January 15, 2008).
4. National Science Foundation Science and Engineering Statistics Web Site.
5. State of US science report shows disturbing trends; challenges (Network World, January 17, 2008).
6. Science and Engineering Indicators 2008 Web Site.

January 24, 2008

Physics Publications

Physics encompasses many topic areas. That's why the physics umbrella organization, the American Institute of Physics, has so many member professional societies. These are

• American Physical Society
• Optical Society of America
• Acoustical Society of America
• The Society of Rheology
• American Association of Physics Teachers
• American Crystallographic Association
• American Astronomical Society
• American Association of Physicists in Medicine
• American Vacuum Society
• American Geophysical Union
• Society of Physics Students
• Sigma Pi Sigma (National Physics Honor Society)

I've been a member of the American Physical Society (APS) for more than thirty years. The APS has about 45,000 members, whose interests are so broad that the society is divided into fourteen divisions (for example, Materials Physics and Polymer Physics) and nine topical groups that encompass interests of more than one division; for example, Magnetism. The physical fitness movement has had its effect on the society, but not in reducing the girth of its portly members (including me). It was thought that the term "Physical" was misleading, and there were plans to change the society's name to the "American Physics Society." However, this name change has not been implemented because of legal issues regarding its charter of incorporation.

The principal activity of the APS is the publication of its hefty journal, the Physical Review. The Physical Review has been published since 1893, with the APS taking over its publication in 1913. The field of physics became so huge and specialized by 1970 that the Physical Review was split into four volumes, and by 1993 a fifth volume was added.

• Physical Review A - Atomic, molecular, and optical physics
• Physical Review B - Condensed matter and materials physics
• Physical Review C - Nuclear physics
• Physical Review D - Particles, fields, gravitation, and cosmology
• Physical Review E - Statistical, nonlinear, and soft matter physics

Physics is so important to many of Honeywell's products that the Honeywell Technical Library now provides free access to all AIP publications (Honeywell access to past issues of the Physical Review has been available for some time). The AIP publications are

• Journal of Applied Physics
• Applied Physics Letters
• Review of Scientific Instruments
• Chaos [1]
• Journal of Chemical Physics
• Journal of Mathematical Physics
• Physics of Fluids
• Physics of Plasmas
• Low Temperature Physics
• Journal of Physical and Chemical Reference Data
• AIP Conference Proceedings
• Physics Today

Access to these can be found at this link. Since access is keyed from Honeywell's IP address, no password is required. I've published quite a few papers in the Journal of Applied Physics [2], a few papers in some of the other journals on this list, and a letter in Physics Today. You can view some of my earliest work here [3].

1. Cheap joke involving Journal of Technical Management suppressed.
2. Publications of D. M. Gualtieri.
3. D.M. Gualtieri, K.S.V.L. Narasimhan, and T. Takeshita, Control of the Hydrogen Absorption and Desorption of Rare Earth Intermetallic Compounds, J. Appl. Phys. 47, 3432-3435 (1976).

January 23, 2008

Electrospray

Spray is a very versatile and popular technique for coating. In a spray process, the material to be deposited is mixed with a carrier and forced by pressure through a capillary nozzle to eventually coat an article. For spray painting, this would be pigment and its vehicle in a solvent, but the process is applicable to metal powders in a carrier gas. Such a simple process begs to be improved, and the physicist John Zeleny did this by applying an electrical potential between the spray nozzle and the work piece [1]. Of course, when Zeleny did his work, the object was pure research, not applied, but society supports pure research knowing that useful applications are often the result.

The essential features of this "electrospray" process are its production of fine droplets that uniformly coat an article, and the additional sticking power caused by the electrostatic attraction of the charged droplets to an article surface charged at the opposite polarity. The fine spray is the result of the mutual repulsion of charged droplets in the ionized spray jet. Droplets are released from the original fluid cylinder at the tip of the Taylor cone, a conical end cap with a half-angle of about fifty degrees, first analyzed by Geoffrey Taylor in 1964 [2]. Taylor also found that a drop becomes unstable when its length is 1.9 times its equatorial diameter.

As electrospray is applied to more and more processes, it's inevitable that academic researchers will devote more time to its study. Chemical engineers at Purdue University have developed a mathematical model that predicts droplet formation in electrospray [3]. In a most commendable scientific approach, they conducted experiments to verify their model. Viscosity had been ignored in previous electrospray studies, since its effect was thought to be small, and viscoelastic effects are hard to treat mathematically. The Purdue team used finite-element analysis, which is used in modeling studies at Honeywell-Morristown, with an elliptic mesh. Their mesh included more elements at the tip of the cone, where greater accuracy is required, a technique called multi-scale modeling. Their calculations showed that viscosity has a large effect on droplet size, and they were able to develop a scaling law for droplet size. Their research was sponsored by the U.S. Department of Energy, and it will appear in a future issue of Nature Physics [4].

1. John Zeleny, "The Electrical Discharge from Liquid Points, and a Hydrostatic Method of Measuring the Electric Intensity at Their Surfaces," Physical Review, vol. 3 (1914), pp. 69-91.
2. G. I. Taylor, "Disintegration of Water Drops in an Electric Field," Proc. R. Soc. London, Ser. A, vol. 280, no. 1382 (1964), pp. 383-397.
3. Emil Venere, "Electrospray droplet research yields surprising, practical results" (Purdue University Press Release, January 7, 2008).
4. Robert T. Collins, Jeremy J. Jones, Michael T. Harris and Osman A. Basaran, "Electrohydrodynamic tip streaming and emission of charged drops from liquid cones," Nature Physics (to appear).
5. Taylor cone and droplet image (Purdue University).
6. What is Electrospray? (New Objective, Inc.).
7. Electrostatic spray painting (Wikipedia).

January 22, 2008

Google Palimpsest

At least once each year, we get the call to clean our labs. This usually occurs prior to the visit of a vice president, or on a designated "Safety Day." Since scientists are accustomed to making silk purses out of sow's ears, especially when capital budgets are thin, we tend to save anything that looks useful. However, we're never asked to clean out our old data files. For better or for worse, this happens automatically. When we migrate to a new computer, some of our data analysis programs will not port, and the data linked to these can no longer be read. This is the reason I write my own data acquisition programs to output text files, usually comma-separated value (CSV) files. It's also the approach that the authors of the popular open source (and free) plotting programs, RLPlot and Gnuplot, have used.
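The plain-text strategy is easy to follow in practice. The sketch below writes and re-reads a small dataset as CSV using Python's standard library; the file name and columns are invented for illustration.

```python
import csv

# Write measurements as plain comma-separated values, so any future
# tool - spreadsheet, Gnuplot, or a few lines of code - can read them.
rows = [
    ("time_s", "temperature_c"),
    (0.0, 21.3),
    (1.0, 21.5),
    (2.0, 21.4),
]

with open("run_001.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

# Decades later, the same file is still one line of code away:
with open("run_001.csv", newline="") as f:
    data = list(csv.reader(f))
print(data[0], data[1])
```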

Although your data may be in text format, there's the further problem of "dead media." I have a box of eight-inch floppies, but I haven't seen an eight-inch floppy reader in two decades. I have a box of 5-1/4 inch floppies in my lab, but I haven't seen a computer with a 5-1/4 inch drive in about a decade. My colleagues and I must each have several boxes of 3-1/2 inch floppies under our desks, and these will become harder to read, since many computers sold today no longer include floppy drives. One tactic we all use is to copy from our old media, such as floppies, to new media, such as CDs. This can be tiresome, and that's why it's seldom done in practice.

I once communicated with a NASA scientist who had some data from one of the early space probes on ancient reel tapes. These tapes had turned brittle with age, and he was afraid to put them on a standard tape machine. Also, the data format was lost. Since I dabbled in magnetics at the time, I told him we could build a reader, and we could use cryptological techniques to extract his data from the tape using some of his published data as "plaintext." He replied with interest, but he apparently didn't get funding, since he didn't follow up.

In an effort to preserve scientific data, Google is preparing to launch its Palimpsest project to provide terabytes of storage for public scientific datasets [1]. This storage will be free in two senses; first, there is no charge to have Palimpsest host your data; second, your data will be freely available to everyone. This will be great for collaborative research projects across several locations, but this service will be off-limits to Honeywell scientists and engineers. We need to protect our proprietary information, but it will be interesting to troll through other people's data.

Google plans to offer algorithms for analysis of Palimpsest information. Palimpsest will allow source annotation of the data (e.g., an abstract), and viewers can offer comments. One idea is that amateur scientists would enjoy exploring real data. All data from the Hubble Space Telescope, presently 120 terabytes, will be online. Hopefully, these amateur scientists will do more than look for the stellar equivalent of faces on Mars. High resolution images of the Archimedes Palimpsest will also be online, in tribute to the project's namesake. For large datasets, Google will send you a "data suitcase." This suitcase is a three-terabyte array of hard drives in a Linux RAID configuration.

1. Alexis Madrigal, "Google to Host Terabytes of Open-Source Science Data" (Wired, January 18, 2008).

January 21, 2008

Materials Top Ten

In a previous article (Greatest Material Events of All Time, April 10, 2007), I presented excerpts from a list of the fifty most important events in the history of materials science [1-2]. The list was developed by The Minerals, Metals & Materials Society (TMS) in celebration of its fiftieth year as a member society of the American Institute of Mining, Metallurgical and Petroleum Engineers (AIME). TMS polled its members, who are professionals in such diverse fields as mineral processing, primary metals production, and basic/applied research in materials, to decide on the top fifty.

Since it's the start of a new year, the editor of Materials Today presented his own list of what he calls the top ten advances in materials science of the last fifty years in the January-February, 2008, issue, as follows [3]:

1. International Technology Roadmap for Semiconductors
2. Scanning probe microscopy
3. Giant magnetoresistive effect
4. Semiconductor lasers and light-emitting diodes
5. National Nanotechnology Initiative
6. Carbon fiber reinforced plastics
7. Materials for lithium-ion batteries
8. Carbon nanotubes
9. Soft lithography
10. Metamaterials

Items one and five are unusual, since they are "programmatic," and not technological. I can't argue that they're unimportant, but their inclusion on the list shows the deep fixation the world has on process. As the European Framework Programme has shown, process does not always equal progress. To its credit, the National Nanotechnology Initiative (NNI), launched in 2000, had the important function of combining various niche technologies in many scientific disciplines into a field called "nanotechnology." Many things we scientists had done for years were suddenly re-branded as nanotechnology to enhance interest and funding, but that's how the game is played.

Twenty-six US government agencies sponsor programs in the National Nanotechnology Initiative, which will have an estimated overall funding impact in 2008 of about $1.5 billion. This adds to the $7 billion in nanotechnology funding since 2000. Industrial funding of nanotechnology research and development exceeds the government contribution. As Richard Feynman famously said, "There's Plenty of Room at the Bottom," [4] and this applies to the bottom line, also.

1. Great Materials Moments, Science, vol. 315, no. 5819 (23 March 2007), p. 1643.
2. Full list on the TMS web site.
3. Jonathan Wood, "The top ten advances in materials science," Materials Today, vol. 11, no. 1 (Jan.-Feb. 2008), pp. 40-45. Available as a PDF file here.
4. Richard P. Feynman, "There's Plenty of Room at the Bottom," American Physical Society Annual Meeting (December 29, 1959).

January 18, 2008

Data Capacitor

One useful electronic device is the capacitor, a device that stores electrical charge. Its two main functions in electrical circuitry are to smooth supply voltages and to isolate AC signals from a DC background. It can also be used to form filters and, when combined with an inductor, resonant circuits, a feature I've used often in my work. When I read recently about a "Data Capacitor," I immediately thought about the "Flux Capacitor," the essential component of the time machine in the movie "Back to the Future." [1] As you've likely guessed, a data capacitor's function is to store data.
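As an aside on the resonant circuits mentioned above, the resonant frequency of an inductor-capacitor pair follows from f = 1/(2π√(LC)). A small sketch with arbitrary, illustrative component values:

```python
import math

def resonant_freq(l_henry, c_farad):
    """Resonant frequency, in hertz, of an ideal LC circuit."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

# A 10 microhenry inductor with a 100 picofarad capacitor
# resonates at about 5 MHz, in the shortwave radio range.
f = resonant_freq(10e-6, 100e-12)
print(f"{f / 1e6:.2f} MHz")
```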

Today's high speed network links make it possible to transfer gigabytes of data in mere seconds. Unless you're able to use the data immediately (e.g., the data are multiple video signals that can be viewed), you need a way to store them for subsequent processing. For example, some particle detectors for high-energy physics experiments generate huge amounts of data in short time periods. Researchers from Indiana University, Oak Ridge National Laboratory, the Technische Universität Dresden, the Pittsburgh Supercomputing Center, and the Rochester Institute of Technology have teamed to create a data capacitor with a sustained data transfer rate of 16.2 gigabits per second (Gbps) [2-3]. Their effort was in response to the 2007 Bandwidth Challenge of the Super Computing 2007 committee, the essence of which was demonstration of a 10 Gbps transfer from the SC07 conference in Reno, Nevada, to the team's home location [4].
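A little arithmetic shows what a sustained 16.2 Gbps means in practice. This sketch simply divides a terabyte by the demonstrated rate (the rate is from the text; the rest is unit conversion).

```python
# Time to move one terabyte at the demonstrated sustained rate.
TERABYTE_BITS = 1e12 * 8   # one terabyte expressed in bits
RATE_BPS = 16.2e9          # 16.2 gigabits per second

seconds_per_tb = TERABYTE_BITS / RATE_BPS
print(f"one terabyte in {seconds_per_tb:.0f} seconds "
      f"({seconds_per_tb / 60:.1f} minutes)")
```

At that rate, a terabyte moves in roughly eight minutes.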

The data capacitor, hosted at Indiana University, was built from the following components [5]:

• 52 Dell servers running Red Hat Enterprise Linux
• 24 Ten gigabit ethernet cards
• 12 DataDirect Networks S2A9550 storage controllers
• 30 DataDirect Networks 48 bay Serial-ATA disk chassis
• 535 terabytes of usable disk storage
• Six water-cooled equipment racks

The data capacitor is part of the National Science Foundation's TeraGrid, a high-performance network that links multiple supercomputing centers into a national grid.

Memory has been a persistent problem in computing: there never seems to be enough of it, although the definition of "enough" has changed considerably over the years. When I worked with an IBM 360 mainframe computer in the early 1970s, my workspace was constrained to 32 kbytes. Nowadays, personal computers have almost 100,000 times this capacity. A related memory issue is known as the von Neumann bottleneck, named after computer pioneer John von Neumann, who devised the ubiquitous computer architecture that bears his name. The von Neumann bottleneck concerns the shuffling of data and instructions into and out of the central processing unit (CPU). This bottleneck limits the processing speed when simple manipulations are needed on large data sets. John Backus, leader of the team that created the Fortran programming language, whom I eulogized in a previous article, said this about the problem [6]:

"Surely there must be a less primitive way of making big changes in the store than by pushing vast numbers of words back and forth through the von Neumann bottleneck. Not only is this tube a literal bottleneck for the data traffic of a problem, but, more importantly, it is an intellectual bottleneck that has kept us tied to word-at-a-time thinking instead of encouraging us to think in terms of the larger conceptual units of the task at hand. Thus programming is basically planning and detailing the enormous traffic of words through the von Neumann bottleneck, and much of that traffic concerns not significant data itself, but where to find it."

Pipelining and dedicated registers have helped alleviate the von Neumann bottleneck to some extent, but it remains a major problem in computer architecture.
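Backus's "word-at-a-time" complaint can be seen in miniature in a toy example of my own (any language would do; this sketch uses Python):

```python
# The same reduction expressed "word-at-a-time" versus as a single
# aggregate operation, echoing Backus's point about the bottleneck.
data = list(range(1_000_000))

# Word-at-a-time: the programmer explicitly plans the traffic of
# each value through the CPU, one word per step.
total = 0
for x in data:
    total += x

# Aggregate: one conceptual unit of the task; the word traffic is
# the language implementation's problem, not the programmer's.
assert total == sum(data)
print(total)
```

The two forms compute the same sum, but only the second lets the programmer think in the "larger conceptual units" Backus was asking for.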

1. Back to the Future on the Internet Movie Database.
2. Joel Hruska, "Indiana U. team wins SC07 Bandwidth Challenge with 18.2Gbps transfer rate" (Ars Technica, November 20, 2007).
3. Team led by IU wins Supercomputing Bandwidth Competition (Indiana University Press Release, Nov. 16, 2007).
4. SC07 Bandwidth Challenge (PDF File).
5. Data Capacitor Web Site.
6. Von Neumann bottleneck (Wikipedia).

January 17, 2008

How Many Experimental Physicists Does it Take...

...To change a light bulb? They don't replace the bulbs, they repair them! Well, at least on the internet they do, but any physicist would see that the cost-benefit analysis weighs heavily in favor of replacing the bulb.

Although a physicist at any degree level could repair a light bulb, a PhD would need no instructions. He would calculate from first principles what filament temperature would give the desired spectrum of light, and this would dictate what wire materials he could use. He would calculate what vacuum would be required for a reasonable lifetime, and then calculate the electrical power required to heat the filament to the desired temperature in this thermal environment. This would dictate the wire resistance, which would give a wire length and diameter. The wire form would modify the thermal environment, so he would iterate his calculations a few times. A theoretical physicist wouldn't need to iterate - he would derive an analytical solution - but he wouldn't have the laboratory skill to make the repair.
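For the curious, here's what one pass of that calculation might look like, with loudly stated assumptions of my own: a straight (uncoiled) filament radiating as a grey body, and handbook-style guesses for hot tungsten's emissivity and resistivity (the constants and the function name are illustrative, not from any real bulb design):

```python
import math

# A sketch of the PhD's light bulb calculation, under stated
# assumptions: straight tungsten filament, grey-body radiation.
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W/(m^2 K^4)
EMISSIVITY = 0.35    # assumed emissivity of hot tungsten
RHO = 9.0e-7         # assumed tungsten resistivity near 2800 K, ohm-m

def filament(power_w=60.0, volts=120.0, temp_k=2800.0):
    """Return (radius_m, length_m) of a filament that dissipates
    power_w at volts while radiating at temp_k."""
    resistance = volts**2 / power_w                    # R = V^2 / P
    area = power_w / (EMISSIVITY * SIGMA * temp_k**4)  # radiating area
    # Combining area = 2*pi*r*L with R = RHO*L/(pi*r^2) gives a
    # closed form for the wire radius:
    radius = (RHO * area / (2 * math.pi**2 * resistance)) ** (1 / 3)
    length = area / (2 * math.pi * radius)
    return radius, length

r, l = filament()
print(f"radius ~ {r*1e6:.0f} um, length ~ {l*100:.0f} cm")
```

For a nominal 60 W, 120 V bulb this lands at a radius of a couple tens of micrometers and an uncoiled length of a few tens of centimeters, which is the right order of magnitude for a real filament. (A real designer would then iterate, since the coiled form changes the thermal environment, as described above.)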

Since PhDs are the vanguard of technological advancement, it's interesting to compare the numbers of PhDs produced by the US and other countries. Since population varies widely from country to country, it's more instructive to look at the ratio of combined science and engineering doctoral degrees to combined science and engineering bachelor's degrees, as shown in the following table [1].

• US 11.3% (2006)
• India 0.4% (2006)
• Japan 4.0% (2004)
• China 1.2% (2003)

What these data show is that the US is still first-rate at producing PhDs, so at least one of our innovation metrics is still good. They also point to a problem with university education in India, as highlighted recently in an article in EETimes [2]: the lack of a sufficiently large academic infrastructure. This is not merely a brick-and-mortar issue; it stems from the fact that professors can make a much better salary in industry. The root problem, it appears, is that professors in India are treated as civil servants and are not adequately compensated for their special skills. Academic pay tops out at the level of senior government officials, a situation also found for scientists working for the US government, but senior government employees in the US are generally well compensated.

India's lack of PhDs is mirrored in its patent metric. According to the World Intellectual Property Organization, Indian patent filings are just three per million of population, compared with a world average of 250 [3]. The Indian government has announced plans to improve its university infrastructure by establishing several science and technology research institutes. It will add eight campuses to its Indian Institutes of Technology system to supplement the existing seven (Chennai, Delhi, Guwahati, Kanpur, Kharagpur, Mumbai, and Roorkee). However, India will need to increase compensation levels for professors, or these will be empty buildings.

1. Sheila Riley, "India eyes fixes for education," EETimes, December 17, 2007, p 4.
2. Sheila Riley, "India eyes fixes for education" (EETimes Online, December 17, 2007).
3. K.C. Krishnadas, "Indian patent filings lag behind global average" (EETimes Online, December 7, 2007).

January 16, 2008

Helium
Helium is the second most abundant element in the universe (23 atomic percent). Its scarcity on Earth (just 5.2 ppm in the atmosphere) prevented its discovery until 1868, when its presence was detected by its 587.49 nm emission line in the spectrum of the sun's chromosphere during a solar eclipse. This emission feature was first identified with sodium (D1 at 589.594 nm and D2 at 588.9973 nm), but the element was isolated on Earth and named helium, after the Greek sun god, Helios. Helium is non-reactive, and helium in the atmosphere is lost eventually to outer space. All helium on Earth is the result of the radioactive decay of elements such as uranium and thorium, since no helium remains from the time of Earth's initial formation.

Since helium is non-reactive, you wouldn't think it would be of much use to scientists, but it's essential to low temperature studies. The boiling point of helium is 4.22 K, just 4.22 degrees above absolute zero, so liquid helium is used as a coolant. There are various techniques, such as magnetic cooling, that allow temperatures very close to absolute zero to be reached. In my early career, I used a lot of liquid helium, then priced at about five dollars per liter, to study superconductors [1].

Although helium is scarce in the atmosphere, it's found in abundance in natural gas, since the same geological formations that trap natural gas also trap the helium released from radioactive decay. Helium in natural gas was discovered in 1903, when an analysis of gas from a non-flammable gas geyser revealed that it contained 1.84% helium by volume. Since that time, helium has been recovered from natural gas streams, and there was a US government program to support helium recovery by buying quantities for the National Helium Reserve. The first incentive for this was to provide a strategic reserve for airships, but the expansion of technologies that employ helium kept the reserve functioning when the airship requirement vanished. The reserve eventually reached more than a billion cubic meters of gas, but the effort was getting expensive. The US Congress decided to "liquefy" the reserve (in the fiscal sense) and privatize helium production in the mid-1990s, and now there's fear that a helium shortage, or a major price increase, will impact many fields of science [2].

How useful is helium? Aside from the coolant and airship applications mentioned above, here are some examples [3]:

• Mixtures of helium and other gases are used in deep-sea breathing systems.

• Helium is used as a coolant in some nuclear reactors.

• Helium is used as a shielding gas in electric arc welding.

• Helium is used as a protective gas for materials and historical documents.

• Helium is used in the ubiquitous helium-neon laser, although semiconductor lasers are displacing this application.

• Helium is used as a tracer gas to detect leaks in sealed containers. Its small atomic diameter and non-reactive nature make it ideal for this application.

• Helium has a high thermal conductivity and high sound velocity which make it useful in thermoacoustic refrigeration.

What would life be like without the balloons of the Macy's Thanksgiving Day Parade? According to Wikipedia [4], parade organizers decided in 2006 to use fewer balloons because of the nationwide helium shortage.

1. P. Duffer, D.M. Gualtieri, and V.U.S. Rao, "Pronounced Isotope Effect in the Superconductivity of HfV2 Containing Hydrogen (Deuterium)," Phys. Rev. Lett., vol. 37, pp. 1410-1413 (1976).
2. Tony Fitzpatrick, "Helium supplies endangered, threatening science and technology" (Washington University-St. Louis Press Release, Jan. 1, 2008).
3. Helium Applications (Wikipedia).
4. NBC telecast coverage, 11/23/06, referenced by Wikipedia.
5. Helium (Wikipedia).

January 15, 2008

Rendezvous with Mercury

Mercury is the closest planet to the sun. Not surprisingly, the temperature of its sunlit side is about 430 °C, but there are cratered regions near the poles with a temperature of about -180 °C. Radar observations of Mercury have suggested that there may be water ice present in the dark recesses of some polar craters. The heavily cratered surface of Mercury resembles that of Earth's moon. It was once thought that the rotation of Mercury was such that one face always faced the sun, much like the way the Moon presents just one face towards the Earth. It is now known that Mercury rotates three times for every two orbits of the sun in a resonant rotational condition, a more complicated state of tidal locking than a 1:1 ratio of rotation and revolution periods.
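One curious consequence of the 3:2 spin-orbit resonance is a very long solar day on Mercury. Here's a quick sketch using commonly quoted period values (my numbers, not from this article):

```python
# The 3:2 spin-orbit resonance, checked numerically.
rotation_days = 58.646   # sidereal rotation period, in Earth days (assumed)
orbit_days = 87.969      # orbital period, in Earth days (assumed)

# Three rotations take the same time as two orbits:
print(3 * rotation_days, 2 * orbit_days)   # both near 176

# Solar day (noon to noon) for a prograde rotator:
# 1/solar_day = 1/rotation - 1/orbit
solar_day = 1 / (1 / rotation_days - 1 / orbit_days)
print(solar_day)   # about 176 Earth days
```

So a single sunrise-to-sunrise day on Mercury lasts about 176 Earth days, two full Mercury years.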

Although Mercury was known to the ancients, its surface is not resolved by Earth-based telescopes, and the appearance of its surface was a mystery up to the end of the last century. The surface of Mercury was finally resolved by NASA's Mariner 10 spacecraft, which passed near Mercury three times in 1974 and 1975 to reveal its cratered appearance. Even then, less than half its surface was mapped. It's too close to the sun to be photographed by the Hubble Space Telescope.

Yesterday, January 14, 2008, at 2:04:39 PM Eastern Standard Time, after a lapse of more than thirty years, another spacecraft visited Mercury, approaching to within 200 kilometers (124 miles) [1-3]. This was NASA's MESSENGER spacecraft, which should acquire more than a thousand images of Mercury and make other observations on this fly-by. These observations include measurements of the intensity of Mercury's magnetic field and of its surface composition (inferred from gamma-ray, neutron, and x-ray spectra). If all goes well, MESSENGER (all capital letters, since it's the acronym for MErcury Surface, Space ENvironment, GEochemistry and Ranging) will revisit Mercury in October 2008 and September 2009. MESSENGER will then enter orbit around the planet in March 2011, becoming the first spacecraft to orbit Mercury. Once in orbit, it will have a clear and recurrent view of the polar regions. It's a tribute to modern technology that electronic systems can survive so close to the heat and radiation of the sun for so many years.

1. MESSENGER is On The Way to Explore Mercury (SpaceToday.org).
2. Maggie McKee, "Probe to fly by Mercury for first time in decades" (New Scientist Online, January 11, 2008).
3. Robert Lemos, "Messenger Makes a Pass at Venus" (Wired Online, October 24, 2006).
4. MESSENGER Web Site.

January 14, 2008

MEMS Gas Sensor

Continued process development for microelectromechanical systems (MEMS) has allowed the fabrication of miniature versions of larger devices, or miniature devices with the same functionality as other devices. Of course, the prime example of the former is the gyroscope. An example of the latter is the MEMS mechanical resonator that's replacing quartz crystals in some applications [1].

Gas chromatography (GC) and mass spectrometry (MS) are staples in any analytical laboratory, and the combined use of these is a typical method for the analysis of unknowns if they can be vaporized. Unfortunately, these laboratory instruments are refrigerator-sized (dormitory room refrigerator size for the GC, and residential refrigerator size for the MS) and not easy to deploy in the field. Efforts to produce portable combined GC/MS systems still require an instrument several cubic feet in volume with a power demand of more than ten watts. These units require about fifteen minutes per analysis.

Miniature GC systems have been developed over the decades using standard semiconductor processing techniques, principally because a GC is a simple system. Essentially all that's needed is a heated column, a carrier gas flow, and a thermal conductivity detector. Shrinking an MS is more of a problem, since a high quality vacuum is required, along with some fairly sensitive high-speed electronics. As the saying goes, there are no problems, just opportunities, and who better to grasp this opportunity than the engineers at MIT. Engineers at MIT's Microsystems Technology Laboratories are working towards a matchbox-sized combined GC/MS system to detect toxic industrial chemicals and chemical warfare agents [2]. The MEMS GC/MS consumes about a watt of power, and it gives a reading in just four seconds. In the MS, gas molecules are ionized by electric fields generated at the tips of carbon nanotubes, and an electrometer array detects the molecules at their specific mass-to-charge ratios.

MIT is the lead in an international team for development of a practical GC/MS device. Other team members include Cambridge University, the University of Texas at Dallas, Raytheon and Clean Earth Technologies. A progress report is scheduled to be given on January 15, 2008, at the IEEE MEMS 2008 Conference [3]. This research program has been funded for the last three years by DARPA and the U.S. Army Soldier Systems Center.

1. Aaron Partridge and John McDonald, "MEMS Resonators look to displace Quartz resonators," MEMS Manufacturing (July 2006), p. 11-14.
2. Anne Trafton, "Energy-efficient device could quickly detect hazardous chemicals" (MIT Press Release, January 10, 2008).
3. L.F. Velasquez-Garcia and A.I. Akinwande, "A PECVD CNT-Based Open Architecture Field Ionizer for Portable Mass Spectrometry" (MEMS 2008 Conference Schedule, PDF File).

January 11, 2008

James Watson

Many years ago, Jack Paar interviewed Nobel Laureate James Watson on television. I remember this interview, not for what was said, but for Watson's performance. He appeared to be in his own little world, mostly staring up at the stage lights in a total mental fog. He appeared almost autistic and completely lacking in the usual social skills. Years later, I realized this was an act. Watson was cultivating the public idea of the brilliant scientist; or, should that be "Mad Scientist?" One of his students later recalled a similar incident when he and Watson visited the home of one of their patrons. Before ringing the doorbell, Watson unbuttoned his jacket, loosened his tie, and tousled his hair, explaining that this is how scientists are expected to look.

I mentioned Watson's public relations fiasco in a previous article. Watson has written an autobiography of sorts, "Avoid Boring People: Lessons from a Life in Science" [3]. I haven't read it yet (I should have asked for it for Christmas), but it should make interesting reading from what's been written about it. His first book, "The Double Helix," was also autobiographical, but he was just a young man at the time. "The Double Helix" was interesting, and very controversial, and the same is true of this latest book. What else could we expect from the man who once said (at Berkeley in 2000), "Whenever you interview fat people, you feel bad, because you know you're not going to hire them." [1]

Watson takes the opportunity in this book to sling some mud at Harvard, where he was a professor from 1956 to 1976. He recalls that Harvard refused to give him the modest $1,000 raise in salary he requested after winning the Nobel Prize, and he still fumes over the fact that Harvard's President recalled him from an unapproved trip to California [2]. The enmity is not just one-way, since Harvard entomologist E. O. Wilson once said that Watson was "the most unpleasant human being I had ever met" [2].

Watson does have kind words for his undergraduate alma mater, the University of Chicago, where he was educated not only in science, but in literature, history, philosophy, and sociology as well. His education there was a consequence of the educational philosophy of Chicago's President, Robert M. Hutchins, who subscribed to the Great Books curriculum. Hutchins thought that universities had become like trade schools in which students were taught just facts and processes. Hutchins wanted his students to learn how to think. There's no denying that Watson is a thinker; he just thinks differently than the rest of us.

1. Tom Abate, "Nobel Winner's Theories Raise Uproar in Berkeley".
2. Steven Shapin, "Book Review: Chairman of the Bored" (Harvard Magazine, January 2008).
3. James D. Watson, "Avoid Boring People: Lessons from a Life in Science" (Alfred A. Knopf, ISBN-10: 0375412840, ISBN-13: 978-0375412844, on Amazon).

January 10, 2008

Zinc Devaluation

As I mentioned in a previous article, the gravitational constant G is less precisely known than the other physical constants. G is known to only five significant figures, whereas the fine structure constant is known to eleven significant figures. Universal gravitation is a problem for physicists, but chemists have their own weighty problem. Absolute isotopic masses of the elements are known only to about the same precision as G, but that's not a problem, since chemists are generally interested in the ratios of elemental masses. To set the ratios, one element must be fixed in mass. The first obvious method was to set hydrogen as exactly one (H2 as exactly two). Since it's easy to make exact quantities of hydrogen electrochemically, this was a very practical solution. Hydrogen, however, is very light, so oxygen became the first international mass standard, set to be exactly sixteen in 1903. Eventually, it became easier to work with an isotopically-pure solid, so carbon-12 became the official mass standard in 1961.

The practical problem that confronts chemists and materials scientists is that the elements we work with are mixtures of isotopes, so we need to rely on our knowledge of the "natural abundance" of the elements to convert weight to numbers of atoms. As measurements improve and our processes demand more precision, it's inevitable that the numbers we've used all our professional lives will change. Last year, the atomic weights of five elements were changed. The most drastic change was to the atomic weight of zinc, which was reduced by nearly 450 parts-per-million. These elements, along with their old -> new atomic weights, are listed below.

• Lutetium 174.967 ± 0.001 -> 174.9668 ± 0.0001
• Molybdenum 95.94 ± 0.02 -> 95.96 ± 0.02
• Nickel 58.6934 ± 0.0002 -> 58.6934 ± 0.0004
• Ytterbium 173.04 ± 0.03 -> 173.054 ± 0.005
• Zinc 65.409 ± 0.004 -> 65.38 ± 0.02
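The "nearly 450 parts-per-million" figure for zinc is easy to check from the old and new values in the table above:

```python
# Relative change in zinc's standard atomic weight, in ppm,
# from the old and new values listed in the table.
old, new = 65.409, 65.38
change_ppm = (old - new) / old * 1e6
print(round(change_ppm))   # about 443 ppm
```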

These changes were made by the Commission on Isotopic Abundances and Atomic Weights of the International Union of Pure and Applied Chemistry (IUPAC) at the 44th IUPAC General Assembly in August, 2007. Established in 1919, IUPAC is the self-proclaimed "world authority on chemical nomenclature, terminology, standardized methods for measurement, atomic weights and many other critically evaluated data." The commission further refined the ratio of two isotopes of argon, 40Ar/36Ar, from 296.03 ± 0.053 to 298.56 ± 0.031. This ratio is important for geochronological dating. A new table, "Standard Atomic Weights 2007," will be published in Pure and Applied Chemistry, the IUPAC journal [1-2].

1. Standard Atomic Weights Revised (IUPAC Press Release).
2. Standard Atomic Weights Revised, Chemistry International, vol. 29, no. 6 (November-December, 2007).

January 09, 2008

Absolute Zero

Yesterday evening I watched an interesting program on NOVA, the PBS weekly science series [1]. It was about thermodynamics, one of my supposed specialties. This is such an extensive field of physics, covering many subtopics and several centuries of historical context, that only the first hour aired last night. The second hour will be presented next week (as they used to say in the industry, "Same time, same channel"). If you missed it, there's considerable content available online, and it will likely repeat several times in the future. It's also available for purchase as a DVD. Of course, the program was titled "Absolute Zero," since few people would tune into a television program called "Thermodynamics."

One interesting thing about this exposition of thermodynamics was the opposing early theories of heat proposed by a chemist and a physicist. The chemist, Antoine Lavoisier, is not just any chemist; he's known as the father of modern chemistry. The physicist was Count Rumford, born Benjamin Thompson. It's interesting that Thompson was an American, and that he was forced to emigrate to Europe since he had supported the British in the American Revolution.

Lavoisier disproved the phlogiston theory, which supposed the existence of a colorless, odorless, massless element, called phlogiston, that was liberated from materials when they were burned, leaving behind their principal form. This was the calx, which is what we would today call an oxide. While eliminating phlogiston, he introduced a new substance, caloric, which he supposed to be an element as real as any of the others. Caloric was a type of fluid that flowed from hot to cold bodies. However, since Lavoisier also discovered the law of conservation of matter, it was necessary to believe that this element was without mass. As a chemist, Lavoisier was programmed to think in terms of things existing as physical objects. That these objects had no mass was, to him, more of a discovery than a problem with the theory.

Thompson, however, performed experiments that established the mechanical equivalent of heat. He observed that in the process of boring cannon, considerable quantities of heat were produced by the friction of the boring tool against the metal. The quantity of heat seemed inexhaustible, and the materials were unchanged. Both of these observations were contrary to the caloric theory of Lavoisier. Thompson concluded that heat was a principle, a property of matter, rather than being matter itself. Of course, even in those days, physicists needed to work towards practical ends, so Thompson is credited with inventing the double boiler, a drip coffeepot, and thermal underwear!

1. NOVA Web Site (PBS.org).

January 08, 2008

Indium
Indium metal is an interesting element, if just for its low melting point (156.6 °C, or 313.9 °F). The eutectic composition, In24.5Ga75.5, has a melting point of about 15 °C, and it is a liquid metal at room temperature. Indium as a pure metal is very soft, so it's often used as a gasket material in low temperature vacuum systems. Indium metal "wets" many surfaces, so it's used as a glass-to-metal sealant in some applications. The most important present use of indium, however, is in combination with tin and oxygen to form indium-tin oxide, called simply ITO, with a typical composition of 90% In2O3 and 10% SnO2 by weight. Indium-tin oxide is transparent and highly electrically conductive, an unusual combination of properties, so it's the premier electrode material for a multitude of display technologies. Electrically conductive materials, such as metals, are reflective, not transparent, since the current-carrying electrons also scatter photons. ITO is a highly conductive semiconductor, and this alternative type of conductivity allows its transparency.
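How much of ITO is actually indium? A quick sketch from the nominal 90 wt% In2O3 / 10 wt% SnO2 composition, using standard atomic weights (the numerical values below are my inputs, not from the article):

```python
# Indium mass fraction of nominal ITO (90 wt% In2O3, 10 wt% SnO2).
IN, O = 114.818, 15.999            # standard atomic weights

in2o3 = 2 * IN + 3 * O             # formula weight of In2O3
indium_fraction = 0.90 * (2 * IN / in2o3)
print(f"{indium_fraction:.1%} indium by weight")  # roughly 74%
```

So roughly three-quarters of ITO's mass is indium metal, which is why display demand drives the indium market discussed below.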

ITO is used as a transparent electrode material in liquid crystal displays (LCDs), flat panel and plasma displays, touch panels, organic light-emitting diodes, and solar cells. It's also used as an antistatic coating, and for electromagnetic shielding. Because of its optical properties, it's useful for dielectric mirrors in the infrared, so-called hot mirrors that allow visible light to be transmitted, but reflect potentially damaging heat rays. ITO is used also in niche applications, such as gas sensors, thin film strain gauges for use in gas turbine engines, and vertical-cavity surface-emitting laser (VCSEL) Bragg reflectors. Considerable quantities of indium metal are used in the lead-free solder composition, SnIn8.0Ag3.5Bi0.5 (melting point 197 °C).

Indium, which is usually obtained as a byproduct of zinc ore refining, has an abundance in the Earth's crust about the same as silver (0.1 ppm). As can be expected for such a rare material, increased demand results in an increased price, and demand has increased from 200 metric tons to 1,500 metric tons in the past decade. The price of indium jumped from about $90 per kilogram to about $1,000 per kilogram from 2002 to 2005. Since then, the price of indium has dropped to the $700 per kilogram level due to increased production [1-2]. This lower price still results in a $2.00 cost per LCD computer screen.
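The $2.00-per-screen figure implies a small indium budget per LCD; here's a back-of-envelope sketch at the quoted $700 per kilogram price (my arithmetic, from the article's numbers):

```python
# Implied indium content per LCD screen from the article's figures.
price_per_kg = 700.0     # dollars per kilogram of indium (quoted price)
cost_per_screen = 2.00   # dollars of indium per LCD screen

grams_per_screen = cost_per_screen / price_per_kg * 1000
print(round(grams_per_screen, 1))   # about 2.9 grams
```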

There are some alternatives to ITO, none of which are totally satisfactory at present [3]. Tin oxyfluoride is more difficult to etch; zinc aluminum oxide is not quite as transparent; and cadmium tin oxide is highly toxic. Calcium aluminum oxide is under investigation by the Japan Science and Technology Agency, and carbon nanotube coatings may also serve as an ITO replacement.

By the way, the Indium Corporation of America is based in my childhood hometown, Utica, New York.

1. Sean Barry, "Indium price falls as metal supply increases," American Metal Market (March 15, 2006).
2. Historical Indium Market (USGS, PDF File).
3. Andrea Chipman, "A commodity no more," Nature, Vol. 449 No. 7159 (13 September 2007), p. 131.

January 07, 2008

Solar Fuel

When solar energy is mentioned, nearly everyone thinks of photovoltaics, the direct method for production of electrical power from sunlight. Photovoltaic power is what you get when you ask an electrical engineer to harvest solar energy. Although it's nice for small scale power sources, such as roadside call boxes and remote weather stations, it doesn't quite make the grade for residential, commercial, or industrial power, and the electricity generated is suitable for vehicle power only if it is stored in a battery or supercapacitor. Photovoltaic power comes at a high capital cost, which generally translates to a "pay-back" period of several years. The semiconductor materials used in photovoltaics come at a high environmental price in the energy and vast quantities of water required for semiconductor processing, the toxic substances found in the photovoltaic cells, and the toxic chemicals used in manufacture. As for efficiency, Google has a major photovoltaic installation at its Mountain View, California, headquarters, but it supplies just thirty percent of its peak power requirements.

If you ask a chemical engineer to harvest solar energy, you get a much different answer. You also get a system that scales to large energy requirements and vehicle fuel requirements. Sandia National Laboratories is developing a process to make a synthetic fuel using solar energy to react hydrogen from water with carbon dioxide, in effect reversing the combustion reaction [1]. The hydrogen, synthesized in what's called a Counter Rotating Ring Receiver Reactor Recuperator, or CR5, will be used to produce hydrocarbon fuel that's compatible with existing internal combustion engines. Most importantly, the present fuel-distribution infrastructure would be unchanged. The Sandia team has named its project "Sunshine to Petrol" (S2P).

Richard Diver of Sandia is the inventor of the CR5 reactor, which is essentially a stack of counter-rotating rings of a catalyst that have one side exposed to solar heat. The CR5 splits water into hydrogen and oxygen. On the hot side, the oxide catalyst loses oxygen; at the cold side, water reacts with the catalyst to reform the oxide and create hydrogen. The catalyst used is a mixture of iron oxide with oxides of cobalt, magnesium, or nickel, although a more expensive ferrite-zirconia catalyst gives better results. The purpose of the counter-rotating rings is to conserve heat - the hotter rings heat the cooler rings as they pass each other. The CR5 is a type of Stirling Engine, the efficiency of which arises from heat recuperation, which is accomplished by the counter-rotating rings [2].

For some reason, Diver believes that the CR5 process is still 15-20 years away from commercialization. I'm betting on a shorter time frame. DARPA provided some funding for this project.

1. R. Colin Johnson, "Sandia's synthetic-fuel recipe: Mix CO2, water; heat with sun" (EETimes, December 19, 2007).
2. Sandia pioneers hydrogen economy (Sandia Newsletter).
3. Richard B. Diver, Jr., James W. Grossman, Michael Reshetnik, "Solar reflection panels," US Patent No. 7077532 (July 18, 2006).

January 04, 2008

Who Invented the Transistor?

On Sunday, December 16, 2007, I celebrated the sixtieth birthday of the transistor by handling some vacuum tubes and contemplating how the function of these multi-cubic-inch glass and metal objects is now duplicated in a structure much less than a cubic micrometer in size. I celebrated the occasion, since I believe that the transistor was invented by William Shockley, John Bardeen, and Walter Brattain at Bell Labs in 1947. After all, they were awarded the 1956 Nobel Prize in Physics for "their discovery of the transistor effect." However, a few might say it was Julius Lilienfeld.

Julius Edgar Lilienfeld was a German physicist who taught at the University of Leipzig. In the 1920s, he emigrated to the United States because of the growing hostile climate for Jews. His interests were manifold. They included the liquefaction of hydrogen for cryogenic research and as a buoyant gas for Zeppelins. Lilienfeld was also involved with the development of X-ray tubes, and he received several US patents for this work. After emigrating to the US, Lilienfeld developed electrolytic capacitors for AMRAD Corp., Medford, Massachusetts, and patented the first solid state electrolytic capacitor (US no. 1,906,691). He eventually purchased AMRAD's engineering laboratories, renaming them Ergon Laboratories.

Lilienfeld's interest in the device we now call the transistor may have begun with his 1926 invention of a solid state rectifier (US no. 1,611,653). During 1925-1928, Lilienfeld developed and patented several field effect and point junction transistors (US nos. 1,745,175 (January 28, 1930), 1,877,140 (September 13, 1932), and 1,900,018 (March 7, 1933)). Lilienfeld's field effect transistor was made from a thin film of copper sulfide with an aluminum foil gate. Copper sulfide is a semiconductor, but in this case it would be polycrystalline and the transistor would have low gain. Nevertheless, Harry E. Stockman wrote in a letter to Wireless World magazine that Lilienfeld had built and demonstrated a "...remarkable tubeless radio receiver on many occasions..." after 1923. [2]

Because of Lilienfeld's prior art, Bell Labs' field effect transistor patent and half the claims in its point junction transistor patent application were disallowed. Interestingly, Bell Labs had built working devices described by Lilienfeld's patents, but no mention of Lilienfeld appears in any research papers or the Bell Labs version of transistor history. Robert G. Arns, Professor of Physics at the University of Vermont, discovered [3] a 1948 patent deposition by John B. Johnson (of Johnson noise fame) stating that Bell Labs had built an aluminum oxide MOSFET of the type called out in Lilienfeld's patent, found useful power amplification, and published the result without reference to Lilienfeld. This conspiracy of silence has been discussed in an internet newsgroup.

Lilienfeld patented other devices, such as a loudspeaker, a spark plug, and several elastic fabrics and garments. Lilienfeld died at age 82 in 1963. The Julius Edgar Lilienfeld Prize of the American Physical Society was established in 1988 by a bequest from Lilienfeld's wife.

1. Brian Melody, "Julius Edgar Lilienfeld."
2. Transistor discussion page at Wikipedia.
3. R. G. Arns, "The other transistor: early history of the metal-oxide-semiconductor field-effect transistor," Engineering Science and Education Journal, vol. 7, No. 5 (October, 1998), pp. 233-240.

January 03, 2008

Morristown Corporate Technology Center

2007 marked the twentieth anniversary of the building that houses my office and laboratory, the Morristown Corporate Technology Center, also known as CTC. My own laboratory, on the first floor, began operation in September, 1987, shortly after a laser and electro-optics group was relocated from another New Jersey location to the third floor. The building got its grandiose name, Corporate Technology Center, because it was the corporate research facility of AlliedSignal before the merger with Honeywell. I tried to organize a twentieth anniversary celebration for CTC, but very few of the present occupants of the building were with the company at the building's dedication, so there wasn't any interest.

CTC was planned originally as a much larger building, having five floors instead of its present three, but a zoning regulation required that none of our site's buildings be visible from the road, so the design was scaled back. One other interesting fact about this building concerns the working conditions for scientists. Office space was planned originally to be cubicle style, à la Dilbert. However, our vice president at the time argued that the equivalent category of professional on site, the lawyers, had private offices; and unless the lawyers were to be moved into cubicles, his scientists would have offices. My office may be small, and the view from my window may be a stone bulwark, but I still have a door to close when I'm in a teleconference. For those of you working in cubicles, here's a gallery of cubicle makeovers.

There's a belief that the architecture of a building is important to how employees function, and this idea seems to apply to laboratory buildings more than any others. Each year, R&D Magazine has a Laboratory of the Year competition. The idea (with which I wholeheartedly concur) is that a happy scientist is a productive scientist. This idea has been taken to extremes at the Janelia Farm Research Campus of the Howard Hughes Medical Institute in Ashburn, Virginia, which includes a pub. There is also a games area, a living room, and a food service area. The purpose of these peripheral areas is to get scientists talking to each other and "thinking-out-of-the-box."

One of the more extreme examples of laboratory architecture is the Ray and Maria Stata Center, which houses the Computer Science and Artificial Intelligence Laboratory and the World Wide Web Consortium (W3C) at MIT. Architect Frank Gehry designed it with an underlying purpose of encouraging interdisciplinary interaction and collaboration. When it opened in 2004, Robert Campbell of the Boston Globe wrote [1] that the Stata Center is "a work of architecture that embodies serious thinking about how people live and work, and at the same time shouts the joy of invention... The Stata is always going to look unfinished. It also looks as if it's about to collapse. Columns tilt at scary angles. Walls teeter, swerve, and collide in random curves and angles... The Stata's appearance is a metaphor for the freedom, daring, and creativity of the research that's supposed to occur inside it."

Some describe the architecture as "part Dr. Seuss, part hurricane aftermath." [2]

On October 31, 2007, MIT filed suit against Gehry and the Stata Center's contractor, Skanska USA Building Inc., a New Jersey-based subsidiary of Skanska AB, for "design and construction failures" that resulted in leaks, cracks and drainage problems requiring constant repair. Skanska says that Gehry ignored its warnings about design flaws and rejected a formal request for design modifications. Gehry says that "value engineering," a politically-correct term for cost-cutting, is to blame. Says Gehry, "There are things that were left out of the design... The client chose not to put certain devices on the roofs, to save money." [3]

One occupant, at least, praises the building. Rodney Brooks, Professor of Robotics in the MIT Electrical Engineering & Computer Science Department, says that "It is a joy to work in this building, and I know that many of its occupants feel the same as I do about it. We asked Frank to give us a building that fostered communication, and he delivered." [4]

If chaos encourages creativity, my office must be a hotbed of invention!

1. Robert Campbell, "Dizzying heights - In Frank Gehry's remarkable new Stata Center at MIT, crazy angles have a serious purpose" (The Boston Globe, April 25, 2004)
2. Ray and Maria Stata Center, Cambridge, Mass. (Building Design and Construction, May 1, 2005)
3. Robin Pogrebin and Katie Zezima, "M.I.T. Sues Frank Gehry, Citing Flaws in Center He Designed" (New York Times, November 7, 2007).
4. Robert Campbell, "The state of Stata" (The Boston Globe, March 11, 2007).
5. Stata Center Home Page.

January 02, 2008

2007 Science and Technology in Review

Happy New Year! One common ritual for welcoming a new year is to review the past year. The New York Times has covered science and technology very well over the years, which is surprising, since it is often called the "Gray Lady" because of its conservative style. It's this same conservative style that makes it authoritative, and it's considered by many to be the US newspaper of record. Science has been the focus of the Times every Tuesday since 1978 [1]. On Wednesdays in times past, we denizens of the corporate research labs would get a query from the executive wing asking what we were doing about..., where the sentence would be finished by naming some topic in the Tuesday Times. The Times has always treated science as part of our culture, and we are in their debt for this perspective. Its science writers are first rate, and they dig down to the original sources, who are often scientists with an abstruse writing style.

For the seventh consecutive year, the science staff at the New York Times have listed their top science and technology stories of the past year. Here are the seventy stories on their not-so-short list [2].

Airborne Wind Turbines
Alzheimer's Telephone Screening
Ambiguity Promotes Liking
Appendix Rationale
Best Way to Deflect an Asteroid
Biodegradable Coffins
Biofuel Race
Braille Tattoo
Cardboard Bridge
'Cat Lady' Conundrum
Climate Conflicts
Community Urinalysis
Craigslist Vengeance
Criminal Recycling
Culinary Orientalism
Death of Checkers
Digital Search Parties
Edible Cocktail
Electric Hockey Skate
Faces Decide Elections
Fake Tilt-Shift Photography
Fish-Flavored Fish
God Effect
Handshake Sex Appeal
Height Tax
Honeycomb Vase
Hope Can Be Worse Than Hopelessness
Iconic-Performance-Network Player
Indie-Rock Musicals
Interstellar Ramadan
Jogging Politique
Knot Physics
Lap-Dance Science
Left-Hand-Turn Elimination
Lightning Farms
Lite-Brite Fashion
Marijuana Mansions
Mindful Exercise
Minimal Chair
Mob Jurisprudence
Murphy Balcony
Next Violin
Office-Chair Exercise (for Men and Women)
Pixelated Stained Glass
Pop Fecundity
Posthumous E-Mail
Postnuptial Agreements
Prison Poker
Quitting Can Be Good for You
Radiohead Payment Model
Right to Medical Self-Defense
Rock-Paper-Scissors Is Universal
Second-World Solidarity
Self-Righting Object
Smog-Eating Cement
Starch Made Us Human
Suing God
Telltale Food Wrapping
24/7 Alibi
Two-Birds-With-One-Stone Resistance
Unadapted Theatrical Adaptation
Wave Energy
Weapon-Proof School Gear
Wireless Energy
Youtube (Accidental) Audition
Zygotic Social Networking

1. John Noble Wilford, "The Birth of Science Times: A Surprise, but No Accident" (New York Times, November 11, 2003).
2. The 7th Annual Year in Ideas (New York Times, December 9, 2007).
3. Science News - New York Times