
March 31, 2008

Long Term Research

Many years ago, when I was still a part of a centralized corporate research and development organization, we interviewed an employment candidate who had been an intern at Bell Labs. At Bell, she was building a computer to model the evolution of the universe. No, she wasn't programming a computer to do a model; she was assembling computer hardware that would do the model many times faster than you could on a supercomputer. Why would a telephone company devote money and resources to a project so far removed from telephony? Because the knowledge gained by building such a purpose-built computer might be used in areas not then envisioned. A similar exercise by the Bell Labs scientists Arno Penzias and Robert Wilson led to their 1978 Nobel Prize in Physics for the experimental confirmation of the cosmic microwave background radiation from the Big Bang. Probable spin-offs from that project were advances in low-noise microwave circuitry. At the very least, the project helped shape Penzias into the Vice-President of Research at Bell Labs.

Another unusual research area pioneered by Bell Labs is the chess-playing computer. Ken Thompson, of Unix fame, and his colleagues built a chess-playing computer called Belle in the late 1970s and early 1980s. Like the cosmic computer mentioned above, Belle was a purpose-built machine, and it was the first computer to receive a master-level rating in chess. The technology behind Belle was considered so advanced that it was confiscated by the US State Department for violation of ITAR restrictions when it was being shipped to the USSR in 1982 for a computer chess tournament. Bell Labs was fined $600 for this supposed infraction of ITAR rules.

These Bell Labs projects are not just isolated examples. IBM has traditionally funded projects that seemed out of the ordinary for a corporation, including a long-standing research effort in chess-playing computers. The IBM Deep Blue chess-playing computer defeated world chess champion Garry Kasparov in 1997. Deep Blue was also a purpose-built machine, employing several hundred VLSI chips specially built for chess play. Although Deep Blue wasn't a general-purpose supercomputer, its processing speed of 11.38 gigaflops ranked it as the 259th most powerful computer in the world at the time of its match with Kasparov. The spin-offs from Deep Blue are in parallel computer architecture and software.

Of course, when we talk about spin-offs, the NASA space program comes to mind. Putting a man on the moon is an abstract notion without any immediate financial payback. However, integrated circuit development owes much to NASA investment, and the US space program gave us communications satellites and freeze-dried food [1]. In an earlier age, the Manhattan Project gave us nuclear energy and nuclear medicine. The physicist Michael Faraday (1791-1867) was once asked about the utility of his research. He replied, "What use is a baby?" In a similar encounter with the Prime Minister of England, Faraday said, "In ten years you will be taxing it!"

The centralized corporate research lab is now a thing of the past, and its demise has eliminated such long term research programs. In my experience, the idea that similar long term research is now the province of the research university is incorrect. University programs are not funded at a similar scale. Funding agencies have been funding more short-term research, and they have been looking for immediate spin-offs, especially in biotechnology. Many professors are looking towards applied research that can help them launch their own profitable companies, or generate patent royalties.

Penzias wrote a veiled obituary of the corporate research lab in his official Nobel Prize autobiography [2]:

"By the early 1990's, my life had settled into a familiar - if not entirely comfortable - routine. The joy and satisfaction that I found in helping to help shape exciting new ideas was offset by onerous management chores - most notably, my annual task of getting adequate financial support for my organization's budget requirements from our parent corporation. Beset by competitors who didn't have research labs of their own to pay for, AT&T's leaders nonetheless did their best to provide for its "crown jewel". As one year followed another, I did my best to repay that trust by helping to turn some of our scientific "gems" into profitable jewelry."

1. NASA Procedure SS-F-0224, Equipment, Formulation and Processing Procedures for Tofu with Hot Mustard Sauce
2. Arno Penzias Autobiography.

March 28, 2008

Arthur C. Clarke

Science fiction author Arthur C. Clarke (b. December 16, 1917) died on March 19, 2008, at the age of ninety [1-4]. Clarke was so successful in the science fiction genre because he had a thorough knowledge of the topics about which he wrote. He earned a degree in physics and mathematics at King's College, London, after World War II. During the war, Clarke served as a radar technician in the Royal Air Force, working on such things as radar defense systems and ground approach radar, and he attained the rank of Flight Lieutenant upon demobilization.

It was perhaps this experience with radar systems that gave Clarke the idea for the geosynchronous satellite, which he published in 1945 as a letter, and then an article, in the British magazine, Wireless World [5]. The principle of the geosynchronous satellite can be derived from Kepler's Third Law of Planetary Motion. A satellite's orbital period increases with altitude, and at a sufficient altitude (22,300 miles, or 35,786 km) the period is one day, so the satellite maintains the same position above a point on the Earth's equator. Clarke's original idea was for a manned satellite; after all, those vacuum tubes needed to be replaced when they burned out, at least in 1945 electronics.
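Clarke's 22,300-mile figure can be checked with a few lines of arithmetic. Here's a quick sketch, assuming the textbook values for Earth's gravitational parameter and the length of the sidereal day:

```python
import math

# Kepler's third law, T^2 = 4*pi^2 * r^3 / GM, solved for the orbital radius r.
GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
T = 86164.1           # sidereal day (one Earth rotation), seconds
R_EARTH = 6378.1e3    # Earth's equatorial radius, m

r = (GM * T**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
altitude_km = (r - R_EARTH) / 1000.0
print(f"geostationary altitude: {altitude_km:.0f} km")   # about 35,786 km
```

The result agrees with the altitude quoted above to within a few kilometers.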

Some prior claims to the geosynchronous satellite idea have been uncovered over the years, but Clarke hit the presses at the right time. The world saw the possibilities of rocketry in the German V-2, and wireless telecommunications were becoming ubiquitous. John R. Pierce of Bell Labs, who was scientific lead on the Telstar telecommunications satellite project, claimed that the idea was commonly held in that era. However, Clarke's article was cited as prior art in the denial of patents on the fundamental concept. Clarke never fretted about not patenting the idea, since any patent would have long since expired when the first such satellites were launched.

One Clarke short story I've always remembered is "The Nine Billion Names of God." This was published in 1953, long before ubiquitous computing, and it described how two computer scientists were hired to help some Buddhist monks in their quest to transcribe all the names of God. The monks believed that once this was done, human destiny would be fulfilled, and the world would end. Even today, nine billion is a large number, just out of range of a 32-bit unsigned integer, but these computer scientists had done their job well, the list was made, and "overhead, without any fuss, the stars were going out." [6]
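A quick sanity check of that integer claim, using Python (where integers are arbitrary precision, so the fixed-width limits must be spelled out explicitly):

```python
# Nine billion overflows a 32-bit unsigned integer, the common
# "unsigned long" of earlier eras, but fits easily in 64 bits.
nine_billion = 9_000_000_000
print(nine_billion > 2**32 - 1)   # True: exceeds 4,294,967,295
print(nine_billion < 2**64 - 1)   # True: well within the 64-bit range
```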

I've seen Clarke's two most popular works; namely, the movies "2001: A Space Odyssey" (1968, Stanley Kubrick, Director) and its sequel, "2010" (1984, Peter Hyams, Director). "2001: A Space Odyssey" is based on Clarke's short story, "The Sentinel," which he wrote in 1948 and then expanded into a novel with Stanley Kubrick. These movies, and much of Clarke's other work, deal with extraterrestrial civilizations, so it's interesting to get Clarke's perspective on life in the universe. Said Clarke about extraterrestrial civilizations, "Two possibilities exist: either we are alone in the universe or we are not. Both are equally terrifying." [3]

I have only one Clarke book on my bookshelf now. This is Rendezvous with Rama. Although it was first published many years ago, in 1973, I highly recommend it. Those who have read it will surely have read it three times.

1. Gerald Jonas, "Arthur C. Clarke, Author Who Saw Science Fiction Become Real, Dies at 90," New York Times (March 19, 2008).
2. Sir Arthur C. Clarke (The Times, March 19, 2008).
3. Dennis Overbye, "A Boy's Life, Guided by the Voice of Cosmic Wonder" (New York Times, March 25, 2008).
4. Arthur C. Clarke (The Economist, March 27, 2008).
5. Arthur C. Clarke, "Extra-Terrestrial Relays - Can Rocket Stations Give Worldwide Radio Coverage?" Wireless World (October, 1945), pp. 305-308.
6. Arthur Clarke, "The Nine Billion Names of God".

March 27, 2008

Women and Children First

Although the phrase "Women and children first" is usually associated with lifeboats, it also applies to aircraft boarding, at least for women traveling with children. The time delays in aircraft boarding have always been a nuisance, but the newer generation of ultra-capacity, superjumbo aircraft makes this more of a problem. The Airbus A380, the largest passenger airliner in the world, seats from 525 to 853 passengers, depending on the configuration. Efficient boarding is not just a matter of passenger convenience. A slow turn-around time for aircraft means lost airline revenue. This is especially true for short flights, for which the aircraft is used many times each day. One internet commentator noted that the Boeing 757-300, a super-stretch narrowbody aircraft, has the lowest cost per passenger mile of any aircraft, but it wasn't a big seller; the turn-around time involved in boarding its 45 rows of seats is too long. It's not surprising that some computer modeling studies have looked at the aircraft boarding problem.

Jason Steffen, a physicist at Fermilab (Batavia, Illinois), has done a recent study of aircraft boarding [1-2], and his results confirm those of earlier studies [3]. Steffen analyzed the boarding process, and he discovered that the time needed to stow carry-on articles is the dominant contribution to boarding time; the time taken in such processes as climbing over seated passengers is negligible. Armed with this model, Steffen did a Markov Chain Monte Carlo computer simulation of aircraft boarding. It will not surprise frequent fliers that the traditional pattern of boarding passengers in blocks from the rear to the front is far from optimal. Block boarding merely shifts the queue from the gate into the airplane. The alternative process of first boarding window seats, then the middle seats, then the aisle seats, reduces boarding time to less than half of the worst case, because it allows passengers to stow their carry-on items in parallel. The general principle derived from Steffen's model is that an optimal process will spread passengers throughout the length of the airplane to allow such parallel operations.
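Steffen's published study used a Markov Chain Monte Carlo search over boarding orders. The toy simulation below is my own sketch of just the mechanism he identified (stowing blocks the aisle, so orders that spread passengers along the cabin let stowing proceed in parallel); the rows-and-ticks model and its parameters are illustrative assumptions, not Steffen's code.

```python
import random

def boarding_time(order, stow_time=10):
    """Toy single-aisle boarding model. `order` lists each passenger's
    target row in boarding sequence. A passenger advances one row per
    tick if the aisle ahead is clear, then blocks the aisle for
    `stow_time` ticks while stowing a bag at their own row."""
    aisle = {}                      # aisle position -> [target_row, stow_left]
    queue = list(order)
    t = 0
    while queue or aisle:
        for pos in sorted(aisle, reverse=True):   # front-most passengers first
            row, stow = aisle[pos]
            if stow:                              # mid-stow: aisle is blocked
                aisle[pos][1] -= 1
                if aisle[pos][1] == 0:
                    del aisle[pos]                # seated; the aisle clears
            elif pos == row:
                aisle[pos][1] = stow_time         # reached own row; start stowing
            elif pos + 1 not in aisle:
                aisle[pos + 1] = aisle.pop(pos)   # step forward one row
        if queue and 0 not in aisle:
            aisle[0] = [queue.pop(0), 0]          # next passenger enters the aisle
        t += 1
    return t

random.seed(1)
rows, per_row = 30, 6
back_to_front = [r for r in range(rows - 1, -1, -1) for _ in range(per_row)]
free_for_all = random.sample(back_to_front, len(back_to_front))
print("back-to-front:", boarding_time(back_to_front), "ticks")
print("free-for-all: ", boarding_time(free_for_all), "ticks")
```

With these (assumed) parameters, the strict back-to-front order comes out far slower than random boarding, since it serializes nearly every stowing operation; that is qualitatively the result reported in the studies cited below.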

The ideal arrangement is to board passengers whose seats are separated by a particular number of rows, but this would be a problem for passengers wishing to board together; e.g., parents with children. It turns out that "free-for-all" boarding, allowing passengers to board in no set order, is not that bad. The randomness allows local optimization near some rows, and this speeds the boarding process. Of course, the best boarding plan of all is a strictly prescribed, seat-by-seat boarding order of the kind Steffen's optimization produces, but passengers are not disciplined enough for this approach.

1. Philip Ball, "Strict ordering slashes tarmac time" (Nature Online, February 27, 2008).
2. Jason H. Steffen, "Optimal boarding method for airline passengers" (ArXiv, February 6, 2008)
3. Eric Berger, "Chaos beats order, in airline cabins at least" (Sci Guy Science Blog, December 10, 2005).

March 26, 2008

You Can Get There From Here

Quite a few classic math problems are simple to state, but difficult to solve. One of the more famous of these is Fermat's Last Theorem, which I mentioned in a previous article (Every Number Has A Story, April 18, 2007). Fermat's Last Theorem simply states that for integers n>2, the equation a^n + b^n = c^n has no solutions in positive integers a, b, and c. The Princeton University mathematician Andrew Wiles, assisted by Richard Taylor, proved the theorem in 1994, more than 350 years after it was stated.

The statement of another classic math problem, the Four Color Conjecture, doesn't require an equation. This conjecture was first stated by Francis Guthrie, a student of the mathematician Augustus De Morgan, in 1852, although it was first published by the mathematician Arthur Cayley many years later. Guthrie noticed that only four colors were ever needed to make a map in which adjoining colored regions never had the same color, and he conjectured that four colors will suffice to color any planar map. In 1976, Kenneth Appel and Wolfgang Haken published a controversial proof of this conjecture. They used a computer to check nearly two thousand map configurations to which every conceivable planar map can be reduced, thereby demonstrating that the Four Color Conjecture was true. If not for computers, this problem would still be unsolved.

If you rummage through the mathematics crayon box, you'll find another "coloring" problem. This is the Road Coloring Problem, which goes back a few decades [1-2]. This problem is simply stated in a recent internet posting:

"You're in a town that's been planned very carefully. At any intersection, the possible roads away from that intersection are labeled with colors. I'll assume three colors, and that each intersection has exactly three roads leading away from it (the number of roads that lead into the intersection doesn't matter). Your friend tells you that his address is "Red-Blue-Blue". This means that, no matter which intersection you start from, by repeatedly following these directions, you will end up at his house."

The Israeli mathematician Avraham Trahtman has just proved this conjecture, elevating it to the Road Coloring Theorem [3-5]. Mathematical discovery is typically the province of young mathematicians, but Trahtman did this when he was 63 years old, which is no mean feat. The mathematician Joel Friedman of the University of British Columbia, who works in symbolic dynamics, the field of this problem, says that he and nearly everyone else in the field, along with some computer scientists, have tried to solve this problem [3]. Trahtman's approach was to restate the problem as the existence of a synchronizing word for a deterministic, complete, finite automaton. So, once again, computer science comes to the rescue of mathematics.
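Trahtman's restatement can be made concrete with a small example. The sketch below defines a hypothetical two-color, four-state deterministic complete automaton (my own toy example, not Trahtman's construction) and finds, by breadth-first search over sets of states, a synchronizing word: a sequence of colors that funnels every starting state to the same state, just like the "Red-Blue-Blue" address in the quotation above.

```python
from collections import deque

# A small two-letter, four-state deterministic complete automaton
# (a hypothetical example): 'a' cycles the states, 'b' sends state 3 to 0.
n = 4
delta = {
    'a': {i: (i + 1) % n for i in range(n)},
    'b': {i: (0 if i == 3 else i) for i in range(n)},
}

def apply_word(states, word):
    """Image of a set of states after following a word of colors."""
    for ch in word:
        states = {delta[ch][s] for s in states}
    return states

def shortest_sync_word(delta, n):
    """Breadth-first search over subsets of states for the shortest
    synchronizing word: one whose image of the full state set is a
    single state."""
    start = frozenset(range(n))
    seen = {start: ''}
    q = deque([start])
    while q:
        cur = q.popleft()
        if len(cur) == 1:
            return seen[cur]
        for ch in delta:
            nxt = frozenset(delta[ch][s] for s in cur)
            if nxt not in seen:
                seen[nxt] = seen[cur] + ch
                q.append(nxt)
    return None   # the automaton is not synchronizing

word = shortest_sync_word(delta, n)
print("synchronizing word:", word)
print("every state ends at:", apply_word(set(range(n)), word))
```

The road coloring question asks the converse: given a suitable directed graph, can its edges always be colored so that such a word exists? Trahtman's theorem answers yes.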

1. R.L. Adler, L.W. Goodwyn, and B. Weiss, "Equivalence of topological Markov shifts," Israel J. of Math., vol. 27(1977), pp. 49-63.
2. R.L. Adler and B. Weiss, "Similarity of automorphisms of the torus," Memoirs of the Amer. Math. Soc., vol. 98 (1970).
3. Aron Heller, "At 63, Israeli immigrant solves 38-year-old math riddle" (Associated Press - USA Today, March 20, 2008).
4. A. N. Trahtman, "The road coloring problem" (ArXiv preprint; to appear, Israel Journal of Mathematics).
5. A.N. Trahtman, "A Subquadratic Algorithm for Road Coloring" (ArXiv preprint).

March 25, 2008

Pay No Attention to That Man Behind the Curtain

Many years ago, I gave a small retirement present to our retiring department manager. It was a copy of Silicon Dreams: Information, Man, and Machine (1989) by Robert W. Lucky. It was a fitting gift, since our former manager is, like Lucky, both an electrical engineer and a Fellow of the IEEE. Lucky was a technology vice president at Bell Labs, and he was a part of Bellcore/Telcordia Technologies after the Bell System divestiture that ended the alleged US telecommunications monopoly of AT&T and effectively destroyed Bell Labs. Although he spent most of his career as a manager, Lucky has the heart of an engineer with a love for electronics technology. He writes a regular opinion column for IEEE Spectrum magazine.

Lucky's article, Math Blues [1], in the September, 2007, issue of IEEE Spectrum, concerns engineering mathematics. As an undergraduate student, I struggled through the transient solution of the harmonic oscillator with pencil and paper, as did most scientists and engineers of my generation, and certainly of Lucky's generation (Lucky's about ten years older than I am). Engineers are required to learn a lot of mathematics, most of which they never use in their specialty areas. Lucky writes that Maxwell's equations are inscribed at the entrance to the National Academy of Engineering, of which he is a member, but he wonders how often these are actually put to paper by today's engineers.

The demise of pencil and paper mathematics is a direct consequence of ubiquitous computing. Computers have become an essential part of the practice of technology. You're reading this on your computer screen, and you may have arrived here after fetching some data sheet off the internet. Unlike the engineers and scientists of yesteryear, when you calculate, you let the computer do your work for you. You take the results on blind faith, since you don't have the time to do them yourself; or, worse yet, you've forgotten how to do the math. Since technical papers abound with equations, there's a real disconnect between what's considered important in a field and what's actually done by practitioners in that field. I must confess that I, too, use a computer to calculate, but I always do an order-of-magnitude estimate with pencil and paper to check the computer result. I'm checking not just my desktop computer itself; I'm also checking the "men behind the curtain" who wrote the program and designed the hardware. Since I'm a programmer, too, I spend some time behind the curtain myself, and I know what hidden errors lurk in even the most established code.

Computer math errors are not just the programmers' errors; they may be hardwired into the computer chip. There's the famous floating-point division bug hard-coded into the original Intel Pentium chip. This bug was discovered in 1994 by Thomas Nicely, a mathematics professor who was doing research on prime numbers [2-3]. On average, one in every nine billion floating-point divisions gave an erroneous result; most famously, the flawed chips computed 4195835.0/3145727.0 as 1.333739068902037589 instead of the correct 1.333820449136241002. Intel's replacement of most of these flawed chips cost the company about $500 million.
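In the spirit of checking the machine, the notorious division is easy to try on a modern processor:

```python
# The classic FDIV test case. A flawed Pentium returned 1.333739068902...;
# the correct quotient is 1.333820449136241002.
x = 4195835.0 / 3145727.0
print(f"{x:.15f}")   # 1.333820449136241 on correct hardware

# Equivalent check often quoted at the time: this residual is essentially
# zero (within rounding) on correct hardware, but 256 on the flawed chips.
residual = 4195835.0 - (4195835.0 / 3145727.0) * 3145727.0
print(residual)
```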

The title of this article, "Pay No Attention to That Man Behind the Curtain," is a phrase spoken by the Wizard in The Wizard of Oz (1939, Victor Fleming, Director) [4] when he is discovered behind a curtain operating his wizard imposter. The book on which this movie was based figures prominently in the plot of the 1974 film Zardoz (John Boorman, Director) [5]. "Zardoz" is "Wizard of Oz" with some missing letters. The Levenshtein distance between "Wizard of Oz" and "Zardoz" is 7. However, in the movie, "of" is ignored, and the "Wi" is covered. Ignoring letter case, the Levenshtein distance between "zard oz" and "zardoz" is only one (the extra space or line break).
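For readers who want to verify those distances, here is the textbook dynamic-programming implementation of Levenshtein distance (my own illustration, not from Lucky's article):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions, and
    substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # delete ca
                           cur[j - 1] + 1,               # insert cb
                           prev[j - 1] + (ca != cb)))    # substitute ca -> cb
        prev = cur
    return prev[-1]

print(levenshtein("Wizard of Oz", "Zardoz"))   # 7
print(levenshtein("zard oz", "zardoz"))        # 1
```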

1. Robert W. Lucky, "Math Blues," IEEE Spectrum (September, 2007).
2. Ivars Peterson, "Pentium Bug Revisited," Science News (May 12, 1997).
3. Pentium FPDIV Bug (Wikipedia).
4. The Wizard of Oz (1939, Victor Fleming, Director) on the Internet Movie Database.
5. Zardoz (1974, John Boorman, Director) on the Internet Movie Database.

March 24, 2008

A Matter of Solubility

Last week, I attended the 2008 PRISM University-Industry Research Symposium on Materials for Energy, hosted by The Princeton Institute for the Science and Technology of Materials at Princeton University. One talk in particular caught my interest. Joan Brennecke of the University of Notre Dame Chemical Engineering Department reported on her research on the use of ionic liquids for carbon sequestration.

As the name indicates, ionic liquids are liquids composed of ions, not neutral molecules. Sodium chloride (NaCl) is an ionic liquid above its melting point of 801 °C, but the term ionic liquid is principally used to describe compounds that are liquid near room temperature, and these compounds are organic salts. The first ionic liquid to be discovered, in 1888, was the simple organic compound ethanolammonium nitrate, C2H5NH3NO3, which has a melting point very near room temperature [1]. Most of today's ionic liquids are not as simple, as the tongue-twister 1-butyl-3-methylimidazolium hexafluorophosphate will attest. This compound, called BMIM-PF6 for short, has been well studied, since it is commercially available. Ionic liquids have much to commend them. Since these compounds are ionic, the ions are strongly bound to each other, so the liquids are non-volatile. This is one reason why they are being considered as "green" solvents for industrial processes.

As it turns out, some ionic liquids have a very high affinity for carbon dioxide, absorbing many mole percent at a few atmospheres partial pressure. The goal of the Notre Dame research effort [2] is the development of liquids to capture carbon dioxide at about 0.2 bar; that is, the partial pressure of CO2 in flue gas. Their research on these liquids has been encouraging. Not only do ionic liquids solvate CO2, they can also solvate SO2.
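A rough feel for those solubilities comes from Henry's law, x = p/H, valid at low loading. Taking a room-temperature Henry's constant for CO2 in BMIM-PF6 of roughly 53 bar (an assumed, illustrative value; consult the Brennecke group's publications for measured data):

```python
# Henry's law estimate of dissolved CO2 mole fraction: x = p / H.
H_CO2 = 53.0                 # assumed Henry's constant, CO2 in BMIM-PF6, bar
for p in (0.2, 2.0, 10.0):   # CO2 partial pressure, bar
    x = p / H_CO2            # equilibrium mole fraction of dissolved CO2
    print(f"p = {p:5.1f} bar -> x = {x:.4f} ({100 * x:.1f} mol%)")
```

At a few bar the estimate is indeed "many mole percent," while at the 0.2 bar of flue gas the loading drops below half a mole percent, which is why better-performing liquids are the research target.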

Why is carbon sequestration important? Merely reducing our use of fossil fuels will not stop global warming, since the effect of carbon emissions is cumulative. If we intend just to maintain atmospheric CO2 at its present (high) level of about 385 ppm, we need to prevent additional accumulation. Additionally, we may discover a way to make this CO2 useful, perhaps by photochemical transformation into a fuel. Research on the photolysis of CO2 was also presented at the PRISM Conference, and I mentioned an approach to photolysis in a previous article (Solar Fuel, January 7, 2008).

1. S. Gabriel and J. Weiner (1888). "Ueber einige Abkömmlinge des Propylamins," Ber., vol. 21, no. 2 (1888), pp.2669-2679, as cited in P. Walden, Bull. Acad. Imper. Sci. (1914), p. 1800.
2. Brennecke Group Web Site.
3. Ionic liquids (Wikipedia)

March 17, 2008

Travel, Easter Holiday and Vacation


I'll be at Princeton University this week for a conference at the Princeton Institute for the Science and Technology of Materials. After that, I'll be taking an Easter holiday. My next article will appear on Monday, March 30, 2008.

Unlike Christmas, which falls on the same calendar date each year, the date of Easter Sunday is computed from a lunisolar calendar. Easter Sunday this year is March 23, one of the earliest Easters possible. The earliest Easter can happen is March 22; the latest possible date is April 25, and the most common is April 19. Because of the interplay of the lunar and solar cycles, the repeat period for the sequence of Easter dates is 5,700,000 years.
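The Easter computation is involved but entirely mechanical. The "Anonymous Gregorian" algorithm (often attributed to Meeus, Jones, and Butcher) packs the whole lunisolar reckoning into integer arithmetic:

```python
def easter(year):
    """Anonymous Gregorian (Meeus/Jones/Butcher) algorithm: returns
    (month, day) of Easter Sunday for a year in the Gregorian calendar."""
    a = year % 19                        # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30   # epact-like term: the moon's phase
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

print(easter(2008))   # (3, 23): March 23, as noted above
print(easter(2038))   # (4, 25): an example of the latest possible Easter
```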

March 14, 2008

Life, the Universe, and Everything

This year's Templeton Prize has been awarded to physicist and cosmologist Michael Heller, who is also a Roman Catholic priest. This prize, officially named the "Templeton Prize for Progress Toward Research or Discoveries about Spiritual Realities," has been awarded annually since 1972. The monetary value of the prize, $1.6 million this year, is purposely set to exceed that of the Nobel Prize. The prize is awarded for "... New concepts of divinity, new organizations, new and effective ways of communicating God's wisdom and infinite love, creation of new schools of thought, creation of new structures of understanding the relationship of the Creator to his ongoing creation of the universe, to the physical sciences, and the life sciences, and the human sciences, the releasing of new and vital impulses into old religious structures and forms." Of course, mixing science and religion is a good formula for creating controversy, so it's not surprising that this prize has its critics. Nonetheless, a prize of this scope is not easily ignored.

Heller, who has a Ph.D. in cosmology, teaches philosophy at the Pontifical Academy of Theology (Kraków, Poland), is an adjunct member of the Vatican Observatory, and was a visiting scientist at the University of Arizona. It's not surprising that a cosmologist would win such an award, since cosmology is concerned with the origin and nature of the Universe. Heller, whose scientific credentials are impeccable, has focused his career on the reconciliation of scientific fact with religious thought. Principally, he is outspoken against the "God of the gaps" idea that religion is only there to explain what science cannot. Scientifically, he has studied the unification of relativity and quantum mechanics. Heller has published nearly 200 scientific papers, and he has authored more than twenty books.

One of the best strategies for knowledge discovery is the restatement or inversion of a problem. In this vein, Heller's work has transformed doubts about the existence of God into doubts about material existence. Heller, like Eugene Wigner [4], is intrigued by "The unreasonable effectiveness of mathematics in the Natural Sciences." Says Heller, "If we ask about the cause of the universe, we should ask about the cause of mathematical laws. By doing so, we are back in the great blueprint of God's thinking about the universe, the question on ultimate causality: why is there something rather than nothing? When asking this question, we are not asking about a cause like all other causes. We are asking about the root of all possible causes. Science is but a collective effort of the human mind to read the mind of God from question marks out of which we and the world around us seem to be made. [1-2]"

Heller joins a distinguished cohort of Templeton Prize winners. Along with Mother Teresa of Calcutta (1973) and Aleksandr Solzhenitsyn (1983), the following scientists are represented [5]:

• 1989 - Carl Friedrich von Weizsäcker, physicist and philosopher
• 1995 - Paul Davies, theoretical physicist
• 2000 - Freeman Dyson, physicist
• 2004 - George F. R. Ellis, cosmologist and philosopher
• 2005 - Charles Townes, Nobel laureate and physicist
• 2006 - John D. Barrow, cosmologist and theoretical physicist

Heller says he will use the prize money to found a center for the study of science and theology at the Pontifical Academy of Theology where he teaches.

The number forty-two is The Answer to Life, the Universe, and Everything in The Hitchhiker's Guide to the Galaxy by Douglas Adams. It took 7-1/2 million years for the Deep Thought computer, built especially to answer this question, to get and check this answer.

1. Brenda Goodman, "Priest-Cosmologist Wins $1.6 Million Templeton Prize," New York Times, March 13, 2008.
2. Ruth Gledhill, "Professor wins prize for maths link to God," The Times (London), March 13, 2008.
3. Cosmologist priest wins Templeton prize (Physics Today, March 12, 2008).
4. Mark Colyvan, "The Miracle of Applied Mathematics," Synthese, vol. 127, no. 3 (June, 2001), pp. 265-78.
5. Templeton Prize Web Site
6. Michael Heller, "Evolution of Space-Time Structures,"Concepts of Physics, vol. III, no. 2 (2006), pp. 117 ff. (PDF File).
7. Michael Heller's Home Page (in Polish, and somewhat out of date).

March 13, 2008

Mathematical Art

In a previous article (The Art of the Periodic Table, February 29, 2008) I reported on some artistic interpretations of the chemical elements. The advent of computer graphics has allowed the emergence of art based on mathematics. The boundary between mathematical art, which is the visualization of mathematical objects, and computer art, which is the use of computers for artistic expression, is not well defined. A prime historical example of mathematical art is the Mandelbrot set, a fractal image named after the mathematician Benoît B. Mandelbrot. The Mandelbrot set sold a lot of computers in the early days of personal computing, and it was often used to benchmark the speed and graphical performance of PC brands.
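The set's popularity as a benchmark owes much to how little code it takes: iterate z -> z^2 + c from z = 0 and watch whether z escapes the disk of radius 2. A minimal membership test:

```python
def mandelbrot_iters(c, max_iter=100):
    """Iterate z -> z*z + c from z = 0. Returns the step at which |z|
    first exceeds 2 (the point escapes), or max_iter if it never does,
    in which case c is taken to be in the Mandelbrot set."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter

print(mandelbrot_iters(0j))        # 100: the origin never escapes
print(mandelbrot_iters(1 + 0j))    # 3: escapes quickly, not in the set
print(mandelbrot_iters(-1 + 0j))   # 100: settles into a cycle, in the set
```

Coloring each pixel by its escape count is what produces the famous images, and that per-pixel loop is exactly what stressed the processors and graphics hardware of early PCs.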

This year's joint meeting of the American Mathematical Society and the Mathematical Association of America (San Diego, CA, January 6-9, 2008) [1] had an exhibit of mathematical art [2-3]. This exhibit was dedicated to Magnus Wenninger, a Benedictine Monk who wrote an influential book on polyhedron models [4-5]. The exhibit organizers cited Wenninger as "a pioneer in the mathematical art community, whose models of polyhedra have inspired a new generation of mathematical artists."

All forty mathematical art works can be found at http://www.bridgesmathart.org/art-exhibits/jmm08/, but here are some that caught my eye.

Robert Bosch, "One Fish, Two Fish, Red Fish, Black Fish". Robert Bosch is a Professor of Mathematics at Oberlin College. He has his own website at http://www.dominoartwork.com. This work is a continuous line drawing based on the solution to a 1500-city Traveling Salesman Problem.

Chaim Goodman-Strauss, "Orchid (613121)". Chaim Goodman-Strauss is a Professor of Mathematics at the University of Arkansas. This work is based on an anisotropic space group related to crystallography.

Douglas McKenna, "Phiberspace". Douglas McKenna is a freelance artist and software developer who has his own website at http://www.mathemaesthetics.com/MathArtPrints.html. This work is a recursive subdivision of the plane governed by the square root of the golden ratio.

Kerry Mitchell, "Mom". Kerry Mitchell is an Institutional Research Analyst at Phoenix College. This is a rendition of a photograph of his mother using a space-filling curve of varied width.

Andrew Pike, "Sierpinski Carpet". Andrew Pike is a senior math major at Oberlin College. This work is a rendition of a photograph of the mathematician Waclaw Sierpinski, using the Sierpinski curve, a mathematical object that Sierpinski himself invented.

Reza Sarhangi and Robert Fathauer, "Buzjani's Rusty Compass Pentagon". Reza Sarhangi is a Professor of Mathematics at Towson University, and Robert Fathauer is proprietor of Tessellations Company. This work is based on the geometric construction of a pentagon by Abu'l-Wefa Buzjani (940-998).

Carlo H. Séquin, "Scherk-Tower". Carlo H. Séquin is Professor of Computer Science at the University of California, Berkeley. This is a bronze sculpture based in a complex fashion on Gaussian curvature.

Wenninger had his own entry, Magnus J. Wenninger, "3D Models of 4D Polytopes", a series of paper models of polyhedra.

1. Joint Meeting of the American Mathematical Society (114th Annual Meeting) and the Mathematical Association of America (91st Meeting), San Diego, CA, January 6-9, 2008.
2. Exhibition of Mathematical Art.
3. Julie J. Rehmeyer, "Math on Display: Visualizations of mathematics create remarkable artwork," (Science News Online, vol. 173, no. 7, February 16, 2008).
4. Magnus Wenninger, Polyhedron Models (Cambridge University Press, 1974), ISBN 0-521-09859-9.
5. Ivars Peterson, "Papercraft Polyhedra" (Science News Online, vol. 169, no. 16, April 22, 2006).

March 12, 2008

Blending (Part II)

In yesterday's article (Blending, March 11, 2008) I mentioned the problems encountered when trying to mix dissimilar powdered materials. One example of this is the Brazil Nut Effect, the tendency for large nuts to rise to the surface of a container of mixed nuts. The mixing problem for granular materials is so complex that adding energy to some mixtures, such as by a greater mixing speed, may actually demix the materials. Although past research has treated granular mixing as just another object of study, research is now underway to devise better mixing processes. Such work is important, since mixing is an essential industrial process for pharmaceuticals, chemicals, ceramics, polymers, and foodstuffs, all of which are produced in huge annual quantities.

Julio M. Ottino and Richard M. Lueptow of Northwestern University have summarized some of the current research in granular mixing in an article in a recent issue of Science [1]. Among the variables involved in the mixing process are the adhesive properties of the particles, including electrical charging, the interstitial fluid (usually air), and the size, shape, and density of the particles. Often the material of the mixing container is important, especially when electrical charging is important, as well as its shape and how it is moved (spun, shaken, etc.). Since it is difficult, undesirable, or costly to modify the particles, most research is directed towards the mixer and the mixing process.

Of course, the easiest way to do an experiment is often not with the actual physical objects, but by simulating the process with a computer model. A few general rules have emerged from such studies, the most important of which is that the mixing process should be slow enough to allow the smaller particles to flow through the larger particles. One approach to effective mixing which exploits this in a peculiar fashion is the zig-zag chute developed by engineers at the University of Pittsburgh [2]. This chute is a simple way to introduce the periodic flow inversion that aids mixing, since particles originally on the top of the pile end up on the bottom after each leg of the chute.

A zig-zag chute, although an effective mixer, is a large apparatus. Furthermore, several cycles through the chute might be required to obtain adequate mixing, so the zig-zag idea has been applied to the baffles in a rotary mixer. Baffling has always been a trial-and-error type of optimization, and the typical baffles in a rotary drum mixer are short struts attached to the outer wall of the cylinder. Information gained from the zig-zag chute prompted the Pittsburgh team to look at a single, central baffle spanning nearly 90% of the diameter of the cylinder. This type of mixer proved very efficient, which proves, once again, that a little money spent on basic research can solve some big production problems.

1. Julio M. Ottino and Richard M. Lueptow, "On Mixing and Demixing," Science, vol. 319, no. 5865 (February 15, 2008), pp. 912-913.
2. Deliang Shi, Adetola A. Abatan, Watson L. Vargas, and J. J. McCarthy, "Eliminating Segregation in Free-Surface Flows of Particles," Phys. Rev. Lett., vol. 99 (October 4, 2007), 148001.

March 11, 2008

Blending

From 1977 to about 1990 my main occupation was crystal growth. During that period I was in frequent communication with the technical staff at our Synthetic Crystal Products plant in Charlotte, North Carolina. One important product was YAG (yttrium aluminum garnet) laser crystals, which are essentially just a mixture of yttrium oxide and aluminum oxide in the proper proportions (and a pinch of neodymium). In order to maintain the proper lattice constant from top to bottom in the crystal cylinder that's produced, it's important to mix these two oxides to a very precise ratio. Since they produced a lot of crystals, they decided to blend a large quantity of these oxide powders, and then just scoop off whatever quantity was needed when they needed to make another crystal. This was easier than weighing out each batch separately, and it was believed to be less prone to operator error. They immediately had problems. These were traced to the fact that the ratio of the oxides was not the same from scoop to scoop in the blended master batch. The blending was imperfect.

Such blending operations are quite common in industry, and much effort has gone into processes that give a consistent blend. A common apparatus for blending is the double-cone "V" blender. This is essentially a "V" shaped container that's rotated so that the contents are first at the apex of the "V," and then divided into the two prongs. The process is, in effect, like shuffling cards. The deck (contents in the apex) is split into two halves (in the prongs), and then recombined (in the apex). A large number of shuffles (rotations) should ensure a random deck of cards (blended contents).
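The card-shuffling analogy can be made concrete with a toy simulation. The sketch below is my own illustration, not taken from the blending literature: it riffle-shuffles a "deck" of two particle types using the standard Gilbert-Shannon-Reeds model (cut the deck, then interleave randomly), and tracks how far the top half deviates from a 50:50 blend.

```python
import random

def riffle(deck, rng):
    """One riffle shuffle: cut the deck in half, then interleave the
    halves randomly -- analogous to one rotation of the blender splitting
    the charge into the two prongs and recombining it at the apex."""
    cut = len(deck) // 2
    left, right = deck[:cut], deck[cut:]
    out = []
    while left or right:
        # Drop the next card from a half with probability proportional
        # to that half's remaining size (the Gilbert-Shannon-Reeds model).
        if rng.random() < len(left) / (len(left) + len(right)):
            out.append(left.pop(0))
        else:
            out.append(right.pop(0))
    return out

def segregation(deck):
    """How far the top half deviates from a 50:50 blend (0 = well mixed)."""
    half = len(deck) // 2
    return abs(deck[:half].count(1) / half - 0.5)

rng = random.Random(7)
deck = [1] * 26 + [0] * 26   # start fully segregated: "fines" on top
start = segregation(deck)    # 0.5, the worst possible value
for _ in range(8):
    deck = riffle(deck, rng)
print(start, "->", segregation(deck))
```

A few shuffles drive the segregation measure toward zero for this ideal deck; the point of the articles discussed here is that real powders violate the model's hidden assumption that every "card" behaves the same.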

The problem here is that we assume that the cards are all the same. What if some cards are rougher than others and tend to stick together, or some cards are slightly larger? In the case of blending, the chemical powders may not be the same size, and some may be rougher than others. We aren't shuffling an ideal deck. An example of particle segregation in granular materials is the "Brazil Nut Effect," the tendency for large nuts to be at the surface of a container of mixed nuts, which I mentioned in a previous article (Granular Materials, June 12, 2007). This effect can be reproduced in any mixture of large and small particles shaken vertically, as nuts in a can are shaken during their extended transit to your home.

There has been a surge of interest in granular materials in the scientific community, and some of this has been directed to blending [1-2]. One research group from Rutgers University has looked at "V" blending in particular [1]. They found that segregation patterns, such as stripes and bands, can form with changes in the blender fill level or rotation rate of just one percent. In one case of extreme segregation, one mixture component was completely absent from half of the "V." An Australian team used magnetic resonance imaging (MRI) to track particles and measure flows [2].

1. Albert W. Alexander, Troy Shinbrot, and Fernando J. Muzzio, "Granular segregation in the double-cone blender: Transitions and mechanisms," Physics of Fluids, vol. 13, no. 3 (March, 2001), pp. 578-587.
2. Guy Metcalfe, Lachlan Graham, James Zhou, and Kurt Liffman, "Measurement of particle motions within tumbling granular flows," Chaos, vol. 9, no. 3 (September, 1999), pp. 581-593.
3. H. M. Jaeger, S. R. Nagel, and R. P. Behringer, "Physics of granular materials," Rev. Mod. Phys. 68, 1259-1280 (1996).

March 10, 2008

Ockham's Electric Shaver

When I was a child, I noticed that my father had a drawer full of electric shavers. Electric shavers were advertised as bringing shaving into the modern age, but at that time they just didn't work that well. My father would try one, only to throw it into that drawer when a better one hit the market. As for me, I've only used a regular razor (a so-called "safety razor"). My son has used only an electric shaver, so I guess they've finally been perfected. A famous technological razor is Ockham's razor (also known as Occam's razor). William of Ockham was a fourteenth century English Franciscan friar who stated his famous principle of ontological parsimony, "Entities should not be multiplied beyond necessity [1]." What Ockham was saying is that too complicated an explanation is likely the wrong explanation. This principle is followed by scientists to this day.

Engineers have a similar idea in their KISS principle, "Keep it Simple, Stupid," which is a blunt way of stating that designs should be simple. The antithesis of this is the Rube Goldberg machine, known as a Heath Robinson machine to the international crowd. However, lest we get carried away by stripping too much from our designs, there must be a limiting point. Albert Einstein supposedly said, "Everything should be made as simple as possible, but no simpler." Said the French writer, Antoine de Saint Exupéry, known for The Little Prince, "It seems that perfection is reached not when there is nothing left to add, but when there is nothing left to take away."

One problem in reduction of complexity is its measure. In some cases, it's just a number of objects. Thus, reduction in the "number of moving parts" was important when machines were composed mostly of moving parts prone to failure. For software, however, a definition of complexity is more elusive. Often the number of lines of source code is used as the measure of complexity in a computer program, but there's actually a better measure called "Cyclomatic Complexity."

Cyclomatic Complexity was defined by the mathematician Thomas McCabe as the number of linearly independent paths through a computer program [2]. A program with no decision points (e.g., no If...Then statements, no Do...While loops, etc.) would have a complexity of 1. If there is just a single If...Then statement, then there are two paths through the program, and it has a complexity of 2. There is a more formal definition of this idea that involves graph theory, but you get the idea. Not surprisingly, more complex programs have a greater number of errors. William T. Ward of Hewlett-Packard found a linear correlation between Cyclomatic Complexity and the number of errors [3]. A rule-of-thumb used by software engineers is that the Cyclomatic Complexity of a function should never exceed ten.
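To make the counting rule concrete, here's a rough cyclomatic-complexity counter for Python source, sketched with the standard ast module. It simply adds one for each decision point, which matches McCabe's definition for straightforward code, but it's a simplified illustration, not a substitute for a real metrics tool.

```python
import ast

def cyclomatic_complexity(source):
    """Approximate McCabe complexity: 1 plus one for every decision point.
    Each if/elif, loop, exception handler, conditional expression, and
    comprehension clause counts as one; each and/or adds one per operand."""
    complexity = 1
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.If, ast.For, ast.While, ast.IfExp,
                             ast.ExceptHandler, ast.comprehension)):
            complexity += 1
        elif isinstance(node, ast.BoolOp):
            complexity += len(node.values) - 1
    return complexity

straight_line = "x = 1\ny = x + 2\n"
one_branch = "if x > 0:\n    y = 1\nelse:\n    y = 2\n"
print(cyclomatic_complexity(straight_line))  # 1: a single path
print(cyclomatic_complexity(one_branch))     # 2: the If adds one path
```

A function with a long elif chain racks up one point per branch, which is how real-world functions climb past the rule-of-thumb limit of ten.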

A recent article by Jack Ganssle in Embedded [4] looked at the complexity of Linux. Since Linux is open source software, the source code is available for analysis. The Linux kernel (vers. 2.6.23) has about 160,000 functions, about a third of which have a complexity of only one! The average function complexity in Linux is 4.94, and 144,000 of the functions are below ten. However, one 1,800 line function has a complexity of 352, and it probably qualifies as an entry in the Obfuscated C Code Contest [5]. The complexity of Microsoft Vista can't be checked, since it's not open source software, but it would be interesting to know what it is.

1. entia non sunt multiplicanda praeter necessitatem.
2. Thomas J. McCabe, "A Complexity Measure," IEEE Transactions on Software Engineering, vol. SE-2, no. 4 (December, 1976), pp. 308-320.
3. William T. Ward, "Software Defect Prevention Using McCabe's Complexity Metric," Hewlett-Packard Journal, April 1989, pp. 64-68.
4. Jack Ganssle, "Taming software complexity: A simple equation can help you measure the complexity of your code," Embedded (March, 2008), pp. 49-52.
5. An example of an Obfuscated C Code Program.
6. RSM Software Metric Tool (M Squared Technologies).

March 07, 2008

Hydrogen Storage Materials

Sometimes, a little bit of an element added to another element can have a large beneficial effect on a property. A prime example of this is adding carbon to iron to make steel. A carbon addition of just a percent by weight changes soft iron into a high strength material. Adding a small percentage of another element to one candidate hydrogen storage material, sodium alanate (NaAlH4), has a large effect on its hydrogen storage potential. Pure sodium alanate releases hydrogen when heated, producing sodium hydride (NaH) and metallic aluminum in the process. Since sodium and aluminum are light elements, the weight percentage of hydrogen contained in this material is large. The problem is that neither aluminum nor sodium hydride absorbs hydrogen well, so you can get hydrogen out, but you can't put it back in. About ten years ago, it was found that the addition of just 2-4% titanium to sodium alanate allows a reversible large extraction of hydrogen from the compound. Scientists have been investigating why this is true and whether they can apply a similar principle to other hydrogen storage compounds.
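A quick back-of-the-envelope check shows why sodium alanate is attractive. From the atomic weights, the hydrogen weight fraction works out as follows (the decomposition to NaH and aluminum releases only three of the four hydrogens):

```python
# Hydrogen weight fraction of sodium alanate, NaAlH4, from atomic weights.
NA, AL, H = 22.990, 26.982, 1.008     # g/mol
molar_mass = NA + AL + 4 * H          # about 54.0 g/mol

total_h = 4 * H / molar_mass          # all four hydrogens: ~7.5 wt%
# Decomposition to NaH + Al releases three of the four hydrogens:
#   NaAlH4 -> NaH + Al + 3/2 H2   (via the intermediate Na3AlH6)
releasable_h = 3 * H / molar_mass     # ~5.6 wt%

print(f"total: {total_h:.1%}, releasable: {releasable_h:.1%}")
```

Roughly 7.5 percent hydrogen by weight in the compound, of which about 5.6 percent is releasable - a high figure compared to heavy transition-metal hydrides.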

Early studies at Brookhaven National Laboratory [1-2] used high energy x-rays from the National Synchrotron Light Source to analyze the material crystallographically on a microscopic scale. They found that the titanium segregates at the surface as titanium aluminide, rather than completely mixing in the bulk material. This finding suggested a catalytic enhancement of hydrogen absorption by titanium, which explained the enhanced rate of hydrogen absorption in this material. Neutron inelastic scattering studies by scientists at the National Institute of Standards and Technology, the University of Maryland, and the University of Hawaii [3], showed that titanium atoms do substitute for sodium atoms in the crystal, and that titanium has a strong affinity for hydrogen that facilitates the breaking of multiple Al-H bonds in the solid.

Recently, materials scientists from UCLA did computer calculations of the processes and energies involved in hydrogen release from sodium alanate [4]. These so-called ab initio calculations show that diffusion of aluminum cations within the hydride is the rate-limiting step for hydrogen release, and this is enhanced by the titanium doping. The UCLA research was funded by the Department of Energy and the National Science Foundation.

I was involved with hydrogen storage materials early in my career (yes, the idea of hydrogen storage has been around for that long!). The materials I investigated were alloys of the rare-earth metals with transition metals. The idea that the metal surface was important to the hydrogen absorption/desorption process did not escape me. I actually put this idea to good use by developing a method to lock the hydrogen into these materials [5]. Exposing these hydrides to sulfur dioxide, a well-known catalytic poison, prevents the recombination of the hydrogen atoms in the alloy to form hydrogen gas at the surface. This effectively traps the hydrogen in the alloy, and it allowed my colleagues and me to do analysis of these hydrides without needing pressure cells. It also allowed us to examine the superconductivity of these materials [6].

1. Laura Mgrdichian, "Unlocking the Secrets of Titanium, a 'Key' that Assists Hydrogen Storage" (Brookhaven National Laboratory Press Release, July 23, 2004).
2. Santanu Chaudhuri, Jason Graetz, Alex Ignatov, James J. Reilly, and James T. Muckerman, "Understanding the Role of Ti in Reversible Hydrogen Storage as Sodium Alanate: A Combined Experimental and Density Functional Theoretical Approach," J. Am. Chem. Soc., vol. 128, no. 35 (August 10, 2006), pp. 11404-11415.
3. Jorge Íñiguez, T. Yildirim, T. J. Udovic, M. Sulic, and C. M. Jensen, "Structure and hydrogen dynamics of pure and Ti-doped sodium alanate," Physical Review B, vol. 70 (August 15, 2004).
4. Wileen Wong Kromhout, "UCLA solution to chemical mystery could yield more efficient hydrogen cars" (UCLA Press Release, February 27, 2008).
5. D.M. Gualtieri, K.S.V.L. Narasimhan, and T. Takeshita, "Control of the Hydrogen Absorption and Desorption of Rare Earth Intermetallic Compounds," J. Appl. Phys., vol. 47, no. 8 (August, 1976), pp. 3432-3435.
6. P. Duffer, D.M. Gualtieri, and V.U.S. Rao, "Pronounced Isotope Effect in the Superconductivity of HfV2 Containing Hydrogen (Deuterium)," Phys. Rev. Lett., vol. 37, no. 21 (November 22, 1976), pp. 1410-1413.

March 06, 2008

May the Force be with You

Atoms in solids are bound to each other with varying strength, as can be seen by the huge range in melting points and boiling points of the elements. It's known that surface atoms are less tightly bound to their neighbors, but a direct quantitative measurement of the binding forces that hold atoms in place on a surface has never been done - until now. A team of scientists from IBM and the University of Regensburg, Germany, has used an atomic force microscope to move individual atoms on a surface and measure the force required to do this [1-2]. Gerd Binnig of IBM was a co-inventor of the atomic force microscope; he shared the 1986 Nobel Prize in Physics for his earlier invention of the scanning tunneling microscope.

This may seem like blue sky research, but IBM considers these measurements to be a critical first step in understanding the molecular nanoassembly process. They make the analogy to the fundamental material mechanics studies that led to the ability to make bridges that don't fall down. Andreas Heinrich, a scientist at IBM Almaden Research Center says that he's working towards establishing an "IBM nanoconstruction company."

The IBM-Regensburg team adsorbed individual atoms and molecules onto the probe of an atomic force microscope, and they measured the vertical and lateral forces when moving the probe. There was considerable difference in bonding force between different atomic species. For example, moving a cobalt atom over a smooth platinum surface takes 210 piconewtons (210 × 10⁻¹² newton), but moving a cobalt atom over a copper surface takes only 17 piconewtons. The research team also found that the lateral component of force predominates in the movement of metal atoms over metal surfaces. Their research is described in a recent issue of Science [2].

"May The Force Be With You" is the theme phrase from the Star Wars films. It's described by Obi-Wan Kenobi as "an energy field created by all living things that surrounds and penetrates living beings and binds the galaxy together." Sounds a lot like duct tape.

1. Jenny Hunter, "IBM Scientists First To Measure Force Required To Move Individual Atoms" (IBM Press Release, February 21, 2008).
2. Markus Ternes, Christopher P. Lutz, Cyrus F. Hirjibehedin, Franz J. Giessibl, and Andreas J. Heinrich, "The Force Needed to Move an Atom on a Surface," Science, vol. 319, no. 5866 (February 22, 2008), pp. 1066-1069.

March 05, 2008

Research Specialization

In the distant past, we scientists were all Natural Philosophers; that is, men (yes, only men in those days) who were interested in all the workings of nature, making no distinction as to the type of knowledge that they sought. Towards the nineteenth century, specialization began to occur in the study of science. At that point we had the emergence of physics and chemistry as separate disciplines. Later still, we had the division of chemistry into organic, inorganic, physical, analytical, biochemical, and so forth. The same is true for physics and every other scientific and engineering discipline. Today, we are nearly all specialists.

The idea of specialization, formally called the "division of labor," goes back at least a few thousand years to Plato. In his Republic, Plato believed that a person's nature determined his profession, and once you had a handful of specialties, such as architect/builder, farmer, shoemaker and weaver, you had a workable society. Of course, things were simpler in 400 BC. This idea was amplified shortly thereafter by Xenophon, who wrote [1]

"Now it is impossible that a single man working at a dozen crafts can do them all well... Necessarily the man who spends all his time and trouble on the smallest task will do that task the best."

In more recent times, the idea that specialization engenders efficiency was echoed by David Hume and Adam Smith. Smith, who used the example of making pins to describe how specialization increases efficiency in his "Wealth of Nations" (1776), was also cognizant that some specialization, such as assembly line work, is a form of mental mutilation.

The days of the generalist have been over for quite some time, and yet generalists persist. Although I have my own supposed specialty, Materials Science (more specifically, Solid State Thermodynamics), it's rare that I practice this more than a small percentage of the time. There must be a reason why a few generalists have been allowed to flourish; well, at least to survive.

A recent paper by scientists in the Department of Evolution, Ecology and Organismal Biology, Ohio State University, published in the Journal of Theoretical Biology [2] uses computer modeling to study the utility of specialists and generalists. Their research object was the sea anemone, a marine organism that's an assemblage of genetically identical members who specialize in one of two roles, warrior or reproducer. The behavior of the sea anemone is complex and essentially unpredictable, somewhat like that of humans. The computer model showed that there are conditions important to the survival of the animal in which having some generalists is helpful. This is most important for small groups of individuals.

The authors of the paper offer the example of a bakery making cookies and employing three people, two of whom bake the cookies, and the other sells the cookies [3]. If these three are all specialists, when the salesman is out sick, the operation shuts down, since no one would buy stale cookies. If one of the bakers is trained in sales, you lose some money in training, but if he takes over sales while the specialist salesman is ill, the operation will still make a little money.
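The bakery's trade-off is just an expected-value calculation. With some made-up numbers (the sick-day frequency, training overhead, and the backup's reduced sales below are all hypothetical, not from the paper), cross-training wins whenever the profit rescued on sick days exceeds the cost of training:

```python
# All numbers are hypothetical, chosen only to illustrate the trade-off.
p_sick = 0.10          # fraction of days the salesman is out
daily_profit = 100.0   # profit on a day with normal sales
training_cost = 5.0    # per-day cost of keeping one baker cross-trained
backup_rate = 0.6      # fraction of normal sales the backup baker manages

# All-specialist shop: a sick day means nothing gets sold.
specialists = (1 - p_sick) * daily_profit

# One cross-trained baker: pay for training, but sick days still earn something.
generalists = ((1 - p_sick) * daily_profit
               + p_sick * backup_rate * daily_profit
               - training_cost)

print(specialists, generalists)   # 90.0 vs 91.0: cross-training pays off here
```

On these numbers the cross-trained shop comes out slightly ahead; shrink p_sick or backup_rate and the all-specialist shop wins instead, which is the kind of condition-dependence the computer model explores.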

Excuse me while I walk to the vending machine for a package of cookies.

1. Xenophon, "Cyropaedia: The Education of Cyrus." This website is mistakenly blocked by Honeywell.
2. A. E. D'Orazio and T. A. Waite, "Incomplete division of labor: Error-prone multitaskers coexist with specialists," Journal of Theoretical Biology, vol. 250, no. 3 (February 7, 2008), pp. 449-460.
3. Emily Caldwell, "In Nature - And Maybe The Corner Office - Scientists Find That Generalists Can Thrive" (Ohio State University Press Release, January 28, 2008).

March 04, 2008

Magnetic Gold

There are very few magnetic elements. By "magnetic," I mean "ferromagnetic." This means they can carry a permanent magnetic field, which is the common definition of "magnetic." In actuality, all elements are magnetic; they're just magnetic in different ways, the most common of which is diamagnetic. The common ferromagnetic elements are iron, nickel and cobalt, but gadolinium is ferromagnetic slightly below room temperature. Ferromagnetism arises from the spin and orbital angular momentum of unpaired (not bonding) electrons, so there's the possibility of making a non-magnetic element magnetic by affecting the way its electrons are bonded. A team of scientists from Spain, Japan, and Australia has done just this, making magnetic gold [1]. They created magnetic gold from gold nanoparticles, and their results are published in Nano Letters [2].

The trick here is the efficient modification of the electron structure of gold atoms. This is easy to do when there are just a few atoms in the particle, since most of the atoms are "surface" atoms. The research team created 2 nanometer clusters of gold atoms composed of just a few crystal unit cells, and nearly two-thirds of the atoms in these particles are surface atoms. The nanoparticles were surrounded by organic molecules of dodecanethiol that attach to the surface atoms. Dodecanethiol (C12H25SH, CAS No. 112-55-0, also known as dodecyl mercaptan and lauryl mercaptan) is a long-chain organic molecule with a sulfur-hydrogen group at the end. Compounds terminated in such an SH group are known as thiols; they're the sulfur analogs of alcohols, which are terminated in the characteristic OH group.

The traditional name of thiols is mercaptan, a word derived from the Latin mercurius captans, meaning "seizing mercury," an appellation derived from the tendency of these molecules to bond strongly to mercury. This tendency seems to apply to gold also, and this is an important key to the creation of the magnetic gold particles. The thiol interacts strongly with the surface atoms, "stealing" their electrons to cause significant charge redistribution and the subsequent magnetism. On the atomic level, the 5d electrons of gold appear to form hybrid orbitals with the 6s electrons, causing some of the d electron states to become unoccupied, resulting in magnetism. Not surprisingly, this magnetic transformation was accomplished also with silver and copper nanoparticles.

The research team claims that their magnetic particles are the smallest magnetic particles ever created. The magnetic elements tend to lose their ferromagnetism if the crystal size is made too small, so a few atoms of iron are not ferromagnetic; instead, they enter a "superparamagnetic" state. Of course, when talking about size, the creators of the magnetic gold particles ignore the hefty assemblage of organics attached to the metal clusters.

1. Prentsa Bulegoa, "Magnetic atoms of gold, silver and copper have been obtained" (University of the Basque Country Press Release, February 28, 2008).
2. Jose S. Garitaonandia, Maite Insausti, Eider Goikolea, Motohiro Suzuki, John D. Cashion, Naomi Kawamura, Hitoshi Ohsawa, Izaskun Gil de Muro, Kiyonori Suzuki, Fernando Plazaola, and Teofilo Rojo, "Chemically Induced Permanent Magnetism in Au, Ag, and Cu Nanoparticles: Localization of the Magnetism by Element Selective Techniques," Nano Letters, vol. 8, no. 2 (January 24, 2008), pp. 661-667.

March 03, 2008

Self-Cleaning Socks

There was a joke making the rounds when I was a young boy, possibly circulating since the time my father was a young boy, about a factory that sold indestructible socks. Business was really good for a while, but the factory went out of business since there were no reorders. Aside from the problem of holes, socks and other articles of clothing have another problem: after they are worn for a single day, they are set aside for washing. Sometimes there's the greater problem that washing should be done more than once a day. I'm a coffee drinker, and a considerable quantity of spilled liquid ends up on my desk, and sometimes on my clothing. Since we've put a man on the moon, how much harder can it be to create self-cleaning fabrics?

Although inert inorganics, such as raw dirt, can be removed only through conventional washing, most organic materials can be photocatalytically decomposed to volatile compounds that evaporate away. This process has been used for many years in self-cleaning window coatings, and the material of choice for this application is the anatase crystal form of titanium dioxide (TiO2), also called titania. Titania has the further beneficial property that it's non-toxic, and millions of tons of titania are used annually as a paint pigment. The catalytic property of titania-coated silica has even been used as an odor-control agent for cat litter. Titania-coated fabrics might have similarly useful properties, but the problem is developing a low temperature process to coat traditional fabric fibers, such as cotton, wool, and silk, with photoactive titania.

Chemists at Monash University, Victoria, Australia, have succeeded in coating wool fabric with a layer of five nanometer particles of titania [1-2]. What better venue for research on wool than Australia? They did this using a low temperature process they had developed previously to titania-coat cotton [3], along with a pretreatment of the wool to allow binding to the titania particles. Their coating process is a low-temperature sol-gel process that involves soaking in an alkoxide solution of titanium isopropoxide, followed by boiling in water for three hours. This produced surface crystals of anatase about 20 nanometers in diameter. Unlike cotton, which is nearly pure cellulose, wool, silk, and spider silk are composed of the protein keratin. Binding titania to keratin required a surface pretreatment with succinic anhydride (dihydro-2,5-furandione, C4H4O3), which introduced additional carboxyl groups by acylation. The Monash team has demonstrated that a wine stain on titania-coated wool is substantially removed after eight hours, and nearly invisible after twenty hours. I'm hoping it works as well for coffee stains. Their work is published in a recent issue of the journal Chemistry of Materials [2].

1. Roger Highfield, "Self-cleaning wool and silk developed using nanotechnology" (Telegraph (UK), February 11, 2008).
2. Walid A. Daoud, S. K. Leung, W. S. Tung, J. H. Xin, K. Cheuk, and K. Qi, "Self-Cleaning Keratins," Chem. Mater., DOI: 10.1021/cm702661k (ASAP Web Release, January 23, 2008).
3. Walid A Daoud and John H Xin, "Nucleation and Growth of Anatase Crystallites on Cotton Fabrics at Low Temperatures," Journal of the American Ceramic Society, vol. 87, no. 5 (May, 2004), pp. 953-955.