In previous work, Cornell biologist Thomas Seeley clarified how scout bees in a honeybee swarm perform "waggle dances" to prompt other scouts to inspect a promising site they have found. If the site meets their approval, these scouts, in turn, return to advertise it with their own dances. Meanwhile, other scouts advertise alternative sites, creating a popularity contest among scouts committed to different sites. When one group exceeds a threshold size, the corresponding site is chosen.
In the new study, Seeley, a professor of neurobiology and behavior, reports with five colleagues in the United States and the United Kingdom that scout bees also use inhibitory "stop signals" -- a short buzz delivered with a head butt to the dancer -- to suppress the waggle dances of scouts advertising competing sites. The strength of the inhibition produced by each group of scouts is proportional to the group's size. This inhibitory signaling helps ensure that only one of the sites is chosen. That is especially important for reaching a decision when two sites are equally good, Seeley said.
A swarm of bees labeled for individual identification. Image: Thomas Seeley
Previous research has shown that bees use stop signals to warn nest-mates about such dangers as attacks at a food source. However, this is the first study to show the use of stop signals in house-hunting decisions.
Such use of stop signals in decision making is "analogous to how the nervous system works in complex brains," said Seeley. "The brain has similar cross-inhibitory signaling between neurons in decision-making circuits."
For example, when a goldfish detects the pressure wave of an approaching predator, it receives stimuli from both sides of its body. But if the pressure stimulus is stronger on the right side, brain neurons reporting from the right side will suppress the neurons reporting from the left side, which provides more clarity for the fish and helps it pinpoint the predator's location.
The study was conducted at Shoals Marine Laboratory on Appledore Island, six miles off the New Hampshire/Maine coast, where there are no big trees and no natural nest sites. The researchers brought swarms of bees to the island and offered them two equally good nest boxes. Scout bees visiting the two boxes were marked with different colors, one color for each box.
Seeley and colleagues found that scouts that had committed to one box directed their stop signals mainly toward scouts promoting the other box.
"This analysis could not have been done without Shoals Marine Lab," said Seeley. "It's one of the very few places where there are no natural nest sites, so we can put out artificial nest sites and control and watch the whole decision-making process."
Co-authors Patrick Hogan and James Marshall of the University of Sheffield in the United Kingdom explored the implications of the bees' cross-inhibitory signaling by modeling their collective decision-making process. Their analysis showed that stop signaling helps bees to break deadlocks between two equally good sites and so avoid costly dithering.
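Hogan and Marshall's model itself is not reproduced here, but the flavor of such cross-inhibitory race dynamics can be sketched in a few lines of Python. Every number below (the recruitment, inhibition, and noise rates, and the quorum size) is an illustrative assumption, not a parameter from the study:

```python
import random

def swarm_decision(quorum=150.0, recruit=0.05, inhibit=0.03,
                   noise=1.0, steps=5000, seed=None):
    """Toy race model of nest-site choice between two equally good sites.

    Each group of committed scouts grows by recruitment (waggle dances)
    in proportion to its own size and shrinks under stop-signal
    inhibition in proportion to the rival group's size.
    """
    rng = random.Random(seed)
    a = b = 10.0  # scouts initially committed to sites A and B
    for t in range(steps):
        a, b = (max(0.0, a + recruit * a - inhibit * b + rng.gauss(0, noise)),
                max(0.0, b + recruit * b - inhibit * a + rng.gauss(0, noise)))
        if a >= quorum or b >= quorum:
            winner, loser = ("A", b) if a >= b else ("B", a)
            return winner, t, round(loser)  # rival group's size at decision
    return "undecided", steps, None

# With stop signals (inhibit > 0) the rival group has typically collapsed by
# the time quorum is reached: a clean consensus. With inhibit = 0 both groups
# stay large, and the swarm risks dithering or splitting between the sites.
print(swarm_decision(seed=1))
print(swarm_decision(inhibit=0.0, seed=1))
```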
Co-authors also included researchers from the University of California-Riverside and the University of Bristol, U.K.
The study was funded by the Cornell Agricultural Experiment Station, the University of California-Riverside and the U.K. Biotechnology and Biological Sciences Research Council.
The technique, developed by an international team of scientists in the United States and the United Kingdom, is the first of its kind to use high-pressure chemistry to make well-developed films and wires of hydrogenated amorphous silicon, a semiconductor, and it will help scientists make more-efficient and more-flexible optical fibers. The findings, by a team led by John Badding, a professor of chemistry at Penn State University, will be published in a future print edition of the Journal of the American Chemical Society.
Badding explained that hydrogenated amorphous silicon -- a noncrystalline form of silicon -- is ideal for applications such as solar cells. Hydrogenated amorphous silicon also would be useful for the light-guiding cores of optical fibers; however, depositing the silicon compound into an optical fiber -- which is thinner than the width of a human hair -- presents a challenge. "Traditionally, hydrogenated amorphous silicon is created using an expensive laboratory device known as a plasma reactor," Badding explained. "Such a reactor begins with a precursor called silane -- a silicon-hydrogen compound. Our goal was not only to find a simpler way to create hydrogenated amorphous silicon using silane, but also to use it in the development of an optical fiber."
A bed of amorphous hydrogenated silicon wires prepared in the pores of optical fibers. The wires have been chemically etched out of the optical fiber to reveal them. Scale bar: 100 µm. Inset: An array of amorphous hydrogenated silicon tubes deposited in an optical fiber. The optical fiber has been cleaved in half to reveal the array of tubes. The very thin glass walls of the fiber surrounding each tube are largely obscured. Scale bar: 5 µm. Credit: John Badding lab, Penn State University.
Because traditional, low-pressure chemistry techniques cannot be used for depositing hydrogenated amorphous silicon into a fiber, the team had to find another approach. "While the low-pressure plasma reactor technique works well enough for depositing hydrogenated amorphous silicon onto a surface to make solar cells, it does not allow the silane precursor molecules to be pushed into the long, thin holes in an optical fiber," said Pier J. A. Sazio of the University of Southampton in the United Kingdom and one of the team's leaders. "The trick was to develop a high-pressure technique that could force the molecules of silane all the way down into the fiber and then also convert them to amorphous hydrogenated silicon. The high-pressure chemistry technique is unique in allowing the silane to decompose into the useful hydrogenated form of amorphous silicon, rather than the much less-useful non-hydrogenated form that otherwise would form without a plasma reactor. Using pressure in this way is very practical because the optical fibers are so small."
Optical fibers with a non-crystalline form of silicon have many applications. For example, such fibers could be used in telecommunications devices, or even to change laser light into different infrared wavelengths. Infrared light could be used to improve surgical techniques, military countermeasure devices, or chemical-sensing tools, such as those that detect pollutants or environmental toxins. The team members also hope that their research will be used to improve existing solar-cell technology. "What's most exciting about our research is that, for the first time, optical fibers with hydrogenated amorphous silicon are possible; however, our technique also reduces certain production costs, so there's no reason it could not help in the manufacture of less-expensive solar cells, as well," Badding said.
Scientists are reporting development of a new cotton fabric that cleans itself of stains and bacteria when exposed to ordinary sunlight. Their report appears in ACS Applied Materials & Interfaces.
Mingce Long and Deyong Wu say their fabric uses a coating made from a compound of titanium dioxide, the white material used in everything from white paint to foods to sunscreen lotions. Titanium dioxide breaks down dirt and kills microbes when exposed to some types of light. It already has found uses in self-cleaning windows, kitchen and bathroom tiles, odor-free socks and other products.
Self-cleaning cotton fabrics have been made in the past, the authors note, but they self-clean thoroughly only when exposed to ultraviolet rays. So they set out to develop a new cotton fabric that cleans itself when exposed to ordinary sunlight.
Their report describes cotton fabric coated with nanoparticles made from a compound of titanium dioxide and nitrogen. They show that fabric coated with the material removes an orange dye stain when exposed to sunlight. Further dispersing nanoparticles composed of silver and iodine accelerates the discoloration process. The coating remains intact after washing and drying.
The visible-light-induced self-cleaning property of cotton was realized by simultaneously coating the fabric with an N–TiO2 film and loading it with AgI particles. The physical properties were characterized by means of XRD, SEM, TEM, XPS, and DRS techniques. The visible-light photocatalytic activities of the materials were evaluated using the degradation of methyl orange. In comparison with TiO2–cotton, the dramatic enhancement in the visible-light photocatalytic performance of the AgI–N–TiO2–cotton could be attributed to the synergistic effect of AgI and N–TiO2, including the generation of visible-light photocatalytic activity and effective electron–hole separation at the interfaces of the two semiconductors. The photocatalytic activity of the AgI–N–TiO2–cotton was fully maintained over repeated photodegradation cycles. In addition, according to the XRD patterns of the AgI–N–TiO2–cotton before and after reaction, AgI was stable in the composites under visible-light irradiation. Moreover, a possible mechanism for the excellent and stable photocatalytic activity of AgI–N–TiO2–cotton under visible-light irradiation is also proposed.
Many antibiotics are produced by molds similar to those found on a slice of bread or Roquefort cheese. Penicillium molds are best known for making penicillin, but also produce the not-so-famous mycophenolic acid, a billion-dollar drug used to ward off organ rejection.
However, mycophenolic acid also poisons most microbes, which has had scientists wondering how molds that produce mycophenolic acid can grow in its presence. This general problem is only understood in a few cases. Understanding how some microbes resist high concentrations of antibiotics is important to designing new drugs and deciding how and when to prescribe existing drugs.
The mold Penicillium brevicompactum produces chemicals such as mycophenolic acid that are toxic to other microbes. Credit: Kristian Fog Nielsen, The Technical University of Denmark
Xin Sun, a Ph.D. student in Biology Professor Liz Hedstrom’s laboratory, together with Bjarne Gram Hansen of the Technical University of Denmark, got down to the molecular level to unearth that answer for mycophenolic acid production. Their research was recently reported in The Journal of Biological Chemistry and the Biochemical Journal.
Every drug has a target — in this case a protein to which the drug binds, blocking its normal function. In the case of mycophenolic acid, the target is the protein IMPDH, an enzyme found in every organism. The faster an organism is growing, the more IMPDH it needs. When an infection occurs, immune cells need to grow, so they produce more IMPDH.
Unlike most microbes, Penicillium have two copies of IMPDH.
“What Xin Sun did was to show that this second IMPDH is in fact resistant to mycophenolic acid,” says Hedstrom. “What was puzzling is that you’d expect a change in the drug binding site, but here the drug binding site is identical in both sensitive and resistant targets. Instead, the underlying function of the second IMPDH has changed in clever and sophisticated ways so the drug is no longer effective.”
These findings also provide new insights into another scientific mystery: how antibiotic production evolved in the first place. The team hypothesizes that Penicillium molds gained the second IMPDH through gene duplication, which allowed them to make small amounts of mycophenolic acid. Over time, the second IMPDH evolved to become more resistant, allowing the mold to make more mycophenolic acid.
AMSTERDAM (Reuters) - The Netherlands moved on Thursday to ban the sale of hashish, a potent form of cannabis, eroding 40 years of liberal drug policy, over fears that the proceeds were flowing to organised crime gangs.
A parliamentary proposal to prohibit the sale of hashish resin in the Netherlands' famous coffee shops had the backing of both parties in the Liberal-Christian Democrat coalition. The sale of marijuana, the dried bud and leaves of the cannabis plant, will not be affected.
"Almost all of the hash that is sold in Dutch coffee shops is smuggled into the Netherlands by international criminal gangs from countries like Afghanistan, Morocco and Lebanon," said Ard van der Steur, a member of the ruling Liberal Party.
The ban on 'hash', derived from the potent THC crystals on marijuana buds, will likely be in force by the end of 2013, and possibly sooner if changes to the law are swiftly implemented, he said.
The Netherlands is one of the few countries in the world where marijuana and hash are sold openly, but moves to crack down on their sale have increased under the conservative government of Prime Minister Mark Rutte.
Another of those backing a ban, Christian Democrat legislator Coskun Coruz, said he hoped the ban would reduce consumption.
Studies show marijuana use in the Netherlands is roughly half that of the United States, where it is illegal.
Hash smokers in Amsterdam doubted a ban would cut use of the drug and said it would be hard to enforce.
"I know enough people to buy hash from if it is banned from coffee shops. I'm sure I'm not going to smoke less," 19-year-old Tommie van den Wouden said as he waited in line to order hash at one coffee shop in Amsterdam.
Ulrich, who works at a coffee shop, said about 40 percent of revenue came from hash sales but coffee shops would not be the only losers.
"If I can't sell hash any more, my customers will buy it on the street. This will also lead to declining tax income for the state," he said.
"I am surprised about these politicians saying they want to ban hash because of links with organised crime, because exactly the same goes for marijuana. The only difference is that most hash comes from abroad, while marijuana is grown locally."
As part of the crackdown, the Netherlands has introduced compulsory membership cards for coffee shops in the south of the country to deter drug tourists from Belgium, France and Germany. The rules came into effect in January but will not be enforced until May.
The government hopes to implement the measure nationwide, a move which would effectively herald the end of the Netherlands' position as a pot smokers' paradise.
While the sale of marijuana and hash is tolerated in the Netherlands, cultivating commercial supplies is illegal, making it complicated for coffee shop owners to acquire stock.
Source: HuffingtonPost.com - Reporting by Tjibbe Hoekstra, Editing by Anthony Deutsch and Ben Harding
These findings break new ground in the field of biomedicine because they identify an entirely new control mechanism that can be used to induce the formation of complex organs for transplantation or regenerative medicine applications, according to Michael Levin, Ph.D., professor of biology and director of the Center for Regenerative and Developmental Biology at Tufts University’s School of Arts and Sciences.
What’s especially interesting is that in research starting in 1937, Dr. Harold S. Burr, professor emeritus of anatomy at Yale University School of Medicine, discovered that abnormal growth (such as cancer) was preceded by the appearance of abnormal voltage gradients in an organ. In a related discovery, Tufts biologists were able to control the incidence of abnormal eyes by manipulating the voltage gradient in the embryo.
The researchers achieved the most surprising results when they manipulated membrane voltage of cells in the tadpole’s back and tail, well outside of where the eyes could normally form. “The hypothesis is that for every structure in the body there is a specific membrane voltage range that drives organogenesis,” said Tufts post-doctoral fellow Vaibhav P. Pai, Ph.D.
Pai noted, “These were cells in regions that were never thought to be able to form eyes. This suggests that cells from anywhere in the body can be driven to form an eye.” To do this, they changed the voltage gradient of cells in the tadpoles’ back and tail to match that of normal eye cells. The eye-specific gradient drove the cells in the back and tail — which would normally develop into other organs — to develop into eyes.
“These results reveal a new regulator of eye formation during development, and suggest novel approaches for the detection and repair of birth defects affecting the visual system,” he said. “Aside from the regenerative medicine applications of this new technique for eyes, this is a first step to cracking the bioelectric code.”
Signals Turn On Eye Genes
From the outset of their research, the Tufts biologists wanted to understand how cells use natural electrical signals to communicate in their task of creating and placing body organs. In recent research, Tufts biologist Dany S. Adams showed that bioelectrical signals are necessary for normal face formation in Xenopus (frog) embryos. In the current set of experiments, the Levin lab identified and marked hyperpolarized (more negatively charged) cell clusters located in the head region of the frog embryo.
They found that these cells expressed genes that are involved in building the eye called Eye Field Transcription Factors (EFTFs). Sectioning of the embryo through the developed eye and analyzing the eye regions under fluorescence microscopy showed that the hyperpolarized cells contributed to development of the lens and retina. The researchers hypothesized that these cells turned on genes that are necessary for building the eye.
Electric Properties of Cells Can Be Manipulated to Generate Specific Organs
Levin and his colleagues are pursuing further research, additionally targeting the brain, spinal cord, and limbs. The findings, he said, "will allow us to have much better control of tissue and organ pattern formation in general. We are developing new applications of molecular bioelectricity in limb regeneration, brain repair, and synthetic biology."
Changing the Signals Leads to Defects
Changing the bioelectric code, or depolarizing these cells, also affected normal eye formation. They injected the cells with mRNA encoding ion channels, which are a class of gating proteins embedded in the membranes of the cell. Like gates, each ion channel protein selectively allows a charged particle to pass in and out of the cell.
Using individual ion channels, the researchers changed the membrane potential of these cells. This affected expression of EFTF genes, causing abnormalities to occur: Tadpoles from these experiments were normal except that they had deformed or no eyes at all.
Further, the Tufts biologists were also able to show that they could control the incidence of abnormal eyes by manipulating the voltage gradient in the embryo. “Abnormalities were proportional to the extent of disruptive depolarization,” said Pai. “We developed techniques to raise or lower voltage potential to control gene expression.”
If a heat engine sputters on the micrometre scale, the cause is the thermal motion of the smallest particles, which interferes with its running. Researchers at the University of Stuttgart and the Stuttgart-based Max Planck Institute for Intelligent Systems have now observed this with a micrometre-scale heat engine. They have also determined that, all things considered, the machine does actually perform work. Although this work cannot be harnessed as yet, the Stuttgart experiment shows that such an engine does basically function, even on the microscale. This means that there is nothing, in principle, to prevent the construction of highly efficient, small heat engines.
A technology which works on a large scale can cause unexpected problems on a small one. And these can be of a fundamental nature. This is because different laws prevail in the micro- and the macroworld. Despite the different laws, some physical processes are surprisingly similar on both large and small scales. Clemens Bechinger, Professor at the University of Stuttgart and Fellow of the Max Planck Institute for Intelligent Systems, and his colleague Valentin Blickle have now observed one of these similarities.
A Stirling engine in the microworld: In a normal-sized engine, a gas expands and contracts at different temperatures and thus moves a piston in a cylinder. Physicists in Stuttgart have created this work cycle with a tiny plastic bead that they trapped in the focus of a laser field. Credit: Fritz Höffeler / Art For Science
"We've developed the world's smallest steam engine, or to be more precise the smallest Stirling engine, and found that the machine really does perform work," says Clemens Bechinger. "This was not necessarily to be expected, because the machine is so small that its motion is hindered by microscopic processes which are of no consequence in the macroworld." The disturbances cause the micromachine to run rough and, in a sense, sputter.
The laws of the microworld dictated that the researchers were not able to construct the tiny engine according to the blueprint of a normal-sized one. In the heat engine invented almost 200 years ago by Robert Stirling, a gas-filled cylinder is periodically heated and cooled so that the gas expands and contracts. This makes a piston execute a motion with which it can drive a wheel, for example.
"We successfully decreased the size of the essential parts of a heat engine, such as the working gas and piston, to only a few micrometres and then assembled them to a machine," says Valentin Blickle. The working gas in the Stuttgart-based experiment thus no longer consists of countless molecules, but of only one individual plastic bead measuring a mere three micrometres (one micrometre corresponds to one thousandth of a millimetre) which floats in water. Since the colloid particle is around 10,000 times larger than an atom, researchers can observe its motion directly in a microscope.
The physicists replaced the piston, which moves periodically up and down in a cylinder, with a focused laser beam whose intensity is periodically varied. The optical forces of the laser limit the motion of the plastic particle to a greater and a lesser degree, like the compression and expansion of the gas in the cylinder of a large heat engine. The particle then does work on the optical laser field. So that the work contributions from compression and expansion do not cancel each other out, the two processes must take place at different temperatures. This is done by heating the system from the outside during the expansion process, just like the boiler of a steam engine. The researchers replaced the coal fire of an old-fashioned steam engine with a further laser beam that heats the water suddenly, but also lets it cool down as soon as it is switched off.
The fact that the Stuttgart machine runs rough is down to the water molecules which surround the plastic bead. The water molecules are in constant motion due to their temperature and continually collide with the microparticle. In these random collisions, the plastic particle constantly exchanges energy with its surroundings on the same order of magnitude as the energy the micromachine converts into work. "This effect means that the amount of energy gained varies greatly from cycle to cycle, and even brings the machine to a standstill in the extreme case," explains Valentin Blickle. Since macroscopic machines convert around 20 orders of magnitude more energy, the tiny collision energies of the smallest particles play no role in them.
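The scale mismatch described here can be made concrete with a toy simulation: an overdamped bead in a harmonic laser trap whose stiffness and bath temperature are cycled, with the stochastic work per cycle tallied from the stiffness changes. All parameter values in the Python sketch below are invented for illustration, not taken from the Stuttgart experiment; the point is only that the cycle-to-cycle spread of the work can dwarf its mean:

```python
import math, random

# Toy overdamped Langevin model of a colloidal Stirling engine: a bead in
# a harmonic laser trap whose stiffness k and bath temperature T are
# cycled. Work done on the bead as the trap stiffens is dW = 0.5 * x^2 * dk.
# All parameter values are illustrative assumptions.

kB = 1.38e-23                          # Boltzmann constant [J/K]
gamma = 6 * math.pi * 1e-3 * 1.5e-6    # Stokes drag, 3 um bead in water [kg/s]
dt = 1e-4                              # integration time step [s]
T_cold, T_hot = 295.0, 360.0           # bath temperatures [K]
k_min, k_max = 1e-7, 3e-7              # trap stiffness range [N/m]

def run_cycle(x, n=2000):
    """One Stirling cycle; returns (new position, work done on the bead)."""
    work = 0.0
    # two isothermal strokes; the sudden temperature jumps between them
    # stand in for the heating and cooling branches
    for k0, k1, T in ((k_min, k_max, T_cold), (k_max, k_min, T_hot)):
        dk = (k1 - k0) / n
        for i in range(n):
            k = k0 + dk * i
            work += 0.5 * x * x * dk                  # work from stiffness change
            noise = math.sqrt(2 * kB * T * dt / gamma) * random.gauss(0, 1)
            x += -(k / gamma) * x * dt + noise        # overdamped Langevin step
    return x, work

x, works = 0.0, []
for _ in range(500):
    x, w = run_cycle(x)
    works.append(w)

mean = sum(works) / len(works)
spread = (sum((w - mean) ** 2 for w in works) / len(works)) ** 0.5
print(f"mean work on bead per cycle: {mean:.2e} J (negative = work extracted)")
print(f"cycle-to-cycle std dev:      {spread:.2e} J")
```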
The physicists are all the more astonished that the machine converts a consistent amount of energy per cycle on average despite the strongly varying power, and even runs with the same efficiency as its macroscopic counterpart under full load. "Our experiments provide us with an initial insight into the energy balance of a heat engine operating in microscopic dimensions. Although our machine does not provide any useful work as yet, there are no thermodynamic obstacles, in principle, which prohibit this in small dimensions," says Clemens Bechinger. This is surely good news for the design of reliable, highly efficient micromachines.
Getting there, however, will entail much more than incremental progress. It will require adopting entirely new technology and surmounting a formidable roster of technological problems. One of the most daunting of those – identifying and characterizing the factors that cause contamination of key lithographic components – has begun to yield to investigators in PML's Sensor Science Division, who have made some surprising and counterintuitive discoveries of use to industry.
In general, feature size is proportional to the wavelength of the light aimed at masks and photoresists in the lithography process. Today's super-small features are typically made with "deep" ultraviolet light at 193 nm. "But now we're trying to make a dramatic shift by dropping more than an order of magnitude, down to extreme ultraviolet (EUV) at 13.5 nm," says physicist Shannon Hill of the Ultraviolet Radiation Group. "That's going to be a big change."
This chamber and attached apparatus are used to introduce various gases to test contamination build-up on multi-layer surfaces.
In fact, it complicates nearly every aspect of lithography. It will necessitate new kinds of plasmas to generate around 100 watts of EUV photons. It demands a high-quality vacuum for the entire photon pathway because EUV light is absorbed by air. And of course it requires the elimination of chemical contaminants on the Bragg-reflector focusing mirrors and elsewhere in the system – contaminants that result from outgassing of materials in the vacuum chamber.
As a rule, the focusing mirrors are expected to last five years and decrease in reflectivity no more than 1 percent in that period. Innovative in-situ cleaning techniques have made that longevity possible for the present deep UV environments. But the EUV regime raises new questions. "How can we gauge how long they're going to last or how often they will have to be cleaned?" says Ultraviolet Group leader Thomas Lucatorto. "Cleaning is done at the expense of productivity, so the industry needs some kind of accelerated testing."
Unfortunately, Hill adds, "You can't even test one of these mirrors until you know how everything outgasses. Ambient hydrocarbon molecules outgassing from all the components will adsorb on the mirror's surface, and then one of these high-energy photons comes along and, through various reactions, the hydrogen goes away and you're left with this amorphous, baked-on carbonaceous deposit."
But what, exactly, is its composition? How long does it take to form, and what conditions make it form faster or slower? To answer those questions, the researchers have been using 13.5 nm photons from the NIST synchrotron in a beam about 1 mm in diameter to irradiate a 12-by-18 mm target in multiple places.
"We built a chamber where we can take a sample, admit one of these contaminant gases at some controlled partial pressure, and then expose it to EUV and see how much carbon is deposited," Hill says. The chamber is kept at 10-10 torr before admission of contaminant gases, and the inside surface plated in gold. "Gold is very inert," Hill explains, "and we want to be able to pump the gases out of the chamber with no traces remaining."
Contamination forms on a clean multi-layer surface (top) when EUV photons react (middle) with gases, resulting in carbonaceous deposits (bottom). Photos: SEMATECH
In the course of building the chamber, "we learned some solid chemistry that was unfortunate," Lucatorto recalls. "These things are typically sealed with copper gaskets. Our stainless steel chamber was coated with gold, and we used copper gaskets. Well, it turns out that gold loves copper. It naturally forms a gold-copper alloy just by being in contact. So we could not take the flange off!"
"We made it even worse," Hill adds, "because we baked these chambers – heated them up to clean them off. So it was effectively welded. We had to get a crowbar and a hammer to get the edges apart. After that, we had our gaskets covered with silver."
The PML team uses two techniques to analyze the EUV-induced contamination – x-ray photoelectron spectroscopy (XPS), which reveals the atomic composition and some information on chemical state, and spectroscopic ellipsometry, which is very sensitive to variations in optical properties – integrated with data from surface scientists at Rutgers University.
"The great thing about spectroscopic ellipsometry," Hill says, "is that it can be done in air and it can map all the spots on the sample in 8 or 9 hours. But being NIST, we're concerned with measuring things accurately. And we've determined if you want to determine how much carbon is present, ellipsometry alone may not be the right way to go – it can give you some misleading answers. XPS is much slower. It takes around 4 hours just to do one spot. But the two techniques give complementary information, so we use both.
"There are several things we wanted to investigate, and one was the pressure scaling of the contamination rate – nanometers of carbon per unit time. Each spot was made in a very controlled way, at a known pressure and EUV dose. The key thing we started finding is that the rate does not scale linearly with pressure. It scales logarithmically. That's not at all what you'd expect. It's counterintuitive, and it has really important implications for the industry. You could spend millions of dollars designing a system in which you were able to lower the background partial pressure by, say, two orders of magnitude. You would think that you'd done a lot. But in fact, you would have only decreased your contamination rate by a factor of two – maybe."
In addition, PML collaborated with the research group at Rutgers that was headed by NIST alumnus Theodore Madey until his death in 2008. "They have a world-class surface-science lab that studies the fundamental physics of adsorption," Hill says. The Rutgers investigators found, contrary to simple models in which all the adsorption sites have the same binding energy, that in fact the measured adsorption energy changes with coverage. "That is," Hill explains, "as you put more and more molecules on, they are more and more weakly bound. That can qualitatively explain the logarithmic relation we found."
EUV lithography requires multiple mirrors (multi-layer Bragg reflectors) to position and focus the EUV beam.
"Shannon and Ted [Madey] were the first to fully explain this and present it to the surface-science community," Lucatorto says. Industry benefits because the work clearly shows manufacturers that they cannot evaluate a product's contamination potential by taking measurements at a single pressure or intensity.
In a parallel line of research, Hill, Lucatorto and the other members of the Ultraviolet Radiation Group – which includes Nadir Faradzhev, Charles Tarrio, and Steve Grantham – along with collaborator Lee Richter of the Surface and Interface Group, are studying the outgassing of different photoresists that may be used in EUV lithography.
The outgas characteristics have to be known in rigorous detail before a wafer and resist can be placed in an enormously expensive lithography apparatus. Using another station on the NIST synchrotron's Beam Line 1, they are exposing the photoresists to 13.5 nm light and measuring the outgassed substances both in the gas phase and as they are "baked" by EUV photons on a witness plate.
"There are commercially available ways to test resists using electrons as proxies for EUV light, under the assumption that the effects are relatively similar and scale in comparable ways," Hill says. "But right now, NIST is the only place available to any company to test these things using photons." So far, the throughput is around two a week.
"We'll get faster," Lucatorto says. "Companies would like us to do 10 or more a week. By comparison, for deep UV lithography – when contamination from outgassing was not as great a concern – resist manufacturers would test thousands of resists each month to refine lithographic quality."
But that’s putting new demands on chip designers. Because handhelds are battery powered, energy conservation is at a premium, and many routine tasks that would be handled by software in a PC are instead delegated to special-purpose processors that do just one thing very efficiently. At the same time, handhelds are now so versatile that not everything can be hardwired: Some functions have to be left to software.
A hardware designer creating a new device needs to decide early on which functions will be handled in hardware and which in software. Halfway through the design process, however, it may become clear that something allocated to hardware would run much better in software, or vice versa. At that point, the designer has two choices: Either incur the expense — including time delays — of revising the design midstream, or charge to market with a flawed device.
At the Association for Computing Machinery’s 17th International Conference on Architectural Support for Programming Languages and Operating Systems, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) will present a new system that enables hardware designers to specify, in a single programming language, all the functions they want a device to perform. They can thereafter designate which functions should run in hardware and which in software, and the system will automatically churn out the corresponding circuit descriptions and computer code. Revise the designations, and the circuits and code are revised as well. The system also determines how to connect the special-purpose hardware and the general-purpose processor that runs the software, and it alerts designers if they try to implement in hardware a function that will work only in software, or vice versa.
The new system is an extension of the chip-design language BlueSpec, whose theoretical foundations were laid in the 1990s and early 2000s by MIT computer scientist Arvind, the Charles W. and Jennifer C. Johnson Professor of Electrical Engineering and Computer Science, and his students. BlueSpec Inc., a company that Arvind co-founded in 2003, turned that theoretical work into working, commercial code.
As Arvind explains, in the early 1980s, an engineer designing a new chip would begin by drawing pictures of circuit layouts. “People said, ‘This is crazy,’” Arvind says. “‘Why can’t I write this description textually?’” And indeed, 1984 saw the first iteration of Verilog, a language that lets designers describe the components of a chip and automatically converts those descriptions into a circuit diagram.
BlueSpec, in turn, offers an even higher level of abstraction. Instead of describing circuitry, the designer specifies a set of rules that the chip must follow, and BlueSpec converts those specifications into Verilog code. For many designers, this turns out to be much more efficient than worrying about the low-level details of the circuit layout from the outset. Moreover, BlueSpec can often find shortcuts that a human engineer might overlook, using significantly fewer circuit components to implement a given set of rules, and it can guarantee that the resulting chip will actually do what it’s intended to do.
For the new paper, Arvind, his PhD student Myron King, and former graduate student Nirav Dave (now a computer scientist at SRI International) expanded the BlueSpec instruction set so that it can describe more elaborate operations that are possible only in software. They also introduced an annotation scheme, so the programmer can indicate which functions will be implemented in hardware and which in software, and they developed a new compiler that translates the functions allocated to hardware into Verilog and those allocated to software into C++ code.
Today, King says, “if I consider my algorithm just to be a bunch of modules that I’ve hooked together somehow, and I want to move one of these modules into hardware, I actually have to re-implement it. I have to write it again in a different language. What we’re trying to give people is a language where they can describe the algorithm once and then play around with how the algorithm is partitioned.”
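That workflow can be caricatured in ordinary Python. The sketch below is emphatically not BlueSpec syntax or the team's compiler; the decorator, the backends, and the module names are all hypothetical, and it only illustrates how a one-line annotation, rather than a rewrite, moves a module across the hardware/software boundary:

```python
def partition(target):
    """Mark a module for the 'hardware' or 'software' backend (hypothetical)."""
    def tag(fn):
        fn.target = target
        return fn
    return tag

@partition("hardware")           # flip this string to repartition the design
def fir_filter(samples):
    """A small filter kernel, the kind of module often pushed to hardware."""
    taps = [0.25, 0.5, 0.25]
    return [sum(t * samples[i - j] for j, t in enumerate(taps))
            for i in range(len(taps) - 1, len(samples))]

@partition("software")
def format_packet(filtered):
    """Bookkeeping logic that is usually more convenient in software."""
    return {"len": len(filtered), "payload": filtered}

def compile_design(modules):
    """Stand-in for the compiler: route each module by its annotation."""
    for m in modules:
        backend = "Verilog" if m.target == "hardware" else "C++"
        print(f"{m.__name__:>14} -> {backend} backend")

compile_design([fir_filter, format_packet])
```

Moving `fir_filter` into software is one edited annotation; the single source description of the algorithm itself is untouched.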
King acknowledges that BlueSpec’s semantics — describing an algorithm as a set of rules rather than as a sequence of instructions — “is a radical departure from the way that most people think about software.” And indeed, among chip designers, Verilog is still much more popular than BlueSpec. “But it’s precisely this way of thinking about computation that allows you to generate both hardware and software,” King says.
Rajesh Gupta, the Qualcomm Professor in Embedded Microsystems at the University of California at San Diego, who wasn’t involved in the research, agrees. “Oftentimes, you need a dramatic change, not for the sake of the change, but because the problem demands it,” Gupta says. But, he adds, “hardware design is hard to begin with, and if some group of very smart people at MIT — who are not exactly known for making things simple — comes up with what looks like a very sophisticated model, some people will say, ‘My chances of making a mistake here are so high that I better not risk it.’ And hardware designers tend to be a little bit more conservative, anyway. So that’s why the adoption faces challenges.”
Still, Gupta says, the ability to redraw the partition between hardware and software could be enticing enough to overcome hardware designers’ conservatism. If you’re designing hardware for portable devices, “you need to be more power efficient than you are today,” Gupta says. But, he says, a device that relies too heavily on software requires so many layers of interpretation between the code and the circuitry that “by the time it actually does anything useful, it has done many other things that are useless, which are infrastructural.” To design systems that avoid such unnecessary, energy-intensive work, “you need this integrated view of hardware and software,” he says.