But a first-of-its-kind analysis of newly available government data found just the opposite when it comes to infants covered by insurance.
Among the insured, infants in low-income families are better off under the nation's government-funded public health insurance than infants covered by private insurance, says study author Manan Roy, an economist at Southern Methodist University, Dallas. The finding emerged from an analysis weighted to account for the fact that less healthy infants are drawn into public health insurance from birth by its low cost.
The finding is surprising, says Roy, because the popular belief is that private health insurance always provides better coverage. Roy's analysis, however, found public health insurance is a better option — and not only for low-income infants.
"Public health insurance gets a lot of bad press," says Roy. "But for infants who are covered by health insurance, the government-funded insurance appears to be more efficient than private health insurance — and can actually provide better care at a lower cost."
Why?
"Private health insurance plans vary widely," Roy says. "Many don't include basic services. So infants on more affordable plans may not be covered for immunizations, prescription drugs, for vision or dental care, or even basic preventive care."
The U.S. doesn't have a system of universal health insurance. But the Patient Protection and Affordable Care Act signed into law by President Obama on March 23, 2010, requires all Americans to have health insurance. The act also expands government-paid free or low-cost Medicaid insurance to 133 percent of the federal poverty level.
"Given the study's surprising outcome, it's likely that the impact of national reforms to bring more children under public health insurance will substantially improve the health of infants who are in the worst health to begin with," Roy says. "It's likely to also help infants who aren't low-income."
Roy presented her study, "How Well Does the U.S. Government Provide Health Insurance?" at the 2011 Western Economic Association International Conference, San Diego. Roy is a Ph.D. student and an adjunct professor in SMU's Department of Economics.
Study weighted to account for less healthy infants covered under public health insurance
A large body of previous research has established that insured infants are healthier than uninsured infants. Roy's study appears to be the first of its kind to look only at insured infants to determine which kind of insurance has the most impact on infant health — private or public.
Roy found:
1 - Infants covered by public insurance are mostly from disadvantaged backgrounds.
2 - Those under Medicaid and its sister program — CHIP — come mostly from lower-income families. Their parents — more often black or Hispanic — are more likely to be unmarried, younger and less educated. Economists refer to this statistical phenomenon — when a group consists primarily of people with specific characteristics — as strong positive or negative selection. In the case of public health insurance, strong negative selection is at work because it draws people who are poor and disadvantaged.
3 - Infants on public health insurance are slightly less healthy than infants on private insurance. On average they had a lower five-minute Apgar score and shorter gestational age compared to privately insured infants. They were less likely to have a normal birth weight or an Apgar score in the normal range, and were less likely to be born near term.
4 - Infants covered by private health insurance are mostly from white or Asian families and are generally more advantaged. They are from higher-income families, with older parents who are usually married and more educated. Their mothers weigh less than those of infants on public insurance. This demonstrates strong positive selection of wealthier families into private health insurance.
Roy then compared the effect of public insurance on infant health in relation to private health insurance. To do that, she used an established statistical methodology that allows economists to factor negative or positive selection into the type of insurance. In comparing public vs. private insurance — allowing for strong negative selection into public health care — a different picture emerged. "The results showed that it's possible to attribute the entire detrimental effect of public health insurance to the negative selection that draws less healthy infants into public health insurance," Roy says.
Most strikingly, once a modest to significant amount of negative selection of infants into public health insurance is allowed for, Roy's findings suggest that among insured infants, private health insurance is detrimental to child health.
"The real surprise with these findings is that despite a less healthy population — due to the negative factors created by poverty — public health insurance is actually improving the health of these infants," Roy says.
Public health insurance provides more comprehensive benefits
The findings are less surprising upon deeper analysis.
A previous study by the nonpartisan Center on Budget and Policy Priorities sheds light on Roy's research. That group found that public health insurance provides more comprehensive benefits than private insurance. For example, all children on Medicaid and CHIP receive preventive and primary medical care, inpatient and outpatient care, pediatric vaccines, laboratory and X-ray services, prescription drugs, immunizations, and dental, vision and mental health care coverage. The Medical Expenditure Panel Survey collected by the U.S. Department of Health and Human Services found that on a per person basis, government-provided health insurance for children under 4 years old is cheaper on average compared to private health insurance plans. "Enrollees in private health insurance can choose from a wide variety of plans," Roy says. "Those who cut their costs by purchasing less coverage are reducing their access to quality care, including basic services like preventive care, prescription drugs, and vision and dental care."
Roy says she can only speculate why infants from advantaged and disadvantaged families differ in their health outcomes. It's possible, however, that infants from families that are better off have access to better nutrition, a healthier lifestyle and possibly safer, cleaner neighborhoods than those from poorer backgrounds.
"Poor families and their infants may be subsisting on cheap food, for example, which tends to be fatty and less nutritious," Roy says, "and that translates to worse health."
Study relied on new U.S. government data on thousands of infants
Roy's statistical analysis drew on data from more than 7,500 infants born in 2001. The data were the most recent available from the Early Childhood Longitudinal Study-Birth Cohort, released by the National Center for Education Statistics, U.S. Department of Education.
The Early Childhood Longitudinal Study follows children born in the United States from birth through the start of kindergarten. Children are from diverse socioeconomic and racial/ethnic backgrounds. Data were gathered from parents, teachers and providers of child care and early education.
Data collected cover children's health, care, education and cognitive, social, emotional and physical development over time. Included are standard infant health measures like length, weight, five-minute Apgar score, and gestational age (the number of weeks the child was in the womb), which is considered an indicator of birth weight.
Poor families living at or below 185 percent of the federal poverty level represented 49 percent of Roy's data set.
Demand for public health insurance has increased during the past decade, says Roy, while demand for private insurance has declined. Specifically, between 1999 and 2009 there was an increase in the overall proportion of children under 3 years of age who were insured. Of those, the proportion covered by private insurance declined. The proportion covered by public health insurance increased.
Other researchers have firmly established that infants who are covered by health insurance have timely access to quality care, Roy says. Expanding access could reduce, for example, the number of infants born with low birth weight, which is associated with chronic medical diseases like diabetes, hypertension and heart disease in adulthood. Low birth weight also has been linked to lower average scores on tests of intellectual and social development.
The United States has the highest infant mortality rate among developed nations due to low birth weight and is the only industrialized nation without universal health insurance. The U.S. Supreme Court has agreed to hear a legal challenge to the Obama administration's new law requiring everyone to have health insurance.
Researchers used a poverty measure which assesses a range of deprivations in health, education and living standards at the household level to uncover vast numbers of poor people in middle-income countries. They found that 1,189 million (72 per cent) of the world's poor live in middle-income countries as compared with 459 million living in low-income countries.
They also discovered that far greater numbers of poor people in middle-income countries are living in 'severe' poverty - 586 million as compared with 285 million in low-income countries. Severe poverty captures the very poorest of the poor - those whose poverty is most intense. Entire regions within middle-income countries also have poverty rates comparable to the world's poorest countries, the findings show.
The poverty measure which produced these findings - the Multidimensional Poverty Index or MPI - takes into account a range of deprivations in areas like education, malnutrition, child mortality, sanitation and services. By measuring directly which deprivations poor people experience together, the research team has produced a high-resolution picture of where the poor live. If people are deprived in one-third or more of the (weighted) indicators they are identified as 'MPI poor'. MPI poor people who are actually deprived in more than half the weighted indicators are identified as 'severely poor'.
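As a rough sketch of that cutoff rule, the snippet below scores a household against a set of weighted indicators and applies the one-third and one-half thresholds. The indicator names and weights are illustrative placeholders patterned on the MPI's health, education and living-standards structure, not the official specification.

```python
# Illustrative weights only, patterned on the MPI's structure (health and education
# indicators weighted more heavily than living-standards indicators).
WEIGHTS = {
    "child_mortality": 1 / 6, "malnutrition": 1 / 6,           # health
    "years_of_schooling": 1 / 6, "school_attendance": 1 / 6,   # education
    "cooking_fuel": 1 / 18, "sanitation": 1 / 18, "water": 1 / 18,
    "electricity": 1 / 18, "flooring": 1 / 18, "assets": 1 / 18,  # living standards
}

def classify(deprivations):
    """Return a household's MPI status from the set of indicators it is deprived in."""
    score = sum(w for name, w in WEIGHTS.items() if name in deprivations)
    if score > 0.5:
        return "severely poor"   # deprived in more than half the weighted indicators
    if score >= 1 / 3:
        return "MPI poor"        # deprived in one-third or more
    return "not MPI poor"

print(classify({"malnutrition", "years_of_schooling", "water", "electricity"}))            # MPI poor (~0.44)
print(classify({"child_mortality", "malnutrition", "years_of_schooling", "school_attendance"}))  # severely poor (~0.67)
print(classify({"water", "electricity"}))                                                  # not MPI poor (~0.11)
```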
The poverty measure was devised jointly by Oxford University's Oxford Poverty and Human Development Initiative (OPHI) and the UNDP's Human Development Report Office for the flagship Human Development Report. The MPI was featured in the 2010 and 2011 Human Development Reports as one of three experimental new indices complementing the Reports' annual Human Development Index. OPHI researchers have now further updated and expanded the MPI, including new analysis of regional disparities in MPI poverty within countries and changes to poverty over time. The OPHI researchers analysed the most recent publicly available household survey data for 109 countries, covering 93 per cent of people living in low- and middle-income countries.
OPHI Director, Dr. Sabina Alkire, said: 'If you apply our global poverty measure, you see that most of the world's poor do not live in low-income countries as you might suppose. We found that nearly three-quarters of the poor live in middle-income countries - along with far greater numbers of the poorest of the poor. These findings are startling. We knew from income data that poverty in middle income countries was high - but now we also see that "multidimensionally" poor people in middle-income countries are not just barely poor: there are many severely poor people among them too, people who have simply been bypassed as their nation's comparative wealth increased.'
Dr. José Manuel Roche, who oversaw the MPI calculations with Dr. Alkire in 2011, said: "We use household surveys to see what deprivations each person experiences and create an individual poverty profile. We then build out to examine poverty within states and provinces, countries and world regions. The MPI reveals some dramatic disparities in the rates and intensity of poverty within countries, usually hidden by national averages. Hopefully, these findings will help policy makers to focus on delivering some benefits of growth to the poorest."
Key findings about specific countries and regions
*Half of all MPI poor people live in South Asia and 29 per cent in Sub-Saharan Africa. South Asia is home to 827 million MPI poor people, compared with 473 million in Sub-Saharan Africa.
*Sub-Saharan Africa has the highest MPI poverty of any world region. However, the poorest 26 sub-national regions of South Asia (home to 519 million MPI poor people) have MPI poverty as high as, or higher than, Sub-Saharan Africa's 38 countries, which 473 million MPI poor people call home.
*Nigeria (a middle-income country) is Africa's largest oil producer, but its North East region has higher MPI poverty than the poorest region of Liberia, a low-income country still recovering from a prolonged civil war. The North East of Nigeria also has over five times more MPI poor people than the entire country of Liberia.
*Disparities within countries can be startlingly wide. Overall 41 per cent of people in the Republic of Congo are MPI poor, but in the Likouala region, 74 per cent of people are poor; whereas in Brazzaville, the capital region, 27 per cent of people are poor. In Kenya's regions, the percentage of MPI poor people ranges from 4 to 86 per cent; in Timor-Leste, from 29 to 86 per cent; and in Colombia from 1 to 15 per cent.
*Income classifications hide wide disparities in MPI poverty. In low-income countries, the percentage of people living in MPI poverty ranges from 5 per cent in Kyrgyzstan to 92 per cent in Niger. In lower middle-income countries, this varies from 1 per cent in Georgia to 77 per cent of people in Angola who are MPI poor; and in upper middle-income countries, from 0 per cent in Belarus to 40 per cent in Namibia.
Using updated data for 25 countries, OPHI researchers analysed a total of 109 countries in 2011, with a combined population of 5.3 billion, which represents 79 per cent of the world's population (using 2008 population figures). About 1.65 billion people in the countries covered - 31 per cent of their entire population - live in multidimensional poverty.
Patients need not live with the feeling of a foreign device embedded in a coronary artery, as those who have metallic stents, ordinary or drug-eluting, surgically implanted do.
The reason is a new-generation device that is absorbed by the body once it has done its intended job.
The device, the world’s first drug-eluting “bioresorbable vascular scaffold”, has so far been successfully tried on more than 500 patients with coronary artery disease (CAD) worldwide.
Developed by the global healthcare company Abbott, the device will be tried on a further 1,000 patients in about 100 centres in Europe, Asia Pacific, Canada and Latin America as part of the “ABSORB EXTEND” trial. Four centres in Canada itself are taking part in the clinical testing.
The device is made of polylactide (PLA), a proven biocompatible material that is commonly used in medical implants such as dissolvable sutures. PLA is a biodegradable thermoplastic substance derived from lactic acid. It is used for making compost bags, plant pots, diapers and packaging.
The latest success story is reported from Canada’s Montreal Heart Institute (MHI) which had treated a woman in her sixties with CAD under the leadership of Dr. Jean-François Tanguay, interventional cardiologist and coordinator of the Coronary Unit, as part of the ABSORB EXTEND clinical trial.
“This successful intervention was a first in North America. This breakthrough could change the lives of patients. The woman, diagnosed with a severe lesion to the heart’s main artery, responded favorably to the procedure. She was discharged after 24 hours and now, one month later, has regained a normal way of life with no more chest pain,” the doctor said.
“Once the vessel can remain open without the extra support, the bioresorbable scaffold is designed to be slowly metabolized until the device dissolves after approximately two years, leaving patients with a treated vessel free of a permanent metallic implant.
“With no metal left behind, the vessel has the potential to return to a more natural state. After the device has been metabolized, the patient’s vessel is free to move, flex, pulsate and dilate similar to an untreated vessel,” the doctors claim.
“Treatments for coronary artery disease have progressed tremendously from the days of balloon angioplasties and metal stents leading to improved clinical outcome in our patients,” says Dr. Tanguay.
Also an associate professor of Medicine at the Université de Montréal, he adds, “By effectively opening up a blocked artery without leaving a permanent implant behind in the blood vessel, this bioresorbable vascular scaffold has the potential to revolutionize how we treat our patients.”
But that’s putting new demands on chip designers. Because handhelds are battery powered, energy conservation is at a premium, and many routine tasks that would be handled by software in a PC are instead delegated to special-purpose processors that do just one thing very efficiently. At the same time, handhelds are now so versatile that not everything can be hardwired: Some functions have to be left to software.
A hardware designer creating a new device needs to decide early on which functions will be handled in hardware and which in software. Halfway through the design process, however, it may become clear that something allocated to hardware would run much better in software, or vice versa. At that point, the designer has two choices: Either incur the expense — including time delays — of revising the design midstream, or charge to market with a flawed device.
At the Association for Computing Machinery’s 17th International Conference on Architectural Support for Programming Languages and Operating Systems, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) will present a new system that enables hardware designers to specify, in a single programming language, all the functions they want a device to perform. They can thereafter designate which functions should run in hardware and which in software, and the system will automatically churn out the corresponding circuit descriptions and computer code. Revise the designations, and the circuits and code are revised as well. The system also determines how to connect the special-purpose hardware and the general-purpose processor that runs the software, and it alerts designers if they try to implement in hardware a function that will work only in software, or vice versa.
The new system is an extension of the chip-design language BlueSpec, whose theoretical foundations were laid in the 1990s and early 2000s by MIT computer scientist Arvind, the Charles W. and Jennifer C. Johnson Professor of Electrical Engineering and Computer Science, and his students. BlueSpec Inc., a company that Arvind co-founded in 2003, turned that theoretical work into working, commercial code.
As Arvind explains, in the early 1980s, an engineer designing a new chip would begin by drawing pictures of circuit layouts. “People said, ‘This is crazy,’” Arvind says. “‘Why can’t I write this description textually?’” And indeed, 1984 saw the first iteration of Verilog, a language that lets designers describe the components of a chip and automatically converts those descriptions into a circuit diagram.
BlueSpec, in turn, offers an even higher level of abstraction. Instead of describing circuitry, the designer specifies a set of rules that the chip must follow, and BlueSpec converts those specifications into Verilog code. For many designers, this turns out to be much more efficient than worrying about the low-level details of the circuit layout from the outset. Moreover, BlueSpec can often find shortcuts that a human engineer might overlook, using significantly fewer circuit components to implement a given set of rules, and it can guarantee that the resulting chip will actually do what it’s intended to do.
For the new paper, Arvind, his PhD student Myron King, and former graduate student Nirav Dave (now a computer scientist at SRI International) expanded the BlueSpec instruction set so that it can describe more elaborate operations that are possible only in software. They also introduced an annotation scheme, so the programmer can indicate which functions will be implemented in hardware and which in software, and they developed a new compiler that translates the functions allocated to hardware into Verilog and those allocated to software into C++ code.
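The sketch below is a purely conceptual illustration of that workflow in ordinary Python; it is not BlueSpec or the MIT compiler, and the module names and file extensions are invented. It only mimics the idea the researchers describe: the algorithm is written once, an annotation records where each piece should run, and the tool regenerates the hardware-side and software-side artifacts when the annotation changes.

```python
# Hypothetical modules of a handheld's processing pipeline (names invented).
MODULES = ["fft_filter", "huffman_decode", "ui_event_loop"]

# The designer's partitioning annotation: flip an entry between "hw" and "sw"
# and regenerate, instead of rewriting the module in another language.
PARTITION = {"fft_filter": "hw", "huffman_decode": "hw", "ui_event_loop": "sw"}

def generate(modules, partition):
    """Pretend back end: route each module to a hardware or software output."""
    hw, sw = [], []
    for m in modules:
        target = partition.get(m)
        if target == "hw":
            hw.append(m + ".v")      # stand-in for generated Verilog
        elif target == "sw":
            sw.append(m + ".cpp")    # stand-in for generated C++
        else:
            raise ValueError(f"no hw/sw designation for module {m!r}")
    return hw, sw

hw_files, sw_files = generate(MODULES, PARTITION)
print("hardware side:", hw_files)   # ['fft_filter.v', 'huffman_decode.v']
print("software side:", sw_files)   # ['ui_event_loop.cpp']
```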
Today, King says, “if I consider my algorithm just to be a bunch of modules that I’ve hooked together somehow, and I want to move one of these modules into hardware, I actually have to re-implement it. I have to write it again in a different language. What we’re trying to give people is a language where they can describe the algorithm once and then play around with how the algorithm is partitioned.”
King acknowledges that BlueSpec’s semantics — describing an algorithm as a set of rules rather than as a sequence of instructions — “is a radical departure from the way that most people think about software.” And indeed, among chip designers, Verilog is still much more popular than BlueSpec. “But it’s precisely this way of thinking about computation that allows you to generate both hardware and software,” King says.
Rajesh Gupta, the Qualcomm Professor in Embedded Microsystems at the University of California at San Diego, who wasn’t involved in the research, agrees. “Oftentimes, you need a dramatic change, not for the sake of the change, but because the problem demands it,” Gupta says. But, he adds, “hardware design is hard to begin with, and if some group of very smart people at MIT — who are not exactly known for making things simple — comes up with what looks like a very sophisticated model, some people will say, ‘My chances of making a mistake here are so high that I better not risk it.’ And hardware designers tend to be a little bit more conservative, anyway. So that’s why the adoption faces challenges.”
Still, Gupta says, the ability to redraw the partition between hardware and software could be enticing enough to overcome hardware designers’ conservatism. If you’re designing hardware for portable devices, “you need to be more power efficient than you are today,” Gupta says. But, he says, a device that relies too heavily on software requires so many layers of interpretation between the code and the circuitry that “by the time it actually does anything useful, it has done many other things that are useless, which are infrastructural.” To design systems that avoid such unnecessary, energy-intensive work, “you need this integrated view of hardware and software,” he says.
Getting there, however, will entail much more than incremental progress. It will require adopting entirely new technology and surmounting a formidable roster of technological problems. One of the most daunting of those – identifying and characterizing the factors that cause contamination of key lithographic components – has begun to yield to investigators in PML's Sensor Science Division, who have made some surprising and counterintuitive discoveries of use to industry.
In general, feature size is proportional to the wavelength of the light aimed at masks and photoresists in the lithography process. Today's super-small features are typically made with "deep" ultraviolet light at 193 nm. "But now we're trying to make a dramatic shift by dropping more than an order of magnitude, down to extreme ultraviolet (EUV) at 13.5 nm," says physicist Shannon Hill of the Ultraviolet Radiation Group. "That's going to be a big change."
This chamber and attached apparatus (see diagram, bottom) are used to introduce various gases to test contamination build-up on multi-layer surfaces.
In fact, it complicates nearly every aspect of lithography. It will necessitate new kinds of plasmas to generate around 100 watts of EUV photons. It demands a high-quality vacuum for the entire photon pathway because EUV light is absorbed by air. And of course it requires the elimination of chemical contaminants on the Bragg-reflector focusing mirrors and elsewhere in the system – contaminants that result from outgassing of materials in the vacuum chamber.
As a rule, the focusing mirrors are expected to last five years and decrease in reflectivity no more than 1 percent in that period. Innovative in-situ cleaning techniques have made that longevity possible for the present deep UV environments. But the EUV regime raises new questions. "How can we gauge how long they're going to last or how often they will have to be cleaned?" says Ultraviolet Group leader Thomas Lucatorto. "Cleaning is done at the expense of productivity, so the industry needs some kind of accelerated testing."
Unfortunately, Hill adds, "You can't even test one of these mirrors until you know how everything outgasses. Ambient hydrocarbon molecules outgassing from all the components will adsorb on the mirror's surface, and then one of these high-energy photons comes along and, through various reactions, the hydrogen goes away and you're left with this amorphous, baked-on carbonaceous deposit."
But what, exactly, is its composition? How long does it take to form, and what conditions make it form faster or slower? To answer those questions, the researchers have been using 13.5 nm photons from the NIST synchrotron in a beam about 1 mm in diameter to irradiate a 12-by-18 mm target in multiple places.
"We built a chamber where we can take a sample, admit one of these contaminant gases at some controlled partial pressure, and then expose it to EUV and see how much carbon is deposited," Hill says. The chamber is kept at 10-10 torr before admission of contaminant gases, and the inside surface plated in gold. "Gold is very inert," Hill explains, "and we want to be able to pump the gases out of the chamber with no traces remaining."
Contamination forms on a clean multi-layer surface (top) when EUV photons react (middle) with gases, resulting in carbonaceous deposits (bottom). Photos: SEMATECH
In the course of building the chamber, "we learned some solid chemistry that was unfortunate," Lucatorto recalls. "These things are typically sealed with copper gaskets. Our stainless steel chamber was coated with gold, and we used copper gaskets. Well, it turns out that gold loves copper. It naturally forms a gold-copper alloy just by being in contact. So we could not take the flange off!"
"We made it even worse," Hill adds, "because we baked these chambers – heated them up to clean them off. So it was effectively welded. We had to get a crowbar and a hammer to get the edges apart. After that, we had our gaskets covered with silver."
The PML team uses two techniques to analyze the EUV-induced contamination – x-ray photoelectron spectroscopy (XPS), which reveals the atomic composition and some information on chemical state, and spectroscopic ellipsometry, which is very sensitive to variations in optical properties – integrated with data from surface scientists at Rutgers University.
"The great thing about spectroscopic ellipsometry," Hill says, "is that it can be done in air and it can map all the spots on the sample in 8 or 9 hours. But being NIST, we're concerned with measuring things accurately. And we've determined if you want to determine how much carbon is present, ellipsometry alone may not be the right way to go – it can give you some misleading answers. XPS is much slower. It takes around 4 hours just to do one spot. But the two techniques give complementary information, so we use both.
"There are several things we wanted to investigate, and one was the pressure scaling of the contamination rate – nanometers of carbon per unit time. Each spot was made in a very controlled way, at a known pressure and EUV dose. The key thing we started finding is that the rate does not scale linearly with pressure. It scales logarithmically. That's not at all what you'd expect. It's counterintuitive, and it has really important implications for the industry. You could spend millions of dollars designing a system in which you were able to lower the background partial pressure by, say, two orders of magnitude. You would think that you'd done a lot. But in fact, you would have only decreased your contamination rate by a factor of two – maybe."
In addition, PML collaborated with the research group at Rutgers that was headed by NIST alumnus Theodore Madey until his death in 2008. "They have a world-class surface-science lab that studies the fundamental physics of adsorption," Hill says. The Rutgers investigators found, contrary to simple models in which all the adsorption sites have the same binding energy, that in fact the measured adsorption energy changes with coverage. "That is," Hill explains, "as you put more and more molecules on, they are more and more weakly bound. That can qualitatively explain the logarithmic relation we found."
EUV lithography requires multiple mirrors (multi-layer Bragg reflectors) to position and focus the EUV beam.
"Shannon and Ted [Madey] were the first to fully explain this and present it to the surface-science community," Lucatorto says. Industry benefits because the work clearly shows manufacturers that they cannot evaluate a product's contamination potential by taking measurements at a single pressure or intensity.
In a parallel line of research, Hill, Lucatorto and the other members of the Ultraviolet Radiation Group – which includes Nadir Faradzhev, Charles Tarrio, and Steve Grantham – along with collaborator Lee Richter of the Surface and Interface Group, are studying the outgassing of different photoresists that may be used in EUV lithography.
The outgas characteristics have to be known in rigorous detail before a wafer and resist can be placed in an enormously expensive lithography apparatus. Using another station on the NIST synchrotron's Beam Line 1, they are exposing the photoresists to 13.5 nm light and measuring the outgassed substances both in the gas phase and as they are "baked" by EUV photons on a witness plate.
"There are commercially available ways to test resists using electrons as proxies for EUV light, under the assumption that the effects are relatively similar and scale in comparable ways," Hill says. "But right now, NIST is the only place available to any company to test these things using photons." So far, the throughput is around two a week.
"We'll get faster," Lucatorto says. "Companies would like us to do 10 or more a week. By comparison, for deep UV lithography – when contamination from outgassing was not as great a concern – resist manufacturers would test thousands of resists each month to refine lithographic quality."
A heat engine can sputter because the thermal motions of the smallest particles interfere with its running. Researchers at the University of Stuttgart and the Stuttgart-based Max Planck Institute for Intelligent Systems have now observed this with a heat engine on the micrometre scale. They have also determined that, all things considered, the machine really does perform work. Although this work cannot be put to use as yet, the experiment carried out by the researchers in Stuttgart shows that such an engine does basically work, even on the microscale. This means that there is nothing, in principle, to prevent the construction of highly efficient, small heat engines.
A technology which works on a large scale can cause unexpected problems on a small one. And these can be of a fundamental nature. This is because different laws prevail in the micro- and the macroworld. Despite the different laws, some physical processes are surprisingly similar on both large and small scales. Clemens Bechinger, Professor at the University of Stuttgart and Fellow of the Max Planck Institute for Intelligent Systems, and his colleague Valentin Blickle have now observed one of these similarities.
A Stirling engine in the microworld: In a normal-sized engine, a gas expands and contracts at different temperatures and thus moves a piston in a cylinder. Physicists in Stuttgart have created this work cycle with a tiny plastic bead that they trapped in the focus of a laser field. Credit: Fritz Höffeler / Art For Science
"We've developed the world's smallest steam engine, or to be more precise the smallest Stirling engine, and found that the machine really does perform work," says Clemens Bechinger. "This was not necessarily to be expected, because the machine is so small that its motion is hindered by microscopic processes which are of no consequence in the macroworld." The disturbances cause the micromachine to run rough and, in a sense, sputter.
The laws of the microworld dictated that the researchers were not able to construct the tiny engine according to the blueprint of a normal-sized one. In the heat engine invented almost 200 years ago by Robert Stirling, a gas-filled cylinder is periodically heated and cooled so that the gas expands and contracts. This makes a piston execute a motion with which it can drive a wheel, for example.
"We successfully decreased the size of the essential parts of a heat engine, such as the working gas and piston, to only a few micrometres and then assembled them to a machine," says Valentin Blickle. The working gas in the Stuttgart-based experiment thus no longer consists of countless molecules, but of only one individual plastic bead measuring a mere three micrometres (one micrometre corresponds to one thousandth of a millimetre) which floats in water. Since the colloid particle is around 10,000 times larger than an atom, researchers can observe its motion directly in a microscope.
The physicists replaced the piston, which moves periodically up and down in a cylinder, by a focused laser beam whose intensity is periodically varied. The optical forces of the laser limit the motion of the plastic particle to a greater and a lesser degree, like the compression and expansion of the gas in the cylinder of a large heat engine. The particle then does work on the optical laser field. In order for the contributions to the work not to cancel each other out during compression and expansion, these must take place at different temperatures. This is done by heating the system from the outside during the expansion process, just like the boiler of a steam engine. The researchers replaced the coal fire of an old-fashioned steam engine with a further laser beam that heats the water suddenly, but also lets it cool down as soon as it is switched off.
The fact that the Stuttgart machine runs rough is down to the water molecules which surround the plastic bead. The water molecules are in constant motion due to their temperature and continually collide with the microparticle. In these random collisions, the plastic particle constantly exchanges energy with its surroundings on the same order of magnitude as the micromachine converts energy into work. "This effect means that the amount of energy gained varies greatly from cycle to cycle, and even brings the machine to a standstill in the extreme case," explains Valentin Blickle. Since macroscopic machines convert around 20 orders of magnitude more energy, the tiny collision energies of the smallest particles in them are not important.
The physicists are all the more astonished that the machine converts as much energy per cycle on average despite the varying power, and even runs with the same efficiency as its macroscopic counterpart under full load. "Our experiments provide us with an initial insight into the energy balance of a heat engine operating in microscopic dimensions. Although our machine does not provide any useful work as yet, there are no thermodynamic obstacles, in principle, which prohibit this in small dimensions," says Clemens Bechinger. This is surely good news for the design of reliable, highly efficient micromachines.
These findings break new ground in the field of biomedicine because they identify an entirely new control mechanism that can be used to induce the formation of complex organs for transplantation or regenerative medicine applications, according to Michael Levin, Ph.D., professor of biology and director of the Center for Regenerative and Developmental Biology at Tufts University’s School of Arts and Sciences.
What’s especially interesting about this is that in research starting in 1937, Dr. Harold S. Burr, Professor Emeritus of Anatomy at Yale University School of Medicine, discovered that abnormal growth (such as cancer) was preceded by the appearance of abnormal voltage gradients in an organ. In a related discovery, Tufts biologists were able to control the incidence of abnormal eyes by manipulating the voltage gradient in the embryo.
The researchers achieved the most surprising results when they manipulated membrane voltage of cells in the tadpole’s back and tail, well outside of where the eyes could normally form. “The hypothesis is that for every structure in the body there is a specific membrane voltage range that drives organogenesis,” said Tufts post-doctoral fellow Vaibhav P. Pai, Ph.D.
Pai noted, “These were cells in regions that were never thought to be able to form eyes. This suggests that cells from anywhere in the body can be driven to form an eye.” To do this, they changed the voltage gradient of cells in the tadpoles’ back and tail to match that of normal eye cells. The eye-specific gradient drove the cells in the back and tail — which would normally develop into other organs — to develop into eyes.
“These results reveal a new regulator of eye formation during development, and suggest novel approaches for the detection and repair of birth defects affecting the visual system,” he said. “Aside from the regenerative medicine applications of this new technique for eyes, this is a first step to cracking the bioelectric code.”
Signals Turn On Eye Genes
From the outset of their research, the Tufts biologists wanted to understand how cells use natural electrical signals to communicate in their task of creating and placing body organs. In recent research, Tufts biologist Dany S. Adams showed that bioelectrical signals are necessary for normal face formation in Xenopus (frog) embryos. In the current set of experiments, the Levin lab identified and marked hyperpolarized (more negatively charged) cell clusters located in the head region of the frog embryo.
They found that these cells expressed genes that are involved in building the eye called Eye Field Transcription Factors (EFTFs). Sectioning of the embryo through the developed eye and analyzing the eye regions under fluorescence microscopy showed that the hyperpolarized cells contributed to development of the lens and retina. The researchers hypothesized that these cells turned on genes that are necessary for building the eye.
Electric Properties of Cells Can Be Manipulated to Generate Specific Organs
Levin and his colleagues are pursuing further research, additionally targeting the brain, spinal cord, and limbs. The findings, he said “will allow us to have much better control of tissue and organ pattern formation in general. We are developing new applications of molecular bioelectricity in limb regeneration, brain repair, and synthetic biology.”
Changing the Signals Leads to Defects
Changing the bioelectric code, or depolarizing these cells, also affected normal eye formation. They injected the cells with mRNA encoding ion channels, which are a class of gating proteins embedded in the membranes of the cell. Like gates, each ion channel protein selectively allows a charged particle to pass in and out of the cell.
Using individual ion channels, the researchers changed the membrane potential of these cells. This affected expression of EFTF genes, causing abnormalities to occur: Tadpoles from these experiments were normal except that they had deformed or no eyes at all.
Further, the Tufts biologists were also able to show that they could control the incidence of abnormal eyes by manipulating the voltage gradient in the embryo. “Abnormalities were proportional to the extent of disruptive depolarization,” said Pai. “We developed techniques to raise or lower voltage potential to control gene expression.”
AMSTERDAM (Reuters) - The Netherlands moved on Thursday to ban the sale of hashish, a potent form of cannabis, eroding 40 years of liberal drug policy, over fears that the proceeds were flowing to organised crime gangs.
A parliamentary proposal to prohibit the sale of hashish resin in the Netherlands' famous coffee shops had the backing of both parties in the Liberal-Christian Democrat coalition. The sale of marijuana, the dried bud and leaves of the cannabis plant, will not be affected.
"Almost all of the hash that is sold in Dutch coffee shops is smuggled into the Netherlands by international criminal gangs from countries like Afghanistan, Morocco and Lebanon," said Ard van der Steur, a member of the ruling Liberal Party.
The ban on 'hash', derived from the potent THC crystals on marijuana buds, will likely be in force by the end of 2013 and possibly sooner if changes to the law are swiftly implemented, he said.
The Netherlands is one of the few countries in the world where marijuana and hash are sold openly, but moves to crack down on their sale have increased under the conservative government of Prime Minister Mark Rutte.
Another of those backing a ban, Christian Democrat legislator Coskun Coruz, said he hoped the ban would reduce consumption.
Studies show marijuana use in the Netherlands is roughly half that of the United States, where it is illegal.
Hash smokers in Amsterdam doubted a ban would cut use of the drug and said it would be hard to enforce.
"I know enough people to buy hash from if it is banned from coffee shops. I'm sure I'm not going to smoke less," 19-year-old Tommie van den Wouden said as he waited in line to order hash at one coffee shop in Amsterdam.
Ulrich, who works at a coffee shop, said about 40 percent of revenue came from hash sales but coffee shops would not be the only losers.
"If I can't sell hash any more, my customers will buy it on the street. This will also lead to declining tax income for the state," he said.
"I am surprised about these politicians saying they want to ban hash because of links with organised crime, because exactly the same goes for marijuana. The only difference is that most hash comes from abroad, while marijuana is grown locally."
As part of the crackdown, the Netherlands has introduced compulsory membership cards for coffee shops in the south of the country to deter drug tourists from Belgium, France and Germany. The rules came into effect in January but will not be enforced until May.
The government hopes to implement the measure nationwide, a move which would effectively herald the end of the Netherlands' position as a pot smokers' paradise.
While the sale of marijuana and hash is tolerated in the Netherlands, cultivating commercial supplies is illegal, making it complicated for coffee shop owners to acquire stock.
Source: HuffingtonPost.com - Reporting by Tjibbe Hoekstra, Editing by Anthony Deutsch and Ben Harding
Many antibiotics are produced by molds similar to those found on a slice of bread or Roquefort cheese. Penicillium molds are best known for making penicillin, but also produce the not-so-famous mycophenolic acid, a billion-dollar drug used to ward off organ rejection.
However, mycophenolic acid also poisons most microbes, which has had scientists wondering how molds that produce mycophenolic acid can grow in its presence. This general problem is only understood in a few cases. Understanding how some microbes resist high concentrations of antibiotics is important to designing new drugs and deciding how and when to prescribe existing drugs.
The mold Penicillium brevicompactum produces chemicals such as mycophenolic acid that are toxic to other microbes. Credit: Kristian Fog Nielsen, The Technical University of Denmark
Xin Sun, a Ph.D. student in Biology Professor Liz Hedstrom’s laboratory, together with Bjarne Gram Hansen of the Technical University of Denmark, got down to the molecular level to unearth that answer for mycophenolic acid production. Their research was recently reported in The Journal of Biological Chemistry and the Biochemical Journal.
Every drug has a target — typically a protein to which the drug binds, blocking its normal function. In the case of mycophenolic acid, the target is the protein IMPDH, an enzyme found in every organism. The faster an organism is growing, the more IMPDH it needs. When an infection occurs, immune cells need to grow, so they produce more IMPDH.
Unlike most microbes, Penicillium molds have two copies of IMPDH.
“What Xin Sun did was to show that this second IMPDH is in fact resistant to mycophenolic acid,” says Hedstrom. “What was puzzling is that you’d expect a change in the drug binding site, but here the drug binding site is identical in both sensitive and resistant targets. Instead, the underlying function of the second IMPDH has changed in clever and sophisticated ways so the drug is no longer effective.”
These findings also provide new insights into another scientific mystery, how antibiotic production evolved in the first place. The team hypothesizes that Penicillium molds gained the second IMPDH through mutation (duplication), which allowed them to make small amounts of mycophenolic acid. Over time, the second IMPDH evolved to become more resistant, allowing the mold to make more mycophenolic acid.