Although this scenario is still several decades away, researchers have been making significant progress in developing early types of biomolecular computers.
In a recent study published in Nano Letters, Computer Science Professor Ehud Shapiro and coauthors from the Weizmann Institute of Science in Rehovot, Israel, have developed a biomolecular computer that can autonomously sense many different types of molecules simultaneously. In the future, this sensing ability could be integrated with a vast biomedical knowledge of diseases to enable computers to decide which drugs to release.
“We envision nanometer-sized computing devices (made of biomolecules) to roam our bodies in search of diseases in their early stage,” coauthor Binyamin Gil from the Weizmann Institute of Science told PhysOrg.com. “These devices would have the ability to sense disease indicators, diagnose the disease, and treat it by administering or activating a therapeutic biomolecule. They could be delivered to the bloodstream or operate inside cells of a specific organ or tissue and be given as a preventive care.”
The development builds on the researchers’ previous demonstration of a biomolecular computer that consists of a two-state system made of biological components (DNA and a restriction enzyme). The computer, which operates in vitro, starts from the Yes state. In each computation step, the computer checks one disease indicator. If all of the indicators for the tested disease are present, the computation ends in the Yes state, meaning a positive diagnosis; if at least one disease indicator is not detected, it ends in the No state.
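As a software analogy (not the authors’ molecular implementation), the two-state logic described above can be sketched as a tiny automaton that starts in Yes and drops to No on the first missing indicator:

```python
def diagnose(indicators_present):
    """indicators_present: iterable of booleans, one per disease indicator."""
    state = "Yes"  # the computation starts from the Yes state
    for present in indicators_present:
        if not present:
            state = "No"  # one missing indicator is enough to rule out
            break
    return state

print(diagnose([True, True, True]))   # all indicators found -> Yes
print(diagnose([True, False, True]))  # one indicator missing -> No
```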
Previously, Shapiro's group showed that this biomolecular computer could detect disease indicators from mRNA expression levels and mutations. In the current study, the researchers have expanded the computer’s ability to also detect disease indicators from miRNAs, proteins, and small molecules such as ATP. At the same time, the computer’s detection method is simpler than before, requiring fewer components and fewer interactions with the disease indicators.
As the researchers explain, sensing a combination of several disease indicators is much more useful than sensing just one, since it allows for better accuracy and greater sensitivity to differences between diseases. For example, they note that in the case of thyroid cancer, the presence of the protein thyroglobulin and the hormone calcitonin can enable a much more reliable diagnosis than if only one of these disease indicators was detected.
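A back-of-envelope calculation shows why requiring several indicators helps; the 5 percent error rate and the assumption of independent errors are illustrative simplifications, not figures from the study:

```python
# If each marker alone produces a false positive 5% of the time,
# requiring BOTH markers (assuming the errors are independent)
# multiplies the rates, cutting false positives sharply.
fp_single = 0.05
fp_both = fp_single * fp_single
print(round(fp_both, 4))  # 0.0025, i.e. 0.25% instead of 5%
```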
Although the ability to detect several disease indicators marks an important step toward in vivo biomolecular computers and programmable drugs, there are still many obstacles that researchers must overcome in the process.
“The biggest challenge is operating such devices in a living surrounding like the bloodstream or a cell’s cytoplasm,” Gil said. “Currently we are developing devices that rely on simpler machinery (e.g. no restriction enzyme) or on the cell’s own machinery.”
Mike Scharf, the O. Wayne Rollins/Orkin Chair in Molecular Physiology and Urban Entomology, said his laboratory has discovered a cocktail of enzymes from the guts of termites that may be better at getting around the barriers that inhibit fuel production from woody biomass. The Scharf Laboratory found that enzymes in termite guts are instrumental in the insects' ability to break down the wood they eat.
The findings, published in the early online version of the journal PLoS One, are the first to measure the sugar output from enzymes created by the termites themselves and the output from symbionts, small protozoa that live in termite guts and aid in digestion of woody material.
"For the most part, people have overlooked the host termite as a source of enzymes that could be used in the production of biofuels. For a long time it was thought that the symbionts were solely responsible for digestion," Scharf said. "Certainly the symbionts do a lot, but what we've shown is that the host produces enzymes that work in synergy with the enzymes produced by those symbionts. When you combine the functions of the host enzymes with the symbionts, it's like one plus one equals four."
Scharf and his research partners separated the termite guts, testing portions that did and did not contain symbionts on sawdust to measure the sugars created.
Once the enzymes were identified, Scharf and his team worked with Chesapeake Perl, a protein production company in Maryland, to create synthetic versions. The genes responsible for creating the enzymes were inserted into a virus and fed to caterpillars, which then produce large amounts of the enzymes. Tests showed that the synthetic versions of the host termite enzymes also were very effective at releasing sugar from the biomass.
They found that the three synthetic enzymes function on different parts of the biomass.
Two enzymes are responsible for the release of glucose and pentose, two different sugars. The other enzyme breaks down lignin, the rigid compound that makes up plant cell walls.
Lignin is one of the most significant barriers that blocks the access to sugars contained in biomass. Scharf said it's possible that the enzymes derived from termites and their symbionts, as well as synthetic versions, could be more effective at removing that lignin barrier.
Sugars from plant material are essential to creating biofuels. Those sugars are fermented to make products such as ethanol.
"We've found a cocktail of enzymes that create sugars from wood," Scharf said. "We were also able to see for the first time that the host and the symbionts can synergistically produce these sugars."
Next, Scharf said his laboratory and collaborators would work on identifying the symbiont enzymes that could be combined with termite enzymes to release the greatest amount of sugars from woody material. Combining those enzymes would increase the amount of biofuel that should be available from biomass.
The team at Exeter University used ‘phase-change alloys’ that move from an amorphous to fully crystallised state when subject to a current or light pulse.
‘What we are doing is trying to build electronic systems that mimic, in a simple way, the functionality of the basic building blocks of mammalian brains — namely neurons and synapses,’ project lead Prof David Wright of Exeter told The Engineer.
In conventional computers memory and processing units are physically separate, and data has to be continually shunted between the two, creating ‘bottlenecks’.
‘This slows everything down and wastes a lot of power and is the main reason chip manufacturers have moved to multi-core processors,’ Wright said.
The team turned to neurons for inspiration, noting that they make no real distinction between memory and computation. Looking for possible artificial substitutes the researchers came across so-called ‘phase-change materials’ that flip between amorphous and crystal states, in doing so inducing an electrical conductivity difference of up to five orders of magnitude and a large refractive index change.
Using laser pulses to induce the phase changes in germanium-antimony-tellurium (GeSbTe) and silver-indium-antimony-tellurium (AgInSbTe), the team was able to perform basic arithmetic and data storage.
‘A very simple model of a neuron is known as the “integrate and fire” model in which the neuron integrates, or accumulates, excitations applied to its input and fires a pulse along its output after a certain threshold has been passed.
‘We’ve shown that phase-change materials have a natural accumulation and threshold property, which makes them a good candidate for simple implementation of a hardware neuron,’ Wright said.
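In software terms, the integrate-and-fire behaviour Wright describes can be sketched as follows; the threshold and excitation values are arbitrary illustrative numbers, and the reset-on-fire rule is a common simplification of the model:

```python
class IntegrateAndFireNeuron:
    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.level = 0.0  # accumulated excitation

    def stimulate(self, excitation):
        self.level += excitation           # integrate the input
        if self.level >= self.threshold:   # threshold crossed
            self.level = 0.0               # reset after firing
            return True                    # fire an output pulse
        return False

neuron = IntegrateAndFireNeuron(threshold=1.0)
pulses = [neuron.stimulate(0.4) for _ in range(5)]
print(pulses)  # [False, False, True, False, False]
```

The phase-change material plays the role of `self.level`: each sub-threshold pulse nudges it part-way toward crystallisation, and the abrupt conductivity jump at full crystallisation is the “fire”.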
The upshot of the work is that these phase-change components could potentially be connected in networks via structures akin to synapses — potentially opening up an entirely novel way of computing.
‘The strength of these synaptic connections is altered by learning and experience — a common phrase to describe this is “neurons that fire together, wire together”.
‘We think that phase-change devices could also be used to make synapses — since the electrical resistance, or optical reflectivity, of phase-change materials depends on their excitation history,’ Wright said.
Indeed, shortly after publication of the Exeter work, a team from Stanford University in the US headed by Prof Philip Wong demonstrated just that, creating a nanoscale device with interconnected phase-change components.
Taking inspiration from floating seeds, scientists from the Biomimetics-Innovation-Centre (B-I-C) in Germany have developed a promising new anti-fouling surface that is toxin-free.
The new surface is based on a seed from a species of palm tree that is dispersed by ocean currents. Suspecting that certain seeds may have specialized surfaces that gave them the ability to remain free of fouling to allow them to disperse further, the researchers floated seeds from 50 species in the North Sea for 12 weeks. At the end of the 12-week period, the seeds of 12 species showed no fouling at all.
"We then began by examining the micro-structure of the seeds' surfaces, to see if we could translate them into an artificial surface. The seeds we chose to mimic had a hair-like structure," says Katrin Mühlenbruch, a PhD researcher at B-I-C. "This structure might be especially good at preventing fouling because the fibers constantly move, preventing marine organisms from finding a place to settle."
To create an artificial surface similar to the seeds, the researchers used a silicone base with fibers covering the surface. The new surface is currently being trialed by floating it in the sea. Ms. Mühlenbruch says that while the initial results are "quite good," there is still a long way to go.
Following on from the examination of the structure of the seeds' surface, the B-I-C researchers also plan to analyze the chemical composition of the seeds' surface to find out whether this adds to their anti-fouling properties.
"Our aim is to provide a new toxin-free and bio-inspired ship coating," says Ms. Mühlenbruch. "This would prevent environmental damage while allowing ships to operate efficiently."
The brand new Shadow eBike hosts only a bit of wiring hidden away in the front hub, still placing it far ahead of its competition, which usually comes entangled in an array of wires on the frame. But beyond its sleek and clean form, this eBike also boasts a USB port, a charging port, an LED battery power display, regenerative brakes and a wheel that doubles up as a generator!
The Shadow eBike’s wireless attributes mean that there are no electrical connections exposed to the elements, removing the possibility of accidental severing or short circuiting. All of the bike’s circuitry is in-frame, including its electric motor, lithium polymer battery, magnetic regenerative brakes, throttle and the pedal-assist functions, which use a 2.5 GHz frequency-hopping “spread-spectrum technology”. As such, Toronto-based Daymak feels justified in calling the Shadow “the world’s first wireless power-assist electric bicycle.”
Daymak offers the Shadow eBike equipped with either a 250 W or 350 W electric motor, and a 36 V, 10 Ah lithium-ion battery that can provide an average range of around 12 to 15 miles running on just motor power. With pedal-assisted power, this range is extended to 22 to 25 miles. The battery takes between four and five hours to completely recharge and is good for 750 to 800 cycles. The bike’s wheel also doubles up as a generator able to charge devices via the USB port, and a regenerative braking system sends a current back to the batteries.
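For readers who like to run the numbers, the quoted specs imply the following back-of-envelope figures; this is illustrative arithmetic only, taking the stated ranges and cycle counts at face value:

```python
# Nominal battery capacity: 36 V x 10 Ah = 360 Wh.
battery_wh = 36 * 10

range_assisted = (22, 25)  # miles per charge, pedal-assisted
cycles = (750, 800)        # rated charge cycles

# Total pedal-assisted distance over the battery's rated life.
lifetime_miles = (range_assisted[0] * cycles[0],
                  range_assisted[1] * cycles[1])

print(battery_wh)      # 360
print(lifetime_miles)  # (16500, 20000)
```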
If you are concerned that the bike’s wireless design leaves it vulnerable to hacking, don’t worry. Daymak says that each Shadow eBike wireless component is paired, and the odds of it being affected by outside parties are less than one in a billion. The use of wireless technology also means the Shadow is set up for future upgrades that will one day enable it to interact with smartphones and even PCs.
The bike was set to be released on April 30th, and has been given a retail price of $1,999.
The researchers have developed an artificial DNA "stealth" linkage using click chemistry, a highly-efficient chemical reaction, to join together DNA strands without disrupting the genetic code.
The breakthrough, published online in the journal PNAS this week (27 June), means long sections of DNA can be created quickly and efficiently by chemical methods.
DNA strands are widely used in biological and medical research, and clean and effective methods of making longer sections are of great value. Current techniques rely on the use of enzymes as biological catalysts. Joining DNA chemically is particularly interesting as it does not depend on enzymes so can be carried out on a large scale under a variety of conditions.
Co-author of the paper Tom Brown, Professor of Chemical Biology at the University of Southampton, says: "We believe this is the first example of a chemical method of joining together longer strands of DNA that works well.
"Typically, synthesised DNA strands will be up to 150 bases; beyond that they are very difficult to make. We have doubled that to 300 and we can go further. We can also join together heavily modified DNA strands, used in medical research for example, which normal enzymes might not want to couple together."
The Southampton team investigated whether the artificial links would be tolerated biologically within the bacteria E.coli.
"The genetic code could still be correctly read," says co-investigator Dr Ali Tavassoli.
"The artificial linkages act in stealth as they go undetected by the organism; the gene was functional despite containing 'scars' in its backbone. This opens up all sorts of possibilities."
The team is now hoping to secure funding to explore potential applications of the technology.
Prior to this discovery, such alloys were only able to revert to their original form in the much narrower range of -20 to 80 degrees Celsius. They have published their findings in the journal Science.
Superelastic alloys are metals that revert naturally back to their original shape after being bent or deformed by outside forces once those forces are removed, and are generally created by mixing two or more other metals together in certain combinations.
In this new effort, the research team added a small amount of nickel to an iron-based alloy, which, according to lead author Toshihiro Omori in an email interview with Reuters, makes their product far more elastic than anything else out there. He also said that because the ingredients for the new metal are plentiful, the resultant alloy should be very cheap to produce.
Superelastic alloys are able to revert to their prior shape because of their unique crystal structure, which allows all of the atoms they are made of to shift as one when a force is applied; in normal metals, by contrast, the force is diffused through the crystal structure, permanently altering it.
Superelastic alloys are used in many applications such as eyeglasses, antennas, and medical tools and equipment. Omori says he hopes that this new alloy, because of its ability to revert in virtually any real-world temperature conditions, can be used in buildings to protect against earthquake damage, or in other applications where things get hot under stress, such as in cars, airplanes and spacecraft.
Because many tall buildings are supported by metal beams, the thinking goes, if those metal beams were made of a superelastic alloy, they would be able to snap back to their original positions after each gyration of the ground, rather than suffering compound trauma as the quake continues, making it much less likely that the building would crumble or fall.
The New York Times reported Thursday that Tristane Banon, the French journalist who has accused disgraced ex-IMF chief Dominique Strauss-Kahn of attempting to rape her back in 2003, was questioned jointly with DSK at a Paris police station for over two hours. This, of course, is on the heels of the high-profile rape allegations against DSK made by a hotel maid in the U.S. According to the Times, "The joint questioning, a normal part of sexual assault cases in France, could represent a last legal step for prosecutors before either bringing formal charges, or dropping the case." How does this joint questioning work?
Cécile Dehesdin, a reporter at Slate's French sister site, Slate.fr, says that this joint questioning isn't mandatory. If the accused rapist denies the charges, according to a French organization for women devoted to helping rape victims, the alleged victim can accept the joint questioning, refuse it outright, or ask that the face-to-face happen, not with the police, but with the judge who will eventually be in charge of investigating the case. In France, unlike in the U.S., the judge (juge d'instruction) is in charge of investigating for both the alleged victim and the accused. DSK has called Banon's accusations of rape "imaginary and slanderous" in a recent interview. For her part, Banon has always said she relished the idea of facing DSK in person:
The police asked me if I'd agree to a face-to-face, and of course I said yes. I'd like him to be facing me and telling me to my face that those are imaginary facts. I'd like to see him try and say that.
In this joint questioning, Dehesdin points out, the victim and alleged attacker are in the same room, but they don't address each other directly. They just answer the police's or judge's questions. Whoever has conducted the questioning gives the information to the district attorney, who then decides if there is enough evidence to move forward with the case (updated to add: if the juge d'instruction has done the questioning, he doesn't need permission from the D.A. to pursue the case).
Even if the D.A. decides not to pursue the case, Banon has some recourse. Since she was questioned by police, she could refile the criminal complaint along with a civil complaint, and at that point the juge d'instruction would have to investigate. According to press reports in France, DSK did admit that he made a pass at her.
A continuous and repetitive thread in the commentary on the decade since 9/11 — one might almost call it an endless and open-ended theme — was the plaintive observation that the struggle against al-Qaeda and its surrogates is somehow a “war without end.” (This is variously rendered as “perpetual war” or “endless war,” just as anti-war articles about the commitment to Iraq used to relentlessly stress the idea that there was “no end in sight.”)
I find it rather hard to see the force of this objection, or indeed this description. Was there ever a time when we involved ourselves in combat, or found ourselves involved, with any certain advance knowledge about the timeline and duration of hostilities? Are there two kinds of war, one of them term-limited? A bit like that other tempting but misleading separation of categories — between “wars of choice” and “wars of necessity” — this proves upon closer scrutiny to be a distinction without much difference.
In order even to aspire to such a nebulous timeline, there would first have to be consensus on when the war actually started. For example, I would say that hostilities between the United States and Saddam Hussein began in the early 1990s, if only at a relatively low level, after he had violated all the conditions of the cease-fire that had allowed him to retain power in 1991, and after he had begun regularly firing upon the planes that patrolled and enforced the cease-fire and the “no-fly” zones. For more than a decade, the only response to this was more air patrols and a reliance on a crumbling regime of sanctions. That really was a case of “no end in sight.” But something tells me that this is not the sort of example that my opponents have in mind.
Then again, one might ask how long we have been at war with al-Qaeda or its equivalents. Since the attack on the World Trade Center in 1993? Since the destruction of the U.S. embassies in Africa? Since the near-sinking of the USS Cole in Aden harbor in 2000? Even to invite these questions is to arouse the unnerving suspicion that there was quite a long period during which al-Qaeda was at war with us, but we did not understand that we were at war with it. It was precisely that queasy feeling that was beginning to creep over some of us a while before the events of a decade ago dispelled most doubts. And it would have been just as true to say “no end in sight” on Sept. 12, 2001, as it would be to say it today — more true, if anything. So once again, those who want to set the clock must be crystal clear about when they think the confrontation started running.
Attitudes toward length are often a good clue to attitudes toward outcome. During the Bosnian conflict, those of us who favoured using force to lift the siege of Sarajevo were accused of advocating a tactic that would “lengthen” the war. Even in the trivial sense of being true by definition (anything that denied Gen. Ratko Mladic a cheap, easy and swift victory over civilians was necessarily war-prolonging to some extent), this wasn’t true in any serious way. The relatively brief bombardment of Serbian artillery positions had the effect of exposing the hollowness of Mladic’s military strength: Within an amazingly short time, Slobodan Milosevic himself was at Dayton asking for terms. One might phrase it like this: Intervention slightly lengthened hostilities in the short term, but drastically shortened them in the long term. (Milosevic later misinterpreted the Dayton agreements as lenience and tried to repeat his Bosnian tactics in Kosovo. But even if this could be construed as war-prolonging, it also led to the eventual defeat of his army and overthrow of his regime, and thus to a conclusive finish.)
Photo: A U.S. Marine runs to avoid sniper fire during an operation in Ramadi on Jan. 17, 2007, in the Anbar province of Iraq. (John Moore/Getty Images)
Arguments about duration are often of great historical significance, going far beyond the battles of mere hindsight. For instance, the conventional wisdom among historians holds that United States military intervention in Europe in 1917 had the salutary effect of persuading the German high command that, with another fresh and well-equipped force deployed against it, it could not hope to prevail against the British and French alliance. But another explanation of the same events shows the war on the Western Front actually being prolonged. Before President Woodrow Wilson abandoned neutrality and committed American forces in strength, the Germans had been fighting with exceptional success. Their prowess had led to calls, especially in London, for a negotiated peace. But the arrival of a new ally dissipated all such talk and compelled the Germans to fight until the bitter end. Not only that, but when peace terms were finally discussed, the French were allowed and enabled to press their most vindictive economic and territorial claims against Germany. That the Versailles Treaty led to the rise of Nazism and thus to the “Second” World War, or rather Part 2 of the first one, is a conclusion that few historians now dispute. So short-war advocates should know to beware of what they ask for.
A final objection to the dogma of brief engagements is more commonsensical. On the whole, perhaps it is best not to tell your opponent in advance of the date when you plan to withdraw your forces. Many American generals, we understand, were critical of the president’s original decision to announce a deadline for the endgame in Afghanistan. Certainly, there seem to be upsetting signs of Afghan national army units, in particular, basing their calculations on who can be counted on to be still present as the months go by. Difficult to blame people for consulting their own self-interest in this blunt way.
Human history seems to register many more years of conflict than of tranquillity. In one sense, then, it is fatuous to whine that war is endless. We do have certain permanent enemies—the totalitarian state; the nihilist/terrorist cell—with which “peace” is neither possible nor desirable. Acknowledging this, and preparing for it, might give us some advantages in a war that seems destined to last as long as civilization is willing to defend itself.
A new analysis by researchers at Lawrence Berkeley National Laboratory (Berkeley Lab) now is challenging that notion, one widely held in both the United States and China.
Well before mid-century, according to a new study by Berkeley Lab's China Energy Group, that nation's energy use will level off, even as its population edges past 1.4 billion. "I think this is very good news,'' says Mark Levine, co-author of the report, "China's Energy and Carbon Emissions Outlook to 2050," and director of the group. "There's been a perception that China's rising prosperity means runaway growth in energy consumption. Our study shows this won't be the case." Along with China's rise as a world economic power have come a rapid climb in energy use and a related boost in human-made carbon dioxide emissions. In fact, China overtook the United States in 2007 as the world's leading emitter of greenhouse gases.
Yet according to this new forecast, the steeply rising curve of energy demand in China will begin to moderate between 2030 and 2035 and flatten thereafter. There will come a time -- within the next two decades -- when the number of people in China acquiring cars, larger homes, and other accouterments of industrialized societies will peak. It's a phenomenon known as saturation. "Once nearly every household owns a refrigerator, a washing machine, air conditioners and other appliances, and once housing area per capita has stabilized, per household electricity growth will slow,'' Levine explains.
Similarly, China will reach saturation in road and rail construction before the 2030-2035 time frame, resulting in very large decreases in iron and steel demand. Additionally, other energy-intensive industries will see demand for their products flatten.
The Berkeley Lab report also anticipates the widespread use of electric cars, a significant drop in reliance on coal for electricity generation, and a big expansion in the use of nuclear power -- all helping to drive down China's CO2 emissions. Although China has temporarily suspended approvals of new nuclear power plant construction in the wake of the disaster at Japan's Fukushima Daiichi Nuclear Power Station, the long-range forecast remains unchanged.
Key to the new findings is a deeper look at patterns of energy demand in China: a "bottom-up" modeling system that develops projections of energy use in far greater detail than standard methods and which is much more time- and labor-intensive to undertake. Work on the project has been ongoing for the last four years. "Other studies don't have this kind of detail,'' says Levine. "There's no model outside of China that even comes close to having this kind of information, such as our data on housing stock and appliances." Not only does the report examine demand for appliances such as refrigerators and fans, it also makes predictions about adoption of improvements in the energy efficiency of such equipment -- just as Americans are now buying more efficient washing machines, cars with better gas-mileage, and less power-hungry light bulbs.
Berkeley Lab researchers Nan Zhou, David Fridley, Michael McNeil, Nina Zheng, and Jing Ke co-authored the report with Levine. Their study is a "scenario analysis" that forecasts two possible energy futures for China, one an "accelerated improvement scenario" that assumes success for a very aggressive effort to improve energy efficiency, the other a more conservative "continued improvement scenario" that meets less ambitious targets. Yet both of these scenarios, at a different pace, show similar moderation effects and a flattening of energy consumption well before 2050.
Under the more aggressive scenario, energy consumption begins to flatten in 2025, just 14 years from now. The more conservative scenario sees energy consumption rates beginning to taper in 2030. By the mid-century mark, energy consumption under the "accelerated improvement scenario" will be 20 percent below that of the other.
Scenario analysis is also used in more conventional forecasts, but these are typically based on macroeconomic variables such as gross domestic product and population growth. Such scenarios are developed "without reference to saturation, efficiency, or usage of energy-using devices, e.g., air conditioners,'' says the Berkeley Lab report. "For energy analysts and policymakers, this is a serious omission, in some cases calling into question the very meaning of the scenarios.''
The new Berkeley Lab forecast also uses the two scenarios to examine CO2 emissions anticipated through 2050. Under the more aggressive scenario, China's emissions of the greenhouse gas are predicted to peak in 2027 at 9.7 billion metric tons. From then on, they will fall significantly, to about 7 billion metric tons by 2050. Under the more conservative scenario, CO2 emissions will reach a plateau of 12 billion metric tons by 2033, and then trail down to 11 billion metric tons at mid-century.
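To make the two trajectories concrete, here is a rough sketch using only the figures quoted above; intermediate years are simple linear interpolation between those data points, not output from the report's actual model:

```python
def interp(year, y0, v0, y1, v1):
    """Linear interpolation between two quoted data points."""
    return v0 + (v1 - v0) * (year - y0) / (y1 - y0)

# Accelerated scenario: peak of 9.7 Gt CO2 in 2027, ~7 Gt by 2050.
print(round(interp(2040, 2027, 9.7, 2050, 7.0), 2))   # 8.17

# Conservative scenario: plateau of 12 Gt by 2033, ~11 Gt by 2050.
print(round(interp(2040, 2033, 12.0, 2050, 11.0), 2))  # 11.59
```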
Several assumptions about China's efforts to "decarbonize" its energy production and consumption are built into the optimistic forecasts for reductions in the growth of greenhouse gas emissions. They include:
A dramatic reduction in coal's share of energy production, to as low as 30 percent by 2050, compared to 74 percent in 2005
An expansion of nuclear power from 8 gigawatts in 2005 to 86 gigawatts by 2020, followed by a rise to as much as 550 gigawatts in 2050
A switch to electric cars. The assumption is that urban private car ownership will reach 356 million vehicles by 2050. Under the "continued improvement scenario," 30 percent of these will be electric; under the "accelerated improvement scenario," 70 percent will be electric.
The 72-page report by Levine and colleagues at Berkeley Lab's Environmental Energy Technologies Division was summarized in a briefing to U.S. Congressional staffers. The study was carried out under contract with the U.S. Department of Energy, using funding from the China Sustainable Energy Program, a partnership of the David and Lucile Packard Foundation and the Energy Foundation.