Trilingual World Observatory: italiano, english, română. GLOBAL NEWS & more... by Redazione

Below are all the posts published on the site, in chronological order.
 
 
By Admin (from 25/02/2012 @ 08:03:29, in ro - Stiinta si Societate, read 3578 times)

Our galaxy hosts far more planets than previously believed. Astronomers have announced that each of the 100 billion stars in the Milky Way has, on average, at least one planet as a companion.

The discovery marks a radical shift in how scientists view planetary systems in the cosmos. Our own solar system, considered unique until recently, is just one among billions.

Our galaxy hosts hundreds of billions of planets

Until April 1994, no other solar system had been discovered, but since then their number has been growing steadily. The Kepler space telescope is discovering new planetary systems all the time.

"Planets are the rule, not the exception," explained Arnaud Cassan, lead astronomer at the Institute of Astrophysics in Paris. He coordinated a team of 42 scientists who spent six years studying millions of stars toward the center of the Milky Way. The research represents the most thorough effort yet to measure the prevalence of planets in our galaxy.

To estimate the number of planets, Dr. Cassan and his colleagues studied 100 million stars located 3,000 to 25,000 light-years from Earth. The number of planets they detected was then compared with the results of other studies that used different detection techniques, in order to build a statistical sample of stars, and of the planets orbiting them, that is representative of our galaxy.

According to the researchers' calculations, most of the stars in the Milky Way (which holds at least 100 billion of them, according to the latest estimates) have one or more planets.

About 66% of stars host a planet with a mass five times that of Earth, and half of all stars have a planet with a mass similar to Neptune's (17 times that of Earth). Almost 20% of the stars surveyed are orbited by a giant gaseous planet (like Jupiter, or even larger).
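
Those percentages lend themselves to a quick back-of-the-envelope total. The sketch below is purely illustrative: the star count and fractions are the ones quoted in the article, and the arithmetic ignores the study's uncertainty ranges.

```python
# Back-of-the-envelope planet counts implied by the survey fractions above.
# Figures are the ones quoted in the article; this is illustration only.
stars_in_milky_way = 100e9  # at least 100 billion stars

fractions = {
    "super-Earths (~5 Earth masses)": 0.66,
    "Neptune-mass planets (~17 Earth masses)": 0.50,
    "gas giants (Jupiter-class or larger)": 0.20,
}

for label, frac in fractions.items():
    print(f"{label}: roughly {frac * stars_in_milky_way:.1e} planets")
# -> roughly 6.6e+10, 5.0e+10 and 2.0e+10 planets respectively
```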

"Putem alege orice stea, la întâmplare - cu siguranta exista o planeta ce o orbiteaza", a afirmat Uffe Grae Jorgensen, astronom la Universitatea din Copenhaga, Danemarca.

Cercetatorii au facut o alta descoperire care pâna acum parea de domeniul SF: milioane de planete pot orbita doua stele.

"Începem sa descoperim un sistem planetar nou, care nu seamana deloc cu ceea ce exista în sistemul nostru solar", a explicat William Welsh, astronom la Universitatea San Diego State.

Deoarece Calea Lactee gazduieste mult mai multe planete decât se credea pâna acum, sansele ca una dintre acestea sa gazduiasca forme de viata sunt mai mari, au conchis cercetatorii.

Sursa: Wall Street Journal - via descopera.ro


That web series are the next big promise for the future of television has been discussed for months now. But in recent days, news about the arrival of ambitious new shows on the Web has been coming at a dizzying pace: Netflix is about to launch a series starring Steve Van Zandt of The Sopranos (boom!), Tom Hanks has an animated series in the works that will be released exclusively on Yahoo (boom!), and House of Cards, the new show commissioned by Netflix, will involve Kevin Spacey and David Fincher (double boom!). What is going on? Why are Hollywood stars suddenly throwing themselves into online streaming?

We will get to that. But first, a bit of history. It all began in the late 1990s, when some television producers had the bright idea of shooting episodes aimed solely at the Web audience. In February 1997, NBC began publishing episodes of a spin-off of the series Homicide exclusively on its own website. Meanwhile, the animated series from Magic Butter were all the rage online. At the time, web episodes (or webisodes) were little more than a side experiment. It took a few years before the first series designed specifically for the web ecosystem appeared.

Red vs Blue (2003, a series built from Halo gameplay footage), The Guild (2007, three-minute micro-episodes about a group of people addicted to MMORPGs) and Dr. Horrible's Sing-Along Blog (2008, Joss Whedon's divertissement about a failed mad scientist played by Neil Patrick Harris) are some examples of successful web series created over the last ten years. They are still small productions, though, nothing like the multi-million-dollar juggernauts that meanwhile fill the television schedules (the first episode of Boardwalk Empire cost something like 60 million dollars). Pioneer One itself, a science-fiction web series active for a little over a year, was born in a college room from two film-loving students. So why are all these multi-million-dollar operations suddenly popping up? What has changed in the last few months?

What has changed is that, after a string of false starts, this could finally be the year of the Smart TV. To see why, just take a stroll around Las Vegas these days, at CES 2012, where LG, Samsung and Panasonic have presented their own recipes for connected TV. In particular, it is worth noting that Panasonic, for the development of its new VIERA Connect, decided to turn to MySpace. "We are ready to take entertainment and television one step into the future by integrating the social network experience," declared MySpace co-owner Justin Timberlake (another star arriving with a bang). "This is the evolution of one of our greatest inventions, television. And today we no longer have to gather around a single set to watch it together."

In short, if until recently Smart TVs were still a project under development, today connected TVs are a reality. But to get genuinely shared television, these connected sets need to be filled with quality content. That explains the zeal with which Netflix, Hulu and the like are scrambling to put together high-budget web shows. The most attentive observers had already seen it coming two months ago, when Disney Interactive Media and YouTube announced they wanted to invest 15 million dollars in the production of animated web series. Those suspicions were half-confirmed the following month, when YouTube announced the completion of its redesign. The historic online video hub shed its traditional look to strengthen its social side and focus on creating customizable web channels, a sort of reworking of the television approach for the web. With the AutoPlay feature, users could personalize their own YouTube channel, lean back in their chairs and watch strings of videos scroll uninterrupted across the screen.

YouTube, meanwhile, has also begun investing tens of millions of dollars in the production of quality content, followed closely by Yahoo, Hulu and Netflix. Another unmistakable signal comes from Netflix itself. What started as a DVD-by-mail rental service, and in 2008 began renting videos online, is now extending its reach into Europe as well. In these very hours Netflix has gone fully operational in the United Kingdom, offering on-demand content for 7 euros a month. The service's arrival in Italy is certain, but when and in what form is not yet known.

In any case, the developments outlined above show that something too big to fail is brewing. If a year ago the first Google TV slid into the abyss, along with millions of euros, precisely because of the lack of content (and of platforms willing to provide it), the scenario for 2012 has decidedly changed. Will it really be the year of connected TVs and web series? We shall see.

Source: wired.it


Using a specially designed facility, UCLA stem cell scientists have taken human skin cells, reprogrammed them into cells with the same unlimited property as embryonic stem cells, and then differentiated them into neurons while completely avoiding the use of animal-based reagents and feeder conditions throughout the process.

Generally, stem cells are grown using mouse "feeder" cells, which help the stem cells flourish and grow. But such animal-based products can lead to unwanted variations and contamination, and the cells must be thoroughly tested before they can be deemed safe for use in humans.

The UCLA study represents the first time scientists have derived induced pluripotent stem (iPS) cells with the potential for clinical use and differentiated them into neurons in animal origin–free conditions using commercially available reagents to facilitate broad application, said Saravanan Karumbayaram, the first author of the study and an associate researcher with the Eli and Edythe Broad Center of Regenerative Medicine and Stem Cell Research at UCLA.

The Broad Center researchers also developed a set of standard operating procedures for the process so that other scientists can benefit from the derivation and differentiation techniques. The process was performed under good manufacturing practices (GMP) protocols, which are tightly controlled and regulated, so the cells created meet all the standards required for use in humans.

"Developments in stem cell research show that pluripotent stem cells ultimately will be translated into therapies, so we are working to develop the methods and systems needed to make the cells safe for human use," Karumbayaram said.

The study was published Dec. 7 in the early online edition of the inaugural issue of the peer-reviewed journal Stem Cells Translational Medicine, a new journal that seeks to bridge stem cell research and clinical trials.

Karumbayaram tested six different animal-free media formulations before arriving at a composition that generated the most robust pluripotent stem cells. He combined two commercial media solutions to create his own mix and tried different concentrations of an important growth factor.

"The colonies we get are of very good quality and are quite stable," said Karumbayaram, who compared his animal-free colonies to those created conventionally using mouse feeder cells. Efficiency did suffer. Fewer colonies were created using the animal-free feeders, but the colonies did remain stable for at least 20 passages.

The neurons that resulted from the process started life as a small skin-punch biopsy from a volunteer. The skin cells were then reprogrammed to become pluripotent stem cells with the ability to make any cell in the human body. These iPS cells were grown in colonies and were later coaxed into becoming neural precursor cells and, finally, neurons.

The animal-free cells were compared at every step in the process to cells produced by typical animal-based methods, Karumbayaram said, and were found to be of very similar quality.

"We were very excited when we saw the first colonies growing, because we were not sure it would be possible to derive and grow cells completely animal-free," he said.

Because the cells were grown in a special facility designed to culture animal-free cells, the testing and examination required to make clinical-grade cells should be much simpler, said William Lowry, senior author of the study and an assistant professor of molecular, cell and developmental biology in the UCLA Division of Life Sciences.

To date, at least 15 animal-free iPS cell lines have been created at the Broad Stem Cell Research Center.

"It's critical to note that we are nowhere near ready to use these cells in the clinic," Lowry said. "We are working to develop methods to make sure these cells are genetically stable and will be as safe as possible for human use. The main goal of this project was to generate a platform that will one day allow translation of stem cells to the clinic."

Source: Medical Xpress


Eyeless shrimp and anemones with white tentacles have been photographed near fissures in the ocean floor from which water emerges at temperatures of up to 450 degrees Celsius.

Never-before-seen creatures have been discovered around the deepest underwater hydrothermal vents (VIDEO)

The underwater hot springs, named the Beebe Vent Field in honor of the first scientist to venture into the deep ocean, were discovered in the Caribbean Sea, south of the Cayman Islands.

In 2010, geochemist Doug Connelly of the UK's National Oceanography Centre and biologist Jon Copley of the University of Southampton used an underwater robot capable of diving to great depths to survey the sea floor. In doing so they discovered new underwater hydrothermal vents rising almost three kilometers from the seabed, near the undersea mountain Dent.

The discovery shows that these underwater hydrothermal vents are far more widespread than initially believed. In addition, the cameras on the robot captured striking images of new species, among them a ghostly-looking, unpigmented shrimp, named Rimicaris hybisae, which grows in colonies of up to 2,000 individuals per square meter. Lacking normal eyes, these shrimp have a light-sensitive organ on their backs that helps them navigate.

A related species, called Rimicaris exoculata, had been discovered at an underwater hydrothermal vent 4,000 kilometers away, on the Mid-Atlantic Ridge.

Besides the pale shrimp, other creatures were found at the Dent seamount, including a snake-like fish, an unknown species of snail, and an amphipod crustacean whose appearance is reminiscent of a flea.

Source: AFP - via descopera.ro


DNA and RNA are today the only molecules of life: they self-organize, replicate, and are translated into enzymes and proteins, and they are present in the cells of every living being. But perhaps they have not been the only ones over the Earth's long history. A third molecule has in fact appeared on the list of possible candidates: TNA, in which the sugar threose replaces the deoxyribose and ribose of DNA and RNA, respectively.

DNA and RNA are in fact very complex molecules, probably too complex to have been the first forms of genetic material to appear. So various research groups have put forward their own hypotheses and tested the possibility that, over billions of years, other configurations evolved (and later disappeared). TNA is a hypothesis that has been around for several years. Now, John Chaput and his team at the Center for Evolutionary Medicine and Informatics, at the Biodesign Institute of Arizona State University, have created TNA molecules and, for the first time, followed their evolution on a substrate presenting a different protein each time.

The molecules proved able to self-organize into complex three-dimensional shapes and to bind the protein, developing a high degree of affinity. The study was published in Nature Chemistry and suggests that in the future it may be possible to evolve enzymes suited to sustaining an early form of life based on TNA.

As New Scientist reports, however, it is unlikely that TNA was a precursor of DNA and RNA because, although its structure is simpler and smaller, it remains very complex. There is also, of course, the fact that it has never been found in any living organism. The research is nonetheless important in light of the information that could come from upcoming space missions searching for life on Mars and other celestial bodies.

It is currently thought that the first molecule of life capable of copying itself was RNA; recently, however, the idea has been gaining ground that at the beginning there were instead mixes of nucleic acids, as proposed by 2009 Nobel laureate Jack Szostak of Harvard University. In this mosaic, various cousins of our genetic material may have been present. New Scientist lists a few of them: PNA (peptide nucleic acid), GNA (glycol nucleic acid) and ANA (amyloid nucleic acid).

Source: wired.it


Handheld devices are putting new demands on chip designers. Because handhelds are battery powered, energy conservation is at a premium, and many routine tasks that would be handled by software in a PC are instead delegated to special-purpose processors that do just one thing very efficiently. At the same time, handhelds are now so versatile that not everything can be hardwired: Some functions have to be left to software.

A hardware designer creating a new device needs to decide early on which functions will be handled in hardware and which in software. Halfway through the design process, however, it may become clear that something allocated to hardware would run much better in software, or vice versa. At that point, the designer has two choices: Either incur the expense — including time delays — of revising the design midstream, or charge to market with a flawed device.

At the Association for Computing Machinery’s 17th International Conference on Architectural Support for Programming Languages and Operating Systems, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) will present a new system that enables hardware designers to specify, in a single programming language, all the functions they want a device to perform. They can thereafter designate which functions should run in hardware and which in software, and the system will automatically churn out the corresponding circuit descriptions and computer code. Revise the designations, and the circuits and code are revised as well. The system also determines how to connect the special-purpose hardware and the general-purpose processor that runs the software, and it alerts designers if they try to implement in hardware a function that will work only in software, or vice versa.

The new system is an extension of the chip-design language BlueSpec, whose theoretical foundations were laid in the 1990s and early 2000s by MIT computer scientist Arvind, the Charles W. and Jennifer C. Johnson Professor of Electrical Engineering and Computer Science, and his students. BlueSpec Inc., a company that Arvind co-founded in 2003, turned that theoretical work into working, commercial code.

As Arvind explains, in the early 1980s, an engineer designing a new chip would begin by drawing pictures of circuit layouts. “People said, ‘This is crazy,’” Arvind says. “‘Why can’t I write this description textually?’” And indeed, 1984 saw the first iteration of Verilog, a language that lets designers describe the components of a chip and automatically converts those descriptions into a circuit diagram.

BlueSpec, in turn, offers an even higher level of abstraction. Instead of describing circuitry, the designer specifies a set of rules that the chip must follow, and BlueSpec converts those specifications into Verilog code. For many designers, this turns out to be much more efficient than worrying about the low-level details of the circuit layout from the outset. Moreover, BlueSpec can often find shortcuts that a human engineer might overlook, using significantly fewer circuit components to implement a given set of rules, and it can guarantee that the resulting chip will actually do what it’s intended to do.

For the new paper, Arvind, his PhD student Myron King, and former graduate student Nirav Dave (now a computer scientist at SRI International) expanded the BlueSpec instruction set so that it can describe more elaborate operations that are possible only in software. They also introduced an annotation scheme, so the programmer can indicate which functions will be implemented in hardware and which in software, and they developed a new compiler that translates the functions allocated to hardware into Verilog and those allocated to software into C++ code.

Today, King says, “if I consider my algorithm just to be a bunch of modules that I’ve hooked together somehow, and I want to move one of these modules into hardware, I actually have to re-implement it. I have to write it again in a different language. What we’re trying to give people is a language where they can describe the algorithm once and then play around with how the algorithm is partitioned.”

King acknowledges that BlueSpec’s semantics — describing an algorithm as a set of rules rather than as a sequence of instructions — “is a radical departure from the way that most people think about software.” And indeed, among chip designers, Verilog is still much more popular than BlueSpec. “But it’s precisely this way of thinking about computation that allows you to generate both hardware and software,” King says.
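
To make that "describe once, partition later" idea concrete, here is a minimal sketch in Python. It is emphatically not BlueSpec or the MIT compiler: the Rule class, the toy Verilog stub, and the partition table are invented to illustrate the workflow the researchers describe, in which the algorithm is written once as guarded rules and the hardware/software split is a separate, easily changed annotation.

```python
# Hypothetical illustration of a "describe once, partition later" flow.
# This is not BlueSpec; it only mimics the workflow described in the article.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Rule:
    """One guarded rule: when guard(state) holds, apply action(state)."""
    name: str
    guard: Callable[[dict], bool]
    action: Callable[[dict], dict]

# The algorithm is written once, as a rule, with no commitment to HW or SW.
ACCUMULATE = Rule(
    name="accumulate",
    guard=lambda s: s["pending"] > 0,
    action=lambda s: {**s, "total": s["total"] + s["pending"], "pending": 0},
)

def emit_hw_stub(rule: Rule) -> str:
    """Stand-in for a hardware back end: emit a placeholder RTL skeleton."""
    return (f"// auto-generated stub for rule '{rule.name}'\n"
            "always @(posedge clk) begin\n"
            f"  // guard and action of '{rule.name}' would be synthesized here\n"
            "end")

def run_in_sw(rule: Rule, state: dict) -> dict:
    """Stand-in for a software back end: simply execute the rule."""
    return rule.action(state) if rule.guard(state) else state

# The partition is a separate annotation; flipping it never touches the rule.
partition: Dict[str, str] = {"accumulate": "software"}

state = {"total": 0, "pending": 7}
if partition["accumulate"] == "hardware":
    print(emit_hw_stub(ACCUMULATE))
else:
    print(run_in_sw(ACCUMULATE, state))  # -> {'total': 7, 'pending': 0}
```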

Rajesh Gupta, the Qualcomm Professor in Embedded Microsystems at the University of California at San Diego, who wasn’t involved in the research, agrees. “Oftentimes, you need a dramatic change, not for the sake of the change, but because the problem demands it,” Gupta says. But, he adds, “hardware design is hard to begin with, and if some group of very smart people at MIT — who are not exactly known for making things simple — comes up with what looks like a very sophisticated model, some people will say, ‘My chances of making a mistake here are so high that I better not risk it.’ And hardware designers tend to be a little bit more conservative, anyway. So that’s why the adoption faces challenges.”

Still, Gupta says, the ability to redraw the partition between hardware and software could be enticing enough to overcome hardware designers’ conservatism. If you’re designing hardware for portable devices, “you need to be more power efficient than you are today,” Gupta says. But, he says, a device that relies too heavily on software requires so many layers of interpretation between the code and the circuitry that “by the time it actually does anything useful, it has done many other things that are useless, which are infrastructural.” To design systems that avoid such unnecessary, energy-intensive work, “you need this integrated view of hardware and software,” he says.

Source: PhysOrg


Getting to the next generation of ever-smaller chip features will entail much more than incremental progress. It will require adopting entirely new technology and surmounting a formidable roster of technological problems. One of the most daunting of those – identifying and characterizing the factors that cause contamination of key lithographic components – has begun to yield to investigators in PML's Sensor Science Division, who have made some surprising and counterintuitive discoveries of use to industry.

In general, feature size is proportional to the wavelength of the light aimed at masks and photoresists in the lithography process. Today's super-small features are typically made with "deep" ultraviolet light at 193 nm. "But now we're trying to make a dramatic shift by dropping more than an order of magnitude, down to extreme ultraviolet (EUV) at 13.5 nm," says physicist Shannon Hill of the Ultraviolet Radiation Group. "That's going to be a big change."

This chamber and attached apparatus (see diagram, bottom) are used to introduce various gases to test contamination build-up on multi-layer surfaces.

In fact, it complicates nearly every aspect of lithography. It will necessitate new kinds of plasmas to generate around 100 watts of EUV photons. It demands a high-quality vacuum for the entire photon pathway because EUV light is absorbed by air. And of course it requires the elimination of chemical contaminants on the Bragg-reflector focusing mirrors and elsewhere in the system – contaminants that result from outgassing of materials in the vacuum chamber.

As a rule, the focusing mirrors are expected to last five years and decrease in reflectivity no more than 1 percent in that period. Innovative in-situ cleaning techniques have made that longevity possible for the present deep UV environments. But the EUV regime raises new questions. "How can we gauge how long they're going to last or how often they will have to be cleaned?" says Ultraviolet Group leader Thomas Lucatorto. "Cleaning is done at the expense of productivity, so the industry needs some kind of accelerated testing."

Unfortunately, Hill adds, "You can't even test one of these mirrors until you know how everything outgasses. Ambient hydrocarbon molecules outgassing from all the components will adsorb on the mirror's surface, and then one of these high-energy photons comes along and, through various reactions, the hydrogen goes away and you're left with this amorphous, baked-on carbonaceous deposit."

But what, exactly, is its composition? How long does it take to form, and what conditions make it form faster or slower? To answer those questions, the researchers have been using 13.5 nm photons from the NIST synchrotron in a beam about 1 mm in diameter to irradiate a 12-by-18 mm target in multiple places.

"We built a chamber where we can take a sample, admit one of these contaminant gases at some controlled partial pressure, and then expose it to EUV and see how much carbon is deposited," Hill says. The chamber is kept at 10-10 torr before admission of contaminant gases, and the inside surface plated in gold. "Gold is very inert," Hill explains, "and we want to be able to pump the gases out of the chamber with no traces remaining."


Contamination forms on a clean multi-layer surface (top) when EUV photons react (middle) with gases, resulting in carbonaceous deposits (bottom). Photos: SEMATECH

In the course of building the chamber, "we learned some solid chemistry that was unfortunate," Lucatorto recalls. "These things are typically sealed with copper gaskets. Our stainless steel chamber was coated with gold, and we used copper gaskets. Well, it turns out that gold loves copper. It naturally forms a gold-copper alloy just by being in contact. So we could not take the flange off!"

"We made it even worse," Hill adds, "because we baked these chambers – heated them up to clean them off. So it was effectively welded. We had to get a crowbar and a hammer to get the edges apart. After that, we had our gaskets covered with silver."

The PML team uses two techniques to analyze the EUV-induced contamination – x-ray photoelectron spectroscopy (XPS), which reveals the atomic composition and some information on chemical state, and spectroscopic ellipsometry, which is very sensitive to variations in optical properties – integrated with data from surface scientists at Rutgers University.

"The great thing about spectroscopic ellipsometry," Hill says, "is that it can be done in air and it can map all the spots on the sample in 8 or 9 hours. But being NIST, we're concerned with measuring things accurately. And we've determined if you want to determine how much carbon is present, ellipsometry alone may not be the right way to go – it can give you some misleading answers. XPS is much slower. It takes around 4 hours just to do one spot. But the two techniques give complementary information, so we use both.

"There are several things we wanted to investigate, and one was the pressure scaling of the contamination rate – nanometers of carbon per unit time. Each spot was made in a very controlled way, at a known pressure and EUV dose. The key thing we started finding is that the rate does not scale linearly with pressure. It scales logarithmically. That's not at all what you'd expect. It's counterintuitive, and it has really important implications for the industry. You could spend millions of dollars designing a system in which you were able to lower the background partial pressure by, say, two orders of magnitude. You would think that you'd done a lot. But in fact, you would have only decreased your contamination rate by a factor of two – maybe."

In addition, PML collaborated with the research group at Rutgers that was headed by NIST alumnus Theodore Madey until his death in 2008. "They have a world-class surface-science lab that studies the fundamental physics of adsorption," Hill says. The Rutgers investigators found, contrary to simple models in which all the adsorption sites have the same binding energy, that in fact the measured adsorption energy changes with coverage. "That is," Hill explains, "as you put more and more molecules on, they are more and more weakly bound. That can qualitatively explain the logarithmic relation we found."
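
One standard way to formalize a binding energy that weakens with coverage is a Temkin-type isotherm. The article does not say that this is the exact model the Rutgers group used, but it shows how a linearly decreasing adsorption energy naturally produces a logarithmic dependence on pressure:

```latex
% Assume the adsorption energy falls linearly with coverage:
%   E(\theta) = E_0 - f\,\theta .
% In the intermediate-coverage regime the equilibrium coverage then obeys
\theta \;\approx\; \frac{RT}{f}\,\ln\!\left(K_0\, p\right),
% so a deposition rate that tracks the adsorbed-hydrocarbon coverage \theta
% inherits a \ln p dependence rather than a linear one.
```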


EUV lithography requires multiple mirrors (multi-layer Bragg reflectors) to position and focus the EUV beam.

"Shannon and Ted [Madey] were the first to fully explain this and present it to the surface-science community," Lucatorto says. Industry benefits because the work clearly shows manufacturers that they cannot evaluate a product's contamination potential by taking measurements at a single pressure or intensity.

In a parallel line of research, Hill, Lucatorto and the other members of the Ultraviolet Radiation Group – which includes Nadir Faradzhev, Charles Tarrio, and Steve Grantham – along with collaborator Lee Richter of the Surface and Interface Group, are studying the outgassing of different photoresists that may be used in EUV lithography.

The outgas characteristics have to be known in rigorous detail before a wafer and resist can be placed in an enormously expensive lithography apparatus. Using another station on the NIST synchrotron's Beam Line 1, they are exposing the photoresists to 13.5 nm light and measuring the outgassed substances both in the gas phase and as they are "baked" by EUV photons on a witness plate.

"There are commercially available ways to test resists using electrons as proxies for EUV light, under the assumption that the effects are relatively similar and scale in comparable ways," Hill says. "But right now, NIST is the only place available to any company to test these things using photons." So far, the throughput is around two a week.

"We'll get faster," Lucatorto says. "Companies would like us to do 10 or more a week. By comparison, for deep UV lithography – when contamination from outgassing was not as great a concern – resist manufacturers would test thousands of resists each month to refine lithographic quality."

Source: NIST


Life Technologies Corp. recently announced that its technology, which will make it possible to sequence the human genome in a single day for 1,000 USD, will be developed in collaboration with Baylor College of Medicine, the Yale School of Medicine and the Broad Institute in Cambridge.

DNA analysis will be affordable for everyone

Another American company, Illumina of San Diego, says it will introduce a new technology that will completely decode the human genome within 24 hours.

Decoding the human genome in record time will bring major benefits to medicine, especially in cases where clinicians need to establish a patient's vulnerability to various diseases or the risk of allergies to certain drugs, or want to try out new treatments.

The cost of 1,000 USD for decoding a genome is similar to the amount most laboratories charge today, said Chris Nussbaum, co-director of the Genome Sequencing and Analysis Program at the Broad Institute. The only difference is how long it takes to deliver the requested data.

For his part, Richard Gibbs, director of the Human Genome Sequencing Center at Baylor, said: "We will see whether the machine lives up to our expectations in terms of cost and accuracy. We remain optimistic."

Source: AFP - via descopera.ro


He started out with a simple audio circuit and ended up a billionaire. The story of Ray Dolby, a pioneer of digital technologies born in Portland, Oregon (USA) on January 18, 1933, literally passes through the sound barrier. In 1949, at just 16, his part-time job at Ampex let him get his hands on the first tape recorder on the market. So, after a degree at Stanford and a doctorate at Cambridge, the young Dolby decided to strike out on his own and change the way we perceive the sounds recorded on magnetic tape.

Yes, that Dolby

It all began with the construction of the first compander, an electronic device capable of reducing the background noise and interference within audio signals. The idea, born in 1965 when Dolby crossed the ocean to found Dolby Labs in England, was a great success with professional recording studios. Three years later, a new version of the circuit, the Dolby B-type, was integrated into consumer tape recorders: it was the start of a great climb to success.

In 1976, Dolby returned to the States and settled the company permanently in San Francisco, the city where he had spent much of his childhood. In the meantime, thanks to a patent granted in 1969, the Dolby Sound System had been born: the technology that marked a turning point for film sound. In practice, it was a system for improving the quality of dialogue in films, where the soundtrack and the dialogue were often mixed together poorly. To give an idea, the first film to use the Dolby system was a cinema masterpiece: A Clockwork Orange.

As time went on, audio technology took further steps forward, staying in close contact with the world of cinema. In 1992, the atmosphere of Batman Returns became decidedly more enveloping than that of any film screened up to then. Tim Burton's film was the first to experiment with the Dolby Stereo Digital surround system, in which the audio track was split into several channels, each connected to amplifiers placed in front of, beside, and behind the audience.

Within a few years, sound began to surround viewers in their own homes as well. In 1995 the surround system was applied to home video and established itself as one of the audio standards preferred by film producers. The company Dolby founded grew enormously and was listed on the stock exchange in 2005. In 2011, after 45 years of activity, the father of surround sound left the company's board to retire to private life and enjoy the nest egg he had accumulated over time. According to Forbes, Dolby is one of the 400 richest people in America: he ranks 144th, with a fortune of 2.9 billion dollars.

Source: wired.it


Researchers at the U.S. Department of Energy's (DOE) Argonne National Laboratory have developed an extraordinarily efficient two-step process that electrolyzes, or separates, hydrogen atoms from water molecules before combining them to make molecular hydrogen (H2), which can be used in any number of applications from fuel cells to industrial processing.

Easier routes to the generation of hydrogen have long been a target of scientists and engineers, principally because the process to create the gas requires a great deal of energy. Approximately 2 percent of all electric power generated in the United States is dedicated to the production of molecular hydrogen, so scientists and engineers are searching for any way to cut that figure. "People understand that once you have hydrogen you can extract a lot of energy from it, but they don't realize just how hard it is to generate that hydrogen in the first place," said Nenad Markovic, an Argonne senior chemist who led the research.

This image depicts the series of reactions by which water is separated into hydrogen molecules and hydroxide (OH-) ions. The process is initiated by nickel-hydroxide clusters (green) embedded on a platinum framework (gray). Credit: Flikr

While a great deal of hydrogen is created by reforming natural gas at high temperatures, that process creates carbon-dioxide emissions. "Water electrolyzers are by far the cleanest way of producing hydrogen," Markovic said. "The method we've devised combines the capabilities of two of the best materials known for water-based electrolysis."

Most previous experiments in water-based electrolysis rely on special metals, like platinum, to adsorb and recombine reactive hydrogen intermediates into stable molecular hydrogen. Markovic's research focuses on the previous step, which involves improving the efficiency by which an incoming water molecule would disassociate into its fundamental components. To do this, Markovic and his colleagues added clusters of a metallic complex known as nickel-hydroxide—Ni(OH)2. Attached to a platinum framework, the clusters tore apart the water molecules, allowing for the freed hydrogen to be catalyzed by the platinum.
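
In textbook terms, the two steps correspond to the Volmer and Tafel steps of alkaline hydrogen evolution. The sketch below is the standard way of writing them, not a claim about the detailed mechanism reported in the Science paper (which may also involve other pathways, such as the Heyrovsky step):

```latex
% Water dissociation (Volmer step), promoted here by the Ni(OH)2 clusters:
\mathrm{H_2O + e^- \;\longrightarrow\; H_{ads} + OH^-}
% Recombination of adsorbed hydrogen on the platinum framework (Tafel step):
\mathrm{2\,H_{ads} \;\longrightarrow\; H_2}
% Net cathode reaction in alkaline water electrolysis:
\mathrm{2\,H_2O + 2\,e^- \;\longrightarrow\; H_2 + 2\,OH^-}
```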

"One of the most important points of this experiment is that we're combining two materials with very different benefits," said Markovic. "The advantage of using both oxides and metals in conjunction dramatically improves the catalytic efficiency of the whole system."

According to Argonne materials scientist George Crabtree, who helped to initiate the establishment of Argonne's energy conversion program, the researchers' success is attributable to their ability to work on what are known as "single-crystal" systems—defect-free materials that allow scientists to accurately predict how certain materials will behave at the atomic level. "We have not only increased catalytic activity by a factor of 10, but also now understand how each part of the system works. By scaling up from the single crystal to a real-world catalyst, this work illustrates how fundamental understanding leads quickly to innovative new technologies."

This work, supported by the DOE Office of Science, is reported in the December 2 issue of Science.

Source: Argonne National Laboratory
