Trilingual World Observatory: italiano, english, română. GLOBAL NEWS & more... by the Editorial Staff
All posts published on the site are listed below, in chronological order.

Life Technologies Corp. recently announced that its technology, which will make it possible to sequence a human genome in a single day for 1,000 USD, will be developed in collaboration with Baylor College of Medicine, Yale School of Medicine and the Broad Institute in Cambridge.

DNA analysis will soon be within everyone's reach

Another American company, Illumina of San Diego, claims it will introduce a new technology that can fully decode the human genome within 24 hours.

Decoding the human genome in record time will bring major benefits to medicine, especially in cases where clinicians must establish a patient's vulnerability to various diseases or the risk of allergy to particular drugs, or attempt first-of-their-kind treatments.

The $1,000 cost of decoding a genome is similar to the price most laboratories charge today, says Chris Nussbaum, co-director of the Genome Sequencing and Analysis Program at the Broad Institute. The only difference is the turnaround time until the requested data are delivered.

Richard Gibbs, director of the Human Genome Sequencing Center at Baylor, says: "We will see whether the machine lives up to our expectations in terms of cost and accuracy. We remain optimistic."

Source: AFP - via

Articolo (p)Link Commenti Commenti (0)  Storico Storico  Stampa Stampa

Getting there, however, will entail much more than incremental progress. It will require adopting entirely new technology and surmounting a formidable roster of technological problems. One of the most daunting of those – identifying and characterizing the factors that cause contamination of key lithographic components – has begun to yield to investigators in PML's Sensor Science Division, who have made some surprising and counterintuitive discoveries of use to industry.

In general, feature size is proportional to the wavelength of the light aimed at masks and photoresists in the lithography process. Today's super-small features are typically made with "deep" ultraviolet light at 193 nm. "But now we're trying to make a dramatic shift by dropping more than an order of magnitude, down to extreme ultraviolet (EUV) at 13.5 nm," says physicist Shannon Hill of the Ultraviolet Radiation Group. "That's going to be a big change."

This chamber and attached apparatus (see diagram, bottom) are used to introduce various gases to test contamination build-up on multi-layer surfaces.

In fact, it complicates nearly every aspect of lithography. It will necessitate new kinds of plasmas to generate around 100 watts of EUV photons. It demands a high-quality vacuum for the entire photon pathway because EUV light is absorbed by air. And of course it requires the elimination of chemical contaminants on the Bragg-reflector focusing mirrors and elsewhere in the system – contaminants that result from outgassing of materials in the vacuum chamber.

As a rule, the focusing mirrors are expected to last five years and decrease in reflectivity no more than 1 percent in that period. Innovative in-situ cleaning techniques have made that longevity possible for the present deep UV environments. But the EUV regime raises new questions. "How can we gauge how long they're going to last or how often they will have to be cleaned?" says Ultraviolet Group leader Thomas Lucatorto. "Cleaning is done at the expense of productivity, so the industry needs some kind of accelerated testing."

Unfortunately, Hill adds, "You can't even test one of these mirrors until you know how everything outgasses. Ambient hydrocarbon molecules outgassing from all the components will adsorb on the mirror's surface, and then one of these high-energy photons comes along and, through various reactions, the hydrogen goes away and you're left with this amorphous, baked-on carbonaceous deposit."

But what, exactly, is its composition? How long does it take to form, and what conditions make it form faster or slower? To answer those questions, the researchers have been using 13.5 nm photons from the NIST synchrotron in a beam about 1 mm in diameter to irradiate a 12-by-18 mm target in multiple places.

"We built a chamber where we can take a sample, admit one of these contaminant gases at some controlled partial pressure, and then expose it to EUV and see how much carbon is deposited," Hill says. The chamber is kept at 10⁻¹⁰ torr before the contaminant gases are admitted, and its inside surface is plated in gold. "Gold is very inert," Hill explains, "and we want to be able to pump the gases out of the chamber with no traces remaining."

Contamination forms on a clean multi-layer surface (top) when EUV photons react (middle) with gases, resulting in carbonaceous deposits (bottom). Photos: SEMATECH

In the course of building the chamber, "we learned some solid chemistry that was unfortunate," Lucatorto recalls. "These things are typically sealed with copper gaskets. Our stainless steel chamber was coated with gold, and we used copper gaskets. Well, it turns out that gold loves copper. It naturally forms a gold-copper alloy just by being in contact. So we could not take the flange off!"

"We made it even worse," Hill adds, "because we baked these chambers – heated them up to clean them off. So it was effectively welded. We had to get a crowbar and a hammer to get the edges apart. After that, we had our gaskets covered with silver."

The PML team uses two techniques to analyze the EUV-induced contamination – x-ray photoelectron spectroscopy (XPS), which reveals the atomic composition and some information on chemical state, and spectroscopic ellipsometry, which is very sensitive to variations in optical properties – integrated with data from surface scientists at Rutgers University.

"The great thing about spectroscopic ellipsometry," Hill says, "is that it can be done in air and it can map all the spots on the sample in 8 or 9 hours. But being NIST, we're concerned with measuring things accurately. And we've found that if you want to determine how much carbon is present, ellipsometry alone may not be the right way to go – it can give you some misleading answers. XPS is much slower. It takes around 4 hours just to do one spot. But the two techniques give complementary information, so we use both.

"There are several things we wanted to investigate, and one was the pressure scaling of the contamination rate – nanometers of carbon per unit time. Each spot was made in a very controlled way, at a known pressure and EUV dose. The key thing we started finding is that the rate does not scale linearly with pressure. It scales logarithmically. That's not at all what you'd expect. It's counterintuitive, and it has really important implications for the industry. You could spend millions of dollars designing a system in which you were able to lower the background partial pressure by, say, two orders of magnitude. You would think that you'd done a lot. But in fact, you would have only decreased your contamination rate by a factor of two – maybe."
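Hill's factor-of-two arithmetic can be sketched numerically. In the toy model below, the constants `b` and `p_min` are illustrative assumptions, not NIST's measured values; only the logarithmic form of the scaling comes from the article:

```python
import math

def contamination_rate(p_torr, b=1.0, p_min=1e-9):
    """Carbon deposition rate (arbitrary units) under the assumed
    logarithmic pressure scaling: rate = b * log10(p / p_min).
    b and p_min are illustrative constants, not measured values."""
    return b * math.log10(p_torr / p_min)

# Lowering the background partial pressure by two orders of
# magnitude, from 1e-5 to 1e-7 torr, only halves the rate:
print(contamination_rate(1e-5))   # → 4.0
print(contamination_rate(1e-7))   # → 2.0
```

Under a linear law the same pressure drop would cut the rate a hundredfold; the logarithm is what makes an expensive vacuum upgrade pay off so poorly.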

In addition, PML collaborated with the research group at Rutgers that was headed by NIST alumnus Theodore Madey until his death in 2008. "They have a world-class surface-science lab that studies the fundamental physics of adsorption," Hill says. The Rutgers investigators found, contrary to simple models in which all the adsorption sites have the same binding energy, that in fact the measured adsorption energy changes with coverage. "That is," Hill explains, "as you put more and more molecules on, they are more and more weakly bound. That can qualitatively explain the logarithmic relation we found."
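A textbook way to see how coverage-dependent binding yields a logarithmic law is the Temkin isotherm, in which the adsorption energy falls linearly with coverage, so equilibrium coverage grows only logarithmically with pressure. This is a generic illustration, not the Rutgers group's actual model, and the constants `K` and `f` are invented:

```python
import math

def temkin_coverage(p, K=1e6, f=10.0):
    """Temkin-type isotherm: theta = (1/f) * ln(K * p).
    K (an affinity constant) and f (how quickly the binding
    energy weakens with coverage) are illustrative values."""
    return math.log(K * p) / f

# Doubling the pressure adds a fixed increment to the coverage
# instead of doubling it:
print(temkin_coverage(2e-6) - temkin_coverage(1e-6))  # ln(2)/10 ≈ 0.0693
```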

EUV lithography requires multiple mirrors (multi-layer Bragg reflectors) to position and focus the EUV beam.

"Shannon and Ted [Madey] were the first to fully explain this and present it to the surface-science community," Lucatorto says. Industry benefits because the work clearly shows manufacturers that they cannot evaluate a product's contamination potential by taking measurements at a single pressure or intensity.

In a parallel line of research, Hill, Lucatorto and the other members of the Ultraviolet Radiation Group – which includes Nadir Faradzhev, Charles Tarrio, and Steve Grantham – along with collaborator Lee Richter of the Surface and Interface Group, are studying the outgassing of different photoresists that may be used in EUV lithography.

The outgas characteristics have to be known in rigorous detail before a wafer and resist can be placed in an enormously expensive lithography apparatus. Using another station on the NIST synchrotron's Beam Line 1, they are exposing the photoresists to 13.5 nm light and measuring the outgassed substances both in the gas phase and as they are "baked" by EUV photons on a witness plate.

"There are commercially available ways to test resists using electrons as proxies for EUV light, under the assumption that the effects are relatively similar and scale in comparable ways," Hill says. "But right now, NIST is the only place available to any company to test these things using photons." So far, the throughput is around two a week.

"We'll get faster," Lucatorto says. "Companies would like us to do 10 or more a week. By comparison, for deep UV lithography – when contamination from outgassing was not as great a concern – resist manufacturers would test thousands of resists each month to refine lithographic quality."

Source: NIST


But that’s putting new demands on chip designers. Because handhelds are battery powered, energy conservation is at a premium, and many routine tasks that would be handled by software in a PC are instead delegated to special-purpose processors that do just one thing very efficiently. At the same time, handhelds are now so versatile that not everything can be hardwired: Some functions have to be left to software.

A hardware designer creating a new device needs to decide early on which functions will be handled in hardware and which in software. Halfway through the design process, however, it may become clear that something allocated to hardware would run much better in software, or vice versa. At that point, the designer has two choices: Either incur the expense — including time delays — of revising the design midstream, or charge to market with a flawed device.

At the Association for Computing Machinery’s 17th International Conference on Architectural Support for Programming Languages and Operating Systems, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) will present a new system that enables hardware designers to specify, in a single programming language, all the functions they want a device to perform. They can thereafter designate which functions should run in hardware and which in software, and the system will automatically churn out the corresponding circuit descriptions and computer code. Revise the designations, and the circuits and code are revised as well. The system also determines how to connect the special-purpose hardware and the general-purpose processor that runs the software, and it alerts designers if they try to implement in hardware a function that will work only in software, or vice versa.

The new system is an extension of the chip-design language BlueSpec, whose theoretical foundations were laid in the 1990s and early 2000s by MIT computer scientist Arvind, the Charles W. and Jennifer C. Johnson Professor of Electrical Engineering and Computer Science, and his students. BlueSpec Inc., a company that Arvind co-founded in 2003, turned that theoretical work into working, commercial code.

As Arvind explains, in the early 1980s, an engineer designing a new chip would begin by drawing pictures of circuit layouts. “People said, ‘This is crazy,’” Arvind says. “‘Why can’t I write this description textually?’” And indeed, 1984 saw the first iteration of Verilog, a language that lets designers describe the components of a chip and automatically converts those descriptions into a circuit diagram.

BlueSpec, in turn, offers an even higher level of abstraction. Instead of describing circuitry, the designer specifies a set of rules that the chip must follow, and BlueSpec converts those specifications into Verilog code. For many designers, this turns out to be much more efficient than worrying about the low-level details of the circuit layout from the outset. Moreover, BlueSpec can often find shortcuts that a human engineer might overlook, using significantly fewer circuit components to implement a given set of rules, and it can guarantee that the resulting chip will actually do what it’s intended to do.
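The "set of rules" abstraction can be illustrated with a toy interpreter. The sketch below is Python, not BlueSpec syntax, and its one-rule-at-a-time scheduler is a simplification of what BlueSpec actually compiles; it only conveys the flavor of guarded atomic rules:

```python
def run(state, rules, max_steps=1000):
    """Repeatedly fire any rule whose guard holds, one at a time
    (atomically), until no guard is true."""
    for _ in range(max_steps):
        for guard, action in rules:
            if guard(state):
                action(state)
                break
        else:
            break  # quiescent: no rule can fire
    return state

# Classic example: GCD specified as two guarded rules, the way a
# small circuit might be described.
rules = [
    # swap when a > b (kwargs are evaluated before update, so this swaps)
    (lambda s: s["a"] > s["b"] and s["b"] != 0,
     lambda s: s.update(a=s["b"], b=s["a"])),
    # subtract when a <= b
    (lambda s: s["a"] <= s["b"] and s["b"] != 0,
     lambda s: s.update(b=s["b"] - s["a"])),
]

print(run({"a": 48, "b": 36}, rules)["a"])  # → 12
```

The designer states only what each rule does and when it may fire; deciding how the rules are scheduled and wired is left to the compiler, which is where the shortcuts a human might overlook come from.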

For the new paper, Arvind, his PhD student Myron King, and former graduate student Nirav Dave (now a computer scientist at SRI International) expanded the BlueSpec instruction set so that it can describe more elaborate operations that are possible only in software. They also introduced an annotation scheme, so the programmer can indicate which functions will be implemented in hardware and which in software, and they developed a new compiler that translates the functions allocated to hardware into Verilog and those allocated to software into C++ code.

Today, King says, “if I consider my algorithm just to be a bunch of modules that I’ve hooked together somehow, and I want to move one of these modules into hardware, I actually have to re-implement it. I have to write it again in a different language. What we’re trying to give people is a language where they can describe the algorithm once and then play around with how the algorithm is partitioned.”
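King's "describe once, re-partition later" idea can be mimicked with a simple annotation scheme. The decorator below is invented for illustration (it is not the actual MIT compiler, which emits Verilog and C++), but it shows how moving a module across the hardware/software boundary becomes a one-line annotation change rather than a rewrite:

```python
PARTITION = {}

def target(domain):
    """Mark a module for 'hardware' or 'software' implementation.
    A real compiler would read this tag and emit Verilog or C++;
    here we merely record the designer's choice."""
    def mark(fn):
        PARTITION[fn.__name__] = domain
        return fn
    return mark

@target("hardware")           # change to "software" to re-partition
def fir_filter(samples, taps):
    # A signal-processing kernel, a typical hardware candidate.
    n = len(taps)
    return [sum(t * s for t, s in zip(taps, samples[i:i + n]))
            for i in range(len(samples) - n + 1)]

@target("software")
def schedule(packets):
    # Control-heavy logic, a typical software candidate.
    return sorted(packets, key=lambda p: p["deadline"])

print(PARTITION)  # → {'fir_filter': 'hardware', 'schedule': 'software'}
```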

King acknowledges that BlueSpec’s semantics — describing an algorithm as a set of rules rather than as a sequence of instructions — “is a radical departure from the way that most people think about software.” And indeed, among chip designers, Verilog is still much more popular than BlueSpec. “But it’s precisely this way of thinking about computation that allows you to generate both hardware and software,” King says.

Rajesh Gupta, the Qualcomm Professor in Embedded Microsystems at the University of California at San Diego, who wasn’t involved in the research, agrees. “Oftentimes, you need a dramatic change, not for the sake of the change, but because the problem demands it,” Gupta says. But, he adds, “hardware design is hard to begin with, and if some group of very smart people at MIT — who are not exactly known for making things simple — comes up with what looks like a very sophisticated model, some people will say, ‘My chances of making a mistake here are so high that I better not risk it.’ And hardware designers tend to be a little bit more conservative, anyway. So that’s why the adoption faces challenges.”

Still, Gupta says, the ability to redraw the partition between hardware and software could be enticing enough to overcome hardware designers’ conservatism. If you’re designing hardware for portable devices, “you need to be more power efficient than you are today,” Gupta says. But, he says, a device that relies too heavily on software requires so many layers of interpretation between the code and the circuitry that “by the time it actually does anything useful, it has done many other things that are useless, which are infrastructural.” To design systems that avoid such unnecessary, energy-intensive work, “you need this integrated view of hardware and software,” he says.

Source: PhysOrg


DNA and RNA are today the only molecules of life: they self-organize, replicate, and are translated into enzymes and proteins, and they are present in the cells of every living being. But perhaps they were not the only ones in the Earth's long history. A third molecule has in fact joined the list of possible candidates: TNA, in which the sugar threose replaces the deoxyribose and ribose of DNA and RNA, respectively.

DNA and RNA are indeed very complex molecules, probably too complex to have been the first forms of genetic material to appear. Hence various research groups have formulated hypotheses and tested the possibility that, over billions of years, other configurations evolved (and later disappeared). TNA is a hypothesis that is already several years old. Now John Chaput and his team at the Center for Evolutionary Medicine and Informatics, at Arizona State University's Biodesign Institute, have created TNA molecules and, for the first time, followed their evolution on a substrate presenting a different protein each time.

The molecules proved able to self-organize into complex three-dimensional shapes and to bind the protein, developing a high degree of affinity. The study was published in Nature Chemistry and suggests that in the future it may be possible to evolve enzymes capable of sustaining a first form of life based on TNA.

As New Scientist reports, however, it is unlikely that TNA was a precursor of DNA and RNA because, although its structure is simpler and smaller, it remains very complex. And then, of course, there is the fact that it has never been found in any living organism. The research is nonetheless important, also in light of the information that could come from upcoming space missions searching for life on Mars and other celestial bodies.

It is currently thought that the first molecule of life able to duplicate itself was RNA; recently, however, the hypothesis has been gaining ground that in the beginning there were instead mixes of nucleic acids, as proposed by 2009 Nobel laureate Jack Szostak of Harvard University. Various cousins of our genetic material may have been present in this mosaic. New Scientist lists some of them: PNA (peptide nucleic acid), GNA (glycol nucleic acid) and ANA (amyloid nucleic acid).



Eyeless shrimp and anemones with white tentacles have been photographed near fissures in the ocean floor through which water emerges at temperatures of up to 450 degrees Celsius.

Never-before-seen creatures discovered around the world's deepest submarine hydrothermal vents (VIDEO)

The submarine hot springs, named the Beebe Vent Field in honor of the first scientist to venture into the ocean depths, were discovered in the Caribbean Sea, south of the Cayman Islands.

In 2010, geochemist Doug Connelly of the UK's National Oceanography Centre and biologist Jon Copley of the University of Southampton used a deep-diving robotic submersible to survey the sea floor. They discovered new submarine hydrothermal vents rising almost three kilometers from the seabed, near the underwater mountain Dent.

The discovery shows that these submarine hydrothermal vents are far more widespread than initially thought. Moreover, the cameras on the robotic submersible captured astonishing images of new species, among them a depigmented, ghostly-looking shrimp, named Rimicaris hybisae, which grows in colonies of up to 2,000 individuals per square meter. Lacking normal eyes, these shrimp have a light-sensitive organ on their backs that helps them navigate.

A related species, named Rimicaris exoculata, was discovered at a submarine hydrothermal vent 4,000 kilometers away, on the Mid-Atlantic Ridge.

Besides the pale shrimp, other creatures were found at the Dent seamount, including a snake-like fish, an unknown species of snail, and an amphipod crustacean whose appearance is reminiscent of a flea.

Source: AFP - via


Using a specially designed facility, UCLA stem cell scientists have taken human skin cells, reprogrammed them into cells with the same unlimited potential as embryonic stem cells, and then differentiated them into neurons while completely avoiding the use of animal-based reagents and feeder conditions throughout the process.

Generally, stem cells are grown using mouse "feeder" cells, which help the stem cells flourish and grow. But such animal-based products can lead to unwanted variations and contamination, and the cells must be thoroughly tested before they can be deemed safe for use in humans.

The UCLA study represents the first time scientists have derived induced pluripotent stem (iPS) cells with the potential for clinical use and differentiated them into neurons in animal origin–free conditions using commercially available reagents to facilitate broad application, said Saravanan Karumbayaram, the first author of the study and an associate researcher with the Eli and Edythe Broad Center of Regenerative Medicine and Stem Cell Research at UCLA.

The Broad Center researchers also developed a set of standard operating procedures for the process so that other scientists can benefit from the derivation and differentiation techniques. The process was performed under good manufacturing practices (GMP) protocols, which are tightly controlled and regulated, so the cells created meet all the standards required for use in humans.

"Developments in stem cell research show that pluripotent stem cells ultimately will be translated into therapies, so we are working to develop the methods and systems needed to make the cells safe for human use," Karumbayaram said.

The study was published Dec. 7 in the early online edition of the inaugural issue of the peer-reviewed journal Stem Cells Translational Medicine, a new journal that seeks to bridge stem cell research and clinical trials.

Karumbayaram tested six different animal-free media formulations before arriving at a composition that generated the most robust pluripotent stem cells. He combined two commercial media solutions to create his own mix and tried different concentrations of an important growth factor.

"The colonies we get are of very good quality and are quite stable," said Karumbayaram, who compared his animal-free colonies to those created conventionally using mouse feeder cells. Efficiency did suffer. Fewer colonies were created using the animal-free feeders, but the colonies did remain stable for at least 20 passages.

The neurons that resulted from the process started life as a small skin-punch biopsy from a volunteer. The skin cells were then reprogrammed to become pluripotent stem cells with the ability to make any cell in the human body. These iPS cells were grown in colonies and were later coaxed into becoming neural precursor cells and, finally, neurons.

The animal-free cells were compared at every step in the process to cells produced by typical animal-based methods, Karumbayaram said, and were found to be of very similar quality.

"We were very excited when we saw the first colonies growing, because we were not sure it would be possible to derive and grow cells completely animal-free," he said.

Because the cells were grown in a special facility designed to culture animal-free cells, the testing and examination required to make clinical-grade cells should be much simpler, said William Lowry, senior author of the study and an assistant professor of molecular, cell and developmental biology in the UCLA Division of Life Sciences.

To date, at least 15 animal-free iPS cell lines have been created at the Broad Stem Cell Research Center.

"It's critical to note that we are nowhere near ready to use these cells in the clinic," Lowry said. "We are working to develop methods to make sure these cells are genetically stable and will be as safe as possible for human use. The main goal of this project was to generate a platform that will one day allow translation of stem cells to the clinic."

Source: Medical Xpress


That web series are the next big promise for the future of television has been debated for months now. But in recent days, news of ambitious new television shows arriving on the Web has come at a dizzying pace: Netflix is about to launch a series starring Steve Van Zandt of The Sopranos (boom!), Tom Hanks has an animated series in the works to be released exclusively on Yahoo (boom!), and House of Cards, the new show commissioned by Netflix, will involve Kevin Spacey and David Fincher (double boom!). What is going on? Why are Hollywood stars suddenly throwing themselves into online streaming?

We will get there. But first, a bit of history. It all began toward the end of the 1990s, when some television producers had the bright idea of shooting episodes intended solely for the Web audience. In February 1997, NBC began publishing episodes of a spin-off of the series Homicide exclusively on its own site. Meanwhile, the animated series of Magic Butter were all the rage online. At the time, web episodes (or webisodes) were little more than a side experiment. It took a few years before the first series designed specifically for the web ecosystem appeared.

Red vs Blue (2003, a series created from Halo gameplay footage), The Guild (2007, three-minute micro-episodes about a group of people addicted to MMORPGs) and Dr. Horrible's Sing-Along Blog (2008, Joss Whedon's divertissement about a failed mad scientist played by Neil Patrick Harris) are a few examples of successful web series created over the past 10 years. These were still small productions, nothing like the multimillion-dollar juggernauts occupying the television channels in the meantime (the first episode of Boardwalk Empire cost something like 60 million dollars). Pioneer One itself, a science-fiction web series active for little more than a year, was born in a college room from two film-loving kids. So why are all these multimillion-dollar operations suddenly springing up? What has changed in recent months?

What has changed is that, after a string of false starts, this could be the year of Smart TVs. To see why, just take a stroll around Las Vegas these days, at CES 2012, where LG, Samsung and Panasonic have presented their own recipes for connected TV. In particular, it is worth noting that Panasonic, for the development of its new VIERA Connect, decided to turn to MySpace. "We are ready to take entertainment and television a step forward into the future by integrating the social-network experience," declared MySpace co-owner Justin Timberlake (another star, with a bang). "This is the evolution of one of our greatest inventions, television. And today we no longer have to huddle around a single set to watch it together."

In short, if until recently Smart TVs were still a project under development, today connected TVs are a reality. But to achieve truly shared television, these connected sets need to be filled with quality content. This explains the zeal with which Netflix, Hulu and the like are scrambling to put together high-budget web shows. The most attentive observers had already seen it coming two months earlier, when Disney Interactive Media and YouTube announced their intention to invest 15 million dollars in the production of animated web series. Those suspicions found partial confirmation the following month, when YouTube announced the completion of its redesign. The historic online video hub shed its traditional look to strengthen its social side and focus on creating customizable web channels, a sort of reworking of the television approach for the web. With the AutoPlay feature, users could personalize their own YouTube channel, lean back in their chairs, and watch streams of videos scroll across the screen without interruption.

YouTube, meanwhile, has also begun investing tens of millions of dollars in the production of quality content, followed closely by Yahoo, Hulu and Netflix. Another unmistakable signal comes precisely from Netflix. What started as a DVD-by-mail rental service, and in 2008 began renting videos online, is now extending its reach into Europe as well. In these very hours Netflix has become fully operational in the United Kingdom, providing on-demand content for 7 euros a month. The service's arrival in Italy is certain, but when and how remain unknown.

In any case, the developments described above show that something too big to fail is brewing. If a year ago the first Google TV slid into the abyss, along with millions of euros, precisely because of the lack of content (and of platforms willing to provide it), the scenario for 2012 has decidedly changed. Will it really be the year of connected TVs and web series? We shall see.


By Admin (from 25/02/2012 @ 08:03:29, in ro - Stiinta si Societate, read 3170 times)

Our galaxy hosts far more planets than previously believed. Astronomers have announced that each of the 100 billion stars in the Milky Way has, on average, at least one planet as a companion.

The discovery marks a radical shift in how scientists perceive planetary systems in the cosmos. Our own solar system, considered unique until recently, is just one among billions.

Our galaxy hosts hundreds of billions of planets

Until April 1994, no other solar system had been discovered, but since then their number has grown continuously. The Kepler space telescope is discovering new solar systems all the time.

"Planets are the rule, not the exception," explained Arnaud Cassan, lead astronomer at the Institute of Astrophysics in Paris. He coordinated a team of 42 scientists who devoted six years to studying millions of stars at the center of the Milky Way. The research represents the most thorough effort yet to measure the prevalence of planets in our galaxy.

To estimate the number of planets, Dr. Cassan and his colleagues studied 100 million stars located 3,000 to 25,000 light-years from Earth. The number of planets they discovered was then compared with results from other studies that used different detection techniques, to build a statistical sample of stars, and of the planets orbiting them, representative of our galaxy.

According to the researchers' calculations, most of the stars in the Milky Way (which number at least 100 billion, according to the latest estimates) have one or more planets.

About 66% of the stars host a planet with five times the mass of Earth, and half of the stars have a planet with a mass similar to Neptune's (17 times that of Earth). Almost 20% of the detected stars are orbited by a giant gaseous planet (like Jupiter, or even larger).
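Taken at face value, those percentages imply staggering absolute counts. A back-of-the-envelope tally, using the article's figure of at least 100 billion stars:

```python
stars = 100e9  # lower bound on Milky Way stars quoted in the article

fractions = {
    "super-Earths (~5 Earth masses)":       0.66,
    "Neptune-mass (~17 Earth masses)":      0.50,
    "gas giants (Jupiter-class or larger)": 0.20,
}

for kind, frac in fractions.items():
    print(f"{kind}: ~{frac * stars:.1e} host stars")
# Even the rarest class implies some 2e10 giant planets.
```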

"We can pick any star at random - there is certainly a planet orbiting it," said Uffe Grae Jorgensen, an astronomer at the University of Copenhagen, Denmark.

The researchers also made a discovery that until now seemed to belong to science fiction: millions of planets may orbit two stars.

"We are beginning to discover a new kind of planetary system that looks nothing like what exists in our own solar system," explained William Welsh, an astronomer at San Diego State University.

Because the Milky Way hosts far more planets than previously believed, the chances that one of them harbors life are that much greater, the researchers concluded.

Source: Wall Street Journal - via


A team of researchers led by Guillaume Gervais of McGill's Physics Department and Mike Lilly of Sandia National Laboratories has managed to develop one of the smallest electronic circuits in the world, using nanowires spaced so closely that the gap between them has to be measured at the atomic level.

Miniaturization has been the dominant trend in the digital industry for years, and nano-electronics, which scientists have been exploring for the past 20 years, is considered the next obvious step, allowing for even smaller and more powerful electronic devices.

“People have been working on nanowires for 20 years,” says Sandia lead researcher Mike Lilly. “At first, you study such wires individually or all together, but eventually you want a systematic way of studying the integration of nanowires into nanocircuitry. That’s what’s happening now. It’s important to know how nanowires interact with each other rather than with regular wires.”

While nanowires have been studied extensively in the past, this study is the first of its kind to examine how the wires in an electronic circuit interact with one another when packed so tightly together. The researchers used gallium-arsenide nanowire structures placed one above the other, separated by only a few atomic layers of extremely pure, home-grown crystal - two wires separated by only about 150 atoms, or 15 nanometers (nm).

At this extremely tiny scale, new properties and characteristics arise, along with inherent challenges for the researchers. For one, the nanowires behave as 1-D structures, very different from the usual bulk 3-D wires found in ordinary electrical devices. In these wires, current can flow only along a single direction, rather than moving horizontally, vertically, and back and forth as in a typical 3-D conductor.

“In the long run, our test device will allow us to probe how 1-D conductors are different from 2-D and 3-D conductors,” Lilly said. “They are expected to be very different, but there are relatively few experimental techniques that have been used to study the 1-D ground state.”

At the nanoscale, moreover, the behavior of the circuit is described by quantum physics - in this case, by the Coulomb drag effect. This force operates between the wires and is inversely proportional to the square of the gap separating them. In conventional circuitry, where the gap between wires is plainly visible, the drag force is negligible; at nanoscale distances, however, it becomes strong enough to disturb electrons, so that a current driven through one nanowire drags charge carriers along in its neighbor.

This means that a current in one wire can induce a current in the other that flows either in the same or the opposite direction.
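The inverse-square dependence explains why Coulomb drag only matters at nanoscale spacings. The sketch below is illustrative: the normalization is arbitrary (an assumption of ours, not a measured constant), so only the ratio between the two regimes is meaningful.

```python
# Relative Coulomb-drag strength vs. wire-to-wire gap, using the
# inverse-square scaling stated in the article. Normalized to the
# 15 nm gap of the Sandia/McGill device (arbitrary reference).

def relative_drag(gap_nm: float, ref_gap_nm: float = 15.0) -> float:
    """Drag strength relative to the 15 nm experimental gap."""
    return (ref_gap_nm / gap_nm) ** 2

nano = relative_drag(15.0)           # 1.0 by construction
board = relative_drag(1_000_000.0)   # a 1 mm gap on a conventional board

print(f"nanowire gap:      {nano:.1f}")
print(f"conventional gap:  {board:.1e}")  # on the order of 1e-10: negligible
```

Halving the gap quadruples the drag, which is why shrinking two wires from millimeter to nanometer separation turns an unmeasurable effect into one that visibly pushes electrons around.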

“The amount is very small,” said Lilly, “and we can’t measure it. What we can measure is the voltage of the other wire.”

The Coulomb drag effect is still not well understood, but what is known is that "enough electrons get knocked along that they provide positive source at one wire end, negative at the other," Lilly said.

Yes, nanowires will allow the digital world to shrink to an even smaller scale, but this is only the most visible of many benefits set to revolutionize electronics in the coming decades.

One of the biggest hassles for scientists working in electronics today is controlling dissipated heat - the energy lost to the environment. This is a major concern for computer designers, especially since millions of integrated circuits are employed in most devices today, and the heat they generate has to be managed. Well-known theorist Markus Büttiker speculates that it may be possible to harness the energy lost as heat in one wire by using other wires nearby: since the distances are so small, adjacent wires can easily absorb those minute quantities of energy.

Speed, too, stands to improve, as smaller distances translate into shorter times for signals to travel from one point to another. In the present research, the Sandia National Laboratories experiment produced an unexpected voltage increase of up to 25 percent.

Source: ZME Science


Sending a crewed module into space is one thing; getting it to dock with a second spacecraft without a hitch is quite another. In the 1960s, pulling off a docking maneuver in orbit was anything but a given. That must be why, when the Soviet spacecraft Soyuz 4 and Soyuz 5 met at an altitude of 224 kilometers, the control center at Baikonur held its breath. It was January 16, 1969: two Russian cosmonauts put on their spacesuits and hitched a ride with a colleague. A real success, though the trip home would not be a happy one for everybody.

Let's take it in order. On January 14, Soyuz 4 was launched into orbit with a single man aboard, Vladimir Shatalov. His task was to await the arrival of his three colleagues on Soyuz 5, who left the planet only the following day. The mission's objective was to bring the two spacecraft together and carry out the first crew transfer ever attempted. The Soviet space program was aiming at a permanent space station - a good reason to get some practice in docking operations.

So, at around 8 a.m. on January 16, Soyuz 5 - carrying Boris Volynov, Aleksei Yeliseyev, and Yevgeny Khrunov - approached its companion craft and docked with a textbook maneuver. No small feat, considering that for all four cosmonauts, as it happened, this was their first mission in orbit. The rest of the operation was, literally, a spacewalk. Yeliseyev and Khrunov put on their spacesuits, said goodbye to Volynov, and one at a time joined Shatalov aboard Soyuz 4. Everything went swimmingly: big handshakes all around, and everyone got ready to head home.

After separating, the two spacecraft continued to orbit the Earth, waiting to make contact with the ground stations and plot their reentry trajectories. The trio aboard Soyuz 4 received the go-ahead for reentry after midnight, and by 7 a.m. on January 17 they had already landed in Kazakhstan without a scratch. Everything perfect, almost as if it had been a picnic among friends. But for Volynov, the only man left aboard Soyuz 5, the trip home turned into a genuine nightmare.

The reentry module - a small capsule just big enough for the pilot - failed to separate from the service module that had carried Soyuz 5 into orbit. A serious problem, because by then the spacecraft had begun its descent and could no longer stop. Worse, the module's attitude was completely wrong, leaving the most vulnerable part of the hull exposed directly to the friction of atmospheric reentry.

Within minutes, Soyuz 5's gaskets melted, releasing toxic fumes that threatened to poison Volynov. It was a bit like driving a car the wrong way down the road with the cabin on fire: thrilling when it happens in the movies, but not when you are hundreds of kilometers up and at risk of being incinerated. Fortunately, the heat of reentry burned through the connections between the modules, and the capsule swung its heat shield around to face the right direction: toward Earth.

Volynov's troubles, however, were still not over. Even though Soyuz 5 was now reentering in the correct attitude, at touchdown the parachutes and braking rockets malfunctioned. The impact was so violent that the cosmonaut broke his teeth. Another surprise: on opening the capsule hatch, the Russian discovered he had landed in the wrong place - in the middle of the Ural Mountains, at 37°C below zero. Luck had not entirely abandoned him: he managed to find shelter nearby, and the rescuers found him safe and sound in front of a crackling fire. Despite the ordeal, his love for space never faded, and seven years later Volynov was back in orbit.

