Researchers at ETH Zurich have developed a modulator that is 100 times smaller than conventional modulators, small enough to be integrated into electronic circuits. Transmitting large amounts of data via the Internet requires high-performance electro-optic modulators — devices that convert electrical signals (used in computers and cell phones) into light signals (used in fiber-optic cables).
Today, huge amounts of data are sent through fiber-optic cables as light pulses; first, though, they have to be converted from the electrical signals used by computers and telephones into optical signals. Today’s electro-optic modulators are complex and bulky compared with electronic components, which can be as small as a few micrometers.
The plasmon trick
To build the smallest possible modulator, the researchers first need to focus the light beam whose intensity they want to modulate into a very small volume. The laws of optics, however, dictate that such a volume cannot be smaller than the wavelength of the light itself. Modern telecommunications use near-infrared laser light with a wavelength of 1500 nanometers (1.5 micrometers), which sets the lower limit for the size of a modulator.
To beat that limit and make the device even smaller, the light is first converted into surface plasmon polaritons — combinations of electromagnetic fields and electrons that propagate along the surface of a metal strip. At the end of the strip, they are converted back into light. The advantage of this detour is that plasmon polaritons can be confined to a much smaller space than the light they originated from.
Because the modulator is much smaller than conventional devices, it consumes very little energy — only a few thousandths of a watt at a data transmission rate of 70 gigabits per second, about one hundredth of the energy consumption of commercial models. That means more data can be transmitted at higher speeds. The device is also cheaper to produce.
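As a rough sanity check on those figures (the 25 fJ-per-bit value comes from the paper's abstract; the arithmetic below is ours), power is simply energy per bit times bit rate:

```python
# Back-of-the-envelope power check for the plasmonic modulator.
# 25 fJ/bit is the figure reported in the paper's abstract.
energy_per_bit = 25e-15   # joules per bit (25 femtojoules)
bit_rate = 70e9           # bits per second (70 Gbit/s)

power_watts = energy_per_bit * bit_rate
print(f"{power_watts * 1e3:.2f} mW")  # 1.75 mW — a few thousandths of a watt
```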
The research is described in a paper in the journal Nature Photonics.
Abstract of All-plasmonic Mach–Zehnder modulator enabling optical high-speed communication at the microscale
Optical modulators encode electrical signals to the optical domain and thus constitute a key element in high-capacity communication links. Ideally, they should feature operation at the highest speed with the least power consumption on the smallest footprint, and at low cost. Unfortunately, current technologies fall short of these criteria. Recently, plasmonics has emerged as a solution offering compact and fast devices. Yet, practical implementations have turned out to be rather elusive. Here, we introduce a 70 GHz all-plasmonic Mach–Zehnder modulator that fits into a silicon waveguide of 10 μm length. This dramatic reduction in size by more than two orders of magnitude compared with photonic Mach–Zehnder modulators results in a low energy consumption of 25 fJ per bit up to the highest speeds. The technology suggests a cheap co-integration with electronics.
Researchers at Purdue University have created a new “plasmonic oxide material” that could make possible modulator devices for optical communications (fiber optics, used for the Internet and cable television) that are at least 10 times faster than conventional technologies.
The optical material, made of aluminum-doped zinc oxide (AZO), also requires less power than other “all-optical” semiconductor devices. That is essential for faster operation, which would otherwise generate excessive heat as transmission speed increases.
The material has been shown to work in the near-infrared range of the spectrum, which is used in optical communications, and it is compatible with the CMOS semiconductor manufacturing process used to construct integrated circuits.
Faster optical transistors replace silicon
The researchers have proposed creating an “all-optical plasmonic modulator using CMOS-compatible materials,” or an optical transistor, which allows for the speedup compared to systems that use silicon chips.
A cycle takes about 350 femtoseconds to complete in the new AZO films, roughly 5,000 times faster than in crystalline silicon.
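To put that cycle time in perspective, here is the switching rate it implies (simple arithmetic on the figures above, not a claim from the paper):

```python
# What a 350-femtosecond switching cycle implies for modulation rate.
cycle_azo = 350e-15          # seconds per cycle in the AZO films
cycle_si = cycle_azo * 5000  # crystalline silicon, ~5,000x slower

rate_azo_thz = 1 / cycle_azo / 1e12  # ~2.9 THz
rate_si_ghz = 1 / cycle_si / 1e9     # ~0.57 GHz

print(f"AZO: ~{rate_azo_thz:.1f} THz vs silicon: ~{rate_si_ghz:.2f} GHz")
```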
The researchers “doped” zinc oxide with aluminum (thus the AZO), meaning the zinc oxide is impregnated with aluminum atoms to alter the material’s optical properties. Doping the zinc oxide causes it to behave like a metal at certain wavelengths and like a dielectric at other wavelengths.
The AZO also makes it possible to “tune” the optical properties of metamaterials.
The ongoing research is funded by the Air Force Office of Scientific Research, a Marie Curie Outgoing International Fellowship, the National Science Foundation, and the Office of Naval Research.
Abstract of Epsilon-near-zero Al-doped ZnO for ultrafast switching at telecom wavelengths
Transparent conducting oxides have recently gained great attention as CMOS-compatible materials for applications in nanophotonics due to their low optical loss, metal-like behavior, versatile/tailorable optical properties, and established fabrication procedures. In particular, aluminum-doped zinc oxide (AZO) is very attractive because its dielectric permittivity can be engineered over a broad range in the near-IR and IR. However, despite all these beneficial features, the slow (>100 ps) electron-hole recombination time typical of these compounds still represents a fundamental limitation impeding ultrafast optical modulation. Here we report the first epsilon-near-zero AZO thin films that simultaneously exhibit ultrafast carrier dynamics (excitation and recombination time below 1 ps) and an outstanding reflectance modulation up to 40% for very low pump fluence levels (<4 mJ∕cm2) at a telecom wavelength of 1.3 μm. The unique properties of the demonstrated AZO thin films are the result of a low-temperature fabrication procedure promoting deep-level defects within the film and an ultrahigh carrier concentration. © 2015 Optical Society of America
An international group of 26 experts, including prominent genetic engineers and fruit fly geneticists, has unanimously recommended a series of preemptive measures to safeguard gene drive research from accidental (or intentional) release from laboratories.
Gene drives are genetic elements — found naturally in the genomes of most of the world’s organisms — that increase the chance of the gene they carry being passed on to all offspring, so they can quickly spread through populations if not controlled.
Looking to these natural systems, researchers around the world are developing synthetic gene drives that could one day be leveraged to purposefully alter the traits of wild populations of organisms — to prevent disease transmission and eradicate invasive species, for example.
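The super-Mendelian inheritance behind this spread can be illustrated with a toy allele-frequency model (a deliberately simplified sketch of ours, not a model from the study): a heterozygote carrying a drive converts its wild-type allele with efficiency c, so it transmits the drive to (1 + c)/2 of its offspring instead of the Mendelian half, giving the recursion p' = p(1 + c(1 − p)) under random mating.

```python
# Toy gene-drive model (illustrative sketch, not from the Science paper).
# With conversion efficiency c, the drive-allele frequency p follows
# p' = p * (1 + c * (1 - p)); c = 0 recovers ordinary Mendelian inheritance.

def spread(p0, c, generations):
    """Return the drive-allele frequency after each generation."""
    freqs = [p0]
    for _ in range(generations):
        p = freqs[-1]
        freqs.append(p * (1 + c * (1 - p)))
    return freqs

# A drive released at 1% frequency with near-perfect conversion sweeps
# toward fixation in about ten generations; an ordinary allele stays put.
print(f"drive:   {spread(0.01, 0.95, 10)[-1]:.3f}")   # ~0.998
print(f"neutral: {spread(0.01, 0.00, 10)[-1]:.3f}")   # 0.010
```

This is exactly why the authors call for multiple confinement strategies: even a small accidental release can, in principle, sweep through an entire wild population.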
What could possibly go wrong?
These synthetic gene drives, designed using an RNA-guided gene editing system called CRISPR, could one day improve human health and the environment by preventing mosquitoes and ticks from spreading diseases such as malaria and Lyme; by promoting sustainable agriculture through control of crop pests without the use of toxic pesticides and herbicides; and by protecting at-risk ecosystems from the spread of destructive, invasive species such as rats or cane toads.
However, the development of RNA-guided gene drive technology calls for enhanced safety measures because of its capability to affect shared ecosystems if organisms containing synthetic gene drives are accidentally or deliberately released from a laboratory. This potential risk is especially relevant with highly mobile species such as fruit flies or mosquitoes.
“One of the great successes of engineering is the development of safety features, such as the rounding of sharp corners on objects and the invention of airbags for cars, and in biological engineering we want to emulate the process of designing safety features in ways relevant to the technologies we develop,” said Wyss Core Faculty member George Church, Ph.D., who leads the Synthetic Biology Platform at the Wyss Institute. Church is also the Robert Winthrop Professor of Genetics at Harvard Medical School and Professor of Health Sciences and Technology at Harvard and MIT.
At the Wyss Institute, enhanced protocols for safely and securely researching emerging biotechnologies, including RNA–guided gene drives, have already been formally implemented. The safeguards were put in place proactively, step–by–step, in direct parallel with the development of the first RNA-guided gene drives at the Wyss Institute.
The working documents have been made publicly available by the Institute to encourage widespread adoption of multi-tier confinement and risk assessment procedures. Church was instrumental in the design of the enhanced biosafety and biosecurity protocols.
Now, research teams from the Wyss Institute and University of California, San Diego — the only two groups to have published work on RNA-guided CRISPR gene drives — have proactively assembled an international group of 26 experts, including prominent genetic engineers and fruit fly geneticists, to unanimously recommend a series of preemptive measures to safeguard gene drive research.
Open-access research recommended
Led by Wyss Institute Technology Development Fellow Kevin Esvelt, Ph.D., and UC San Diego Professor of Cell and Developmental Biology Ethan Bier, Ph.D., the 26 authors of the consensus recommendation — published online in Science Express and including representatives from every major group known to be working on gene drives — call for all researchers to use multiple confinement strategies to prevent the accidental alteration of wild populations.
The group also provides explicit recommended guidelines for regulatory authorities evaluating proposed new work. And Esvelt and others are hopeful that the field of gene drive research is so nascent that it may be possible to build a community of scientists that share their research with the public throughout the development process.
“This would promote collaboration and avoid needless duplication of efforts among different research groups while allowing diverse voices to help guide the development of a technology that could improve our shared world,” said Esvelt. “And eventually, it might inspire a similar shift towards full transparency in other scientific fields of collective public importance.”
“The scientific community has a responsibility to the public and to the environment to constantly assess how new biotechnologies could potentially impact our world,” said Wyss Institute Founding Director Donald E. Ingber, M.D., Ph.D.
“This proactive consensus recommendation — reached in an extraordinary demonstration of the power of scientific collaboration over competition — provides concrete, useful guidelines for safeguarding our shared ecosystem while ensuring that remarkable breakthroughs, such as synthetic gene drives, can be applied to their full potential for the greater good.”
Wyss Institute at Harvard University | CRISPR-Cas9: Gene Drive
This animation explains how gene drives could one day be used to spread gene alterations through targeted wild populations over many generations, for purposes such as preventing spread of insect-borne disease and controlling invasive plant species. To ensure gene drives have the potential to be used for the greater good in the future, Wyss Institute Technology Development Fellow Kevin Esvelt, Ph.D., has co-led an international consensus of 26 scientists to recommend safeguards to prevent synthetic gene drive research from having any accidental impacts on the world’s shared ecosystems.
Abstract of A mucosal vaccine against Chlamydia trachomatis generates two waves of protective memory T cells
INTRODUCTION: Administering vaccines through nonmucosal routes often leads to poor protection against mucosal pathogens, presumably because such vaccines do not generate memory lymphocytes that migrate to mucosal surfaces. Although mucosal vaccination induces mucosa-tropic memory lymphocytes, few mucosal vaccines are used clinically; live vaccine vectors pose safety risks, whereas killed pathogens or molecular antigens are usually weak immunogens when applied to intact mucosa. Adjuvants can boost immunogenicity; however, most conventional mucosal adjuvants have unfavorable safety profiles. Moreover, the immune mechanisms of protection against many mucosal infections are poorly understood.
RATIONALE: One case in point is Chlamydia trachomatis (Ct), a sexually transmitted intracellular bacterium that infects >100 million people annually. Mucosal Ct infections can cause female infertility and ectopic pregnancies. Ct is also the leading cause of preventable blindness in developing countries and induces pneumonia in infants. No approved vaccines exist to date. Here, we describe a Ct vaccine composed of ultraviolet light–inactivated Ct (UV-Ct) conjugated to charge-switching synthetic adjuvant nanoparticles (cSAPs). After immunizing mice with live Ct, UV-Ct, or UV-Ct–cSAP conjugates, we characterized mucosal immune responses to uterine Ct rechallenge and dissected the underlying cellular mechanisms.
RESULTS: In previously uninfected mice, Ct infection induced protective immunity that depended on CD4 T cells producing the cytokine interferon-γ, whereas uterine exposure to UV-Ct generated tolerogenic Ct-specific regulatory T cells, resulting in exacerbated bacterial burden upon Ct rechallenge. In contrast, mucosal immunization with UV-Ct–cSAP elicited long-lived protection. This differential effect of UV-Ct–cSAP versus UV-Ct was because the former was presented by immunogenic CD11b+CD103– dendritic cells (DCs), whereas the latter was presented by tolerogenic CD11b–CD103+ DCs. Intrauterine or intranasal vaccination, but not subcutaneous vaccination, induced genital protection in both conventional and humanized mice. Regardless of vaccination route, UV-Ct–cSAP always evoked a robust systemic memory T cell response. However, only mucosal vaccination induced a wave of effector T cells that seeded the uterine mucosa during the first week after vaccination and established resident memory T cells (TRM cells). Without TRM cells, mice were suboptimally protected, even when circulating memory cells were abundant. Optimal Ct clearance required both early uterine seeding by TRM cells and infection-induced recruitment of a second wave of circulating memory cells.
CONCLUSIONS: Mucosal exposure to both live Ct and inactivated UV-Ct induces antigen-specific CD4 T cell responses. While immunogenic DCs present the former to promote immunity, the latter is instead targeted to tolerogenic DCs that exacerbate host susceptibility to Ct infection. By combining UV-Ct with cSAP nanocarriers, we have redirected noninfectious UV-Ct to immunogenic DCs and achieved long-lived protection. This protective vaccine effect depended on the synergistic action of two memory T cell subsets with distinct differentiation kinetics and migratory properties. The cSAP technology offers a platform for efficient mucosal immunization that may also be applicable to other mucosal pathogens.
Scientists say they have cracked the secret of why some people live a healthy and physically independent life over the age of 100: keeping inflammation down and telomeres long.
Newcastle University’s Institute for Ageing in the U.K. and Keio University School of Medicine note that severe inflammation is involved in many diseases of old age, such as diabetes and diseases of the bones and joints, and that chronic inflammation can develop from any of them.
The study was published online as an open-access paper in EBioMedicine, a new journal jointly supported by Cell and The Lancet.
“Centenarians and supercentenarians are different,” said Professor Thomas von Zglinicki, from Newcastle University’s Institute for Ageing, and lead author. “Put simply, they age slower. They can ward off diseases for much longer than the general population.”
Keeping telomeres long
The researchers studied people aged 105 and over (semi-supercentenarians), those aged 100 to 104 (centenarians), very old people approaching 100, and centenarian offspring. They measured a number of health markers they believe contribute to successful aging, including blood cell numbers, metabolism, liver and kidney function, inflammation, and telomere length.
Scientists expected to see a continuous shortening of telomeres with age. But what they found was that the children of centenarians, who have a good chance of becoming centenarians themselves, maintained their telomeres at a “youthful” level corresponding to about 60 years of age — even when they became 80 or older.
“Our data reveals that once you’re really old [meaning centenarians and those older than 100], telomere length does not predict further successful aging,” said von Zglinicki. “However, it does show that [they] maintain their telomeres better than the general population, which suggests that keeping telomeres long may be necessary or at least helpful to reach extreme old age.”

Lower inflammation levels
Centenarian offspring maintained lower levels of markers for chronic inflammation. These levels increased with age in all the subjects studied, including centenarians and older, but those who succeeded in keeping them low had the best chance of maintaining good cognition, independence, and longevity.
“It has long been known that chronic inflammation is associated with the aging process in younger, more ‘normal’ populations, but it’s only very recently we could mechanistically prove that inflammation actually causes accelerated aging in mice,” von Zglinicki said.
“This study, showing for the first time that inflammation levels predict successful aging even in the extreme old, makes a strong case to assume that chronic inflammation drives human aging too. … Designing novel, safe anti-inflammatory or immune-modulating medication has major potential to improve healthy lifespan.”
Data from three studies combined
Data was collated by combining three community-based group studies: Tokyo Oldest Old Survey on Total Health, Tokyo Centenarians Study, and Japanese Semi-Supercentenarians Study.
The research comprised 1,554 individuals, including 684 centenarians and (semi-)supercentenarians, 167 pairs of centenarian offspring and spouses, and 536 very old people. The total group covered ages from around 50 up to the world’s oldest man, at 115 years.
However, “presently available potent anti-inflammatories are not suited for long-term treatment of chronic inflammation because of their strong side-effects,” said Yasumichi Arai, Head of the Tokyo Oldest Old Survey on Total Health cohort and first author of the study.
Abstract of Inflammation, But Not Telomere Length, Predicts Successful Ageing at Extreme Old Age: A Longitudinal Study of Semi-supercentenarians
To determine the most important drivers of successful ageing at extreme old age, we combined community-based prospective cohorts: Tokyo Oldest Old Survey on Total Health (TOOTH), Tokyo Centenarians Study (TCS) and Japanese Semi-Supercentenarians Study (JSS) comprising 1554 individuals including 684 centenarians and (semi-)supercentenarians, 167 pairs of centenarian offspring and spouses, and 536 community-living very old (85 to 99 years). We combined z scores from multiple biomarkers to describe haematopoiesis, inflammation, lipid and glucose metabolism, liver function, renal function, and cellular senescence domains. In Cox proportional hazard models, inflammation predicted all-cause mortality with hazard ratios (95% CI) 1.89 (1.21 to 2.95) and 1.36 (1.05 to 1.78) in the very old and (semi-)supercentenarians, respectively. In linear forward stepwise models, inflammation predicted capability (10.8% variance explained) and cognition (8.6% variance explained) in (semi-) supercentenarians better than chronologic age or gender. The inflammation score was also lower in centenarian offspring compared to age-matched controls with Δ (95% CI) = − 0.795 (− 1.436 to − 0.154). Centenarians and their offspring were able to maintain long telomeres, but telomere length was not a predictor of successful ageing in centenarians and semi-supercentenarians. We conclude that inflammation is an important malleable driver of ageing up to extreme old age in humans.
There’s a big problem with big data: the huge amount of RAM required. Now MIT researchers have developed a new system called “BlueDBM” that should make servers using flash memory as efficient as those using conventional RAM for several common big-data applications, while preserving flash’s power and cost savings.
Here’s the context: data sets in areas such as genomics, geological data, and daily Twitter feeds can be as large as 5 TB to 20 TB. Complex queries over such data sets require high-speed random-access memory (RAM). But that would require a huge cluster with up to 100 servers, each with 128 GB to 256 GB of DRAM (dynamic random-access memory).
Flash memory (used in smart phones and other portable devices) could provide an alternative to conventional RAM for such applications. It’s about a tenth as expensive, and it consumes about a tenth as much power. The problem: it’s also a tenth as fast.
But at the International Symposium on Computer Architecture in June, the MIT researchers, with colleagues at Quanta Computer, presented experimental evidence showing that if conventional servers executing a distributed computation have to go to disk for data even 5 percent of the time, their performance falls to a level that’s comparable with flash anyway.
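The reason a small miss rate hurts so much is the enormous latency gap between DRAM and disk; the sketch below uses assumed order-of-magnitude latencies (ours, not measurements from the paper):

```python
# Why spilling even 5% of accesses to disk erases DRAM's speed advantage.
# Latencies are rough order-of-magnitude assumptions, not measured values.
DRAM_NS = 100            # ~100 ns per random DRAM access
FLASH_NS = 100_000       # ~100 us per flash read
DISK_NS = 10_000_000     # ~10 ms per disk seek

def avg_latency(slow_fraction, fast_ns, slow_ns):
    """Average access time when a fraction of requests hits slow storage."""
    return (1 - slow_fraction) * fast_ns + slow_fraction * slow_ns

ram_spilling = avg_latency(0.05, DRAM_NS, DISK_NS)  # 5% of accesses hit disk
all_flash = FLASH_NS                                # every access served by flash

print(f"DRAM + 5% disk: {ram_spilling:,.0f} ns; all-flash: {all_flash:,} ns")
```

With these assumptions the disk-spilling DRAM cluster averages about half a millisecond per access — several times slower than flash — consistent with the researchers' observation that such a cluster performs no better than a flash-based one.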
In fact, they found that for a 10.5-terabyte computation, just 20 servers with 20 terabytes’ worth of flash memory each could do as well as 40 servers with 10 terabytes’ worth of RAM, and could consume only a fraction as much power. This was even without the researchers’ new techniques for accelerating data retrieval from flash memory.
“This is not a replacement for DRAM [dynamic RAM] or anything like that,” says Arvind, the Johnson Professor of Computer Science and Engineering at MIT, whose group performed the new work. “But there may be many applications that can take advantage of this new style of architecture. Which companies recognize — everybody’s experimenting with different aspects of flash. We’re just trying to establish another point in the design space.”
The researchers were able to make a network of flash-based servers competitive with a network of RAM-based servers by moving a little computational power off of the servers and onto the chips that control the flash drives. By preprocessing some of the data on the flash drives before passing it back to the servers, those chips can make distributed computation much more efficient. And since the preprocessing algorithms are wired into the chips, they dispense with the computational overhead associated with running an operating system, maintaining a file system, and the like.
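The division of labor can be sketched in a few lines (a hypothetical filter of ours; the real system implements such logic in FPGA hardware, not Python): the controller evaluates a predicate right next to the flash chips and ships only matching records to the server.

```python
# Sketch of in-store processing: filter records at the storage controller
# so only matches cross the network. (Hypothetical example; the real
# BlueDBM runs such accelerators on FPGAs.)

def controller_scan(records, predicate):
    """Storage-side scan: stream records off flash, keep only matches."""
    return [r for r in records if predicate(r)]

# The server asks only for pages scoring above a threshold; the
# controller returns 2 records instead of shipping all 5.
pages = [("a", 0.9), ("b", 0.2), ("c", 0.7), ("d", 0.1), ("e", 0.4)]
hits = controller_scan(pages, lambda page: page[1] > 0.5)
print(hits)  # [('a', 0.9), ('c', 0.7)]
```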
With hardware contributed by some of their sponsors — Quanta, Samsung, and Xilinx — the researchers built a prototype network of 20 servers. Each server was connected to a field-programmable gate array, or FPGA, a kind of chip that can be reprogrammed to mimic different types of electrical circuits. Each FPGA, in turn, was connected to two half-terabyte — or 500-gigabyte — flash chips and to the two FPGAs nearest it in the server rack.
Because the FPGAs were connected to each other, they created a very fast network that allowed any server to retrieve data from any flash drive. They also controlled the flash drives, which is no simple task: The controllers that come with modern commercial flash drives have as many as eight different processors and a gigabyte of working memory.
Finally, the FPGAs also executed the algorithms that preprocessed the data stored on the flash drives. The researchers tested three such algorithms, geared to three popular big-data applications. One is image search, or trying to find matches for a sample image in a huge database. Another is an implementation of Google’s PageRank algorithm, which assesses the importance of different Web pages that meet the same search criteria. And the third is an application called Memcached, which big, database-driven websites use to store frequently accessed information.
FPGAs are about one-tenth as fast as purpose-built chips with hardwired circuits, but they’re much faster than central processing units using software to perform the same computations. Ordinarily, either they’re used to prototype new designs, or they’re used in niche products whose sales volumes are too small to warrant the high cost of manufacturing purpose-built chips.
But the MIT and Quanta researchers’ design suggests a new use for FPGAs: A host of applications could benefit from accelerators like the three the researchers designed. And since FPGAs are reprogrammable, they could be loaded with different accelerators, depending on the application. That could lead to distributed processing systems that lose little versatility while providing major savings in energy and cost.
“Many big-data applications require real-time or fast responses,” says Jihong Kim, a professor of computer science and engineering at Seoul National University. “For such applications, BlueDBM” — the MIT and Quanta researchers’ system — “is an appealing solution.”
Relative to some other proposals for streamlining big-data analysis, “The main advantage of BlueDBM might be that it can easily scale up to a lot bigger storage system with specialized accelerated supports,” Kim says.
Abstract of BlueDBM: An Appliance for Big Data Analytics
Complex data queries, because of their need for random accesses, have proven to be slow unless all the data can be accommodated in DRAM. There are many domains, such as genomics, geological data and daily twitter feeds where the datasets of interest are 5TB to 20 TB. For such a dataset, one would need a cluster with 100 servers, each with 128GB to 256GBs of DRAM, to accommodate all the data in DRAM. On the other hand, such datasets could be stored easily in the flash memory of a rack-sized cluster. Flash storage has much better random access performance than hard disks, which makes it desirable for analytics workloads. In this paper we present BlueDBM, a new system architecture which has flash-based storage with in-store processing capability and a low-latency high-throughput inter-controller network. We show that BlueDBM outperforms a flash-based system without these features by a factor of 10 for some important applications. While the performance of a ram-cloud system falls sharply even if only 5%~10% of the references are to the secondary storage, this sharp performance degradation is not an issue in BlueDBM. BlueDBM presents an attractive point in the cost-performance trade-off for Big Data analytics.