You may not know the name Ivan Askwith, but he's the brains behind some of the most successful crowdfunding campaigns ever.
The post The Secret Weapon Behind Super Troopers 2’s $3M Fundraiser appeared first on WIRED.
This new Galaxy flagship is beautiful, powerful, and really easy to use. It's the best Android phone you can buy right now.
Like magic, Steinway's new piano, the Spirio, records a pianist and plays the music back perfectly.
The post Steinway’s New Piano Can Play a Perfect Concerto by Itself appeared first on WIRED.
NY's East Village Radio showed that the Internet could make radio better, until licensing laws forced it to shut down last year. Now it's back. Is EVR here to stay?
The post Radio Is Soulless. Can These Radicals Make It Great Again? appeared first on WIRED.
Ask someone to name a woman scientist, and they'll offer one name: Marie Curie. It's hobbling the representation of women in STEM, and it's time to cut it out.
Read more of this story at Slashdot.
A deadly and global war is going on, and it's for sand. Photographer Adam Ferguson traveled to India for WIRED to document the people and places most affected by the industry.
The post Photos: Inside India’s Illegal, Bloody Sand Mining Industry appeared first on WIRED.
A self-driving car outfitted by GM spinoff Delphi Automotive today completed a historic 3,500-mile journey across the U.S., from San Francisco to New York.
The trip, launched in San Francisco on March 22, demonstrated the full capabilities of Delphi's active safety technologies in the longest automated drive ever attempted in North America.
Demonstrated on the streets of Las Vegas at CES 2015, Delphi’s automated driving vehicle leveraged a full suite of technologies and features to make this trip possible, including:
- Radar, vision and Advanced Drive Assistance Systems (ADAS)
- Multi-domain controller: High-end microprocessor to seamlessly drive multiple features and functions
- V2V/V2X: Wireless vehicle communication technology extends the range of existing ADAS functionality
- Intelligent software that enables the vehicle to make complex, human-like decisions for real-world automated driving
- Traffic Jam Assist
- Automated Highway Pilot with Lane Change (on-ramp to off-ramp highway pilot)
- Automated Urban Pilot
- Automated Parking and Valet
Delphi’s active safety technologies enable the vehicle to instantaneously make complex decisions, like stopping and then proceeding at a four-way stop, timing a highway merge or calculating the safest maneuver around a bicyclist on a city street.
Many of these driving scenarios remain a limitation for much of the technology currently on the market.
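Decision-making like "stop, wait, then proceed at a four-way stop" is often modeled as a small state machine in a behavior planner. A minimal, hypothetical Python sketch of that idea (the function and state names are invented for illustration, not Delphi's actual software):

```python
from enum import Enum, auto

class StopState(Enum):
    APPROACHING = auto()  # braking toward the stop line
    WAITING = auto()      # stopped, yielding to cross traffic
    PROCEEDING = auto()   # intersection clear, moving through

def four_way_stop_step(state, came_to_full_stop, cross_traffic_present):
    """One decision step at a four-way stop (illustrative only)."""
    if state is StopState.APPROACHING:
        # Must reach a complete stop before anything else.
        return StopState.WAITING if came_to_full_stop else StopState.APPROACHING
    if state is StopState.WAITING:
        # Yield until the intersection is clear, then go.
        return StopState.WAITING if cross_traffic_present else StopState.PROCEEDING
    return StopState.PROCEEDING
```

In a real vehicle this logic would be one small piece of a planner fed by fused radar and vision tracks from the ADAS suite listed above.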
Delphi | Delphi’s Automated Driving Vehicle
UNSWTV | Google Maps for the Body
Biomedical engineer Melissa Knothe Tate at the University of New South Wales (UNSW) in Australia is using previously proprietary semiconductor technology to zoom through organs of the human body, down to the level of a single cell.
The imaging technology, from high-tech German optical and industrial measurement manufacturer Zeiss, was originally developed to scan silicon wafers for defects.
UNSW Professor Melissa Knothe Tate, the Paul Trainor Chair of Biomedical Engineering, is leading the project, which is using semiconductor technology to explore osteoporosis and osteoarthritis.
Using Google algorithms, Tate, an engineer and expert in cell biology and regenerative medicine, is able to zoom in and out from the scale of the whole joint down to the cellular level “just as you would with Google Maps,” reducing to “a matter of weeks analyses that once took 25 years to complete”.
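The Google Maps analogy maps naturally onto a tiled image pyramid: zoom level 0 is one tile covering the whole specimen, and each level doubles the resolution. A minimal sketch assuming a web-map-style quadtree layout (the function and parameters are illustrative, not the actual Zeiss or Google API):

```python
def tile_for_point(x_mm, y_mm, extent_mm, zoom, tile_px=256):
    """Map a physical coordinate to a (column, row) tile index at a zoom level.

    Assumes a quadtree pyramid: zoom 0 is a single tile covering the whole
    square specimen of side `extent_mm`; each level doubles the resolution.
    """
    tiles_per_side = 2 ** zoom
    col = int(x_mm / extent_mm * tiles_per_side)
    row = int(y_mm / extent_mm * tiles_per_side)
    # Clamp points on the far edge into the last tile.
    col = min(col, tiles_per_side - 1)
    row = min(row, tiles_per_side - 1)
    return col, row
```

Zooming from a whole joint (tens of millimeters) down to a single cell (around 10 micrometers) spans roughly twelve doublings of resolution, which is why map-style tiling and indexing algorithms suit these terabyte-scale scans.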
Her team is also using cutting-edge microtome and MRI technology to examine how movement and weight bearing affects the movement of molecules within joints, exploring the relationship between blood, bone, lymphatics and muscle.
“For the first time we have the ability to go from the whole body down to how the cells are getting their nutrition and how this is all connected,” said Professor Knothe Tate. “This could open the door to as yet unknown new therapies and preventions.”
Tate is the first to use the system in humans. She has forged a pioneering partnership with the Cleveland Clinic, Brown and Stanford Universities, Zeiss, and Google to help crunch terabytes of data gathered from human hip studies.
Similar research is underway at Harvard University and Heidelberg in Germany to map neural pathways and connections in the brains of mice.
Tate presented several papers on her research into the human hip and osteoarthritis at the peer-reviewed Orthopaedic Research Society meeting in Las Vegas.
Numerous studies have explored molecular transport within specific tissues but there has been little research on exchange between different kinds of tissue such as cartilage and bone.
Tate has already demonstrated a link between molecular transport through blood, muscle and bone, and disease status in osteoarthritic guinea pigs.
Like humans, guinea pigs develop osteoarthritis as they age. The condition is increasingly believed to be the result of a breakdown in cellular communication.
Understanding the molecular signaling and traffic between tissues could unlock a range of treatments, including physical therapies and preventative exercise routines, Tate said.
Critical to this work has been the development of microscopy that allows seamless imaging of organs and tissues across length scales – centimeters at the whole-joint level down to nanometer-sized molecules – as well as the capacity to sift and analyze huge sets of data.
“These are terabyte-sized data sets so the Google maps algorithms are helping us take this tremendous amount of information and use it effectively. They’re the traffic controllers, if you like.”
“Advanced research instrumentation provides a technological platform to answer the hardest, unanswered questions in science, opening up avenues for fundamental discoveries, the implications of which may be currently unfathomable yet which will ultimately pave the way to engineer better human health and quality of life as we age.”
A research team from the University of Houston has created an algorithm that allowed a man to grasp a bottle and other objects with a prosthetic hand, controlled only by his thoughts.
The technique, demonstrated with a 56-year-old man whose right hand had been amputated, uses non-invasive brain monitoring, capturing brain activity to determine what parts of the brain are involved in grasping an object.
With that information, researchers created a computer program, or brain-machine interface (BMI), that harnessed the subject’s intentions and allowed him to successfully grasp objects, including a water bottle and a credit card. The subject grasped the selected objects 80 percent of the time using a high-tech bionic hand fitted to the amputee’s stump.
Previous studies involving either surgically implanted electrodes or myoelectric control, which relies upon electrical signals from muscles in the arm, have shown similar success rates, according to the researchers. But the non-invasive method offers several advantages, says Jose Luis Contreras-Vidal, a neuroscientist and engineer at UH. It avoids the risks of surgically implanting electrodes by measuring brain activity via scalp electroencephalogram, or EEG. And myoelectric systems aren’t an option for all people, because they require that neural activity from muscles relevant to hand grasping remain intact.
The results of the study were published March 30 in Frontiers in Neuroscience.
The work, funded by the National Science Foundation, demonstrates for the first time EEG-based BMI control of a multi-fingered prosthetic hand for grasping by an amputee. It also could lead to the development of better prosthetics, Contreras-Vidal said.
Using non-invasive EEG also offers a new understanding of the neuroscience of grasping and will be applicable to rehabilitation for other types of injuries, including stroke and spinal cord injury, the researchers said.
The study subjects were tested using a 64-channel active EEG, with electrodes attached to the scalp to capture brain activity. Brain activity was recorded in multiple areas, including the motor cortex and areas known to be used in action observation and decision-making, and occurred between 50 and 90 milliseconds before the hand began to grasp.
That provided evidence that the brain predicted the movement rather than reflecting it, Contreras-Vidal said.
“Current upper limb neuroprosthetics restore some degree of functional ability, but fail to approach the ease of use and dexterity of the natural hand, particularly for grasping movements,” the researchers wrote, noting that work with invasive cortical electrodes has been shown to allow some hand control but not at the level necessary for all daily activities.
“Further, the inherent risks associated with surgery required to implant electrodes, along with the long-term stability of recorded signals, is of concern. Here we show that it is feasible to extract detailed information on intended grasping movements to various objects in a natural, intuitive manner, from a plurality of scalp EEG signals.”
Until now, this was thought to be possible only with brain signals acquired invasively inside or on the surface of the brain.
Researchers first recorded brain activity and hand movement in the able-bodied volunteers as they picked up five objects, each chosen to illustrate a different type of grasp: a soda can, a compact disc, a credit card, a small coin and a screwdriver. The recorded data were used to create decoders of neural activity into motor signals, which successfully reconstructed the grasping movements.
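A common way to build such decoders is a regularized linear map from low-frequency EEG amplitudes to movement kinematics. A minimal ridge-regression sketch of that general approach (illustrative only; the study's actual decoding pipeline is described in the Frontiers in Neuroscience paper):

```python
import numpy as np

def fit_linear_decoder(eeg, kinematics, lam=1.0):
    """Fit a ridge-regularized linear map from EEG features to kinematics.

    eeg:        (n_samples, n_channels) low-frequency EEG amplitudes
    kinematics: (n_samples, n_joints) joint angular velocities
    """
    X = np.hstack([eeg, np.ones((eeg.shape[0], 1))])  # append a bias column
    # Closed-form ridge solution: (X'X + lam*I)^-1 X'Y
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ kinematics)
    return W

def decode(eeg, W):
    """Predict kinematics from new EEG samples with a fitted decoder."""
    X = np.hstack([eeg, np.ones((eeg.shape[0], 1))])
    return X @ W
```

Decoding quality is then typically reported as the correlation coefficient r between predicted and actual trajectories, as in the abstract below.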
They then fitted the amputee subject with a computer-controlled neuroprosthetic hand and told him to observe and imagine himself controlling the hand as it moved and grasped the objects.
The subject’s EEG data, along with information about prosthetic hand movements gleaned from the able-bodied volunteers, were used to build the algorithm.
Contreras-Vidal said additional practice, along with refining the algorithm, could increase the success rate to 100 percent.
Abstract of Global cortical activity predicts shape of hand during grasping
Recent studies show that the amplitude of cortical field potentials is modulated in the time domain by grasping kinematics. However, it is unknown if these low frequency modulations persist and contain enough information to decode grasp kinematics in macro-scale activity measured at the scalp via electroencephalography (EEG). Further, it is unclear as to whether joint angle velocities or movement synergies are the optimal kinematics spaces to decode. In this offline decoding study, we infer from human EEG, hand joint angular velocities as well as synergistic trajectories as subjects perform natural reach-to-grasp movements. Decoding accuracy, measured as the correlation coefficient (r) between the predicted and actual movement kinematics, was r = 0.49 ± 0.02 across fifteen hand joints. Across the first three kinematic synergies, decoding accuracies were r = 0.59 ± 0.04, 0.47 ± 0.06 and 0.32 ± 0.05. The spatial-temporal pattern of EEG channel recruitment showed early involvement of contralateral frontal-central scalp areas followed by later activation of central electrodes over primary sensorimotor cortical areas. Information content in EEG about the grasp type peaked at 250 ms after movement onset. The high decoding accuracies in this study are significant not only as evidence for time-domain modulation in macro-scale brain activity, but for the field of brain-machine interfaces as well. Our decoding strategy, which harnesses the neural ‘symphony’ as opposed to local members of the neural ensemble (as in intracranial approaches), may provide a means of extracting information about motor intent for grasping without the need for penetrating electrodes and suggests that it may be soon possible to develop non-invasive neural interfaces for the control of prosthetic limbs.
Carnegie Mellon University | NeuroElectro.org description
Carnegie Mellon University researchers have used data mining to create neuroelectro.org, a publicly available website that acts like Wikipedia, indexing decades' worth of physiological data collected about the billions of neurons in the brain.
The site aims to help accelerate the advance of neuroscience research by providing a centralized resource for collecting and comparing this “brain big data.”
A description of the data available and some of the analyses that can be performed using the site are published online by the Journal of Neurophysiology.
The neurons in the brain can be divided into approximately 300 different types based on their physical and functional properties. The data is scattered across tens of thousands of papers in the scientific literature.
“If we want to think about building a brain or re-engineering the brain, we need to know what parts we’re working with,” said Nathan Urban, interim provost and director of Carnegie Mellon’s BrainHub, a global initiative that focuses on how the structure and activity of the brain give rise to complex behaviors.
Shreejoy J. Tripathy, who worked in Urban’s lab when he was a graduate student in the joint Carnegie Mellon/University of Pittsburgh Center for the Neural Basis of Cognition (CNBC) Program in Neural Computation, selected more than 10,000 published papers that contained physiological data describing how neurons responded to various inputs.
He used text mining algorithms to “read” each of the papers. The software found the portions of each paper that identified the type of neuron studied and then isolated the electrophysiological data related to the properties of that neuronal type. It also retrieved information about how each of the experiments in the literature was completed, and corrected the data to account for any differences that might be caused by the format of the experiment. Overall, Tripathy, who is now a postdoc at the University of British Columbia, was able to collect and standardize data for approximately 100 different types of neurons.
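At its simplest, that kind of extraction pairs a neuron-type mention with a nearby reported measurement. A toy sketch using regular expressions (the patterns and property shown are invented for illustration; the real pipeline used full text-mining algorithms and handled far more variation):

```python
import re

# Illustrative patterns only: a few neuron-type names and one property.
NEURON_TYPE = re.compile(
    r"(hippocampal CA1 pyramidal|fast-spiking basket|olfactory bulb mitral) cells?",
    re.IGNORECASE)
RESTING_VM = re.compile(
    r"resting (?:membrane )?potential (?:of |was )?(-?\d+(?:\.\d+)?)\s*mV",
    re.IGNORECASE)

def extract_record(paragraph):
    """Pull a (neuron type, resting potential in mV) pair from one paragraph."""
    type_match = NEURON_TYPE.search(paragraph)
    vm_match = RESTING_VM.search(paragraph)
    if type_match and vm_match:
        return type_match.group(1).lower(), float(vm_match.group(1))
    return None  # paragraph lacks either the type or the measurement
```

Correcting for experimental format, as described above, would then adjust each extracted value using the paper's reported methods metadata before it enters the standardized database.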
Urban and his group validated much of the data, but they also created a mechanism that allows site users to flag data for further evaluation. Users also can contribute new data with minimal intervention from site administrators, similar to Wikipedia.
“It’s a dynamic environment in which people can collect, refine and add data,” said Urban, who is the Dr. Frederick A. Schwertz Distinguished Professor of Life Sciences and a member of the CNBC. “It will be a useful resource to people doing neuroscience research all over the world.”
Ultimately, the website will help researchers find groups of neurons that share the same physiological properties, which could provide a better understanding of how a neuron functions. For example, if a researcher finds that a type of neuron in the brain’s neocortex fires spontaneously, they can look up other neurons that fire spontaneously and access research papers that address this type of neuron. Using that information, they can quickly form hypotheses about whether or not the same mechanisms are at play in both the newly discovered and previously studied neurons.
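That lookup amounts to filtering standardized records on a shared property. A toy sketch with a hypothetical mini-dataset (the records and field names are invented for illustration, not neuroelectro.org's actual schema):

```python
# Hypothetical records in the spirit of the site's standardized data.
neurons = [
    {"type": "neocortical pyramidal", "spontaneous_firing": True,  "rate_hz": 2.1},
    {"type": "cerebellar Purkinje",   "spontaneous_firing": True,  "rate_hz": 40.0},
    {"type": "dentate granule",       "spontaneous_firing": False, "rate_hz": 0.0},
]

def neurons_sharing(property_name, value):
    """Return the types of all neurons whose record matches a property value."""
    return [n["type"] for n in neurons if n.get(property_name) == value]
```

A query like `neurons_sharing("spontaneous_firing", True)` would surface the other spontaneously firing types, and from there the linked papers, for hypothesis-building.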
To demonstrate how neuroelectro.org could be used, the researchers compared the electrophysiological data from more than 30 neuron types that had been most heavily studied in the literature. They found many expected similarities between the different types of neurons, and some similarities that were a surprise to researchers, representing promising areas for future research.
Expanding the database
In ongoing work, the Carnegie Mellon researchers are comparing the data on neuroelectro.org with other kinds of data, including data on neurons’ patterns of gene expression. For example, Urban’s group is using another publicly available resource, the Allen Brain Atlas, to find whether groups of neurons with similar electrical function have similar gene expression.
“It would take a lot of time, effort and money to determine both the physiological properties of a neuron and its gene expression,” Urban said. “Our website will help guide this research, making it much more efficient.”
This research was funded by the National Science Foundation, the National Institute on Deafness and other Communication Disorders, the National Institute of Mental Health, and the Pennsylvania Department of Health’s Commonwealth Universal Research Enhancement Program.
Co-authors of the study include: Shawn D. Burton of Carnegie Mellon; Richard C. Gerkin, formerly of Carnegie Mellon and now of Arizona State University; and Matthew Geramita of the University of Pittsburgh.
As the birthplace of artificial intelligence and cognitive psychology, Carnegie Mellon has been a leader in the study of brain and behavior for more than 50 years. Building on its strengths in biology, computer science, psychology, statistics and engineering, the university has created some of the first cognitive tutors, helped to develop the Jeopardy-winning Watson, founded a groundbreaking doctoral program in neural computation, and completed cutting-edge work in understanding the genetics of autism.
Abstract of Brain-wide analysis of electrophysiological diversity yields novel categorization of mammalian neuron types
For decades, neurophysiologists have characterized the biophysical properties of a rich diversity of neuron types. However, identifying common features and computational roles shared across neuron types is made more difficult by inconsistent conventions for collecting and reporting biophysical data. Here, we leverage NeuroElectro, a literature-based database of electrophysiological properties (www.neuroelectro.org), to better understand neuronal diversity — both within and across neuron types — and the confounding influences of methodological variability. We show that experimental conditions (e.g., electrode types, recording temperatures, or animal age) can explain a substantial degree of the literature-reported biophysical variability observed within a neuron type. Critically, accounting for experimental metadata enables massive cross-study data normalization and reveals that electrophysiological data are far more reproducible across labs than previously appreciated. Using this normalized dataset, we find that neuron types throughout the brain cluster by biophysical properties into 6-9 super-classes. These classes include intuitive clusters, such as fast-spiking basket cells, as well as previously unrecognized clusters, including a novel class of cortical and olfactory bulb interneurons that exhibit persistent activity at theta-band frequencies.
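The clustering step described in the abstract boils down to z-scoring biophysical features so values from different studies are comparable, then grouping neuron types by dissimilarity. A deterministic toy two-class version of that idea (the paper clusters into 6-9 super-classes with proper statistical methods; this sketch only illustrates the normalize-then-group principle):

```python
import numpy as np

def two_class_cluster(features):
    """Split records into two classes by nearest of the two most
    dissimilar records (a toy stand-in for real cluster analysis).

    features: (n_records, n_properties) raw biophysical measurements.
    """
    # Normalize each property so scales from different studies are comparable.
    z = (features - features.mean(0)) / features.std(0)
    # Pairwise squared distances between all records.
    d = ((z[:, None] - z[None, :]) ** 2).sum(-1)
    # Use the two most dissimilar records as class seeds.
    i, j = np.unravel_index(np.argmax(d), d.shape)
    # Assign each record to its nearer seed (1 = closer to seed j).
    return (d[:, j] < d[:, i]).astype(int)
```

With well-separated groups of records, each group lands in its own class, mirroring how normalized electrophysiological profiles separate fast-spiking basket cells from other types.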