If, like Stephen Hawking and Elon Musk, you're afraid of artificial intelligences, the tables are about to be turned. Researchers at Kazan Federal University have made an artificial rat brain feel fear and disgust, and they're hoping to model more emotions soon.
An interdisciplinary team led by Maxim Talanov is modelling emotional states in a simulated rat brain using Lövheim's cube of emotion. The three axes of the cube represent the neurotransmitters dopamine, serotonin, and noradrenaline, with eight basic emotions sitting at its corners. According to this theory, emotions arise as neurotransmitter levels fluctuate; for example, high dopamine combined with low serotonin and noradrenaline produces fear.
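The cube itself is simple enough to sketch in code. A minimal illustration using the high/low mapping from Lövheim's published model (the Kazan simulation's internals aren't public, so the numeric thresholding here is purely illustrative):

```python
# Sketch of Lövheim's cube of emotion: each corner of the cube pairs a
# high/low combination of three monoamine neurotransmitters with a basic
# emotion. The corner labels follow Lövheim's published model; the
# threshold logic is an illustrative assumption, not the Kazan code.
LOVHEIM_CUBE = {
    # (serotonin, dopamine, noradrenaline): emotion
    (False, False, False): "shame/humiliation",
    (False, False, True):  "distress/anguish",
    (False, True,  False): "fear/terror",
    (False, True,  True):  "anger/rage",
    (True,  False, False): "contempt/disgust",
    (True,  False, True):  "surprise",
    (True,  True,  False): "enjoyment/joy",
    (True,  True,  True):  "interest/excitement",
}

def emotion(serotonin: float, dopamine: float, noradrenaline: float,
            threshold: float = 0.5) -> str:
    """Binarise each neurotransmitter level against a threshold and
    look up the corresponding corner of the cube."""
    key = (serotonin > threshold, dopamine > threshold,
           noradrenaline > threshold)
    return LOVHEIM_CUBE[key]

# High dopamine with low serotonin and noradrenaline, as in the article:
print(emotion(serotonin=0.1, dopamine=0.9, noradrenaline=0.2))  # fear/terror
```

Note that disgust, the other emotion the team provoked easily, sits at the opposite kind of corner: high serotonin with both dopamine and noradrenaline low.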
In Talanov's artificial rat brain, emotions are simulated by redistributing computing power between data-storage and decision-making processes. So far the easiest emotions to provoke have been disgust and fear. Talanov and his team are confident that other emotions, like joy and excitement, will be simulated within 2-3 years.
Which raises some ethical issues about the status of artificial intelligences. If an A.I. feels the whole spectrum of human emotions, should we consider it conscious and afford it the same rights as us? Would a smart car be considered culpable for murder because it felt road rage, its lawyer arguing that "it was programmed that way", or that it was hacked with a 'rage' virus?
Talanov acknowledges there's much more to be done, since there's not enough computing power available to model the human brain. "This simulation is about a thousand times smaller than the real work of the cerebral cortex, and the brain only needs 20 watts of power to do its job," he told Nikita Statsenko of Rusbase.ru.
Maybe next time you hear someone peddling the horrors of A.I., take heart that they're probably just as afraid of you as you are of them.
You may also enjoy:
- Philosopher Says We Should Begin Planning Now, So That a Super-Intelligent A.I. Doesn't Kill Us All Off
- The Looming Robot Revolution
- A Modern Kōan Of Consciousness
A continuing bone of contention in modern physics is the strange manner in which our universe seems perfectly tuned to give rise to life. For some, it is evidence that our existence is no accident, while more skeptical thinkers have suggested that the thinking is back to front - and we only see things as perfectly tuned because life was what arose under the conditions of our universe.
The video above is from a recent discussion hosted by the Institute of Art and Ideas titled "A Goldilocks World", featuring philosopher Massimo Pigliucci, M-Theorist and author of Universe or Multiverse? Bernard Carr, and Oxford constructor theorist Chiara Marletto:
Is the universe finely tuned for life? Copernicus and Darwin taught us to be skeptical of feeling we were special. Yet from the size of the electron to the cosmological constant our universe is strangely fine-tuned for life. Is this a spectacularly fortuitous accident? Has the universe been tailored for us or do the theories just make it look that way?
The phenomenon of ball lightning remains largely a mystery to modern science, although it has at least become an accepted, if little-understood, phenomenon. One of the anomalies that has kept ball lightning on the outer margins of scientific respectability is the seeming 'impossibility' of its manifestation and movement: it sometimes appears within buildings and aircraft, or passes through closed windows.
A new theory from Chinese scientist H.-C. Wu, of Zhejiang University, may hold a possible answer to this strange behaviour. Wu has proposed that ball lightning might be 'microwave bubbles' formed from radiation emitted by storms, and this could explain their ability to appear or move within enclosed spaces:
Wu theorizes the microwaves arise from a bunch of electrons accelerated to speeds approaching the speed of light when the Earth is struck by lightning. Specifically, the electrons are accelerated by the strong electric field created as a channel of electrons moves stepwise from the base of a cloud toward the ground, just prior to the bright flash we know as a lightning bolt. “At the tip of a lightning stroke reaching the ground,” Wu says, “a relativistic electron bunch can be produced, which in turn excites intense microwave radiation.”
Regardless of their source, the atmospheric microwaves produce plasma by charging up the surrounding air. The radiation exerts sufficient pressure to push the plasma outward into a bubble, which we see as ball lightning. Microwaves trapped inside continue to generate plasma and so maintain the bubble for its brief lifetime. The ball lightning eventually fades as the radiation held within the bubble is dissipated. On the off chance the bubble is ruptured, microwaves can leak out and cause the ball to come to an explosive end.
The presence of microwaves and plasma as components of ball lightning can explain several of its properties. For example, microwaves can pass through panes of glass, which is why windows don’t bar the entrance of ball lightning. Microwaves also tend to make an audible noise when they encounter a person’s inner ear, and the plasma they produce will in turn generate acrid-smelling ozone from atmospheric oxygen.
What sets Wu’s microwave origin theory apart is that it explains how ball lightning can appear inside an aircraft. Electrons, being tiny relative to atoms, are able to pass through the metal shell of an aircraft after being accelerated outside of it via a lightning strike. Microwaves are then emitted by the souped-up electrons inside, where they form ball lightning. The electron-microwave-plasma pathway also explains the size of ball lightning, since the length of the electron bunch sped up by a lightning strike matches up with the typical 20-50 centimeter diameter of the resulting microwave bubble.
You might also like:
- "Hypnosis, Trance, and Human Evolution", by Adam Crabtree.
- "The Deep Mystery of the Prime Number", by Owen O'Shea.
- "The Temporal Architecture of Life: A Survey of Environmental Dynamics in Human Health", by Kenneth Smith.
Grab the free PDF of EdgeScience 26 from the SSE website, or purchase a printed copy from MagCloud for just $4.95. Please consider a small donation to help the EdgeScience team continue with this excellent publication, via the link on the right side of the webpage. And join the SSE if you want to keep up with the latest academic research into the 'edgier' areas of science.
Legends abound of ancient people building their monumental megalithic structures by levitating the massive blocks into place. While such ideas don't seem to have any real evidence to back them up, modern science has figured out one way to pull off this levitation 'magic': by using acoustic waves. Though, rather than 200 ton stones, researchers are using - rather disappointingly - styrofoam balls and water droplets.
Nevertheless, it's a cool effect, and the science behind it is fascinating to boot. Destin Sandlin of the excellent Smarter Every Day YouTube channel walks us through it all in the embedded video above.
When we recently saw that amazing video of Boston Dynamics' new Atlas robot being tested to the max, most of us felt empathy for it as it was 'bullied'. For those wondering what Atlas was actually thinking during testing, the above video may clear things up...
When it comes to humility, science can dish it out with a big spoon: we've often heard of the inconsequential nature of human beings compared to the size of the cosmos (and in fiction, Douglas Adams riffed on this idea in coming up with the Total Perspective Vortex in The Hitchhiker's Guide to the Galaxy).
While there's plenty of criticism that could be directed at this idea - that physical size is the be-all and end-all of importance (vs intelligence, imagination, purpose etc) - an interesting aside is the fact that, while our bodies seem like specks of dust, they contain systems that are cosmic in size.
One such example is human DNA: our body contains approximately five trillion cells, with 'long' strands of DNA immaculately folded into the tiny space within the cell walls. If you were to take all the DNA in just one person, straighten it out and put it end to end, it could stretch from the Sun to beyond the heliosphere (which some use as the demarcation of the 'edge' of our Solar System). Or to put it another way, the DNA molecules in your body could be stretched out to cover the distance from the Earth to Jupiter and back, ten times over.
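The arithmetic behind that factoid is easy to check on the back of an envelope. A quick sketch, assuming (as is commonly cited, though not stated in the article) roughly two metres of DNA per cell and a typical Earth-Jupiter distance of about 630 million km:

```python
# Back-of-envelope check of the "DNA stretched end to end" factoid.
# Assumptions not taken from the article: ~2 m of DNA per cell, and a
# representative Earth-Jupiter distance (it varies with orbital position).
CELLS = 5e12                  # cells in a human body (the article's figure)
DNA_PER_CELL_M = 2.0          # metres of DNA per cell (common estimate)
AU_M = 1.496e11               # one astronomical unit, in metres
EARTH_JUPITER_M = 6.3e11      # typical Earth-Jupiter distance, in metres

total_m = CELLS * DNA_PER_CELL_M
print(f"Total DNA length: {total_m:.1e} m")              # 1.0e+13 m
print(f"... = {total_m / AU_M:.0f} astronomical units")  # 67
print(f"... = {total_m / (2 * EARTH_JUPITER_M):.1f} Earth-Jupiter round trips")
```

Under these assumptions the total comes out at around eight Earth-Jupiter round trips, the same order of magnitude as the article's figure; the exact multiple depends on the cell count and per-cell length you start from.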
But perhaps an even more amazing aspect is the way in which this massive length of DNA molecules is compacted within our tiny cells: it needs to be folded, via a kind of biological origami, in specific ways so that our genes can work together.
If you have a gene it is often controlled - like, turned on or off - by another piece of DNA, that can be located very, very far apart from this gene. The chromosome is folded in such a way that the switch which turns the gene on or off is actually touching the gene. So all the DNA in between is looped.
These amazing aspects of DNA are discussed in the fascinating science short below, presented by the esteemed science writer Carl Zimmer:
Maybe it's the Toxoplasma gondii talking, but humans love cats. The feeling is mutual since, according to Carlos Driscoll of the University of Oxford, cats domesticated themselves 12,000 years ago in hopes of mooching off unsuspecting Homo sapiens. Charmed by their inscrutable personalities, we talk back to our feline companions by imitating their vocalisations. Arabs greet kitties with "mawa", the Japanese famously intone "nyan", and the French and Germans say "miaou" and "miau" respectively. Are these different onomatopoeias representative of human dialects, or are cats of faraway lands influenced by their humans' language?
Cat language is not such a silly prospect to consider. Last year scientists claimed a group of chimpanzees altered their vocalizations after being moved from a Dutch safari park to the Edinburgh Zoo, suggesting they have accents. Less contentious are the accents of whales, evinced by a study published in Royal Society Open Science illustrating how whalesong differs between populations of these magnificent beasts. So why not cats?
Susanne Schötz from Lund University in Sweden is spearheading this maverick study. She told Josh Hrala at Science Alert, "We know that cats vary the melody of their sounds extensively, but we do not know how to interpret this variation. We will record vocalisations of about 30 to 50 cats in different situations - e.g. when they want access to desired locations, when they are content, friendly, happy, hungry, annoyed or even angry - and try to identify any differences in their phonetic patterns. We want to find out to what extent domestic cats are influenced by the language and dialect that humans use to speak to them, because it seems that cats use slightly different dialects in the sounds they produce".
It's going to be a long five years 'til the results are published.
You may also enjoy:
- Interspecies Communication via Psychedelics?
- The Animal Mind Is More Complex Than Some Think
- The Language of the Birds: Hummingbird Vocalisations Eight Times Slower Than Normal Speed
- Why Do Cats Hang Around Us? (Hint: They Can't Open Cans) http://www.washingtonpost.com/wp-dyn/con...
- Debate over chimpanzee 'accent' study - http://www.bbc.com/news/science-environm...
- Individual, unit and vocal clan level identity cues in sperm whale codas - http://rsos.royalsocietypublishing.org/c...
Of the many questions that vex humanity, there is one above all others. It’s a question we’ve been asking ourselves since we realised we could ask ourselves questions. There are a lot of people who think they know the answer, even though there are almost more answers than there are people. Even so, we officially don’t know which of those many answers is the truth.
The question is: where did life come from?
If we gloss over the various theological discussions such a question evokes – if only because we haven’t got that kind of time – we still end up with an encyclopedia volume’s worth of theories, hypotheses, suppositions, and crackpot ideas. Primordial soup, panspermia and pseudo-panspermia, deep-hot-biosphere, the clay hypothesis, and several more. All of those ideas and those unlisted are encompassed under a single term: abiogenesis – which is the idea that life can spontaneously manifest out of non-living components. You might also hear the term biopoiesis tossed about in this conversation, which is just a more specific reference to the three stages of the development of life. But these fancy scientific words are such a small part of the question, it’s unfortunate so many people get hung up on them.
It’s important to understand that none of those theories has been confirmed, though. Or, well…we still don’t know which, if any of them, is correct. The front runner in this race is the chemical evolution theory of life, which is a reformed version of the primordial soup idea. Its basic tenets have been demonstrated in the laboratory, but just because life can arise in that way doesn’t mean it did (at least on Earth). The scientific community is still working to bring us an answer, but until we invent time travel, it’s possible we’ll never know for sure.
Of course, what we know about how life began pales in comparison to what we know about when it began on Earth.
Some time ago, I brought you a discussion on the likelihood that life has developed elsewhere in the galaxy, and the ways in which we speculate about how much of that life might exist. You’re probably familiar with the Drake Equation, which provides a way of mathematically estimating how many times life should have sprung up in our galaxy, and how many of those instances might reach the point of intelligent civilisation. It depends on several variables, most of which we have to guess at, but current estimates claim that there should be somewhere in the neighbourhood of 10,000 alien civilisations in the Milky Way.
The problem is (or one of the problems is) we’ve yet to find evidence of such life. And as disappointing as that is, it actually shouldn’t be surprising. As science-fiction author Douglas Adams once wrote:
“Space is big. Really big. You just won't believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it's a long way down the road to the chemist, but that's just peanuts to space.”
As I pointed out in previous discussion though, there are other considerations, such as Moore’s Law.
Moore’s Law, which is usually applied to the development of computer hardware, has been used by some to suggest that life on Earth is actually older than Earth itself. It’s an interesting idea – though it’s little more than a thought experiment – but it seems like some new developments are starting to back up those conclusions.
The main way scientists test and study the age of life on Earth is by breaking apart really old rocks and analysing the elements they find inside. Certain chemical compounds are to be expected inside rocks of any age: silica, iron (and other metals), some acids, oxygen, helium…the usual. But sometimes, trapped inside those rocks, are compounds that are unexpected, chemicals associated with life, such as graphite. Graphite is pure carbon, the key component of all life on Earth, and when you find it inside rocks it can be a telltale sign that life existed when that rock was formed. Deductive reasoning at its finest.
Using the above process of elimination, the standard model of biopoiesis tells us that life began on Earth between 3.5 and 3.83 billion years ago. That number sits well with most scientists, mainly because the period between 3.8 and 4.1 billion years ago is largely thought to have been so volatile (it’s known as the heavy bombardment period because of the massive and cataclysmic cosmic impacts that occurred during that time) that the development of life would have been impossible until it ended.
However, new results published in Proceedings of the National Academy of Sciences in September have thrown that bit of accepted wisdom right out the window. Mark Harrison, a geochemist at the University of California, Los Angeles, and co-author of the paper, explains that he and his team found strong evidence that life began more than 250 million years earlier than previously thought. Analysing some 10,000 zircon fragments from rocks found throughout Western Australia, Harrison found what appeared to be graphite inclusions embedded in 79 zircons (zircons are like diamonds: very hard, and able to form around elements from their environment). One of those tiny flecks has been confirmed as graphite, and through radiometric dating, that particular zircon is believed to be 4.1 billion years old.
Harrison admits that this finding would have been heretical 25 years ago, but the team's conclusions are compelling. As seems to be the trend in biopoiesis research, the age of life on Earth just keeps getting older and older. This new information not only suggests an earlier birthdate, it also says something about just how resilient life is: it either survived the incredible heat and radiation associated with the heavy bombardment period, or it sprang back up again immediately afterwards. Both of those possibilities are astounding. Not only would the survival of early life through such planetary upheaval be impressive, but the alternative means that genesis happened twice within a period of 300 million years. If that were the case, it suggests that life can form very quickly given the right conditions, lending even more weight to the idea that we should be surrounded by it in the universe.
Earth, at 4.543 billion years old, is a relatively young planet; there are much older ones just in the Milky Way. If we’ve had four billion years to get where we are, what of others who’ve had five? Eight? Ten?
“The universe is a lot more complicated than you might think, even if you start from a position of thinking that it’s pretty damn complicated to begin with.”
 Elizabeth A. Bell, Patrick Boehnke, T. Mark Harrison, and Wendy L. Mao. Potentially biogenic carbon preserved in a 4.1 billion-year-old zircon. Proceedings of the National Academy of Sciences, vol. 112 no. 47, 14518–14521, doi: 10.1073/pnas.1517557112. September 4, 2015.
Over the years, Planck's Principle (the notion that science advances one funeral at a time, as old guards die off and new ideas gain acceptance) has been popularized by scientists with respectable credentials who can't get peer-reviewed, even if they put nudies of Jennifer Lawrence in their appendices. It's cold comfort believing The Man's keeping them down and stalling scientific progress, but is that actually the case?
Over at the National Bureau of Economic Research, a new paper suggests the answer is a resounding yes. But like all topics muddied up with human emotions and foibles, the conclusion is hardly cut-and-dried.
Pierre Azoulay, Christian Fons-Rosen, and Joshua Graff Zivin chose to study the field of academic life sciences, where tons of discoveries made over past decades have opened up new frontiers and created many specialists for those new fields: a microcosm representative of the whole of science. Drawing upon the vast PubMed database, Azoulay and company determined who the superstars were in a particular field based on their professional achievements and papers. Out of more than 12,000 star scientists, they identified 452 who died suddenly. Their former collaborators, left in the lurch, saw their publication rates drop once they were no longer riding their deceased guru's coattails. After all, former colleagues would be wary of anyone finding out they hardly did any of the heavy lifting, which is where outsiders come in.
With big shoes to fill, newcomers take the deaths as an opportunity to submit more papers to bridge the gap. Then things get kinda Orwellian:
Our results indicate that these additional contributions by non-collaborators are disproportionately likely to be highly cited and to represent their authors' first foray into the extinct star's subfield. They also are less likely to cite previous research in the field, and especially less likely to cite the deceased star's work at all. Though not necessarily younger on average, these scientists are also less likely to be part of the scientific elite at the time of the star's death.
One of the biggest hurdles outsiders face is being accepted socially and intellectually. In the former case, colleagues only review each other's manuscripts, collaborating within their own clique. In the latter, there's an echo chamber of peers agreeing upon the approaches, methodologies, and questions pertinent to their line of inquiry, rather than entertaining new ideas. It's basic schoolyard politics, where kids won't let anyone join their club unless they're deemed smart or cool enough.
As for the specter of conspiracy, the paper's authors discovered that a mere handful of the 452 deceased researchers were in a position of power with regard to new research. Only three subjects sat on panels determining the merits of grant applications, and another three were journal editors before their deaths. It's more likely they were murdered by frustrated peers than that they were actively suppressing fresh science.
This isn't the last word on the subject, since this paper raises still more questions.
What is the fate of the fields that these new entrants departed? Do they decay, or instead "merge" with those whose star departed prematurely? Given a finite supply of scientists and the adjustment costs involved in switching scientific focus, one would expect some other field to contract on the margin in the wake of superstar extinction. Is this marginal field more novel, or already established?
You may also enjoy:
- To Celebrate the 100th Birthday of the Late Martin Gardner, Some Skepticism
- Maverick Biologist Rupert Sheldrake Criticizes Attacks by 'Guerilla Skeptics' on Wikipedia
- Biologist Rupert Sheldrake Explains the Ten Dogmas Holding Science Back
- The Myth of the Million Dollar Challenge
Thanks to David Pecotić and Grail-Seeker for sharing this paper!