Just One Glass Of This Drink Keeps Cancer Away
There are many natural cures for various diseases and health issues. You just need to take the chance you are given and try to fight cancer and other diseases using natural remedies. One way to improve your health and fight cancer is to drink a beverage made from nutrient-rich leafy vegetables.
#cancer #health #food #explore #drink #healthbenefits #remedy #medicine
Yelp for iOS updated with in-app support for Apple Pay checkout
Popular iOS app Yelp has today been updated with a notable addition: Apple Pay support. Bringing the app to version 11.25.0, the update lets users opt to pay with Apple Pay when booking with local businesses, restaurants, and more.
https://9to5mac.com/2017/10/11/yelp-for-ios-updated-with-in-app-support-for-apple-pay-checkout/

Zion Night Sky
The Milky Way above the slick rock country of Zion National Park. An image from a March trip to Zion. I knew there were some great rock formations on the east side of the park I could photograph as an interesting foreground at night.
Here is one of those rock formations with the arc of the Milky Way.
Sony A7S with Samyang 24mm f/1.4 lens
#wildernessphotographer #utah #zion #nationalpark #milkyway #nightscape #sonya7s #samyang
Low-cost battery from waste graphite
Lithium ion batteries are flammable and the price of the raw material is rising. Are there alternatives? Yes: Empa and ETH Zürich researchers have discovered promising approaches as to how we might produce batteries out of waste graphite and scrap metal.
Kostiantyn Kravchyk works in the group of Maksym Kovalenko. This research group is based at both ETH Zurich and in Empa's Laboratory for Thin Films and Photovoltaics. The two researchers' ambitious goal at the Empa branch is to make a battery out of the most common elements in the Earth's crust – such as magnesium or aluminum. These metals offer a high degree of safety, even if the anode is made of pure metal. This also offers the opportunity to assemble the batteries in a very simple and inexpensive way and to rapidly upscale the production.
Read more at:
https://phys.org/news/2017-10-low-cost-battery-graphite.html
Take a sneak peek at our rehearsals for our performance at the Pro Football Hall of Fame! Check it out tonight during Dallas Cowboys Cheerleaders: Making the Team on CMT!
Watch video: bit.ly/2kLMaoL
#Sports #NFL #Cheerleaders
(Credit: Dallas Cowboys Cheerleaders)
You can now download the Google Pixel 2's new launcher on any smartphone running Android Lollipop or above — no root required. #News #Google #GooglePixel2

Autumn Colors 2017
All the beauty of life is made up of light and shadow.
Bifue, Hokkaido
Japan
Photography and Copyright © Sylvia Ting
All Rights Reserved
#japan
#hokkaido
#bifue #美笛
#autumn2017
#nature
#btplandscapepro
+BTP Landscape Pro
#sylvia_photo

Stuck
Ya gotta wonder what's under the snow here! In some places you can no longer walk underneath the big gums, since there are 2-3 metres of snow and the lower branches are also partially encased in it.
Sadly, many of the snow gums in Kosciuszko National Park are well and truly dead from previous bush fires. I suspect that, unlike typical Aussie bush, which regenerates quite quickly after even the more severe bushfires, these gums may take many decades to come back... if at all.
14/10/2017
Icy Icy sticks
Gotta love nature. It's amazing how these icy sticks form and how quickly they disappear.
#kosciuszko
Man gets T-Mobile tattoo, receives a free iPhone 8 - Ah, the crazy things people will do nowadays for either social-media recognition or some free stuff. Combine both and you just strengthen their resolve. Back on the 6th of October, one Philip Harrison tweeted that he'd happily tattoo the T-Mobile logo on a visible place on his body in exchange for an iPhone 8. The tweet addressed T-Mo's CEO John Legere, who is quite active on social media and was quick to reply – deal! On the 8th ...
AI is only loosely modeled on the brain. So what if you wanted to do it right? You’d need to do what has been impossible until now: map what actually happens in neurons and nerve fibers.
“Here’s the problem with artificial intelligence today,” says David Cox. Yes, it has gotten astonishingly good, from near-perfect facial recognition to driverless cars and world-champion Go-playing machines. And it’s true that some AI applications don’t even have to be programmed anymore: they’re based on architectures that allow them to learn from experience.
Yet there is still something clumsy and brute-force about it, says Cox, a neuroscientist at Harvard.
“To build a dog detector, you need to show the program thousands of things that are dogs and thousands that aren’t dogs,” he says. “My daughter only had to see one dog and has happily pointed out puppies ever since.”
And the knowledge that today’s AI does manage to extract from all that data can be oddly fragile. Add some artful static to an image, noise that a human wouldn’t even notice, and the computer might just mistake a dog for a dumpster. That’s not good if people are using facial recognition for, say, security on smartphones (see “Is AI Riding a One-Trick Pony?”).
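A toy illustration can make this fragility concrete. The following sketch (not from the article; the weights and inputs are made up) shows that for a simple linear classifier, a small, targeted nudge to each input feature flips the decision even though the input barely changes. Gradient-based adversarial attacks on deep networks exploit the same principle at scale.

```python
import numpy as np

# Illustrative linear classifier: score > 0 means "dog".
w = np.array([1.0, -2.0, 0.5])      # classifier weights (made up)
x = np.array([0.5, 0.1, 0.2])       # a "dog" input: w @ x = 0.4 > 0

# Adversarial nudge: push each feature a tiny amount against the weights.
eps = 0.2                            # per-feature perturbation budget
x_adv = x - eps * np.sign(w)         # still very close to x...

print(w @ x > 0, w @ x_adv > 0)      # ...but the decision flips
```

Running this prints `True False`: a perturbation of at most 0.2 per feature drops the score from 0.4 to -0.3.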
To overcome such limitations, Cox and dozens of other neuroscientists and machine-learning experts joined forces last year for the Machine Intelligence from Cortical Networks (MICrONS) initiative: a $100 million effort to reverse-engineer the brain.
“It will be the neuroscience equivalent of a moonshot,” says Jacob Vogelstein, who conceived and launched MICrONS when he was a program officer for the Intelligence Advanced Research Projects Agency, the U.S. intelligence community’s research arm. (He is now at the venture capital firm Camden Partners in Baltimore.)
MICrONS researchers are attempting to chart the function and structure of every detail in a small piece of rodent cortex.
It’s a testament to the brain’s complexity that a moonshot is needed to map even this tiny piece of cortex, a cube measuring one millimeter on a side, the size of a coarse grain of sand. But this cube is thousands of times bigger than any chunk of brain anyone has tried to detail. It will contain roughly 100,000 neurons and something like a billion synapses, the junctions that allow nerve impulses to leap from one neuron to the next.
It’s an ambition that leaves other neuroscientists awestruck.
“I think what they are doing is heroic,” says Eve Marder, who has spent her entire career studying much smaller neural circuits at Brandeis University.
“It’s among the most exciting things happening in neuroscience,” says Konrad Kording, who does computational modeling of the brain at the University of Pennsylvania.
The ultimate payoff will be the neural secrets mined from the project’s data, principles that should form what Vogelstein calls “the computational building blocks for the next generation of AI.”
After all, he says, today’s neural networks are based on a decades-old architecture and a fairly simplistic notion of how the brain works.
Essentially, these systems spread knowledge across thousands of densely interconnected “nodes,” analogous to the brain’s neurons. The systems improve their performance by adjusting the strength of the connections. But in most computer neural networks the signals always cascade forward, from one set of nodes to the next.
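The forward-only cascade described above can be sketched as a toy two-layer network. The layer sizes and random weights here are arbitrary illustrations, not anything from the MICrONS project:

```python
import numpy as np

rng = np.random.default_rng(0)

# Connection strengths between layers of "nodes"; learning would
# adjust these, but inference only ever flows forward through them.
W1 = rng.normal(size=(4, 3))    # input -> hidden layer
W2 = rng.normal(size=(3, 2))    # hidden -> output layer

def forward(x):
    hidden = np.tanh(x @ W1)    # each node sums weighted inputs, then squashes
    return hidden @ W2          # signals cascade forward; nothing feeds back

x = np.array([0.5, -1.0, 0.25, 2.0])
y = forward(x)
print(y.shape)  # (2,)
```

The point of the sketch is structural: every arrow points from one layer to the next, which is exactly the property the brain's feedback-heavy wiring does not share.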
The real brain is full of feedback: for every bundle of nerve fibers conveying signals from one region to the next, there is an equal or greater number of fibers coming back the other way.
But why? Are those feedback fibers the secret to one-shot learning and so many other aspects of the brain’s immense power? Is something else going on?
MICrONS should provide at least some of the answers, says Princeton University neuroscientist Sebastian Seung, who is playing a key role in the mapping effort.
In fact, he says, “I don’t think we can answer these questions without a project like this.”
The MICrONS teams, one led by Cox, one based at Rice University and the Baylor College of Medicine, and a third at Carnegie Mellon, are each pursuing something that is remarkably comprehensive: a reconstruction of all the cells in a cubic millimeter of a rat’s brain, plus a wiring diagram, a “connectome”—showing how every cell is connected to every other cell, and data showing exactly which situations make neurons fire and influence other neurons.
The first step is to look into the rats’ brains and figure out what neurons in that cubic millimeter are actually doing. When the animal is given a specific visual stimulus, such as a line oriented a certain way, which neurons suddenly start firing off impulses, and which neighbors respond?
As recently as a decade ago, capturing that kind of data ranged from difficult to impossible. “The tools just never existed,” Vogelstein says.
It’s true that researchers could slide ultrathin wires into the brain and get beautiful recordings of electrical activity in individual neurons. But they couldn’t record from more than a few dozen at a time because the cells are packed so tightly together.
Researchers could also map the overall geography of neural activity by putting humans and other animals in MRI machines. But researchers couldn’t monitor individual neurons that way: the spatial resolution was about a millimeter at best.
What broke that impasse was the development of techniques for making neurons light up when they fire in a living brain.
To do it, scientists typically seed the neurons with fluorescent proteins that glow in the presence of calcium ions, which surge in abundance whenever a cell fires. The proteins can be inserted into a rodent’s brain chemically, carried in by a benign virus, or even encoded into the neurons’ genome.
The fluorescence can then be triggered in several ways, perhaps most usefully, by a pair of lasers that pump infrared light into the rat through a window set into its skull.
The infrared frequencies allow the photons to penetrate the comparatively opaque neural tissue without damaging anything, before getting absorbed by the fluorescent proteins. The proteins, in turn, combine the energy from two of the infrared photons and release it as a single visible-light photon that can be seen under an ordinary microscope as the animal looks at something or performs any number of other actions.
Andreas Tolias, who leads part of the team at Baylor, says this is “revolutionary” because “you can record from every single neuron, even those that are right next to one another.”
Once a team in Cox’s lab has mapped a rat’s neural activity, the animal is killed and its brain is infused with the heavy metal osmium. Then a team headed by Harvard biologist Jeff Lichtman cuts the brain into slices and figures out exactly how the neurons are organized and connected.
That process starts in a basement lab with a desktop machine that works like a delicatessen salami slicer. A small metal plate rises and falls, methodically carving away the tip of what seems to be an amber-colored crayon and adhering the slices to a conveyor belt made of plastic tape. The difference is that the “salami” is actually a tube of hard resin that encases and supports the fragile brain tissue, the moving plate contains an impossibly sharp diamond blade, and the slices are about 30 nanometers thick.
Next, at another lab down the hall, lengths of tape containing several brain slices each are mounted on silicon wafers and placed inside what looks like a large industrial refrigerator. The device is an electron microscope: it uses 61 electron beams to scan 61 patches of brain tissue simultaneously at a resolution of four nanometers.
Each wafer takes about 26 hours to scan. Monitors next to the microscope show the resulting images as they build up in awe-inspiring detail—cell membranes, mitochondria, neurotransmitter-filled vesicles crowding at the synapses. It’s like zooming in on a fractal: the closer you look, the more complexity you see.
Slicing is hardly the end of the story. Even as the scans come pouring out of the microscope (“You’re sort of making a movie where each slice is deeper,” says Lichtman), they are forwarded to a team led by Harvard computer scientist Hanspeter Pfister. “Our role is to take the images and extract as much information as we can,” says Pfister.
That means reconstructing all those three-dimensional neurons, with all their organelles, synapses, and other features, from a stack of 2-D slices. Humans could do it with paper and pencil, but that would be hopelessly slow, says Pfister. So he and his team have trained neural networks to track the real neurons. “They perform a lot better than all the other methods we’ve used,” he says.
Each neuron, no matter its size, puts out a forest of tendrils known as dendrites, and each has another long, thin fiber called an axon for transmitting nerve impulses over long distances, completely across the brain, in extreme cases, or even all the way down the spinal cord.
But by mapping a cubic millimeter as MICrONS is doing, researchers can follow most of these fibers from beginning to end and thus see a complete neural circuit. “I think we’ll discover things,” Pfister says. “Probably structures we never suspected, and completely new insights into the wiring.”
The power of anticipation
Among the questions the MICrONS teams hope to begin answering: What are the brain’s algorithms? How do all those neural circuits actually work? And in particular, what is all that feedback doing?
Many of today’s AI applications don’t use feedback.
Electronic signals in most neural networks cascade from one layer of nodes to the next, but generally not backward. (Don’t be thrown by the term “backpropagation,” which is a way to train neural networks.) That’s not a hard-and-fast rule: “recurrent” neural networks do have connections that go backward, which helps them deal with inputs that change with time.
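The recurrent connections mentioned above can be sketched as a minimal recurrent cell. Again the sizes and weights are illustrative only; the point is that the hidden state loops back and mixes with each new input, which is what lets the network handle time-varying signals:

```python
import numpy as np

rng = np.random.default_rng(1)

W_in  = rng.normal(size=(3, 5))    # input -> hidden
W_rec = rng.normal(size=(5, 5))    # hidden -> hidden: the feedback loop

def step(x, h):
    # The previous state h re-enters the computation alongside the new input,
    # so the cell's output depends on the whole history of the sequence.
    return np.tanh(x @ W_in + h @ W_rec)

h = np.zeros(5)
for t in range(4):                  # a short input sequence
    x_t = rng.normal(size=3)
    h = step(x_t, h)
print(h.shape)  # (5,)
```

Even so, this single self-loop is a far cry from the cortex, where feedback fibers from higher regions outnumber the feedforward ones.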
But none of them use feedback on anything like the brain’s scale. In one well-studied part of the visual cortex, says Tai Sing Lee at Carnegie Mellon, “only 5 to 10 percent of the synapses are listening to input from the eyes.” The rest are listening to feedback from higher levels in the brain.
There are two broad theories about what the feedback is for, says Cox, and one is the notion that the brain is constantly trying to predict its own inputs.
While the sensory cortex is processing this frame of the movie, so to speak, the higher levels of the brain are trying to anticipate the next frame, and passing their best guesses back down through the feedback fibers.
This is the only way the brain can deal with a fast-moving environment.
“Neurons are really slow,” Cox says. It can take 170 to 200 milliseconds to go from light hitting the retina through all the stages of processing up to the level of conscious perception. In that time, Serena Williams’s tennis serve travels nine meters. So anyone who manages to return that serve must be swinging her racket on the basis of prediction.
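The tennis arithmetic checks out. Assuming a serve speed of about 50 m/s (roughly 180 km/h, a plausible figure for Serena Williams; the article itself gives only the distance):

```python
# Back-of-the-envelope check of the perceptual-lag claim above.
serve_speed_m_per_s = 50.0           # assumed speed, not from the article
for lag_ms in (170, 200):            # the article's range of perceptual lag
    distance = serve_speed_m_per_s * lag_ms / 1000.0
    print(f"{lag_ms} ms -> {distance:.1f} m")
```

This prints 8.5 m and 10.0 m, bracketing the article's "nine meters".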
And if you’re constantly trying to predict the future, Cox says, then when the real future arrives, you can adjust to make your next prediction better.
That meshes well with the second major theory being explored: that the brain’s feedback connections are there to guide learning. Indeed, computer simulations show that a struggle for improvement forces any system to build better and better models of the world. For example, Cox says, “you have to figure out how a face will appear if it turns.” And that, he says, may turn out to be a critical piece of the one-shot-learning puzzle.
“When my daughter first saw a dog,” says Cox, “she didn’t have to learn about how shadows work, or how light bounces off surfaces.” She had already built up a rich reservoir of experience about such things, just from living in the world. “So when she got to something like ‘That’s a dog,’” he says, “she could add that information to a huge body of knowledge.”
If these ideas about the brain’s feedback are correct, they could show up in MICrONS’s detailed map of a brain’s form and function. The map could demonstrate what tricks the neural circuitry uses to implement prediction and learning. Eventually, new AI applications could mimic that process.
Even then, however, we will remain far from answering all the questions about the brain. Knowing neural circuitry won’t teach us everything.
There are forms of cell-to-cell communication that don’t go through the synapses, including some performed by hormones and neurotransmitters floating in the spaces between the neurons. There is also the issue of scale. As big a leap as MICrONS may be, it is still just looking at a tiny piece of cortex for clues about what’s relevant to computation. And the cortex is just the thin outer layer of the brain. Critical command-and-control functions are also carried out by deep-brain structures such as the thalamus and the basal ganglia.
The good news is that MICrONS is already paving the way for future projects that map larger sections of the brain.
Much of the $100 million, Vogelstein says, is being spent on data collection technologies that won’t have to be invented again.
At the same time, MICrONS teams are developing faster scanning techniques, including one that eliminates the need to slice tissue.
The Carnegie Mellon group, working with teams at Harvard, MIT, and the Woods Hole Oceanographic Institution, has devised a way to uniquely label each neuron with a “bar-coding” scheme and then view the cells in great detail by saturating them with a special gel that very gently inflates them to dozens or hundreds of times their normal size.
“So the first cubic millimeter will be hard to collect,” Vogelstein says, “but the next will be much easier.”
M. Mitchell Waldrop is a freelance writer in Washington, D.C. He is the author of Complexity and The Dream Machine and was formerly an editor at Nature.
Here’s the problem with artificial intelligence today says David Cox. Yes, it has gotten astonishingly good, from near-perfect facial recognition to driverless cars and world-champion Go-playing machines. And it’s true that some AI applications don’t even have to be programmed anymore: they’re based on architectures that allow them to learn from experience.
Yet there is still something clumsy and brute-force about it, says Cox a neuroscientist at Harvard.
To build a dog detector, you need to show the program thousands of things that are dogs and thousands that aren’t dogs he says. My daughter only had to see one dog and has happily pointed out puppies ever since.
And the knowledge that today’s AI does manage to extract from all that data can be oddly fragile. Add some artful static to an image, noise that a human wouldn’t even notice, and the computer might just mistake a dog for a dumpster. That’s not good if people are using facial recognition for, say, security on smartphones (see “Is AI Riding a One-Trick Pony?”).
To overcome such limitations, Cox and dozens of other neuroscientists and machine-learning experts joined forces last year for the Machine Intelligence from Cortical Networks (MICrONS) initiative: a $100 million effort to reverse-engineer the brain.
It will be the neuroscience equivalent of a moonshot, says Jacob Vogelstein who conceived and launched MICrONS when he was a program officer for the Intelligence Advanced Research Projects Agency, the U.S. intelligence community’s research arm. (He is now at the venture capital firm Camden Partners in Baltimore.)
MICrONS researchers are attempting to chart the function and structure of every detail in a small piece of rodent cortex.
It’s a testament to the brain’s complexity that a moonshot is needed to map even this tiny piece of cortex, a cube measuring one millimeter on a side, the size of a coarse grain of sand. But this cube is thousands of times bigger than any chunk of brain anyone has tried to detail. It will contain roughly 100,000 neurons and something like a billion synapses, the junctions that allow nerve impulses to leap from one neuron to the next.
It’s an ambition that leaves other neuroscientists awestruck.
I think what they are doing is heroic says Eve Marder who has spent her entire career studying much smaller neural circuits at Brandeis University.
It’s among the most exciting things happening in neuroscience says Konrad Kording who does computational modeling of the brain at the University of Pennsylvania.
The ultimate payoff will be the neural secrets mined from the project’s data, principles that should form what Vogelstein calls “the computational building blocks for the next generation of AI.”
After all, he says, today’s neural networks are based on a decades-old architecture and a fairly simplistic notion of how the brain works.
Essentially, these systems spread knowledge across thousands of densely interconnected “nodes,” analogous to the brain’s neurons. The systems improve their performance by adjusting the strength of the connections. But in most computer neural networks the signals always cascade forward, from one set of nodes to the next.
The real brain is full of feedback: for every bundle of nerve fibers conveying signals from one region to the next, there is an equal or greater number of fibers coming back the other way.
But why? Are those feedback fibers the secret to one-shot learning and so many other aspects of the brain’s immense power? Is something else going on?
MICrONS should provide at least some of the answers, says Princeton University neuroscientist Sebastian Seung who is playing a key role in the mapping effort.
In fact, he says I don’t think we can answer these questions without a project like this.
The MICrONS teams, one led by Cox, one based at Rice University and the Baylor College of Medicine, and a third at Carnegie Mellon, are each pursuing something that is remarkably comprehensive: a reconstruction of all the cells in a cubic millimeter of a rat’s brain, plus a wiring diagram, a “connectome”—showing how every cell is connected to every other cell, and data showing exactly which situations make neurons fire and influence other neurons.
The first step is to look into the rats’ brains and figure out what neurons in that cubic millimeter are actually doing. When the animal is given a specific visual stimulus, such as a line oriented a certain way, which neurons suddenly start firing off impulses, and which neighbors respond?
As recently as a decade ago, capturing that kind of data ranged from difficult to impossible: The tools just never existed Vogelstein says.
It’s true that researchers could slide ultrathin wires into the brain and get beautiful recordings of electrical activity in individual neurons. But they couldn’t record from more than a few dozen at a time because the cells are packed so tightly together.
Researchers could also map the overall geography of neural activity by putting humans and other animals in MRI machines. But researchers couldn’t monitor individual neurons that way: the spatial resolution was about a millimeter at best.
What broke that impasse was the development of techniques for making neurons light up when they fire in a living brain.
To do it, scientists typically seed the neurons with fluorescent proteins that glow in the presence of calcium ions, which surge in abundance whenever a cell fires. The proteins can be inserted into a rodent’s brain chemically, carried in by a benign virus, or even encoded into the neurons’ genome.
The fluorescence can then be triggered in several ways, perhaps most usefully, by a pair of lasers that pump infrared light into the rat through a window set into its skull.
The infrared frequencies allow the photons to penetrate the comparatively opaque neural tissue without damaging anything, before getting absorbed by the fluorescent proteins. The proteins, in turn, combine the energy from two of the infrared photons and release it as a single visible-light photon that can be seen under an ordinary microscope as the animal looks at something or performs any number of other actions.
Andreas Tolias who leads part of the team at Baylor, says this is “revolutionary” because “you can record from every single neuron, even those that are right next to one another.”
Once a team in Cox’s lab has mapped a rat’s neural activity, the animal is killed and its brain is infused with the heavy metal osmium. Then a team headed by Harvard biologist Jeff Lichtman cuts the brain into slices and figures out exactly how the neurons are organized and connected.
That process starts in a basement lab with a desktop machine that works like a delicatessen salami slicer. A small metal plate rises and falls, methodically carving away the tip of what seems to be an amber-colored crayon and adhering the slices to a conveyor belt made of plastic tape. The difference is that the “salami” is actually a tube of hard resin that encases and supports the fragile brain tissue, the moving plate contains an impossibly sharp diamond blade, and the slices are about 30 nanometers thick.
Next, at another lab down the hall, lengths of tape containing several brain slices each are mounted on silicon wafers and placed inside what looks like a large industrial refrigerator. The device is an electron microscope: it uses 61 electron beams to scan 61 patches of brain tissue simultaneously at a resolution of four nanometers.
Each wafer takes about 26 hours to scan. Monitors next to the microscope show the resulting images as they build up in awe-inspiring detail—cell membranes, mitochondria, neurotransmitter-filled vesicles crowding at the synapses. It’s like zooming in on a fractal: the closer you look, the more complexity you see.
Slicing is hardly the end of the story. Even as the scans come pouring out of the microscope You’re sort of making a movie where each slice is deeper says Lichtman, they are forwarded to a team led by Harvard computer scientist Hanspeter Pfister. “Our role is to take the images and extract as much information as we can,” says Pfister.
That means reconstructing all those three-dimensional neurons, with all their organelles, synapses, and other features, from a stack of 2-D slices. Humans could do it with paper and pencil, but that would be hopelessly slow, says Pfister. So he and his team have trained neural networks to track the real neurons. They perform a lot better than all the other methods we’ve used he says.
Each neuron, no matter its size, puts out a forest of tendrils known as dendrites, and each has another long, thin fiber called an axon for transmitting nerve impulses over long distances, completely across the brain, in extreme cases, or even all the way down the spinal cord.
But by mapping a cubic millimeter as MICrONS is doing, researchers can follow most of these fibers from beginning to end and thus see a complete neural circuit. I think we’ll discover things Pfister says. Probably structures we never suspected, and completely new insights into the wiring.
The power of anticipation
Among the questions the MICrONS teams hope to begin answering: What are the brain’s algorithms? How do all those neural circuits actually work? And in particular, what is all that feedback doing?
Many of today’s AI applications don’t use feedback.
Electronic signals in most neural networks cascade from one layer of nodes to the next, but generally not backward. (Don’t be thrown by the term “backpropagation,” which is a way to train neural networks.) That’s not a hard-and-fast rule: “recurrent” neural networks do have connections that go backward, which helps them deal with inputs that change with time.
But none of them use feedback on anything like the brain’s scale. In one well-studied part of the visual cortex, says Tai Sing Lee at Carnegie Mellon, “only 5 to 10 percent of the synapses are listening to input from the eyes.” The rest are listening to feedback from higher levels in the brain.
There are two broad theories about what the feedback is for, says Cox, and one is the notion that the brain is constantly trying to predict its own inputs.
While the sensory cortex is processing this frame of the movie, so to speak, the higher levels of the brain are trying to anticipate the next frame, and passing their best guesses back down through the feedback fibers.
This is the only way the brain can deal with a fast-moving environment.
“Neurons are really slow,” Cox says. It can take 170 to 200 milliseconds to go from light hitting the retina through all the stages of processing up to the level of conscious perception. In that time, Serena Williams’s tennis serve travels nine meters. So anyone who manages to return that serve must be swinging her racket on the basis of prediction.
“And if you’re constantly trying to predict the future,” Cox says, “then when the real future arrives, you can adjust to make your next prediction better.”
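A toy version of that predict-then-adjust loop (pure Python; the signal and learning rate are illustrative, not anything from the study): a higher level keeps a running guess at the next input, and each prediction error nudges its internal model so the next guess is better:

```python
# Toy predictive-coding loop: predict the next input frame, then
# learn from the prediction error. All values are illustrative.
signal = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]  # a steadily rising "input"

velocity = 0.0   # the model's guess at how fast the input changes
lr = 0.5         # learning rate for error-driven updates

errors = []
for prev, nxt in zip(signal, signal[1:]):
    prediction = prev + velocity   # best guess at the next frame
    error = nxt - prediction       # how wrong was the guess?
    velocity += lr * error         # adjust the model using the error
    errors.append(abs(error))

# The prediction errors shrink trial by trial as the model
# captures the input's dynamics.
```

Even this tiny loop shows the key property: the error signal itself drives learning, so prediction and adjustment are two halves of one mechanism.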
That meshes well with the second major theory being explored: that the brain’s feedback connections are there to guide learning. Indeed, computer simulations show that a struggle for improvement forces any system to build better and better models of the world. For example, Cox says, “you have to figure out how a face will appear if it turns.” And that, he says, may turn out to be a critical piece of the one-shot-learning puzzle.
“When my daughter first saw a dog,” says Cox, “she didn’t have to learn about how shadows work, or how light bounces off surfaces. She had already built up a rich reservoir of experience about such things, just from living in the world. So when she got to something like ‘That’s a dog,’ she could add that information to a huge body of knowledge.”
If these ideas about the brain’s feedback are correct, they could show up in MICrONS’s detailed map of a brain’s form and function. The map could demonstrate what tricks the neural circuitry uses to implement prediction and learning. Eventually, new AI applications could mimic that process.
Even then, however, we will remain far from answering all the questions about the brain. Knowing neural circuitry won’t teach us everything.
There are forms of cell-to-cell communication that don’t go through the synapses, including some performed by hormones and neurotransmitters floating in the spaces between the neurons. There is also the issue of scale. As big a leap as MICrONS may be, it is still just looking at a tiny piece of cortex for clues about what’s relevant to computation. And the cortex is just the thin outer layer of the brain. Critical command-and-control functions are also carried out by deep-brain structures such as the thalamus and the basal ganglia.
The good news is that MICrONS is already paving the way for future projects that map larger sections of the brain.
Much of the $100 million, Vogelstein says, is being spent on data collection technologies that won’t have to be invented again.
At the same time, MICrONS teams are developing faster scanning techniques, including one that eliminates the need to slice tissue.
The Carnegie Mellon group, working with teams at Harvard, MIT, and the Woods Hole Oceanographic Institution, has devised a way to uniquely label each neuron with a “bar-coding” scheme and then view the cells in great detail by saturating them with a special gel that very gently inflates them to dozens or hundreds of times their normal size.
“So the first cubic millimeter will be hard to collect,” Vogelstein says, “but the next will be much easier.”
M. Mitchell Waldrop is a freelance writer in Washington, D.C. He is the author of Complexity and The Dream Machine and was formerly an editor at Nature.
The new camera app port brings Pixel 2 features like Face Retouching to the original Pixels. #News #Google #GooglePixel
NGC 1365: Majestic Island Universe
Image Credit & Copyright: Dietmar Hager, Eric Benson, Torsten Grossmann
Explanation: Barred spiral galaxy NGC 1365 is truly a majestic island universe some 200,000 light-years across. Located a mere 60 million light-years away toward the chemical constellation Fornax, NGC 1365 is a dominant member of the well-studied Fornax galaxy cluster. This impressively sharp color image shows intense star forming regions at the ends of the bar and along the spiral arms, and details of dust lanes cutting across the galaxy's bright core. At the core lies a supermassive black hole. Astronomers think NGC 1365's prominent bar plays a crucial role in the galaxy's evolution, drawing gas and dust into a star-forming maelstrom and ultimately feeding material into the central black hole.
OFFICIAL #F1 Report: Red Bull Racing - Japanese GP Event Recap in Articles and Images: The Japanese GP Event Recap. Race Recaps in Articles and Images. Hope You enjoy them... © Copyrights Apply
This is the first of a series of cartoons featuring the near sighted Mister Magoo. He and his nephew, Waldo, are en route to a vacation at a lodge in the mountains and Waldo's banjo playing tends to be rather annoying for Magoo but a nearby grizzly bear is really interested in the banjo. When Waldo falls off a mountain ledge, the bear gets hold of the banjo and starts playing. Magoo is so near sighted, he doesn't know Waldo is gone and thinks the bear playing the banjo is Waldo. When they arrive at their lodge, Magoo and the bear get into more misadventures ...
Magoo Is So Different Looking From The Magoo I Know But There Is No Denying The Voice Of Jim Backus (RIP) ... Funny Episode As We Also Meet Magoo's Nephew Waldo ... My IMDB Rating 7 Out Of 10
Figuring out how to pedal a bike and memorizing the rules of chess require two different types of learning, and now for the first time, researchers have been able to distinguish each type of learning by the brain-wave patterns it produces.
These distinct neural signatures could guide scientists as they study the underlying neurobiology of how we both learn motor skills and work through complex cognitive tasks, says Earl K. Miller, the Picower Professor of Neuroscience at the Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences, and senior author of a paper describing the findings in the Oct. 11 edition of Neuron.
When neurons fire, they produce electrical signals that combine to form brain waves that oscillate at different frequencies.
“Our ultimate goal is to help people with learning and memory deficits,” notes Miller. “We might find a way to stimulate the human brain or optimize training techniques to mitigate those deficits.”
The neural signatures could help identify changes in learning strategies that occur in diseases such as Alzheimer’s, with an eye to diagnosing these diseases earlier or enhancing certain types of learning to help patients cope with the disorder, says Roman F. Loonis, a graduate student in the Miller Lab and first author of the paper. Picower Institute research scientist Scott L. Brincat and former MIT postdoc Evan G. Antzoulatos, now at the University of California at Davis, are co-authors.
Explicit versus implicit learning
Scientists used to think all learning was the same, Miller explains, until they learned about patients such as the famous Henry Molaison or “H.M.,” who developed severe amnesia in 1953 after having part of his brain removed in an operation to control his epileptic seizures.
Molaison couldn’t remember eating breakfast a few minutes after the meal, but he was able to learn and retain new motor skills, such as tracing objects like a five-pointed star in a mirror.
“H.M. and other amnesiacs got better at these skills over time, even though they had no memory of doing these things before,” Miller says.
The divide revealed that the brain engages in two types of learning and memory — explicit and implicit.
“Explicit learning is learning that you have conscious awareness of, when you think about what you’re learning and you can articulate what you’ve learned, like memorizing a long passage in a book or learning the steps of a complex game like chess,” Miller explains.
“Implicit learning is the opposite. You might call it motor skill learning or muscle memory, the kind of learning that you don’t have conscious access to, like learning to ride a bike or to juggle,” he adds. “By doing it you get better and better at it, but you can’t really articulate what you’re learning.”
Many tasks, like learning to play a new piece of music, require both kinds of learning, he notes.
Brain waves from earlier studies
When the MIT researchers studied the behavior of animals learning different tasks, they found signs that different tasks might require either explicit or implicit learning. In tasks that required comparing and matching two things, for instance, the animals appeared to use both correct and incorrect answers to improve their next matches, indicating an explicit form of learning. But in a task where the animals learned to move their gaze one direction or another in response to different visual patterns, they only improved their performance in response to correct answers, suggesting implicit learning.
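One way to picture that behavioral difference (a toy simulation, not the study's actual model; the task, learning rate, and trial count are all invented for illustration): an "explicit"-style learner extracts information from every trial, while an "implicit"-style learner updates only after correct ones:

```python
import random

def run_learner(learn_from_errors, trials=200, seed=42):
    """Toy one-stimulus task in which the correct response is '1'."""
    rng = random.Random(seed)
    pref = 0.5  # probability of giving the correct response
    for _ in range(trials):
        responded_correctly = rng.random() < pref
        if responded_correctly:
            pref += 0.1 * (1 - pref)   # both learners learn from success
        elif learn_from_errors:
            pref += 0.1 * (1 - pref)   # "explicit" learner also uses errors
    return pref

explicit = run_learner(learn_from_errors=True)
implicit = run_learner(learn_from_errors=False)
# The explicit-style learner, which treats a wrong answer as information
# too, converges at least as fast as the implicit-style one.
```

The point of the sketch is only the asymmetry the researchers observed: when errors carry usable information, performance improves after both outcomes; when they do not, only correct trials move the needle.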
What’s more, the researchers found, these different types of behavior are accompanied by different patterns of brain waves.
During explicit learning tasks, there was an increase in alpha2-beta brain waves (oscillating at 10-30 hertz) following a correct choice, and an increase in delta-theta waves (3-7 hertz) after an incorrect choice. The alpha2-beta waves increased early in learning during explicit tasks, then decreased as learning progressed. The researchers also saw signs of a neural spike in activity that occurs in response to behavioral errors, called event-related negativity, only in the tasks that were thought to require explicit learning.
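The bands named here are conventional frequency ranges, and band power is typically read off a Fourier transform of the recorded signal. As a rough sketch (NumPy, with a synthetic two-component signal standing in for real neural data; sampling rate and amplitudes are invented):

```python
import numpy as np

fs = 250                      # sampling rate in hertz (illustrative)
t = np.arange(0, 2, 1 / fs)   # two seconds of signal
# Synthetic trace: a 5 Hz "delta-theta" component plus a stronger
# 20 Hz "alpha2-beta" component.
signal = np.sin(2 * np.pi * 5 * t) + 2 * np.sin(2 * np.pi * 20 * t)

freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2

def band_power(lo, hi):
    # Sum the spectral power falling inside one frequency band.
    mask = (freqs >= lo) & (freqs <= hi)
    return power[mask].sum()

theta = band_power(3, 7)    # delta-theta band (3-7 Hz)
beta = band_power(10, 30)   # alpha2-beta band (10-30 Hz)
```

In this synthetic example the alpha2-beta band carries more power, matching the stronger 20 Hz component; tracking how such band powers change after correct versus incorrect trials is the kind of measurement the study describes.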
“The increase in alpha2-beta brain waves during explicit learning could reflect the building of a model of the task,” Miller explains. “And then after the animal learns the task, the alpha2-beta rhythms drop off, because the model is already built.”
By contrast, delta-theta rhythms only increased with correct answers during an implicit learning task, and they decreased during learning. Miller says this pattern could reflect neural “rewiring” that encodes the motor skill during learning.
“This showed us that there are different mechanisms at play during explicit versus implicit learning,” he notes.
Future boost to learning
Loonis says the brain wave signatures might be especially useful in shaping how we teach or train a person as they learn a specific task.
“If we can detect the kind of learning that’s going on, then we may be able to enhance or provide better feedback for that individual,” he says.
“For instance, if they are using implicit learning more, that means they’re more likely relying on positive feedback, and we could modify their learning to take advantage of that.”
The neural signatures could also help detect disorders such as Alzheimer’s disease at an earlier stage, Loonis says.
“In Alzheimer’s, a kind of explicit fact learning disappears with dementia, and there can be a reversion to a different kind of implicit learning,” he explains. “Because the one learning system is down, you have to rely on another one.”
Earlier studies have shown that certain parts of the brain such as the hippocampus are more closely related to explicit learning, while areas such as the basal ganglia are more involved in implicit learning. But Miller says that the brain wave study indicates “a lot of overlap in these two systems. They share a lot of the same neural networks.”