Elucidate

Blog just for my science interests, keeping my personal life outta this one. Nothing I post is my own unless otherwise stated.
neurosciencenews:

Scientists Find Six New Genetic Risk Factors for Parkinson’s

Read the full article Scientists Find Six New Genetic Risk Factors for Parkinson’s at NeuroscienceNews.com.

Using data from over 18,000 patients, scientists have identified more than two dozen genetic risk factors involved in Parkinson’s disease, including six that had not been previously reported. The study, published in Nature Genetics, was partially funded by the National Institutes of Health (NIH) and led by scientists working in NIH laboratories.

The research is in Nature Genetics. (full access paywall)

Research: “Large-scale meta-analysis of genome-wide association data identifies six new risk loci for Parkinson’s disease” by Mike A Nalls, Nathan Pankratz, Christina M Lill, Chuong B Do, Dena G Hernandez, Mohamad Saad, Anita L DeStefano, Eleanna Kara, Jose Bras, Manu Sharma, Claudia Schulte, Margaux F Keller, Sampath Arepalli, Christopher Letson, Connor Edsall, Hreinn Stefansson, Xinmin Liu, Hannah Pliner, Joseph H Lee, Rong Cheng, International Parkinson’s Disease Genomics Consortium (IPDGC), Parkinson’s Study Group (PSG) Parkinson’s Research: The Organized GENetics Initiative (PROGENI), 23andMe, GenePD, NeuroGenetics Research Consortium (NGRC), Hussman Institute of Human Genomics (HIHG), The Ashkenazi Jewish Dataset Investigator, Cohorts for Health and Aging Research in Genetic Epidemiology (CHARGE), North American Brain Expression Consortium (NABEC), United Kingdom Brain Expression Consortium (UKBEC), Greek Parkinson’s Disease Consortium, Alzheimer Genetic Analysis Group, M Arfan Ikram, John P A Ioannidis, Georgios M Hadjigeorgiou, Joshua C Bis, Maria Martinez, Joel S Perlmutter, Alison Goate, Karen Marder, Brian Fiske, Margaret Sutherland, Georgia Xiromerisiou, Richard H Myers, Lorraine N Clark, Kari Stefansson, John A Hardy, Peter Heutink, Honglei Chen, Nicholas W Wood, Henry Houlden, Haydeh Payami, Alexis Brice, William K Scott, Thomas Gasser, Lars Bertram, Nicholas Eriksson, Tatiana Foroud and Andrew B Singleton in Nature Genetics. Published online July 27, 2014. doi:10.1038/ng.3043

Image: Scientists used gene chips to help discover new genes that may be involved with Parkinson’s disease. Credit National Human Genome Research Institute.

mindblowingscience:

Children Exposed To Religion Have Difficulty Distinguishing Fact From Fiction, Study Finds

The Huffington Post | By Shadee Ashtari

Young children who are exposed to religion have a hard time differentiating between fact and fiction, according to a new study published in the July issue of Cognitive Science.

Researchers presented 5- and 6-year-old children from both public and parochial schools with three different types of stories — religious, fantastical and realistic — in an effort to gauge how well they could identify narratives with impossible elements as fictional.

The study found that, of the 66 participants, children who went to church or were enrolled in a parochial school were significantly less able than secular children to identify supernatural elements, such as talking animals, as fictional.

Because they related seemingly impossible religious events achieved through divine intervention (e.g., Jesus transforming water into wine) to the fictional narratives, religious children relied more heavily on religion to justify their false categorizations.

“In both studies, [children exposed to religion] were less likely to judge the characters in the fantastical stories as pretend, and in line with this equivocation, they made more appeals to reality and fewer appeals to impossibility than did secular children,” the study concluded.

Refuting previous hypotheses claiming that children are “born believers,” the authors suggest that “religious teaching, especially exposure to miracle stories, leads children to a more generic receptivity toward the impossible, that is, a more wide-ranging acceptance that the impossible can happen in defiance of ordinary causal relations.”

According to 2013-2014 Gallup data, roughly 83 percent of Americans report a religious affiliation, and an even larger group — 86 percent — believe in God.

More than a quarter of Americans, 28 percent, also believe the Bible is the actual word of God and should be taken literally, while another 47 percent say the Bible is the inspired word of God.

(via interferon-gamma)

The brachistochrone

This animation is about one of the most significant problems in the history of mathematics: the brachistochrone challenge.

If a ball is to roll down a ramp which connects two points, what must the shape of the ramp’s curve be, such that the descent time is a minimum?

Intuition says that it should be a straight line. That would minimize the distance, but the minimum time happens when the ramp curve is the one shown: a cycloid.
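
Concretely, the claim comes from minimizing the descent-time functional. A textbook sketch of the setup (not taken from the animation itself), with y measured downward from the start point:

```latex
% Measure y downward; the ball is released from rest, so energy
% conservation gives v = \sqrt{2gy}, and the descent time along y(x) is
\[
  T[y] = \int_0^{x_1} \sqrt{\frac{1 + y'^2}{2 g y}}\, dx .
\]
% Minimizing T via the Euler--Lagrange (Beltrami) identity reduces to
% y\,(1 + y'^2) = \text{const}, whose solutions are the cycloids
\[
  x = a(\theta - \sin\theta), \qquad y = a(1 - \cos\theta).
\]
```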

Johann Bernoulli posed the problem to the mathematicians of Europe in 1696, and ultimately, several found the solution. However, a new branch of mathematics, calculus of variations, had to be invented to deal with such problems. Today, calculus of variations is vital in quantum mechanics and other fields.

(via interferon-gamma)

The interplay between these new theoretical ideas and new high‐quality observational data has catapulted cosmology from the purely theoretical domain and into the field of rigorous experimental science. This process began at the beginning of the twentieth century, with the work of Albert Einstein.
Free chapter from Cosmology: A Very Short Introduction on the history of cosmology and how it extends from myth to science. This chapter is free until 25 September on Very Short Introductions Online. (via oupacademic)

(via afro-dominicano)

medievalpoc:

Medievalpoc Presents: History of POC in Math and Science Week, 8-3-14 through 8-9-14!

Medievalpoc’s first Patreon Milestone Goal has been reached, and the History of POC in Math and Science Week is happening soon! This all-new themed week will focus on the contribution of people of color to the fields of mathematics, science, physics, medicine, natural philosophy, and much, much more!

There will be a focus on primary documents with interactive elements, visual and documentary evidence, innovators and their biographies, and notable personages of color from the Islamic Golden Age, Medieval Europe, African Empires and Universities, and Asian images and texts, plus discussion of how this knowledge traveled through early modern globalization.

If you have an article, image, document, or commentary you would like to submit, here’s your chance to weigh in on this topic! Please use the “Math and Science Week” and any other relevant tags for your submission, and I look forward to hearing about your favorite mathematicians and scientists of color!

(via afro-dominicano)

spaceexp:

The background for NASA’s Space Shuttle page

mindblowingscience:

Researchers eliminate HIV from cultured human cells for first time

HIV-1, the most common type of the virus that causes AIDS, has proved to be tenacious, inserting its genome permanently into its victims’ DNA, forcing patients to take a lifelong drug regimen to control the virus and prevent a fresh attack. Now, a team of Temple University School of Medicine researchers has designed a way to snip out the integrated HIV-1 genes for good.

"This is one important step on the path toward a permanent cure for AIDS," says Kamel Khalili, PhD, Professor and Chair of the Department of Neuroscience at Temple. Khalili and his colleague, Wenhui Hu, MD, PhD, Associate Professor of Neuroscience at Temple, led the work which marks the first successful attempt to eliminate latent HIV-1 virus from human cells. "It’s an exciting discovery, but it’s not yet ready to go into the clinic. It’s a proof of concept that we’re moving in the right direction," added Dr. Khalili, who is also Director of the Center for Neurovirology and Director of the Comprehensive NeuroAIDS Center at Temple.

In a study published July 21 in the Proceedings of the National Academy of Sciences, Khalili and colleagues detail how they created molecular tools to delete the HIV-1 proviral DNA. When deployed, a combination of a DNA-snipping enzyme called a nuclease and a targeting strand of RNA called a guide RNA (gRNA) hunts down the viral genome and excises the HIV-1 DNA. From there, the cell’s gene repair machinery takes over, soldering the loose ends of the genome back together — resulting in virus-free cells.
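
As a toy illustration of that cut-and-rejoin idea (a deliberately crude string sketch with made-up sequences, not the Temple team’s actual molecular protocol):

```python
# Toy model of gRNA-directed excision: cut at two matching target sites
# flanking the provirus and rejoin the loose ends, as the cell's repair
# machinery does. All sequences below are invented for illustration.

def excise_provirus(genome: str, target: str) -> str:
    """Remove everything from the first target site through the last one."""
    first = genome.find(target)
    last = genome.rfind(target)
    if first == -1 or last <= first:
        return genome  # fewer than two sites: nothing to excise
    return genome[:first] + genome[last + len(target):]

host = "AAGGTT"
ltr = "TGGAAGGGCT"                             # stand-in for a repeated target site
infected = host + ltr + "GAGCCC" + ltr + host  # provirus flanked by repeats

print(excise_provirus(infected, ltr))  # -> "AAGGTTAAGGTT", i.e. virus-free
```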

"Since HIV-1 is never cleared by the immune system, removal of the virus is required in order to cure the disease," says Khalili, whose research focuses on the neuropathogenesis of viral infections. The same technique could theoretically be used against a variety of viruses, he says.

(via scinerds)

neurosciencestuff:

(Image caption: Techniques known as dimensionality reduction can help find patterns in the recorded activity of thousands of neurons. Rather than look at all responses at once, these methods find a smaller set of dimensions — in this case three — that capture as much structure in the data as possible. Each trace in these graphics represents the activity of the whole brain during a single presentation of a moving stimulus, and different versions of the analysis capture structure related either to the passage of time (left) or the direction of the motion (right). The raw data is the same in both cases, but the analyses find different patterns. Credit: Jeremy Freeman, Nikita Vladimirov, Takashi Kawashima, Yu Mu, Nicholas Sofroniew, Davis Bennett, Joshua Rosen, Chao-Tsung Yang, Loren Looger, Philipp Keller, Misha Ahrens)
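
To make the caption’s “smaller set of dimensions” concrete, here is a minimal PCA sketch on synthetic data (the paper’s analyses are more sophisticated than plain PCA; every number below is invented):

```python
# Minimal dimensionality-reduction sketch: reduce thousands of recorded
# responses to a few dimensions with PCA. Data here are synthetic.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_neurons, n_timepoints = 2000, 500

# Fake recording: 3 shared latent signals plus per-neuron noise.
latents = rng.standard_normal((3, n_timepoints))
mixing = rng.standard_normal((n_neurons, 3))
activity = mixing @ latents + 0.5 * rng.standard_normal((n_neurons, n_timepoints))

# Find 3 dimensions that capture as much structure as possible.
pca = PCA(n_components=3)
trajectory = pca.fit_transform(activity.T)  # (time, 3): one 3-D trace over time
print(trajectory.shape, pca.explained_variance_ratio_)
```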

New Tools Help Neuroscientists Analyze Big Data

In an age of “big data,” a single computer cannot always find the solution a user wants. Computational tasks must instead be distributed across a cluster of computers that analyze a massive data set together. It’s how Facebook and Google mine your web history to present you with targeted ads, and how Amazon and Netflix recommend your next favorite book or movie. But big data is about more than just marketing.

New technologies for monitoring brain activity are generating unprecedented quantities of information. That data may hold new insights into how the brain works – but only if researchers can interpret it. To help make sense of the data, neuroscientists can now harness the power of distributed computing with Thunder, a library of tools developed at the Howard Hughes Medical Institute’s Janelia Research Campus.

Thunder speeds the analysis of data sets that are so large and complex they would take days or weeks to analyze on a single workstation – if a single workstation could do it at all. Janelia group leaders Jeremy Freeman, Misha Ahrens, and other colleagues at Janelia and the University of California, Berkeley, report in the July 27, 2014, issue of the journal Nature Methods that they have used Thunder to quickly find patterns in high-resolution images collected from the brains of active zebrafish and mice with multiple imaging techniques.

Importantly, they have used Thunder to analyze imaging data from a new microscope that Ahrens and colleagues developed to monitor the activity of nearly every individual cell in the brain of a zebrafish as it behaves in response to visual stimuli. That technology is described in a companion paper published in the same issue of Nature Methods.

Thunder can run on a private cluster or on Amazon’s cloud computing services. Researchers can find everything they need to begin using the open source library of tools at http://freeman-lab.github.io/thunder

New microscopes are capturing images of the brain faster, with better spatial resolution, and across wider regions of the brain than ever before. Yet all that detail comes encoded in gigabytes or even terabytes of data. On a single workstation, simple calculations can take hours. “For a lot of these data sets, a single machine is just not going to cut it,” Freeman says.

It’s not just the sheer volume of data that exceeds the limits of a single computer, Freeman and Ahrens say, but also its complexity. “When you record information from the brain, you don’t know the best way to get the information that you need out of it. Every data set is different. You have ideas, but whether or not they generate insights is an open question until you actually apply them,” says Ahrens.

Neuroscientists rarely arrive at new insights about the brain the first time they consider their data, he explains. Instead, an initial analysis may hint at a more promising approach, and with a few adjustments and a new computational analysis, the data may begin to look more meaningful. “Being able to apply these analyses quickly — one after the other — is important. Speed gives a researcher more flexibility to explore and get new ideas.”

That’s why trying to analyze neuroscience data with slow computational tools can be so frustrating. “For some analyses, you can load the data, start it running, and then come back the next day,” Freeman says. “But if you need to tweak the analysis and run it again, then you have to wait another night.” For larger data sets, the lag time might be weeks or months.

Distributed computing was an obvious solution to accelerate analysis while exploring the full richness of a data set, but many alternatives are available. Freeman chose to build on a new platform called Spark. Developed at the University of California, Berkeley’s AMPLab, Spark is rapidly becoming a favored tool for large-scale computing across industry, Freeman says. Spark’s data-caching capability eliminates the bottleneck of loading a complete data set for all but the initial step, making it well-suited for interactive, exploratory analysis, and for complex algorithms requiring repeated operations on the same data. And Spark’s elegant and versatile application programming interfaces (APIs) help simplify development. Thunder uses the Python API, which Freeman hopes will make it particularly easy for others to adopt, given Python’s increasing use in neuroscience and data science.
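
To make the caching point concrete, here is a minimal sketch in plain PySpark (not Thunder’s own API, and with synthetic data): the expensive load happens once, and each revised analysis reuses the in-memory copy.

```python
# Caching pattern: the data set is materialized once, then every
# follow-up exploratory pass runs against the cached in-memory copy.
import numpy as np
from pyspark import SparkContext

sc = SparkContext("local[*]", "caching-demo")

# Stand-in for a large recording: one array of samples per neuron.
records = sc.parallelize(
    [(i, np.random.randn(1000)) for i in range(10000)]
).cache()  # loaded/derived once, kept in cluster memory

# First pass: average of per-neuron mean activity.
avg_activity = records.mapValues(np.mean).values().mean()

# Tweak the analysis and run again: no reload, the cached RDD is reused.
peak = records.mapValues(np.max).values().max()

print(avg_activity, peak)
sc.stop()
```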

To make Spark suitable for analyzing a broad range of neuroscience data – information about connectivity and activity collected from different organisms and with different techniques – Freeman first developed standardized representations of data that were amenable to distributed computing. He then worked to express typical neuroscience workflows in the computational language of Spark.
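
One plausible shape for such a representation (my assumption for illustration, not necessarily Thunder’s exact layout) is a collection of uniform key-value records, one per voxel:

```python
# Hypothetical key-value layout for distributed imaging data: each record
# pairs a voxel coordinate with that voxel's time series, so the whole
# recording becomes a flat collection a cluster can split across workers.
import numpy as np

n_t = 200                                # timepoints per voxel
dims = (4, 4, 2)                         # tiny toy volume

records = [
    ((x, y, z), np.random.randn(n_t))    # (key, value) = (coordinate, series)
    for x in range(dims[0])
    for y in range(dims[1])
    for z in range(dims[2])
]

# Per-voxel statistics then become simple parallel maps, e.g.
# sc.parallelize(records).mapValues(np.mean) in Spark.
means = [(key, series.mean()) for key, series in records]
print(len(records), means[0])
```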

From there, he says, the biological questions that he and his colleagues were curious about drove development. “We started with our questions about the biology, then came up with the analyses and developed the tools,” he says.

The result is a modular set of tools that will expand as the Janelia team — and the neuroscience community — add new components. “The analyses we developed are building blocks,” says Ahrens. “The development of new analyses for interpreting large-scale recording is an active field and goes hand-in-hand with the development of resources for large-scale computing and imaging. The algorithms in our paper are a starting point.”

Using Thunder, Freeman, Ahrens, and their colleagues analyzed images of the brain in minutes, interacting with and revising analyses without the lengthy delays associated with previous methods. In images taken of a mouse brain with a two-photon microscope, for example, the team found cells in the brain whose activity varied with running speed.
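
A minimal sketch of that kind of screen (synthetic data and plain NumPy, not the authors’ Thunder pipeline): correlate every cell’s activity with the running-speed trace and keep the strong responders.

```python
# Toy tuning screen: find cells whose activity tracks running speed.
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_t = 5000, 1000
speed = np.abs(rng.standard_normal(n_t)).cumsum() % 10  # fake speed trace

activity = rng.standard_normal((n_cells, n_t))
activity[:50] += 0.8 * (speed - speed.mean())           # plant 50 tuned cells

# Pearson correlation of every cell against the speed trace.
a = activity - activity.mean(axis=1, keepdims=True)
s = speed - speed.mean()
r = (a @ s) / (np.linalg.norm(a, axis=1) * np.linalg.norm(s))

print("speed-tuned cells found:", np.sum(np.abs(r) > 0.3))
```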

For analyzing much larger data sets, tools such as Thunder are not just helpful, they are essential, the scientists say. This is true for the information collected by the new microscope that Ahrens and colleagues developed for monitoring whole-brain activity in response to visual stimuli.

Last year, Ahrens and Janelia group leader Philipp Keller used high-speed light-sheet imaging to engineer a microscope that captures neuronal activity cell by cell across nearly the entire brain of a larval zebrafish. That microscope produced stunning images of neurons in the zebrafish brain firing while the fish was inactive. But Ahrens wanted to use the technology to study the brain’s activity in more complex situations. Now, the team has combined their original technology with a virtual-reality swim simulator that Ahrens previously developed to provide fish with visual feedback that simulates movement.

In a light-sheet microscope, a sheet of laser light scans across a sample, illuminating a thin section at a time. To enable a fish in the microscope to see and respond to its virtual-reality environment, Ahrens’ team needed to protect its eyes. So they programmed the laser to quickly shut off when its light sheet approaches the eye and restart once the area is cleared. Then they introduced a second laser that scans the sample from a different angle to ensure that the region of the brain behind the eyes is imaged. Together, the two lasers image the brain with nearly complete coverage without interfering with the animal’s vision.
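
A toy sketch of that blanking logic (all positions and thresholds invented for illustration; real instrument control is far more involved):

```python
# Toy eye-protection logic: blank the laser while the light sheet's scan
# position falls inside the eye region, re-enable once it is cleared.
# The threshold numbers below are invented for illustration only.
def laser_enabled(scan_pos_um: float,
                  eye_start_um: float = 300.0,
                  eye_end_um: float = 420.0) -> bool:
    """Return False while the sheet would sweep across the eye."""
    return not (eye_start_um <= scan_pos_um <= eye_end_um)

# A second beam at another angle (not modeled here) covers the brain
# region behind the eyes that this beam skips.
for pos in (100.0, 350.0, 500.0):
    print(f"{pos:6.1f} um -> laser {'on' if laser_enabled(pos) else 'off'}")
```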

Combining these two technologies lets Ahrens monitor activity throughout the brain as a fish adjusts its behavior based on the sensory information it receives. The technique generates about a terabyte of data in an hour – presenting a data analysis challenge that helped motivate the development of Thunder. When Freeman and Ahrens applied their new tools to the data, patterns quickly emerged. As examples, they identified cells whose activity was associated with movement in particular directions and cells that fired specifically when the fish was at rest, and were able to characterize the dynamics of those cells’ activities. Example analyses like these, and example data sets, are available at the website http://research.janelia.org/zebrafish/.

Ahrens now plans to explore more complex questions using the new technology, and both he and Freeman foresee expansion of Thunder. “At every level, this is really just the beginning,” Freeman says.

probablyasocialecologist:

How Japan Plans to Build an Orbital Solar Farm

Here Comes the Sun: Mirrors in orbit would reflect sunlight onto huge solar panels, and the resulting power would be beamed down to Earth. Image: John MacNeill

Imagine looking out over Tokyo Bay from high above and seeing a man-made island in the harbor, 3 kilometers long. A massive net is stretched over the island and studded with 5 billion tiny rectifying antennas, which convert microwave energy into DC electricity. Also on the island is a substation that sends that electricity coursing through a submarine cable to Tokyo, to help keep the factories of the Keihin industrial zone humming and the neon lights of Shibuya shining bright.

But you can’t even see the most interesting part. Several giant solar collectors in geosynchronous orbit are beaming microwaves down to the island from 36 000 km above Earth.

It’s been the subject of many previous studies and the stuff of sci-fi for decades, but space-based solar power could at last become a reality—and within 25 years, according to a proposal from researchers at the Japan Aerospace Exploration Agency (JAXA). The agency, which leads the world in research on space-based solar power systems, now has a technology road map that suggests a series of ground and orbital demonstrations leading to the development in the 2030s of a 1-gigawatt commercial system—about the same output as a typical nuclear power plant.
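
Some rough arithmetic on those figures (my back-of-envelope numbers, not JAXA’s; it assumes the full gigawatt arrives uniformly over a circular 3-km rectenna field and ignores beam shape and conversion losses):

```python
# Order-of-magnitude check on the figures quoted above.
import math

power_w = 1e9          # 1-GW commercial system
diameter_m = 3_000     # ~3-km island / rectenna net
n_rectennas = 5e9      # "5 billion tiny rectifying antennas"

area_m2 = math.pi * (diameter_m / 2) ** 2
print(f"average intensity: {power_w / area_m2:.0f} W/m^2")         # ~141 W/m^2
print(f"power per antenna: {power_w / n_rectennas * 1e3:.0f} mW")  # ~200 mW
# For comparison, peak sunlight at the ground is roughly 1,000 W/m^2,
# so the average beam intensity would be well below full sun.
```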

(via we-are-star-stuff)

secretlifeofateenblogger:

I keep forgetting what the differences are in the over-the-counter pain relievers, so I made a handy chart.

(via we-are-star-stuff)