1663
LOS ALAMOS SCIENCE AND TECHNOLOGY MAGAZINE
JULY 2013

Early Warning for Biothreats
Clean Fuel from Wind and Solar
Modeling the Living Ocean
Infrastructure Self-Check

About Our Name: During World War II, all that the outside world knew of Los Alamos and its top-secret laboratory was the mailing address: P.O. Box 1663, Santa Fe, New Mexico. That box number, still part of our address, symbolizes our historic role in the nation's service.

About the Logo: Laboratory Directed Research and Development (LDRD) is a competitive, internal program by which Los Alamos National Laboratory is authorized by Congress to invest in research and development that is both highly innovative and vital to our national interests. Whenever 1663 reports on research that received support from LDRD, this logo appears at the end of the article.

About the Cover: This year marks the 70th anniversary of Los Alamos National Laboratory. Originally tasked to develop the atomic bomb, the Laboratory has since broadened its science and engineering mission to include all manner of challenges to our national security. While the cover artwork on this issue highlights some of the faces and achievements of the last 70 years and today, the feature articles focus on the future and some of the Los Alamos research projected to have a major impact in the coming decades.

Los Alamos Firsts


Little Green Men
As Herbert York recalled,* he and fellow physicists Edward Teller, Emil Konopinski, and Enrico Fermi were having lunch at Fuller Lodge in Los Alamos on a late summer day in 1950 when Fermi said, virtually apropos of nothing, "Don't you ever wonder where everybody is?" Konopinski remembered that Fermi surprised everyone with the question, "But where is everybody?" Teller, for his part, remembered that the four men had discussed aliens and faster-than-light space travel as they walked to lunch, so when Fermi later asked, clear out of the blue, "Where is everybody?" there was general laughter because everyone seemed to understand at once that he was talking about extraterrestrial life. According to York, Fermi concluded that we ought to have been visited long ago and many times over.

His question, "Where is everybody?" thus unmasks a contradiction, often referred to as Fermi's paradox: space aliens should be everywhere but don't seem to be anywhere. The contradiction becomes clearer when one considers that the Sun formed less than five billion years ago, and the life forms of one of its planets have already sent probes into space. The oldest stars in the galactic disk are at least eight billion years old, so if alienkind evolves at roughly the same pace as humankind, then the earliest civilizations could have begun exploring the galaxy several billion years before Fermi popped his question. Fermi himself was quite capable of proving that it would take at most a few hundred million years to visit, colonize, or send probes to every star and habitable rock in the Milky Way, regardless of the starting point and assuming slower-than-light travel. In other words, every alien civilization that became space-worthy within the past few billion years could have visited our little planet and populated the rest of the galaxy with themselves and their technology many times over.

But there's no conclusive evidence of any of this: no remnants of alien cities or advanced materials on Earth, and no detections of interstellar communications or other electromagnetic broadcasts in space. There are many possible ways to explain this absence of evidence. Maybe aliens are content to probe other star systems from afar. Or maybe their ships, space stations, and colonies blanket the galaxy, but we lack the technology to detect them or intercept their communications, conceivably because the aliens have gone to great lengths to be inconspicuous. And maybe the timing was just off; the last alien cruise ship sailed by before the Earth was open for business. People have even postulated that all advanced societies self-destruct before they have the opportunity to spread among the stars, killed by war, disease, or other pitfalls of their own technology. Or it may simply be that the evidence for alien visitations has yet to be discovered because it's covered with jungle, frozen in ice, or hidden in plain sight, unrecognized for what it is.

While there are many reasons to explain why we wouldn't see space aliens, there's only one to explain why we would: namely, that whatever transpired to bring about intelligent life on Earth also transpired elsewhere in the galaxy, perhaps millions of times among its several hundred billion stars. By tasking us to ponder the fate of those alien races, Fermi's question compels us to ponder our own. If intelligent species abound but leave no trace, will humanity too remain in obscurity? Or if a vast, ancient, alien community is finally revealed, will humanity find a place within it?
And if there are no aliens at all, then what will we do with a galaxy all to ourselves?

Jay Schecker
Theoretical and experimental physicist Enrico Fermi
*Eric M. Jones, "Where Is Everybody?" An Account of Fermi's Question, Los Alamos report LA-10311-MS.

IN THIS ISSUE
FEATURES

Biosurveillance
PREPARING FOR INFECTIOUS DISEASES, WHETHER NATURAL OR MANMADE

Energy for a Rainy Day


ENERGY STORAGE TO ENABLE NATIONAL-SCALE WIND AND SOLAR POWER

10

The Green in the Blue


GLOBAL SIMULATION OF THE OCEAN'S FOOD SOURCE

16

Sensors, Sensors Everywhere


NEXT-GENERATION INFRASTRUCTURE THAT WILL KEEP TRACK OF ITS OWN NEEDS

18

SPOTLIGHT
INTERPLANETARY MISSION FISSION POWERED BY PLASTIC GRAPHIC MATH

24

CREDIT: HOLST CENTRE


THIS SPRING, A NEW STRAIN OF FLU EMERGED in China, infecting poultry and causing serious disease in more than a hundred people. In the Middle East, a new coronavirus called MERS emerged last summer and by this June had killed 40 people in eight countries. The World Health Organization has warned that both viruses are particularly lethal and have the potential to cause pandemics. The world is on alert. The good news is that flu is a well-studied enemy, and thus far, MERS is only spreading through very close contact. Scientists are expeditiously studying both pathogens, and if one does evolve to sustained human transmission, global populations will at least have a head start. However, when questioned about preparedness for new emerging diseases or bioterror attacks in general, many scientists believe that, so far, we've just been lucky. "If SARS in 2003 had a one-to-four-day incubation period like the flu, instead of two to seven days, it would have been a completely different story," says Los Alamos biophysicist Paul Fenimore. The incubation period is the time it takes to get sick after exposure to the disease. SARS was caused by a new virus from a family that was thought to be minimally pathogenic. Identification of the virus took weeks, and the outbreak was noticed because of the pathogen's high virulence and transmissibility. If SARS had been able to spread faster than it did, many, many more lives could have been lost before scientists even knew the cause.

Humans have always battled contagion, but in the late 1960s, U.S. Surgeon General William H. Stewart announced it was time to close the book on infectious disease. Sanitation, antibiotics, and vaccines had revolutionized medical care in Western society, and public health efforts began to concentrate on chronic ailments such as heart disease, cancer, and diabetes. Most remaining fear of infectious disease focused on the threat of biowarfare, prompting the ratification of the Biological Weapons Convention in 1975, which prohibits governments from acquiring, retaining, or using biological weapons. Yet today, infectious disease is still a major cause of death worldwide, and the rise of antibiotic resistance is bringing the threat right back to America's doorstep. Crowded cities, public transportation, and international travel and commerce make it easier than ever before for disease to spread. Furthermore, factors such as climate change and human proximity to industrial agriculture have set the stage for the emergence of new diseases (SARS), known diseases in new environments (West Nile virus), and new versions of old diseases (drug-resistant tuberculosis). And although many governments may have agreed not to use biological weapons, individuals can still use pathogens to cause widespread fear and even death. The 2001 Amerithrax incident, in which anthrax-containing letters were mailed to members of Congress and the media, killing five people, is a sobering example. Interestingly, the way officials respond to an outbreak makes little distinction between anthropogenic (manmade) and naturally occurring diseases, so preparing for natural outbreaks also prepares society for bioterrorism. Part of this preparation involves biosurveillance: the process of gathering and interpreting information about disease incidence to enable a targeted response that might slow or stop its spread. Ultimately, biosurveillance attempts to predict and prevent epidemics before they start. Los Alamos is home to many biosurveillance innovations, created by a network of more than 30 experimentalists, theorists, modelers, and engineers. Their biosurveillance toolkit includes disease detection, vaccine and antibiotic development, disease forecasting, response analysis, biothreat non-proliferation, and analyses of the relationships between organisms and their environments.
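The role the incubation period plays in that SARS counterfactual can be seen in even the simplest textbook outbreak calculation. The sketch below is a minimal SEIR (susceptible-exposed-infectious-recovered) model, written for illustration only; it is not one of the Los Alamos models described later in this article, and every parameter value is hypothetical.

# A minimal SEIR (susceptible-exposed-infectious-recovered) outbreak sketch.
# Illustrative only: not a Los Alamos model; all parameter values are hypothetical.

def seir(beta, incubation_days, infectious_days, days=60, population=1_000_000, seed=10):
    s, e, i, r = population - seed, 0.0, float(seed), 0.0
    sigma, gamma = 1.0 / incubation_days, 1.0 / infectious_days
    infectious_by_day = []
    for _ in range(days):
        new_exposed = beta * s * i / population      # new infections today
        new_infectious = sigma * e                   # exposed people becoming infectious
        new_recovered = gamma * i                    # infectious people recovering
        s -= new_exposed
        e += new_exposed - new_infectious
        i += new_infectious - new_recovered
        r += new_recovered
        infectious_by_day.append(i)
    return infectious_by_day

# Same transmission and recovery rates; only the incubation period differs.
short_incubation = seir(beta=0.6, incubation_days=2, infectious_days=5)
long_incubation = seir(beta=0.6, incubation_days=7, infectious_days=5)
print(f"infectious on day 30: {short_incubation[29]:,.0f} (2-day incubation) "
      f"vs {long_incubation[29]:,.0f} (7-day incubation)")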

(Chart: likelihood versus impact for chronic disease, infectious disease, bioterrorism, and biowarfare, with the disease categories annotated at roughly 32 million and 16 million deaths annually.) Preparing for natural outbreaks also prepares society for bioterrorism. Although the likelihood of bioterrorism and biowarfare may be low, the impact of such events is unknown and potentially great when considering the social, political, and economic effects that would inevitably follow. In fact, there are few historical examples of bioterrorism or biowarfare, such as the U.S. Amerithrax incident in 2001 that killed five people, while annual deaths from chronic and infectious disease are comparatively predictable in both impact and likelihood. Fortunately, biological approaches to detecting pathogens, treating disease, and predicting spread are useful for all types of biothreats.

Small villages in Africa, such as this one in Uganda, are particularly hard-hit by the rise in both HIV and tuberculosis. Access to medical care is limited, and rural clinics have few resources. Reliable, portable diagnostic tests that do not require sending samples to laboratories (which are only found in larger cities) would make a substantial difference in identifying diseases quickly to reduce their spread.

Taking it all in

Pathogens are everywhere: in plants, animals, and humans. The more that is known about what pathogens are circulating, how they are transmitted, and how they become virulent and drug-resistant, the easier it is to recognize when something goes wrong. Characterizing the circulation of pathogens worldwide means understanding human interaction with animals and the environment. To complicate this, some diseases can jump between species, some animals migrate, some diseases are vector-borne (like malaria, carried by mosquitoes), and climate and weather patterns change rapidly.

"The more we know about the constant background levels of disease worldwide, the more we can understand and predict when one is going to become an epidemic," says Jeanne Fair, an infectious disease biologist at Los Alamos. And it's important to learn about all disease, even when it doesn't look like an epidemic. For example, she notes that by understanding the percentages of viral versus bacterial respiratory illnesses in a particular area through rapid and definitive tests, local doctors could avoid unnecessary antibiotic use. Unnecessary antibiotic use leads to increased prevalence of antibiotic-resistant strains of pathogens, complicating the treatment of infected patients. Many agencies monitor disease, including public health, defense, agricultural, and wildlife organizations; however, they do not regularly work together. In 2011, Los Alamos cohosted a conference called Global Biosurveillance: Enabling Science and Technology, one of many international forums for sharing ideas on how to improve biosurveillance. Among the outcomes was a desire to integrate national and international agencies into a cohesive network to establish baselines for ecosystem risks and threats while enabling data sharing for improved surveillance and response, much in the same way that, after 9/11, authorities recognized that it would be beneficial to share information among the CIA, FBI, and local law enforcement. A further observation from the conference was that improvements are needed to help identify disease more quickly and accurately, and without the need for extensive lab equipment, so that it can be done in all parts of the world. In 2012, President Obama issued the National Strategy for Biosurveillance, which echoed many of these points, calling for integration, partnership, and innovation.

(Timeline graphic, plotting the number of new cases against days: before the first case, an ecosystem baseline, climatology analysis, vector ecology, and animal disease data support risk prediction and a risk alert; after the threat alert, detection, diagnostics, disease models, and medical countermeasures support early reporting and rapid response, with potential cases and international spread prevented.) This timeline of events in a hypothetical disease outbreak shows that detection and diagnosis in the first week, considered early reporting, can enable a rapid response that would significantly reduce the overall spread of the disease. Looking backwards, however, there are opportunities to predict coming diseases prior to the first reported case, based on broad categories of information such as ecosystem and climate data, vector ecology data (e.g., mosquito populations), and diseases in animal populations.
CREDIT: ADAPTED FROM THE WORLD HEALTH ORGANIZATION'S 2007 WORLD HEALTH REPORT AND A GRAPHIC BY BILL HUFF/DOD

Harshini Mukundan (left) watches as fellow Los Alamos bioscientist Elizabeth Hong-Geller loads a vertical electrophoresis gel, used to separate proteins and small molecules. This technique is useful for developing improved methods to detect pathogens.

Invisible enemy

The first step of biosurveillance is detection: where are people sick, and what diseases do they have? Diagnostic tests used by doctors at medical clinics and hospital emergency rooms are often the best source of this information. When a person is ill, the pathogen is busy replicating itself while the body's immune system is likely launching a counterattack. Most laboratory tests target these two events, and scientists at Los Alamos are working to improve both types of tests: those that target the pathogen itself in samples of blood, urine, or sputum (mucus) as well as those that recognize the body's immune response to current and previous infections by detecting antibodies (molecules that bind to invaders).

One Los Alamos team, including biochemists Basil Swanson and Harshini Mukundan, has been specializing in the diagnosis of tuberculosis (TB) for over a decade. TB is a leading cause of death in individuals with HIV/AIDS, and parts of Africa are now struggling with epidemics of both diseases. "One of the biggest problems is that many of the current TB tests are likely to fail if the person is also infected with HIV," says Mukundan. "The tests come back negative, so people who are co-infected are being sent home without anyone realizing they have TB and without the proper care for the disease. This is partly why it continues to spread." Commonly used methods to detect TB include a skin test (which can produce a false positive in patients who already have antibodies from a previous infection or from the TB vaccine), a sputum test (which is laboratory intensive, requires a highly contagious sample, and does not work for all types of TB), and a blood test (which requires costly laboratory equipment not available in all countries). Unfortunately, it is not entirely understood why some of these tests fail in HIV patients.

Making use of a recent Los Alamos invention, an optical biosensor that can detect multiple kinds of pathogens, the TB team recently developed novel strategies to detect very small concentrations of a tiny sugar called lipoarabinomannan (LAM) that comes from the membranes of TB-causing bacteria. LAM is a virulence factor (a molecule that reveals the ability to cause disease) secreted by the bacteria, making it a useful biomarker for indicating the presence of the TB pathogen. The team has also developed assays for two other biomarkers that together allow for a reliable diagnosis of active TB infection within minutes. Their ultimate goal is to create simple, reliable methods of detecting HIV, active TB, and other diseases in rural settings worldwide, in humans and in any animal populations of interest. Another TB project at Los Alamos focuses on the effectiveness of antibiotics. With the rising prevalence of drug-resistant TB, many in the field have suggested that perhaps the mutations that make the bacteria resistant also reduce their ability to spread. But recent work by Los Alamos biologists Bette Korber, Karina Yusim, and Shihai Feng, in collaboration with the National Institutes of Health, indicates otherwise. Their work shows that compensatory mutations can restore the fitness of the drug-resistant bacteria that cause TB and has confirmed the persistence and spread of drug-resistant organisms in the population.
Clues from the blueprint

A number of studies at Los Alamos are examining the complex relationship between the host organism and the pathogen, as well as the molecular blueprints (DNA and RNA) of pathogens, in order to create detection strategies. Biologist Elizabeth Hong-Geller has been examining small RNA (sRNA) molecules produced by bacteria that are involved in gene regulation during infection. Her work has focused on the bacterium Yersinia pestis, which causes plague. In collaboration with Lab colleagues who determine 3D biomolecular structures and create molecular models, she is trying to identify small molecules that can bind to, and potentially inhibit, key sRNAs, with an eye toward antibiotic and drug design. "If a small RNA is involved in virulence and we can block its function, it would be a breakthrough for designing countermeasures against infection," says Hong-Geller.

1663 los alamos science and technology magazine JULY 2013

Also in the genetics arena at Los Alamos, computational biologists Murray Wolinsky, Jason Gans, and Jian Song are experts at developing algorithms to find genetic signatures: unique sections of DNA or RNA that can be used to distinguish quickly between pathogens, especially closely related ones. Once a signature is identified, primers made of short sections of DNA are developed using a complementary sequence, so that the identified regions of the pathogen's DNA will specifically bind to the primers. Biologist Norman Doggett has helped develop rapid tests called assays that screen for many types of pathogens at once by introducing sample material (serum or urine that might contain DNA or RNA from a pathogen) to multiple primers and amplifying, or copying, the ones that find a match. These types of assays are also well suited to evaluating environmental samples, such as soil or air. After the Amerithrax incident, the U.S. government began routinely monitoring the air for dangerous pathogens in major cities through a program called BioWatch. Los Alamos, which had already been involved in the analysis of the anthrax used in the letters, stepped in with expertise in analyzing BioWatch samples and optimizing the placement of detectors. There has been a lot of public scrutiny of the BioWatch program, mostly about the possibility of false positives, prompting rigorous assay validation in which Los Alamos also played a key role. The overarching problem remains: it all comes back to the sensitivity and specificity of detection methods. For example, a detector could test positive for Bacillus anthracis, the bacterium that causes anthrax, when really the sample contains Bacillus thuringiensis, a non-deadly close relative. Both bacteria live naturally in the soil and are genetically similar, but only one is a major threat to humans. Scientists have extensively studied the differences between anthrax near-neighbors and have developed discerning tests using signatures that target only the small differences in their genetic codes that account for their pathogenicity, or ability to cause disease.
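As a toy illustration of the signature idea, the sketch below hunts for short subsequences (k-mers) that appear in a target genome but not in a near-neighbor, then prints the complementary sequence a primer would need to bind the opposite strand. The sequences are invented, and real signature pipelines, including the Los Alamos algorithms, add constraints (primer length, melting temperature, whole-genome indexing) that are omitted here.

# Toy illustration of the genetic-signature idea: find k-mers present in a
# target genome but absent from a near-neighbor, then print a complementary
# primer sequence. Sequences are invented; real pipelines are far richer.

def kmers(sequence, k):
    return {sequence[i:i + k] for i in range(len(sequence) - k + 1)}

def signatures(target, near_neighbors, k=8):
    unique = kmers(target, k)
    for neighbor in near_neighbors:
        unique -= kmers(neighbor, k)          # drop anything the neighbor shares
    return unique

def reverse_complement(sequence):
    return sequence.translate(str.maketrans("ACGT", "TGCA"))[::-1]

target   = "ATGCGTACGTTAGCCGATCGTACGGATCCTTAGC"   # hypothetical pathogen region
neighbor = "ATGCGTACGTTAGCCGATCGAACGGATGCTTAGC"   # hypothetical near-neighbor

for sig in sorted(signatures(target, [neighbor])):
    print(sig, "-> binds primer", reverse_complement(sig))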

However, a less well-studied organism may have unknown near-neighbors from which it would be difficult to distinguish it. The gold standard for comparing various pathogen strains is to sequence the entire genome. Over the years, genomic sequence data has been amassed in pathogen databases at Los Alamos to aid in the comparative analysis of many viruses, including influenza, HIV, and hepatitis C virus (HCV). During the 2009 swine flu episode, Los Alamos scientists were able to use the influenza database to quickly determine that the culprit was indeed a new strain. Fortunately, the diminishing cost of sequencing is enabling more organisms to be sequenced, generating enough information for comprehensive comparative analysis. New techniques that enable sequencing entire communities of organisms at once (metagenomics) or sequencing only the genes that are being expressed (transcriptomics), which can change with environmental conditions, are also giving scientists much more information about pathogens. So much data, however, can sometimes be a problem. Numerous redundancies in closely related strains of organisms make comparisons difficult. To confront this issue, bioinformaticist Patrick Chain and his team at Los Alamos have been developing a database containing only the unique sections of each organism's genetic code. "We have been developing methods to essentially screen all known genomes for any identical sequences, track where they are in each genome, and remove them such that they will no longer confound searches for similarities between sequences," says Chain. Overall, no matter what the approach on a molecular level, detection strategies for biosurveillance have the same goal in mind: simple, rapid, fieldable methods. Many research projects at Los Alamos have taken on this challenge over the years. In fact, spin-off companies were created around some of these technologies, such as a dipstick test (much like a pregnancy test) for the flu and a small, portable flow cytometer that uses sound waves for cell sorting.


#GotFlu

Whether or not patients receive a definitive diagnostic test at their doctor visit, notes about their symptoms are always recorded. Syndromic surveillance describes the idea of screening hospital and clinic records in search of trends or anomalies in patient complaints, prior to diagnostic tests, that might foretell an epidemic or biothreat event. For instance, multiple patients in the month of October complaining of upper respiratory disease with a cough, high fever, and muscle aches may suggest to a doctor that it is the beginning of flu season, even though the doctor may not perform a definitive test on each patient. This approach, however, still requires someone to be sick enough to go to a doctor. Is it possible to detect disease prior to this point? What else do people do when they are feeling under the weather? Purchase over-the-counter drugs, Google their symptoms, or complain to their friends on Twitter? All of these actions produce potentially useful biosurveillance data. Los Alamos biomedical scientist Alina Deshpande leads a research project to analyze all the possible data streams that could be useful for biosurveillance. Her team is studying the relevance of various data streams and developing a systematic approach to determine which data types are useful for which purposes. To achieve this, the team evaluated many currently available sources of data (emergency room and other clinic records, social media, Internet search queries, laboratory records, etc.) for their utility, using criteria such as timeliness, granularity, and credibility. This was done using a commercially available Multi-Criteria Decision Analysis (MCDA) software tool that scores data streams based on weighted metrics and assigned values specific to data stream categories, such as early detection or consequence management. Deshpande's team also evaluated historical outbreaks, such as the 2009 swine flu pandemic and the 2010 cholera outbreak in Haiti, to find surveillance windows, or points in time at which early detection or early warning could have made a difference. The team then researched which data streams were available at that time to determine which ones would have been useful. A cross-method analysis between the surveillance-window evaluation and the MCDA evaluation identified data stream categories that showed high utility for both methods.

Jason Gans, Murray Wolinsky, Norman Doggett, and Jian Song engage in conversation at their cluster, a collection of computer servers that work together. The team uses this cluster, along with their specialized algorithms, to identify unique signature sections of DNA or RNA that can be used to detect pathogens.

In some cases, they found that a data stream might be optimal only for a particular disease in a particular country. "With our massive data streams, we have found that diversity is key," says Deshpande. "One perfect data stream does not exist." This effort laid the foundation for a collaboration straddling military and civilian health surveillance, and the Los Alamos team's evaluation framework is being considered for disease surveillance as well as other public health initiatives. In addition, the Los Alamos team developed the Biosurveillance Resource Directory (BRD), a relational database that underwent pilot testing by members of the human, plant, and animal disease-surveillance community. The BRD is intended to be a global resource to facilitate rapid information access.
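The scoring step can be pictured as a simple weighted sum, as in the sketch below. The criteria, weights, and scores are hypothetical stand-ins, not the team's actual MCDA model or the commercial tool it used.

# Weighted-sum scoring of candidate data streams, in the spirit of the
# multi-criteria evaluation described above. All values are hypothetical.

weights = {"timeliness": 0.4, "granularity": 0.3, "credibility": 0.3}

data_streams = {
    "emergency-room records": {"timeliness": 6, "granularity": 9, "credibility": 9},
    "social media posts":     {"timeliness": 9, "granularity": 4, "credibility": 3},
    "search-engine queries":  {"timeliness": 8, "granularity": 5, "credibility": 4},
}

def score(stream):
    return sum(weights[criterion] * stream[criterion] for criterion in weights)

for name, stream in sorted(data_streams.items(), key=lambda item: -score(item[1])):
    print(f"{name:24s} weighted score = {score(stream):.1f}")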

One of the biggest challenges in biosurveillance is determining how to capture and integrate all the useful data streams into actionable information.

(Graphic labels: forecasting spread, social distancing, drugs and vaccines, resource distribution.)


Sara Del Valle stands in front of the Los Alamos powerwall, where multiple computer simulations display the impact of different diseases and disasters across multiple regions.

Path of the storm

High-performance computing at Los Alamos has made possible the development of predictive models to help inform decision makers. Once critical details about a disease outbreak are known, a model can be used to forecast how the epidemic may progress and to analyze the effectiveness of proposed countermeasures. The Epidemic Simulation System (EpiSimS) is one of the tools developed at Los Alamos to model epidemics. This model uses several data sources, including U.S. Census data, to create a detailed virtual world in which synthetic people interact and spread disease in a realistic fashion. They go to school, work, and perhaps the grocery store, and they might ride trains or buses at some point during the day before returning home to their families. When an infected individual is introduced, the model can simulate how fast the disease will spread based on the interactions each person has: person A goes to work and visits person B, then person C, then goes to a store and interacts with person D, and so on. By incorporating detailed mixing and activity patterns, EpiSimS can estimate which groups of people will be affected and where. This information helps scientists develop targeted mitigation strategies.

A similar system called EpiCast, an epidemiological forecast, was also developed to model epidemics, only faster and with less detail than EpiSimS. For instance, instead of simulating each person's daily interactions, EpiCast uses averages based on empirical surveys and previous models: there are X individuals on a given day at home, Y in the workplace, Z out shopping, and so on. Data describing how many people commute from one census tract (a roughly 5000-person subdivision of a county) to another captures detailed workflow patterns, and long-distance travel data is used to model less regular mobility. "This allowed scientists at Los Alamos to do a national simulation of flu season in a few hours, whereas it might take EpiSimS a few hours to do a more detailed simulation of just California," says computational scientist Tim Germann. Both simulations are fairly accurate; they have been validated against historical outbreaks as well as actual recent outbreaks that Los Alamos has been called upon to examine. In 2006, for example, EpiSimS was used to inform the Department of Homeland Security about preparedness for a potential avian flu outbreak. And in 2009, both EpiSimS and EpiCast were used to forecast the spread of swine flu. In both studies, Los Alamos teams investigated how quickly the disease might propagate, as well as how effective various countermeasures would be.

Los Alamos's logical next step, a multi-scale epidemiology model (MuSE) that incorporates multiple host organisms, couples larger-scale interactions (counties instead of subdivisions) with the small-scale dynamics of disease spread. MuSE was designed specifically for biosurveillance and has been used in recent years to study rinderpest and foot-and-mouth disease in livestock in the United States, avian flu in Nigeria, and Rift Valley fever in East Africa. Lab scientists are also making use of data from social media, such as Twitter, to inform their models about how people's behavior might foretell the spread of disease. They discovered that people tend to tweet all sorts of details about their lives, including when they wash their hands and whether or not they have been wearing a facemask. This could be a valuable way to track how the public responds to health warnings or recommendations.
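The flavor of the synthetic-population models described above can be conveyed by a toy sketch like the one below, in which a few thousand synthetic people share homes and workplaces and infection passes between venue-mates. It is illustrative only; it is not EpiSimS or EpiCast, and every number in it is made up.

# Toy synthetic-population sketch (not EpiSimS or EpiCast; all numbers are
# made up). Each synthetic person is assigned a home and a workplace, and
# each day every venue with at least one infected occupant gives each
# susceptible occupant one chance of infection.
import random

random.seed(1)
PEOPLE, HOMES, WORKPLACES = 2000, 500, 100
P_TRANSMIT, DAYS = 0.03, 30

home = [random.randrange(HOMES) for _ in range(PEOPLE)]
work = [random.randrange(WORKPLACES) for _ in range(PEOPLE)]
infected = [False] * PEOPLE
for index in random.sample(range(PEOPLE), 5):     # seed a handful of cases
    infected[index] = True

def mix(venue_of):
    occupants = {}
    for person, venue in enumerate(venue_of):
        occupants.setdefault(venue, []).append(person)
    for members in occupants.values():
        if any(infected[p] for p in members):
            for p in members:
                if not infected[p] and random.random() < P_TRANSMIT:
                    infected[p] = True

for day in range(DAYS):
    mix(work)   # daytime contacts
    mix(home)   # evening contacts

print("infected after", DAYS, "days:", sum(infected))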


"We tried to find if facemask usage correlated with disease spread," says Sara Del Valle, a computational epidemiologist at Los Alamos. "As the incidence of the disease and the public perception of its incidence go up, the number of people wearing and talking about facemasks goes up, and as the incidence declines, so do the usage and mentions." This is crucial for understanding and modeling infectious diseases because changes in people's behavior, by reducing their risk of infection, can affect the spread of an epidemic.

Global Genomics
At the end of the Cold War, the U.S. government worried that nuclear material or biological agents in the former Soviet Union could fall into the wrong hands. To reduce the risk of theft, the United States implemented a number of strategies to help lock up old facilities and secure their contents. The program was called Cooperative Threat Reduction (CTR) and was part of the Nunn-Lugar Act of 1991. In 2009, CTR expanded to enhance global capabilities for detection and diagnostics and to create a cooperative network that includes a wide range of countries, international organizations, and non-government partners. The purpose was to prevent, reduce, mitigate, and eliminate common threats to national security and global stability. Los Alamos scientists, drawing upon their success with the Human Genome Project, began multiple efforts in 2012 to share their extensive genomics expertise as part of the CTR effort. One such initiative involves a collaboration between Los Alamos and the Johns Hopkins University Applied Physics Laboratory to create a library of diagnostic assays for especially dangerous pathogens to aid in rapid detection of disease. In another CTR initiative, scientists from Los Alamos have been instrumental in helping establish sustainable genome science programs at the newly formed Center for Public Health Research (CPHR) in Tbilisi, in the Republic of Georgia, and, in collaboration with Sandia National Laboratories, the Center for Genomics Science at the Jordan University of Science and Technology (JUST) in Irbid, Jordan. Los Alamos has already delivered sequencing equipment and computer systems and conducted in-house training in Tbilisi, as well as helped with the plans for a major renovation in Irbid to house the JUST Center. Scientists from both facilities have also traveled to Los Alamos to receive further informatics training, including a recent workshop in June 2013 that included CTR partners from South Africa as well. "The Los Alamos CTR program is continuing to expand," says Chris Detter of the Los Alamos Emerging Threats Program Office. "We will continue to support these international partners by providing specific protocols for sequencing, assembling, and analyzing genetic data. We are also building capabilities in these and other countries as a foundation for a wide variety of research, including pathogen identification and the development of therapeutics and vaccines."
This visualization of sequence analysis and annotation helps when comparing organisms' genetic blueprints.

Integrated response
Looking to the future, scientists have been considering how to further expedite disease response by integrating existing databases, analytics platforms, and modeling programs to rapidly evaluate a situation and recommend countermeasures. One example of this kind of integration is a Los Alamos-led pilot project called BioPASS (pathogen analysis supporting system) that demonstrates how existing biosurveillance systems could be accessed and integrated through a user-friendly Web interface. Upon receipt of information from a rapid diagnostic test, BioPASS can access existing genomic databases and analytics to help identify the pathogen in question and create a simple model showing both how the disease could progress and how the impact could be reduced by certain countermeasures. For instance, it could display a graphical comparison of the effect of administering antibiotics to the patient beginning on day two versus day six. "The idea is to enable analysis and collaboration using many existing platforms and a variety of data sources," says Los Alamos biologist Helen Cui. "This will help inform decisions that must be made quickly." Furthermore, the hope is to take this analysis one step farther: perhaps the incident location could be cross-referenced with Twitter data to identify whether there are outbreaks in nearby geographic locations. The BioPASS pilot was very successful, and the team is now proposing to broaden its scope to include more data streams and more variables. Integrating the components of biosurveillance is a major endeavor, but the scientific community at Los Alamos has been working toward this goal for some time. After the release of the National Strategy for Biosurveillance, Basil Swanson served on the review team for the Strategy's implementation plan. Helen Cui was also a participant in the Biosurveillance Science and Technology Roadmap for the Strategy. Through this participation, Swanson and Cui were able to bring a perspective of technological advances to the national biosurveillance picture. "In general, some of the biggest gaps in biosurveillance are in diagnostics, big data analysis, and modeling. These are all things Los Alamos does well," says Swanson.

Rebecca E. McDonald

Los Alamos materials scientist Fernando Garzon has a plan for national-scale renewable-energy storage


THE ENERGY AVAILABLE from solar and wind resources in the United States vastly exceeds current needs. Yet both sources suffer from the same deficiency, hindering their development for large-scale use: they are intermittent. They generate power only when the Sun is shining or the wind is blowing. That means extra energy, exceeding the current demand, must be collected whenever the Sun and wind are available, and the excess must be stored in a manner that's easy to access when they're not. However, while available solar and wind resources dwarf the nation's energy demand, U.S. energy storage capacity doesn't come close to meeting it.

The current electrical grid doesn't rely much on energy storage. Rather, when demand increases or decreases in one part of the grid, regional fossil-fuel-based plants either adjust their fuel burn accordingly or exchange power with other parts of the grid. However, because utilities can't turn on additional power generation units as quickly as demand can increase, readily available additional power comes with a cost: either regular gas turbine-generator systems must run all the time at low power, essentially idling until they need to be cranked up, or special fast-startup systems must be brought online. Both options are inefficient, consuming more fuel per kilowatt of electricity than normal operations. This inefficiency could be mitigated if adequate energy storage were available. Additional turbines could run at optimal efficiency levels, producing more energy than needed and storing the excess for later. But current energy storage technologies fall short. Batteries, for example, have far too little energy density to meet power-plant-scale needs, much less grid-scale needs. They are also expensive, slow to charge and discharge, and too limited in operating life to meet the storage requirements for everyday demand-matching activity. And significantly increasing the nation's wind and solar capacity would push storage needs far beyond even those requirements.
Fuel from air

Making the power supply reliable, despite unpredictable demand, is known as firming. Firming fossil-fuel-based power is relatively straightforward (although inefficient), but firming wind and solar is much more difficult. The amount of power extracted by a wind turbine, for example, is proportional to the cube of the wind speed, so if the wind speed drops in half, the power produced drops to one-eighth of its previous level. Only extremely high-capacity and rapidly accessible energy stores can accommodate such large and sudden decreases in power production. And only chemical bonds, which can store energy at a density about 100 times greater than that of batteries, have the potential to meet those needs: kicking in enough power on a moment's notice to replace the seven-eighths that was just lost. Fernando Garzon, a materials scientist at Los Alamos, has a plan to store energy on a national-use scale in the chemical bonds of an unlikely molecule: ammonia. While chemical energy today is largely synonymous with carbon-based fuels, ammonia (NH3) is nitrogen-based. Garzon's plan is to rapidly convert large quantities of electrical energy into this carbon-free fuel supply and then convert it back to electricity as needed.
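The cube law quoted above follows from the standard wind-power relation (a textbook formula, not stated explicitly in the article), where ρ is the air density, A the rotor's swept area, C_p the turbine's power coefficient, and v the wind speed:

\[
P \;=\; \tfrac{1}{2}\,\rho\,A\,C_p\,v^{3}
\qquad\Longrightarrow\qquad
\frac{P(v/2)}{P(v)} \;=\; \left(\tfrac{1}{2}\right)^{3} \;=\; \tfrac{1}{8}
\]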
According to the U.S. Energy Information Administration, the total American energy consumption, in all forms, is the equivalent of approximately 10.5 kilowatts of continuous electricity use per person. That's more than four times the global average and enough to power 175 incandescent, 60-watt light bulbs on an ongoing basis. About 4.9 percent of that energy derives from noncombustible, renewable sources, and, of those, hydroelectric and geothermal energy contribute more than two-thirds. The most intermittent sources, wind and solar, amount to only 1.2 and 0.2 percent of U.S. consumption, respectively. It will take a new approach to energy storage to substantially increase the contribution from wind and solar.
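The light-bulb comparison is simply the per-person figure divided by the wattage of one bulb:

\[
\frac{10.5\ \text{kW per person}}{60\ \text{W per bulb}} \;=\; \frac{10{,}500\ \text{W}}{60\ \text{W}} \;=\; 175\ \text{bulbs}
\]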

(Charts accompanying the sidebar break down U.S. energy supply by source: petroleum 36%, natural gas 26%, coal 20%, nuclear electric power 8%, and renewable energy 9% (biomass 4.4%, hydroelectric 3.2%, wind 1.2%, geothermal 0.2%, solar 0.2%); and energy use by sector: industrial 31.4%, transportation 27.8%, residential 22.2%, commercial 18.5%, and exports 10.6%.)

Ammonia storage tanks will need to crop up all over the United States if ammonia-based energy storage is to enable large-scale domestic solar and wind power. Onsite conversion facilities will use the incoming renewable-sourced electricity to create ammonia and then re-convert the ammonia into electricity as dictated by the energy demand.

"Nitrogen is abundant and it's free," Garzon says. "It makes up 80 percent of the air we breathe and, as an energy storage medium [ammonia], is completely free of pollution." The nitrogen for Garzon's process would indeed come from the air and would ultimately be returned to the air once the stored energy is converted back into electricity. The only other raw ingredient consumed in the process is water, and that, too, is restored in the end. The idea works like this: Electricity is generated by wind and solar farms whenever wind and sunlight are available. The electricity that exceeds the current demand is redirected into Garzon's electrochemical conversion system, where it is used to isolate the atoms that go into ammonia: nitrogen from the air and hydrogen from the electrolysis of water (the separation of H2O into H2 and O2). Both nitrogen and hydrogen are initially supplied in the form of diatomic molecules (N2 and H2), but they must be separated into individual N and H atoms, with help from a metal catalyst, before they can be combined into ammonia. The details of this conversion, including operating temperatures and pressures, quantities of nitrogen and hydrogen, and selection of catalysts and other facilitating materials, are currently being researched. Once the ammonia has been created, it enters a large storage tank to await the renewable-energy equivalent of a rainy day. When energy is needed, the ammonia is fed into a fuel cell, where it is readily converted back into nitrogen, water, and electricity. Such ammonia-fueled fuel cells already exist and are both highly efficient and fully scalable. They go on and off easily and support a large flow of electricity. And they do not require inefficient fossil-fuel firming. Ammonia also has the advantage of being portable. Because ammonia is easily liquefied, it is convenient to transport and store. There may ultimately be many thousands of such tanks around the country, potentially connected by tanker truck routes or pipelines, as well as power lines.

So what's the X factor in Garzon's idea? It's the chemical bonds themselves. The energy storage capacity of nitrogen's bonds is both the plan's greatest virtue and its biggest dilemma: current methods for cleaving diatomic nitrogen's triple bond to make individual nitrogen atoms (N2 → N + N) require a great deal of energy. Some of that energy is recovered when the process is reversed in the fuel cell, but it still presents a large hurdle along the way.
Relieving the pressure

An industrial-use version of this conversion process, including the N2 separation, already exists. It is widely used to manufacture ammonia for agricultural and other applications, but, until Garzon and others improve upon it, it's far too expensive for firming renewable energy. Known as the Haber-Bosch process, it relies on high-pressure catalysis: the diatomic gases N2 and H2 flow over a catalytic surface made from either iron or ruthenium, which breaks the diatomic bonds, creating individual nitrogen and hydrogen atoms. These atoms then combine to form ammonia gas, NH3. Sounds good, but here's the problem: the rate-limiting step, not surprisingly, is breaking the diatomic nitrogen molecule's triple bond. In order to obtain a decent reaction rate and adequate performance from the catalysts, the process must be run at a high temperature, around 500°C. But the ammonia-building reactions are exothermic, and a high temperature suppresses them by inhibiting the release of heat. This would be a deal-breaker, except that, because the overall reaction merges reactants together, there are more incoming molecules (N2 and H2) than outgoing molecules (only NH3). As a result, the reaction can be driven in the desired direction by imposing a very high pressure.
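In equation form, the mole-counting argument rests on the overall synthesis reaction (standard chemistry, not spelled out in the article):

\[
\mathrm{N_2} + 3\,\mathrm{H_2} \;\rightleftharpoons\; 2\,\mathrm{NH_3}, \qquad \Delta H < 0
\]

Four molecules of gas combine into two, so squeezing the system pushes the equilibrium toward ammonia (Le Chatelier's principle), even as the high temperature needed for fast kinetics pushes it back toward the reactants.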


(Diagram: an electrochemical cell in which N2 and H2 feed opposite electrodes; hydrogen ions (H+) cross the electrolyte while electrons travel through an external wire, and NH3 is released at the nitrogen electrode.) Garzon's electrochemical process will use incoming solar- and wind-produced electricity to convert nitrogen (from the air) and hydrogen (from the electrolysis of water) into ammonia. Electrodes coated with specialty materials catalyze the separation of diatomic nitrogen and hydrogen molecules into individual atoms. Positively charged hydrogen ions produced at one electrode travel through an electrolyte material to the opposite electrode, where the nitrogen atoms are produced. Electrons from the hydrogen atoms flow to the nitrogen electrode through an external wire. At the nitrogen electrode, hydrogen attaches to nitrogen to make ammonia, which is then released and subsequently harvested. To make the process sufficiently energy-efficient to meet grid-scale energy storage requirements, both the electrode and the electrolyte materials need to be carefully engineered at the molecular level.
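Written as half-reactions, the cell described in the caption amounts to the following. The article does not give explicit equations, so these are inferred from its description of a proton-conducting cell:

\[
\text{hydrogen electrode:}\quad 3\,\mathrm{H_2} \;\rightarrow\; 6\,\mathrm{H^+} + 6\,e^-
\]
\[
\text{nitrogen electrode:}\quad \mathrm{N_2} + 6\,\mathrm{H^+} + 6\,e^- \;\rightarrow\; 2\,\mathrm{NH_3}
\]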

The system essentially seeks to relieve its pressure by reducing the number of reactant molecules present, thereby favoring the production of ammonia. Unfortunately, the amount of pressure required to accomplish this is enormous, around 200 atmospheres, and with that much pressure comes tremendous expense in terms of reinforced reaction vessels and pipes, energy-consuming pumps and compressors, and heightened safety accommodations. The solution, then, is to invent a way to make this process work at low temperatures and low pressures. This is a materials science challenge because there are two main components to the electrochemical conversion system, and both will require new, specialty materials to do their jobs under less extreme conditions. The two components are the electrodes, where the catalyst does its work, and the electrolyte that lies between them.
Electrodes and electrolytes

First and foremost, a new electrode material will need to do the heavy lifting: breaking the strong triple bond in diatomic nitrogen from the air. The current Haber-Bosch process uses heat and pressure to drive the breaking of N2 bonds on the iron and ruthenium surfaces. The new Los Alamos process, however, will use electrical energy from wind and solar farms to facilitate cleaving the diatomic bonds, so the reactions can occur at lower pressures and temperatures. Garzon has a lead on a novel material that might do the trick: molybdenum nitride. Early experimentation with molybdenum nitride surfaces has been promising: they split diatomic nitrogen and produce intermediate structures, NH and NH2, attached to molybdenum on the electrode surface along the way to producing ammonia.

These successes were first revealed in neutron scattering experiments carried out by Garzon's colleagues Tony Burrell, Alex Mueller, and T. Mark McCleskey at the Los Alamos Neutron Science Center. In addition, techniques developed at Los Alamos allow the molybdenum nitride to be built into a nanostructured material capable of providing an enormous surface area for catalysis from a very small amount of the catalyst itself: an incredible 70 square meters per gram of molybdenum nitride. Because the surface is textured at the nanoscale, the large surface area is achieved without an especially large electrode. This is an important achievement because the greater the surface area, the greater the reaction throughput. The research team has also developed a thin-film deposition technique to produce large-surface-area films on supporting electrode structures.

The new nitrogen-to-ammonia conversion process will also require a new electrolyte to convey hydrogen from one electrode (where hydrogen atoms are isolated) to the other (where they combine with nitrogen to make ammonia). The electrolyte can't be water or water-based, like the acid solutions used in some batteries, because water prevents the catalysts from functioning. And it needs to selectively transport positively charged hydrogen ions, otherwise known as protons, but reject electrons, forcing the electrons along an external wire to the other electrode. (Although the electrons must ultimately arrive at the same nitrogen-binding electrode as the protons to prevent an overall charge buildup, they get there by a different road.) Garzon believes he and his team have found two viable classes of electrolytes to do this job: one liquid and one solid. The former, ionic liquids, are salts that liquefy at or below room temperature. They exhibit excellent chemical stability and dissolve ammonia as well as water does, and they are compatible with the catalysts under consideration. They also offer great potential to be specifically tailored to support nitrogen conversion into ammonia with maximum proton conductivity. However, they are relatively untested. Los Alamos materials chemist James Boncella is working with Garzon to better understand how ionic liquids work and how they can be improved as electrolytes. The other possibility is a solid proton conductor. This would take the form of a relatively thin polymer or ceramic strip sandwiched between the two electrodes. The optimal material will no doubt bear a complex molecular structure carefully designed for proton conduction. However, the common chemical tin pyrophosphate (an ingredient in toothpaste) has already been shown to work to a limited extent, proving the potential of solid proton conductors.

The overall electrochemistry of both solid proton conductors and ionic liquids is not yet as well understood as the electrochemistry of water, making additional foundational research by Garzon's team necessary. For example, Neil Henson, another Los Alamos materials chemist, performs detailed theoretical modeling of the molecular structures of ceramic electrolytes, to help figure out how they transport protons, and of molybdenum nitride electrodes, to help figure out how they separate diatomic nitrogen and synthesize ammonia. In fact, the materials science solutions for both the electrolyte and the electrodes will require significant research and experimentation. The Los Alamos scientists need to create theoretical models of how the materials function at the molecular level. They need to synthesize each material under consideration and test its performance by measuring quantities such as ion formation, ammonia solubility, proton transport, and surface reaction rates. No question, these scientists have their work cut out for them. It remains to be seen whether the new and improved, ammonia-producing electrochemical conversion process, including the N2 separation, can be done in a practical, low-cost, large-scale way. But Garzon feels that he and his colleagues at Los Alamos have already demonstrated the validity of the concept. "Technologically, yes, I believe it will work," Garzon says. "In my mind, the greater uncertainties are economic and political: whether or not our design will become a national infrastructure to support renewable energy."

Craig Tyler

New Day for Nitrogen


If Garzon and his team are successful, they will do more than firm up renewable sources of energy; they will also dramatically cut costs in a major segment of the chemical industry. Ammonia production today is big business. Approximately 200 million metric tons of ammonia are produced worldwide each year, and more than one percent of all energy produced globally is consumed in its production. About five-sixths of this ammonia is used in fertilizers; the rest has a variety of uses, including cleaning products, nutrients for fermentation processes, and antimicrobial agents for animal feeds and beef, to name a few. Ammonia is also the starting point in the manufacture of nearly all nitrogen-based compounds, supporting a wide range of applications that includes propellants and explosives. Ammonia can even be used as a clean-burning combustion fuel; it has been successfully demonstrated in automobiles and aircraft rocket engines, although it contains only about 40 percent as much energy as an equivalent volume of gasoline. "The way I see it unfolding," Garzon says, "this technology will be introduced into the ammonia industry initially, benefiting fertilizer production and maybe lowering food prices. Then there will be a regional-scale energy storage test somewhere, using ammonia conversion, storage, and fuel cells to firm up a local energy utility's wind or solar production. If that proves successful, then hopefully our work will accelerate the use of wind and solar power nationally."


On the Forefront
The sunlit upper layers of the world's oceans are home to an enormous variety of microscopic organisms that create their own food through photosynthesis, the process wherein sunlight powers the synthesis of simple carbohydrates from carbon dioxide and water. The organisms, known collectively as phytoplankton, are at the base of the aquatic food web and are crucial for sustaining oceanic life. When they excrete, die, or are consumed, some of their carbon sinks to the deep ocean, the net effect of which is known as the biological pump: marine sequestration of atmospheric carbon dioxide. Phytoplankton are also responsible for generating a large fraction of the Earth's gaseous oxygen (a byproduct of photosynthesis) and are, therefore, critical for sustaining human lives as well. Scientists are able to monitor the global distribution of phytoplankton because the creatures change the optical properties of the water they inhabit. Chlorophyll and other pigments contained in the phytoplankton absorb the sunlight needed for photosynthesis, leaving less light to be reflected from the water's surface. Using satellites to measure the reflected light in several color bands, scientists can estimate the abundance of chlorophyll (and hence phytoplankton) in the water.

The Climate, Ocean, and Sea Ice Modeling team at Los Alamos National Laboratory is at the forefront of developing the numerical models used in ultra-high-resolution computer simulations. The graphic shown here is from a simulation of ocean chlorophyll concentration, a state-of-the-art enterprise combining a model of phytoplankton ecology and nutrient availability with a detailed ocean circulation model. Ocean chlorophyll data was used to verify that the simulations produce realistic phytoplankton distributions. No small undertaking, the simulation used approximately 18 months on Encanto, the high-performance, massively parallel, 170-teraflop computer at the New Mexico Computing Applications Center. Eventually, the team hopes to include these marine biogeochemical dynamics in a fully coupled global climate simulation and so assess the ocean's phytoplankton population in the face of global climate change.

The graphic shows the distribution for mid-November. A high concentration of chlorophyll (red) runs in a broad band around Antarctica due to the arrival of significant sunlight and the seasonal availability of nutrients that accumulated during the cold austral winter. High concentrations seen along the equator and along the western coasts of South America and Africa are due to an influx of nutrients brought to the ocean surface by upwelling currents.
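The satellite retrieval can be sketched in a few lines, as below. The functional form mirrors the empirical blue-to-green band-ratio algorithms used with ocean-color satellites, but the coefficients and reflectance values are placeholders for illustration, not an operational product or the Los Alamos simulation.

# Sketch of a blue-to-green band-ratio chlorophyll estimate, the kind of
# empirical relationship used with ocean-color satellites. Coefficients and
# reflectance values are placeholders, not an operational algorithm.
import math

def chlorophyll_mg_per_m3(rrs_blue, rrs_green, coefficients=(0.3, -2.9, 1.5)):
    # More chlorophyll absorbs more blue light, so a lower blue/green
    # reflectance ratio maps to a higher chlorophyll concentration.
    x = math.log10(rrs_blue / rrs_green)
    a0, a1, a2 = coefficients
    return 10 ** (a0 + a1 * x + a2 * x ** 2)

print(chlorophyll_mg_per_m3(rrs_blue=0.010, rrs_green=0.003))  # clear water: low estimate
print(chlorophyll_mg_per_m3(rrs_blue=0.004, rrs_green=0.005))  # upwelling zone: higher estimate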

Modeling the distribution of oceanic phytoplankton helps to predict the sustainability of life on planet Earth.

In the near future, structures and systems will look after themselves.


OUR BODIES USUALLY LET US KNOW when they require some sort of care or treatment. Can manmade structures do the same? "Much of our technical infrastructure is approaching, or already exceeds, its initial design life," says Chuck Farrar, leader of the Engineering Institute at Los Alamos, a research and education collaboration between Los Alamos National Laboratory and the University of California, San Diego. "We have to monitor the health of these structures because they continue to be used despite the degradation they've accumulated from their operational environments," he adds. He refers to a wide range of structures, including buildings and bridges, naval equipment and nuclear reactors, amusement park rides and aircraft, as well as large-capital scientific infrastructure items, such as particle accelerators, telescopes, and supercomputers. Farrar wrote a textbook on structural health monitoring (SHM) and guides a cohort of early-career research staff, postdoctoral researchers, and engineering students in its methods. He believes many of the nation's infrastructure woes can be addressed with new technology that he and other members of his team are developing. The technology is designed to produce and interpret data streams from sensors that, they predict, will soon be all over the place.
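Interpreting those sensor data streams often comes down to comparing a damage-sensitive feature against a healthy baseline and flagging statistically unusual drift. The sketch below shows the idea in its most minimal form; the readings are simulated, and a single-feature threshold is far simpler than the statistical pattern recognition fielded SHM systems use.

# Minimal condition-based monitoring sketch: track a damage-sensitive feature
# (here, a structure's measured natural frequency) against a healthy baseline
# and flag an inspection only when a reading drifts outside a statistical band.
# Readings are simulated; real SHM systems fuse many sensors and features.
import statistics

baseline_hz = [4.02, 4.01, 4.03, 3.99, 4.00, 4.02, 3.98, 4.01]   # healthy-state data
mean = statistics.mean(baseline_hz)
sigma = statistics.stdev(baseline_hz)

def needs_inspection(reading_hz, n_sigma=3.0):
    return abs(reading_hz - mean) > n_sigma * sigma

for day, reading in enumerate([4.01, 4.00, 3.97, 3.88], start=1):
    status = "inspect" if needs_inspection(reading) else "ok"
    print(f"day {day}: {reading:.2f} Hz -> {status}")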
If it ain't broke

Human beings are quite adept at knowing when their personal belongings need to be repaired or replaced. Some items can be judged by feel or with a simple visual inspection, as when shoe soles wear thin. Others are used until they fail and are then replaced, like a computer or a hot water heater. But there are others that can't be judged so easily by their look or feel and can't be run until they fail; they must instead be judged by time, as with the expiration date on food or medicine.

Compared to personal belongings, major infrastructure items can be much more difficult to assess. Oftentimes, they can't be run to failure because failure would be either catastrophic (a bridge collapsing with people on it) or unacceptably expensive (the loss of a single machine upon which an entire production line depends). Such negative outcomes are generally prevented with regularly scheduled, or time-based, maintenance. Yet this is undesirable from a lifetime-expense perspective. It forces people, businesses, and governments to pay for inspections (or outright replacements) before they are needed. For example, when the engine oil in a car is changed every 3000 miles, even though the oil may still be usable, the owner pays for more oil changes than needed over the life of the car, to say nothing of the unnecessary environmental impact. And while additional oil changes at $40 apiece may not be too burdensome, retiring high-end hardware before its time (think combat missiles) costs considerably more.

"We can save money, gain efficiency, and improve public safety, all by shifting our culture of maintenance from time-based to condition-based with SHM," Farrar says. In that paradigm, repairs and replacements would be carried out only when they are needed. Factories wouldn't be in danger of shutting down production because one machine breaks unexpectedly (nor would backup machinery be needed as a safeguard) if the condition of the machinery were automatically monitored to provide advance notice of potential problems as they develop. Civilian and military hardware could be kept in service longer and, in some cases, relegated to less critical applications as it ages. Rental equipment could be priced according to the measured amount of wear and tear introduced by the renter. Broadly speaking, condition-based maintenance, as enabled by SHM technology, is part of a true cradle-to-grave system state awareness capability that maximizes the return on investment.

That's where the sensor proliferation comes in: sufficient data must be collected to assess each structure's condition. Unfortunately, it may be prohibitively expensive to retrofit existing structures with large numbers of sensors. (Imagine the Golden Gate Bridge needing multiple sensors on every single one of the interconnecting beam segments underlying the road surface, plus many more on the towers and cables.) However, if the sensors were incorporated into the construction of new bridges, aircraft, equipment, and so on, then their cost would amount to only a tiny fraction of the overall construction or fabrication costs. Therefore, the SHM culture shift can be expected to ramp up as major new infrastructure items are built. Indeed, this is already underway in China, where the construction of bridges, dams, offshore oil platforms, and other large infrastructure is accompanied by a large-scale sensing capability.
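
The economics behind that culture shift can be made concrete with a bit of arithmetic. The sketch below compares lifetime oil-change costs under the two maintenance philosophies using the $40-per-change figure from the example above plus two invented assumptions (a 150,000-mile vehicle life and a 6,000-mile average interval when changes are triggered by measured oil condition); it is an illustration of the saving, not a model used by the Engineering Institute.

    # Compare lifetime maintenance cost under a fixed schedule versus a
    # condition-based schedule. All numbers are illustrative assumptions.
    LIFETIME_MILES = 150_000
    COST_PER_CHANGE = 40              # dollars per oil change (from the example above)

    time_based_interval = 3_000       # change the oil every 3,000 miles regardless
    condition_based_interval = 6_000  # assumed average interval when sensors say the oil is still good

    time_based_cost = (LIFETIME_MILES // time_based_interval) * COST_PER_CHANGE
    condition_based_cost = (LIFETIME_MILES // condition_based_interval) * COST_PER_CHANGE

    print(f"Time-based:      ${time_based_cost}")       # 50 changes -> $2000
    print(f"Condition-based: ${condition_based_cost}")  # 25 changes -> $1000
    print(f"Saved:           ${time_based_cost - condition_based_cost}")
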

Structural health monitoring (SHM) involves deploying sensors and software to monitor a wide variety of infrastructure objects for any sign of degradation over time. Objects include buildings and bridges, power plants and industrial plants, ships and aircraft, and other large-investment equipment for transportation, entertainment, and scientific research.



Shaky foundation, in a good way

Sensing and interpreting a wealth of structural health data is far from straightforward. Simply deploying sensors is not enough. Rather, a number of critical design decisions must be made and implemented before the SHM system is capable of delivering the desired information. For example: Which sensors should be used? How many? Where should they go? Do they need to operate in extreme temperatures or rain? Do they need to run all the time, or can they just power up periodically as needed? How often should they turn on, and how long should they stay on when they do? Will they store their measurement data until someone or something collects it, or will they transmit the data somewhere? If the latter, then how often should they transmit? And how much electrical power will be consumed by all this data acquisition, storage, and transmission? Assuming that sending repair teams to replace thousands of batteries is not an option, how will the sensors obtain the power they need, year after year?

To power the sensors without a hard-wired connection to the electrical grid, which is not always available, engineers could opt for solar energy. But while solar cells can be small and independent of the grid, their use would be restricted to locations with frequent access to direct sunlight.

Los Alamos engineers Stuart Taylor (left) and Steve Anton fit a sensor node, capable of analyzing and transmitting sensor data, to an energy harvesting element on a wind turbine blade. Mechanical energy from the natural vibration of the blade is converted into electrical energy for use by the sensor node, allowing the sensors to continuously monitor the structural health of the blade without needing a battery or other power source.

In a darker environment, such as the underside of a bridge, an elevator shaft, or an airplane flying at night, sunlight would not be available. What would be available, in these and a variety of other SHM settings, are frequent mechanical vibrations.

Within certain materials, including some crystals and ceramics, mechanical stress causes electrical charge to accumulate. This property, known as piezoelectricity, has multiple practical applications. It is frequently used to make sensors that operate by converting motion into electrical signals or, in reverse, to make actuators that convert electrical inputs into motion. Los Alamos postdoctoral researchers Steve Anton and Stuart Taylor harness piezoelectricity in yet a different way, capturing and storing electrical energy from everyday vibrations to provide power in settings where light is limited.

They designed their vibration-powered sensor units with two important attributes, in addition to drawing from an energy source that's freely available in many SHM settings. One of these attributes is the energy storage element. Instead of using a rechargeable battery, which would be heavy and lose storage capacity over time, the two researchers chose to charge a supercapacitor with their captured energy. Capacitors are simple electrical storage devices consisting of two metal plates that hold equal and opposite electrical charges. "They don't hold onto a charge forever," Anton says, "but that's okay because the charge is continually replenished by more vibrations."

The other attribute Anton and Taylor built into their sensors is short-range radio transmission capability. As a result, the sensor devices are completely wireless: no wires for power coming in and no wires for sensor readings going out. Taylor explains, however, that short-range communication is a necessary limitation. "To get miles and miles of transmission capability would require more power and add a lot of weight," he says. "So sometimes we have to settle for each sensor communicating with a neighboring sensor that's no more than 50 meters away and hopping the data down the line to some data storage unit."

There are many ways to accommodate short-range communications to a local data storage unit. On an airplane, for example, wireless sensors in the wings might transmit to a central data storage unit onboard, possibly no larger than a flash drive, which could tap into the plane's internal power and connect to its radio if needed. Alternatively, the sensors could simply store the data until a separate system (or person) comes along to collect them. On an SHM-equipped bridge, a vehicle or unmanned aircraft known as a data mule could wirelessly download all the sensor data each time it drives across or flies by the bridge.
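
Whether such a node can really live battery-free comes down to an energy budget: the average power trickling in from the piezoelectric harvester, the reserve banked in the supercapacitor (E = ½CV²), and the cost of each wake-measure-transmit cycle. The sketch below runs that budget with invented numbers; none of the values are the actual specifications of Anton and Taylor's hardware.

    def stored_energy_joules(capacitance_farads, voltage):
        """Energy held in a capacitor: E = 1/2 * C * V^2."""
        return 0.5 * capacitance_farads * voltage**2

    # Illustrative numbers only (not the real node's specifications).
    harvested_power_w = 0.002        # 2 mW average from structural vibration
    supercap_f = 1.0                 # 1 F supercapacitor
    operating_v = 3.0                # charged to 3 V
    energy_per_wakeup_j = 0.05       # measure plus one short radio hop, ~50 mJ per cycle

    reserve_j = stored_energy_joules(supercap_f, operating_v)
    wakeups_per_hour = harvested_power_w * 3600 / energy_per_wakeup_j

    print(f"Energy reserve in the supercapacitor: {reserve_j:.1f} J")
    print(f"Sustainable duty cycle: about {wakeups_per_hour:.0f} wake-ups per hour")
    print(f"Reserve alone covers {reserve_j / energy_per_wakeup_j:.0f} cycles if the vibration stops")
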



Helping hand

Once the sensors are powered and their data collected, how will that raw sensor data be converted into actionable information for damage detection, characterization, and prediction? Consider, for example, two widely used sensors: strain gauges, which measure how much a solid material is being stretched or compressed, and accelerometers, which (no surprise here) measure accelerations. Suppose a series of strain gauges and accelerometers are affixed to an airplane wing. During a flight, the strain gauges obtain a variety of measurements. Most of these readings return to normal afterward, but a few indicate permanent strain. The accelerometers record the motions from a wide range of forces acting in different directions with different intensities. That's the raw data. Now, based on that data, is there any damage? Could there be damage located between two of the sensors? Is maintenance needed? Should any parts be replaced? The data alone don't answer these questions. Somehow, the pattern of measurements needs to be compared with other patterns that might indicate damage or health, even though those patterns might be specific to a particular arrangement of sensors and may not yet exist. Perhaps the patterns can be obtained through experience, with future accumulated flight hours. Or maybe they can be calculated in advance by some as-yet undiscovered formulation.

Los Alamos engineering researcher David Mascarenas, also part of the Engineering Institute, works on an unconventional solution to this data-to-decision problem. Sometimes the best way to obtain coherent, actionable information from a sensor system, he says, is to give it a helping hand: an actual human hand. Mascarenas took an unassuming red-knit glove and fitted it with a collection of cell phone vibrators distributed around the hand and fingers. The vibrators receive data from a set of accelerometers attached to a test object. When the test object is subjected to inputs from a shaker, various accelerometer readings cause particular parts of the glove to vibrate. In principle, the test object could be anything from a rigid structure on a laboratory benchtop to an unmanned aircraft radioing data back whenever it climbs, banks, or changes speed. Given a little time, a person wearing the glove can learn by association what the different vibrations mean, similar to the way a driver can learn what the different vibrations of an old car mean (about to stall, bad brake rotors, etc.). From that time forward, he or she can feel what's happening to the test object, sight unseen. "The human nervous system includes an extremely capable, extremely generalized system for interpreting sensor data, from the eyes, ears, skin, etc.," Mascarenas says. "The glove taps into that."
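
At its core, the glove solves a mapping problem: turn the recent activity on each accelerometer channel into a vibration intensity at one spot on the hand. The details of Mascarenas's implementation are not given here, but a minimal version of the idea might look like the sketch below, which converts each channel's root-mean-square level into a drive percentage for the corresponding vibrator; the channel names, sample values, and scale factor are all hypothetical.

    import math

    def rms(samples):
        """Root-mean-square amplitude of a window of accelerometer samples."""
        return math.sqrt(sum(x * x for x in samples) / len(samples))

    def drive_levels(channels, full_scale=2.0):
        """Map each channel's RMS level to a vibrator duty cycle (0-100 percent).

        channels: dict of channel name -> list of recent acceleration samples (g).
        full_scale: assumed RMS value that corresponds to 100 percent vibration.
        """
        return {name: min(100.0, 100.0 * rms(samples) / full_scale)
                for name, samples in channels.items()}

    # Hypothetical readings: strong vibration near the wing root, very little at the tip.
    readings = {
        "wing_root": [0.9, -1.1, 1.0, -0.8, 1.2],
        "wing_tip":  [0.05, -0.04, 0.06, -0.05, 0.04],
    }
    for channel, level in drive_levels(readings).items():
        print(f"{channel}: drive its vibrator at {level:.0f}%")
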

David Mascarenas, of the Engineering Institute at Los Alamos, models the latest fashion within the structural health monitoring community: a red glove fitted with vibrators so that he can feel what various sensor-equipped objects are doing. The wires coming from the glove go to a small unit that receives wireless signals from sensor nodes on the multi-level test structure in the room behind him, where Chuck Farrar, leader of the Institute, operates a shaker connected to the structure in order to generate test signals for the glove. Over time, a person wearing the glove can learn to interpret what the structure (or any sensor data-transmitting object) is doing, based on what he or she can feel.

While computers are better at interpreting certain types of data (a barcode), humans are better with other types of data (facial recognition). Mascarenas finds that the two differing skill sets both have a role in SHM damage detection and interpretation, depending on the object and the type of damage in question. "For some applications," he says, "the combination of machine and human processing of sensor signals leads to better decision-making than either one could do alone."
Plane scan

Meanwhile, Eric Flynn, a colleague of Mascarenas and a postdoctoral researcher at the Engineering Institute, is hard at work on the machine side, developing both hardware and software for automated damage recognition without any help from the red glove. Working with graduate student Greg Jarmer, Flynn designed and constructed a portable system that nondestructively probes solid surfaces for defects. The system involves a laser beam that's redirected by a motion-controlled mirror and looks like something out of a science fiction movie: a red line that sweeps across a test surface. In Flynn's experimental setup, the laser system analyzes a large metal plate, onto which corrosion damage has been introduced on the back side.



A small piezoelectric vibrator shakes the plate at ultrasonic frequencies while the laser scans the undamaged side, looking for small, local changes in the resulting vibration patterns caused by the hidden corrosion. Flynn wrote the software that his system uses to translate the laser response data into an actual damage assessment. So far, it has been shown to correctly recognize corrosion in metals and more complex damage in composite materials, as well as cracks, delamination (layers peeling apart), and holes. And unlike traditional vibration-mode analysis, it offers both high resolution and portability.

"The system could be used to scan an aircraft body between flights, using a robotic arm to move the laser all around the aircraft," Flynn explains. "It's not the kind of system where the airplane can test itself (we're not quite there yet), but for slowly evolving corrosion and fatigue damage, continuous self-monitoring is much less important than comprehensive measurements. The active scan is better."

Farrar also notes the tangible benefits. "Right now, on an unmanned aerial vehicle [UAV, or drone], there's no way to assess damage to the wings; they are simply replaced after a certain number of flight hours. Eric's system would save a lot of waste by preventing perfectly healthy wings from being thrown away before their time."
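
On the software side, the essential step is to build a map of some local vibration feature over the scanned surface and then flag spots that stand out from their surroundings. Flynn's actual algorithms are more sophisticated and are not reproduced here; the sketch below only illustrates that flag-the-outlier idea, comparing each scan point with the median of its neighbors on an invented grid with an invented threshold.

    import statistics

    def damage_map(field, threshold=3.0):
        """Flag grid points that deviate strongly from their local neighborhood.

        field: 2-D list of a measured vibration feature (e.g., local response amplitude).
        threshold: how many times the neighbors' spread counts as anomalous.
        """
        rows, cols = len(field), len(field[0])
        flags = [[False] * cols for _ in range(rows)]
        for i in range(rows):
            for j in range(cols):
                nbrs = [field[a][b]
                        for a in range(max(0, i - 1), min(rows, i + 2))
                        for b in range(max(0, j - 1), min(cols, j + 2))
                        if (a, b) != (i, j)]
                spread = statistics.pstdev(nbrs) or 1e-9   # avoid dividing by zero on flat data
                if abs(field[i][j] - statistics.median(nbrs)) > threshold * spread:
                    flags[i][j] = True
        return flags

    # A mostly uniform plate with one point responding very differently (hidden corrosion, say).
    scan = [[1.0, 1.1, 0.9, 1.0],
            [1.0, 1.0, 1.1, 0.9],
            [0.9, 1.0, 3.5, 1.0],
            [1.0, 1.1, 1.0, 1.0]]
    for row in damage_map(scan):
        print(["X" if flagged else "." for flagged in row])
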
Sweeping the streets

A technology developed at the Engineering Institute uses a scanning laser to search material surfaces for damage, such as cracking or corrosion. Having both high resolution and portability, this system can ultimately perform a variety of important structural health monitoring functions, like checkups for unmanned aerial vehicles between flights, as depicted in this artist's conception.

Once a set of sensors has been allocated to monitor the health of some structure or system, and a data analysis system has been established to interpret the sensor readings, the next question is how best to arrange the sensors to maximize the value of the information they collect. In addition to his laser work, Flynn has developed a technique to accomplish this optimization. As a compelling scenario to motivate his research, he created a computer model of a generic urban environment inspired by Times Square in New York City, plus several of the surrounding blocks. He then distributed 10 (simulated) radiological detectors, capable of detecting radiation sources that may indicate the presence of a nuclear threat, such as an undetonated dirty bomb. When Flynn introduces a small radiological source somewhere within the simulated section of the city, his program calculates the range from the source to each detector, including the effects of any line-of-sight obstructions, and determines which detectors will register a signal and how strong that signal will be. If at least three detectors make solid readings (more is better), then the system should be able to narrow down where the radioactive material is located.

"The purpose of the simulation," Flynn says, "is to figure out exactly where the detectors should be located to create the optimal detection probability, without knowing in advance where the source will show up or how big that source might be." Of course, that problem can be partially avoided by simply adding more sensors, but, Flynn notes, there is always a limit to how many sensors are available for any given use. That limit could come from expense (radiological sensors requiring constructed poles for a better view), but it could also come from weight (sensors and wires on an aircraft or spacecraft), power requirements (multiple sensors drawing from a common battery), bandwidth (many sensors competing to transmit data over a common frequency), or data processing requirements (too much data to process quickly or cheaply). So limiting the number of detectors in the radiological simulation adds a necessary element of realism.

If circumstances allowed it, the radiological source problem might be solved with experience. Over time, many different sources would be picked up by different detectors, and the system operator might learn to tweak their locations to improve the performance. For instance, if one of the detectors never detected anything, then it could be moved around until it becomes more productive. Of course, Flynn says, "time and experience are luxuries you don't have with radiological crises." So he set up the next best thing, simulated time and simulated experience, by implementing what's known as a genetic algorithm: a natural-selection mechanism akin to the Darwinian process of evolving to suit the environment. The simulation initially deploys its sensors to a randomly generated set of locations and then introduces a series of simulated radiation sources, one at a time, at random locations within the section of the city under surveillance.



Some of these sources represent potential threats, while others represent benign activity, such as isotopes used in routine medical procedures. With each trial source location, the performance of the detectors is evaluated. For example, how often would each detector, in conjunction with all the others, contribute valuable information? Over many different trials, the sensor network develops a lifetime of experience. Detector locations, or sets of locations, that do not help enough are like animals that are unfit for survival or reproduction; they are deleted from the gene pool. New sensor locations are introduced to replace the ones that are deleted, based on mutation and breeding among the remaining sensor population, in a process that repeats for each new generation. In this way, the 3D sensor locations evolve to their optimal configuration.

Flynn might deservedly take pride in protecting his virtual city from nuclear disaster, but the work also has a broader impact. Optimizing the Times Square radiological alarm system is no different from optimizing the sensor placement in any surveillance problem, including SHM. Either way, it's a matter of preparing for a potentially harmful agent, whether that be a radiological weapon or a structural failure, that cannot be predicted in advance.
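
In code, the evolutionary loop described above has a simple skeleton: score each candidate detector layout against many random trial sources, keep the best layouts, and breed and mutate them to form the next generation. The sketch below is a bare-bones version of that idea on a featureless square map, with no buildings or line-of-sight blocking and an invented fitness rule (the fraction of trial sources seen by at least three detectors); it shows the mechanics of a genetic algorithm, not Flynn's actual simulation.

    import random

    CITY = 1000.0             # side of the (featureless) square map, in meters
    N_DETECTORS = 10
    DETECTION_RANGE = 250.0   # a detector "sees" a source within this range (assumed)
    random.seed(1)

    def random_layout():
        return [(random.uniform(0, CITY), random.uniform(0, CITY)) for _ in range(N_DETECTORS)]

    def fitness(layout, trials=200):
        """Fraction of random trial sources seen by at least three detectors."""
        hits = 0
        for _ in range(trials):
            sx, sy = random.uniform(0, CITY), random.uniform(0, CITY)
            seen = sum(1 for dx, dy in layout
                       if (dx - sx) ** 2 + (dy - sy) ** 2 <= DETECTION_RANGE ** 2)
            hits += seen >= 3          # three solid readings allow localization
        return hits / trials

    def breed(mom, dad, mutation=50.0):
        """Child inherits each detector position from one parent, then jitters it (mutation)."""
        child = []
        for m, d in zip(mom, dad):
            x, y = random.choice((m, d))
            child.append((min(CITY, max(0.0, x + random.gauss(0, mutation))),
                          min(CITY, max(0.0, y + random.gauss(0, mutation)))))
        return child

    population = [random_layout() for _ in range(30)]
    for generation in range(40):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]    # unfit layouts leave the gene pool
        population = survivors + [breed(random.choice(survivors), random.choice(survivors))
                                  for _ in range(20)]
    print(f"Best layout covers ~{fitness(population[0]):.0%} of trial sources")
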
Sensor suicide mission

Other applications call for flexible sensor systems that can be deployed on short notice to remote and potentially hostile regions. In such cases, autonomous, sensor-packed rovers and UAVs may be needed. The researchers at the Engineering Institute, not the sort to leave any stone unturned, have designs in this arena as well. When Mascarenas takes off his red glove, it's so he can work on his lightweight, autonomous UAV. About the size of a small bird, Mascarenas's plane is designed to extol the virtues of multitasking materials. "For this kind of ultra-light, self-powered plane, you really can't tolerate any component that doesn't pull its own weight and then some," Mascarenas says. With that in mind, he designed the wings from a graphene oxide material that does double duty as a wing and an energy-storing capacitor. He also designed the planes to be disposable, in case they should need to be deployed into harsh environments from which they may never return.

Mascarenas's colleague Chris Stull develops software to instruct autonomous vehicles, including ground-based rovers and Mascarenas's UAVs, how to navigate within complex environments without receiving regular attention from human operators. "We train autonomous vehicles like you might train a pet," Stull says. "Except that you probably wouldn't send your pet into a war zone." Indeed, Stull's training program involves rewards and penalties applied to complex, dangerous tasks, such as patrolling a modern urban combat environment. "Ground-based vehicles might be rewarded for finding a place to refuel and penalized for encountering an IED [improvised explosive device]," he says. "They essentially try to maximize their score on the reward-penalty scale, all the while driving around in a pattern that will be unpredictable to the enemy."

Making it real

But even though the Engineering Institute's overall effort includes exotic, high-risk applications like these, its researchers acknowledge that real-world, sensor-system testing needs to begin with more predictable SHM environments. Indeed, sensor-based SHM is already in use in certain applications involving rotating machinery, in which operating conditions (rotation rates, temperatures) are tightly controlled and damage scenarios (chipped gear teeth, misalignments) are well understood. On the strength of that understanding, real-world SHM for rotating machinery is currently in use on a variety of high-stakes platforms, such as helicopter drivetrains, Navy vessel propeller shafts, and nuclear reactor coolant pumps. The next challenge is to employ similar technology in less controlled environments, such as aircraft subjected to a huge range of weather, ice, and turbulence conditions, possibly unmanned and behind enemy lines. With all potential applications considered, this work represents an opportunity to rebuild the infrastructure of the world to take care of itself, and the Engineering Institute's team is starting to make it happen.

Craig Tyler

The Engineering Institute's Eric Flynn (left) and Greg Jarmer discuss the placement of radiological sources and detectors in a simulation developed by Flynn. The simulation uses a genetic algorithm that evolves throughout a series of tests until it has found an optimal detector placement. Its objective is to protect the public from potentially dangerous sources of radiation while recognizing benign ones, such as those used in routine medical scans.



Interplanetary Mission Fission


"When you design a spacecraft to explore the solar system, you're always limited by weight and power," says Patrick McClure, a mechanical engineer at Los Alamos National Laboratory. "Too much weight makes it too expensive to get off the ground, and too little power restricts what it can do." For example, the Curiosity rover on Mars, which employs a radioisotope thermoelectric generator (RTG) fueled by a plutonium power source manufactured at Los Alamos, must carefully manage a meager 100 watts of power in order to move around and perform science. But with a lightweight, high-output power source, solar system scientists could obtain data from a much larger suite of power-consuming instruments. "Scientists always want more power," McClure says, "and we want them to have it." McClure, working with Los Alamos colleague David Poston and David Dixon (formerly of Los Alamos), plus others at the NASA Glenn Research Center and National Security Technologies, LLC, built a successful, small-scale demonstration unit of a fission-based spacecraft power source.

Concept for a spacecraft (at the left end of the boom) powered by a new type of fission reactor (at the right end) as demonstrated by the DUFF experiment. The long boom keeps the spacecraft at a safe distance from the reactor to protect it from damaging radiation.
CREDIT (BACKGROUND): CASSINI IMAGING TEAM, SSI, JPL, ESA, NASA

Known as DUFF (Demonstration Using Flattop Fissions, an extension of an earlier tabletop experiment that resembled the flat top of an aircraft carrier), the system uses highly enriched uranium to drive a sustained nuclear reaction that automatically adjusts to supply whatever amount of power is needed, up to a maximum amount. Most spacecraft are limited to 150 or 200 watts, but a DUFF-type fission power system is expected to provide 1000 watts for the life of the mission, enabling the spacecraft to do significantly more science.

The new power source will not be NASA's first fission reactor in space. Back in 1965, a space vehicle called SNAP-10A attempted to fly a much larger fission reactor requiring an active control system. That control system malfunctioned, permanently shutting down the reactor in just 45 days. "What makes this new system different is its simplicity," Poston explains. "DUFF uses a heat pipe with no moving parts and a simple Stirling engine that's already flight qualified." The heat pipe itself is a Los Alamos invention that carries heat away from the reactor and into the Stirling engine without the circulating fluids and pumps used in conventional heat exchangers. The heat energy delivered by the heat pipe is converted by a reliable, high-efficiency Stirling engine into mechanical energy, which is then converted to electricity by a small alternator. The combined system is lightweight, adaptable, and maintenance-free for more than a decade, making it ideal for missions to the outer solar system, where solar power is not an option.

The only alternative power source available for such deep-space missions is the plutonium-based RTG. It is long-lived and has no moving parts, but because plutonium production is limited, it can take several years to obtain enough to make a single RTG. McClure and others argue that DUFF-like fission power could become plentiful, freeing RTGs to be used in the missions for which they are particularly well suited.

And NASA seems to agree, stating that such small fission designs will be an important addition to their capabilities portfolio in the future. In addition to benefitting current planetary science, Poston says, this system may serve as a stepping stone to enabling technologies for future space exploration. Higher-power fission systems will ultimately be needed for human outposts on the Moon and Mars, as well as for propulsion systems to transport spacecraft and crews across the solar system and beyond.

Craig Tyler

Powered by Plastic
Do you want an environmentally friendly way to recharge your phone and other devices? With organic photovoltaic (OPV) solar cells, you could just slap a thin plastic film onto the nearest window to collect power from the Sun. The only obstacle is efficiency, which determines how much power an OPV can obtain from sunlight. At present, OPVs are only about 8 percent efficient, three to four times less efficient than standard, solid-state solar panels. To improve efficiency, researchers need to understand the devices' complex submicroscopic structure.

OPVs are made from a sandwich of organic layers, each less than 100 nanometers thick (around a thousand times thinner than a human hair). Because the layers are so thin, much of their physics is dominated by the detailed features of the interfaces between them. But imaging these interfaces is difficult because x-rays, normally used to see inside objects, are ineffective with the light elements in the organic materials and tend to damage them. Neutrons, however, penetrate to the interfaces being studied, reflect well at the interfaces, and do not damage OPV materials. Adam Moulé of the University of California at Davis and Jarek Majewski of the Los Alamos Neutron Science Center became the first to reveal the internal structure of OPV-layer interfaces with neutron reflectometry.



The team discovered a complex relationship between the internal structure (including the morphology of the interfaces and the choice of metal used for the electrodes) and the overall device efficiency, suggesting promising avenues for research to improve efficiency.

Success could make OPVs extremely practical because they are cheaper to manufacture than standard solar cells, and they can be applied easily to rigid or flexible surfaces, such as walls and even fabrics. Majewski particularly likes the idea of a tent covered with OPVs, redefining the meaning of roughing it in the wilderness.

Craig Tyler

Organic photovoltaic solar cell.
CREDIT: I-MEET INSTITUTE, MATERIALS FOR ELECTRONICS AND ENERGY TECHNOLOGY, FRIEDRICH-ALEXANDER-UNIVERSITÄT ERLANGEN-NÜRNBERG, GERMANY

Graphic Math

He did it for love. Nathan Lemons, a young mathematician with the Laboratory's Applied Mathematics and Plasma Physics group, and several colleagues have done something that few people can lay claim to: they have proven a theorem in graph theory. Invented by Leonhard Euler in 1735 to solve the Königsberg Bridge problem (see right), graph theory has evolved into its own subfield of mathematics, with applications in many areas of science, including chemistry, linguistics, sociology, and, in particular, computer science. Lemons's theorem, which has so far been viewed with interest by fellow graphematicians, is already being applied to certain aspects of cyber security, with a slew of other applications waiting to be explored.

The Kaliningrad bridge problem: Starting at any point, can you walk across all seven bridges, crossing each bridge only once? Kaliningrad, Russia, is the former city of Königsberg, Prussia. In 1735, Leonhard Euler proved the equivalent Königsberg bridge problem to be impossible. Inset: The problem represented as a graph.

Graph theory deals with relationships between pairs. The namesake graphs are made up of nodes and edges, with each edge connecting a pair of nodes together. One defines rules, say, that each node must be connected to an even number of other nodes. Then one considers how to enumerate and classify all graphs that satisfy the rules and to flesh out the how and why of those graphs' properties and behaviors. The math is abstract and can quickly become challenging, but there is a richness and beauty to it that Lemons finds compelling. Others, however, are motivated to study graphs because they are far and away our best hope for understanding the properties of real-world networks.

Networks are the backbones of modern civilization, and they're all different. Cellular networks, the power grid, the interstate highway system, financial networks, Facebook, the World Wide Web: each emerged for different reasons, grew by different sets of rules, and consequently, each has different properties. Surprisingly little, however, is understood about the properties of networks in general, and a question as basic as "Does this particular network become more robust as it grows larger?" is often difficult to answer.

Enter graph theory. It can be viewed as the fundamental mathematics for the study of networks, with graphs seen as structures designed for the express purpose of being analyzed.

That's not to say that all network questions will soon be answered. There's the "map versus the territory" dilemma, that is, whether the insight gained by studying graphs translates into an understanding of real-world networks. There's no simple answer. So, for example, Lemons's theorem pertains to what are known as random intersection graphs. Defining the graphs by a parameter k, in the limit where the number of nodes goes to infinity, Lemons proved that the graphs undergo a phase transition and abruptly change from being disconnected to being highly connected as k goes from a value of less than one to a value greater than one.

Will the theorem help a teen get more friends on Facebook? "Probably not," says Lemons honestly. It might, however, provide some insight into how to distribute cryptographic keys in a wireless sensor network. Suppose two sensors can talk to each other securely if they share a cryptographic key, but every sensor cannot share the same key, nor can every sensor have a complete set of keys. The theorem gives conditions for achieving a fully connected network in the case where every sensor receives a small number of keys that are chosen at random from the complete set of keys.

For his part, Lemons is unconcerned whether the theorem contributes to anything other than the greater body of knowledge. As he says, "I just love the work, and I'm happy I could contribute."

Jay Schecker
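
That key-distribution picture is easy to explore numerically. In the sketch below, which is an illustration only and not Lemons's model or proof, each sensor receives a few keys drawn at random from a shared pool, two sensors are linked if they hold a key in common, and a breadth-first search checks whether the resulting random intersection graph comes out fully connected; with the pool size and network size held fixed, the jump from rarely connected to almost always connected shows up over a narrow range of keys per sensor.

    import random
    from collections import deque

    def key_graph(n_sensors, pool_size, keys_per_sensor, rng):
        """Build a random intersection graph: nodes share an edge if their key sets intersect."""
        keyrings = [set(rng.sample(range(pool_size), keys_per_sensor)) for _ in range(n_sensors)]
        edges = {i: set() for i in range(n_sensors)}
        for i in range(n_sensors):
            for j in range(i + 1, n_sensors):
                if keyrings[i] & keyrings[j]:      # at least one shared cryptographic key
                    edges[i].add(j)
                    edges[j].add(i)
        return edges

    def is_connected(edges):
        """Breadth-first search: can every sensor reach every other through secure links?"""
        seen, queue = {0}, deque([0])
        while queue:
            for nbr in edges[queue.popleft()]:
                if nbr not in seen:
                    seen.add(nbr)
                    queue.append(nbr)
        return len(seen) == len(edges)

    rng = random.Random(0)
    # With the pool fixed, handing out more keys per sensor sharply raises the odds that the
    # whole network is connected: a finite-size glimpse of the transition described above.
    for keys_per_sensor in (1, 2, 3, 4, 5):
        trials = [is_connected(key_graph(100, 200, keys_per_sensor, rng)) for _ in range(50)]
        print(f"{keys_per_sensor} keys per sensor: connected in {sum(trials)}/50 trials")
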


ISSN: 1942-6631
Address mail to:
1663
Mail Stop M711
Los Alamos National Laboratory
P.O. Box 1663
Los Alamos, NM 87545
1663magazine@lanl.gov
www.lanl.gov/newsroom/publications/1663/


A mountain biker enjoys New Mexico's desert terrain and dramatic skies east of Los Alamos.
CREDIT: James Rickman

Principal Associate Director of Science, Technology, and Engineering: Alan Bishop
Editor-in-Chief: Craig Tyler
Science Editor: Jay Schecker
Science Writer: Rebecca E. McDonald
Art Director: Donald Montoya
Design, Layout, and Production: Leslie Sandoval
Copyeditor: Caroline Spaeth
Photographers: Ethan Frogget and Sandra Valdez
Advisory Board: Duncan McBranch, Jonathan Ventura, Lisa Rosendorf, and William Priedhorsky

Los Alamos National Laboratory, an affirmative action/equal opportunity employer, is operated by Los Alamos National Security, LLC, for the National Nuclear Security Administration of the U.S. Department of Energy under contract DE-AC52-06NA25396.
A U.S. Department of Energy Laboratory
LALP-13-003

Printed on recycled paper
