MIT News – Food | Water | Abdul Latif Jameel World Water and Food Security Lab (J-WAFS)
$25 million gift launches ambitious new effort tackling poverty and climate change
The King Climate Action Initiative at J-PAL will develop large-scale climate-response programs for some of the world’s most vulnerable populations.
Wed, 29 Jul 2020 | Peter Dizikes | MIT News Office
https://news.mit.edu/2020/gift-tackling-poverty-climate-change-0729

With a founding $25 million gift from King Philanthropies, MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL) is launching a new initiative to solve problems at the nexus of climate change and global poverty. The new program, the King Climate Action Initiative (K-CAI), was announced today by King Philanthropies and J-PAL, and will start immediately.
K-CAI plans to rigorously study programs that reduce the effects of climate change on vulnerable populations, and then work with policymakers to scale up the most successful interventions.

“To protect our well-being and improve the lives of people living in poverty, we must be better stewards of our climate and our planet,” says Esther Duflo, director of J-PAL and the Abdul Latif Jameel Professor of Poverty Alleviation and Development Economics at MIT. “Through K-CAI, we will work to build a movement for evidence-informed policy at the nexus of climate change and poverty alleviation, similar to the movement J-PAL helped build in global development. The moment is perhaps unique: The only silver lining of this global pandemic is that it reminds us that nature is sometimes stronger than us. It is a moment to act decisively to change behavior to stave off a much larger catastrophe in the future.”

K-CAI is an ambitious effort: The initiative intends to help improve the lives of at least 25 million people over the next decade. K-CAI will announce a call for proposals this summer and select its first funded projects by the end of 2020.

“We are short on time to take action on climate change,” says Robert King, co-founder of King Philanthropies. “K-CAI reflects our commitment to confront this global crisis by focusing on solutions that benefit people in extreme poverty. They are already the hardest hit by climate change, and if we fail to act, their circumstances will become even more dire.”

An estimated 736 million people worldwide currently live in extreme poverty, on less than $1.90 per day. The World Bank estimates that climate change could push roughly another 100 million people into extreme poverty by 2030.

As vast as its effects may be, climate change also presents a diverse set of problems to tackle.
Among other things, climate change, along with fossil-fuel pollution, is expected to reduce crop yields, raise food prices, and generate more malnutrition; increase the prevalence of respiratory illness, heat stress, and numerous other diseases; and increase extreme weather events that wipe out homes, livelihoods, and communities.

With this in mind, the initiative will focus on specific projects within four areas: climate change mitigation, to reduce carbon emissions; pollution reduction; adaptation to ongoing climate change; and a shift toward cleaner, more reliable, and more affordable sources of energy. In each area, K-CAI will study smaller-scale programs, evaluate their impact, and work with partners to scale up the most effective solutions.

Projects backed by J-PAL have already had an impact in these areas. In one recent study, J-PAL-affiliated researchers found that changing the emissions audit system in Gujarat, India, reduced industrial-plant pollution by 28 percent; the state then implemented the reforms. In another study in India, J-PAL-affiliated researchers found that farmers using a flood-resistant rice variety called Swarna-Sub1 increased their crop yields by 41 percent.

In Zambia, a study by researchers in the J-PAL network showed that lean-season loans for farmers increased agricultural output by 8 percent; in Uganda, J-PAL-affiliated researchers found that a payment system for landowners cut deforestation nearly in half and is a cost-effective way to lower carbon emissions.

Other J-PAL field experiments in progress include one providing cash payments to stop farmers in Punjab, India, from burning crops, a practice that generates half the air pollution in Delhi; another implementing an emissions-trading plan in India; and a new program to harvest rainwater more effectively in Niger.
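Impact figures like the crop-yield and deforestation results above come from randomized evaluations: outcomes for a randomly assigned treatment group are compared with a control group. Below is a minimal sketch of the difference-in-means estimator on synthetic data; all numbers are invented for illustration and are not J-PAL results.

```python
import random
import statistics

def difference_in_means(treated, control):
    """Treatment effect as the difference in group means,
    with a simple standard error for that difference."""
    effect = statistics.mean(treated) - statistics.mean(control)
    se = (statistics.variance(treated) / len(treated)
          + statistics.variance(control) / len(control)) ** 0.5
    return effect, se

# Synthetic illustration: crop yields (tons/ha) with a true effect of 0.4
random.seed(0)
control = [random.gauss(2.0, 0.5) for _ in range(500)]
treated = [random.gauss(2.4, 0.5) for _ in range(500)]

effect, se = difference_in_means(treated, control)
print(f"estimated effect: {effect:.2f} +/- {1.96 * se:.2f} (95% CI)")
```

Random assignment is what licenses the causal reading: the control group’s average outcome estimates what the treated group would have experienced without the program.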
All told, J-PAL researchers have evaluated over 40 programs focused on climate, energy, and the environment. By conducting these kinds of field experiments, and implementing some widely, K-CAI aims to apply the same approach J-PAL has directed toward multiple aspects of poverty alleviation, including food production, health care, education, and transparent governance.

A unique academic enterprise, J-PAL emphasizes randomized controlled trials to identify useful poverty-reduction programs, then works with governments and nongovernmental organizations to implement them. Programs evaluated by J-PAL-affiliated researchers and found to be effective have been scaled up to reach 400 million people worldwide since the lab’s founding in 2003.

“J-PAL has distinctive core competencies that equip it to achieve outsized impact over the long run,” says Kim Starkey, president and CEO of King Philanthropies. “Its researchers excel at conducting randomized evaluations to figure out what works, its leadership is tremendous, and J-PAL as an organization has a rare, demonstrated ability to partner with governments and other organizations to scale up proven interventions and programs.”

K-CAI aims to conduct an increasing number of field experiments over its initial five-year period and to focus on implementing the highest-quality programs at scale over the subsequent five years. As Starkey observes, this approach may generate increasing interest from additional partners.

“There is an immense need for a larger body of evidence about what interventions work at this nexus of climate change and extreme poverty,” Starkey says. “The findings of the King Climate Action Initiative will inform policymakers and funders as they seek to prioritize opportunities with the highest impact.”

King Philanthropies was founded by Robert E. (Bob) King and Dorothy J. (Dottie) King in 2016.
The organization has a goal of making “a meaningful difference in the lives of the world’s poorest people” by developing and supporting a variety of antipoverty initiatives.

J-PAL was co-founded by Duflo; Abhijit Banerjee, the Ford International Professor of Economics at MIT; and Sendhil Mullainathan, now a professor at the University of Chicago’s Booth School of Business. It has over 200 affiliated researchers at more than 60 universities across the globe. J-PAL is housed in the Department of Economics in MIT’s School of Humanities, Arts, and Social Sciences.

Last fall, Duflo and Banerjee, along with long-time collaborator Michael Kremer of Harvard University, were awarded the Nobel Prize in economic sciences. The Nobel citation observed that their work has “dramatically improved our ability to fight poverty in practice” and provided a “new approach to obtaining reliable answers about the best ways to fight global poverty.”

K-CAI will be co-chaired by two professors, Michael Greenstone and Kelsey Jack, who have extensive research experience in environmental economics. Both are already affiliated researchers with J-PAL.

Greenstone is the Milton Friedman Distinguished Service Professor in Economics at the University of Chicago. He is also director of the Energy Policy Institute at the University of Chicago. Greenstone, who was a tenured faculty member in MIT’s Department of Economics from 2003 to 2014, has published high-profile work on energy access, the consequences of air pollution, and the effectiveness of policy measures, among other topics.

Jack is an associate professor in the Bren School of Environmental Science and Management at the University of California at Santa Barbara. She is an expert on environment-related programs in developing countries, with a focus on incentives that encourage the private-sector development of environmental goods.
Jack was previously a faculty member at Tufts University, and a postdoc at MIT in 2010-11, working on J-PAL’s Agricultural Technology Adoption Initiative.

Over the next decade, the King Climate Action Initiative (K-CAI) intends to help improve the lives of at least 25 million people hard hit by poverty and climate change. Image: MIT News

Engineering superpowered organisms for a more sustainable world
MIT students explore algal water purifiers, programmable soil bacteria, and other biological engineering approaches to food and water security.
Wed, 22 Jul 2020 | Vivian Zhong | Abdul Latif Jameel Water and Food Systems Lab
https://news.mit.edu/2020/engineering-superpowered-organisms-more-sustainable-world-0722

Making corn salt-tolerant by engineering its microbiome. Increasing nut productivity with fungal symbiosis. Cleaning up toxic metals in the water supply with algae. Capturing soil nutrient runoff with bacterial biofilms. These were the bio-sustainability innovations designed and presented by students in the Department of Biological Engineering (BE) last May. With the sun shining brightly on an empty Killian Court, the students gathered for the final class presentations over Zoom, physically distanced due to the Covid-19-related closing of MIT’s campus this spring.

For decades, the sustainable technologies dominating public discourse have tended toward the mechanical: wind power, solar power, saltwater distillation, etc. But in recent years, biological solutions have increasingly taken the forefront.
For recent BE graduate Adrianna Amaro ’20, being able to make use of “existing organisms in the natural world and improve their capabilities, instead of building whole new machines, is the most exciting aspect of biological engineering approaches to sustainability problems.”

Each semester, the BE capstone class (20.380: Biological Engineering Design) challenges students to design, in teams, biological engineering solutions to problems focused on a theme selected by the instructors. Teams are tasked with presenting their solutions in two distinct ways: as a written academic grant proposal and as a startup pitch. For Professor Christopher Voigt, one of the lead instructors, the goal of the class is to “create the climate where a half-baked concept emerges and gets transformed into a project that is both achievable and could have a real-world impact.”

A glance at the research portfolio on the MIT biological engineering homepage reveals a particular focus on human biology. But over the years, students and faculty alike have pushed for greater diversity in the challenges to which the field’s cutting-edge technology could be applied. Indeed, “sustainability has been one of the top areas that students raise when asked what they want to address with biological engineering,” says Sean Clarke PhD ’13, another instructor for the class.

In response to student input, the instructors chose food and water security as the theme for the spring 2020 semester. (Sustainability, broadly, was the theme the previous semester.) The topic was well-received by the 20.380 students. Recent BE graduate Cecilia Padilla ’20 appreciated how wide-reaching and impactful the issues were, while teammate Abby McGee ’20 was thrilled because she had always been interested in environmental issues — and is “not into pharma.”

Since this is the biological engineering capstone, students had to incorporate engineering principles in their biology-based solutions.
This meant developing computational models of their proposed biological systems to predict a system’s output from a defined set of inputs. Team SuperSoil, for example, designed a genetic circuit that, when inserted into B. subtilis, a common soil bacterium, would allow it to change behavior based on water and nutrient levels. During heavy rain, for example, the bacteria would respond by producing a phosphate-binding protein biofilm. This would theoretically reduce phosphate runoff, thus preserving soil nutrients and reducing the pollution of waterways. By modeling natural processes such as protein production, bacterial activation, and phosphate diffusion in the soil using differential equations, the team was able to predict the degree of phosphate capture and show that significant impact could be achieved with a realistic amount of engineered bacterial input.

Biological engineering Professor Forest White co-leads the class every spring with Voigt. White also teaches the prerequisite class, in which students learn how to construct computational models of biological systems. He points out how the models helped students develop their capstone projects: “In a couple of cases the model revealed true design challenges, where the feasibility of the project requires optimal engineering of particular aspects of the design.”

Models aside, simply thinking about the mathematical reality of proposed solutions helped teams early on in the idea-selection process. Team Nutlettes initially considered using methane-consuming bacteria to capture methane gas from landfills, but back-of-the-envelope calculations revealed unfavorable kinetics. Additionally, further reading brought to light a possible toxic byproduct of bacterial methane metabolism: formaldehyde. Instead, they chose to develop an intervention for water-intensive nut producers: engineering the trees’ fungal symbionts to provide a boost of hormones that would promote flower production, which in turn increases nut yields.
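Team SuperSoil’s differential-equation approach can be illustrated with a toy model: a rain signal drives production of a phosphate-binding biofilm protein, which captures free phosphate before it can run off. The structure and all parameters below are invented for illustration; this is not the team’s actual model.

```python
# Toy model (assumed parameters): during rain, engineered bacteria produce a
# phosphate-binding biofilm protein B, which captures free soil phosphate P.

def simulate(hours=48.0, dt=0.01, k_prod=0.5, k_deg=0.05, k_bind=0.8):
    B = 0.0          # biofilm protein level (arbitrary units)
    P_free = 1.0     # free phosphate (normalized to 1 at the start)
    P_captured = 0.0
    t = 0.0
    while t < hours:
        rain = 1.0 if t < 12.0 else 0.0       # rain signal for the first 12 h
        dB = k_prod * rain - k_deg * B        # production minus degradation
        capture = k_bind * B * P_free         # capture rate, mass-action form
        B += dB * dt                          # forward-Euler integration step
        P_free -= capture * dt
        P_captured += capture * dt
        t += dt
    return P_free, P_captured

free, captured = simulate()
print(f"fraction of phosphate captured: {captured:.2f}")
```

Even this simple model shows the kind of question the students could answer quantitatively: how much engineered bacterial input (here, the production rate) is needed for meaningful phosphate capture.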
Team Halo saw water filtration as the starting point for ideation, deeming it the most impactful issue to tackle. For inspiration, they looked to mangrove trees, which naturally take up salt from the water they grow in. They applied this concept to their design of corn-associated, salt-tolerant bacteria that could enhance their plant host’s ability to grow in high-salinity conditions — an increasingly common consequence of drought and industrial agricultural irrigation. Additional inspiration came from research in the Department of Civil and Environmental Engineering: In their design, the team incorporated a silk-based seed coating developed by Professor Benedetto Marelli’s group.

Many of the capstone students found themselves exploring unfamiliar fields of research. During their foray into plant-fungal symbiosis, Team Nutlettes was often frustrated by the prevalence of outdated and contradictory findings, and by the lack of quantitative results they could use in their models. Still, Vaibhavi Shah, one of the few juniors in the class, says she found a lot of value in “diving into something you’ve no experience in.”

In addition to biological design, teams were encouraged to think about the financial feasibility of their proposed solutions. This posed a challenge for Team H2Woah and their algae-based solution for sequestering heavy metals from wastewater. Unlike traditional remediation methods, which produce toxic sludge, their system allows metals from the wastewater to be recycled for manufacturing, and offers the opportunity to harvest the algae for biofuels. As they developed their concept, however, they realized that breaking into the existing market would be difficult due to the cost of all the new infrastructure that would be required.

Students read broadly over the course of the semester, which helped them deepen their understanding of food and water insecurity beyond their specific projects.
Before the class, Kayla Vodehnal ’20 of Team Nutlettes had only been exposed to policy-driven solutions. Amaro, meanwhile, came to realize how close to home the issues they were researching are: All Americans may soon have to confront inadequate access to clean water due to, among other factors, pollution, climate change, and overuse.

In any other semester, the capstone students would have given their final presentations in a seminar room before peers, instructors, a panel of judges, and the indispensable pastry-laden brunch table. This semester, the presentations took place, like everything else this spring, on Zoom. Instructors beamed in front of digital congratulatory messages, while some students coordinated background images to present as a single cohesive team.

Despite the loss of in-person engagement, the Zoom presentations did come with benefits. This year’s class drew a larger audience than in past years, including at least two dozen faculty, younger students, and alumni who joined virtually to show their support. Coordinating a group project remotely was challenging for all the teams, but Team Nutlettes found a silver lining: Because having spontaneous conversations over Zoom is harder than in person, their meetings became a lot more productive.

One attendee was Renee Robins ’83, executive director of the Abdul Latif Jameel Water and Food Systems Lab, who had previously interacted with the class as a guest speaker. “Many of the students’ innovative concepts for research and commercialization,” she says, “were of the caliber we see from MIT faculty submitting proposals to J-WAFS’ various grant programs.”

Now that they have graduated, the seniors in the class are going their separate ways, and some have sustainability careers in mind. Joseph S. Faraguna ’20 of Team Halo will join Ginkgo Bioworks in the fall, where he hopes to work on a bioremediation or agricultural project.
His teammate, McGee, will be doing therapeutic CRISPR research at the Broad Institute of MIT and Harvard, but says that environment-focused research is definitely her end goal.

Between Covid-19 and post-graduation plans, the capstone projects will likely end with the class. Still, the experience will continue to influence the student participants. Team H2Woah is open to continuing their project in some way, Amaro says, since it was their “first real bioengineering experience, and will always have a special place in our hearts.”

Their instructors certainly hope that the class will prove a lasting inspiration. “Even in the face of the Covid-19 pandemic,” White says, “the problems with global warming and food and water security are still the most pressing problems we face as a species. These problems need lots of smart, motivated people thinking of different solutions. If our class ends up motivating even a couple of these students to engage on these problems in the future, then we will have been very successful.”

Students in the biological engineering capstone design class showcase their proposed solutions to food and water security challenges over Zoom. Image: Sean Clarke

MIT research on seawater surface tension becomes international guideline
Work by Professor John Lienhard and Kishor Nayar SM ’14, PhD ’19 was recently recognized by the International Association for the Properties of Water and Steam.
Thu, 09 Jul 2020 | Mary Beth Gallagher | Department of Mechanical Engineering
https://news.mit.edu/2020/mit-seawater-surface-tension-research-becomes-international-guideline-0709

The property of water that enables a bug to skim the surface of a pond, or keeps a carefully placed paperclip floating on top of a cup of water, is known as surface tension. Understanding the surface tension of water is important in a wide range of applications, including heat transfer, desalination, and oceanography.
Although much is known about the surface tension of fresh water, very little was known about the surface tension of seawater — until recently.

In 2012, John Lienhard, the Abdul Latif Jameel Professor of Water and Mechanical Engineering, and then-graduate student Kishor Nayar SM ’14, PhD ’19 embarked on a research project to understand how the surface tension of seawater changes with temperature and salinity. Two years later, they published their findings in the Journal of Physical and Chemical Reference Data. This spring, the International Association for the Properties of Water and Steam (IAPWS) announced that it had adopted Lienhard and Nayar’s work as an international guideline. According to the IAPWS, the research “presents a correlation for the surface tension of seawater as a function of temperature and salinity.” The announcement marked the completion of eight years of work with dozens of collaborators from MIT and across the globe.

“This project grew out of my work in desalination. In desalination, you need to know about the surface tension of water because that affects how water travels through pores in a membrane,” explains Lienhard, a world-leading expert in desalination — the process by which salt water is treated to become potable fresh water.

Lienhard suggested that Nayar measure seawater’s surface tension and compare the results to the surface tension of pure water. As they would soon find out, getting reliable data from salt water would prove incredibly difficult. “We had thought originally that these experiments would be pretty simple to do, that we’d be done in a month or two. But as we started looking into it, we realized it was a much harder problem to tackle,” says Lienhard.

From the outset, Nayar hoped to gather data accurate enough to inform a property standard. Doing so would require measurement uncertainty below 1 percent.
“When you talk about property measurements, you need to be as accurate as possible,” explains Nayar. The first hurdle to achieving this level of accuracy was finding instrumentation capable of reliable measurements — something that turned out to be no easy feat.

Measuring surface tension

To measure the surface tension of water, Lienhard and Nayar teamed up with Gareth McKinley, professor of mechanical engineering, and then-graduate student Divya Panchanathan SM ’15, PhD ’18. They began with a device known as a Wilhelmy plate, which finds the surface tension by lowering a small platinum plate into a beaker of water and measuring the force the water exerts as the plate is raised.

Nayar and Panchanathan struggled to measure the surface tension of salt water at higher temperatures. “The issue we kept finding was once the temperature was above 50 degrees Celsius, the water in the beaker evaporated faster than we could take the measurements,” Nayar says. No off-the-shelf instrument would give them the data they needed — so Nayar turned to the MIT Hobby Shop. Using a lathe, he built a special lid for the beaker to keep vapor in. “The little lid Kishor built had accurately cut doors that allowed him to put a surface tension probe through the lid without letting water vapor get out,” explains Lienhard.

After making progress on obtaining data, the team suffered a massive setback. They found that barely visible salt scales, which formed on their test beaker over time, had introduced errors into their measurements. To get the most accurate values, they decided to use a fresh beaker for every single test. As a result, Nayar had to repeat nine months of work just before his master’s thesis was due. Fortunately, with the main problem identified and solved, the experiments could be repeated much faster, and Nayar was able to redo them on time.
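The Wilhelmy plate method converts the measured pull force into surface tension by dividing by the plate’s wetted perimeter; platinum is used because it is wetted almost completely, so the contact angle is taken as roughly zero. A sketch of that conversion, with assumed illustrative numbers (not the team’s actual readings):

```python
import math

def wilhelmy_surface_tension(force_mN, width_mm, thickness_mm,
                             contact_angle_deg=0.0):
    """Surface tension (mN/m) from the pull force on a Wilhelmy plate.
    The wetted perimeter is 2*(width + thickness); a platinum plate is
    assumed fully wetted (contact angle ~0 degrees)."""
    perimeter_m = 2.0 * (width_mm + thickness_mm) / 1000.0
    return force_mN / (perimeter_m * math.cos(math.radians(contact_angle_deg)))

# Illustrative: a 19.9 mm x 0.2 mm plate pulled with a force of 2.9 mN
sigma = wilhelmy_surface_tension(force_mN=2.9, width_mm=19.9, thickness_mm=0.2)
print(f"surface tension of sample: {sigma:.1f} mN/m")
```

The arithmetic also shows why the measurement is so sensitive: at these scales, sub-percent accuracy in surface tension demands sub-percent accuracy in a force of only a few millinewtons.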
The team measured surface tension in seawater ranging from room temperature to 90 degrees Celsius and at salinity levels ranging from pure water to four times the salinity of ocean water. They found that surface tension decreases by roughly 20 percent as water goes from room temperature toward boiling, while it increases as salinity increases. The team had unlocked the mystery of seawater surface tension. “It was literally the most technically challenging thing I had ever done,” Nayar recalls. Their data had an average deviation of 0.19 percent, with a maximum deviation of just 0.6 percent — well within the 1 percent bound needed for a guideline.

From master’s thesis to international guideline

Three years after completing his master’s thesis, Nayar, by then a PhD student, attended an IAPWS meeting in Kyoto, Japan. The IAPWS is a nonprofit international organization responsible for releasing standards on the properties of water and steam. There, Nayar met leaders in the field of water surface tension who had been struggling with the same issues he had faced. These contacts introduced him to the long, rigorous process of declaring something an international guideline.

The IAPWS had previously published standards on the properties of steam developed by the late Joseph Henry Keenan, professor and one-time department head of mechanical engineering at MIT. For the team to join Keenan as authors of an IAPWS standard, their data needed to be verified by measurements conducted by other researchers. After three years of working with the IAPWS, the team’s work was finally adopted as an international guideline.

For Nayar, who graduated with his PhD last year and is now a senior industrial water/wastewater engineer at the engineering consulting firm GHD, the guideline announcement made the long months of collecting data well worth it. “It felt like something getting completed,” he recalls.
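A correlation of the kind the guideline describes has the shape of a pure-water baseline multiplied by a small salinity-dependent factor. The sketch below uses the standard IAPWS correlation for the surface tension of pure water; the salinity coefficients are approximate values included for illustration only, not the authoritative guideline numbers.

```python
def sigma_pure_water(t_celsius):
    """IAPWS correlation for the surface tension of pure water, in mN/m.
    tau is the reduced distance from the critical temperature (647.096 K)."""
    tau = 1.0 - (t_celsius + 273.15) / 647.096
    return 235.8 * tau ** 1.256 * (1.0 - 0.625 * tau)

def sigma_seawater(t_celsius, salinity_g_kg):
    """Seawater surface tension (mN/m) as pure water times a salinity factor
    of the published form (1 + a1*S + a2*S*t). The coefficients below are
    approximate, for illustration only."""
    a1, a2 = 3.766e-4, 2.347e-6   # illustrative coefficients (assumed)
    factor = 1.0 + a1 * salinity_g_kg + a2 * salinity_g_kg * t_celsius
    return sigma_pure_water(t_celsius) * factor

print(f"pure water, 25 C: {sigma_pure_water(25):.1f} mN/m")
print(f"seawater (35 g/kg), 25 C: {sigma_seawater(25, 35):.1f} mN/m")
print(f"seawater (35 g/kg), 90 C: {sigma_seawater(90, 35):.1f} mN/m")
```

The two trends reported above fall out directly: the pure-water term drops substantially between room temperature and boiling, while the salinity factor nudges the value upward by a percent or two at ocean salinity.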
The findings that Nayar, Panchanathan, McKinley, and Lienhard reported back in 2014 are broadly applicable to a number of industries, according to Lienhard. “It’s certainly relevant for desalination work, but also for oceanographic problems such as capillary wave dynamics,” he explains. It also helps explain how small things — like a bug or a paperclip — can float on seawater.

Research by John Lienhard and Kishor Nayar to understand how the surface tension of seawater changes with temperature and salinity has now become an international standard. Photo courtesy of the Department of Mechanical Engineering

D-Lab moves online, without compromising on impact
With the campus shut down by Covid-19, the spring D-Lab class Water, Climate Change, and Health had to adapt.
Wed, 01 Jul 2020 | Jessie Hendricks | Environmental Solutions Initiative
https://news.mit.edu/2020/d-lab-moves-online-without-compromising-impact-0701

It’s not a typical sentence you’d find on a class schedule, but on April 2, the first action item for one MIT course read: “Check in on each other’s health and well-being.” The revised schedule was for Susan Murcott and Julie Simpson’s spring D-Lab class EC.719 / EC.789 (Water, Climate Change, and Health), just one of hundreds of classes at MIT that had to change course after the novel coronavirus sparked a campus-wide shutdown.

D-Lab at home

The dust had only begun to settle two weeks later, after a week of canceled classes followed by the established spring break, when students and professors reconvened in their new virtual classrooms. In Murcott and Simpson’s three-hour, once-a-week D-Lab class, the 20 students had completed only half of the subject’s 12 classes before the campus shut down. Those who could attend the six remaining classes would do so remotely for the first time in the five-year history of the class.
Typically, students would have gathered at D-Lab, an international design and development center next to the MIT Museum on Massachusetts Avenue in Cambridge, Massachusetts. Within the center, D-Lab provides project-based, hands-on learning for undergraduate and graduate students in collaboration with international non-governmental organizations, governments, and industry. Many of the projects involve design solutions for low-income countries around the world.

Murcott, an MIT lecturer who has worked with low-income populations for over 30 years in 25 countries, including Nepal and Ghana, was a natural fit to teach the class. Murcott’s background is in civil and environmental engineering, wastewater management, and climate. Her co-teacher, Research Engineer Julie Simpson of the Sea Grant College Program, has a PhD in coastal and marine ecology and a strong climate background.

“It’s typical to find courses in climate change and energy, climate change and policy, or maybe climate change and human behavior,” Murcott says. But when she first began planning her D-Lab subject, no class anywhere in the world married climate change and water.

Murcott and Simpson refer to the class as transdisciplinary. “[Transdisciplinary] is about having as broad a sample of humanity as you can teaching and learning together on the topics that you care about,” Murcott says. But transdisciplinary also means attracting a wide range of students from various walks of life, studying a variety of subjects. This spring, the class included undergraduates, graduate students, and young professionals from MIT, Wellesley College, and Harvard University, studying architecture, chemistry, mechanical engineering, biochemistry, microbiology, computer science, math, food and agriculture, law, and public health, plus a Knight Science Journalism at MIT Fellow.
After campus closed, these students scattered to locations across the country and the world, including France, Hong Kong, Rwanda, and South Korea. Student Sun Kim sent a five-page document with pictures to the class after returning to her home in South Korea, detailing her arrival in a Covid-19 world. Kim was tested in the airport after landing, given free room and board in a nearby hotel until she received her result (a “negative” result came back within eight hours), and quarantined in her parents’ house for two weeks, just in case she had picked up the virus during her travels. “I have been enjoying my Zoom classes during the wee hours of the night and sleeping during the day — ignoring the sunlight and pretending I am still in the U.S.,” Kim wrote. Future generation climate action plans Usually, the class has three or four field trips over the course of the semester, to places like the Blue Hill Meteorological Observatory, home of the longest climate record in the United States, and the Charles River Dam Infrastructure, which helps control flooding along Memorial Drive. With these physical trips closed off during the pandemic, Murcott and Simpson had to find new virtual spaces in which to convene. Four student teams took part in a climate change simulation using a program developed by Climate Interactive called En-ROADS, in which they were challenged to create scenarios that met the 1.5-degree-Celsius limit on global average temperature rise above pre-industrial levels set out in the 2015 Paris Agreement. Each team developed unique scenarios and managed to reach that target by adjusting energy options, agricultural and land-use practices, economic levers, and policy options. The teams then used their En-ROADS scenario planning findings to evaluate the climate action plans of Cambridge, Boston, and Massachusetts, with virtual visits from experts on the plans. 
They also evaluated MIT’s climate plan, which was written in 2015 and which will be updated by the end of this year. Students found that MIT has one of the least ambitious targets for reducing its greenhouse gas emissions compared to other institutions that the D-Lab class reviewed. Teams of students were then challenged to improve upon what MIT had done to date by coming up with their own future generation climate action plans. “I wanted them to find their voice,” says Murcott. Murcott co-chairs MIT’s Water Sustainability Working Group, an official committee designated to come up with a water plan for MIT. She and Simpson are now working with a subset of eight students from the class over the summer, together with the MIT Environmental Solutions Initiative, the MIT Office of Sustainability, and the Office of the Vice President for Research, to collaborate on a new water and climate action plan. Final projects The spring 2020 D-Lab final presentations were as diverse as the students’ fields of study. Over two Zoom sessions, teams and individual students presented a total of eight final projects. The first project aimed to lower the number of Covid-19 transmissions among Cambridge residents and update access to food programs in light of the pandemic. At the time of the presentation, Massachusetts had the third-highest reported number of cases of the new coronavirus. Students reviewed what was already being done in Cambridge and expanded on that with recommendations such as an assistive phone line for sick residents, an N95 mask exchange program, increased transportation for medical care, and lodging options for positive cases to prevent household transmission. Another team working on the Covid-19 project presented their recommendations to update the city’s food policy. 
They suggested programs to increase awareness of the Supplemental Nutrition Assistance Program (SNAP) and the Women, Infants, and Children program (WIC) through municipal mailings, help vendors at farmers markets enroll in SNAP/EBT so that users could purchase local produce and goods, and promote local community gardens to help with future food security. Another project proposed an extensive rainwater harvesting system for the Memorial Drive dormitories, which also have high photovoltaic potential; the nearby MIT recreational fields would benefit from self-sufficient rainwater irrigation driven by a solar-powered pump. Another student developed a machine learning method to detect and count the river herring that migrate into Boston each year by training a computer program to identify the fish using existing cameras installed by fish ladders. Student Lowry Yankwich wrote a long-form science journalism piece about the effect of climate change on local fisheries, and a team of three students created a six-unit climate change course called “Surviving and Thriving in the 21st Century” for upper-high-school to first-year college students. Two global water projects were presented. In the first, student Ade Dapo-Famodu compared a newly manufactured water test, the ECC Vial, to other leading global products that measure two major indicators of contaminated water: E. coli and coliforms. The second global water project was presented by the Butaro Water Project team of Carene Umubyeyi and Naomi Lutz. Their project is a collaboration between faculty and students at MIT, Tufts University, the University of Rwanda, and the University of Global Health Equity in Butaro, a small district in the northern part of Rwanda, where a number of villages lack access to safe drinking water. The end is just the beginning For many, the D-Lab projects aren’t just a semester-long endeavor. 
It’s typical for some D-Lab term projects to turn into a January Independent Activities Period project or a summer research or field project. Of the 20 students in the class, 10 are continuing to work on their term projects over the summer. Umubyeyi is Rwandan. Having returned home after the MIT shutdown, she will be coordinating the team’s design and construction of the village water system over the summer, with technical support from her teammate, Lutz, remotely from Illinois. The Future Generations Climate Action Planning process resulted in five students eager to take the D-Lab class work forward. They will be working with Jim Gomes, senior advisor in the Office of the Vice President, who is responsible for coordinating MIT’s 2020 Climate Action Plan, together with one other student intern, Grace Moore. The six-unit online course for teens, Surviving and Thriving in the 21st Century, is being taught by Clara Gervaise-Volaire and Gabby Cazares and will be live through July 3. Policy work on Covid-19 will continue with contacts in the Cambridge City Council. Finally, Lowry will be sending out his full-length article for publication and starting his next piece. “Students have done so well in the face of the MIT shutdown and coronavirus pandemic challenge,” says Murcott. “Scattered around the country and around the world, they have come together through this online D-Lab class to embrace MIT’s mission of ‘creating a better world.’ In the process, they have deepened themselves and are actively serving others in the process. What could be better in these hard times?” Lecturer Susan Murcott met many members of her EC.719 / EC.789 (Water, Climate Change, and Health) D-Lab class for the first time at the Boston climate strike on Sept. 20, 2019. 
Photo: Susan Murcott https://news.mit.edu/2020/mit-humanitiarian-supply-chain-lab-informs-fema-covid-19-supply-chain-risks-0618 The MIT Humanitarian Supply Chain Lab implements a rapid assessment process to inform policy. Thu, 18 Jun 2020 14:10:01 -0400 https://news.mit.edu/2020/mit-humanitiarian-supply-chain-lab-informs-fema-covid-19-supply-chain-risks-0618 Arthur Grau | Center for Transportation and Logistics Every corner of the globe has suffered from supply chain disruptions during the coronavirus pandemic. Beginning in January with a focus on manufacturing in China, the MIT Humanitarian Supply Chain Lab (HSCL) began providing evidence-based analysis to the U.S. Federal Emergency Management Agency (FEMA) to inform strategic planning around supply chain risks. By March, the focus turned to domestic food supply chains and freight markets in the United States so that FEMA could anticipate potential response scenarios. Through this engagement, HSCL developed a rapid vetting and publishing approach that aligned with the pace and volatility of the situation. HSCL is part of the Supply Chain Analysis Network (SCAN) — along with Dewberry, the Center for Naval Analyses, and American Logistics Aid Network — that supports FEMA Logistics Management Directorate during crisis activations. HSCL hosts three of the five team members who delivered 20 grocery sector and freight assessments over 10 weeks. Each assessment followed a week-long research and industry peer review process before delivery to FEMA. As an example, the Friday freight assessment begins on Monday as HSCL researchers develop hypotheses based on the data from several proprietary channels, publicly available media, and primary interviews with practitioners from the field. On Tuesday, the hypotheses are formalized into a written digest and subsequently shared on Wednesday with a group of private sector leaders. 
These professional volunteers, who are involved in food supply chains and freight movement, review and respond to the hypotheses. On Thursday, the HSCL team compiles further relevant data and industry feedback into a draft assessment. This assessment is circulated for final industry review on Friday, before sharing with FEMA. “Our process is as rigorous as possible given our near real-time engagement,” remarks Jarrod Goentzel, director of the HSCL. “Our aim is to synthesize evidence and organize ongoing peer review with our industry partners to provide strategic orientation for government decision-making.” Senior leaders use the ecosystem assessments to anticipate shortages and prepare emergency support. “We’re talking to retailers, shippers, and carriers to find out how the market is responding,” according to lab researcher Chelsey Graham. “We look at data such as loads tendered or rejected, wait times, freight volume, and ocean sailings that are triangulated with other economic data to develop evidence and ensure it aligns with the truth on the ground from factory floors to store shelves.” The complexity of ecosystem assessments may increase with the onset of the Atlantic summer hurricane season. Further natural disaster impacts would uniquely stress supply chains already fatigued and constrained by a relentless pandemic. The HSCL has a long history of working with government and industry during hurricane season, starting with volunteer efforts in 2017 and activations with SCAN in recent years. HSCL has also led efforts to reflect and improve public-private sector coordination during crises. In December 2017, MIT hosted a roundtable on “Supply Chain Resilience: Restoring Business Operations Following Hurricanes,” producing the earliest report on a very active hurricane season. 
In 2018, Goentzel and MIT Center for Transportation and Logistics (CTL) Director Yossi Sheffi were both invited by FEMA to deliver PrepTalks, broadcasts given by subject-matter experts to promote innovation in emergency management. The lab was contracted by the National Academies of Sciences, Engineering, and Medicine to support a recently released study “Strengthening Post-Hurricane Supply Chain Resilience,” based on findings from the 2017 hurricane season. The challenges of responding to disruptions have driven the development of novel approaches to research and assessment. The capabilities developed may prove invaluable as new crises arise during the Covid-19 pandemic, including a possible resurgence of the virus itself. Through these uniquely positioned engagements, HSCL is able to support decisions quickly during urgent and dynamic crises. The lab is still actively recruiting volunteer industry leaders for this ongoing effort. A suburban warehouse hub common to food supply chains in the United States https://news.mit.edu/2020/why-mediterranean-climate-change-hotspot-0617 MIT analysis uncovers the basis of the severe rainfall declines predicted by many models. Wed, 17 Jun 2020 09:55:48 -0400 https://news.mit.edu/2020/why-mediterranean-climate-change-hotspot-0617 David L. Chandler | MIT News Office Although global climate models vary in many ways, they agree on this: The Mediterranean region will be significantly drier in coming decades, potentially seeing 40 percent less precipitation during the winter rainy season. An analysis by researchers at MIT has now found the underlying mechanisms that explain the anomalous effects in this region, especially in the Middle East and in northwest Africa. 
The analysis could help refine the models and add certainty to their projections, which have significant implications for the management of water resources and agriculture in the region. The study, published last week in the Journal of Climate, was carried out by MIT graduate student Alexandre Tuel and professor of civil and environmental engineering Elfatih Eltahir. The different global circulation models of the Earth’s changing climate agree that temperatures virtually everywhere will increase, and in most places so will rainfall, in part because warmer air can carry more water vapor. However, “There is one major exception, and that is the Mediterranean area,” Eltahir says; it shows the greatest decline of projected rainfall of any landmass on Earth. “With all their differences, the models all seem to agree that this is going to happen,” he says, although they differ on the amount of the decline, ranging from 10 percent to 60 percent. But nobody had previously been able to explain why. Tuel and Eltahir found that this projected drying of the Mediterranean region is a result of the confluence of two different effects of a warming climate: a change in the dynamics of upper atmosphere circulation and a reduction in the temperature difference between land and sea. Neither factor by itself would be sufficient to account for the anomalous reduction in rainfall, but in combination the two phenomena can fully account for the unique drying trend seen in the models. The first effect is a large-scale phenomenon, related to powerful high-altitude winds called the midlatitude jet stream, which drive a strong, steady west-to-east weather pattern across Europe, Asia, and North America. 
Tuel says the models show that “one of the robust things that happens with climate change is that as you increase the global temperature, you’re going to increase the strength of these midlatitude jets.” But in the Northern Hemisphere, those winds run into obstacles, with mountain ranges including the Rockies, Alps, and Himalayas, and these collectively impart a kind of wave pattern onto this steady circulation, resulting in alternating zones of higher and lower air pressure. High pressure is associated with clear, dry air, and low pressure with wetter air and storm systems. But as the air gets warmer, this wave pattern gets altered. “It just happened that the geography of where the Mediterranean is, and where the mountains are, impacts the pattern of air flow high in the atmosphere in a way that creates a high pressure area over the Mediterranean,” Tuel explains. That high-pressure area creates a dry zone with little precipitation. However, that effect alone can’t account for the projected Mediterranean drying. That requires the addition of a second mechanism, the reduction of the temperature difference between land and sea. That difference, which helps to drive winds, will also be greatly reduced by climate change, because the land is warming up much faster than the seas. “What’s really different about the Mediterranean compared to other regions is the geography,” Tuel says. “Basically, you have a big sea enclosed by continents, which doesn’t really occur anywhere else in the world.” While models show the surrounding landmasses warming by 3 to 4 degrees Celsius over the coming century, the sea itself will only warm by about 2 degrees or so. “Basically, the difference between the water and the land becomes smaller with time,” he says. That, in turn, amplifies the pressure differential, adding to the high-pressure area that drives a clockwise circulation pattern of winds surrounding the Mediterranean basin. 
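The arithmetic behind the shrinking land-sea contrast can be sketched in a few lines. The baseline winter temperatures below are hypothetical round numbers chosen only for illustration; the warming increments are the ones quoted above (land warming by 3 to 4 degrees Celsius, the sea by about 2):

```python
# Illustrative sketch, not values from the study: how faster land warming
# shrinks the winter sea-land temperature contrast around the Mediterranean.

def winter_contrast(sea_temp_c: float, land_temp_c: float) -> float:
    """Sea-minus-land temperature difference; positive in winter,
    when the sea is warmer than the surrounding land."""
    return sea_temp_c - land_temp_c

# Hypothetical present-day winter means (assumed round numbers)
sea_now, land_now = 15.0, 8.0

# Warming increments from the article: land 3-4 C (take 3.5), sea about 2 C
sea_future = sea_now + 2.0
land_future = land_now + 3.5

now = winter_contrast(sea_now, land_now)
future = winter_contrast(sea_future, land_future)
print(f"winter sea-land contrast: {now:.1f} C -> {future:.1f} C")
# -> winter sea-land contrast: 7.0 C -> 5.5 C
```

Because the land warms roughly 1.5 degrees more than the sea, the winter contrast that helps drive regional winds narrows by that same 1.5 degrees, whatever the baseline temperatures are.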
And because of the specifics of local topography, projections show the two areas hardest hit by the drying trend will be northwest Africa, including Morocco, and the eastern Mediterranean region, including Turkey and the Levant. That trend is not just a projection, but has already become apparent in recent climate trends across the Middle East and western North Africa, the researchers say. “These are areas where we already detect declines in precipitation,” Eltahir says. It’s possible that these rainfall declines in an already parched region may even have contributed to the political unrest in the region, he says. “We document from the observed record of precipitation that this eastern part has already experienced a significant decline of precipitation,” Eltahir says. The fact that the underlying physical processes are now understood will help ensure that these projections are taken seriously by planners in the region, he says. It will provide much greater confidence, he says, by enabling them “to understand the exact mechanisms by which that change is going to happen.” Eltahir has been working with government agencies in Morocco to help them translate this information into concrete planning. “We are trying to take these projections and see what would be the impacts on availability of water,” he says. “That potentially will have a lot of impact on how Morocco plans its water resources, and also how they could develop technologies that could help them alleviate those impacts through better management of water at the field scale, or maybe through precision agriculture using higher technology.” The work was supported by the collaborative research program between Université Mohamed VI Polytechnique in Morocco and MIT. Global climate models agree that the Mediterranean area will be significantly drier, potentially seeing 40 percent less precipitation during the winter rainy season in the already parched regions of the Middle East and North Africa. 
https://news.mit.edu/2020/mit-based-startup-cambridge-crops-wraps-food-in-silk-0605 Cambridge Crops develops an edible, imperceptible coating that might replace plastic packaging to preserve meats and produce. Fri, 05 Jun 2020 14:35:01 -0400 https://news.mit.edu/2020/mit-based-startup-cambridge-crops-wraps-food-in-silk-0605 Archana Apte | Abdul Latif Jameel Water and Food Systems Lab Benedetto Marelli, assistant professor of civil and environmental engineering at MIT, was a postdoc at Tufts University’s Omenetto Lab when he stumbled upon a novel use for silk. Preparing for a lab-wide cooking competition whose one requirement was to incorporate silk into each dish, Marelli accidentally left a silk-dipped strawberry on his bench: “I came back almost one week later, and the strawberries that were coated were still edible. The ones that were not coated with silk were completely spoiled.” Marelli, whose previous research focused on the biomedical applications of silk, was stunned. “That opened up a new world for me,” he adds. Marelli viewed his inadvertent discovery as an opportunity to explore silk’s ability to address the issue of food waste. Marelli partnered with several Boston-based scientists, including Adam Behrens, then a postdoc in the lab of Institute Professor Robert Langer, to form Cambridge Crops. The company aims to iterate and expand on the initial discovery, using silk as its core ingredient to develop products that extend the shelf life of all sorts of perishable foods. The technology has broad impact, extending the shelf life of whole and cut produce, meats, fish, and other foods. With support from a startup competition and subsequent venture capital, Cambridge Crops is equipped to increase global access to fresh foods, improve supply chain efficiencies, and even enable new products altogether. A simple solution for a complex issue One-third of the global food supply is wasted annually, yet over 10 percent of the population faces hunger. 
Food waste has massive social, economic, and health implications that affect developed and developing countries alike. While many technologies have emerged that aim to extend the longevity of fresh foods, they often rely on genetic modification or environmentally harmful packaging materials, or are costly to implement. “So far, the majority of innovation in food- and ag-tech is based on genetic engineering, plant engineering, mechanical engineering, AI, and computer science. There’s a lot of room to innovate using material, like nanomaterials and biomaterials,” explains Marelli. The professor views technology like silk as an opportunity to mitigate many of the issues facing the food industry without changing the innate properties of the foods themselves. Silk’s strengths stem from the material’s natural simplicity, honed by millennia of evolution. Cambridge Crops utilizes a proprietary and efficient process using only water and salt to isolate and reform the silk’s natural protein. This makes Cambridge Crops’ silk coatings easy to integrate into existing food-processing lines without the need for costly new equipment or modifications. Once deposited on the surface of food, the silk coating forms a tasteless, odorless, and otherwise imperceptible barrier that slows down the food’s natural degradation mechanisms. Depending on the food item, the result can be up to a 200 percent increase in shelf life. Not only does that mean less food waste, but it also reduces the pressure on cold chains, allowing shippers to cut greenhouse gas emissions in transportation. Ties to MIT Cambridge Crops gained early industry traction after winning first place in the 2017 Rabobank-MIT Food and Agribusiness Innovation Prize, a competition for early-stage startups sponsored by Rabobank and the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) and supported by the student-run MIT Food and Agriculture club. 
The technical feedback and industry connections Cambridge Crops leveraged through its participation in the competition proved invaluable in identifying key pain points and market opportunities in the food industry that could be addressed through its core technology. “It was great for us,” explains CEO Adam Behrens. “[The prize] was important for doing technical validation in addition to forming early value propositions.” Cambridge Crops has since raised two rounds of financing, both led or co-led by The Engine, which helps incubate startups working on “tough tech.” These have been combined with awards from AgFunder and multiple Massachusetts Clean Energy Center grants. The initial successes even merited a mention in Bill Gates’ “Gates Notes” as a company tackling food waste naturally. Behrens maintains that investors’ contributions go beyond their strictly monetary value. “Our investors have been an integral part of our early stage success … adding value in all kinds of ways — from brand positioning to overall strategy.” Next steps Behrens and Marelli view Cambridge Crops’ technology as a true platform, reaching far beyond just that initial strawberry. Not only can the technology extend the shelf life of whole produce, but it also has a dramatic effect on cut produce, meats, fish, and processed foods. Cambridge Crops is leveraging its breadth of application to address the broader needs of the food industry through strategic partnerships. Cambridge Crops is optimistic about silk’s potential to mitigate many of the challenges facing complex food networks. “We think that our technology is one that can actually enable [the elimination of plastic food packaging],” adds Behrens. In the classroom, Marelli tries to instill a sense of excitement about technology’s role in the future of food and agriculture, such as in his Department of Civil and Environmental Engineering class, Materials in Agriculture, Food Security, and Food Safety. 
“They see an angle on agriculture and food science that they never thought about,” he explains, “and they see how much it can be a technology-driven sector.” As Cambridge Crops prepares for the commercial launch of its own patented technology, it is poised to tackle some of the most intractable obstacles facing global food networks to reduce waste and make nutritious foods more accessible to all. An edible silk-based coating developed by MIT Assistant Professor Benedetto Marelli can preserve food longer and prevent food waste. Marelli has teamed up with other Boston-based scientists to form Cambridge Crops, a spinout company using silk technologies to extend the shelf life of all sorts of perishable foods. Photos courtesy of Cambridge Crops. https://news.mit.edu/2020/towable-sensor-vertical-ocean-conditions-0520 Instrument may help scientists assess the ocean’s response to climate change. Wed, 20 May 2020 11:26:59 -0400 https://news.mit.edu/2020/towable-sensor-vertical-ocean-conditions-0520 Jennifer Chu | MIT News Office The motion of the ocean is often thought of in horizontal terms, for instance in the powerful currents that sweep around the planet, or the waves that ride in and out along a coastline. But there is also plenty of vertical motion, particularly in the open seas, where water from the deep can rise up, bringing nutrients to the upper ocean, while surface waters sink, sending dead organisms, along with oxygen and carbon, to the deep interior. Oceanographers use instruments to characterize the vertical mixing of the ocean’s waters and the biological communities that live there. But these tools are limited in their ability to capture small-scale features, such as the up- and down-welling of water and organisms over a small, kilometer-wide ocean region. 
Such features are essential for understanding the makeup of marine life that exists in a given volume of the ocean (such as in a fishery), as well as the amount of carbon that the ocean can absorb and sequester away. Now researchers at MIT and the Woods Hole Oceanographic Institution (WHOI) have engineered a lightweight instrument that measures both physical and biological features of the vertical ocean over small, kilometer-wide patches. The “ocean profiler,” named EcoCTD, is about the size of a waist-high model rocket and can be dropped off the back of a moving ship. As it free-falls through the water, its sensors measure physical features, such as temperature and salinity, as well as biological properties, such as the optical scattering of chlorophyll, the green pigment of phytoplankton. “With EcoCTD, we can see small-scale areas of fast vertical motion, where nutrients could be supplied to the surface, and where chlorophyll is carried downward, which tells you this could also be a carbon pathway. That’s something you would otherwise miss with existing technology,” says Mara Freilich, a graduate student in MIT’s Department of Earth, Atmospheric, and Planetary Sciences and the MIT-WHOI Joint Program in Oceanography/Applied Ocean Sciences and Engineering. Freilich and her colleagues have published their results today in the Journal of Atmospheric and Oceanic Technology. The paper’s co-authors are J. Thomas Farrar, Benjamin Hodges, Tom Lanagan, and Amala Mahadevan of WHOI, and Andrew Baron of Dynamic System Analysis, in Nova Scotia. The lead author is Mathieu Dever of WHOI and RBR, a developer of ocean sensors based in Ottawa. Ocean synergy Oceanographers use a number of methods to measure the physical properties of the ocean. Some of the more powerful, high-resolution instruments used are known as CTDs, for their ability to measure the ocean’s conductivity, temperature, and depth. 
CTDs are typically bulky, as they contain multiple sensors as well as components that collect water and biological samples. Conventional CTDs require a ship to stop as scientists lower the instrument into the water, sometimes via a crane system. The ship has to stay put as the instrument collects measurements and water samples, and can only get back underway after the instrument is hauled back onboard. Physical oceanographers who do not study ocean biology, and therefore do not need to collect water samples, can sometimes use “UCTDs” — underway versions of CTDs, without the bulky water sampling components, that can be towed as a ship is underway. These instruments can sample quickly since they do not require a crane or a ship to stop as they are dropped. Freilich and her team looked to design a version of a UCTD that could also incorporate biological sensors, all in a small, lightweight, towable package that would also keep the ship moving on course as it gathered its vertical measurements. “It seemed there could be straightforward synergy between these existing instruments, to design an instrument that captures physical and biological information, and could do this underway as well,” Freilich says. “Reaching the dark ocean” The core of the EcoCTD is the RBR Concerto Logger, a sensor that measures the temperature of the water, as well as the conductivity, which is a proxy for the ocean’s salinity. The profiler also includes a lead collar that provides enough weight to enable the instrument to free-fall through the water at about 3 meters per second — a rate that takes the instrument down to about 500 meters below the surface in about two minutes. “At 500 meters, we’re reaching the upper twilight zone,” Freilich says. “The euphotic zone is where there’s enough light in the ocean for photosynthesis, and that’s at about 100 to 200 meters in most places. 
So we’re reaching the dark ocean.” Another sensor, the EcoPuck, sets the EcoCTD apart from other UCTDs in that it measures the ocean’s biological properties. Specifically, it is a small, puck-shaped bio-optical sensor that emits two wavelengths of light — red and blue. The sensor captures any change in this light as it scatters back and as chlorophyll-containing phytoplankton fluoresce in response to the light. If the red light received resembles a certain wavelength characteristic of chlorophyll, scientists can deduce the presence of phytoplankton at a given depth. Variations in red and blue light scattered back to the sensor can indicate other matter in the water, such as sediments or dead cells — a measure of the amount of carbon at various depths. The EcoCTD includes another sensor not found in UCTDs, the Rinko III Do, which measures the oxygen concentration in the water, giving scientists an estimate of how much oxygen is being taken up by any microbial communities living at a given depth and parcel of water. Finally, the entire instrument is encased in a tube of aluminum and designed to attach via a long line to a winch at the back of a ship. As the ship is moving, a team can drop the instrument overboard and use the winch to pay the line out at a rate such that the instrument drops straight down, even as the ship moves away. After about two minutes, once it has reached a depth of about 500 meters, the team cranks the winch to pull the instrument back up, at a rate such that the instrument catches up to the ship within 12 minutes. 
The crew can then drop the instrument again, this time at some distance from their last dropoff point. “The nice thing is, by the time we go to the next cast, we’re 500 meters away from where we were the first time, so we’re exactly where we want to sample next,” Freilich says. They tested the EcoCTD on two cruises in 2018 and 2019, one to the Mediterranean and the other in the Atlantic, and in both cases were able to collect both physical and biological data at a higher resolution than existing CTDs. “The ecoCTD is capturing these ocean characteristics at a gold-standard quality with much more convenience and versatility,” Freilich says. The team will further refine its design, and hopes that its high-resolution, easily deployable, and more efficient alternative may be adopted both by scientists monitoring the ocean’s small-scale responses to climate change and by fisheries that want to keep track of a certain region’s biological productivity. This research was funded in part by the U.S. Office of Naval Research. Scientists prepare to deploy an underway CTD from the back deck of a research vessel. Image: Amala Mahadevan https://news.mit.edu/2020/mit-student-leaders-persevere-going-virtual-global-student-startup-competitions-0515 In the face of Covid-19, the MIT Water Club and the MIT Food and Agriculture Club take their signature innovation prizes online. Fri, 15 May 2020 10:00:01 -0400 https://news.mit.edu/2020/mit-student-leaders-persevere-going-virtual-global-student-startup-competitions-0515 Oona Gaffney | Abdul Latif Jameel Water and Food Systems Lab On April 22, the MIT Water Club hosted its annual Water Innovation Prize Pitch Night, the culminating event of a year-long international competition for student innovators seeking to launch water sector companies. This event, now in its sixth year, normally draws over 250 people to MIT’s campus to cheer on finalist teams from around the world as they compete for cash awards. 
Yet, six weeks before the event, when the Water Club would usually be finalizing logistics and collecting RSVPs, Covid-19 upended our world. At the same time that the Water Club’s student leaders were gearing up for their event, the MIT Food and Agriculture Club was in its own final stages of planning its annual pitch competition, the Rabobank-MIT Food and Agribusiness Innovation Prize. Now in its fifth year, this event is a national innovation competition for student startups spanning all aspects of the food system. For both clubs, these events are the largest and highest-profile of the year and provide important networking and professional development opportunities for finalist teams and attendees. Bringing signature MIT resilience and ingenuity, student leaders from both clubs persevered through physical distancing measures, successfully pivoting both events to virtual space. From shared disappointment to supportive action At the outset, both clubs’ leaders were very disappointed. Zhenya Karelina, a second-year MBA student at the MIT Sloan School of Management who is also the Food and Agriculture Club’s co-president and director of the Rabobank-MIT Prize, had been so excited to lead the Rabobank-MIT Prize. She “had this vision of what it would look like at the end,” but under the circumstances she “felt like [she] had to let it all go.” But cancelation simply wasn’t an option. As Erika Desmond, a first-year MBA student at MIT Sloan and vice president of growth for the MIT Water Club, puts it, “the first priority is making sure that the finalists still get the opportunity of getting their innovations out there and to compete for the prizes.” Zhenya’s initial disappointment quickly led to her realization that other MIT startup competition leaders must be feeling the same way. 
So, she started a Slack channel to connect with other student leadership teams who were dealing with similar losses and to collectively brainstorm what it could look like to take things virtual. “A lot of these MIT prizes are very similar, but we tend to run them in silos. This seemed, to me, to be a cool opportunity to learn from each other,” Zhenya reflects. The Slack group included leaders from the Clean Energy Prize, the 100K Prize, the Water Innovation Prize, and the Rabobank-MIT Food and Agribusiness Innovation Prize. “We were all in the same boat,” recalls Javier Renna, a sophomore MBA student at MIT Sloan who is one of the co-directors for the Water Innovation Prize. “I was amazed by the sense of community in saying, ‘We’re all trying to do the same thing’ and ‘What can we do to help each other out?’” New challenges and silver linings For any organization to pivot one of its biggest in-person events of the year online is no easy task. Inevitably, both the Food and Agriculture Club and the Water Club faced technical, strategic, and personal hurdles while organizing their online events. Both clubs loosely maintained the traditional format of each pitch event: keynotes, pitches by student teams, and Q&A with judges, immediately followed by deliberation and award announcements. One aspect that they struggled to replace, however, was in-person networking. When students and entrepreneurs from around the country gather for these events, networking is “one of the main value propositions,” says Desmond. As a replacement, the Water Club tried smaller virtual breakout sessions through Zoom, to mixed effect. Another huge challenge was overcoming the technology gaps. “I was the host of the webinar and I remember that it was very scary at first,” recalls Renna. 
“I had no idea how to run a webinar and I thought ‘how am I going to manage all the different stakeholders with people watching?’ It felt like a recipe for a disaster.” But after tapping a friend experienced in webinars, he managed to learn the ropes. “Once she started to explain it, I started to feel more comfortable,” Renna says. Ultimately, he was able to share his newfound knowledge with leaders of the Food and Agriculture Club, helping them to open up their webinar to the public. Overcoming initial roadblocks led to a shift in thinking for both teams. “The biggest thing for us was pivoting from looking at [going virtual] as a disadvantage … to how we could use it to our advantage,” Desmond recalls. For Karelina, shifting her mindset was key. “By the end … I could see how the virtual environment actually enables all these really cool opportunities that I hadn’t even thought about.” In fact, going online ended up revealing some key advantages. Among them was how the virtual events enabled the participation of a more diverse audience base, one that wouldn’t have been possible under normal circumstances. “Someone from Japan contacted me asking how they could watch the event,” Renna says. “We had people logging onto the Water Innovation Prize from Africa, the UK, the East Coast, the West Coast, Mexico, and more!” Significant startup support for five winning teams continues Despite all the changes, the energy and creativity of the diverse group of participating student entrepreneurs was palpable as they competed for cash awards. The two clubs together awarded $75,000 across five winning teams. In fact, the Water Club was able to increase the total prize amounts for its competitors by diverting money saved from other event cancelations. So, the increased award of $25,000 came as a pleasant surprise for Blue Tap, the team winning first place in the Water Innovation Prize. 
This team, based out of the University of Cambridge, uses 3D printing technologies to bring affordable clean water to the developing world. They have focused development of their main product, a simple and cost-effective chlorine injector, in Uganda. Their work there has also involved community development as they have partnered with over 30 plumbers to train them in water treatment practices and entrepreneurship. Runners-up in the Water Innovation Prize included second-place winner Floe. The team, from Yale University, was awarded $12,500 to further the development of their system that prevents ice buildup on roofs. Ice buildup affects nearly 62 million buildings across the United States every year and can lead to serious structural damage. An MIT team, Harmony Water, took home a third-place prize of $7,500 to support their continued research and development of a low-cost water desalination system that can produce more water and less brine using 30 percent less energy than present methods. Eight teams competed for the MIT-Rabobank Food and Agribusiness Innovation Prize, which awarded two top prizes totaling $30,000. MotorCortex, a student team out of Carnegie Mellon, won the first-place prize of $20,000 with advanced robotics technology that could change the future of the fruit packing industry. The group has developed an algorithm to guide robotic arms in food packing plants that optimizes “pick-up-points” on delicate fruits like avocados and apples. Varying shapes and sizes of individual fruits have historically made automation of the industry a particularly difficult challenge — until now. Their invention could potentially cut fruit packing costs in half as their robotic arms would replace human laborers — in low-wage, high-turnover positions — and increase packing efficiency. 
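The article does not describe MotorCortex’s algorithm in detail, so the following is only a toy sketch of the kind of geometric problem involved: given the varying shape of an individual fruit, choose the surface point best suited to a (hypothetical) suction-style gripper — here, the point whose outward normal points most nearly straight up. The ellipsoid fruit model and all dimensions are invented for illustration.

```python
import math

# Toy illustration only — NOT MotorCortex's algorithm. Model a fruit as an
# axis-aligned ellipsoid and pick the surface point whose outward normal has
# the largest upward (+z) component, a plausible target for a suction gripper.

def ellipsoid_points(a, b, c, n=200):
    """Sample n points spread roughly evenly over an ellipsoid surface."""
    golden_angle = math.pi * (3.0 - math.sqrt(5.0))
    pts = []
    for i in range(n):
        theta = math.acos(1.0 - 2.0 * (i + 0.5) / n)  # polar angle, pole to pole
        phi = i * golden_angle                        # well-spread azimuths
        pts.append((a * math.sin(theta) * math.cos(phi),
                    b * math.sin(theta) * math.sin(phi),
                    c * math.cos(theta)))
    return pts

def best_pick_point(points, a, b, c):
    """Return the point whose unit outward normal is closest to vertical."""
    def upness(p):
        x, y, z = p
        nx, ny, nz = x / a**2, y / b**2, z / c**2     # ellipsoid normal direction
        return nz / math.sqrt(nx * nx + ny * ny + nz * nz)
    return max(points, key=upness)

a, b, c = 3.0, 3.5, 4.0   # avocado-ish half-axes in cm (invented values)
px, py, pz = best_pick_point(ellipsoid_points(a, b, c), a, b, c)
print(f"grasp near ({px:.2f}, {py:.2f}, {pz:.2f})")  # a point near the top pole
```

A real system would work from a scanned point cloud rather than an analytic shape, and would also weigh bruising risk and gripper reachability; the sketch only shows the surface-normal criterion.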
In second place, winning $10,000, was Antithesis Foods, a team from Cornell University using high-protein chickpeas and a novel processing technology to produce healthier chocolate snacks. Their garbanzo bean-based product, Grabanzos, was all set for rollout prior to Covid-19. However, the sudden shuttering of production facilities, storefronts, and campuses has greatly hindered their progress. The startup will now use their prize to pivot their original business plan to an online sales platform. Innovation prize sponsors inspired by student resilience The main sponsor of the food prize is Rabobank, a global financial services leader in the food, agribusiness, and beverage industries. Rabobank executives working with members of the Food and Agriculture Club were impressed by the students’ resilience and drive. Throughout the past months, Jennifer Jiang worked closely with the club. As vice president of strategy and business development at Rabobank, she reflects that she has been “inspired by the creativity and novel thinking of the team to run an event that gave viewers and participants alike an energy that so closely resembled that of an in-person event.” MIT’s Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) serves as a mentor to both teams in the production of these innovation prizes, and is also a co-sponsor. Working day-to-day with the students, J-WAFS saw this resilience firsthand. Each year the prizes grow in participation and success, and despite the unprecedented challenges of physical distancing and other measures over the last few months, the students produced thoughtful, engaging events. “We were again delighted by the dedication, creativity, and achievements that students from MIT and across the country bring to challenges in the food and agriculture sectors,” says J-WAFS Director John H. Lienhard V. 
The students’ perseverance in the face of adversity demonstrated their commitment to see these impactful competitions through to their end, as well as to advancing solutions to global water and food challenges. As we move forward through these challenging times, we can look to the collaborative spirit, commitment, and drive of these young water and food leaders as inspiration. Francesca O’Hanlon of Blue Tap delivered a pitch at the Water Innovation Prize on April 22. Her team won first place with a novel water chlorinating system that makes clean water affordable and accessible for users without existing access to healthy water supplies. Photo: Andi Sutton/J-WAFS https://news.mit.edu/2020/plant-precision-injection-orange-olive-banana-0427 Microneedles made of silk-based material can target plant tissues for delivery of micronutrients, hormones, or genes. Sun, 26 Apr 2020 23:59:59 -0400 https://news.mit.edu/2020/plant-precision-injection-orange-olive-banana-0427 David L. 
Chandler | MIT News Office While the human world is reeling from one pandemic, there are several ongoing epidemics that affect crops and put global food production at risk. Oranges, olives, and bananas are already under threat in many areas due to diseases that affect plants’ circulatory systems and that cannot be treated by applying pesticides. A new method developed by engineers at MIT may offer a starting point for delivering life-saving treatments to plants ravaged by such diseases. These diseases are difficult to detect early and to treat, given the lack of precision tools to access plant vasculature to treat pathogens and to sample biomarkers. The MIT team decided to take some of the principles involved in precision medicine for humans and adapt them to develop plant-specific biomaterials and drug-delivery devices. The method uses an array of microneedles made of a silk-based biomaterial to deliver nutrients, drugs, or other molecules to specific parts of the plant. The findings are described in the journal Advanced Science, in a paper by MIT professors Benedetto Marelli and Jing-Ke Weng, graduate student Yunteng Cao, postdoc Eugene Lim at MIT, and postdoc Menglong Xu at the Whitehead Institute for Biomedical Research. The microneedles, which the researchers call phytoinjectors, can be made in a variety of sizes and shapes, and can deliver material specifically to a plant’s roots, stems, or leaves, or into its xylem (the vascular tissue involved in water transportation from roots to canopy) or phloem (the vascular tissue that circulates metabolites throughout the plant). In lab tests, the team used tomato and tobacco plants, but the system could be adapted to almost any crop, they say. The microneedles can not only deliver targeted payloads of molecules into the plant, but they can also be used to take samples from the plants for lab analysis. The work started in response to a request from the U.S. 
Department of Agriculture for ideas on how to address the citrus greening crisis, which is threatening the collapse of a $9 billion industry, Marelli says. The disease is spread by an insect called the Asian citrus psyllid that carries a bacterium into the plant. There is as yet no cure for it, and millions of acres of U.S. orchards have already been devastated. In response, Marelli’s lab swung into gear to develop the novel microneedle technology, led by Cao as his thesis project. The disease infects the phloem of the whole plant, including roots, which are very difficult to reach with any conventional treatment, Marelli explains. Most pesticides are simply sprayed or painted onto a plant’s leaves or stems, and little if any penetrates to the root system. Such treatments may appear to work for a short while, but then the bacteria bounce back and do their damage. What is needed is something that can target the phloem circulating through a plant’s tissues, which could carry an antibacterial compound down into the roots. That’s just what some version of the new microneedles could potentially accomplish, he says. “We wanted to solve the technical problem of how you can have a precise access to the plant vasculature,” Cao adds. This would allow researchers to inject pesticides, for example, that would be transported between the root system and the leaves. Present approaches use “needles that are very large and very invasive, and that results in damaging the plant,” he says. To find a substitute, they built on previous work that had produced microneedles using silk-based material for injecting human vaccines. “We found that adaptations of a material designed for drug delivery in humans to plants was not straightforward, due to differences not only in tissue vasculature, but also in fluid composition,” Lim says. 
The microneedles designed for human use were intended to biodegrade naturally in the body’s moisture, but plants have far less available water, so the material didn’t dissolve and was not useful for delivering the pesticide or other macromolecules into the phloem. The researchers had to design a new material, but they decided to stick with silk as its basis. That’s because of silk’s strength, its inertness in plants (preventing undesirable side effects), and the fact that it degrades into tiny particles that don’t risk clogging the plant’s internal vasculature systems. They used biotechnology tools to increase silk’s hydrophilicity (making it attract water), while keeping the material strong enough to penetrate the plant’s epidermis and degradable enough to then get out of the way. Sure enough, they tested the material on their lab tomato and tobacco plants, and were able to observe injected materials, in this case fluorescent molecules, moving all the way through the plant, from roots to leaves. “We think this is a new tool that can be used by plant biologists and bioengineers to better understand transport phenomena in plants,” Cao says. In addition, it can be used “to deliver payloads into plants, and this can solve several problems. 
For example, you can think about delivering micronutrients, or you can think about delivering genes, to change the gene expression of the plant or to basically engineer a plant.” “Now, the interests of the lab for the phytoinjectors have expanded beyond antibiotic delivery to genetic engineering and point-of-care diagnostics,” Lim adds. For example, in their experiments with tobacco plants, they were able to inject an organism called Agrobacterium to alter the plant’s DNA — a typical bioengineering tool, but delivered in a new and precise way. So far, this is a lab technique using precision equipment, so in its present form it would not be useful for agricultural-scale applications, but the hope is that it can be used, for example, to bioengineer disease-resistant varieties of important crop plants. The team has also done tests using a modified toy dart gun mounted to a small drone, which was able to fire microneedles into plants in the field. Ultimately, such a process might be automated using autonomous vehicles, Marelli says, for agricultural-scale use. Meanwhile, the team continues to work on adapting the system to the varied needs and conditions of different kinds of plants and their tissues. “There’s a lot of variation among them, really,” Marelli says, “so you need to think about having devices that are plant-specific. For the future, our research interests will go beyond antibiotic delivery to genetic engineering and point-of-care diagnostics based on metabolite sampling.” The work was supported by the Office of Naval Research, the National Science Foundation, and the Keck Foundation. A microinjection device (red) is attached to a citrus tree, providing a way of injecting pesticide or other materials directly into the plant’s circulatory system. 
Images: Courtesy of the researchers https://news.mit.edu/2020/staring-into-ocean-atmospheric-vortex-0319 MIT researchers describe factors governing how oceans and atmospheres move heat around on Earth and other planetary bodies. Thu, 19 Mar 2020 14:00:01 -0400 https://news.mit.edu/2020/staring-into-ocean-atmospheric-vortex-0319 EAPS Imagine a massive mug of cold, dense cream with hot coffee poured on top. Now place it on a rotating table. Over time, the fluids will slowly mix into each other, and heat from the coffee will eventually reach the bottom of the mug. But as most of us impatient coffee drinkers know, stirring the layers together is a more efficient way to distribute the heat and enjoy a beverage that’s not scalding hot or ice cold. The key is the swirls, or vortices, that form in the turbulent liquid. “If you just waited to see whether molecular diffusion did it, it would take forever and you’ll never get your coffee and milk together,” says Raffaele Ferrari, Cecil and Ida Green Professor of Oceanography in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). This analogy helps explain a new theory on the intricacies of the climate system on Earth — and other rotating planets with atmospheres and/or oceans — outlined in a recent PNAS paper by Ferrari and Basile Gallet, an EAPS visiting researcher from Service de Physique de l’Etat Condensé, CEA Saclay, France. It may seem intuitive that Earth’s sun-baked equator is hot while the relatively sun-deprived poles are cold, with a gradient of temperatures in between. However, the actual span of that temperature gradient is relatively small compared to what it might otherwise be because of the way the Earth system physically transports heat around the globe to cooler regions, moderating the extremes. Otherwise, “you would have unbearably hot temperatures at the equator and [the temperate latitudes] would be frozen,” says Ferrari. 
“So, the fact that the planet is habitable, as we know it, has to do with heat transport from the equator to the poles.” Yet, despite the importance of global heat flux for maintaining the contemporary climate of Earth, the mechanisms that drive the process are not completely understood. That’s where Ferrari and Gallet’s recent work comes in: their research lays out a mathematical description of the physics underpinning the role that marine and atmospheric vortices play in redistributing that heat in the global system. Ferrari and Gallet’s work builds on that of another MIT professor, the late meteorologist Norman Phillips, who, in 1956, proposed a set of equations, the “Phillips model,” to describe global heat transport. Phillips’ model represents the atmosphere and ocean as two layers of different density on top of each other. While these equations capture the development of turbulence and predict the distribution of temperature on Earth with relative accuracy, they are still very complex and need to be solved with computers. The new theory from Ferrari and Gallet provides analytical solutions to the equations and quantitatively predicts local heat flux, energy powering the eddies, and large-scale flow characteristics. And their theoretical framework is scalable, meaning it works for eddies, which are smaller and denser in the ocean, as well as for cyclones in the atmosphere, which are larger. Setting the process in motion The physics behind vortices in your coffee cup differ from those in nature. Fluid media like the atmosphere and ocean are characterized by variations in temperature and density. On a rotating planet, these variations accelerate strong currents, while friction — on the bottom of the ocean and atmosphere — slows them down. This tug of war results in instabilities of the flow of large-scale currents and produces irregular turbulent flows that we experience as ever-changing weather in the atmosphere. 
Vortices — closed circular flows of air or water — are born of this instability. In the atmosphere, they’re called cyclones and anticyclones (the weather patterns); in the ocean they’re called eddies. In both cases, they are transient, ordered formations, emerging somewhat erratically and dissipating over time. As they spin out of the underlying turbulence, they, too, are hindered by friction, causing their eventual dissipation, which completes the transfer of heat from the equator (the top of the hot coffee) to the poles (the bottom of the cream). Zooming out to the bigger picture While the Earth system is much more complex than two layers, analyzing heat transport in Phillips’ simplified model helps scientists resolve the fundamental physics at play. Ferrari and Gallet found that the heat transport due to vortices, though directionally chaotic, ends up moving heat to the poles faster than a more smooth-flowing system would. According to Ferrari, “vortices do the dog work of moving heat, not disorganized motion (turbulence).” It would be impossible to mathematically account for every single eddy feature that forms and disappears, so the researchers developed simplified calculations to determine the overall effects of vortex behavior, based on latitude (temperature gradient) and friction parameters. Additionally, they considered each vortex as a single particle, like a molecule in a gas. When they incorporated their calculations into the existing models, the resulting simulations predicted Earth’s actual temperature regimes fairly accurately, and revealed that both the formation and function of vortices in the climate system are much more sensitive to frictional drag than anticipated. Ferrari emphasizes that all modeling endeavors require simplifications and aren’t perfect representations of natural systems — as in this instance, with the atmosphere and oceans represented as simple two-layer systems and the sphericity of the Earth left out. 
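To get a quantitative feel for vortices doing the “dog work” of moving heat: a standard way to parameterize particle-like eddies is as down-gradient diffusion, with an eddy diffusivity set by an eddy’s swirl speed and size. This minimal sketch uses textbook, assumed numbers — not values from the Gallet and Ferrari paper — to show that such eddies can carry a climate-relevant poleward heat flux.

```python
# Minimal sketch of a down-gradient eddy heat-flux estimate:
#   poleward flux ≈ -kappa * dT/dy, with eddy diffusivity kappa ≈ U * L.
# All numbers are illustrative, order-of-magnitude assumptions.

U = 0.1              # eddy swirl speed, m/s (typical ocean eddy, assumed)
L = 50e3             # eddy size, m (assumed ~50 km radius)
dT_dy = -20 / 1.0e7  # ~20 K equator-to-pole drop over ~10,000 km, in K/m

kappa = U * L                # eddy diffusivity, m^2/s
flux = -kappa * dT_dy        # kinematic poleward heat flux, K m/s
rho_cp = 4.0e6               # seawater heat capacity per volume, J/(m^3 K)
heat_flux = rho_cp * flux    # W per m^2 of vertical cross-section

section_area = 1.0e3 * 2.0e7     # ~1 km deep by ~20,000 km wide section, m^2
total_pw = heat_flux * section_area / 1e15  # total transport in petawatts

print(f"kappa          = {kappa:.0f} m^2/s")
print(f"heat flux      = {heat_flux:.0f} W/m^2")
print(f"ocean transport ~ {total_pw:.1f} PW")
```

The result lands near a petawatt, the right order of magnitude for observed poleward ocean heat transport, which is why even crude eddy diffusivities are useful sanity checks on more complete theories.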
Even with these drawbacks, Gallet and Ferrari’s theory has gotten the attention of other oceanographers. “Since 1956, meteorologists and oceanographers have tried, and failed, to understand this Phillips model,” says Bill Young, professor of physical oceanography at Scripps Institution of Oceanography. “The paper by Gallet and Ferrari is the first successful deductive prediction of how the heat flux in the Phillips model varies with temperature gradient.” Ferrari says that answering fundamental questions of how heat transport functions will allow scientists to more generally understand the Earth’s climate system. For instance, in Earth’s deep past, there were times when our planet was much warmer, when crocodiles swam in the Arctic and palm trees stretched up into Canada, and also times when it was much colder and the mid-latitudes were covered in ice. “Clearly heat transfer can change across different climates, so you’d like to be able to predict it,” he says. “It’s been a theoretical question on the minds of people for a long time.” As the average global temperature has increased more than 1 degree Celsius in the past 100 years, and is on pace to far exceed that in the next century, the need to understand — and predict — Earth’s climate system has become crucial as communities, governments, and industry adapt to the current changing environment. “I find it extremely rewarding to apply the fundamentals of turbulent flows to such a timely issue,” says Gallet. “In the long run, this physics-based approach will be key to reducing the uncertainty in climate modelling.” Following in the footsteps of meteorology giants like Norman Phillips, Jule Charney, and Peter Stone, who developed seminal climate theories at MIT, this work too adheres to an admonition from Albert Einstein: “Out of clutter, find simplicity.” This visualization shows the Gulf Stream’s sea surface currents and temperatures.
Image: MIT/JPL project entitled Estimating the Circulation and Climate of the Ocean, Phase II (ECCO2) https://news.mit.edu/2020/ethylene-sensor-food-waste-0318 Monitoring the plant hormone ethylene could reveal when fruits and vegetables are about to spoil. Wed, 18 Mar 2020 08:00:00 -0400 https://news.mit.edu/2020/ethylene-sensor-food-waste-0318 Anne Trafton | MIT News Office As flowers bloom and fruits ripen, they emit a colorless, sweet-smelling gas called ethylene. MIT chemists have now created a tiny sensor that can detect this gas in concentrations as low as 15 parts per billion, which they believe could be useful in preventing food spoilage. The sensor, which is made from semiconducting cylinders called carbon nanotubes, could be used to monitor fruit and vegetables as they are shipped and stored, helping to reduce food waste, says Timothy Swager, the John D. MacArthur Professor of Chemistry at MIT. “There is a persistent need for better food management and reduction of food waste,” says Swager. “People who transport fruit around would like to know how it’s doing during transit, and whether they need to take measures to keep ethylene down while they’re transporting it.” In addition to its natural role as a plant hormone, ethylene is also the world’s most widely manufactured organic compound and is used to manufacture products such as plastics and clothing. A detector for ethylene could also be useful for monitoring this kind of industrial ethylene manufacturing, the researchers say. Swager is the senior author of the study, which appears today in the journal ACS Central Science. MIT postdoc Darryl Fong is the lead author of the paper, and MIT graduate student Shao-Xiong (Lennon) Luo and visiting scholar Rafaela Da Silveira Andre are also authors. Ripe or not Ethylene is produced by most plants, which use it as a hormone to stimulate growth, ripening, and other key stages of their life cycle.
Bananas, for instance, produce increasing amounts of ethylene as they ripen and turn brown, and flowers produce it as they get ready to bloom. Produce and flowers under stress can overproduce ethylene, leading them to ripen or wilt prematurely. It is estimated that every year U.S. supermarkets lose about 12 percent of their fruits and vegetables to spoilage, according to the U.S. Department of Agriculture. In 2012, Swager’s lab developed an ethylene sensor containing arrays of tens of thousands of carbon nanotubes. These carbon cylinders allow electrons to flow along them, but the researchers added copper atoms that slow down the electron flow. When ethylene is present, it binds to the copper atoms and slows down electrons even more. Measuring this slowdown can reveal how much ethylene is present. However, this sensor can only detect ethylene levels down to 500 parts per billion, and because the sensors contain copper, they are likely to eventually become corroded by oxygen and stop working. “There still is not a good commercial sensor for ethylene,” Swager says. “To manage any kind of produce that’s stored long-term, like apples or potatoes, people would like to be able to measure its ethylene to determine if it’s in a stasis mode or if it’s ripening.” Swager and Fong created a new kind of ethylene sensor that is also based on carbon nanotubes but works by an entirely different mechanism, known as Wacker oxidation. Instead of incorporating a metal such as copper that binds directly to ethylene, they used a metal catalyst called palladium that adds oxygen to ethylene during a process called oxidation. As the palladium catalyst performs this oxidation, the catalyst temporarily gains electrons. Palladium then passes these extra electrons to carbon nanotubes, making them more conductive.
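That conductance rise is the measurable signal. A minimal readout sketch follows, assuming a made-up linear calibration constant; the real device’s response curve comes from calibration against known ethylene concentrations, and none of the numbers below are from the paper.

```python
# Hypothetical readout sketch for a chemiresistive sensor: the
# palladium/nanotube film becomes more conductive when ethylene is
# oxidized, so the fractional conductance change tracks concentration.
# The linear "sensitivity" constant is invented for illustration.

def relative_response(g_baseline, g_exposed):
    """Fractional conductance change (G - G0) / G0 upon exposure."""
    return (g_exposed - g_baseline) / g_baseline

def ethylene_ppb(response, sensitivity_per_ppb=2.0e-4):
    """Invert an assumed linear calibration: response = sensitivity * ppb."""
    return response / sensitivity_per_ppb

# A 0.3 percent conductance rise maps, under this made-up calibration,
# to roughly the 15 ppb scale the article mentions.
resp = relative_response(g_baseline=1.0e-5, g_exposed=1.003e-5)  # siemens
print(round(ethylene_ppb(resp), 1))
```

Working with the fractional change (G - G0)/G0 rather than raw conductance is the usual design choice for chemiresistors, since it cancels device-to-device variation in the baseline film.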
By measuring the resulting change in current flow, the researchers can detect the presence of ethylene. The sensor responds to ethylene within a few seconds of exposure, and once the gas is gone, the sensor returns to its baseline conductivity within a few minutes. “You’re toggling between two different states of the metal, and once ethylene is no longer there, it goes from that transient, electron-rich state back to its original state,” Fong says. “The repurposing of the Wacker oxidation catalytic system for ethylene detection was an exceptionally clever and fundamentally interdisciplinary idea,” says Zachary Wickens, an assistant professor of chemistry at the University of Wisconsin, who was not involved in the study. “The research team drew upon recent modifications to the Wacker oxidation to provide a robust catalytic system and incorporated it into a carbon nanotube-based device to provide a remarkably selective and simple ethylene sensor.” In bloom To test the sensor’s capabilities, the researchers deposited the carbon nanotubes and other sensor components onto a glass slide. They then used it to monitor ethylene production in two types of flowers — carnations and purple lisianthus. They measured ethylene production over five days, allowing them to track the relationship between ethylene levels and the plants’ flowering. In their studies of carnations, the researchers found that there was a rapid spike in ethylene concentration on the first day of the experiment, and the flowers bloomed shortly after that, all within a day or two. Purple lisianthus flowers showed a more gradual increase in ethylene that started during the first day and lasted until the fourth day, when it started to decline. Correspondingly, the flowers’ blooming was spread out over several days, and some still hadn’t bloomed by the end of the experiment. The researchers also studied whether the plant food packets that came with the flowers had any effect on ethylene production.
They found that plants given the food showed slight delays in ethylene production and blooming, but the effect was not significant (only a few hours). The MIT team has filed for a patent on the new sensor. The research was funded by the National Science Foundation, the U.S. Army Engineer Research and Development Center Environmental Quality Technology Program, the Natural Sciences and Engineering Research Council of Canada, and the São Paulo Research Foundation. To test the new sensor’s capabilities, the researchers deposited the carbon nanotubes and other sensor components onto a glass slide. They then used it to monitor ethylene production in two types of flowers — red carnations and purple lisianthus. Image: Stock imagery edited by MIT News https://news.mit.edu/2020/wave-power-coastal-erosion-0316 The average power of waves hitting a coastline can predict how fast that coast will erode. Mon, 16 Mar 2020 00:00:00 -0400 https://news.mit.edu/2020/wave-power-coastal-erosion-0316 Jennifer Chu | MIT News Office Over millions of years, Hawaiian volcanoes have formed a chain of volcanic islands stretching across the Northern Pacific, where ocean waves from every direction, stirred up by distant storms or carried in on tradewinds, have battered and shaped the islands’ coastlines to varying degrees. Now researchers at MIT and elsewhere have found that, in Hawaii, the amount of energy delivered by waves averaged over each year is a good predictor of how fast or slow a rocky coastline will erode. If waves are large and frequent, the coastline will erode faster, whereas smaller, less frequent waves will result in a slower-eroding coast. Their study helps to explain the Hawaiian Islands’ meandering shorelines, where north-facing sea cliffs, experiencing larger waves produced by distant storms and persistent tradewinds, have eroded farther inland.
In contrast, south-facing coasts typically enjoy calmer waters, smaller waves, and therefore less eroded coasts. The results, published this month in the journal Geology, can also help scientists forecast how fast other rocky coasts around the world might erode, based on the power of the waves that a coast typically experiences. “Over half of the world’s oceanic coastlines are rocky sea cliffs, so sea-cliff erosion affects a lot of coastal inhabitants and infrastructure,” says Kim Huppert ’11, PhD ’17, lead author of the study and a former graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “If storminess increases with climate change, and waves get bigger, we need to understand specifically how waves affect erosion.” Huppert, who is now a senior research scientist at the German Research Center for Geosciences, has co-authored the paper with Taylor Perron, professor of earth, atmospheric, and planetary sciences and associate department head at MIT, and Andrew Ashton of the Woods Hole Oceanographic Institution. Sink and carve Scientists have had some idea that the rate of coastal erosion depends on the power of the waves that act on that coast. But until now, there’s been no systematic study to confirm this relationship, mainly because there can be so many other factors contributing to coastal erosion that can get in the way. The team found that the Hawaiian Islands provide an ideal environment in which to study this relationship: The islands are all made from the same type of bedrock, meaning they wouldn’t have to account for multiple types of rock and sediment and their differences in erosion; and the islands inhabit a large oceanic basin that produces a wide range of wave “climates,” or waves of varying sizes and frequencies. “As you go around the shoreline of different islands, you see very different wave climates, simply by turning a corner of the island,” Huppert notes. “And the rock type is all the same.
So Hawaii is a nice natural laboratory.” The researchers focused their study on 11 coastal locations around the islands of Hawaii, Maui, and Kaho‘olawe, each facing different regions of the Pacific that produce varying sizes and frequencies of waves. Before considering the wave power at these various locations, they first worked to estimate the average rate at which the sea cliffs at each coastal location eroded over the last million years. The team sought to identify the erosion rates that produced the coastal profiles of Hawaiian Islands today, given the islands’ original profiles, which can be estimated from each island’s topography. To do this, they first had to account for changes in each island’s vertical motion and sea level change over time. After a volcanic island forms, it inevitably starts to subside, or sink under its own weight. As an island sinks, the level at which the sea interacts with the island changes, just as if you were to lower yourself into a pool: The water’s surface may start at your ankles, and progressively lap at your knees, your waist, and eventually your shoulders and chin. For an island, the more slowly it sinks, the more time the sea has to carve out the coastline at a particular elevation. In contrast, if an island sinks quickly, the sea has only fleeting time to cut into the coast before the island subsides further, exposing a new coastline for the sea to wear away. As a result, the rate at which an island sinks strongly affects how far the coast has retreated inland at any given elevation, over millions of years. To calculate the speed of island sinking, the team used a model to estimate how much the lithosphere, the outermost layer of the Earth on which volcanic islands sit, sagged under the weight of each Hawaiian volcano formed in the past million years. 
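The pool analogy can be turned into a toy calculation: the slower an island sinks, the longer the sea occupies any given elevation band, and the farther a cliff eroding at a fixed rate retreats at that elevation. All rates below are hypothetical, chosen only to make the contrast visible; the study’s actual flexural model is far richer.

```python
# Toy version of the pool analogy. If an island subsides at a steady
# rate, the sea stays within an elevation band for band/rate years, and a
# cliff eroding at a fixed horizontal rate retreats erosion * time.
# All numbers here are hypothetical.

def residence_time_yr(band_height_m, subsidence_m_per_yr):
    """Years sea level spends inside an elevation band as the island sinks."""
    return band_height_m / subsidence_m_per_yr

def cliff_retreat_m(erosion_m_per_yr, band_height_m, subsidence_m_per_yr):
    """Horizontal retreat carved while the sea occupies the band."""
    return erosion_m_per_yr * residence_time_yr(band_height_m,
                                                subsidence_m_per_yr)

slow_sinker = cliff_retreat_m(0.05, 10.0, 0.001)  # 1 mm/yr subsidence
fast_sinker = cliff_retreat_m(0.05, 10.0, 0.01)   # 10 mm/yr subsidence
print(slow_sinker, fast_sinker)  # slower sinking carves the deeper notch
```

With identical wave-driven erosion, a tenfold slower subsidence yields tenfold more retreat at a given elevation, which is the point of the passage above.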
Because the Hawaiian Islands are close together, the sinking of one island can also affect the sinking or rising of neighboring islands, similar to the way one child may bounce up as another child sinks into a trampoline. The team used the model to simulate various possible histories of island sinking over the last million years, and the subsequent erosion of sea cliffs and coastlines. They looked for the scenario that best linked the islands’ original coastlines with today’s modern coastlines, and matched the various resulting erosion rates to the 11 locations that they focused on in their study. “We found erosion rates that vary from 17 millimeters per year to 118 millimeters per year at the different sites,” Huppert says. “The upper end of that range is nearly half a foot per year, so some of those rates are pretty fast for rock.” Waves of a size They chose the 11 coastal locations in the study for their variability: Some sea cliffs face north, where they are battered by stronger waves produced by distant storms. Other north-facing coasts experience tradewinds that come from the northeast and produce waves that are smaller but more frequent. In contrast, the coastal locations that face southward experience smaller, less-frequent waves. The team compared erosion rates at each site with the typical wave power experienced at each site, which they calculated from wave height and frequency measurements derived from buoy data. They then compared the 11 locations’ wave power to their long-term rates of erosion. What they found was a rather simple, linear relationship between wave power and the rate of coastal erosion. The stronger the waves that a coast experiences, the faster that coast erodes. Specifically, they found that waves of a size that occur every few days might be a better indicator of how fast a coast is eroding than larger but less frequent storm waves.
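The simple linear relationship just described can be illustrated with an ordinary least-squares fit. The wave-power and erosion values below are invented for illustration and lie exactly on a line; they are not the paper’s measurements from the 11 Hawaiian sites.

```python
# Ordinary least-squares fit of erosion rate against wave power, the kind
# of linear relationship the study reports. The four (power, erosion)
# pairs are synthetic and constructed to lie exactly on y = 3x + 5.

def linear_fit(xs, ys):
    """Return (slope, intercept) of the least-squares line y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

wave_power = [5.0, 10.0, 20.0, 40.0]   # hypothetical kW per meter of coast
erosion = [20.0, 35.0, 65.0, 125.0]    # hypothetical mm per year
a, b = linear_fit(wave_power, erosion)
print(a, b)
```

A fitted slope like this is what lets the relationship be carried to other coasts: given a site’s typical wave power, the line predicts its long-term erosion rate.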
That is, if waves on normal, nonstormy days are large, a coast is likely eroding quickly; if the typical waves are smaller, a coast is retreating more slowly. The researchers say carrying out this study in Hawaii allowed them to confirm this simple relationship, without confounding natural factors. As a result, scientists can use this relationship to help predict how rocky coasts in other parts of the world may change, with variations in sea level and wave activity as a result of climate change. “Sea level is rising along much of the world’s coasts, and changes in winds and storminess with ongoing climate change could alter wave regimes, too,” Perron points out. “To be able to isolate the influence of wave climate on the rate of coastal erosion gets you one step closer to going to a particular place and calculating the change in erosion rate there.” This research was supported, in part, by the NEC Corporation, and by NASA. On Maui (shown here) and Hawaii’s other islands, MIT researchers find that the rate of coastal erosion depends on the size of the average wave. https://news.mit.edu/2020/how-plants-protect-sun-damage-0310 Study reveals a mechanism that plants can use to dissipate excess sunlight as heat. Tue, 10 Mar 2020 05:59:59 -0400 https://news.mit.edu/2020/how-plants-protect-sun-damage-0310 Anne Trafton | MIT News Office For plants, sunlight can be a double-edged sword. They need it to drive photosynthesis, the process that allows them to store solar energy as sugar molecules, but too much sun can dehydrate and damage their leaves. A primary strategy that plants use to protect themselves from this kind of photodamage is to dissipate the extra light as heat. However, there has been much debate over the past several decades over how plants actually achieve this. “During photosynthesis, light-harvesting complexes play two seemingly contradictory roles.
They absorb energy to drive water-splitting and photosynthesis, but at the same time, when there’s too much energy, they have to also be able to get rid of it,” says Gabriela Schlau-Cohen, the Thomas D. and Virginia W. Cabot Career Development Assistant Professor of Chemistry at MIT. In a new study, Schlau-Cohen and colleagues at MIT, the University of Pavia, and the University of Verona directly observed, for the first time, one of the possible mechanisms that have been proposed for how plants dissipate energy. The researchers used a highly sensitive type of spectroscopy to determine that excess energy is transferred from chlorophyll, the pigment that gives leaves their green color, to other pigments called carotenoids, which can then release the energy as heat. “This is the first direct observation of chlorophyll-to-carotenoid energy transfer in the light-harvesting complex of green plants,” says Schlau-Cohen, who is the senior author of the study. “That’s the simplest proposal, but no one’s been able to find this photophysical pathway until now.” MIT graduate student Minjung Son is the lead author of the study, which appears today in Nature Communications. Other authors are Samuel Gordon ’18, Alberta Pinnola of the University of Pavia, in Italy, and Roberto Bassi of the University of Verona. Excess energy When sunlight strikes a plant, specialized proteins known as light-harvesting complexes absorb light energy in the form of photons, with the help of pigments such as chlorophyll. These photons drive the production of sugar molecules, which store the energy for later use. Much previous research has shown that plants are able to quickly adapt to changes in sunlight intensity. In very sunny conditions, they convert only about 30 percent of the available sunlight into sugar, while the rest is released as heat.
If this excess energy is allowed to remain in the plant cells, it creates harmful molecules called free radicals that can damage proteins and other important cellular molecules. “Plants can respond to fast changes in solar intensity by getting rid of extra energy, but what that photophysical pathway is has been debated for decades,” Schlau-Cohen says. The simplest hypothesis for how plants get rid of these extra photons is that once the light-harvesting complex absorbs them, chlorophylls pass them to nearby molecules called carotenoids. Carotenoids, which include lycopene and beta-carotene, are very good at getting rid of excess energy through rapid vibration. They are also skillful scavengers of free radicals, which helps to prevent damage to cells. A similar type of energy transfer has been observed in bacterial proteins that are related to chlorophyll, but until now, it had not been seen in plants. One reason why it has been hard to observe this phenomenon is that it occurs on a very fast time scale (femtoseconds, or quadrillionths of a second). Another obstacle is that the energy transfer spans a broad range of energy levels. Until recently, existing methods for observing this process could only measure a small swath of the spectrum of visible light. In 2017, Schlau-Cohen’s lab developed a modification to a femtosecond spectroscopic technique that allows them to look at a broader range of energy levels, spanning red to blue light. This meant that they could monitor energy transfer between chlorophylls, which absorb red light, and carotenoids, which absorb blue and green light. In this study, the researchers used this technique to show that photons move from an excited state, which is spread over multiple chlorophyll molecules within a light-harvesting complex, to nearby carotenoid molecules within the complex. “By broadening the spectral bandwidth, we could look at the connection between the blue and the red ranges, allowing us to map out the changes in energy level.
You can see energy moving from one excited state to another,” Schlau-Cohen says. Once the carotenoids accept the excess energy, they release most of it as heat, preventing light-induced damage to the cells. Boosting crop yields The researchers performed their experiments in two different environments — one in which the proteins were in a detergent solution, and one in which they were embedded in a special type of self-assembling membrane called a nanodisc. They found that the energy transfer occurred more rapidly in the nanodisc, suggesting that environmental conditions affect the rate of energy dissipation. It remains a mystery exactly how excess sunlight triggers this mechanism within plant cells. Schlau-Cohen’s lab is now exploring whether the organization of chlorophylls and carotenoids within the chloroplast membrane plays a role in activating the photoprotection system. A better understanding of plants’ natural photoprotection system could help scientists develop new ways to improve crop yields, Schlau-Cohen says. A 2016 paper from University of Illinois researchers showed that by overproducing all of the proteins involved in photoprotection, crop yields could be boosted by 15 to 20 percent. That paper also suggested that production could be further increased to a theoretical maximum of about 30 percent. “If we understand the mechanism, instead of just upregulating everything and getting 15 to 20 percent, we could really optimize the system and get to that theoretical maximum of 30 percent,” Schlau-Cohen says. The research was funded by the U.S. Department of Energy. Assistant Professor Gabriela Schlau-Cohen has observed, for the first time, a mechanism that plants use to protect themselves from sun damage. Image: Jose-Luis Olivares https://news.mit.edu/2020/mit-powered-climate-resilience-solution-among-top-proposals-macarthur-100-and-change-0225 High-scoring 100&Change applications featured in Bold Solutions Network.
Tue, 25 Feb 2020 11:30:01 -0500 https://news.mit.edu/2020/mit-powered-climate-resilience-solution-among-top-proposals-macarthur-100-and-change-0225 Mark Dwortzan | Joint Program on the Science and Policy of Global Change The John D. and Catherine T. MacArthur Foundation announced that a proactive climate resilience system co-developed by MIT and BRAC, a leading development organization, was one of the highest-scoring proposals, designated among the Top 100, in its 2020 100&Change competition for a single $100 million grant to help solve one of the world’s most critical social challenges. The MIT/BRAC system, known as the Climate Resilience Early Warning System Network (CREWSNET), aims to empower climate‑threatened populations to make timely, science-driven decisions about their future. Starting with western Bangladesh but scalable to other frontline nations across the globe, CREWSNET will combine leading-edge climate forecasting and socioeconomic analysis with innovative resilience services to enable people to make and implement informed decisions about adaptation and relocation — and thereby minimize loss of life, livelihoods, and property. “Climate change is one of the most urgent threats facing human civilization today, and while the world’s most vulnerable did not create this challenge, they are the first to inherit it,” said John Aldridge, assistant leader of the Humanitarian Assistance and Disaster Relief Systems Group at MIT Lincoln Laboratory, who serves as a CREWSNET project leader along with principal investigator Elfatih Eltahir, the Breene M. Kerr Professor of Hydrology and Climate at MIT. “We at MIT are excited and proud to have partnered with BRAC, a proven, global leader in humanitarian assistance and development programming, to create a new, proactive model for climate adaptation and individual empowerment.” “In its earliest days, BRAC worked tirelessly to rebuild communities devastated by climate disasters.
Almost 50 years later, we continue to innovate our poverty alleviation and climate change adaptation programming, which reaches tens of millions of people each year. We are thrilled to partner with MIT now to incorporate their advanced technology, research, and scientific capabilities to tackle the myriad of challenges created by climate change, first in Bangladesh and then globally,” says Ashley Toombs, director of External Affairs at BRAC USA, the U.S.-based affiliate, whose portfolio includes climate change adaptation. 100&Change is a distinctive competition that is open to organizations and collaborations working in any field, anywhere in the world. Proposals must identify a problem and offer a solution that promises significant and durable change. The second round of the competition had a promising start: 3,690 competition registrants submitted 755 proposals. Of those, 475 passed an initial administrative review. The Top 100 represent the top 21 percent of competition submissions. The proposals were rigorously vetted, undergoing MacArthur’s initial administrative review, a Peer-to-Peer review, an evaluation by an external panel of judges, and a technical review by specialists whose expertise was matched to the project. Each proposal was evaluated using four criteria: impactful, evidence-based, feasible, and durable. MacArthur’s board of directors will select up to 10 finalists from among these high-scoring proposals this spring. “MacArthur seeks to generate increased recognition, exposure, and support for the high-impact ideas designated as the Top 100,” says Cecilia Conrad, CEO of Lever for Change and MacArthur managing director at 100&Change. “Based on our experience in the first round of 100&Change, we know the competition will produce multiple compelling and fundable ideas. 
We are committed to matching philanthropists with powerful solutions and problem solvers to accelerate social change.” Since the inaugural competition, other funders and philanthropists have committed an additional $419 million to date to support bold solutions by 100&Change applicants. Building on the success of 100&Change, MacArthur created Lever for Change to unlock significant philanthropic capital by helping donors find and fund vetted, high-impact opportunities through the design and management of customized competitions. In addition to 100&Change, Lever for Change is managing the Chicago Prize, the Economic Opportunity Challenge, and the Larsen Lam ICONIQ Impact Award. The Bold Solutions Network launched on Feb. 19, featuring CREWSNET as one of the Top 100 from 100&Change. The searchable online collection of submissions contains a project overview, 90-second video, and two-page factsheet for each proposal. Visitors can sort by subject, location, sustainable development goal, or beneficiary population to view proposals based on area of interest. The Bold Solutions Network will showcase the highest-rated proposals that emerge from the competitions Lever for Change manages. Proposals in the Bold Solutions Network undergo extensive evaluation and due diligence to ensure each solution promises real and measurable progress to accelerate social change. The Bold Solutions Network was designed to provide an innovative approach to identifying the most effective, enduring solutions aligned with donors’ philanthropic goals and to help top applicants gain visibility and funding from a wide array of funders. Organizations that are part of the network will have continued access to a variety of technical support and learning opportunities focused on strengthening their proposals and increasing the impact of their work. 
A proactive climate resilience system co-developed by MIT and BRAC was included among the top 100 entries in the MacArthur Foundation’s 100&Change competition. Image courtesy of the John D. and Catherine T. MacArthur Foundation. https://news.mit.edu/2020/instrument-may-enable-mail-testing-detect-heavy-metals-water-0225 Whisk-shaped device absorbs trace contaminants, preserves them in dry state that can be shipped to labs for analysis. Tue, 25 Feb 2020 11:05:07 -0500 https://news.mit.edu/2020/instrument-may-enable-mail-testing-detect-heavy-metals-water-0225 Jennifer Chu | MIT News Office Lead, arsenic, and other heavy metals are increasingly present in water systems around the world due to human activities, such as pesticide use and, more recently, the inadequate disposal of electronic waste. Chronic exposure to even trace levels of these contaminants, at concentrations of parts per billion, can cause debilitating health conditions in pregnant women, children, and other vulnerable populations. Monitoring water for heavy metals is a formidable task, however, particularly for resource-constrained regions where workers must collect many liters of water and chemically preserve samples before transporting them to distant laboratories for analysis. To simplify the monitoring process, MIT researchers have developed an approach called SEPSTAT, for solid-phase extraction, preservation, storage, transportation, and analysis of trace contaminants. The method is based on a small, user-friendly device the team developed, which absorbs trace contaminants in water and preserves them in a dry state so the samples can be easily dropped in the mail and shipped to a laboratory for further analysis. A whisk-like device, lined with small pockets filled with gold polymer beads, fits inside a typical sampling bottle and can be twirled to pick up any metal contaminants in water. The device resembles a small, flexible propeller, or whisk, which fits inside a typical sampling bottle.
When twirled inside the bottle for several minutes, the instrument can absorb most of the trace contaminants in the water sample. A user can either air-dry the device or blot it with a piece of paper, then flatten it and mail it in an envelope to a laboratory, where scientists can dip it in a solution of acid to remove the contaminants and collect them for further analysis in the lab. “We initially designed this for use in India, but it’s taught me a lot about our own water issues and trace contaminants in the United States,” says device designer Emily Hanhauser, a graduate student in MIT’s Department of Mechanical Engineering. “For instance, someone who has heard about the water crisis in Flint, Michigan, who now wants to know what’s in their water, might one day order something like this online, do the test themselves, and send it to a lab.” Hanhauser and her colleagues recently published their results in the journal Environmental Science & Technology. Her MIT co-authors are Chintan Vaishnav of the Tata Center for Technology and Design and the MIT Sloan School of Management; John Hart, associate professor of mechanical engineering; and Rohit Karnik, professor of mechanical engineering and associate department head for education, along with Michael Bono of Boston University. From teabags to whisks The team originally set out to understand the water monitoring infrastructure in India. Millions of water samples are collected by workers at local laboratories all around the country, which are equipped to perform basic water quality analysis.
However, to analyze trace contaminants, workers at these local labs need to chemically preserve large numbers of water samples and transport the vessels, often over hundreds of kilometers, to state capitals, where centralized labs have facilities to properly analyze trace contaminants. “If you’re collecting a lot of these samples and trying to bring them to a lab, it’s pretty onerous work, and there is a significant transportation barrier,” Hanhauser says. In looking to streamline the logistics of water monitoring, she and her colleagues wondered whether they could bypass the need to transport the water, and instead transport just the contaminants, in a dry state. They eventually found inspiration in dry blood spotting, a simple technique that involves pricking a person’s finger and collecting a drop of blood on a card of cellulose. When dried, the chemicals in the blood are stable and preserved, and the cards can be mailed off for further analysis, avoiding the need to preserve and ship large volumes of blood. The team started thinking of a similar collection system for heavy metals, and looked through the literature for materials that could both absorb trace contaminants from water and keep them stable when dry. They eventually settled on ion-exchange resins, a class of material that comes in the form of small polymer beads, several hundred microns wide. These beads contain groups of molecules bound to a hydrogen ion. When dipped in water, the hydrogen comes off and can be exchanged with another ion, such as a heavy metal cation, that takes hydrogen’s place on the bead.
In this way, the beads can absorb heavy metals and other trace contaminants from water. The researchers then looked for ways to immerse the beads in water, and first considered a teabag-like design. They filled a mesh-like pocket with beads and dunked it in water they spiked with heavy metals. They found, though, that it took days for the beads to adequately absorb the contaminants if they simply left the teabag in the water. When they stirred the teabag around, turbulence sped the process somewhat, but it still took far too long for the beads, packed into one large teabag, to absorb the contaminants. Ultimately, Hanhauser found that a handheld stirring design worked best to take up metal contaminants in water within a reasonable amount of time. The device is made from a polymer mesh cut into several propeller-like panels. Within each panel, Hanhauser hand-stitched small pockets, which she filled with polymer beads. She then stitched each panel around a polymer stick to resemble a sort of egg beater or whisk. Testing the waters The researchers fabricated several of the devices, then tested them on samples of natural water collected around Boston, including the Charles and Mystic rivers. They spiked the samples with various heavy metal contaminants, such as lead, copper, nickel, and cadmium, then stuck a device in the bottle of each sample, and twirled it around by hand to catch and absorb the contaminants. They then placed the devices on a counter to dry overnight. To recover the contaminants from the device, they dipped the device in hydrochloric acid. The hydrogen in the solution effectively knocks away any ions attached to the polymer beads, including heavy metals, which can then be collected and analyzed with instruments such as mass spectrometers. The researchers found that by stirring the device in the water sample, they could absorb and preserve about 94 percent of the metal contaminants in each sample.
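The arithmetic behind that last step is straightforward: if the beads reliably capture a known fraction of the metal, the concentration in the original sample can be back-calculated from the mass eluted off the device. Here is a minimal sketch of that calculation; the function name and the example numbers are illustrative, and only the roughly 94 percent recovery figure comes from the article:

```python
# Illustrative back-calculation of a water sample's original contaminant
# concentration from the metal mass recovered off the device by acid elution.
# The ~94% recovery fraction is reported in the article; everything else here
# (names, example values) is hypothetical.

def estimate_concentration_ppb(recovered_mass_ug, sample_volume_l, recovery_fraction=0.94):
    """Estimate the original contaminant concentration in µg/L (≈ parts per billion).

    recovered_mass_ug: metal mass measured after acid elution, in micrograms
    sample_volume_l:   volume of the water sample that was stirred, in liters
    recovery_fraction: fraction of the contaminant the beads absorbed (~0.94)
    """
    if not 0 < recovery_fraction <= 1:
        raise ValueError("recovery fraction must be in (0, 1]")
    # Mass originally present in the sample = recovered mass / recovery fraction
    total_mass_ug = recovered_mass_ug / recovery_fraction
    return total_mass_ug / sample_volume_l  # µg/L, i.e. ppb for dilute water

# Example: 4.7 µg of lead eluted from a device stirred in a 1-liter sample
print(round(estimate_concentration_ppb(4.7, 1.0), 1))  # → 5.0 ppb
```

The 10 to 20 percent accuracy the team reports would propagate directly through this kind of estimate, which is why a stable, well-characterized recovery fraction matters.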
In their recent trials, they found they could still detect the contaminants and predict their concentrations in the original water samples, with an accuracy range of 10 to 20 percent, even after storing the device in a dry state for up to two years. With a cost of less than $2, the researchers believe that the device could facilitate transport of samples to centralized laboratories, collection and preservation of samples for future analysis, and acquisition of water quality data in a centralized manner, which, in turn, could help to identify sources of contamination, guide policies, and enable improved water quality management. The researchers have now partnered with a company in India, in hopes of commercializing the device. Together, their project was recently chosen as one of 26 proposals out of more than 950 to be funded by the Indian government under its Atal New India Challenge program. This research was funded, in part, by the MIT Abdul Latif Jameel Water and Food Systems Lab, the MIT Tata Center, and the National Science Foundation. MIT graduate student Emily Hanhauser demonstrates a new device that may simplify the logistics of water monitoring for trace metal contaminants, particularly in resource-constrained regions. Image: Melanie Gonick/MIT https://news.mit.edu/2020/passive-solar-powered-water-desalination-0207 System achieves new level of efficiency in harnessing sunlight to make fresh potable water from seawater. Thu, 06 Feb 2020 23:59:59 -0500 https://news.mit.edu/2020/passive-solar-powered-water-desalination-0207 David L. Chandler | MIT News Office A completely passive solar-powered desalination system developed by researchers at MIT and in China could provide more than 1.5 gallons of fresh drinking water per hour for every square meter of solar collecting area.
Such systems could potentially serve off-grid arid coastal areas to provide an efficient, low-cost water source. The system uses multiple layers of flat solar evaporators and condensers, lined up in a vertical array and topped with transparent aerogel insulation. It is described in a paper appearing today in the journal Energy & Environmental Science, authored by MIT doctoral students Lenan Zhang and Lin Zhao, postdoc Zhenyuan Xu, professor of mechanical engineering and department head Evelyn Wang, and eight others at MIT and at Shanghai Jiao Tong University in China. The key to the system’s efficiency lies in the way it uses each of the multiple stages to desalinate the water. At each stage, heat released by the previous stage is harnessed instead of wasted. In this way, the team’s demonstration device can achieve an overall efficiency of 385 percent in converting the energy of sunlight into the energy of water evaporation. The device is essentially a multilayer solar still, with a set of evaporating and condensing components like those used to distill liquor. It uses flat panels to absorb heat and then transfer that heat to a layer of water so that it begins to evaporate. The vapor then condenses on the next panel. That water gets collected, while the heat from the vapor condensation gets passed to the next layer. Whenever vapor condenses on a surface, it releases heat; in typical condenser systems, that heat is simply lost to the environment. But in this multilayer evaporator the released heat flows to the next evaporating layer, recycling the solar heat and boosting the overall efficiency. “When you condense water, you release energy as heat,” Wang says. “If you have more than one stage, you can take advantage of that heat.” Adding more layers increases the conversion efficiency for producing potable water, but each layer also adds cost and bulk to the system.
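The heat-recycling idea can be captured in a toy model: if the first stage converts some fraction of incoming sunlight into evaporation, and each later stage reuses a fraction of the previous stage's latent heat, the overall solar-to-vapor efficiency is a geometric series that can exceed 100 percent. This is a simplification for intuition only, not the team's "thermally localized" formulation, and the parameter values below are illustrative guesses:

```python
# Toy model of heat recycling in a multistage solar still. Stage 1 converts a
# fraction eta1 of incoming sunlight into evaporation; each subsequent stage
# reuses a fraction r of the previous stage's latent heat of condensation.
# The values of eta1 and r are illustrative, not measured quantities.

def multistage_efficiency(n_stages, eta1=0.8, r=0.8):
    """Overall solar-to-vapor efficiency as a geometric series.

    eta1: first-stage conversion efficiency of sunlight to evaporation
    r:    fraction of latent heat recovered by each subsequent stage
    """
    return eta1 * sum(r**k for k in range(n_stages))

# A single stage can never exceed 100 percent, but recycling the
# condensation heat lets a 10-stage stack exceed it several times over:
print(f"{multistage_efficiency(1):.0%}")   # 80%
print(f"{multistage_efficiency(10):.0%}")  # 357%
```

With these placeholder values, 10 stages gives roughly 357 percent, in the same neighborhood as the 385 percent the team measured; raising the heat-recovery fraction is what would move a design toward the 700 to 800 percent the researchers cite as a theoretical ceiling.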
The team settled on a 10-stage system for their proof-of-concept device, which was tested on an MIT building rooftop. The system delivered pure water that exceeded city drinking water standards, at a rate of 5.78 liters per square meter per hour (about 1.52 gallons per 11 square feet) of solar collecting area. This is more than two times as much as the record amount previously produced by any such passive solar-powered desalination system, Wang says. Theoretically, with more desalination stages and further optimization, such systems could reach overall efficiency levels as high as 700 or 800 percent, Zhang says. Unlike some desalination systems, there is no accumulation of salt or concentrated brines to be disposed of. In a free-floating configuration, any salt that accumulates during the day would simply be carried back out at night through the wicking material and back into the seawater, according to the researchers. Their demonstration unit was built mostly from inexpensive, readily available materials such as a commercial black solar absorber and paper towels for a capillary wick to carry the water into contact with the solar absorber. In most other attempts to make passive solar desalination systems, the solar absorber material and the wicking material have been a single component, which requires specialized and expensive materials, Wang says. “We’ve been able to decouple these two.” The most expensive component of the prototype is a layer of transparent aerogel used as an insulator at the top of the stack, but the team suggests other less expensive insulators could be used as an alternative. (The aerogel itself is made from dirt-cheap silica but requires specialized drying equipment for its manufacture.) Wang emphasizes that the team’s key contribution is a framework for understanding how to optimize such multistage passive systems, which they call thermally localized multistage desalination.
The formulas they developed could likely be applied to a variety of materials and device architectures, allowing for further optimization of systems based on different scales of operation or local conditions and materials. One possible configuration would be floating panels on a body of saltwater such as an impoundment pond. These could constantly and passively deliver fresh water through pipes to the shore, as long as the sun shines each day. Other systems could be designed to serve a single household, perhaps using a flat panel on a large shallow tank of seawater that is pumped or carried in. The team estimates that a system with a roughly 1-square-meter solar collecting area could meet the daily drinking water needs of one person. In production, they think a system built to serve the needs of a family might be built for around $100. The researchers plan further experiments to continue to optimize the choice of materials and configurations, and to test the durability of the system under realistic conditions. They also will work on translating the design of their lab-scale device into something that would be suitable for use by consumers. The hope is that it could ultimately play a role in alleviating water scarcity in parts of the developing world where reliable electricity is scarce but seawater and sunlight are abundant. “This new approach is very significant,” says Ravi Prasher, an associate lab director at Lawrence Berkeley National Laboratory and adjunct professor of mechanical engineering at the University of California at Berkeley, who was not involved in this work. “One of the challenges in solar still-based desalination has been low efficiency due to the loss of significant energy in condensation. By efficiently harvesting the condensation energy, the overall solar to vapor efficiency is dramatically improved.
… This increased efficiency will have an overall impact on reducing the cost of produced water.” The research team included Bangjun Li, Chenxi Wang and Ruzhu Wang at the Shanghai Jiao Tong University, and Bikram Bhatia, Kyle Wilke, Youngsup Song, Omar Labban, and John Lienhard, who is the Abdul Latif Jameel Professor of Water at MIT. The research was supported by the National Natural Science Foundation of China, the Singapore-MIT Alliance for Research and Technology, and the MIT Tata Center for Technology and Design. Tests on an MIT building rooftop showed that a simple proof-of-concept desalination device could produce clean, drinkable water at a rate equivalent to more than 1.5 gallons per hour for each square meter of solar collecting area. Images courtesy of the researchers https://news.mit.edu/2020/alchemista-food-hospitality-christine-marcus-0131 Led by Christine Marcus MBA ’12, Alchemista is finding success with a human-centered approach to food service. Thu, 30 Jan 2020 23:59:59 -0500 https://news.mit.edu/2020/alchemista-food-hospitality-christine-marcus-0131 Zach Winn | MIT News Office Christine Marcus MBA ’12 was an unlikely entrepreneur in 2011. That year, after spending her entire 17-year career in government, most recently as the deputy chief financial officer for the U.S. Department of Energy, she entered the MIT Sloan School of Management Fellows MBA Program. Moreover, Marcus didn’t think of herself as an entrepreneur. “That was the furthest thing from my mind,” she says. “I knew it was time to think about the private sector, but my plan was to leave Sloan and get a job in finance. The thought of entrepreneurship was nowhere in my mind. I wasn’t one of those people who came with a business idea.” By the end of Sloan’s intensive, 12-month program, however, Marcus was running a startup helping local organizations and companies serve food from some of Boston’s best restaurants to hundreds of people.
Upon graduation, in addition to her degree, Marcus had 40 recurring customers and had sold about $50,000 worth of food from her classmates’ Italian restaurant. What happened to spark such a dramatic change? “MIT happened,” Marcus says. “Being in that ecosystem and listening to all the people share their stories of starting companies, listening to CEOs talk about their successes and failures, the mistakes they’ve made along the way, that was super-inspiring. What I realized at MIT was that I’ve always been an entrepreneur.” In the years since graduation, Marcus has used her new perspective to build Alchemista, a “high-touch” hospitality company that helps businesses, commercial real estate developers, and property owners provide meals to employees and tenants. Today, Alchemista has clients in Boston, New York City, and Washington, and serves more than 60,000 meals each month. The company’s services go beyond simply curating restaurants on a website: Each of Alchemista’s clients has its own representative who customizes menus each month, and Alchemista employees are on the scene setting up every meal to ensure everything goes smoothly. “We work with companies that focus on employee culture and invest in their employees, and we incorporate ourselves into that culture,” Marcus says. Finding inspiration, then confidence At first, all Marcus wanted from MIT were some bright new employees for the Department of Energy. During a recruiting trip for that agency in 2011, she met Bill Aulet, the managing director of the Martin Trust Center for MIT Entrepreneurship and professor of the practice at Sloan. “I mentioned to Bill that I was thinking of doing an MBA,” Marcus remembers. “He said, ‘You need to come to MIT. It will transform your life.’ Those were his exact words. Then basically, ‘And you need to do it now.’” Soon after that conversation, Marcus applied for the Sloan Fellows Program, which crams an MBA into one year of full-time, hands-on work.
A few weeks after being accepted, she left her lifelong career in government for good. But Marcus still had no plans to become an entrepreneur. That came more gradually at Sloan, as she listened to experts describe entrepreneurship as a learnable craft, received encouragement and advice from professors, and heard from dozens of successful first-time entrepreneurs about their own early doubts and failures. “A lot of these founders had backgrounds in things that had nothing to do with their industry,” Marcus says. “My question was always, ‘How do you become successful in an industry you don’t know anything about?’ Their answer was always the same: ‘It’s all about learning and being curious.’” During one typically long day in the MBA program, a classmate brought in food from his Italian restaurant. Marcus was blown away and wondered why MIT didn’t cater from nice restaurants like that all the time. The thought set in motion a process that has never really stopped for Marcus. She began speaking with office secretaries, club presidents, and other event organizers at MIT. She learned it was a nightmare ordering food for hundreds of people, and that many of Boston’s best restaurants had no means of connecting with such organizers. “I made myself known on campus just hustling,” Marcus remembers. “First I had to spend time figuring out who orders food. … I made it my mission to talk to all of them, understand their pain points, and understand what would get them to change their processes at that point. It was a lot of legwork.” Marcus moved into the entrepreneurial track at Sloan, and says one of her most helpful classes was tech sales, taught by Lou Shipley, who’s now an advisor for Alchemista.
She also says it was helpful that professors focused on real-world problems, at some points even using Alchemista as a case study, allowing Marcus’s entire class to weigh in on problems she was grappling with. “That was super-helpful, to have all these smart MIT students working on my company,” she says. As she neared graduation, Marcus spent a lot of time in the Trust Center, and leaned heavily on MIT’s support system. “That’s the best thing about MIT: the ecosystem,” Marcus says. “Everybody genuinely wants to help however they can.” Leaving that ecosystem, which Marcus described as a “challenging yet safe environment,” presented Marcus with her biggest test yet. Taking the plunge At some point, every entrepreneur must decide if they’re passionate and confident enough in their business to fully commit to it. Over the course of a whirlwind year, MIT gave Marcus a crash course in entrepreneurship, but it couldn’t make that decision for her. Marcus responded unequivocally. She started by selling her house in Washington and renting a one-bedroom apartment in Boston. She also says she used up her retirement savings as she worked to expand Alchemista’s customer base in the early days. “I’m not sure I would recommend it to anyone without a strong stomach, but I jumped in with both feet,” Marcus says. And MIT never stopped lending support. At the time, Sloan was planning to renovate a building on campus, so in the interim, Aulet started a coworking space called the MIT Beehive. Marcus worked out of there for more than a year, collaborating with other MIT startup founders and establishing a supportive network of peers. Her commitment paid off. By 2014, Marcus had a growing customer base and a strong business model based on recurring revenue from large customer accounts. Alchemista soon expanded to Washington and New York City. Last year, the company brought on a culinary team and opened its own kitchens.
It also expanded its services to commercial property owners and managers who don’t want to give up leasing space for a traditional cafeteria or don’t have restaurants nearby. Marcus has also incorporated her passion for sustainability into Alchemista’s operations. After using palm leaf plates for years, the company recently switched over to reusable plates and utensils, saving over 100,000 tons of waste annually, she says. Ultimately, Marcus thinks Alchemista’s success is a result of its human-centered approach to helping customers. “It’s not this massive website where you place an order and have no contact,” Marcus says. “We’re the opposite of that. We’re high-touch because everyone else is a website or app. Simply put, we take all the headaches away from ordering for hundreds of people. Food is very personal; breaking bread is one of the most fundamental ways to connect with others. We provide that experience in a premium, elevated way.” Alchemista co-founder and CEO Christine Marcus MBA ’12 says she sold her house and dipped into her retirement savings to get the company off the ground. Image courtesy of Alchemista https://news.mit.edu/2020/reducing-risk-empowering-resilience-disruptive-global-change-0123 Workshop highlights how MIT research can guide adaptation at local, regional, and national scales. Thu, 23 Jan 2020 15:15:01 -0500 https://news.mit.edu/2020/reducing-risk-empowering-resilience-disruptive-global-change-0123 Mark Dwortzan | Joint Program on the Science and Policy of Global Change Five-hundred-year floods. Persistent droughts and heat waves. More devastating wildfires. As these and other planetary perils become more commonplace, they pose serious risks to natural, managed, and built environments around the world. Assessing the magnitude of these risks over multiple decades and identifying strategies to prepare for them at local, regional, and national scales will be essential to making societies and economies more resilient and sustainable.
With that goal in mind, the MIT Joint Program on the Science and Policy of Global Change in 2019 launched its Adaptation-at-Scale initiative (AS-MIT), which seeks evidence-based solutions to global change-driven risks. Using its Integrated Global System Modeling (IGSM) framework, as well as a suite of resource and infrastructure assessment models, AS-MIT targets, diagnoses, and projects changing risks to life-sustaining resources under impending societal and environmental stressors, and evaluates the effectiveness of potential risk-reduction measures. In pursuit of these objectives, MIT Joint Program researchers are collaborating with other adaptation-at-scale thought leaders across MIT. And at a conference on Jan. 10 on the MIT campus, they showcased some of their most promising efforts in this space. Part of a series of MIT Joint Program workshops aimed at providing decision-makers with actionable information on key global change concerns, the conference covered risks and resilience strategies for food, energy, and water systems; urban-scale solutions; predicting the evolving risk of extreme events; and decision-making and early warning capabilities — and featured a lunch seminar on renewable energy for resilience and adaptation by an expert from the National Renewable Energy Laboratory. Food, energy, and water systems Greg Sixt, research manager in the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS), described the work of J-WAFS’ Alliance for Climate Change and Food Systems Research, an emerging alliance of premier research institutions and key stakeholders to collaboratively frame challenges, identify research paths, and fund and pursue convergence research on building more resilience across the food system, from production to supply chains to consumption. MIT Joint Program Deputy Director Sergey Paltsev, also a senior research scientist at the MIT Energy Initiative (MITEI), explored climate-related risks to energy systems.
He highlighted physical risks, such as potential impacts of permafrost degradation on roads, airports, natural gas pipelines, and other infrastructure in the Arctic, and of an increase in extreme temperature, wind, and icing events on power distribution infrastructure in the U.S. Northeast. “No matter what we do in terms of climate mitigation, the physical risks will remain the same for decades because of inertia in the climate system,” says Paltsev. “Even with very aggressive emissions-reduction policies, decision-makers must take physical risks into consideration.” They must also account for transition risks — long-term financial and investment risks to fossil fuel infrastructure posed by climate policies. Paltsev showed how energy scenarios developed at MIT and elsewhere can enable decision-makers to assess the physical and financial risks of climate change and of efforts to transition to a low-carbon economy. MIT Joint Program Deputy Director Adam Schlosser discussed MIT Joint Program (JP) efforts to assess risks to, and optimal adaptation strategies for, water systems subject to drought, flooding, and other challenges impacting water availability and quality posed by a changing environment. Schlosser noted that in some cases, efficiency improvements can go a long way in meeting these challenges, as shown in one JP study that found improving municipal and industrial efficiencies was just as effective as climate mitigation in confronting projected water shortages in Asia. Finally, he introduced a new JP project funded by the U.S. Department of Energy that will explore how in U.S. floodplains, foresight could increase resilience to future forces, stressors, and disturbances imposed by nature and human activity. “In assessing how we avoid and adapt to risk, we need to think about all plausible futures,” says Schlosser. 
“Our approach is to take all [of those] futures, put them into our [integrated global] system of human and natural systems, and think about how we use water optimally.” Urban-scale solutions Brian Goldberg, assistant director of the MIT Office of Sustainability, detailed MIT’s plans to sustain MIT campus infrastructure amid intensifying climate disruptions and impacts over the next 100 years. Toward that end, the MIT Climate Resiliency Committee is working to shore up multiple, interdependent layers of resiliency that include the campus site, infrastructure and utilities, buildings, and community, and creating modeling tools to evaluate flood risk. “We’re using the campus as a testbed to develop solutions, advance research, and ultimately grow a more climate-resilient campus,” says Goldberg. “Perhaps the models we develop and engage with at the campus scale can then influence the city or region scale and then be shared globally.” MIT Joint Program/MITEI Research Scientist Mei Yuan described an upcoming study to assess the potential of the building sector to reduce its greenhouse gas emissions through more energy-efficient design and intelligent telecommunications — and thereby lower climate-related risk to urban infrastructure. Yuan aims to achieve this objective by linking the program’s U.S. Regional Energy Policy (USREP) model with a detailed building sector model that explicitly represents energy-consuming technologies (e.g., for heating, cooling, lighting, and household appliances).
“Incorporating this building sector model within an integrated framework that combines USREP with an hourly electricity dispatch model (EleMod) could enable us to simulate the supply and demand of electricity at finer spatial and temporal resolution,” says Yuan, “and thereby better understand how the power sector will need to adapt to future energy needs.” Renewable energy for resilience and adaptation Jill Engel-Cox, director of NREL’s Joint Institute for Strategic Energy Analysis, presented several promising adaptation measures for energy resilience that incorporate renewables. These include placing critical power lines underground; increasing demand-side energy efficiency to decrease energy consumption and power system instability; diversifying generation so electric power distribution can be sustained when one power source is down; deploying distributed generation (e.g., photovoltaics, small wind turbines, energy storage systems) so that if one part of the grid is disconnected, other parts continue to function; and implementing smart grids and micro-grids. “Adaptation and resilience measures tend to be very localized,” says Engel-Cox. “So we need to come up with strategies that will work for particular locations and risks.” These include storm-proofing photovoltaics and wind turbine systems, deploying hydropower with greater flexibility to account for variability in water flow, incorporating renewables in planning for natural gas system outages, and co-locating wind and PV systems on agricultural land. Extreme events MIT Joint Program Principal Research Scientist Xiang Gao showed how a statistical method she developed produces predictions of the risk of heavy precipitation, heat waves, and other extreme weather events that are more consistent with observations than those of conventional climate models. Known as the “analog method,” the technique detects extreme events based on large-scale atmospheric patterns associated with such events.
“Improved prediction of extreme weather events enabled by the analog method offers a promising pathway to provide meaningful climate mitigation and adaptation actions,” says Gao. Sai Ravela, a principal research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences, showed how artificial intelligence could be exploited to predict extreme events. Key methods that Ravela and his research group are developing combine climate statistics, atmospheric modeling, and physics to assess the risk of future extreme events. The group’s long-range predictions draw upon deep learning and small-sample statistics using local sensor data and global oscillations. Applying these methods, Ravela and his co-investigators are developing a model to assess the risk of extreme weather events to infrastructure, such as that of wind and flooding damage to a nuclear plant or city. Decision-making and early warning capabilities MIT Joint Program/MITEI Research Scientist Jennifer Morris explored uncertainty and decision-making for adaptation to global change-driven challenges ranging from coastal adaptation to grid resilience. Morris described the MIT Joint Program approach as a four-step process: quantify stressors and influences, evaluate vulnerabilities, identify response options and transition pathways, and develop decision-making frameworks. She then used the following Q&A to show how this four-pronged approach can be applied to the case of grid resilience. Q: Do human-induced changes in damaging weather events present a rising, widespread risk of premature failure in the nation’s power grid — and, if so, what are the cost-effective near-term actions to hedge against that risk? A: First, identify critical junctures within the power grid, starting with large power transformers (LPTs). Next, use an analogue approach (described above) to construct a distribution of expected changes in extreme heat wave events that would be damaging to LPTs under different climate scenarios.
Next, use energy-economic and electric power models to assess electricity demand and economic costs related to LPT failure. And finally, make decisions under uncertainty to identify near-term actions to mitigate risks of LPT failure (e.g., upgrading or replacing LPTs). John Aldridge, assistant leader of the Humanitarian Assistance and Disaster Relief Systems Group at MIT Lincoln Laboratory, highlighted the group’s efforts to combine advanced remote sensing and decision support systems to assess the impacts of natural disasters, support hurricane evacuation decision-making, and guide proactive climate adaptation and resilience. Lincoln Laboratory is collaborating with MIT campus partners to develop the Climate Resilience Early Warning System Network (CREWSNET), which draws on MIT strengths in cutting-edge climate forecasting, impact models, and applied decision support tools to empower climate resilience and adaptation on a global scale. “From extreme event prediction to scenario-based risk analysis, this workshop showcased the core capabilities of the joint program and its partners across MIT that can advance scalable solutions to adaptation challenges across the globe,” says Adam Schlosser, who coordinated the day’s presentations. “Applying leading-edge modeling tools, our research is well-positioned to provide decision-makers with guidance and strategies to build a more resilient future.” An Army Corps of Engineers flood model depicting the Ala Wai watershed after a 100-year rain event. The owner of a local design firm described the Ala Wai Flood Control Project as the largest climate impact project in Hawaii’s modern history. Image: U.S. Army Corps of Engineers-Honolulu District https://news.mit.edu/2020/making-real-biotechnology-dream-nitrogen-fixing-cereal-crops-0110 Voigt Lab's work could eventually replace cereal crops’ need for nitrogen from chemical fertilizers.
Fri, 10 Jan 2020 12:00:01 -0500 https://news.mit.edu/2020/making-real-biotechnology-dream-nitrogen-fixing-cereal-crops-0110 Lisa Miller | Abdul Latif Jameel Water and Food Systems Lab As food demand rises due to growing and changing populations around the world, increasing crop production has been a vital target for agriculture and food systems researchers who are working to ensure there is enough food to meet global need in the coming years. One MIT research group mobilizing around this challenge is the Voigt lab in the Department of Biological Engineering, led by Christopher Voigt, the Daniel I.C. Wang Professor of Advanced Biotechnology at MIT. For the past four years, the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) has funded Voigt with two J-WAFS Seed Grants. With this support, Voigt and his team are working on a significant and longstanding research challenge: transforming cereal crops so that they can fix their own nitrogen. Chemical fertilizer: how it helps and hurts Nitrogen is a key nutrient that enables plants to grow. Plants like legumes are able to provide their own nitrogen through a symbiotic relationship with bacteria that are capable of fixing nitrogen from the air and putting it into the soil, which is then drawn up by the plants through their roots. Other types of crops — including major food crops such as corn, wheat, and rice — typically rely on added fertilizers for nitrogen, including manure, compost, and chemical fertilizers. Without these, the plants that grow are smaller and produce less grain. Over 3.5 billion people today depend on chemical fertilizers for their food. Eighty percent of chemical nitrogen fertilizers today are made using the Haber-Bosch process, which converts nitrogen gas into ammonia. While nitrogen fertilizer has boosted agricultural production in the last century, this has come with some significant costs.
First, the Haber-Bosch process itself is very energy- and fossil fuel-intensive, making it unsustainable in the face of a rapidly changing climate. Second, using too much chemical fertilizer results in nitrogen pollution. Fertilizer runoff pollutes rivers and oceans, resulting in algae blooms that suffocate marine life. Cleaning up this pollution and paying for the public health and environmental damage costs the United States $157 billion annually. Third, when it comes to chemical fertilizers, there are problems with equity and access. These fertilizers are made in the northern hemisphere by major industrialized nations, where potash, a main ingredient, is abundant. However, transportation costs are high, especially to countries in the southern hemisphere. So, for farmers in poorer regions, this barrier results in lower crop yields. These environmental and societal challenges pose large problems, yet farmers still need to apply nitrogen to maintain the agricultural productivity necessary to meet the world’s food needs, especially as population growth and climate change stress the world’s food supplies. So, fertilizers are and will continue to be a critical tool. But, might there be another way? The bacterial compatibility of chloroplasts and mitochondria This is the question that drives researchers in the Voigt lab as they work to develop nitrogen-fixing cereal grains. The strategy they have developed is to target the specific genes in the nitrogen-fixing bacteria that operate symbiotically with legumes, called the nif genes. These genes cause the expression of the protein structures (nitrogenase clusters) that fix nitrogen from the air. If these genes could be successfully transferred to and expressed in cereal crops, chemical fertilizers would no longer be needed, as these crops would be able to obtain nitrogen themselves. This genetic engineering work has long been regarded as a major technical challenge, however.
The nif pathway is very large and involves many different genes. Transferring any large gene cluster is itself a difficult task, but there is added complexity in this particular pathway. The nif genes in microbes are controlled by a precise system of interconnected genetic parts. In order to successfully transfer the pathway’s nitrogen-fixing capabilities, researchers not only have to transfer the genes themselves, but also replicate the cellular components responsible for controlling the pathway. This leads into another challenge. The microbes responsible for nitrogen fixation in legumes are bacteria (prokaryotes), and, as explained by Eszter Majer, a postdoc in the Voigt lab who has been working on the project for the past two years, “the gene expression is completely different in plants, which are eukaryotes.” For example, prokaryotes organize their genes into operons, a genetic organization system that does not exist in eukaryotes such as the tobacco leaves the Voigt lab is using in its experiments. Reengineering the nif pathway in a eukaryote is tantamount to a complete system overhaul. The Voigt lab has found a workaround: Rather than target the entire plant cell, they are targeting organelles within the cell — specifically, the chloroplasts and the mitochondria. Mitochondria and chloroplasts both have ancient bacterial origins and once lived independently outside of eukaryotic cells as prokaryotes. Millions of years ago, they were incorporated into the eukaryotic system as organelles. They are unique in that they have their own genetic material and have also maintained many similarities to modern-day prokaryotes. As a result, they are excellent candidates for nitrogenase transfer.
Majer explains, “It’s much easier to transfer from a prokaryote to a prokaryote-like system than reengineer the whole pathway and try to transfer to a eukaryote.” Beyond gene structure, these organelles have additional attributes that make them suitable environments for nitrogenase clusters to function. Nitrogenase requires a lot of energy to function, and both chloroplasts and mitochondria already produce high amounts of energy — in the form of ATP — for the cell. Nitrogenase is also very sensitive to oxygen and will not function if there is too much of it in its environment. However, chloroplasts at night and mitochondria in plants have low oxygen levels, making them an ideal location for the nitrogenase protein to operate. An international team of experts While the team devised an approach for transforming eukaryotic cells, their project still involved highly technical biological engineering challenges. Thanks to the J-WAFS grants, the Voigt lab has been able to collaborate with two specialists at overseas universities to obtain critical expertise. One was Luis Rubio, an associate professor focusing on the biochemistry of nitrogen fixation at the Polytechnic University of Madrid, Spain. Rubio is an expert in nitrogenase and nitrogen-inspired chemistry. Transforming mitochondrial DNA is a challenging process, so the team designed a nitrogenase gene delivery system using yeast. Yeast are easy eukaryotic organisms to engineer and can be used to target the mitochondria. The team inserted the nitrogenase genes into the yeast nuclei, and the resulting proteins are then targeted to the mitochondria using peptide fusions. This research resulted in the first eukaryotic organism to demonstrate the formation of nitrogenase structural proteins. The Voigt lab also collaborated with Ralph Bock, a chloroplast expert from the Max Planck Institute of Molecular Plant Physiology in Germany.
He and the Voigt team have made great strides toward the goal of nitrogen-fixing cereal crops; the details of their recent accomplishments advancing the field of crop engineering and furthering the nitrogen-fixing work will be published in the coming months. Continuing in pursuit of the dream The Voigt lab, with the support of J-WAFS and the invaluable international collaboration that has resulted, was able to obtain groundbreaking results, moving us closer to fertilizer independence through nitrogen-fixing cereals. They made headway in targeting nitrogenase to mitochondria and were able to express a complete NifDK tetramer — a key protein in the nitrogenase cluster — in yeast mitochondria. Despite these milestones, more work is yet to be done. “The Voigt lab is invested in moving this research forward in order to get ever closer to the dream of creating nitrogen-fixing cereal crops,” says Chris Voigt. With these milestones under their belt, these researchers have made great advances, and will continue to push toward the realization of this transformative vision, one that could revolutionize cereal production globally. Christopher Voigt and Eszter Majer (pictured) collaborated with chloroplast and mitochondria experts from the Max Planck Institute of Molecular Plant Physiology in Germany and the Polytechnic University of Madrid, Spain, during their seed grant period, collaborations that were made possible by J-WAFS seed grants. Photo: Lisa Miller/J-WAFS https://news.mit.edu/2019/julia-ortony-concocting-nanomaterials-energy-and-environmental-applications-0109 The MIT assistant professor is entranced by the beauty she finds pursuing chemistry. Thu, 09 Jan 2020 14:00:01 -0500 https://news.mit.edu/2019/julia-ortony-concocting-nanomaterials-energy-and-environmental-applications-0109 Leda Zimmerman | MIT Energy Initiative A molecular engineer, Julia Ortony performs a contemporary version of alchemy.
“I take powder made up of disorganized, tiny molecules, and after mixing it up with water, the material in the solution zips itself up into threads 5 nanometers thick — about 100 times smaller than the wavelength of visible light,” says Ortony, the Finmeccanica Career Development Assistant Professor of Engineering in the Department of Materials Science and Engineering (DMSE). “Every time we make one of these nanofibers, I am amazed to see it.” But for Ortony, the fascination doesn’t simply concern the way these novel structures self-assemble, a product of the interaction between a powder’s molecular geometry and water. She is plumbing the potential of these nanomaterials for use in renewable energy and environmental remediation technologies, including promising new approaches to water purification and the photocatalytic production of fuel. Tuning molecular properties Ortony’s current research agenda emerged from a decade of work into the behavior of a class of carbon-based molecular materials that can range from liquid to solid. During doctoral work at the University of California at Santa Barbara, she used magnetic resonance (MR) spectroscopy to make spatially precise measurements of atomic movement within molecules, and of the interactions between molecules. At Northwestern University, where she was a postdoc, Ortony focused this tool on self-assembling nanomaterials that were biologically based, in research aimed at potential biomedical applications such as cell scaffolding and regenerative medicine. “With MR spectroscopy, I investigated how atoms move and jiggle within an assembled nanostructure,” she says. Her research revealed that the surface of the nanofiber acted like a viscous liquid, but as one probed further inward, it behaved like a solid. Through molecular design, it became possible to tune the speed at which molecules that make up a nanofiber move. A door had opened for Ortony. 
“We can now use state-of-matter as a knob to tune nanofiber properties,” she says. “For the first time, we can design self-assembling nanostructures, using slow or fast internal molecular dynamics to determine their key behaviors.” Slowing down the dance When she arrived at MIT in 2015, Ortony was determined to tame and train molecules for nonbiological applications of self-assembling “soft” materials. “Self-assembling molecules tend to be very dynamic, where they dance around each other, jiggling all the time and coming and going from their assembly,” she explains. “But we noticed that when molecules stick strongly to each other, their dynamics get slow, and their behavior is quite tunable.” The challenge, though, was to synthesize nanostructures in nonbiological molecules that could achieve these strong interactions. “My hypothesis coming to MIT was that if we could tune the dynamics of small molecules in water and really slow them down, we should be able to make self-assembled nanofibers that behave like a solid and are viable outside of water,” says Ortony. Her efforts to understand and control such materials are now starting to pay off. “We’ve developed unique, molecular nanostructures that self-assemble, are stable in both water and air, and — since they’re so tiny — have extremely high surface areas,” she says. Since the nanostructure surface is where chemical interactions with other substances take place, Ortony has leapt to exploit this feature of her creations — focusing in particular on their potential in environmental and energy applications. Clean water and fuel from sunlight One key venture, supported by Ortony’s Professor Amar G. Bose Fellowship, involves water purification. The problem of toxin-laden drinking water affects tens of millions of people in underdeveloped nations. Ortony’s research group is developing nanofibers that can grab deadly metals such as arsenic out of such water. 
The chemical groups she attaches to nanofibers are strong, stable in air, and in recent tests “remove all arsenic down to low, nearly undetectable levels,” says Ortony. She believes an inexpensive textile made from nanofibers would be a welcome alternative to the large, expensive filtration systems currently deployed in places like Bangladesh, where arsenic-tainted water poses dire threats to large populations. “Moving forward, we would like to chelate arsenic, lead, or any environmental contaminant from water using a solid textile fabric made from these fibers,” she says. In another research thrust, Ortony says, “My dream is to make chemical fuels from solar energy.” Her lab is designing nanostructures with molecules that act as antennas for sunlight. These structures, exposed to and energized by light, interact with a catalyst in water to reduce carbon dioxide to different gases that could be captured for use as fuel. In recent studies, the Ortony lab found that it is possible to design these catalytic nanostructure systems to be stable in water under ultraviolet irradiation for long periods of time. “We tuned our nanomaterial so that it did not break down, which is essential for a photocatalytic system,” says Ortony. Students dive in While Ortony’s technologies are still in the earliest stages, her approach to problems of energy and the environment is already drawing student enthusiasts. Dae-Yoon Kim, a postdoc in the Ortony lab, won the 2018 Glenn H. Brown Prize from the International Liquid Crystal Society for his work on synthesized photo-responsive materials and started a tenure-track position at the Korea Institute of Science and Technology this fall. Ortony also mentors Ty Christoff-Tempesta, a DMSE doctoral candidate, who was recently awarded a Martin Fellowship for Sustainability. Christoff-Tempesta hopes to design nanoscale fibers that assemble and disassemble in water to create environmentally sustainable materials.
And Cynthia Lo ’18 won a best-senior-thesis award for work with Ortony on nanostructures that interact with light and self-assemble in water, work that will soon be published. She is “my superstar MIT Energy Initiative UROP [undergraduate researcher],” says Ortony. Ortony hopes to share her sense of wonder about materials science not just with students in her group, but also with those in her classes. “When I was an undergraduate, I was blown away at the sheer ability to make a molecule and confirm its structure,” she says. With her new lab-based course for grad students — 3.65 (Soft Matter Characterization) — Ortony says she can teach about “all the interests that drive my research.” While she is passionate about using her discoveries to solve critical problems, she remains entranced by the beauty she finds pursuing chemistry. Fascinated by science starting in childhood, Ortony says she sought out every available class in chemistry, “learning everything from beginning to end, and discovering that I loved organic and physical chemistry, and molecules in general.” Today, she says, she finds joy working with her “creative, resourceful, and motivated” students. She celebrates with them “when experiments confirm hypotheses, and it’s a breakthrough and it’s thrilling,” and reassures them “when they come with a problem, and I can let them know it will be thrilling soon.” This article appears in the Autumn 2019 issue of Energy Futures, the magazine of the MIT Energy Initiative. Julia Ortony is the Finmeccanica Career Development Assistant Professor of Engineering in the Department of Materials Science and Engineering. Photo: Lillie Paquette/School of Engineering https://news.mit.edu/2019/remove-contaminants-nuclear-plant-wastewater-1219 Method concentrates radionuclides in a small portion of a nuclear plant’s wastewater, allowing the rest to be recycled. Thu, 19 Dec 2019 09:23:05 -0500 https://news.mit.edu/2019/remove-contaminants-nuclear-plant-wastewater-1219 David L. 
Chandler | MIT News Office Nuclear power continues to expand globally, propelled, in part, by the fact that it produces few greenhouse gas emissions while providing steady power output. But along with that expansion comes an increased need for dealing with the large volumes of water used for cooling these plants, which becomes contaminated with radioactive isotopes that require special long-term disposal. Now, a method developed at MIT provides a way of substantially reducing the volume of contaminated water that needs to be disposed of, instead concentrating the contaminants and allowing the rest of the water to be recycled through the plant’s cooling system. The proposed system is described in the journal Environmental Science and Technology, in a paper by graduate student Mohammad Alkhadra, professor of chemical engineering Martin Bazant, and three others. The method makes use of a process called shock electrodialysis, which uses an electric field to generate a deionization shockwave in the water. The shockwave pushes the electrically charged particles, or ions, to one side of a tube filled with charged porous material, so that a concentrated stream of contaminants can be separated out from the rest of the water. The group discovered that two radionuclide contaminants — isotopes of cobalt and cesium — can be selectively removed from water that also contains boric acid and lithium. After the water stream is cleansed of its cobalt and cesium contaminants, it can be reused in the reactor. The shock electrodialysis process was initially developed by Bazant and his co-workers as a general method of removing salt from water, as demonstrated in their first scalable prototype four years ago. Now, the team has focused on this more specific application, which could help improve the economics and environmental impact of working nuclear power plants.
In ongoing research, they are also continuing to develop a system for removing other contaminants, including lead, from drinking water. Not only is the new system inexpensive and scalable to large sizes, but in principle it also can deal with a wide range of contaminants, Bazant says. “It’s a single device that can perform a whole range of separations for any specific application,” he says. In their earlier desalination work, the researchers used measurements of the water’s electrical conductivity to determine how much salt was removed. In the years since then, the team has developed other methods for detecting and quantifying the details of what’s in the concentrated radioactive waste and the cleaned water. “We carefully measure the composition of all the stuff going in and out,” says Bazant, who is the E.G. Roos Professor of Chemical Engineering as well as a professor of mathematics. “This really opened up a new direction for our research.” They began to focus on separation processes that would be useful for health reasons or that would result in concentrating material that has high value, either for reuse or to offset disposal costs. The method they developed works for seawater desalination, but it is a relatively energy-intensive process for that application. The energy cost is dramatically lower when the method is used for ion-selective separations from dilute streams such as nuclear plant cooling water. For this application, which also requires expensive disposal, the method makes economic sense, he says. It also hits both of the team’s targets: dealing with high-value materials and helping to safeguard health.
The scale of the application is also significant — a single large nuclear plant can circulate about 10 million cubic meters of water per year through its cooling system, Alkhadra says. For their tests of the system, the researchers used simulated nuclear wastewater based on a recipe provided by Mitsubishi Heavy Industries, which sponsored the research and is a major builder of nuclear plants. In the team’s tests, after a three-stage separation process, they were able to remove 99.5 percent of the cobalt radionuclides in the water while retaining about 43 percent of the water in cleaned-up form so that it could be reused. As much as two-thirds of the water can be reused if the cleanup level is cut back to 98.3 percent of the contaminants removed, the team found. While the overall method has many potential applications, the nuclear wastewater separation is “one of the first problems we think we can solve [with this method] that no other solution exists for,” Bazant says. No other practical, continuous, economic method has been found for separating out the radioactive isotopes of cobalt and cesium, the two major contaminants of nuclear wastewater, he adds. While the method could be used for routine cleanup, it could also make a big difference in dealing with more extreme cases, such as the millions of gallons of contaminated water at the damaged Fukushima Daiichi power plant in Japan, where the accumulation of that contaminated water has threatened to overwhelm the containment systems designed to prevent it from leaking out into the adjacent Pacific. While the new system has so far only been tested at much smaller scales, Bazant says that such large-scale decontamination systems based on this method might be possible “within a few years.” The research team also included MIT postdocs Kameron Conforti and Tao Gao and graduate student Huanhuan Tian.
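The reported results imply a simple trade-off between decontamination level and recycled-water volume. As a back-of-envelope illustration — using only the figures quoted in the article (a roughly 10-million-cubic-meter annual cooling flow, 43 percent water recovery at 99.5 percent removal, and up to two-thirds recovery at 98.3 percent removal); the function and scenario names here are illustrative, not from the paper:

```python
# Illustrative calculation: how much cooling water would still need disposal
# per year at the two operating points reported in the article, for a plant
# circulating about 10 million cubic meters annually.

ANNUAL_FLOW_M3 = 10_000_000  # approximate circulation of a single large plant

# (fraction of contaminants removed, fraction of water recovered for reuse)
operating_points = {
    "high_purity": (0.995, 0.43),   # 99.5% removal, ~43% of water reused
    "relaxed":     (0.983, 2 / 3),  # 98.3% removal, up to two-thirds reused
}

def water_for_disposal(annual_flow_m3: float, recovered_fraction: float) -> float:
    """Volume that remains as concentrated waste after treatment."""
    return annual_flow_m3 * (1 - recovered_fraction)

for name, (removal, recovered) in operating_points.items():
    waste = water_for_disposal(ANNUAL_FLOW_M3, recovered)
    print(f"{name}: {removal:.1%} removal -> {waste:,.0f} m^3/yr to dispose of")
```

Under these assumptions, relaxing the cleanup target from 99.5 to 98.3 percent cuts the annual disposal volume from roughly 5.7 million to about 3.3 million cubic meters — the kind of economic lever the researchers describe.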
A small-scale device, seen here, was used in the lab to demonstrate the effectiveness of the new shockwave-based system for removing radioactive contaminants from the cooling water in nuclear power plants. Image courtesy of the researchers https://news.mit.edu/2019/mit-dining-wins-new-england-food-vision-prize-1206 The $250,000 prize is awarded to six teams of college and university dining programs to bring more local food to campus menus. Fri, 06 Dec 2019 11:00:01 -0500 https://news.mit.edu/2019/mit-dining-wins-new-england-food-vision-prize-1206 Division of Student Life MIT Dining, in collaboration with the MIT Office of Sustainability, has been selected as one of six recipients of the 2019 Henry P. Kendall Foundation New England Food Vision Prize. Launched by the Henry P. Kendall Foundation in 2018, the New England Food Vision Prize Program gives out as many as six awards of up to $250,000 each to help New England college and university food-service directors explore bold and innovative ideas that strengthen the region’s food system. MIT’s concept — entitled “Food from Here” — combines resources from area universities, local food-processing collaboratives, and regional farms to sustainably increase the amount of local food served on campus. The proposed program meets the measurable, sustainable, and replicable goals of the Food Vision Prize while addressing recent recommendations from the MIT Food and Sustainability Working Group. Those recommendations call on MIT to ensure that students have access to “affordable, sustainable, and culturally meaningful food” and to “empower consumers to make informed choices,” all inspired by the Institute’s innovative spirit. “I am proud and excited about this award,” says Suzy Nelson, vice president and dean for student life.
“We want to make sure that students have access to delicious and nutritious food — and having it come from regional growers and co-ops is a great way to contribute to the Massachusetts economy, to support farms and farmers, and to strengthen our food chain.” “MIT Dining’s proposal aims to sustainably increase the amount of local and regional food served and sold on campuses,” says Mark Hayes, director of MIT Dining. In this proposal, MIT partnered with Lesley University and Emmanuel College to share food sources and solutions. “Lesley and Emmanuel are both close to MIT geographically, and we share the same food-service contractor — Bon Appetit Management Company (BAMCo) — making them a natural choice for partnership,” Hayes says. “Developing a visionary approach for new campus food systems is a huge task, so the idea is that if one university can figure out how to do that, it can be done elsewhere,” says Susy Jones, sustainability project manager at MIT. “By working with these partners from the outset, we can identify how to make something like this work in a way that is replicable.” Working together, MIT, Lesley, Emmanuel, and their food-processing and gleaning partners will identify a core set of locally grown surplus crops — like apples, eggplant, and squash — that can be used across campuses, allowing the schools’ chefs to forecast demand and commit to regular purchases. Boston Area Gleaners will help source the surplus produce from area farms. Commonwealth Kitchen and Western Massachusetts Food Processing Center will process the produce into products such as diced onions or crushed tomatoes that can be used year-round in recipes, sold in campus cafés, and made available in grocery and convenience stores. The Food Vision Prize supports this effort by allowing organizations to dedicate staff to managing the effort and rolling it out in a way that engages campus communities.
“This award recognizes the innovative ways MIT is working to solve for sustainability across food systems,” says Director of Sustainability Julie Newman. “Building creative partnerships across campus and communities helps us tackle these big challenges, and this award supports our work in doing that.” MIT Dining is continually working to add value and choice to eating options at MIT. Send suggestions, comments, or any other food for thought to foodstuff@mit.edu. MIT Dining recently won the New England Food Vision Prize. “Food from Here” combines resources from area universities, local food-processing collaboratives, and regional farms to sustainably increase the amount of local food served on campus. Photo: MIT Dining https://news.mit.edu/2019/coated-seeds-agriculture-marginal-lands-1125 A specialized silk covering could protect seeds from salinity while also providing fertilizer-generating microbes. Mon, 25 Nov 2019 14:59:59 -0500 https://news.mit.edu/2019/coated-seeds-agriculture-marginal-lands-1125 David L. Chandler | MIT News Office Providing seeds with a protective coating that also supplies essential nutrients to the germinating plant could make it possible to grow crops in otherwise unproductive soils, according to new research at MIT. A team of engineers has coated seeds with silk that has been treated with a kind of bacteria that naturally produce a nitrogen fertilizer, to help the germinating plants develop. Tests have shown that these seeds can grow successfully in soils that are too salty to allow untreated seeds to develop normally.
The researchers hope this process, which can be applied inexpensively and without the need for specialized equipment, could open up areas of land to farming that are now considered unsuitable for agriculture. The findings are being published this week in the journal PNAS, in a paper by graduate students Augustine Zvinavashe ’16 and Hui Sun, postdoc Eugen Lim, and professor of civil and environmental engineering Benedetto Marelli. The work grew out of Marelli’s previous research on using silk coatings as a way to extend the shelf life of seeds used as food crops. “When I was doing some research on that, I stumbled on biofertilizers that can be used to increase the amount of nutrients in the soil,” he says. These fertilizers use microbes that live symbiotically with certain plants and convert nitrogen from the air into a form that can be readily taken up by the plants. Not only does this provide a natural fertilizer to the plant crops, but it avoids problems associated with other fertilizing approaches, he says: “One of the big problems with nitrogen fertilizers is they have a big environmental impact, because they are very energetically demanding to produce.” These artificial fertilizers may also have a negative impact on soil quality, according to Marelli. Although these nitrogen-fixing bacteria occur naturally in soils around the world, with different local varieties found in different regions, they are very hard to preserve outside of their natural soil environment. But silk can preserve biological material, so Marelli and his team decided to try it out on these nitrogen-fixing bacteria, known as rhizobacteria. “We came up with the idea to use them in our seed coating, and once the seed was in the soil, they would resuscitate,” he says.
Preliminary tests did not turn out well, however; the bacteria weren’t preserved as well as expected. That’s when Zvinavashe came up with the idea of adding a particular nutrient to the mix, a kind of sugar known as trehalose, which some organisms use to survive under low-water conditions. The silk, bacteria, and trehalose were all suspended in water, and the researchers simply soaked the seeds in the solution for a few seconds to produce an even coating. Then the seeds were tested at both MIT and a research facility operated by the Mohammed VI Polytechnic University in Ben Guerir, Morocco. “It showed the technique works very well,” Zvinavashe says. The resulting plants, helped by ongoing fertilizer production by the bacteria, developed in better health than those from untreated seeds and grew successfully in soil from fields that are presently not productive for agriculture, Marelli says. In practice, such coatings could be applied to the seeds by either dipping or spray coating, the researchers say. Either process can be done at ordinary ambient temperature and pressure. “The process is fast, easy, and it might be scalable” to allow for larger farms and unskilled growers to make use of it, Zvinavashe says. “The seeds can be simply dip-coated for a few seconds,” producing a coating that is just a few micrometers thick. The ordinary silk they use “is water soluble, so as soon as it’s exposed to the soil, the bacteria are released,” Marelli says. But the coating nevertheless provides enough protection and nutrients to allow the seeds to germinate in soil with a salinity level that would ordinarily prevent their normal growth.
“We do see plants that grow in soil where otherwise nothing grows,” he says. These rhizobacteria normally provide fertilizer to legume crops such as common beans and chickpeas, and those have been the focus of the research so far, but it may be possible to adapt them to work with other kinds of crops as well, and that is part of the team’s ongoing research. “There is a big push to extend the use of rhizobacteria to nonlegume crops,” he says. One way to accomplish that might be to modify the DNA of the bacteria, plants, or both, he says, but that may not be necessary. “Our approach is almost agnostic to the kind of plant and bacteria,” he says, and it may be feasible “to stabilize, encapsulate and deliver [the bacteria] to the soil, so it becomes more benign for germination” of other kinds of plants as well. Even if limited to legume crops, the method could still make a significant difference to regions with large areas of saline soil. “Based on the excitement we saw with our collaboration in Morocco,” Marelli says, “this could be very impactful.” As a next step, the researchers are working on developing new coatings that could not only protect seeds from saline soil, but also make them more resistant to drought, using coatings that absorb water from the soil. Meanwhile, next year they will begin test plantings out in open experimental fields in Morocco; their previous plantings have been done indoors under more controlled conditions. The research was partly supported by the Université Mohammed VI Polytechnique-MIT Research Program, the Office of Naval Research, and the Office of the Dean for Graduate Fellowship and Research. Researchers have used silk derived from ordinary silkworm cocoons, like those seen here, mixed with bacteria and nutrients, to make a coating for seeds that can help them germinate and grow even in salty soil.
Image courtesy of the researchers https://news.mit.edu/2019/microparticles-fight-malnutrition-1113 New strategy for encapsulating nutrients makes it easier to fortify foods with iron and vitamin A. Wed, 13 Nov 2019 13:59:59 -0500 https://news.mit.edu/2019/microparticles-fight-malnutrition-1113 Anne Trafton | MIT News Office About 2 billion people around the world suffer from deficiencies of key micronutrients such as iron and vitamin A. Two million children die from these deficiencies every year, and people who don’t get enough of these nutrients can develop blindness, anemia, and cognitive impairments. MIT researchers have now developed a new way to fortify staple foods with these micronutrients by encapsulating them in a biocompatible polymer that prevents the nutrients from being degraded during storage or cooking. In a small clinical trial, they showed that women who ate bread fortified with encapsulated iron were able to absorb iron from the food. “We are really excited that our team has been able to develop this unique nutrient-delivery system that has the potential to help billions of people in the developing world, and taken it all the way from inception to human clinical trials,” says Robert Langer, the David H. Koch Institute Professor at MIT and a member of MIT’s Koch Institute for Integrative Cancer Research. The researchers now hope to run clinical trials in developing nations where micronutrient deficiencies are common. Langer and Ana Jaklenec, a research scientist at the Koch Institute, are the senior authors of the study, which appears today in Science Translational Medicine. The paper’s lead authors are former MIT postdocs Aaron Anselmo and Xian Xu, and ETH Zurich graduate student Simone Buerkli.

Protecting nutrients

Lack of vitamin A is the world’s leading cause of preventable blindness, and it can also impair immunity, making children more susceptible to diseases such as measles. 
Iron deficiency can lead to anemia and also impairs cognitive development in children, contributing to a “cycle of poverty,” Jaklenec says. “These children don’t do well in school because of their poor health, and when they grow up, they may have difficulties finding a job, so their kids are also living in poverty and often without access to education,” she says. The MIT team, funded by the Bill and Melinda Gates Foundation, set out to develop new technology that could help with efforts to fortify foods with essential micronutrients. Fortification has proven successful in the past with iodized salt, for example, and offers a way to incorporate nutrients in a way that doesn’t require people to change their eating habits. “What’s been shown to be effective for food fortification is staple foods, something that’s in the household and people use every day,” Jaklenec says. “Everyone eats salt or flour, so you don’t need to change anything in their everyday practices.” However, simply adding vitamin A or iron to foods doesn’t work well. Vitamin A is very sensitive to heat and can be degraded during cooking, and iron can bind to other molecules in food, giving the food a metallic taste. To overcome that, the MIT team set out to find a way to encapsulate micronutrients in a material that would protect them from being broken down or interacting with other molecules, and then release them after being consumed. The researchers tested about 50 different polymers and settled on one known as BMC. This polymer is currently used in dietary supplements, and in the United States it is classified as “generally recognized as safe.” Using this polymer, the researchers showed that they could encapsulate 11 different micronutrients, including zinc, vitamin B2, niacin, biotin, and vitamin C, as well as iron and vitamin A. 
They also demonstrated that they could encapsulate combinations of up to four of the micronutrients together. Tests in the lab showed that the encapsulated micronutrients were unharmed after being boiled for two hours. The encapsulation also protected nutrients from ultraviolet light and from oxidizing chemicals, such as polyphenols, found in fruits and vegetables. When the particles were exposed to very acidic conditions (pH 1.5, typical of the pH in the stomach), the polymer became soluble and the micronutrients were released. In tests in mice, the researchers showed that the particles broke down in the stomach, as expected, and the cargo traveled to the small intestine, where it can be absorbed.

Iron boost

After the successful animal tests, the researchers decided to test the encapsulated micronutrients in human subjects. The trial was led by Michael Zimmerman, a professor of health sciences and technology at ETH Zurich who studies nutrition and food fortification. In their first trial, the researchers incorporated encapsulated iron sulfate into maize porridge, a corn-derived product common in the developing world, and mixed the maize with a vegetable sauce. In that initial study, they found that people who ate the fortified maize — female university students in Switzerland, most of whom were anemic — did not absorb as much iron as the researchers hoped they would. The amount of iron absorbed was a little less than half of what was absorbed by subjects who consumed iron sulfate that was not encapsulated. After that, the researchers decided to reformulate the particles and found that if they boosted the percentage of iron sulfate in the particles from 3 percent to about 18 percent, they could achieve iron absorption rates very similar to those for unencapsulated iron sulfate. 
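The reformulation arithmetic can be made concrete. In this back-of-the-envelope sketch, only the “a little less than half” ratio and the 3 percent to 18 percent loading change come from the trial; the baseline absorption fraction is a hypothetical placeholder, not a reported figure:

```python
# Illustrative numbers: only first_trial_ratio ("a little less than half")
# and the 3% -> 18% iron sulfate loading come from the article; the
# baseline absorption fraction is an assumed placeholder.
baseline_absorption = 0.20          # hypothetical fraction absorbed from plain FeSO4
first_trial_ratio = 0.45            # encapsulated vs. unencapsulated, first trial
encapsulated_absorption = baseline_absorption * first_trial_ratio

# Raising the loading cuts the amount of polymer wrapped around each
# gram of iron, one way to read why absorption recovered:
for loading in (0.03, 0.18):
    polymer_per_unit_iron = (1 - loading) / loading
    print(f"{loading:.0%} loading: {polymer_per_unit_iron:.1f} g polymer per g iron")

print(f"trial-1 encapsulated absorption: {encapsulated_absorption:.1%}")
```

At 3 percent loading, each gram of iron travels with roughly 32 grams of polymer; at 18 percent, under 5 grams, so far less material stands between the iron and the gut.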
In that second trial, also conducted at ETH, they mixed the encapsulated iron into flour and then used it to bake bread. “Reformulation of the microparticles was possible because our platform was tunable and amenable to large-scale manufacturing approaches,” Anselmo says. “This allowed us to improve our formulation based on the feedback from the first trial.” The next step, Jaklenec says, is to try a similar study in a country where many people experience micronutrient deficiencies. The researchers are now working on gaining regulatory approval from the Joint Food and Agriculture Organization/World Health Organization Expert Committee on Food Additives. They are also working on identifying other foods that would be useful to fortify, and on scaling up their manufacturing process so they can produce large quantities of the powdered micronutrients. Other authors of the paper are Yingying Zeng, Wen Tang, Kevin McHugh, Adam Behrens, Evan Rosenberg, Aranda Duan, James Sugarman, Jia Zhuang, Joe Collins, Xueguang Lu, Tyler Graf, Stephany Tzeng, Sviatlana Rose, Sarah Acolatse, Thanh Nguyen, Xiao Le, Ana Sofia Guerra, Lisa Freed, Shelley Weinstock, Christopher Sears, Boris Nikolic, Lowell Wood, Philip Welkhoff, James Oxley, and Diego Moretti. MIT engineers have developed a way to encapsulate nutrients in a biocompatible polymer, making it easier to use them to fortify foods. Image: Second Bay Studios https://news.mit.edu/2019/j-wafs-global-food-security-challenges-agricultural-impacts-climate-crisis-1030 The Abdul Latif Jameel Water and Food Systems Lab presents a new report on climate, agriculture, water, and food security — with plans for more research. 
Wed, 30 Oct 2019 15:50:01 -0400 https://news.mit.edu/2019/j-wafs-global-food-security-challenges-agricultural-impacts-climate-crisis-1030 Andi Sutton | Abdul Latif Jameel Water and Food Systems Lab Early this August, the UN Intergovernmental Panel on Climate Change issued yet another in a series of grave and disquieting reports outlining the extreme challenges placed on the Earth’s systems by the climate crisis. Most IPCC reports and accompanying media coverage tend to emphasize greenhouse gas (GHG) emissions from energy and transportation sectors, along with the weather and sea-level impacts of climate change and their direct impact on vulnerable human populations. However, this particular report, the “Special Report on Climate Change and Land,” presents a sobering set of data and analyses addressing the substantial contributions of agriculture to climate change and the ways the climate crisis is projected to jeopardize global food security if urgent action is not taken at the individual, institutional, industry, and governmental levels. There is an ever-increasing public awareness about climate’s effects on the frequency and intensity of extreme weather, threats to coastal cities, and the rapid decline in the biodiversity of the Earth’s ecosystems. However, the impact of climate change on land and food production — and the impact of our food systems on climate change — is just beginning to enter the wider public discourse. Food systems are responsible for up to 30 percent of global GHG emissions, with agricultural activities accounting for up to 86 percent of total food-system emissions. And agriculture is a sector that is put at significant risk by the direct and indirect effects of the Earth’s rising temperatures. In order to adapt to future climate uncertainty and to minimize agricultural greenhouse gas emissions, strategies addressing the sustainability and adaptive capacity of food systems must be developed and rapidly implemented. 
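The two upper-bound percentages quoted above can be combined in a small arithmetic check to see agriculture's worst-case share of total emissions:

```python
# Upper-bound figures quoted in the report: food systems account for up to
# 30% of global GHG emissions, and agriculture for up to 86% of food-system
# emissions. Multiplying gives agriculture's worst-case global share.
food_share_of_global = 0.30
ag_share_of_food = 0.86

ag_share_of_global = food_share_of_global * ag_share_of_food
print(f"agriculture alone: up to ~{ag_share_of_global:.0%} of global GHG emissions")
```

That is, under these upper bounds agricultural activities alone could account for roughly a quarter of all global greenhouse gas emissions.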
With so much at stake, targeted research that reaches beyond disciplinary and institutional boundaries is needed. Since its 2014 launch at MIT, the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) has promoted research and innovation across diverse disciplines that will help ensure the resilience of the world’s water and food systems even as they are increasingly pressured by the effects of climate change. Its newly released report, “Climate Change, Agriculture, Water, and Food Security: What We Know and Don’t Know,” is part of this effort. The report collects the central findings of an expert workshop conducted by J-WAFS in May 2018. The workshop gathered 46 experts in agriculture, climate, engineering, and the physical and natural sciences from around the world — several of whom were also involved in writing the August 2019 IPCC report — to discuss current understanding of the complex relationship between climate change and agriculture. This report, based on the workshop deliberations, initiates a longer study that will directly engage stakeholders to address how research can be best targeted to the needs of policymakers, funders, and other decision-makers and stakeholders. Central to the conclusions of the 2018 workshop was widespread agreement among participants on the need for convergence research that addresses the climate crisis in food systems. Convergence research is built around deep integration across disciplines in order to address complex problems centered on societal need. By deploying transdisciplinary teams with expertise in plant, soil, and climate science, agricultural technologies, agribusiness, economics, behavior change and communication, marketing, nutrition, and public policy, convergence research promotes innovative approaches to formulating and evaluating adaptation and mitigation strategies for future food security. A study that J-WAFS is now launching will take this approach. 
As part of the new study, J-WAFS is partnering with three internationally renowned institutions with complementary expertise in agriculture and food systems. Titled “Climate Change and Global Food Systems: Supporting Adaptation and Mitigation Strategies with Research,” the collaborative project will leverage the myriad disciplines and specialties of a cross-institutional group of researchers, along with stakeholders and decision-makers, in order to develop a prioritized, actionable, solutions-oriented research agenda. The project’s goal is to determine which research questions must be answered, and which innovations must be prioritized, in order to ensure that global food security can be met even while the climate crisis wreaks havoc on global food systems. The project will help develop stronger connections and collaborative partnerships across diverse research communities (in particular, MIT and the partner universities) and with the stakeholders and decision-makers who fund research, develop policy, and implement programs to support agriculture and food security. The three collaborating universities that are joining MIT in this effort are: Wageningen University in the Netherlands — an institution at the forefront of agriculture and food systems research; Tufts University — an international leader in interdisciplinary food and nutrition research, especially through its Friedman School of Nutrition Science and Policy; and the University of California at Davis, whose College of Agricultural and Environmental Sciences ranks No. 1 in the United States for agriculture, plant sciences, animal science, forestry, and agricultural economics. Says Ermias Kebreab, associate dean for global engagement in the College of Agricultural and Environmental Sciences at UC Davis, “the project will address several grand challenges that align very well with the mission and goals of the UC Davis College of Agricultural and Environmental Sciences.  
Collaborating with MIT and other project partners presents exciting opportunities to extend the reach and impact of the UC Davis research.” With the potentially dire impacts of the climate crisis on our global food systems, opportunities for transformative change must be found. But there currently exist significant knowledge gaps on the best practices, technologies, policies, and development approaches for achieving food security with win-win solutions at the nexus of climate change and food systems. J-WAFS’ workshop report emphasized that more research is required to better characterize specific challenges and to develop, evaluate, and implement effective strategies. Specific areas where research presents significant opportunities include understanding and improving soil quality and fertility; the development of technologies such as advanced biotechnology, carbon sequestration, and geospatial tools; fundamental research questions about crop response to environmental stresses, such as high temperatures and drought; improvements to crop and climate models; approaches to managing risk in the face of uncertainty; and the development of strategies to effect behavioral change, particularly around food choices. It may yet be possible to sustainably produce enough nutritious food to feed the world while at the same time reversing the current trends in its production that damage the environment. As stated by John H. 
Lienhard V, J-WAFS director and MIT professor, “the next green revolution will be delivered using new farming practices, emerging scientific discoveries, technological breakthroughs, and insights from the social sciences, all combined to provide effective policies, equitable social programs, and much-needed changes in consumer behavior.” If the world is to be free of hunger and malnutrition in accordance with the 2030 UN Sustainable Development Goals, actions to strengthen the resilience and adaptive capacity of food systems must be rapidly implemented in order to adapt to climate change. Research launched by J-WAFS seeks to map out the most strategic ways that research can be used to ensure a global transition toward food-system sustainability. https://news.mit.edu/2019/mit-process-could-make-hydrogen-peroxide-available-remote-places-1023 MIT-developed method may lead to portable devices for making the disinfectant on-site where it’s needed. 
Wed, 23 Oct 2019 16:00:00 -0400 https://news.mit.edu/2019/mit-process-could-make-hydrogen-peroxide-available-remote-places-1023 David L. Chandler | MIT News Office Hydrogen peroxide, a useful all-purpose disinfectant, is found in most medicine cabinets in the developed world. But in remote villages in developing countries, where it could play an important role in health and sanitation, it can be hard to come by. Now, a process developed at MIT could lead to a simple, inexpensive, portable device that could produce hydrogen peroxide continuously from just air, water, and electricity, providing a way to sterilize wounds, food-preparation surfaces, and even water supplies. The new method is described this week in the journal Joule in a paper by MIT students Alexander Murray, Sahag Voskian, and Marcel Schreier and MIT professors T. Alan Hatton and Yogesh Surendranath. Even at low concentrations, hydrogen peroxide is an effective antibacterial agent, and after carrying out its sterilizing function it breaks down into plain water, in contrast to other agents such as chlorine that can leave unwanted byproducts from their production and use. Hydrogen peroxide is just water with an extra oxygen atom tacked on — it’s H2O2, instead of H2O. That extra oxygen is relatively loosely bound, making it a highly reactive chemical eager to oxidize any other molecules around it. It’s so reactive that in high concentrations it can be used as rocket fuel, and even concentrations of 35 percent require very special handling and shipping procedures. The kind used as a household disinfectant is typically only 3 percent hydrogen peroxide and 97 percent water. Because high concentrations are hard to transport, and low concentrations, being mostly water, are uneconomical to ship, the material is often hard to get in places where it could be especially useful, such as remote communities with untreated water. (Bacteria in water supplies can be effectively controlled by adding hydrogen peroxide.) 
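The shipping-economics point can be made concrete with the two concentrations mentioned here, household 3 percent and the 35 percent grade that needs special handling:

```python
# Mass of water moved per kilogram of actual H2O2 at the two concentrations
# the article mentions; dilute solutions are mostly water by weight.
for concentration in (0.03, 0.35):
    water_per_kg = (1 - concentration) / concentration
    print(f"{concentration:.0%} solution: {water_per_kg:.1f} kg water per kg H2O2")
```

Shipping household-strength peroxide means moving more than 32 kilograms of water for every kilogram of active ingredient, which is why on-site production is attractive for remote areas.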
As a result, many research groups around the world have been pursuing approaches to developing some form of portable hydrogen peroxide production equipment. Most of the hydrogen peroxide produced in the industrialized world is made in large chemical plants, where methane, or natural gas, is used to provide a source of hydrogen, which is then reacted with oxygen in a catalytic process under high heat. This process is energy-intensive and not easily scalable, requiring large equipment and a steady supply of methane, so it does not lend itself to smaller units or remote locations. “There’s a growing community interested in portable hydrogen peroxide,” Surendranath says, “because of the appreciation that it would really meet a lot of needs, both on the industrial side as well as in terms of human health and sanitation.” Other processes developed so far for potentially portable systems have key limitations. For example, most catalysts that promote the formation of hydrogen peroxide from hydrogen and oxygen also make a lot of water, leading to low concentrations of the desired product. Also, processes that involve electrolysis, as this new process does, often have a hard time separating the produced hydrogen peroxide from the electrolyte material used in the process, again leading to low efficiency. Surendranath and the rest of the team solved the problem by breaking the process down into two separate steps. First, electricity (ideally from solar cells or windmills) is used to break down water into hydrogen and oxygen, and the hydrogen then reacts with a “carrier” molecule. This molecule — a compound called anthraquinone, in these initial experiments — is then introduced into a separate reaction chamber where it meets with oxygen taken from the outside air, and a pair of hydrogen atoms binds to an oxygen molecule (O2) to form the hydrogen peroxide. 
In the process, the carrier molecule is restored to its original state and returns to carry out the cycle all over again, so none of this material is consumed. The process could address numerous challenges, Surendranath says, by making clean water, first-aid care for wounds, and sterile food preparation surfaces more available in places where they are presently scarce or unavailable. “Even at fairly low concentrations, you can use it to disinfect water of microbial contaminants and other pathogens,” Surendranath says. And, he adds, “at higher concentrations, it can be used even to do what’s called advanced oxidation,” where in combination with UV light it can be used to decontaminate water of even strong industrial wastes, for example from mining operations or hydraulic fracking. So, for example, a portable hydrogen peroxide plant might be set up adjacent to a fracking or mining site and used to clean up its effluent, then moved to another location once operations cease at the original site. In this initial proof-of-concept unit, the concentration of hydrogen peroxide produced is still low, but further engineering of the system should lead to being able to produce more concentrated output, Surendranath says. “One of the ways to do that is to just increase the concentration of the mediator, and fortunately, our mediator has already been used in flow batteries at really high concentrations, so we think there’s a route toward being able to increase those concentrations,” he says. 
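One way to check the two-step cycle described above is simple stoichiometric bookkeeping. This is a sketch, not the authors' model; the step counts are chosen so the carrier cancels, using anthraquinone (AQ) as the hydrogen carrier:

```python
# Net bookkeeping for the mediated cycle: electrolysis, hydrogenation of the
# carrier, then peroxide formation. Negative counts = consumed, positive =
# produced. A stoichiometry sketch only, not the published kinetic model.
from collections import Counter

steps = [
    # electrolyzer: 2 H2O -> 2 H2 + O2
    ({"H2O": 2}, {"H2": 2, "O2": 1}),
    # hydrogenation: AQ + H2 -> AQH2 (twice, consuming both H2)
    ({"AQ": 2, "H2": 2}, {"AQH2": 2}),
    # peroxide-forming step: AQH2 + O2 -> AQ + H2O2 (twice)
    ({"AQH2": 2, "O2": 2}, {"AQ": 2, "H2O2": 2}),
]

net = Counter()
for consumed, produced in steps:
    net.subtract(consumed)
    net.update(produced)

# Drop species that cancel over the full cycle:
net = {species: n for species, n in net.items() if n != 0}
print(net)
```

The carrier (AQ/AQH2) and the intermediate hydrogen cancel out, matching the claim that none of the carrier is consumed; the leftover deficit of one O2 is the oxygen drawn from outside air in the second chamber, and the net reaction is 2 H2O + O2 → 2 H2O2.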
“It’s kind of an amazing process,” he says, “because you take abundant things, water, air and electricity, that you can source locally, and you use it to make this important chemical that you can use to actually clean up the environment and for sanitation and water quality.” “The ability to create a hydrogen peroxide solution in water without electrolytes, salt, base, etc., all of which are intrinsic to other electrochemical processes, is noteworthy,” says Shannon Stahl, a professor of chemistry at the University of Wisconsin, who was not involved in this work. Stahl adds that “Access to salt-free aqueous solutions of H2O2 has broad implications for practical applications.” Stahl says that “This work represents an innovative application of ‘mediated electrolysis.’ Mediated electrochemistry provides a means to merge conventional chemical processes with electrochemistry, and this is a particularly compelling demonstration of this concept. … There are many potential applications of this concept.” In a new method to produce hydrogen peroxide portably, an electrolyzer (left) splits water into hydrogen and oxygen. The hydrogen atoms initially form in an electrolyte material (green), which transfers them to a mediator material (red), which then carries them to a separate unit where the mediator comes in contact with oxygen-rich water (blue), where the hydrogen combines with it to form hydrogen peroxide. The mediator then returns to begin the cycle again. Image courtesy of the researchers. https://news.mit.edu/2019/scaling-cleaner-burning-alternative-cookstoves-1023 Mechanical engineering students in MIT D-Lab are working with collaborators in Uganda on a solution for the health hazards associated with wood-burning stoves. 
Tue, 22 Oct 2019 23:59:59 -0400 https://news.mit.edu/2019/scaling-cleaner-burning-alternative-cookstoves-1023 Mary Beth Gallagher | Department of Mechanical Engineering For millions of people globally, cooking in their own homes can be detrimental to their health, and sometimes deadly. The World Health Organization estimates that 3.8 million people a year die as a result of the soot and smoke generated in traditional wood-burning cookstoves. Women and children in particular are at risk of pneumonia, stroke, lung cancer, or low birth weight. “All their life they’re exposed to this smoke,” says Betty Ikalany, founder and chief executive director of Appropriate Energy Saving Technologies (AEST). “Ten thousand women die annually in Uganda because of inhaling smoke from cookstoves.” Ikalany is working to eliminate the health risks associated with cookstoves in Uganda. In 2012 she met Amy Smith, founding director of MIT D-Lab, who introduced her to D-Lab’s method of manufacturing briquettes that produce no soot and very little smoke. Ikalany saw an opportunity to use this technology in Uganda, and founded AEST that same year. She started assembling a team to produce and distribute the briquettes. Made of charcoal dust, carbonized agricultural waste such as peanut shells and corn husks, and a cassava-water porridge, which acts as a binding agent, the briquettes are wet initially. To be usable in a cookstove, they must be completely dried. Ikalany’s team dries the briquettes on open-air racks. In ideal sunny conditions, it takes three days for the briquettes to dry. Inclement weather or humidity can substantially slow down the evaporation needed to dry the briquettes. When it rains, the briquettes are covered with tarps, completely halting the drying process. “The drying of the briquettes is the bottleneck of the whole process,” says Danielle Gleason, a senior studying mechanical engineering. 
“In order to scale up production and keep growing as a business, Betty and her team realized that they needed to improve the drying process.” Gleason was one of several students who were connected to Ikalany through MIT D-Lab courses. While taking the cross-listed MIT D-Lab class 2.651/EC.711 (Introduction to Energy in Global Development) as a sophomore, she worked on a project that sought to optimize the drying process in charcoal briquettes. That summer, she traveled to Uganda to meet with Ikalany’s team along with Daniel Sweeney, a research scientist at MIT D-Lab. “Drawing upon their strong theoretical foundation and experiences in the lab and the classroom, we want our students to go out into the field and make real things that have a lasting impact,” explains Maria Yang, professor of mechanical engineering and faculty academic director at MIT D-Lab. During her first trip to Uganda, Gleason focused on information gathering and identifying where there were pain points in the production process of the briquettes. “I went to Uganda not to present an incredibly complex solution, but simply to learn from our community partners, to share some ideas our team has been working on, and to work directly with those who will be impacted by our designs,” adds Gleason. Armed with a better understanding of AEST’s production process, Gleason continued to develop ideas for improving the drying process when she returned to MIT last fall. In MIT D-Lab 2.652/EC.712 (Applications of Energy in Global Development), she worked with a team of students on various designs for a new drying system. “We spent a whole semester figuring out how to improve this airflow and naturally convect the air,” Gleason explains. With sponges acting as stand-ins for the charcoal briquettes, Gleason and her team used heat lamps to replicate the heat and humidity in Uganda. They developed three different designs for tent-like structures that could facilitate drying at all times — even when raining. 
At the end of the semester, it was time to put these designs to the test. “You can prototype and test all you want, but until you visit the field and experience the real-world conditions and work with the people who will be using your designs, you never fully understand the problem,” adds Gleason. Last January, during MIT’s Independent Activity Period, Gleason returned to Uganda to test designs. She and her team found out that their original idea of having a slanted dryer didn’t work in real-world conditions. Outside of the controlled conditions in the lab, their dryers didn’t have enough air flow to speed up the drying process. They spent several weeks troubleshooting dryer designs with Ikalany and her team. The team ended up designing covered dryers that allowed the briquettes to dry in both sun and rain, increasing the overall throughput. “We believe that once we are able to scale up what we have learned from Danielle and her team we should be able to produce five times more a day,” says Ikalany. “Our production capacity will increase and the demand for customers will be met.” In addition to helping Ikalany scale up the production of the potentially life-saving briquettes, Gleason and her fellow students left Uganda with a broadened world view. “For most students, this is the first time they will visit these countries,” adds Yang. “Not only do we want to benefit our collaborators, we want our students to gain formative and enriching experiences.” Gleason left Uganda with a deeper appreciation of community. “Seeing how close the community Betty and her team are a part of really made me value the idea of community more,” she recalls. While other students will pick up where Gleason and her team left off in their work with Ikalany in the coming months, Gleason hopes to continue working on solutions in the developing world as she explores future career paths. 
“I really love looking at how people interact with the things they use, and I think there’s so much room for growth in user-interfacing in the developing world,” she says. Senior Danielle Gleason (right) speaks with Goretti Ariago (center) and Salume Awiyo (left), employees of Appropriate Energy Saving Technologies, in Soroti, Uganda. Gleason has made two trips to Uganda to help streamline the production of charcoal briquettes, which offer a low-smoke alternative for home cooking fuel. Photo: John Freidah https://news.mit.edu/2019/new-vending-machines-expand-fresh-food-options-campus-1018 Now at the Student Center and Building 16, Fresh Fridge by 6am Health offers healthy options for eating on the go. Fri, 18 Oct 2019 14:30:01 -0400 https://news.mit.edu/2019/new-vending-machines-expand-fresh-food-options-campus-1018 Julia Newman | Division of Student Life To expand the fresh, healthy meal choices on campus, MIT Dining recently rolled out a pilot of 6am Health’s Fresh Fridge vending machines in the 5th floor lounge of the Stratton Student Center (Building W20) and in the vending-machine area at the first-floor intersection of Buildings 16 and 26. “Fresh Fridge’s delicious meals are packed in reusable jars, which is great for students on the go who want something they can throw in their backpacks but is more healthful than a granola bar or instant ramen,” says Mark Hayes, MIT Dining director. Options include quinoa bowls, salads, overnight oats, sandwiches, and fresh juices. The machines accept credit cards and phone-based contactless payment methods, but other ways to pay are in the works. Fresh Fridges are another way MIT Dining is working to add value and choice to eating options at MIT. Try out a Fresh Fridge machine and send suggestions, comments, or any other food for thought to foodstuff@mit.edu. 
6am Health’s Fresh Fridge vending machines are located in the Stratton Student Center’s 5th floor lounge and in the vending-machine area at the first-floor intersection of Buildings 16 and 26. Photo: Julia Newman/DSL Communications https://news.mit.edu/2019/first-year-water-bottles-reuse-refill-replenish-0925 MIT welcomed the Class of 2023 with an initiative to reduce the impact of water consumption through reusable water bottles and other sustainable habits. Wed, 25 Sep 2019 15:25:01 -0400 https://news.mit.edu/2019/first-year-water-bottles-reuse-refill-replenish-0925 Lisa Miller | Abdul Latif Jameel Water and Food Systems Lab During the week of Aug. 26, MIT welcomed its Class of 2023. Participating in the usual orientation activities, they learned about research opportunities, course options, and important resources to help them navigate the Institute. Breaking for lunch each day, new MIT students poured into Kresge Oval where they could picnic under a large tent propped over the grass, providing much-needed shade on these hot August days. New to Kresge Oval this year was a mobile filling station full of cool, fresh, locally sourced water provided by the Massachusetts Water Resources Authority. Next to the filling station, free reusable water bottles were being given out to all MIT students. These bottles were more than just swag. They were part of a collaborative effort of MIT’s Office of Sustainability (MITOS), the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS), the Environmental Solutions Initiative (ESI), MIT Dining, the Office of the First Year, and the MIT Water Club to encourage sustainable water use practices across MIT’s campus and reduce waste by advocating for the regular use of reusable bottles and other serviceware throughout campus. Only students who took a pledge to use their bottle at least 10 times were allowed to take one away. 
The idea for the bottle giveaway was first raised by the Sustainability Leadership Steering Committee, co-chaired by Julie Newman, the Institute’s director of sustainability, and David McGee, associate professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences. When welcoming new students, they wanted to introduce them to MIT’s commitment to building a sustainable future and educate them about the benefits of choosing tap water over single-use plastic bottles. J-WAFS, ESI, MIT Dining, and the MIT Water Club joined the effort to help spread this message and expose incoming students to their sustainability work at MIT. Students who wanted a bottle were asked to take a pledge to use it at least 10 times. Why 10? MITOS, along with Greg Norris, director of the Sustainability and Health Initiative for NetPositive Enterprise at MIT, helped establish that using the stainless steel bottle just 10 times would provide better environmental performance than a typical single-use water bottle, and that the positive impact would continue to grow with future use. When one first-year was asked if he could commit to using the bottle 10 times, he replied, “Oh, heck yeah!” With the water trailer right there, students could fill their bottles right away, bringing them one-tenth of the way to fulfilling their pledge. The bottles were branded with the reminder: “Reuse, refill, replenish.” They were designed by MIT architecture senior and MITOS student fellow Effie Jia to encourage students to incorporate reusable bottle use, as well as other sustainable practices, into their lives, such as using their own reusable serviceware for takeout and using reusable bags instead of disposable plastic or paper. The bottles are insulated for hot and cold beverages, making them even more versatile. “The perfect size for tea and coffee,” remarked one student. 
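The “use it 10 times” threshold is, at bottom, a break-even calculation: the reusable bottle’s one-time manufacturing footprint is amortized against the recurring footprint of the disposables each refill replaces. A back-of-envelope sketch of that logic follows; the gram figures are hypothetical placeholders for illustration, not numbers from the MITOS analysis.

```python
import math

# Hypothetical embodied-carbon figures, in grams of CO2-equivalent.
# Illustrative placeholders only, NOT the MITOS/SHINE analysis numbers.
REUSABLE_BOTTLE_G_CO2E = 800   # one-time cost of manufacturing the steel bottle
SINGLE_USE_BOTTLE_G_CO2E = 80  # cost of each disposable bottle a refill replaces

def break_even_uses(reusable_g: float, single_use_g: float) -> int:
    """Refills needed before the reusable bottle beats buying disposables."""
    return math.ceil(reusable_g / single_use_g)

print(break_even_uses(REUSABLE_BOTTLE_G_CO2E, SINGLE_USE_BOTTLE_G_CO2E))  # 10
```

With these placeholder inputs the break-even lands at 10 refills; past that point every refill avoids the full footprint of another disposable bottle, which is why the benefit keeps growing with continued use.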
Staff members from MITOS, ESI, and J-WAFS also distributed bookmarks with information about drinking fountains on campus, advice to ask whether local cafés offer discounts for bringing refillable bottles, and a reminder to always wash out reusables to keep them clean and safe. To analyze the potential impact of the water bottle giveaway, event organizers will conduct a pair of follow-up surveys with the over 600 bottle recipients to test the persistence of bottle use and potential changes in awareness. “We are experimenting to determine if we can statistically articulate some impacts associated with the giveaway,” said MITOS director Julie Newman. “It is important with all our initiatives to try to measure success (or failure) so we can test our effectiveness.” While the event was about arming MIT students with sustainable tools, it was also focused on educating them about local water. Andrew Bouma and Patricia Stathatou, this year’s co-presidents of the MIT Water Club, shared information about the quality of Cambridge, Massachusetts, water, giving even more reason to choose tap water over bottled. The Water Club has run similar education events in the past, and has demonstrated that the taste of tap water, recycled water, and bottled water is virtually indistinguishable in a blind taste test. By educating incoming students about the high quality of Cambridge tap water, and about the energy, cost, and waste saved by choosing to reuse, they hoped to further support sustainable behavior change among the incoming class. Over the two days in which the water bottles were distributed, the turnout was remarkable. “We were thrilled with the outcome of the event,” said MITOS Sustainability Project Manager Steven Lanou. “Not only did we get to engage directly with over 600 first-year students to share information, we were also so encouraged by their enthusiasm and commitment to help MIT take this small step towards advancing sustainability on campus. 
ESI, J-WAFS, MIT Dining, MIT Water Club and Office of the First Year couldn’t have been better partners in this activity, and we look forward to many more collaborations in the future.” Water bottles designed by architecture student Effie Jia encourage students to take up environmentally friendly habits. Photo: Lisa Miller/J-WAFS https://news.mit.edu/2019/cody-friesen-awarded-lemelson-mit-prize-invention-0918 Materials scientist recognized for socially, economically, and environmentally sustaining inventions that impact millions of people around the world. Wed, 18 Sep 2019 10:10:01 -0400 https://news.mit.edu/2019/cody-friesen-awarded-lemelson-mit-prize-invention-0918 Stephanie Martinovich | Lemelson-MIT Program Cody Friesen PhD ’04, an associate professor of materials science at Arizona State University and founder of both Fluidic Energy and Zero Mass Water, was awarded the 2019 $500,000 Lemelson-MIT Prize for invention. Friesen has dedicated his career to inventing solutions that address two of the biggest challenges to social and economic advancement in the developing world: access to fresh water and reliable energy. His renewable water and energy technologies help fight climate change while providing valuable resources to underserved communities. Friesen’s first company, Fluidic Energy, was formed to commercialize and deploy the world’s first, and only, rechargeable metal-air battery, which can withstand many thousands of discharges. The technology has provided backup power during approximately 1 million long-duration outages, while simultaneously offsetting thousands of tons of carbon dioxide emissions. The batteries are currently being used as a secondary energy source on four continents at thousands of critical load sites and in dozens of microgrids. Several million people have benefited from access to reliable energy as a result of the technology. 
Fluidic Energy has been renamed NantEnergy, with Patrick Soon-Shiong investing significantly in the continued global expansion of the technology. Currently, Friesen’s efforts are focused on addressing the global water crisis through his company, Zero Mass Water. Friesen invented SOURCE Hydropanels, solar panels that make drinking water from sunlight and air. The invention is a true leapfrog technology and can make drinking water in dry conditions with relative humidity as low as 5 percent. SOURCE has been deployed in 33 countries spanning six continents. The hydropanels are providing clean drinking water in communities, refugee camps, government offices, hotels, hospitals, schools, restaurants, and homes around the world. “As inventors, we have a responsibility to ensure our technology serves all of humanity, not simply the elite,” says Friesen. “At the end of the day, our work is about impact, and this recognition propels us forward as we deploy SOURCE Hydropanels to change the human relationship to water across the globe.” Friesen joins a long lineage of inventors to receive the Lemelson-MIT Prize, which has been the largest cash prize for invention in the United States for 25 years. He will be donating his prize to a project with Conservation International to provide clean drinking water via SOURCE Hydropanels to the Bahia Hondita community in Colombia. “Cody’s inventive spirit, fueled by his strong desire to help improve the lives of people everywhere, is an inspiring role model for future generations,” says Michael Cima, faculty director for the Lemelson-MIT Program and associate dean of innovation for the MIT School of Engineering. “Water scarcity is a prominent global issue, which Cody is combating through technology and innovation. We are excited that the use of this award will further elevate his work.” “Cody Friesen embodies what it means to be an impact inventor,” notes Carol Dahl, executive director at the Lemelson Foundation. 
“His inventions are truly improving lives, take into account environmental considerations, and have become the basis for companies that impact millions of people around the world each year. We are honored to recognize Dr. Friesen as this year’s LMIT Prize winner.” Friesen will speak at EmTech MIT, the annual conference on emerging technologies hosted by MIT Technology Review at the MIT Media Lab, on Sept. 18 at 5 p.m. Cody Friesen is the winner of the 2019 Lemelson-MIT Prize for invention. Photo: Zero Mass Water https://news.mit.edu/2019/j-wafs-solutions-grants-hlb-greening-citrus-disease-clean-water-nepal-0917 Projects address access to clean water in Nepal via wearable E. coli test kits, improving the resilience of commercial citrus groves, and more. Tue, 17 Sep 2019 12:00:01 -0400 https://news.mit.edu/2019/j-wafs-solutions-grants-hlb-greening-citrus-disease-clean-water-nepal-0917 Andi Sutton | Abdul Latif Jameel Water and Food Systems Lab The development of new technologies often starts with funded university research. Venture capital firms are eager to back well-tested products or services that are ready to enter the startup phase. However, funding that bridges the gap between these two stages can be hard to come by. The Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) at MIT aims to fill this gap with its J-WAFS Solutions grant program. This program provides critical funding to students and faculty at MIT who have promising bench-scale technologies that can be applied to water and food systems challenges, but are not yet market-ready. By supporting the essential steps in any startup journey — customer discovery, market testing, prototyping, design, and more — and providing mentorship from industry experts throughout the life of the grant, the program helps to speed the development of new products and services that have the potential to increase the safety, resilience, and accessibility of the world’s water and food supplies. 
J-WAFS Solutions grants provide one year of financial support to MIT principal investigators with promising early-stage technologies, as well as mentorship from industry experts and experienced entrepreneurs throughout the grant. With additional networking and guidance provided by MIT’s Deshpande Center for Technological Innovation, project teams are supported as they advance their technologies toward commercialization. Since the start of the program in 2015, J-WAFS Solutions grants have already been instrumental in the launch of two MIT startups — Via Separations and Xibus Systems — as well as an open-source technology to support clean water access for the rural and urban poor in India. John H. Lienhard V, director of J-WAFS and Abdul Latif Jameel Professor of Water and Mechanical Engineering at MIT, describes the role of the J-WAFS Solutions program this way: “The combined effects of unsustainable human consumption patterns and the climate crisis threaten the world’s water and food supplies. These challenges are already present, and the risks were made plain in several recent, high-profile international news reports. Innovation in the water and food sectors can certainly help, and it is urgently needed. Through the J-WAFS Solutions program, we seek to identify nascent technologies with the greatest potential to transform local or even global food and water systems, and then to speed their transfer to market. We aim to leverage MIT’s entrepreneurial spirit to ensure that the water and food needs of our global human community can be met sustainably, now and far into the future.” Two projects funded by the J-WAFS Solutions program in 2019 are applying this entrepreneurial approach to sensors that support clean water and resilience in the agriculture industry. 
Three projects, all in the agriculture sector and funded by previous grants, are continuing this year, which together comprise a portfolio of exciting MIT technologies that are helping to resolve water and food challenges across the world.  Simplifying water quality testing in Nepal and beyond In 2018, the J-WAFS Solutions program supported a collaboration between the MIT-Nepal Initiative, led by professor of history Jeffrey Ravel, MIT D-Lab lecturer Susan Murcott, and the Nepalese non-governmental organization Environment and Public Health Organization (ENPHO). The project sought to refine the design of a wearable water test kit developed by Susan Murcott that provided simple, accessible ways to test the presence of E. coli in drinking water, even in the most remote settings. In that first year of J-WAFS funding, the research team worked with their Nepali partners, ENPHO, and their social business partner in Nepal, EcoConcern, to finalize the design of their product, called the ECC Vial, which, with the materials that they’ve now sourced, can be sold for less than $1 in Nepal — a significantly lower price than any other water-testing product on the market.   This technology is urgently needed by communities in Nepal, where many drinking water supplies are contaminated by E. coli. Standard testing practices are expensive, require significant laboratory infrastructure, or are just plain inaccessible to the many people exposed to unsafe drinking water. In fact, children under the age of 5 are the most vulnerable, and more than 40,000 children in Nepal alone die every year as a result of drinking contaminated water. The ECC Vial is intended to be the next-generation easy-to-use, portable, low-cost method for E. coli detection in water samples. It is particularly designed for simplicity and is appropriate for use in remote and low-resource settings. 
The 2019 renewal grant for the project “Manufacturing and Marketing EC-Kits in Nepal” will support the team in working with the same Nepali partners to optimize the manufacturing process for the ECC Vials and refine the marketing strategy, in order to ensure that the technology sold to customers is reliable and that the business model for local purveyors is viable now and into the future. Once the product enters the market this year, the team plans to begin distribution in Bangladesh, and will assess market opportunities in India, Pakistan, Peru, and Ghana, where there is a comparable need for a simple, affordable E. coli indicator testing product for use by government agencies, private water vendors, bottled water firms, international nonprofit organizations, and low-income populations without access to safe water. Based on consumer demand in Nepal and beyond, this solution has the potential to reach more than 3 million people during just its first two years on the market. Supporting the resilience of the citrus industry Citrus plants are very high-value crops and nutrient-dense foods. They are an important part of diets for people in developing countries with micronutrient deficiencies, as well as for people in developed economies who suffer from obesity and diet-related chronic diseases. Citrus fruits have become staples across seasons, cultures, and geographies, yet the large-scale citrus farms in the United States that support much of our domestic citrus consumption are challenged by citrus greening disease. Also known as Huanglongbing (HLB), it is an incurable disease caused by bacteria transmitted by a small insect, the Asian citrus psyllid. The bacterial infection causes trees to wither and fruit to develop an unpleasantly bitter taste, rendering it inedible. If left undetected, HLB can very quickly spread throughout large citrus groves. Since there is no treatment, infected trees must be removed to prevent further spreading. 
The disease poses an immediate threat to the $3.3 billion-per-year worldwide citrus industry. One of the reasons HLB is so troubling is that no accessible, affordable early-detection strategy yet exists. Once the observable symptoms of the disease have shown up in one part of a citrus grove, it is likely that many more trees are already infected. Taking on this challenge is a research team at MIT led by Karen Gleason, the Alexander and I. Michael Kasser (1960) Professor in the Department of Chemical Engineering. A 2019 J-WAFS Solutions grant for the project “Early detection of Huanglongbing (HLB) Citrus Greening Disease” is supporting the development of a new technology for early detection of HLB infection in citrus trees. The team’s strategy is to deploy an array of low-cost, high-sensitivity sensors that can be used on-site and are attuned to volatile organic compounds emitted by citrus trees, whose concentrations change during early-stage HLB infection, when trees do not yet exhibit visible symptoms. Using the data gathered by these sensors, an algorithm developed by the team predicts the presence of the disease with high accuracy, so that farmers and farm managers can make informed decisions about tree removal to protect the remaining trees in their citrus groves. Their aim is to detect HLB within months of infection, rather than the years it now takes. Currently funded J-WAFS Solutions technologies seeking to revolutionize agriculture practices Three other J-WAFS Solutions projects are continuing through the 2019-20 academic year. 
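In outline, the HLB detection scheme above pairs on-site VOC sensor readings with a trained prediction model. A toy sketch of such a pipeline follows; the compound names, weights, and threshold are invented for illustration and are not the MIT team's actual model, which the article does not describe in detail.

```python
import math

# Toy logistic model over VOC sensor readings (concentrations in ppb).
# Weights and bias are INVENTED placeholders, not the MIT team's model;
# in practice they would be fit to labeled readings from known trees.
WEIGHTS = {"voc_a": 0.9, "voc_b": -0.4, "voc_c": 0.6}
BIAS = -2.0

def infection_probability(readings: dict) -> float:
    """Map a dict of VOC readings to a probability of early-stage infection."""
    score = BIAS + sum(WEIGHTS[name] * ppb for name, ppb in readings.items())
    return 1.0 / (1.0 + math.exp(-score))  # logistic squashing into (0, 1)

def flag_for_review(readings: dict, threshold: float = 0.5) -> bool:
    """Flag a tree for closer inspection when the predicted risk is high."""
    return infection_probability(readings) >= threshold

healthy_tree = {"voc_a": 0.5, "voc_b": 2.0, "voc_c": 0.5}
suspect_tree = {"voc_a": 3.0, "voc_b": 0.5, "voc_c": 1.5}
print(flag_for_review(healthy_tree))  # False
print(flag_for_review(suspect_tree))  # True
```

The design choice this illustrates is the one the article describes: because the VOC signature shifts before visible symptoms appear, a model over sensor readings can flag likely infections months earlier, letting growers remove trees before HLB spreads.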
From a tractor-pulled reactor unit that can turn agricultural wastes on rural farms into nutrient-rich fertilizer, to a polymer-based additive for agriculture sprays that dramatically reduces runoff, recently featured by the BBC, to an affordable soil sensor that aims to make precision farming strategies available to smallholder farmers in India, these J-WAFS-funded projects each aim to transform the sustainability of small- and large-scale farming practices. The J-WAFS Solutions program is implemented in collaboration with Community Jameel — the global philanthropic organization founded by MIT alumnus Mohammed Jameel — and is administered by J-WAFS in partnership with the MIT Deshpande Center for Technological Innovation. Fady Jameel, president, international, of Community Jameel, says: “Access to clean water, and better management of water resources, can boost countries’ economic growth and can contribute greatly to poverty reduction. We always aim through J-WAFS to support the development and deployment of technologies, policies, and programs which will contribute to help humankind adapt to a rapidly changing planet and combat worldwide water scarcity and food supply.” Left: A water sample undergoing testing using the J-WAFS-funded water quality test kit soon to be deployed throughout Nepal. Right: Citrus greening disease, seen here on infected trees, is highly contagious and can wipe out whole orange groves. A J-WAFS-funded sensor could help farmers detect the disease much earlier. Image: Murcott/Ravel research team 
Tue, 17 Sep 2019 12:00:01 -0400 https://news.mit.edu/2019/j-wafs-solutions-grants-hlb-greening-citrus-disease-clean-water-nepal-0917 Andi Sutton | Abdul Latif Jameel Water and Food Systems Lab The development of new technologies often starts with funded university research. Venture capital firms are eager to back well-tested products or services that are ready to enter the startup phase. However, funding that bridges the gap between these two stages can be hard to come by. The Abdul Latif Jameel Water and Food System Lab (J-WAFS) at MIT aims to fill this gap with their J-WAFS Solutions grant program. This program provides critical funding to students and faculty at MIT who have promising bench-scale technologies that can be applied to water and food systems challenges, but are not yet market-ready. By supporting the essential steps in any startup journey — customer discovery, market testing, prototyping, design, and more — as well as mentorship from industry experts throughout the life of the grant, this grant program helps to speed the development of new products and services that have the potential to increase the safety, resilience, and accessibility of the world’s water and food supplies. J-WAFS Solutions grants provide one year of financial support to MIT principal investigators with promising early-stage technologies, as well as mentorship from industry experts and experienced entrepreneurs throughout the grant. With additional networking and guidance provided by MIT’s Deshpande Center for Technological Innovation, project teams are supported as they advance their technologies toward commercialization. Since the start of the program in 2015, J-WAFS Solutions grants have already been instrumental in the launch of two MIT startups — Via Separations and Xibus Systems — as well as an open-source technology to support clean water access for the rural and urban poor in India. John H. 
Lienhard V, director of J-WAFS and Abdul Latif Jameel Professor of Water and Mechanical Engineering at MIT, describes the role of the J-WAFS Solutions program this way: “The combined effects of unsustainable human consumption patterns and the climate crisis threaten the world’s water and food supplies. These challenges are already present, and the risks were made plain in several recent, high-profile international news reports. Innovation in the water and food sectors can certainly help, and it is urgently needed. Through the J-WAFS Solutions program, we seek to identify nascent technologies with the greatest potential to transform local or even global food and water systems, and then to speed their transfer to market. We aim to leverage MIT’s entrepreneurial spirit to ensure that the water and food needs of our global human community can be met sustainably, now and far into the future.” Two projects funded by the J-WAFS Solutions program in 2019 are applying this entrepreneurial approach to sensors that support clean water and resilience in the agriculture industry. Three projects, all in the agriculture sector and funded by previous grants, are continuing this year, which together comprise a portfolio of exciting MIT technologies that are helping to resolve water and food challenges across the world.  Simplifying water quality testing in Nepal and beyond In 2018, the J-WAFS Solutions program supported a collaboration between the MIT-Nepal Initiative, led by professor of history Jeffrey Ravel, MIT D-Lab lecturer Susan Murcott, and the Nepalese non-governmental organization Environment and Public Health Organization (ENPHO). The project sought to refine the design of a wearable water test kit developed by Susan Murcott that provided simple, accessible ways to test the presence of E. coli in drinking water, even in the most remote settings. 
In that first year of J-WAFS funding, the research team worked with their Nepali partners, ENPHO, and their social business partner in Nepal, EcoConcern, to finalize the design of their product, called the ECC Vial, which, with the materials that they’ve now sourced, can be sold for less than $1 in Nepal — a significantly lower price than any other water-testing product on the market.   This technology is urgently needed by communities in Nepal, where many drinking water supplies are contaminated by E. coli. Standard testing practices are expensive, require significant laboratory infrastructure, or are just plain inaccessible to the many people exposed to unsafe drinking water. In fact, children under the age of 5 are the most vulnerable, and more than 40,000 children in Nepal alone die every year as a result of drinking contaminated water. The ECC Vial is intended to be the next-generation easy-to-use, portable, low-cost method for E. coli detection in water samples. It is particularly designed for simplicity and is appropriate for use in remote and low-resource settings. The 2019 renewal grant for the project “Manufacturing and Marketing EC-Kits in Nepal” will support the team in working with the same Nepali partners to optimize the manufacturing process for the ECC Vials and refine the marketing strategy in order to ensure that the technology that is sold to customers is reliable and that the business model for local purveyors is viable now and into the future. Once the product enters the market this year, the team plans to begin distribution in Bangladesh, and will assess market opportunities in India, Pakistan, Peru, and Ghana, where there is a comparable need for a simple and affordable and E.coli indicator testing product for use by government agencies, private water vendors, bottled water firms, international nonprofit organizations and low-income populations without access to safe water. 
Based on consumer demand in Nepal and beyond, this solution has the potential to reach more than 3 million people during just its first two years on the market.

Supporting the resilience of the citrus industry

Citrus plants are very high-value crops and nutrient-dense foods. They are an important part of diets for people in developing countries with micronutrient deficiencies, as well as for people in developed economies who suffer from obesity and diet-related chronic diseases. Citrus fruits have become staples across seasons, cultures, and geographies, yet the large-scale citrus farms in the United States that support much of our domestic citrus consumption are challenged by citrus greening disease. Also known as Huanglongbing (HLB), it is an incurable disease caused by bacteria transmitted by a small insect, the Asian citrus psyllid. The bacterial infection causes trees to wither and fruit to develop an unpleasantly bitter taste, rendering the tree’s fruit inedible. If left undetected, HLB can very quickly spread throughout large citrus groves. Since there is no treatment, infected trees must be removed to prevent further spread. The disease poses an immediate threat to the $3.3 billion-per-year worldwide citrus industry.

One of the reasons HLB is so troubling is that no accessible and affordable early-detection strategy yet exists. By the time observable symptoms of the disease have shown up in one part of a citrus grove, it is likely that many more trees are already infected. Taking on this challenge is a research team at MIT led by Karen Gleason, the Alexander and I. Michael Kasser (1960) Professor in the Department of Chemical Engineering. A 2019 J-WAFS Solutions grant for the project “Early detection of Huanglongbing (HLB) Citrus Greening Disease” is supporting the development of a new technology for early detection of HLB infection in citrus trees. 
The team’s strategy is to deploy a series of low-cost, high-sensitivity sensors that can be used on-site and that are attuned to volatile organic compounds emitted by citrus trees, whose concentrations change during early-stage HLB infection, before trees exhibit visible symptoms. Using the data gathered via these sensors, an algorithm developed by the team provides a high-accuracy prediction of the presence of the disease, so that farmers and farm managers can make informed decisions about tree removal in order to protect the remaining trees in their citrus groves. Their aim is to detect HLB in months, rather than the years it now takes for the infection to be found.

Currently funded J-WAFS Solutions technologies seeking to revolutionize agriculture practices

Three other J-WAFS Solutions projects are continuing through the 2019-20 academic year. From a tractor-pulled reactor unit that can turn agricultural wastes on rural farms into nutrient-rich fertilizer, to a polymer-based additive for agriculture sprays that dramatically reduces runoff (recently featured by the BBC), to an affordable soil sensor that aims to make precision farming strategies available to smallholder farmers in India, these J-WAFS-funded projects are each aiming to transform the sustainability of small- and large-scale farming practices.

The J-WAFS Solutions program is implemented in collaboration with Community Jameel — the global philanthropic organization founded by MIT alumnus Mohammed Jameel — and is administered by J-WAFS in partnership with the MIT Deshpande Center for Technological Innovation. Fady Jameel, president, international of Community Jameel, says: “Access to clean water, and better management of water resources, can boost countries’ economic growth and can contribute greatly to poverty reduction. 
We always aim through J-WAFS to support the development and deployment of technologies, policies, and programs that will help humankind adapt to a rapidly changing planet and combat worldwide water scarcity and food insecurity.”

Left: A water sample undergoing testing using the J-WAFS-funded water quality test kit soon to be deployed throughout Nepal. Right: Citrus trees infected with citrus greening disease are highly contagious and can wipe out whole orange groves. A J-WAFS-funded sensor could help farmers detect the disease much earlier. Image: Murcott/Ravel research team

https://news.mit.edu/2019/ketones-stem-cell-intestine-0822 Molecules called ketone bodies may improve stem cells’ ability to regenerate new intestinal tissue. Thu, 22 Aug 2019 11:02:02 -0400 https://news.mit.edu/2019/ketones-stem-cell-intestine-0822 Anne Trafton | MIT News Office

MIT biologists have discovered an unexpected effect of a ketogenic, or fat-rich, diet: They showed that high levels of ketone bodies, molecules produced by the breakdown of fat, help the intestine to maintain a large pool of adult stem cells, which are crucial for keeping the intestinal lining healthy.

The researchers also found that intestinal stem cells produce unusually high levels of ketone bodies even in the absence of a high-fat diet. These ketone bodies activate a well-known signaling pathway called Notch, which has previously been shown to help regulate stem cell differentiation.

“Ketone bodies are one of the first examples of how a metabolite instructs stem cell fate in the intestine,” says Omer Yilmaz, the Eisen and Chang Career Development Associate Professor of Biology and a member of MIT’s Koch Institute for Integrative Cancer Research. “These ketone bodies, which are normally thought to play a critical role in energy maintenance during times of nutritional stress, engage the Notch pathway to enhance stem cell function. 
Changes in ketone body levels in different nutritional states or diets enable stem cells to adapt to different physiologies.”

In a study of mice, the researchers found that a ketogenic diet gave intestinal stem cells a regenerative boost that made them better able to recover from damage to the intestinal lining, compared to the stem cells of mice on a regular diet.

Yilmaz is the senior author of the study, which appears in the Aug. 22 issue of Cell. MIT postdoc Chia-Wei Cheng is the paper’s lead author.

An unexpected role

Adult stem cells, which can differentiate into many different cell types, are found in tissues throughout the body. These stem cells are particularly important in the intestine because the intestinal lining is replaced every few days. Yilmaz’s lab has previously shown that fasting enhances stem cell function in aged mice, and that a high-fat diet can stimulate rapid growth of stem cell populations in the intestine.

In this study, the research team wanted to study the possible role of metabolism in the function of intestinal stem cells. By analyzing gene expression data, Cheng discovered that several enzymes involved in the production of ketone bodies are more abundant in intestinal stem cells than in other types of cells.

When a very high-fat diet is consumed, cells use these enzymes to break down fat into ketone bodies, which the body can use for fuel in the absence of carbohydrates. 
However, because these enzymes are so active in intestinal stem cells, these cells have unusually high ketone body levels even when a normal diet is consumed.

To their surprise, the researchers found that the ketones stimulate the Notch signaling pathway, which is known to be critical for regulating stem cell functions such as regenerating damaged tissue.

“Intestinal stem cells can generate ketone bodies by themselves, and use them to sustain their own stemness through fine-tuning a hardwired developmental pathway that controls cell lineage and fate,” Cheng says.

In mice, the researchers showed that a ketogenic diet enhanced this effect, and mice on such a diet were better able to regenerate new intestinal tissue. When the researchers fed the mice a high-sugar diet, they saw the opposite effect: Ketone production and stem cell function both declined.

Stem cell function

The study helps to answer some questions raised by Yilmaz’s previous work showing that both fasting and high-fat diets enhance intestinal stem cell function. The new findings suggest that stimulating ketogenesis through any kind of diet that limits carbohydrate intake helps promote stem cell proliferation.

“Ketone bodies become highly induced in the intestine during periods of food deprivation and play an important role in the process of preserving and enhancing stem cell activity,” Yilmaz says. 
“When food isn’t readily available, it might be that the intestine needs to preserve stem cell function so that when nutrients become replete, you have a pool of very active stem cells that can go on to repopulate the cells of the intestine.”

The findings suggest that a ketogenic diet, which would drive ketone body production in the intestine, might be helpful for repairing damage to the intestinal lining, which can occur in cancer patients receiving radiation or chemotherapy treatments, Yilmaz says.

The researchers now plan to study whether adult stem cells in other types of tissue use ketone bodies to regulate their function. Another key question is whether ketone-induced stem cell activity could be linked to cancer development, because there is evidence that some tumors in the intestines and other tissues arise from stem cells.

“If an intervention drives stem cell proliferation, a population of cells that serve as the origin of some tumors, could such an intervention possibly elevate cancer risk? That’s something we want to understand,” Yilmaz says. “What role do these ketone bodies play in the early steps of tumor formation, and can driving this pathway too much, either through diet or small molecule mimetics, impact cancer formation? We just don’t know the answer to those questions.”

The research was funded by the National Institutes of Health, a V Foundation V Scholar Award, a Sidney Kimmel Scholar Award, a Pew-Stewart Trust Scholar Award, the MIT Stem Cell Initiative, the Koch Institute Frontier Research Program through the Kathy and Curt Marble Cancer Research Fund, the Koch Institute Dana Farber/Harvard Cancer Center Bridge Project, and the American Federation of Aging Research.

MIT biologists found that intestinal stem cells express high levels of a ketogenic enzyme called HMGCS2, shown in brown. 
Image courtesy of the researcher

https://news.mit.edu/2019/battery-free-sensor-underwater-exploration-0820 Submerged system uses the vibration of “piezoelectric” materials to generate power and send and receive data. Tue, 20 Aug 2019 08:59:59 -0400 https://news.mit.edu/2019/battery-free-sensor-underwater-exploration-0820 Rob Matheson | MIT News Office

To investigate the vastly unexplored oceans covering most of our planet, researchers aim to build a submerged network of interconnected sensors that send data to the surface — an underwater “internet of things.” But how do you supply constant power to scores of sensors designed to stay for long durations in the ocean’s depths?

MIT researchers have an answer: a battery-free underwater communication system that uses near-zero power to transmit sensor data. The system could be used to monitor sea temperatures to study climate change and track marine life over long periods — and even sample waters on distant planets. They are presenting the system at the SIGCOMM conference this week, in a paper that has won the conference’s “best paper” award.

The system makes use of two key phenomena. One, called the “piezoelectric effect,” occurs when vibrations in certain materials generate an electrical charge. The other is “backscatter,” a communication technique commonly used for RFID tags that transmits data by reflecting modulated wireless signals off a tag and back to a reader.

In the researchers’ system, a transmitter sends acoustic waves through water toward a piezoelectric sensor that has stored data. When the wave hits the sensor, the material vibrates and stores the resulting electrical charge. Then the sensor uses the stored energy to reflect a wave back to a receiver — or it doesn’t reflect one at all. 
Alternating between reflection and absorption in that way corresponds to the bits in the transmitted data: For a reflected wave, the receiver decodes a 1; for no reflected wave, the receiver decodes a 0.

“Once you have a way to transmit 1s and 0s, you can send any information,” says co-author Fadel Adib, an assistant professor in the MIT Media Lab and the Department of Electrical Engineering and Computer Science and founding director of the Signal Kinetics Research Group. “Basically, we can communicate with underwater sensors based solely on the incoming sound signals whose energy we are harvesting.”

The researchers demonstrated their Piezo-Acoustic Backscatter System in an MIT pool, using it to collect water temperature and pressure measurements. The system was able to transmit 3 kilobits per second of accurate data from two sensors simultaneously at a distance of 10 meters between sensor and receiver.

Applications go beyond our own planet. The system, Adib says, could be used to collect data in the recently discovered subsurface ocean on Saturn’s largest moon, Titan. In June, NASA announced the Dragonfly mission to send a rover in 2026 to explore the moon, sampling water reservoirs and other sites.

“How can you put a sensor under the water on Titan that lasts for long periods of time in a place that’s difficult to get energy?” says Adib, who co-wrote the paper with Media Lab researcher JunSu Jang. “Sensors that communicate without a battery open up possibilities for sensing in extreme environments.”

Preventing deformation

Inspiration for the system hit while Adib was watching “Blue Planet,” a nature documentary series exploring various aspects of sea life. Oceans cover about 72 percent of Earth’s surface. “It occurred to me how little we know of the ocean and how marine animals evolve and procreate,” he says. 
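The reflect-or-absorb scheme described in the article amounts to a simple on-off keying: the receiver measures the acoustic energy in each symbol window and thresholds it. The sketch below is purely illustrative and is not the researchers’ code; the function names, threshold value, and toy energy readings are all invented for this example.

```python
# Illustrative sketch of the reflect/absorb bit scheme: the hydrophone
# measures acoustic energy in each symbol window and thresholds it.
# A strong reflection decodes as 1; a weak or absent reflection (the wave
# was absorbed for energy harvesting) decodes as 0.

def decode_backscatter(symbol_energies, threshold=0.5):
    """Map per-symbol received energies to bits: reflect -> 1, absorb -> 0."""
    return [1 if energy >= threshold else 0 for energy in symbol_energies]

def bits_to_bytes(bits):
    """Pack decoded bits (MSB first) into bytes, so 'any information' can be sent."""
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[k:k + 8]))
        for k in range(0, len(bits) - 7, 8)
    )

# Toy example: eight energy readings carrying the ASCII letter "O" (0b01001111).
energies = [0.1, 0.9, 0.05, 0.1, 0.85, 0.9, 0.95, 0.8]
bits = decode_backscatter(energies)
print(bits, bits_to_bytes(bits))  # [0, 1, 0, 0, 1, 1, 1, 1] b'O'
```

In practice the real system must also handle synchronization and noise, but the core idea — the presence or absence of a reflected wave is the bit — is as simple as this threshold.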
Internet-of-things (IoT) devices could aid that research, “but underwater you can’t use Wi-Fi or Bluetooth signals … and you don’t want to put batteries all over the ocean, because that raises issues with pollution.”

That led Adib to piezoelectric materials, which have been around and used in microphones and other devices for about 150 years. They produce a small voltage in response to vibrations. But that effect is also reversible: Applying a voltage causes the material to deform. If placed underwater, that deformation produces a pressure wave that travels through the water. Such materials are often used to detect sunken vessels, fish, and other underwater objects.

“That reversibility is what allows us to develop a very powerful underwater backscatter communication technology,” Adib says.

Communicating relies on controlling whether the piezoelectric resonator deforms in response to an incoming wave. At the heart of the system is a submerged node, a circuit board that houses a piezoelectric resonator, an energy-harvesting unit, and a microcontroller. Any type of sensor can be integrated into the node by programming the microcontroller. An acoustic projector (transmitter) and an underwater listening device, called a hydrophone (receiver), are placed some distance away.

Say the sensor wants to send a 0 bit. When the transmitter sends its acoustic wave at the node, the piezoelectric resonator absorbs the wave and naturally deforms, and the energy harvester stores a little charge from the resulting vibrations. The receiver then sees no reflected signal and decodes a 0.

When the sensor wants to send a 1 bit, however, the behavior changes. When the transmitter sends a wave, the microcontroller uses the stored charge to send a little voltage to the piezoelectric resonator. That voltage reorients the material’s structure in a way that stops it from deforming, and instead reflects the wave. 
Sensing a reflected wave, the receiver decodes a 1.

Long-term deep-sea sensing

The transmitter and receiver must have power, but they can be planted on ships or buoys, where batteries are easier to replace, or connected to outlets on land. One transmitter and one receiver can gather information from many sensors covering one area or many areas.

“When you’re tracking a marine animal, for instance, you want to track it over a long range and want to keep the sensor on them for a long period of time. You don’t want to worry about the battery running out,” Adib says. “Or, if you want to track temperature gradients in the ocean, you can get information from sensors covering a number of different places.”

Another interesting application is monitoring brine pools, large areas of brine that sit in pools in ocean basins and are difficult to monitor long-term. They exist, for instance, on the Antarctic Shelf, where salt settles during the formation of sea ice, and could aid in studying melting ice and marine life interaction with the pools. “We could sense what’s happening down there, without needing to keep hauling sensors up when their batteries die,” Adib says.

Polly Huang, a professor of electrical engineering at National Taiwan University, praised the work for its technical novelty and potential impact on environmental science. “This is a cool idea,” Huang says. “It’s not news one uses piezoelectric crystals to harvest energy … [but is the] first time to see it being used as a radio at the same time [which] is unheard of to the sensor network/system research community. Also interesting and unique is the hardware design and fabrication. 
The circuit and the design of the encapsulation are both sound and interesting.” While noting that the system still needs more experimentation, especially in sea water, Huang adds that “this might be the ultimate solution for researchers in marine biology, oceanography, or even meteorology — those in need of long-term, low-human-effort underwater sensing.”

Next, the researchers aim to demonstrate that the system can work at farther distances and communicate with more sensors simultaneously. They’re also hoping to test if the system can transmit sound and low-resolution images.

The work is sponsored, in part, by the U.S. Office of Naval Research.

A battery-free underwater “piezoelectric” sensor invented by MIT researchers transmits data by absorbing or reflecting sound waves back to a receiver, where a reflected wave decodes a 1 bit and an absorbed wave decodes a 0 bit — and simultaneously stores energy. Image courtesy of the researchers

https://news.mit.edu/2019/following-current-mit-examines-water-consumption-amidst-climate-crisis-0805 The Institute aims to update its water management practices to prepare for droughts, sea level rise, and other risks posed by the climate crisis. Mon, 05 Aug 2019 16:00:01 -0400 https://news.mit.edu/2019/following-current-mit-examines-water-consumption-amidst-climate-crisis-0805 Archana Apte | Abdul Latif Jameel Water and Food Systems Lab

At the 2019 MIT Commencement address, Michael Bloomberg highlighted the climate crisis as “the challenge of our time.” Climate change is expected to worsen drought and cause the sea level in Boston, Massachusetts, to rise by 1.5 feet by 2050. While numerous MIT students and researchers are working to ensure access to clean and sustainable sources of drinking water well into the future, MIT is also responding to the urgency of the climate crisis with a close examination of campus sustainability practices, including a recent focus on its own water consumption. 
A working group on campus water use, led by the MIT Office of Sustainability (MITOS) and the Department of Facilities, is supported by the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) and includes representatives of numerous other groups, offices, students, and campus leaders. While the MITOS initiative is focusing on campus water management, MIT student clubs are raising local consciousness around drinking-water issues via research and outreach activities. Through all of these efforts, members of the community aim to help MIT change its water usage practices and become a model for sustainable water use at the university level.

The water subcommittee: providing water leadership to promote institutional change

Gathering campus stakeholders to develop sustainability recommendations is a practiced strategy for the Office of Sustainability. MITOS working groups have previously analyzed environmental issues such as energy use, stormwater management, and the sustainability of MIT’s food system, another initiative in which J-WAFS has played a role. The current working group addressing campus water use practices is managed by Steven Lanou, sustainability project manager at MITOS. “Work done in the late 1990s reduced campus water use by an estimated 60 percent,” he explains. “And now, we need to look strategically again at all of our systems” to improve water management in the face of future climate uncertainty.

Beginning in fall 2018, MITOS met with local stakeholders, including the Cambridge Water Department, the MIT Department of Facilities, and the MIT Water Club, to explore how water is used and managed on campus. The water subcommittee falls under the Sustainability Leadership Steering Committee, which was created by, and reports to, the Office of the Provost and the Office of the Executive Vice President and Treasurer, and on which Professor John H. Lienhard, director of J-WAFS and Abdul Latif Jameel Professor of Water and Mechanical Engineering, also sits. 
The steering committee is charged by the provost and the executive vice president and treasurer of MIT to recommend strategies for campus leadership on sustainability issues. The water subcommittee will bring concrete suggestions for water usage changes to the MIT administration and work to implement them across campus. Professor Lienhard has “been key in helping us shape what a water stewardship program might look like,” according to Lanou. Other J-WAFS staff are also involved in the subcommittee, as well as leaders from the Environmental Solutions Initiative (ESI), Department of Facilities, MIT Dining, the MIT Investment Management Company, and the Water Club. Based on a thorough review of data related to MIT’s water use, the subcommittee has started to identify the most strategic areas for intervention, and is gearing up now to get additional input this fall and begin to develop recommendations for how MIT can reduce water consumption, mitigate its overall climate impact, and adapt to an uncertain future. Water has been a focus of discussion and planning for sustainable campus practices for several years already. A MITOS stormwater and land management working group devoted to priority-setting for campus sustainability, which convened in the 2014 academic year, identified MIT’s water footprint as one of several key areas for discussion and intervention. Following the release of the stormwater and land management working group recommendations in 2016, MITOS teamed up with the Office of Campus Planning, the Department of Facilities, and the Office of Environment, Health and Safety to explore stormwater management solutions that improve the health of Cambridge, Massachusetts waterways and ecosystems. Among the outcomes was a draft stormwater management and landscape ecology plan that is focused on enhancing the productivity of the campus’ built and ecological systems in order to capture, absorb, reuse, and treat stormwater. 
This effort has informed the implementation of advanced stormwater management infrastructure on campus, including the recently completed North Corridor improvements in conjunction with the construction of the MIT.nano building. In addition, MITOS is leading a research effort with the MIT Center for Global Change Science and the Department of Facilities to understand campus flooding risks under current and future climate conditions. The team is evaluating probabilities and flood depths for a range of scenarios, including intense, short-duration rainfall over campus; 24-hour rainfall over campus/Cambridge from tropical storms or nor’easters; sea-level rise and coastal storm surge of the Charles River; and up-river rainfall that raises the level of the Charles River.

To understand MIT’s water consumption and key areas for intervention, this year’s water subcommittee is informed by data gathered by Lanou on water consumption across campus — in buildings, labs, and landscaping processes — as well as the consumption of water by the MIT community. An additional dimension of water stewardship to be considered by the subcommittee is the role and impact of bottled-water purchases on campus. The subcommittee has begun to look at data on annual bottled-water consumption to help understand current trends.

Understanding the impacts of single-use disposable bottles on campus is important. “I see so much bottled water consumption on campus,” notes John Lienhard. “It’s costly, energy-intensive, and adds plastic to the environment.” Only 9 percent of all plastics ever manufactured, as of 2015, has been recycled, and 12 billion metric tons of plastic will end up in landfills by 2050. Mark Hayes, director of MIT Dining and another subcommittee member, has participated in student-led bottled-water reduction efforts on two college campuses, and he hopes to help MIT better understand and address the issue here. 
Hayes would like to see MIT consider “expanding water refilling stations, exploring the impact and reduction [of] plastic recycling, and increasing campus education on these efforts.” Taking on the challenge of changing campus water consumption habits, and decreasing the associated waste, will hopefully position MIT as a leader in these kinds of sustainability efforts and encourage other campuses to adopt similar policies.

Students taking action

Student groups are also using education around bottled water alternatives to encourage behavior change. Andrew Bouma, a PhD student in John Lienhard’s lab, is investigating local attitudes toward bottled water. His interest in this issue began upon meeting several students who drank mostly bottled water. “It frustrated me that people had this perception that the tap water wasn’t safe,” Bouma explains, “even though Cambridge and Boston have really great water.” He became involved with the MIT Water Club and ran a blind taste test at the 2019 MIT Water Night to evaluate perceptions of tap water, bottled water, and recycled wastewater.

Bouma explained that bottled-water drinkers often cite superior flavor as a motivating factor; however, only four or five of the 70-80 participants correctly identified the different sources, suggesting that the flavor argument holds little water. Many participants also held reservations about water safety. Bouma hopes that the taste test can address these barriers more effectively than sharing statistics. “When people can hold a cup of water in their hands and see it and taste it, it makes people confront their presumptions in a different way,” he explains.

A broader impact

The MIT Water Club, including Bouma, repeated the taste test at the Cambridge Arts River Festival in June to examine public perceptions of public and bottled water. Fewer than 5 percent of the 242 respondents identified all four water sources, approximately the same outcome as would be expected from random guessing. 
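The “same as random guessing” claim is easy to check under one plausible model of the test (an assumption here, since the article does not describe the format): each respondent matches four unlabeled cups to four named sources, with all orderings equally likely under pure guessing.

```python
# Back-of-the-envelope check: if a guesser assigns four labels to four cups
# at random, only one of the 4! = 24 orderings is fully correct, so chance
# performance is about 4.2 percent -- consistent with the article's
# "fewer than 5 percent" of the 242 festival respondents.
from math import factorial

p_all_correct = 1 / factorial(4)            # probability of a perfect match by luck
expected_correct = 242 * p_all_correct      # expected all-correct guessers in the sample
print(f"{p_all_correct:.1%}, about {round(expected_correct)} of 242")
```

Roughly ten of the 242 respondents would be expected to get all four right by luck alone, so the observed result gives no evidence that people can taste the difference.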
Many participants held concerns about the safety of public water, which the Water Club tried to combat with information about water treatment and testing procedures. Bouma hopes to continue addressing water consumption issues as co-president of the Water Club.

Other student groups are encouraging behavior change around water consumption as well. The MIT Graduate Student Council (GSC) and the GSC Sustainability Subcommittee, with support from the Department of Facilities, funded five water-bottle refilling stations across campus in 2015. These efforts underscore the commitment of MIT students to promoting sustainable water consumption on campus.

A unique “MIT spin” on campus water sustainability

Lanou hopes that MIT will bring its technical strength to bear on water issues by using the campus as a living laboratory to test water technologies. For example, Kripa Varanasi, professor of mechanical engineering and a J-WAFS-funded principal investigator, is piloting a water capture project at MIT’s Central Utility Plant that uses electricity to condense fog into liquid water for collection. Varanasi’s lab is able to test the technology in real-world conditions and improve the plant’s water efficiency at the same time. “It’s a great example of MIT being willing to use its facilities to test campus research,” explains Lanou. These technological advancements — many of which are supported by J-WAFS — could support water resilience at MIT and elsewhere.

As the climate crisis brings water scarcity issues to the forefront, understanding and modeling water-use practices will become increasingly critical. With the water subcommittee working to bring recommendations for campus water use to the administration, and MIT students engaging with the broader Cambridge community on bottled water issues, the MIT community is poised to rise to the challenge.

The MIT Water Club conducted a water taste test and outreach event at the Cambridge Arts River Festival. 
Photo: Patricia Stathatou https://news.mit.edu/2019/marcus-karel-food-science-pioneer-professor-emeritus-chemical-engineering-dies-0802 A giant in the field of food science and engineering, Karel developed important innovations in food packaging as well as food systems for long-term space travel. Fri, 02 Aug 2019 16:10:01 -0400 https://news.mit.edu/2019/marcus-karel-food-science-pioneer-professor-emeritus-chemical-engineering-dies-0802 Melanie Miller Kaufman | Department of Chemical Engineering Marcus “Marc” G. Karel PhD ’60, professor emeritus of chemical engineering, died on July 25 at age 91. A member of the MIT community since 1951, Karel inspired a generation of food scientists and engineers through his work in food technology and controlled release of active ingredients in food and pharmaceuticals. Karel was born in Lvov, Poland (now Lviv, Ukraine) to Cila and David Karel, who ran a small chain of women’s clothing stores in the town. After war arrived in Poland in 1939, the family business was lost, relatives were scattered and disappeared, and the Karels spent the last 22 months of the war in hiding. After the war, Karel and his family eventually emigrated to the United States, where they settled in Newton, Massachusetts, just outside of Boston. Karel completed his bachelor’s degree at Boston University in 1955 and earned his doctorate in 1960 at MIT. Before Karel started his graduate studies at MIT, he was invited by the head of the former Department of Food Technology to manage the Packaging Laboratory. Here he began his interest in the external and internal factors that influence food stability. In 1961, he was appointed professor of food engineering at MIT in the former Department of Nutrition and Food Science (Course 20), eventually becoming deputy head of the department. When Course 20 (then called Applied Biological Sciences) was disbanded in 1988, Karel was invited to join the Department of Chemical Engineering. 
After retiring from MIT in 1989, he became the State of New Jersey Professor at Rutgers University from 1989 to 1996, and from 1996 to 2007 he consulted for various government and industrial organizations. During his academic career at MIT and Rutgers, Karel supervised over 120 graduate students and postdocs, most of whom are now leaders in food engineering; several of his trainees from industry are now vice presidents of research and development at major companies. Along with his engineering accomplishments, Karel was known for his ability to build and manage successful teams, nurture talent, and create a family environment among researchers.

Karel was a pioneer in several areas, including oxidative reactions in food, the drying of biological materials, and the preservation, packaging, and stabilization of low-moisture foods. His fundamental work on the oxidation of lipids and stabilization led to important improvements in food packaging. And when NASA needed expertise to design food and food systems for long-term space travel, it was Karel’s work that formed the platform for many of the enabling developments of the U.S. space program.

MIT Professor Emeritus Charles Cooney relates, “When the solution to an important problem required improved analytical techniques, he pioneered the development of the techniques. When the solution required deeper insight into the physical chemistry of foods, he formulated the theoretical framework for the solution. When the solution required identification of new materials and new processes, he was on the front line with innovative technologies. 
No one has had the impact on the field of food science and engineering as Marc.” Karel earned many recognitions for his work, including a Life Achievement Award from the International Association for Engineering and Food, election to the American Institute for Medical and Biological Engineering, the Institute of Food Technologists (IFT)’s Nicholas Appert Medal (the highest honor in food technology), election to the Food Engineering Hall of Fame, several honorary doctorates, and the one of which he was most proud: the first William V. Cruess Award for Excellence in Teaching from the IFT. The first edition of his co-authored book, “The Physical Principles of Food Preservation,” is considered by many to be the “bible” of the field of food stability. Karel is survived by his wife of almost 61 years, Carolyn Frances (Weeks) Karel; son Steven Karel and daughters Karen Karel and Debra Karel Nardone; grandchildren Amanda Nardone, Kristen Nardone, Emma Griffith, and Bennet Karel; sister Rena Carmel, niece Julia Carmel, and great-nephew David Carmel; Leslie Griffith (mother of Emma and Ben); nephew James Weeks Jr., and niece Sharon Weeks Mancini. Funeral arrangements were private. A celebration of Karel’s life will take place later this year. Memorial contributions may be made to the American Red Cross. MIT Professor Emeritus Marcus Karel https://news.mit.edu/2019/mit-phd-students-awarded-j-wafs-fellowships-water-solutions-0617 J-WAFS announces graduate fellowships for Sahil Shah and Peter Godart, both of the Department of Mechanical Engineering. Mon, 17 Jun 2019 13:40:01 -0400 https://news.mit.edu/2019/mit-phd-students-awarded-j-wafs-fellowships-water-solutions-0617 Andi Sutton | Abdul Latif Jameel Water and Food Systems Lab The Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) has announced the selection of its third cohort of graduate fellows. Two students will each receive one-semester graduate fellowships as part of J-WAFS’ Rasikbhai L. 
Meswani Fellowship for Water Solutions and J-WAFS Graduate Student Fellowship Programs. An additional student was awarded “honorable mention.” J-WAFS will also support the three students by providing networking, mentorship, and opportunities to showcase their research.  The awarded students, Sahil Shah and Peter Godart of the Department of Mechanical Engineering and Mark Brennan of the Department of Urban Studies and Planning, were selected for the quality of their research as well as its relevance to current global water challenges. Each of them demonstrates a long commitment to water issues, both in and outside of an academic setting. Their research projects focus on transforming water access opportunities for people in vulnerable communities where access to fresh water for human consumption or for agriculture can improve human health and livelihoods. From developing a way to use aluminum waste to produce electricity for clean water to making significant improvements to the energy efficiency of desalination systems, these students demonstrate how creativity and ingenuity can push forward transformational water access solutions. 2019-20 Rasikbhai L. Meswani Fellow for Water Solutions Sahil Shah is a PhD candidate in the Department of Mechanical Engineering. He spent his childhood in Tanzania, received his undergraduate education in Canada, and worked in Houston as an engineering consultant before being drawn to MIT to pursue his interest in mechanical design and hardware. As a PhD student in Professor Amos Winter’s lab, he is now working to decrease the cost of desalination and improve access to drinking water in developing countries. His PhD research focuses on new methods to decrease the cost and energy use of groundwater treatment for drinking water. Currently, he is exploring the use of electrodialysis, which is a membrane-based desalination process. 
By improving the design of the control mechanisms for this process, as well as by redesigning the devices to achieve higher desalination efficiency, he seeks to decrease the cost of these systems and their energy use. His solutions will be piloted in both on-grid and off-grid applications in India, supported through a collaboration with consumer goods maker Eureka Forbes and infrastructure company Tata Projects. The 2019-20 J-WAFS Graduate Student Fellow Peter Godart is a PhD candidate in the Department of Mechanical Engineering, and also holds BS and MS degrees in mechanical engineering and a BS in electrical engineering from MIT. From 2015 to 2017, Godart also held a research scientist position at the NASA Jet Propulsion Laboratory (JPL), where he managed the development of water-reactive metal power systems, developed software for JPL’s Mars rovers, and supported rover operations. Godart’s current research at MIT focuses on improving global sustainability by using aluminum waste to power desalination and produce energy. Through this work, he aims to provide communities around the world with a means of improving both their waste management practices and their climate change resiliency. He is creating a complete system that can take in scrap aluminum and output potable water, electricity, and the high-grade mineral boehmite. This suite of technologies leverages the energy available in aluminum, which is one of the most energy-dense materials to which we have ready access. The process enables recycled aluminum to react with water to produce hydrogen gas, which could be used in fuel cells or internal combustion engines to generate electricity, heat, and power for desalination systems. Honorable mention Mark Brennan is a PhD candidate in the Department of Urban Studies and Planning (DUSP). He studies the supply chains behind public programs that provide goods to vulnerable communities, especially in water- and food-insecure areas. 
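The aluminum-to-hydrogen pathway described above lends itself to a back-of-envelope check. A minimal sketch, assuming the standard reaction to boehmite, Al + 2 H2O → AlO(OH) + 3/2 H2; the article names hydrogen and boehmite as outputs but gives no yields, so the figures below are textbook stoichiometry, not measurements from Godart’s system:

```python
# Idealized stoichiometry for the aluminum-water reaction to boehmite:
#   Al + 2 H2O -> AlO(OH) + 3/2 H2
# Real systems recover less; this is an upper bound, not a measured yield.

M_AL = 26.98      # molar mass of aluminum, g/mol
M_H2 = 2.016      # molar mass of hydrogen gas, g/mol
V_MOLAR = 22.414  # molar volume of an ideal gas at 0 degC and 1 atm, L/mol

def hydrogen_from_aluminum(mass_al_g):
    """Return (grams of H2, liters of H2 at STP) from a given mass of Al."""
    mol_al = mass_al_g / M_AL
    mol_h2 = 1.5 * mol_al          # 3/2 mol H2 per mol Al
    return mol_h2 * M_H2, mol_h2 * V_MOLAR

grams, liters = hydrogen_from_aluminum(1000.0)  # one kilogram of scrap aluminum
```

At this idealized yield, a kilogram of scrap aluminum releases roughly 112 g, or about 1,250 standard liters, of hydrogen gas.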
His ongoing projects include studying which firms shoulder risk in irrigation supply chains in the Sahel, and how American federal assistance programs are structured to provide relief after disasters. Brennan is currently collaborating with a team of researchers at the MIT Sloan School of Management, MIT D-Lab, and DUSP on a J-WAFS-funded project that is investigating ways to increase the accessibility of irrigation systems to small rural sub-Saharan African farmers, with a specific focus on Senegal. PhD candidates Sahil Shah (left) and Peter Godart, both of the Department of Mechanical Engineering, have each received fellowships from MIT’s Abdul Latif Jameel Water and Food Systems Lab for 2019-20. Their research explores possible solutions to global and local water supply challenges through new approaches to desalination. https://news.mit.edu/2019/electrified-droplet-air-purification-0617 Researchers have found a simple formula that could be useful for air purification, space propulsion, and molecular analyses. Mon, 17 Jun 2019 00:00:00 -0400 https://news.mit.edu/2019/electrified-droplet-air-purification-0617 Jennifer Chu | MIT News Office When a raindrop falls through a thundercloud, it is subject to strong electric fields that pull and tug on the droplet, like a soap bubble in the wind. If the electric field is strong enough, it can cause the droplet to burst apart, creating a fine, electrified mist. Scientists began taking notice of how droplets behave in electric fields in the early 1900s, amid concerns over lightning strikes that were damaging newly erected power lines. They soon realized that the power lines’ own electric fields were causing raindrops to burst around them, providing a conductive path for lightning to strike. This revelation led engineers to design thicker coverings around power lines to limit lightning strikes. Today, scientists understand that the stronger the electric field, the more likely it is that a droplet within it will burst. 
But calculating the exact field strength that will burst a particular droplet has always been an involved mathematical task. Now, MIT researchers have found that the conditions for which a droplet bursts in an electric field all boil down to one simple formula, which the team has derived for the first time. With this simple new equation, the researchers can predict the exact strength an electric field should be to burst a droplet or keep it stable. The formula applies to three cases previously analyzed separately: a droplet pinned on a surface, sliding on a surface, or free-floating in the air. Their results, published today in the journal Physical Review Letters, may help engineers tune the electric field or the size of droplets for a range of applications that depend on electrifying droplets. These include technologies for air or water purification, space propulsion, and molecular analysis. “Before our result, engineers and scientists had to perform computationally intensive simulations to assess the stability of an electrified droplet,” says lead author Justin Beroz, a graduate student in MIT’s departments of Mechanical Engineering and Physics. “With our equation, one can predict this behavior immediately, with a simple paper-and-pencil calculation. This is of great practical benefit to engineers working with, or trying to design, any system that involves liquids and electricity.” Beroz’s co-authors are A. John Hart, associate professor of mechanical engineering, and John Bush, professor of mathematics. “Something unexpectedly simple” Droplets tend to form as perfect little spheres due to surface tension, the cohesive force that binds water molecules at a droplet’s surface and pulls the molecules inward. The droplet may distort from its spherical shape in the presence of other forces, such as the force from an electric field. 
While surface tension acts to hold a droplet together, the electric field acts as an opposing force, pulling outward on the droplet as charge builds on its surface. “At some point, if the electric field is strong enough, the droplet can’t find a shape that balances the electrical force, and at that point, it becomes unstable and bursts,” Beroz explains. He and his team were interested in the moment just before bursting, when the droplet has been distorted to its critically stable shape. The team set up an experiment in which they slowly dispensed water droplets onto a metal plate that was electrified to produce an electric field, and used a high-speed camera to record the distorted shapes of each droplet. “The experiment is really boring at first — you’re watching the droplet slowly change shape, and then all of a sudden it just bursts,” Beroz says. After experimenting on droplets of different sizes and under various electric field strengths, Beroz isolated the video frame just before each droplet burst, then outlined its critically stable shape and calculated several parameters such as the droplet’s volume, height, and radius. He plotted the data from each droplet and found, to his surprise, that they all fell along an unmistakably straight line. “From a theoretical point of view, it was an unexpectedly simple result given the mathematical complexity of the problem,” Beroz says. “It suggested that there might be an overlooked, yet simple, way to calculate the burst criterion for the droplets.” A water droplet, subject to an electric field of slowly increasing strength, suddenly bursts by emitting a fine, electrified mist from its apex. Volume above height Physicists have long known that a liquid droplet in an electric field can be represented by a set of coupled nonlinear differential equations. These equations, however, are incredibly difficult to solve. 
To find a solution requires determining the configuration of the electric field, the shape of the droplet, and the pressure inside the droplet, simultaneously. “This is commonly the case in physics: It’s easy to write down the governing equations but very hard to actually solve them,” Beroz says. “But for the droplets, it turns out that if you choose a particular combination of physical parameters to define the problem from the start, a solution can be derived in a few lines. Otherwise, it’s impossible.” Physicists who attempted to solve these equations in the past did so by factoring in, among other parameters, a droplet’s height — an easy and natural choice for characterizing a droplet’s shape. But Beroz made a different choice, reframing the equations in terms of a droplet’s volume rather than its height. This was the key insight for reformulating the problem into an easy-to-solve formula. “For the last 100 years, the convention was to choose height,” Beroz says. “But as a droplet deforms, its height changes, and therefore the mathematical complexity of the problem is inherent in the height. On the other hand, a droplet’s volume remains fixed regardless of how it deforms in the electric field.” By formulating the equations using only parameters that are “fixed” in the same sense as a droplet’s volume, “the complicated, unsolvable parts of the equation cancel out, leaving a simple equation that matches the experimental results,” Beroz says. Specifically, the new formula the team derived relates five parameters: a droplet’s surface tension, radius, volume, electric field strength, and the electric permittivity of the air surrounding the droplet. Plugging any four of these parameters into the formula will calculate the fifth. Beroz says engineers can use the formula to develop techniques such as electrospraying, which involves the bursting of a droplet maintained at the orifice of an electrified nozzle to produce a fine spray. 
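The five quantities above can only combine into a dimensionless group of the form ε·E²·L/γ, where L is a length scale such as the cube root of the volume; that much is dimensional analysis, not the paper’s result. A minimal sketch under that assumption, with the critical constant K left as a hypothetical placeholder rather than the value derived in the Physical Review Letters paper:

```python
import math

EPS_0 = 8.854e-12  # vacuum permittivity, F/m (air differs by well under 0.1%)

def critical_field(gamma, volume, K=1.0):
    """Field strength E (V/m) satisfying eps0 * E^2 * V**(1/3) / gamma = K.

    K = 1.0 is a hypothetical placeholder, NOT the constant derived in the
    paper; only the dimensional grouping of the five parameters is certain.
    """
    length = volume ** (1.0 / 3.0)
    return math.sqrt(K * gamma / (EPS_0 * length))

# Example: a 1-mm-radius water droplet (surface tension ~0.072 N/m)
radius = 1e-3
volume = 4.0 / 3.0 * math.pi * radius ** 3
E_crit = critical_field(0.072, volume)  # on the order of megavolts per meter
```

Whatever the true constant, the scaling already captures a familiar trend: as the volume shrinks, the field required to burst the droplet grows.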
Electrospraying is commonly used to aerosolize biomolecules from a solution, so that they can pass through a spectrometer for detailed analysis. The technique is also used to produce thrust and propel satellites in space. “If you’re designing a system that involves liquids and electricity, it’s very practical to have an equation like this, that you can use every day,” Beroz says. This research was funded in part by the MIT Deshpande Center for Technological Innovation, BAE Systems, the Assistant Secretary of Defense for Research and Engineering via MIT Lincoln Laboratory, the National Science Foundation, and a Department of Defense National Defense Science and Engineering Graduate Fellowship. Electrified water droplets take on a variety of distorted shapes just before bursting, based on the strength of the electric field. The profiles of different distorted droplet shapes are shown, overlaid on an image of one particular distorted droplet for comparison. Courtesy of the researchers https://news.mit.edu/2019/j-wafs-graduate-fellow-phd-student-andrea-beck-untangles-social-dynamics-water-0610 J-WAFS Fellow and DUSP PhD student Andrea Beck examines the success factors behind water utility partnerships in Africa. Mon, 10 Jun 2019 10:30:01 -0400 https://news.mit.edu/2019/j-wafs-graduate-fellow-phd-student-andrea-beck-untangles-social-dynamics-water-0610 Archana Apte | Abdul Latif Jameel Water and Food Systems Lab Water operator partnerships, or WOPs, bring together water utility employees from different countries to improve public water delivery and sanitation services. “In these partnerships, interpersonal dynamics are so important,” explains Andrea Beck, “and I’m really passionate about hearing people’s stories.” Beck, a PhD candidate in the Department of Urban Studies and Planning (DUSP) and a 2018-19 J-WAFS Fellow for Water Solutions, is studying the dynamics of water operator partnerships to understand how they create mutual benefit for water utilities worldwide. 
WOPs bring together utilities from different countries as peer-to-peer partnerships to encourage mutual learning. Topics covered by these partnerships range from operational issues to finance and human resources. WOPs were conceived by a United Nations advisory board in 2006 as an alternative to public-private partnerships and have since gained traction across Europe, Africa, Asia, and Latin America, with over 200 partnerships formed to date. Beck’s research focuses on the development of WOPs in global policy circles, differences between WOPs and public-private partnerships, and conditions for successful partnerships. A journey of interest Beck’s interest in water issues and African culture began long before she came to MIT. After finishing high school, Beck volunteered at a cultural center in rural Malawi, where she developed an appreciation for cultural immersion. Her undergraduate and master’s work focused on water resources and trans-boundary water cooperation; during her PhD studies at MIT, Beck shifted her focus to urban water issues, seeking a topic that more personally affected people at a smaller scale. Water issues “have always been close to my heart,” she explains. When Beck returned to Malawi for her doctoral fieldwork in 2018, she found her urban water perspective “eye-opening.” “I was suddenly seeing all of the valves in the ground. I was looking for pipes,” she explained. “If I hadn’t studied that here [at DUSP], I would have been blind” to those elements. Inspired by Associate Professor Gabriella Carolini in the International Development Group at DUSP, Beck focused her doctoral research on water and sanitation services and the water operators that serve urban populations. In addition to Carolini, she is working with Professor Lawrence Susskind in the DUSP Environmental Policy and Planning Group and Professor James Wescoat in the Department of Architecture. 
Beck used the United Nations Habitat database of WOPs to gain an overview of all partnerships worldwide. From this background research, she decided to focus on partnerships in Africa due to their prevalence and her previous experience in the region. In 2018, MIT’s MISTI-Netherlands program sponsored Beck’s participation in a short course on partnerships for water supply and sanitation in the Netherlands. The course’s lecturers were part of a Dutch water company conducting international water partnerships with a range of African countries, including Malawi. Beck then used the connections from the short course and the support from her 2018-19 fellowship from the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) to research partnerships underway in the Lilongwe, Malawi water utility, which has worked with partners from the Netherlands, Rwanda, Uganda, and South Africa. She observed meetings between representatives, shadowed workers in the field, and conducted interviews. Beck found that many utilities faced similar challenges, such as non-revenue water, or water lost after pumping. She also found that utilities had much to gain from exchanges with colleagues and peers. For instance, the utility representatives in Lilongwe, Malawi were excited about their partnership with Rwanda because they saw an opportunity to share their experiences as peers. Beck found ample support at MIT for her dissertation project. “I’m drawing on development studies, urban planning, geography, and ethnographic approaches, and MIT has allowed me to bring all of this together,” she explains. Beck has received funding from J-WAFS, DUSP, MISTI-Netherlands, the Center for International Studies, and MISTI-Africa. 
“They’ve been great resources,” she says, “and I’ve felt that there is an understanding and an appreciation for qualitative research and the contributions it can make.” Beck also highlighted that the short course sponsored by MISTI-Netherlands, and the water utility connections she forged there, were “absolutely instrumental in [her] research.” Beck has great appreciation for the J-WAFS Fellowship as well. The open-ended nature of the funding gave her the academic freedom to pursue the research questions she was interested in, while the additional time allowed Beck to digest her fieldwork and think about how to drive her research forward in new ways. Taking a deeper dive In the future, Beck would like to study high-performing utilities across Africa, in places such as Morocco, Burkina Faso, and Swaziland. “I want to do more research into these utilities,” she explains, “and understand what other utilities could learn from them.” She will begin this work soon, having recently received an award from the Water Resource Specialty Group of the American Association of Geographers that will support a research trip to Rabat, Morocco, to study WOPs there. She would also like to conduct additional interviews in the Netherlands, since Dutch representatives are involved in many utility partnerships in Africa. Beck’s qualitative research into partnership dynamics provides a necessary perspective on the effectiveness of WOPs. Being able to “follow along [with utility partners], hang out with them, chat with them while they’re doing their work, is something that has really enriched my research,” she explains. Beck’s analysis is one of the first to compare learning dynamics between north-south and south-south WOPs; most studies examine one partnership in detail. Her work could pinpoint ways to improve current water utility partnerships. 
As the world grows increasingly interconnected and water grows scarcer, integrating multiple perspectives into these issues will provide a more stable grounding to create robust solutions for issues of water access and social equity. 2018-19 J-WAFS Fellow Andrea Beck sits by the Charles River. Photo: Andi Sutton/J-WAFS https://news.mit.edu/2019/empowering-african-farmers-with-data-0530 Research from the Institute for Data, Systems, and Society aims to help African farmers increase their production and profits with better prediction. 
Thu, 30 May 2019 15:55:01 -0400 https://news.mit.edu/2019/empowering-african-farmers-with-data-0530 Scott Murray | Institute for Data, Systems, and Society
With a couple billion more people estimated to join the global population in the next few decades, world food production could use an upgrade. Africa has a key role to play: Agriculture is Africa’s biggest industry, but much of Africa’s agricultural land is currently underutilized. Crop yields could be increased with more efficient farming techniques and new equipment — but that would require investment capital, which is often an obstacle for farmers. A new research collaboration at the MIT Institute for Data, Systems, and Society (IDSS) aims to address this challenge with data. The group plans to use data from technologically advanced farms to better predict the value of intervention in underperforming farms. Ultimately, the goal is to create a platform for sharing data and risk among invested parties, from farmers and lenders to insurers and equipment manufacturers.
Sharing data, sharing risk
Many African farmers lack the capital to invest in yield-increasing upgrades like new irrigation systems, new machinery, new fertilizers, and technology for sensing and tracking crop growth. The most common path to capital is bank loans, with land as collateral. This is an unattractive proposition for farmers, who already bear the many risks of production, including bad weather, changing market prices, or even the shocks of geopolitical events. Lenders, on the other hand, have an incomplete assessment of their risk, especially with potential borrowers who have no credit history. Lenders also lack data and tools to predict their return on investment. “Building a platform for risk-sharing is key to upgrading farming practices,” says Munther Dahleh, a professor of electrical engineering and computer science at MIT and director of IDSS.
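The pricing problem facing lenders can be made concrete with a toy calculation (hypothetical numbers, not a model used by IDSS): the interest rate at which a lender breaks even rises steeply with the default probability the lender must assume, so data that justifies a lower risk estimate translates directly into cheaper credit for farmers.

```python
def break_even_rate(p_default: float, recovery: float = 0.0) -> float:
    """Interest rate at which a loan's expected repayment equals the
    amount lent: (1 - p)(1 + r) + p * recovery = 1, solved for r."""
    if not 0.0 <= p_default < 1.0:
        raise ValueError("p_default must be in [0, 1)")
    return (1.0 - p_default * recovery) / (1.0 - p_default) - 1.0

# Without a credit history, a lender may assume a high default risk;
# farm-level data that supports a lower estimate cuts the required rate.
for p in (0.05, 0.15, 0.30):
    print(f"assumed default prob {p:.0%} -> break-even rate {break_even_rate(p):.1%}")
```

Under these assumptions, lowering a borrower's assumed default probability from 30 percent to 5 percent drops the break-even rate from roughly 43 percent to about 5 percent, which is the kind of gap a shared data platform could help close.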
In order to create such a platform, Dahleh and the IDSS team aim to better predict the value of employing advanced farming practices on the production of individual farms. This prediction needs to be accurate enough to incentivize investment from economic stakeholders and the farmers themselves, who are in competition with each other and may be reluctant to share information. The IDSS approach proposes a data-sharing platform that incentivizes all parties to participate: Technologically advanced farms are rewarded for their valuable data, bankers benefit from data that support their credit risk models, farmers get better loan terms and recommendations that increase their profits and production, and technology companies get recommendations on how to best support the needs of their farmer customers. “Such a platform has to have the correct incentives to engage everyone to participate, have sufficient protection from players with market power, and ultimately provide valuable data for farmers and creditors alike,” says Dahleh. The absence of data from underperforming farms presents a challenge to extrapolating the value of intervention and assessing the uncertainty in such predictions. With sparse available data, researchers are looking to conduct experiments in strategically selected farms to provide valuable new data for the rest. Researchers will use advanced machine learning, including active learning methodology, to quantify both the predicted value of an intervention and the uncertainty in that prediction. Once more data is available, IDSS researchers intend to refine their calculations and develop new techniques for extrapolating the value of intervention in less-advanced farms.
Engaging stakeholders
One likely intervention for many African farmers involves using different fertilizers.
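The active-learning step described above, choosing which farm to experiment on next, can be sketched as follows (a minimal illustration with synthetic data, not the IDSS methodology): fit a bootstrap ensemble of simple yield models on the farms with known outcomes, then propose measuring the candidate farm where the ensemble's predictions disagree most.

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_uncertainty(X_known, y_known, X_candidates, n_models=50):
    """Bootstrap an ensemble of linear yield models; return, per
    candidate farm, the std-dev of the ensemble's predictions."""
    n = len(X_known)
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)  # bootstrap resample
        w, *_ = np.linalg.lstsq(X_known[idx], y_known[idx], rcond=None)
        preds.append(X_candidates @ w)
    return np.std(preds, axis=0)

# Toy data: two features per farm (say, rainfall and fertilizer use).
X_known = rng.uniform(0, 1, size=(30, 2))
y_known = X_known @ np.array([2.0, 1.0]) + rng.normal(0, 0.1, 30)
X_candidates = rng.uniform(0, 2, size=(10, 2))  # some outside the known range

unc = ensemble_uncertainty(X_known, y_known, X_candidates)
print("next farm to measure:", int(np.argmax(unc)))
```

Running the experiment on the highest-uncertainty farm, retraining, and repeating is the basic active-learning loop; candidates far from the farms already measured tend to show the largest ensemble disagreement.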
Many farmers aren’t currently using fertilizers targeted to specific soil or various stages of farming — so fertilizer producers are another vested interest in this agriculture economy. To help these farmers get access to better loan terms, Moroccan phosphate company OCP is funding a collaboration between IDSS researchers and Mohammed VI Polytechnic University (UM6P) in Morocco. This research collaboration with OCP, a leading global company in the phosphate fertilizer industry, includes building the data- and risk-sharing platform as well as other foundational research in agriculture. The collaboration has the potential to engage other stakeholders working or investing in African agriculture. “This collaboration will help accelerate our efforts to develop pertinent solutions for African agriculture using high-level agri-tech tools,” says Fassil Kebede, professor of soil science and head of the Center for Soil and Fertilizer Research in Africa. “This will offer farmers possibilities for better production and growth, which is part of our mission to contribute to Africa’s food-security objectives.” “African farmers are at the heart of the OCP Group’s mission and strategy, while data analytics and predictive tools are today essential for agriculture development in Africa,” adds Mostafa Terrab, OCP Group chair and CEO. “This collaboration with IDSS will help us bring together new technology and analytical methods from one side, and our expertise with African farmers and their challenges from the other side. It will reinforce our capabilities to offer adapted solutions to African farmers, especially small holders, to enable them to make more precise and timely decisions.” Ultimately, IDSS aims to bring wins across an entire economic ecosystem, from insurers to lenders to equipment and fertilizer companies. 
But most importantly, boosting this ecosystem could help lift many farmers out of poverty — and bring about a much-needed increase in the world’s aggregate food production. Says Dahleh: “To accomplish this mission, this project will demonstrate the power of data coupled with advanced tools from predictive analytics, machine learning, reinforcement learning, and data sharing markets.”
New IDSS research “will demonstrate the power of data coupled with advanced tools from predictive analytics, machine learning, reinforcement learning, and data sharing markets,” says IDSS Director Munther Dahleh.
https://news.mit.edu/2019/j-wafs-announces-seven-new-seed-grants-0529 Nine principal investigators from MIT will receive grants totaling over $1 million for solutions-oriented research into global food and water challenges.
Wed, 29 May 2019 14:20:01 -0400 https://news.mit.edu/2019/j-wafs-announces-seven-new-seed-grants-0529 Andi Sutton | Abdul Latif Jameel Water and Food Systems Lab
Agricultural productivity technologies for small-holder farmers; food safety solutions for everyday consumers; sustainable supply chain interventions in the palm oil industry; water purification methods filtering dangerous micropollutants from industrial and wastewater streams — these are just a few of the research-based solutions being supported by the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) at MIT. J-WAFS is funding these and other projects through its fifth round of seed grants, providing over $1 million in funding to the MIT research community. These grants, which are awarded competitively to MIT principal investigators (PIs) across all five schools at the Institute, exemplify the ambitious goals of MIT’s Institute-wide effort to address global water and food systems challenges through research and innovation. This year, seven new projects led by nine faculty PIs across all five schools will be funded with two-year grants of up to $150,000, overhead-free.
Interest in water and food systems research at MIT is substantial, and growing. By the close of this grant cycle, over 12 percent of MIT faculty will have submitted J-WAFS grant proposals. Thirty-four principal investigators submitted proposals to this latest call, nearly one third of whom were proposing to J-WAFS for the first time. “The broad range of disciplines that this applicant pool represents demonstrates how meeting today’s water and food challenges is motivating many diverse researchers in our community,” comments Renee Robins, executive director of J-WAFS. “Our reach across all of MIT’s schools further attests to the strength of the Institute’s capabilities that can be applied to the search for solutions to pressing water and food sector challenges.” The nine faculty who were funded represent eight departments and labs, including the departments of Civil and Environmental Engineering, Mechanical Engineering, Chemical Engineering, Chemistry, and Economics, as well as the Media Lab (School of Architecture and Planning), MIT D-Lab (Office of the Vice Chancellor), and the Sloan School of Management.
New approaches to ensure safe drinking water
Nearly 1 billion people worldwide receive their drinking water through underground pipes that only operate intermittently. In contrast to continuous water supplies, pipes like these that are only filled with water during limited supply periods are vulnerable to contamination. However, it is challenging to quantify the quality of water that comes out of these pipes because of the vast differences in how the pipe networks are arranged and where they are located, especially in dense urban settings. Andrew J. Whittle, the Edmund K. Turner Professor in Civil Engineering, seeks to address this problem by gathering and making available more precise data on how water quality is affected by how the pipe is used — i.e., during periods of filling, flushing, or stagnation.
Supported by the seed grant, he and his research team will perform tests in a section of abandoned pipe in Singapore, one that is still connected to the urban water pipe network there. By controlling flushing rates, monitoring stagnation, and measuring contamination, the study will analyze how variances in flow affect water quality, and evaluate how these data might be able to inform future water quality studies in cities with similar piped water challenges. Patrick Doyle, the Robert T. Haslam (1911) Professor of Chemical Engineering, is taking a different approach to water quality: creating a filter to remove micropollutants. Wastewater from industrial and agricultural processes often contains solvents, petrochemicals, lubricants, pharmaceuticals, hormones, and pesticides, which can enter natural water systems. While these micropollutants may be present at low concentrations, they can still have a significant negative impact on aquatic ecosystems, as well as human health. The challenge is in detecting and removing these micropollutants, because of the low concentrations in which they occur. For this project, Doyle and his team will develop a system to remove a variety of micropollutants, at even the smallest concentrations, using a special hydrogel particle that can be “tuned” to fit the size and shape of particular particles. Leveraging the flexibility of these hydrogels, this technology can improve the speed, precision, efficiency, and environmental sustainability of industrial water purification systems, and improve the health of the natural water systems upon which humans and our surrounding ecosystems rely.
Developing support tools for small-holder farmers
More than half of food calories consumed globally — and 70 percent of food calories consumed in developing countries — are supplied by approximately 475 million small-holder households in developing and emerging economies.
These farmers typically operate through informal contracts and processes, which can lead to large economic inefficiencies and lack of traceability in the supply chains that they are a part of. Joann de Zegher, the Maurice F. Strong Career Development Professor in the operations management program at the MIT Sloan School of Management, seeks to address these challenges by developing a mobile-based trading platform that links small-holder farmers, middlemen, and mills in the palm oil supply chain in Indonesia. Rapid growth in demand in this industry has led to high environmental costs, and, more recently, pressure from consumers and nongovernmental organizations has been motivating producers to employ more sustainable practices. However, these pressures deepen market access challenges for small-holder palm oil farmers. De Zegher’s project seeks to improve the efficiency and effectiveness of the current supply chain, and create transparency as a byproduct. Another small-holder farmer intervention is being developed by Robert M. Townsend, the Elizabeth and James Killian Professor of Economics. He is leading a research effort to improve access to crop insurance for small-holder farmers, who are particularly vulnerable to weather-related crop failures. Crop cultivation worldwide is highly vulnerable to unfavorable weather. In developing countries, farmers bear the financial burden of their crops’ exposure to weather ravages, the extent of which will only increase due to the effects of climate change. As a result, they rely on low-risk, low-yield cultivation practices that do not allow for the food and financial gains that can be possible when favorable weather supports higher yields. While crop insurance can help, it is often prohibitively expensive for these small-scale producers.
Townsend and his research team seek to make crop insurance more accessible and affordable for farmers in developing regions by developing a new system of insurance pricing and payoff schedules that takes into account the widely varying ways through which weather affects a crop’s development and yield throughout the growth cycle. Their goal is to provide a new, personalized insurance tool that improves farmers’ ability to protect their yields, invest in their crops, and adapt to climate change in order to stabilize food supply and farmer livelihoods worldwide. Access to affordable fertilizer is another challenge that small-holders face. Ammonia is the key ingredient in fertilizers; however, most of the world’s supply is produced by the Haber-Bosch process, which directly converts nitrogen and hydrogen gas to ammonia in a highly capital-intensive process that is difficult to downscale. Finding an alternative way to synthesize ammonia could transform access to fertilizer and improve food security, particularly in the developing world where current fertilizers are prohibitively expensive. For this seed grant project, Yogesh Surendranath, Paul M. Cook Career Development Assistant Professor in the Department of Chemistry, will develop an electrochemical process to synthesize ammonia, one that can be powered using renewable energy sources such as solar or wind. Designed to be implemented in a decentralized way, this technology could enable fertilizer production directly in the fields where it is needed, and would be especially beneficial in developing regions without access to existing ammonia production infrastructure. Even when crops produce high yields, post-harvest preservation is a challenge, especially for fruit and vegetable farmers on small plots of land in developing regions. The lack of affordable and effective post-harvest vegetable cooling and storage poses a significant challenge for them, and can lead to vegetable spoilage, reduced income, and lost time.
Most techniques for cooling and storing vegetables rely on electricity, which is either unaffordable or unavailable for many small-holder farmers, especially those living on less than $3 per day in remote areas. The solution proposed by an interdisciplinary team led by Daniel Frey, professor in the Department of Mechanical Engineering and D-Lab faculty director, along with Leon Glicksman, professor of architecture and mechanical engineering, is a storage technology that uses the natural evaporation of water to create a cool and humid environment that prevents rot and dehydration, all without the need for electricity. This system is particularly suited for hot, dry regions such as Kenya, where the research team will be focusing their efforts. The research will be conducted in partnership with researchers from University of Nairobi’s Department of Plant Science and Crop Protection, who have extensive experience working with low-income rural communities on issues related to horticulture and improving livelihoods. The team will build and test evaporative cooling chambers in rural Kenya to optimize the design for performance, practical construction, and user preferences, and will build evidence for funders and implementing organizations to support the dissemination of these systems to address post-harvest storage challenges.
Combating food safety challenges through wireless sensors
Food safety is a matter of global concern, and a subject that several J-WAFS-funded researchers seek to tackle with innovative technologies. And for good reason: Food contamination and foodborne pathogens cause sickness and even death, as well as significant economic costs including the wasted labor and resources that occur when a contaminated product is disposed of, the lost profit to affected companies, and the lost food products that could have nourished a number of people.
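An aside on the evaporative cooling chambers described above: the coldest temperature evaporation alone can reach is bounded by the air's wet-bulb temperature, so the technology's headroom at a given site can be estimated from local temperature and humidity alone. A rough sketch using Stull's empirical wet-bulb approximation (valid for moderate humidities; the site conditions below are hypothetical, not measurements from the Kenya study):

```python
import math

def wet_bulb_stull(t_c: float, rh_pct: float) -> float:
    """Approximate wet-bulb temperature (deg C) from dry-bulb
    temperature (deg C) and relative humidity (%), per Stull (2011)."""
    return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(t_c + rh_pct)
            - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

# A hot, dry afternoon leaves a large cooling margin below ambient.
t, rh = 35.0, 20.0
print(f"{t} C at {rh}% RH -> wet-bulb about {wet_bulb_stull(t, rh):.1f} C")
```

A chamber cannot cool below this bound, and humid air shrinks the margin toward zero, which is why the design targets hot, dry regions.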
Fadel Adib, an assistant professor at the MIT Media Lab, will receive a seed grant to develop a new tool that quickly and accurately assesses whether a given food product is contaminated. This food safety sensor uses wireless signals to determine the quality and safety of packaged food using a radio-frequency identification sticker placed on the product’s container. The system turns off-the-shelf RFID tags into spectroscopes which, when read, can measure the material contents of a product without the need to open its package. The sensor can also identify the presence of contaminants — pathogens as well as adulterants that affect the nutritional quality of the food product. If successful, this research, and the technology that results, will pave the way for wireless sensing technologies that can inform their users about the health and safety of their food and drink. With these seven newly funded projects, J-WAFS will have funded 37 total seed research projects since its founding in 2014. These grants serve as important catalysts of new water and food sector research at MIT, resulting in publications, patents, and other significant research support. To date, J-WAFS’ seed grant PIs have been awarded over $11M in follow-on funding. J-WAFS’ director, Professor John Lienhard, commented on the influence of this grant program: “The betterment of society drives our research community at MIT. Water and food, our world’s most vital resources, are currently put at great risk by a variety of global-scale challenges, and MIT researchers are responding forcefully. Through this, and J-WAFS’ other grant programs, we see MIT’s creative innovations and actionable solutions that will help to ensure a sustainable future.”
J-WAFS Seed Grants, 2019
Learning Food and Water Contaminants using Wireless Signals PI: Fadel Adib, assistant professor, MIT Media Lab
Designing Supply Chain Platforms for Smallholders in Indonesia PI: Joann de Zegher, Maurice F.
Strong Career Development Professor, Sloan School of Management Microparticle Systems for the Removal of Organic Micropollutants PI: Patrick Doyle, Robert T. Haslam (1911) Professor of Chemical Engineering, Department of Chemical Engineering Evaporative Cooling Technologies for Vegetable Preservation in Kenya PIs: Daniel Frey, professor, Department of Mechanical Engineering, and faculty research director, MIT D-Lab; Leon Glicksman, professor of building technology and mechanical engineering, Department of Mechanical Engineering; Eric Verploegen, research engineer, MIT D-Lab Electrocatalytic Ammonia Synthesis for Distributed Agriculture PI: Yogesh Surendranath, Paul M Cook Career Development Assistant Professor, Department of Chemistry Designing Purely Weather-Contingent Crop Insurance with Personalized Coverage to Improve Farmers’ Investments in their Crops for Higher Yields PI:  Robert M. Townsend, Elizabeth and James Killian Professor of Economics, Department of Economics Understanding Effects of Intermittent Flow on Drinking Water Quality PI: Andrew J. Whittle, Edmund K. Turner Professor in Civil Engineering, Department of Civil and Environmental Engineering Agricultural productivity technologies for small-holder farmers; sustainable supply chain interventions in the palm oil industry; interventions that can provide clean water for cities and surrounding ecosystems — these are just a few of the research topics that grantees are pursuing, supported by the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) at MIT. https://news.mit.edu/2019/j-wafs-announces-seven-new-seed-grants-0529 Nine principal investigators from MIT will receive grants totaling over $1 million for solutions-oriented research into global food and water challenges. 
Wed, 29 May 2019 14:20:01 -0400 https://news.mit.edu/2019/j-wafs-announces-seven-new-seed-grants-0529 Andi Sutton | Abdul Latif Jameel Water and Food Systems Lab Agricultural productivity technologies for small-holder farmers; food safety solutions for everyday consumers; sustainable supply chain interventions in the palm oil industry; water purification methods filtering dangerous micropollutants from industrial and wastewater streams — these are just a few of the research-based solutions being supported by the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) at MIT. J-WAFS is funding these and other projects through its fifth round of seed grants, providing over $1 million in funding to the MIT research community. These grants, which are funded competitively to MIT principal investigators (PIs) across all five schools at the Institute, exemplify the ambitious goals of MIT’s Institute-wide effort to address global water and food systems challenges through research and innovation.  This year, seven new projects led by nine faculty PIs across all five schools will be funded with two-year grants of up to $150,000, overhead-free. Interest in water and food systems research at MIT is substantial, and growing. By the close of this grant cycle, over 12 percent of MIT faculty will have submitted J-WAFS grant proposals. Thirty-four principal investigators submitted proposals to this latest call, nearly one third of whom were proposing to J-WAFS for the first time. “The broad range of disciplines that this applicant pool represents demonstrates how meeting today’s water and food challenges is motivating many diverse researchers in our community,” comments Renee Robins, executive director of J-WAFS. 
“Our reach across all of MIT’s schools further attests to the strength of the Institute’s capabilities that can be applied to the search for solutions to pressing water and food sector challenges.” The nine faculty who were funded represent eight departments and labs, including the departments of Civil and Environmental Engineering, Mechanical Engineering, Chemical Engineering, Chemistry, and Economics, as well as the Media Lab (School of Architecture and Planning), MIT D-Lab (Office of the Vice Chancellor), and the Sloan School of Management. New approaches to ensure safe drinking water Nearly 1 billion people worldwide receive their drinking water through underground pipes that only operate intermittently. In contrast to continuous water supplies, pipes like these that are only filled with water during limited supply periods are vulnerable to contamination. However, it is challenging to quantify the quality of water that comes out of these pipes because of the vast differences in how the pipe networks are arranged and where they are located, especially in dense urban settings. Andrew J. Whittle, the Edmund K. Turner Professor in Civil Engineering, seeks to address this problem by gathering and making available more precise data on how water quality is affected by how the pipe is used — i.e., during periods of filling, flushing, or stagnation. Supported by the seed grant, he and his research team will perform tests in a section of abandoned pipe in Singapore, one that is still connected to the urban water pipe network there. By controlling flushing rates, monitoring stagnation, and measuring contamination, the study will analyze how variances in flow affect water quality, and evaluate how these data might be able to inform future water quality studies in cities with similar piped water challenges. Patrick Doyle, the Robert T. 
Haslam (1911) Professor of Chemical Engineering, is taking a different approach to water quality: creating a filter to remove micropollutants. Wastewater from industrial and agricultural processes often contains solvents, petrochemicals, lubricants, pharmaceuticals, hormones, and pesticides, which can enter natural water systems. While these micropollutants may be present at low concentrations, they can still have a significant negative impact on aquatic ecosystems, as well as human health. The challenge is in detecting and removing these micropollutants, because of the low concentrations in which they occur. For this project, Doyle and his team will develop a system to remove a variety of micropollutants, at even the smallest concentrations, using a special hydrogel particle that can be “tuned” to fit the size and shape of particular particles. Leveraging the flexibility of these hydrogels, this technology can improve the speed, precision, efficiency, and environmental sustainability of industrial water purification systems, and improve the health of the natural water systems upon which humans and our surrounding ecosystems rely. Developing support tools for small-holder farmers More than half of food calories consumed globally — and 70 percent of food calories consumed in developing countries — are supplied by approximately 475 million small-holder households in developing and emerging economies. These farmers typically operate through informal contracts and processes, which can lead to large economic inefficiencies and lack of traceability in the supply chains that they are a part of. Joann de Zegher, the Maurice F. Strong Career Development Professor in the operations management program at the MIT Sloan School of Management, seeks to address these challenges by developing a mobile-based trading platform that links small-holder farmers, middlemen, and mills in the palm oil supply chain in Indonesia. 
Rapid growth in demand in this industry has led to high environmental costs, and recently pressure from consumers and nongovernmental organizations is motivating producers to employ more sustainable practices. However, these pressures deepen market access challenges for small-holder palm oil farmers. Her project seeks to improve the efficiency and effectiveness of the current supply chain, and create transparency as a byproduct. Another small-holder farmer intervention is being developed by Robert M. Townsend, the Elizabeth and James Killian Professor of Economics. He is leading a research effort to improve access to crop insurance for small-holder farmers, who are particularly vulnerable to weather-related crop failures. Crop cultivation worldwide is highly vulnerable to unfavorable weather. In developing countries, farmers bear the financial burden of their crops’ exposure to weather ravages, the extent of which will only increase due to the effects of climate change. As a result, they rely on low-risk, low-yield cultivation practices that do not allow for the food and financial gains that can be possible when favorable weather supports higher yields. While crop insurance can help, it is often prohibitively expensive for these small-scale producers. Townsend and his research team seek to make crop insurance more accessible and affordable for farmers in developing regions by developing a new system of insurance pricing and payoff schedules that takes into account the widely varying ways through which weather affects crop’s development and yield throughout the growth cycle. Their goal is to provide a new, personalized insurance tool that improves farmers’ ability to protect their yields, invest in their crops, and adapt to climate change in order to stabilize food supply and farmer livelihoods worldwide.  Access to affordable fertilizer is another challenge that small holders face. 
Ammonia is the key ingredient in fertilizers; however, most of the world’s supply is produced by the Haber-Bosch process, which directly converts nitrogen and hydrogen gas to ammonia in a highly capital-intensive process that is difficult to downscale. Finding an alternative way to synthesize ammonia could transform access to fertilizer and improve food security, particularly in the developing world where current fertilizers are prohibitively expensive. For this seed grant project, Yogesh Surendranath, Paul M Cook Career Development Assistant Professor in the Department of Chemistry, will develop an electrochemical process to synthesize ammonia, one that can be powered using renewable energy sources such as solar or wind. Designed to be implemented in a decentralized way, this technology could enable fertilizer production directly in the fields where it is needed, and would be especially beneficial in developing regions without access to existing ammonia production infrastructure. Even when crops produce high yields, post-harvest preservation is a challenge, especially to fruit and vegetable farmers on small plots of land in developing regions. The lack of affordable and effective post-harvest vegetable cooling and storage poses a significant challenge for them, and can lead to vegetable spoilage, reduced income, and lost time. Most techniques for cooling and storing vegetables rely on electricity, which is either unaffordable or unavailable for many small-holder farmers, especially those living on less than $3 per day in remote areas. The solution posed by an interdisciplinary team led by Daniel Frey, professor in the Department of Mechanical Engineering and D-Lab faculty director, along with Leon Glicksman, professor of architecture and mechanical engineering, is a storage technology that uses the natural evaporation of water to create a cool and humid environment that prevents rot and dehydration, all without the need for electricity. 
This system is particularly suited for hot, dry regions such as Kenya, where the research team will be focusing their efforts. The research will be conducted in partnership with researchers from University of Nairobi’s Department of Plant Science and Crop Protection, who have extensive experience working with low-income rural communities on issues related to horticulture and improving livelihoods. The team will build and test evaporative cooling chambers in rural Kenya to optimize the design for performance, practical construction, and user preferences, and will build evidence for funders and implementing organizations to support the dissemination of these systems to improve post-harvest storage challenges. Combatting food safety challenges through wireless sensors Food safety is a matter of global concern, and a subject that several J-WAFS-funded researchers seek to tackle with innovative technologies. And for good reason: Food contamination and foodborne pathogens cause sickness and even death, as well as significant economic costs including the wasted labor and resources that occur when a contaminated product is disposed of, the lost profit to affected companies, and the lost food products that could have nourished a number of people. Fadel Adib, an assistant professor at the MIT Media Lab, will receive a seed grant to develop a new tool that quickly and accurately assesses whether a given food product is contaminated. This food safety sensor uses wireless signals to determine the quality and safety of packaged food using a radio-frequency identification sticker placed on the product’s container. The system turns off-the-shelf RFID tags into spectroscopes which, when read, can measure the material contents of a product without the need to open its package. The sensor can also identify the presence of contaminants — pathogens as well as adulterants that affect the nutritional quality of the food product. 
If successful, this research, and the technology that results, will pave the way for wireless sensing technologies that can inform their users about the health and safety of their food and drink. With these seven newly funded projects, J-WAFS will have funded 37 total seed research projects since its founding in 2014. These grants serve as important catalysts of new water and food sector research at MIT, resulting in publications, patents, and other significant research support. To date, J-WAFS’ seed grant PIs have been awarded over $11M in follow-on funding. J-WAFS’ director, Professor John Lienhard, commented on the influence of this grant program: “The betterment of society drives our research community at MIT. Water and food, our world’s most vital resources, are currently put at great risk by a variety of global-scale challenges, and MIT researchers are responding forcefully. Through this, and J-WAFS’ other grant programs, we see MIT’s creative innovations and actionable solutions that will help to ensure a sustainable future.”

J-WAFS Seed Grants, 2019

Learning Food and Water Contaminants using Wireless Signals. PI: Fadel Adib, assistant professor, MIT Media Lab
Designing Supply Chain Platforms for Smallholders in Indonesia. PI: Joann de Zegher, Maurice F. Strong Career Development Professor, Sloan School of Management
Microparticle Systems for the Removal of Organic Micropollutants. PI: Patrick Doyle, Robert T. Haslam (1911) Professor of Chemical Engineering, Department of Chemical Engineering
Evaporative Cooling Technologies for Vegetable Preservation in Kenya. PIs: Daniel Frey, professor, Department of Mechanical Engineering, and faculty research director, MIT D-Lab; Leon Glicksman, professor of building technology and mechanical engineering, Department of Mechanical Engineering; Eric Verploegen, research engineer, MIT D-Lab
Electrocatalytic Ammonia Synthesis for Distributed Agriculture. PI: Yogesh Surendranath, Paul M. Cook Career Development Assistant Professor, Department of Chemistry
Designing Purely Weather-Contingent Crop Insurance with Personalized Coverage to Improve Farmers’ Investments in their Crops for Higher Yields. PI: Robert M. Townsend, Elizabeth and James Killian Professor of Economics, Department of Economics
Understanding Effects of Intermittent Flow on Drinking Water Quality. PI: Andrew J. Whittle, Edmund K. Turner Professor in Civil Engineering, Department of Civil and Environmental Engineering

Agricultural productivity technologies for small-holder farmers; sustainable supply chain interventions in the palm oil industry; interventions that can provide clean water for cities and surrounding ecosystems — these are just a few of the research topics that grantees are pursuing, supported by the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) at MIT. https://news.mit.edu/2019/mit-team-nasa-big-idea-challenge-martian-greenhouse-0520 Multilevel Mars greenhouse could provide food to sustain astronauts for several years. Mon, 20 May 2019 13:00:01 -0400 https://news.mit.edu/2019/mit-team-nasa-big-idea-challenge-martian-greenhouse-0520 Sarah Jensen | Department of Aeronautics and Astronautics An MIT student team took second place for its design of a multilevel greenhouse to be used on Mars in NASA’s 2019 Breakthrough, Innovative and Game-changing (BIG) Idea Challenge last month.  
Each year, NASA holds the BIG Idea competition in its search for innovative and futuristic ideas. This year’s challenge invited universities across the United States to submit designs for a sustainable, cost-effective, and efficient method of supplying food to astronauts during future crewed explorations of Mars. Dartmouth College was awarded first place in this year’s closely contested challenge. “This was definitely a full-team success,” says team leader Eric Hinterman, a graduate student in MIT’s Department of Aeronautics and Astronautics (AeroAstro). The team had contributions from 10 undergraduates and graduate students from across MIT departments. Support and assistance were provided by four architects and designers in Italy. This project was completely voluntary; all 14 contributors share a similar passion for space exploration and enjoyed working on the challenge in their spare time. The MIT team dubbed its design “BEAVER” (Biosphere Engineered Architecture for Viable Extraterrestrial Residence). “We designed our greenhouse to provide 100 percent of the food requirements for four active astronauts every day for two years,” explains Hinterman. The ecologists and agriculture specialists on the MIT team identified eight types of crops to provide the calories, protein, carbohydrates, and oils and fats that astronauts would need; these included potatoes, rice, wheat, oats, and peanuts. The flexible menu suggested substitutes, depending on astronauts’ specific dietary requirements. “Most space systems are metallic and very robotic,” Hinterman says. “It was fun working on something involving plants.” Parameters provided by NASA — a power budget, dimensions necessary for transporting by rocket, the capacity to provide adequate sustenance — drove the shape and the overall design of the greenhouse. Last October, the team held an initial brainstorming session and pitched project ideas. 
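The headline requirement, 100 percent of the food for four active astronauts every day for two years, can be roughed out with simple arithmetic. The 3,000 kcal/day and 3,500 kcal/kg figures below are assumed values for illustration, not numbers from the team's report.

```python
# Back-of-envelope food-energy budget for the stated design goal:
# all food for four active astronauts for two years.
crew = 4
kcal_per_person_per_day = 3000  # assumed daily need for an active astronaut
days = 2 * 365

total_kcal = crew * kcal_per_person_per_day * days
print(f"Energy to grow: {total_kcal:,} kcal")  # 8,760,000 kcal

# Rough harvested mass, assuming ~3,500 kcal/kg averaged over
# calorie-dense crops (grains, potatoes, peanuts); also an assumed figure.
kcal_per_kg = 3500
print(f"~{total_kcal / kcal_per_kg:,.0f} kg of food over the mission")
```

Even under these generous assumptions, the greenhouse must produce a few metric tons of edible mass, which is why the design squeezes growing area into a continuous spiral rather than flat shelves.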
The iterative process continued until they reached their final design: a cylindrical growing space 11.2 meters in diameter and 13.4 meters tall after deployment. An innovative design The greenhouse would be packaged inside a rocket bound for Mars and, after landing, a waiting robot would move it to its site. Programmed with folding mechanisms, it would then expand horizontally and vertically and begin forming an ice shield around its exterior to protect plants and humans from the intense radiation on the Martian surface. Two years later, when the orbits of Earth and Mars were again in optimal alignment for launching and landing, a crew would arrive on Mars, where they would complete the greenhouse setup and begin growing crops. “About every two years, the crew would leave and a new crew of four would arrive and continue to use the greenhouse,” explains Hinterman. To maximize space, BEAVER employs a large spiral that moves around a central core within the cylinder. Seedlings are planted at the top and flow down the spiral as they grow. By the time they reach the bottom, the plants are ready for harvesting, and the crew enters at the ground floor to harvest the potatoes, peanuts, and grains. The planting trays are then moved to the top of the spiral, and the process begins again. “A lot of engineering went into the spiral,” says Hinterman. “Most of it is done without any moving parts or mechanical systems, which makes it ideal for space applications. You don’t want a lot of moving parts or things that can break.” The human factor “One of the big issues with sending humans into space is that they will be confined to seeing the same people every day for a couple of years,” Hinterman explains. “They’ll be living in an enclosed environment with very little personal space.” The greenhouse provides a pleasant area to ensure astronauts’ psychological well-being. On the top floor, just above the spiral, a windowed “mental relaxation area” overlooks the greenery. 
The ice shield admits natural light, and the crew can lounge on couches and enjoy the view of the Mars landscape. And rather than running pipes from the water tank at the top level down to the crops, Hinterman and his team designed a cascading waterfall at the area’s periphery, further adding to the ambiance. Sophomore Sheila Baber, an Earth, atmospheric, and planetary sciences (EAPS) major and the team’s ecology lead, was eager to take part in the project. “My grandmother used to farm in the mountains in Korea, and I remember going there and picking the crops,” she says. “Coming to MIT, I felt like I was distanced from my roots. I am interested in life sciences and physics and all things space, and this gave me the opportunity to combine all those.” Her work on BEAVER earned Baber one of five NASA internships at Langley Research Center in Hampton, Virginia, this summer. She expects to continue exploration of the greenhouse project and its applications on Earth, such as in urban settings where space for growing food is constrained. “Some of the agricultural decisions that we made about hydroponics and aquaponics could potentially be used in environments on Earth to raise food,” she says. “The MIT team was great to work with,” says Hinterman. “They were very enthusiastic and hardworking, and we came up with a great design as a result.” In addition to Baber and Hinterman, team members included Siranush Babakhanova (Physics), Joe Kusters (AeroAstro), Hans Nowak (Leaders for Global Operations), Tajana Schneiderman (EAPS), Sam Seaman (Architecture), Tommy Smith (System Design and Management), Natasha Stamler (Mechanical Engineering and Urban Studies and Planning), and Zhuchang Zhan (EAPS). Assistance was provided by Italian designers and architects Jana Lukic, Fabio Maffia, Aldo Moccia, and Samuele Sciarretta. The team’s advisors were Jeff Hoffman, Sara Seager, Matt Silver, Vladimir Aerapetian, Valentina Sumini, and George Lordos. 
The BIG Idea Challenge is sponsored by NASA’s Space Technology Mission Directorate’s Game Changing Development program and managed by the National Institute of Aerospace. MIT team that designed BEAVER (Biosphere Engineered Architecture for Viable Extraterrestrial Residence), a proposed Martian greenhouse that could provide 100 percent of the food required by four astronauts for up to two years Photo: William Litant https://news.mit.edu/2019/seed-fund-addresses-indian-food-water-agriculture-0509 MIT-India, J-WAFS, and the Indian Institute of Technology Ropar launch fund to facilitate collaborations between faculty and scientists from MIT and IIT Ropar. Thu, 09 May 2019 12:00:01 -0400 https://news.mit.edu/2019/seed-fund-addresses-indian-food-water-agriculture-0509 Madeline Smith | MISTI Representatives of the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS), MIT-India, and the Indian Institute of Technology Ropar (IIT Ropar) gathered recently for a signing ceremony to formally launch a new faculty seed fund. The seed fund will help initiate new water- and food systems-related research collaborations between faculty and research scientists from MIT and IIT Ropar. Through it, MIT will provide grants for early-stage research projects on topics primarily related to water, food, and agriculture. “Our goal is to establish a pathway for research collaboration with IIT Ropar’s faculty and students on the world’s pressing challenges around water supply and food security. The interchange will provide MIT researchers with a direct window into the agricultural and water resources environment of Punjab and Himachal Pradesh in India,” says John Lienhard, J-WAFS director and Abdul Latif Jameel Professor of Water at MIT. J-WAFS promotes and supports research at MIT, with a mission to make meaningful contributions to solving the diverse challenges surrounding the world’s food and water needs. 
Through research funding and other activities, it supports the development and deployment of effective technologies, programs, and policies to address concerns stemming from population growth, climate change, urbanization, and development. “IIT Ropar is a fast-rising research institution, already highly ranked for research citations in India,” says Lienhard. “IIT Ropar lies in one of India’s most important agricultural regions and will be an excellent partner for research around water and food.” IIT Ropar is one of eight new Indian Institutes of Technology (IITs) set up by the government of India to expand the reach and enhance the quality of technical education in India. The IITs emphasize research as a primary focus of the institution. Director of IIT Ropar Professor Sarit K. Das, who has also served as a visiting professor at MIT in 2007 and 2011, spoke at the ceremony about the need for research to address these critical issues. “We are one of the new generation IITs, and we want a very large focus on research,” he says. “There are large problems with agriculture, with water resources, and we want this as one of our focus areas. This is where J-WAFS comes in; we decided that we must join hands together to do something.” To provide these opportunities for joint research, J-WAFS and IIT Ropar will work with MIT-India, part of MIT International Science and Technology Initiatives (MISTI), to expand the outreach to MIT’s research community.  “This is a natural partnership for us,” says Renee Robins, executive director of J-WAFS. 
“Most faculty members are aware of MISTI Global Seed Funds, and the MIT-India program has a long track record of successful international collaborations, with established infrastructure for program management and proposal review.” “We are committed to working with our MIT colleagues through these interdisciplinary initiatives to address the research interests of our MIT community and our Indian colleagues,” says Mala Ghosh, managing director of MIT-India. “This new seed fund will create a cross-fertilization of ideas in critical areas, generate student involvement, and link the overlapping networks of J-WAFS, MIT-India, and IIT, thereby launching an even more robust and effective research environment.” The MIT-IIT Ropar Seed Fund will become a part of the MISTI Global Seed Funds. Open to faculty and researchers from MIT and IIT Ropar who are pursuing water- and food-related research, MISTI Global Seed Funds create opportunities for international cooperation by funding early-stage collaboration between MIT researchers and their counterparts around the world. The call for proposals will open in May with a deadline in September. A new agreement will help initiate research collaborations between scholars at MIT and IIT-Ropar that respond to water- and food-sector challenges in India. IIT Ropar is located in Punjab, a primarily agriculture-based region of India, and has interdisciplinary centers focusing on water and agriculture. Photo courtesy of MISTI 
MIT News – Environment | Climate

https://news.mit.edu/2020/two-research-projects-receive-funding-advance-technologies-avoid-carbon-emissions-0820 Asegun Henry, Paul Barton, and Matěj Peč will lead research supported by the MIT Energy Initiative’s Carbon Capture, Utilization, and Storage Center. Thu, 20 Aug 2020 14:00:00 -0400 https://news.mit.edu/2020/two-research-projects-receive-funding-advance-technologies-avoid-carbon-emissions-0820 Emily Dahl | MIT Energy Initiative The Carbon Capture, Utilization, and Storage Center, one of the MIT Energy Initiative (MITEI)’s Low-Carbon Energy Centers, has awarded $900,000 in funding to two new research projects to advance technologies that avoid carbon dioxide (CO2) emissions into the atmosphere and help address climate change. The winning project is receiving $750,000, and an additional project receives $150,000. The winning project, led by principal investigator Asegun Henry, the Robert N. 
Noyce Career Development Professor in the Department of Mechanical Engineering, and co-principal investigator Paul Barton, the Lammot du Pont Professor of Chemical Engineering, aims to produce hydrogen without CO2 emissions while creating a second revenue stream of solid carbon. The additional project, led by principal investigator Matěj Peč, the Victor P. Starr Career Development Chair in the Department of Earth, Atmospheric and Planetary Sciences, seeks to expand understanding of new processes for storing CO2 in basaltic rocks by converting it from an aqueous solution into carbonate minerals. Carbon capture, utilization, and storage (CCUS) technologies have the potential to play an important role in limiting or reducing the amount of CO2 in the atmosphere, as part of a suite of approaches to mitigating climate change that includes renewable energy and energy efficiency technologies, as well as policy measures. While some CCUS technologies are being deployed at the scale of a million tons of CO2 per year, there is a substantial need to improve the cost and performance of those technologies and to advance more nascent technologies. MITEI’s CCUS center is working to meet these challenges with a cohort of industry members that are supporting promising MIT research, such as these newly funded projects. A new process for producing hydrogen without CO2 emissions Henry and Barton’s project, “Lower cost, CO2-free, H2 production from CH4 using liquid tin,” investigates the use of methane pyrolysis instead of steam methane reforming (SMR) for hydrogen production. Currently, hydrogen production accounts for approximately 1 percent of global CO2 emissions, and the predominant production method is SMR. The SMR process relies on the formation of CO2, so replacing it with another economically competitive approach to making hydrogen would avoid emissions.  
“Hydrogen is essential to modern life, as it is primarily used to make ammonia for fertilizer, which plays an indispensable role in feeding the world’s 7.5 billion people,” says Henry. “But we need to be able to feed a growing population and take advantage of hydrogen’s potential as a carbon-free fuel source by eliminating CO2 emissions from hydrogen production. Our process results in a solid carbon byproduct, rather than CO2 gas. The sale of the solid carbon lowers the minimum price at which hydrogen can be sold to break even with the current, CO2 emissions-intensive process.” Henry and Barton’s work is a new take on an existing process, pyrolysis of methane. Like SMR, methane pyrolysis uses methane as the source of hydrogen, but follows a different pathway. SMR uses the oxygen in water to liberate the hydrogen by preferentially bonding oxygen to the carbon in methane, producing CO2 gas in the process. In methane pyrolysis, the methane is heated to such a high temperature that the molecule itself becomes unstable and decomposes into hydrogen gas and solid carbon — a much more valuable byproduct than CO2 gas. Although the idea of methane pyrolysis has existed for many years, it has been difficult to commercialize because of the formation of the solid byproduct, which can deposit on the walls of the reactor, eventually plugging it up. This issue makes the process impractical. Henry and Barton’s project uses a new approach in which the reaction is facilitated with inert molten tin, which prevents the plugging from occurring. The proposed approach is enabled by recent advances in Henry’s lab that enable the flow and containment of liquid metal at extreme temperatures without leakage or material degradation.  
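The emissions contrast between the two routes follows directly from stoichiometry: SMR is, overall, CH4 + 2 H2O -> CO2 + 4 H2, while pyrolysis is CH4 -> C(s) + 2 H2. The sketch below computes the theoretical minimums per kilogram of hydrogen; real SMR plants emit more (roughly 9 to 10 kg CO2 per kg H2 once process energy is included), so these are illustrative floors, not plant data.

```python
# Stoichiometric comparison of the two hydrogen production routes.
# Molar masses in g/mol.
M_H2, M_CO2, M_C = 2.016, 44.01, 12.011

# SMR overall: CH4 + 2 H2O -> CO2 + 4 H2  (1 mol CO2 per 4 mol H2)
co2_per_kg_h2 = (1 * M_CO2) / (4 * M_H2)   # kg CO2 per kg H2, theoretical floor

# Pyrolysis: CH4 -> C(s) + 2 H2  (1 mol solid carbon per 2 mol H2, zero CO2)
c_per_kg_h2 = (1 * M_C) / (2 * M_H2)       # kg solid carbon per kg H2

print(f"SMR theoretical floor: {co2_per_kg_h2:.2f} kg CO2 / kg H2")
print(f"Pyrolysis: 0 kg CO2, {c_per_kg_h2:.2f} kg solid carbon / kg H2")
```

The roughly 3 kg of solid carbon co-produced per kilogram of hydrogen is the second revenue stream the project counts on to bring the break-even hydrogen price down.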
Studying CO2 storage in basaltic reservoirs With his project, “High-fidelity monitoring for carbon sequestration: integrated geophysical and geochemical investigation of field and laboratory data,” Peč plans to conduct a comprehensive study to gain a holistic understanding of the coupled chemo-mechanical processes that accompany CO2 storage in basaltic reservoirs, with hopes of increasing adoption of this technology. The Intergovernmental Panel on Climate Change estimates that 100 to 1,000 gigatonnes of CO2 must be removed from the atmosphere by the end of the century. Such large volumes can only be stored below the Earth’s surface, and that storage must be accomplished safely and securely, without allowing any leakage back into the atmosphere. One promising storage strategy is CO2 mineralization — specifically by dissolving gaseous CO2 in water, which then reacts with reservoir rocks to form carbonate minerals. Of the technologies proposed for carbon sequestration, this approach is unique in that the sequestration is permanent: the CO2 becomes part of an inert solid, so it cannot escape back into the environment. Basaltic rocks, the most common volcanic rock on Earth, present good sites for CO2 injection due to their widespread occurrence and high concentrations of divalent cations such as calcium and magnesium that can form carbonate minerals. In one study, more than 95 percent of the CO2 injected into a pilot site in Iceland was precipitated as carbonate minerals in less than two years. However, ensuring the subsurface integrity of geological formations during fluid injection and accurately evaluating the reaction rates in such reservoirs require targeted studies such as Peč’s. 
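The Iceland pilot result cited above, more than 95 percent of injected CO2 mineralized in under two years, implies remarkably fast kinetics, and a quick calculation shows the rate constant it corresponds to. The first-order model is a simplifying assumption for illustration; actual mineralization rates depend on rock chemistry, temperature, and fluid flow, which is exactly what studies like this one aim to pin down.

```python
import math

# If mineralization were first-order, the fraction of injected CO2
# remaining un-mineralized after time t would be exp(-k * t).
# Iceland pilot: >95% converted to carbonate minerals in <2 years.
fraction_remaining = 0.05
t_years = 2.0

k = -math.log(fraction_remaining) / t_years   # implied rate constant, 1/yr
half_life = math.log(2) / k                   # years

print(f"Implied rate constant k ~ {k:.2f} per year "
      f"(half-life ~ {half_life:.2f} yr)")
```

A half-life of well under a year is orders of magnitude faster than mineral trapping in conventional sandstone reservoirs, which is why basalt injection is considered so promising for permanent storage.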
“The funding by MITEI’s Low-Carbon Energy Center for Carbon Capture, Utilization, and Storage allows me to start a new research direction, bringing together a group of experts from a range of disciplines to tackle climate change, perhaps the greatest scientific challenge our generation is facing,” says Peč. The two projects were selected from a call for proposals that resulted in 15 entries by MIT researchers. “The application process revealed a great deal of interest from MIT researchers in advancing carbon capture, utilization, and storage processes and technologies,” says Bradford Hager, the Cecil and Ida Green Professor of Earth Sciences, who co-directs the CCUS center with T. Alan Hatton, the Ralph Landau Professor of Chemical Engineering. “The two projects funded through the center will result in fundamental, higher-risk research exploring novel approaches that have the potential to have high impact in the longer term. Given the short-term focus of the industry, projects like this might not have otherwise been funded, so having support for this kind of early-stage fundamental research is crucial.” Postdoc Tiange Xing conducts an experiment in the Peč Lab related to the group’s newly funded project to expand understanding of new processes for storing CO2 in basaltic rocks by converting it from an aqueous solution into carbonate minerals. Photo courtesy of the Peč Lab. https://news.mit.edu/2020/mitei-mobility-systems-center-awards-four-new-projects-low-carbon-transportation-research-0818 Topics include Covid-19 and urban mobility, strategies for electric vehicle charging networks, and infrastructure and economics for hydrogen-fueled transportation. 
Tue, 18 Aug 2020 14:00:00 -0400 https://news.mit.edu/2020/mitei-mobility-systems-center-awards-four-new-projects-low-carbon-transportation-research-0818 Turner Jackson | MIT Energy Initiative The Mobility Systems Center (MSC), one of the MIT Energy Initiative (MITEI)’s Low-Carbon Energy Centers, will fund four new research projects that will allow for deeper insights into achieving a decarbonized transportation sector. “Based on input from our Mobility Systems Center members, we have selected an excellent and diverse set of projects to initiate this summer,” says Randall Field, the center’s executive director. “The awarded projects will address a variety of pressing topics including the impacts of Covid-19 on urban mobility, strategies for electric vehicle charging networks, and infrastructure and economics for hydrogen-fueled transportation.” The projects are spearheaded by faculty and researchers from across the Institute, with experts in several fields including economics, urban planning, and energy systems. In addition to pursuing new avenues of research, the Mobility Systems Center also welcomes Jinhua Zhao as co-director. Zhao serves alongside Professor William H. Green, the Hoyt C. Hottel Professor in Chemical Engineering. Zhao is an associate professor in the Department of Urban Studies and Planning and the director of the JTL Urban Mobility Lab. He succeeds Sanjay Sarma, the vice president for open learning and the Fred Fort Flowers (1941) and Daniel Fort Flowers (1941) Professor of Mechanical Engineering. “Jinhua already has a strong relationship with mobility research at MITEI, having been a major contributor to MITEI’s Mobility of the Future study and serving as a principal investigator for MSC projects. He will provide excellent leadership to the center,” says MITEI Director Robert C. Armstrong, the Chevron Professor of Chemical Engineering. 
“We also thank Sanjay for his valuable leadership during the MSC’s inaugural year, and look forward to collaborating with him in his role as vice president for open learning — an area that is vitally important in MIT’s response to research and education in the Covid-19 era.” The impacts of Covid-19 on urban mobility The Covid-19 pandemic has transformed all aspects of life in a remarkably short amount of time, including how, when, and why people travel. In addition to becoming the center’s new co-director, Zhao will lead one of the MSC’s new projects to identify how Covid-19 has impacted use of, preferences toward, and energy consumption of different modes of urban transportation, including driving, walking, cycling, and most dramatically, ridesharing services and public transit. Zhao describes four primary objectives for the project. The first is to quantify large-scale behavioral and preference changes in response to the pandemic, tracking how these change from the beginning of the outbreak through the medium-term recovery period. Next, the project will break down these changes by sociodemographic groups, with a particular emphasis on low-income and marginalized communities. The project will then use these insights to posit how changes to infrastructure, equipment, and policies could help shape travel recovery to be more sustainable and equitable. Finally, Zhao and his research team will translate these behavioral changes into energy consumption and carbon dioxide emissions estimates. “We make two distinctions: first, between impacts on amount of travel (e.g., number of trips) and impacts on type of travel (e.g., mixture of different travel modes); and second, between temporary shocks and longer-term structural changes,” says Zhao. “Even when the coronavirus is no longer a threat to public health, we expect to see lasting effects on activity, destination, and mode preferences. 
These changes, in turn, affect energy consumption and emissions from the transportation sector.” The economics of electric vehicle charging In the transition toward a low-carbon transportation system, refueling infrastructure is crucial for the viability of any alternative fuel vehicle. Jing Li, an assistant professor in the MIT Sloan School of Management, aims to develop a model of consumer vehicle and travel choices based on data regarding travel patterns, electric vehicle (EV) charging demand, and EV adoption. Li’s research team will implement a two-pronged approach. First, they will quantify the value that each charging location provides to the rest of the refueling network, which may be greater than that location’s individual profitability due to network spillovers. Second, they will simulate the profits of EV charging networks and the adoption rates of EVs using different pricing and location strategies. “We hypothesize that some charging locations may not be privately profitable, but would be socially valuable. If so, then a charging network may increase profits by subsidizing entry at ‘missing’ locations that are underprovided by the market,” she says. If proven correct, this research could be valuable in making EVs accessible to broader portions of the population.  Cost reduction and emissions savings strategies for hydrogen mobility systems Hydrogen-based transportation and other energy services have long been discussed, but what role will they play in a clean energy transition? Jessika Trancik, an associate professor of energy studies in the Institute for Data, Systems, and Society, will examine and identify cost-reducing and emissions-saving mechanisms for hydrogen-fueled mobility services. 
She plans to analyze production and distribution scenarios, evolving technology costs, and the lifecycle greenhouse gas emissions of hydrogen-based mobility systems, considering both travel activity patterns and fluctuations in the primary energy supply for hydrogen production. “Modeling the mechanisms through which the design of hydrogen-based mobility systems can achieve lower costs and emissions can help inform the development of future infrastructure,” says Trancik. “Models and theory to inform this development can have a significant impact on whether or not hydrogen-based systems succeed in contributing measurably to the decarbonization of the transportation sector.” The goals for the project are threefold: quantifying the emissions and costs of hydrogen production and storage pathways, with a focus on the potential use of excess renewable energy; modeling costs and requirements of the distribution and refueling infrastructure for different forms of transportation, from personal vehicles to long-haul trucking, based on existing and projected demand; and modeling the costs and emissions associated with the use of hydrogen-fueled mobility services. Analysis of forms of hydrogen for use in transportation MITEI research scientist Emre Gençer will lead a team including Yang Shao-Horn, the W.M. Keck Professor of Energy in the Department of Materials Science and Engineering, and Dharik Mallapragada, a MITEI research scientist, to assess the alternative forms of hydrogen that could serve the transportation sector. This project will develop an end-to-end techno-economic and greenhouse gas emissions analysis of hydrogen-based energy supply chains for road transportation. The analysis will focus on two classes of supply chains: pure hydrogen (transported as a compressed gas or cryogenic liquid) and cyclic supply chains (based on liquid organic hydrogen carriers for powering on-road transportation).
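An end-to-end supply-chain analysis like the one described above ultimately rolls up costs at each stage of a pathway. A minimal sketch of that framing, with entirely hypothetical dollar figures (none of these numbers come from the project):

```python
# Toy comparison of two hydrogen delivery pathways. All cost figures
# ($/kg of hydrogen) are hypothetical placeholders for illustration only.

def delivered_cost(production, conditioning, distribution):
    """Cost per kg of hydrogen delivered to the vehicle ($/kg)."""
    return production + conditioning + distribution

# Pathway 1: pure hydrogen shipped as a compressed gas
# (conditioning = compression; distribution = tube trailers)
compressed = delivered_cost(production=2.0, conditioning=1.5, distribution=2.5)

# Pathway 2: cyclic supply chain using a liquid organic hydrogen carrier
# (conditioning = hydrogenation/dehydrogenation; cheaper liquid trucking)
lohc = delivered_cost(production=2.0, conditioning=2.2, distribution=1.2)

print(f"compressed gas: ${compressed:.2f}/kg, LOHC: ${lohc:.2f}/kg")
```

The interesting question, which this sketch only gestures at, is how each stage's cost and emissions change with scale, technology learning, and the carbon intensity of the primary energy used to make the hydrogen.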
The low energy density of gaseous hydrogen is currently a barrier to the large-scale deployment of hydrogen-based transportation; liquid carriers are a potential solution in enabling an energy-dense means for storing and delivering hydrogen fuel. The scope of the analysis will include the generation, storage, distribution, and use of hydrogen, as well as the carrier molecules that are used in the supply chain. Additionally, the researchers will estimate the economic and environmental performance of various technology options across the entire supply chain. “Hydrogen has long been discussed as a fuel of the future,” says Shao-Horn. “As the energy transition progresses, opportunities for carbon-free fuels will only grow throughout the energy sector. Thorough analyses of hydrogen-based technologies are vital for providing information necessary to a greener transportation and energy system.” Broadening MITEI’s mobility research portfolio The mobility sector needs a multipronged approach to mitigate its increasing environmental impact. The four new projects will complement the MSC’s current portfolio of research projects, which includes an evaluation of operational designs for highly responsive urban last-mile delivery services; a techno-economic assessment of options surrounding long-haul road freight; an investigation of tradeoffs between data privacy and performance in shared mobility services; and an examination of mobility-as-a-service and its implications for private car ownership in U.S. cities.  “The pressures to adapt our transportation systems have never been greater with the Covid-19 crisis and increasing environmental concerns. While new technologies, business models, and governmental policies present opportunities to advance, research is needed to understand how they interact with one another and help to shape our mobility patterns,” says Field. 
“We are very excited to have such a strong breadth of projects to contribute multidisciplinary insights into the evolution of a cleaner, more sustainable mobility future.” The MIT Energy Initiative’s Mobility Systems Center has selected four new low-carbon transportation research projects to add to its growing portfolio. Photo: Benjamin Cruz https://news.mit.edu/2020/mit-researchers-wyoming-representatives-explore-energy-climate-solutions-0811 Members of Wyoming’s government and public university met with MIT researchers to discuss climate-friendly economic growth. Tue, 11 Aug 2020 00:00:00 -0400 https://news.mit.edu/2020/mit-researchers-wyoming-representatives-explore-energy-climate-solutions-0811 Environmental Solutions Initiative The following is a joint release from the MIT Environmental Solutions Initiative and the office of Wyoming Governor Mark Gordon. The State of Wyoming supplies 40 percent of the country’s coal used to power electric grids. The production of coal and other energy resources contributes over half of the state’s revenue, funding the government and many of the social services — including K-12 education — that residents rely on. With the consumption of coal in a long-term decline, decreased revenues from oil and natural gas, and growing concerns about carbon dioxide (CO2) emissions, the state is actively looking at how to adapt to a changing marketplace. Recently, representatives from the Wyoming Governor’s Office, University of Wyoming School of Energy Resources, and Wyoming Energy Authority met with faculty and researchers from MIT in a two-day virtual discussion to explore avenues for the state to strengthen its energy economy while lowering CO2 emissions. “This moment in time presents us with an opportunity to seize: creating a strong economic future for the people of Wyoming while protecting something we all care about — the climate,” says Wyoming Governor Mark Gordon.
“Wyoming has tremendous natural resources that create thousands of high-paying jobs. This conversation with MIT allows us to consider how we use our strengths and adapt to the changes that are happening nationally and globally.” The two dozen participants from Wyoming and MIT discussed pathways for long-term economic growth in Wyoming, given the global need to reduce carbon dioxide emissions. The wide-ranging and detailed conversation covered topics such as the future of carbon capture technology, hydrogen, and renewable energy; using coal for materials and advanced manufacturing; climate policy; and how communities can adapt and thrive in a changing energy marketplace. The discussion paired MIT’s global leadership in technology development, economic modeling, and low-carbon energy research with Wyoming’s unique competitive advantages: its geology that provides vast underground storage potential for CO2; its existing energy and pipeline infrastructure; and the tight bonds between business, government, and academia. “Wyoming’s small population and statewide support of energy technology development is an advantage,” says Holly Krutka, executive director of the University of Wyoming’s School of Energy Resources. “Government, academia, and industry work very closely together here to scale up technologies that will benefit the state and beyond. We know each other, so we can get things done and get them done quickly.” “There’s strong potential for MIT to work with the State of Wyoming on technologies that could not only benefit the state, but also the country and rest of the world as we combat the urgent crisis of climate change,” says Bob Armstrong, director of the MIT Energy Initiative, who attended the forum. 
“It’s a very exciting conversation.” The event was convened by the MIT Environmental Solutions Initiative as part of its Here & Real project, which works with regions in the United States to help further initiatives that are both climate-friendly and economically just. “At MIT, we are focusing our attention on technologies that combat the challenge of climate change — but also, with an eye toward not leaving people behind,” says Maria Zuber, MIT’s vice president for research and the E. A. Griswold Professor of Geophysics. “It is inspiring to see Wyoming’s state leadership seriously committed to finding solutions for adapting the energy industry, given what we know about the risks of climate change,” says Laur Hesse Fisher, director of the Here & Real project. “Their determination to build an economically and environmentally sound future for the people of Wyoming has been evident in our discussions, and I am excited to see this conversation continue and deepen.” The Wyoming State Capitol in Cheyenne https://news.mit.edu/2020/asegun-henry-thermal-challenges-global-warming-0810 “Our mission here is to save humanity from extinction due to climate change,” says MIT professor. Mon, 10 Aug 2020 11:00:00 -0400 https://news.mit.edu/2020/asegun-henry-thermal-challenges-global-warming-0810 Jennifer Chu | MIT News Office More than 90 percent of the world’s energy use today involves heat, whether for producing electricity, heating and cooling buildings and vehicles, manufacturing steel and cement, or other industrial activities. Collectively, these processes emit a staggering amount of greenhouse gases into the environment each year. Reinventing the way we transport, store, convert, and use thermal energy would go a long way toward avoiding a global rise in temperature of more than 2 degrees Celsius — a critical increase that is predicted to tip the planet into a cascade of catastrophic climate scenarios. 
But, as three thermal energy experts write in a letter published today in Nature Energy, “Even though this critical need exists, there is a significant disconnect between current research in thermal sciences and what is needed for deep decarbonization.” In an effort to motivate the scientific community to work on climate-critical thermal issues, the authors have laid out five thermal energy “grand challenges,” or broad areas where significant innovations need to be made in order to stem the rise of global warming. MIT News spoke with Asegun Henry, the lead author and the Robert N. Noyce Career Development Associate Professor in the Department of Mechanical Engineering, about this grand vision. Q: Before we get into the specifics of the five challenges you lay out, can you say a little about how this paper came about, and why you see it as a call to action? A: This paper was born out of this really interesting meeting, where my two co-authors and I were asked to meet with Bill Gates and teach him about thermal energy. We did a several-hour session with him in October of 2018, and when we were leaving, at the airport, we all agreed that the message we shared with Bill needs to be spread much more broadly. This particular paper is about thermal science and engineering specifically, but it’s an interdisciplinary field with lots of intersections. The way we frame it, this paper is about five grand challenges that if solved, would literally alter the course of humanity. It’s a big claim — but we back it up. And we really need this to be declared as a mission, similar to the declaration that we were going to put a man on the moon, where you saw this concerted effort among the scientific community to achieve that mission. Our mission here is to save humanity from extinction due to climate change. The mission is clear. And this is a subset of five problems that will get us the majority of the way there, if we can solve them. 
Time is running out, and we need all hands on deck.  Q: What are the five thermal energy challenges you outline in your paper? A: The first challenge is developing thermal storage systems for the power grid, electric vehicles, and buildings. Take the power grid: There is an international race going on to develop a grid storage system to store excess electricity from renewables so you can use it at a later time. This would allow renewable energy to penetrate the grid. If we can get to a place of fully decarbonizing the grid, that alone reduces carbon dioxide emissions from electricity production by 25 percent. And the beauty of that is, once you decarbonize the grid you open up decarbonizing the transportation sector with electric vehicles. Then you’re talking about a 40 percent reduction of global carbon emissions. The second challenge is decarbonizing industrial processes, which contribute 15 percent of global carbon dioxide emissions. The big actors here are cement, steel, aluminum, and hydrogen. Some of these industrial processes intrinsically involve the emission of carbon dioxide, because the reaction itself has to release carbon dioxide for it to work, in the current form. The question is, is there another way? Either we think of another way to make cement, or come up with something different. It’s an extremely difficult challenge, but there are good ideas out there, and we need way more people thinking about this. The third challenge is solving the cooling problem. Air conditioners and refrigerators have chemicals in them that are very harmful to the environment, 2,000 times more harmful than carbon dioxide on a molar basis. If the seal breaks and that refrigerant gets out, that little bit of leakage will cause global warming to shift significantly. 
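The refrigerant comparison above translates into a simple CO2-equivalent estimate. A rough sketch, treating the roughly 2,000-times figure from the interview as a warming-potential multiplier and using a purely hypothetical leak size:

```python
# CO2-equivalent of a refrigerant leak. The ~2,000x potency factor relative
# to CO2 is the figure cited in the interview; the leak size below is a
# hypothetical illustration, not a measured value.

POTENCY_VS_CO2 = 2000   # warming potential relative to carbon dioxide
leak_kg = 0.5           # hypothetical leak from a single AC unit, in kg

co2_equivalent_kg = leak_kg * POTENCY_VS_CO2
print(f"A {leak_kg} kg leak warms roughly like {co2_equivalent_kg:,.0f} kg of CO2")
```

The arithmetic is trivial, but it shows why even small leaks matter: a half-kilogram escape from one unit carries the warming impact of about a metric ton of CO2, and the projected growth of air conditioning multiplies that by hundreds of millions of units.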
When you account for India and other developing nations that are now getting access to electricity infrastructures to run AC systems, the leakage of these refrigerants will become responsible for 15 to 20 percent of global warming by 2050. The fourth challenge is long-distance transmission of heat. We transmit electricity because it can be transmitted with low loss, and it’s cheap. The question is, can we transmit heat like we transmit electricity? There is an overabundance of waste heat available at power plants, and the problem is, where the power plants are and where people live are two different places, and we don’t have a connector to deliver heat from these power plants, which is literally wasted. You could satisfy the entire residential heating load of the world with a fraction of that waste heat. What we don’t have is the wire to connect them. And the question is, can someone create one? The last challenge is variable conductance building envelopes. There are some demonstrations that show it is physically possible to create a thermal material, or a device that will change its conductance, so that when it’s hot, it can block heat from getting through a wall, but when you want it to, you could change its conductance to let the heat in or out. We’re far away from having a functioning system, but the foundation is there. Q: You say that these five challenges represent a new mission for the scientific community, similar to the mission to land a human on the moon, which came with a clear deadline. What sort of timetable are we talking about here, in terms of needing to solve these five thermal problems to mitigate climate change? A: In short, we have about 20 to 30 years of business as usual, before we end up on an inescapable path to an average global temperature rise of over 2 degrees Celsius. This may seem like a long time, but it’s not when you consider that it took natural gas 70 years to become 20 percent of our energy mix. 
So imagine that now we have to not just switch fuels, but do a complete overhaul of the entire energy infrastructure in less than one third the time. We need dramatic change, not yesterday, but years ago. So every day I fear we will do too little too late, and we as a species may not survive Mother Earth’s clapback. MIT’s Asegun Henry on tackling five “grand thermal challenges” to stem the global warming tide: “Our mission here is to save humanity from extinction due to climate change.” Portrait photo courtesy of MIT MechE. https://news.mit.edu/2020/mit-energy-conference-goes-virtual-0807 Annual student-run energy conference pivots to successful online event with short notice in response to the coronavirus. Fri, 07 Aug 2020 16:55:00 -0400 https://news.mit.edu/2020/mit-energy-conference-goes-virtual-0807 Turner Jackson | MIT Energy Initiative For the past 14 years, the MIT Energy Conference — a two-day event organized by energy students — has united students, faculty, researchers, and industry representatives from around the world to discuss cutting-edge developments in energy. Under the supervision of Thomas “Trey” Wilder, an MBA candidate at the MIT Sloan School of Management, and a large team of student event organizers, the final pieces for the 2020 conference were falling into place by early March — and then the Covid-19 pandemic hit the United States. As the Institute canceled in-person events to reduce the spread of the virus, much of the planning that had gone into hosting the conference in its initial format was upended. The Energy Conference team had less than a month to transition the entire event — scheduled for early April — online. During the conference’s opening remarks, Wilder recounted the month leading up to the event. “Coincidently, the same day that we received the official notice that all campus events were canceled, we had a general body Energy Club meeting,” says Wilder. 
“All the leaders looked at each other in disbelief — seeing a lot of the work that we had put in for almost a year now, seemingly go down the drain. We decided that night to retain whatever value we could find from this event.” The team immediately started contacting vendors and canceling orders, issuing refunds to guests, and informing panelists and speakers about the conference’s new format. “One of the biggest issues was getting buy-in from the speakers. Everyone was new to this virtual world back at the end of March. Our speakers didn’t know what this was going to look like, and many backed out,” says Wilder. The team worked hard to find new speakers, with one even being brought on 12 hours before the start of the event. Another challenge posed by taking the conference virtual was learning the ins and outs of running a Zoom webinar in a remarkably short time frame. “With the webinar, there are so many functions that the host controls that really affect the outcome of the event. Similarly, the speakers didn’t quite know how to operate it, either.” In spite of the multitude of challenges posed by switching to an online format on a tight deadline, this year’s coordinating team managed to pull off an incredibly informative and timely conference that reached a much larger audience than those in years past. This was the first year the conference was offered for free online, which allowed for over 3,500 people globally to tune in — a marked increase from the 500 attendees planned for the original, in-person event. Over the course of two days, panelists and speakers discussed a wide range of energy topics, including electric vehicles, energy policy, and the future of utilities. The three keynote speakers were Daniel M. 
Kammen, a professor of energy and the chair of the Goldman School of Public Policy at the University of California at Berkeley; Rachel Kyte, the dean of the Tufts Fletcher School of Law and Diplomacy; and John Deutch, the Institute Professor of Chemistry at MIT. Many speakers modified their presentations to address Covid-19 and how it relates to energy and the environment. For example, Kammen adjusted his address to cover what those who are working to address the climate emergency can learn from the Covid-19 pandemic. He emphasized the importance of individual actions for both the climate crisis and Covid-19; how global supply chains are vulnerable in a crowded, denuded planet; and how there is no substitute for thorough research and education when tackling these issues. Wilder credits the team of dedicated, hardworking energy students as the most important contributors to the conference’s success. A couple of notable examples include Joe Connelly, an MBA candidate, and Leah Ellis, a materials science and engineering postdoc, who together managed the Zoom operations during the conference. They ensured that the panels and presentations flowed seamlessly. Anna Sheppard, another MBA candidate, live-tweeted throughout the conference, managed the YouTube stream, and responded to emails during the event, with assistance from Michael Cheng, a graduate student in the Technology and Policy Program. Wilder says MBA candidate Pervez Agwan “was the Swiss Army knife of the group”; he worked on everything from marketing to tickets to operations — and, because he had a final exam on the first day of the conference, Agwan even pulled an all-nighter to ensure that the event and team were in good shape. “What I loved most about this team was that they were extremely humble and happy to do the dirty work,” Wilder says. “Everyone was content to put their head down and grind to make this event great. 
They did not desire praise or accolades, and are therefore worthy of both.” The 2020 MIT Energy Conference organizers. Thomas “Trey” Wilder (bottom row, fourth from left), an MBA candidate at the MIT Sloan School of Management, spearheaded the organization of this year’s conference, which had less than a month to transition to a virtual event. Image: Trey Wilder https://news.mit.edu/2020/shrinking-deep-learning-carbon-footprint-0807 Through innovation in software and hardware, researchers move to reduce the financial and environmental costs of modern artificial intelligence. Fri, 07 Aug 2020 17:00:00 -0400 https://news.mit.edu/2020/shrinking-deep-learning-carbon-footprint-0807 Kim Martineau | MIT Quest for Intelligence In June, OpenAI unveiled the largest language model in the world, a text-generating tool called GPT-3 that can write creative fiction, translate legalese into plain English, and answer obscure trivia questions. It’s the latest feat of intelligence achieved by deep learning, a machine learning method patterned after the way neurons in the brain process and store information. But it came at a hefty price: at least $4.6 million and 355 years in computing time, assuming the model was trained on a standard neural network chip, or GPU. The model’s colossal size — 1,000 times larger than a typical language model — is the main factor in its high cost. “You have to throw a lot more computation at something to get a little improvement in performance,” says Neil Thompson, an MIT researcher who has tracked deep learning’s unquenchable thirst for computing. “It’s unsustainable. We have to find more efficient ways to scale deep learning or develop other technologies.” Some of the excitement over AI’s recent progress has shifted to alarm. 
In a study last year, researchers at the University of Massachusetts at Amherst estimated that training a large deep-learning model produces 626,000 pounds of planet-warming carbon dioxide, equal to the lifetime emissions of five cars. As models grow bigger, their demand for computing is outpacing improvements in hardware efficiency. Chips specialized for neural-network processing, like GPUs (graphics processing units) and TPUs (tensor processing units), have offset the demand for more computing, but not by enough.  “We need to rethink the entire stack — from software to hardware,” says Aude Oliva, MIT director of the MIT-IBM Watson AI Lab and co-director of the MIT Quest for Intelligence. “Deep learning has made the recent AI revolution possible, but its growing cost in energy and carbon emissions is untenable.” Computational limits have dogged neural networks from their earliest incarnation — the perceptron — in the 1950s. As computing power exploded, and the internet unleashed a tsunami of data, they evolved into powerful engines for pattern recognition and prediction. But each new milestone brought an explosion in cost, as data-hungry models demanded increased computation. GPT-3, for example, trained on half a trillion words and ballooned to 175 billion parameters — the mathematical operations, or weights, that tie the model together — making it 100 times bigger than its predecessor, itself just a year old. In work posted on the pre-print server arXiv, Thompson and his colleagues show that the ability of deep learning models to surpass key benchmarks tracks their nearly exponential rise in computing power use. (Like others seeking to track AI’s carbon footprint, the team had to guess at many models’ energy consumption due to a lack of reporting requirements). At this rate, the researchers argue, deep nets will survive only if they, and the hardware they run on, become radically more efficient. 
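The scaling Thompson describes can be made concrete with a back-of-envelope estimate of GPT-3's training compute. The sketch below uses the common "6 × parameters × tokens" approximation for transformer training FLOPs; that approximation and the assumed GPU throughput are outside assumptions, not figures from the article:

```python
# Back-of-envelope training compute: FLOPs ~= 6 * parameters * training tokens.
# The 6*N*D rule of thumb and the 30 TFLOP/s throughput are assumptions.

params = 175e9    # 175 billion parameters (from the article)
tokens = 500e9    # "half a trillion words" (from the article)

flops = 6 * params * tokens
print(f"total training compute: ~{flops:.2e} FLOPs")

# At a hypothetical sustained throughput of 30 TFLOP/s on one GPU:
gpu_years = flops / 30e12 / (3600 * 24 * 365)
print(f"~{gpu_years:.0f} single-GPU years")
```

This lands in the hundreds of GPU-years; the article's 355-year figure implies somewhat different throughput assumptions, but either way the order of magnitude makes the point: each new performance milestone is bought with an enormous jump in computation.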
Toward leaner, greener algorithms The human perceptual system is extremely efficient at using data. Researchers have borrowed this idea for recognizing actions in video and in real life to make models more compact. In a paper at the European Conference on Computer Vision (ECCV) in August, researchers at the MIT-IBM Watson AI Lab describe a method for unpacking a scene from a few glances, as humans do, by cherry-picking the most relevant data. Take a video clip of someone making a sandwich. Under the method outlined in the paper, a policy network strategically picks frames of the knife slicing through roast beef, and meat being stacked on a slice of bread, to represent at high resolution. Less-relevant frames are skipped over or represented at lower resolution. A second model then uses the abbreviated CliffsNotes version of the movie to label it “making a sandwich.” The approach leads to faster video classification at half the computational cost as the next-best model, the researchers say. “Humans don’t pay attention to every last detail — why should our models?” says the study’s senior author, Rogerio Feris, research manager at the MIT-IBM Watson AI Lab. “We can use machine learning to adaptively select the right data, at the right level of detail, to make deep learning models more efficient.” In a complementary approach, researchers are using deep learning itself to design more economical models through an automated process known as neural architecture search. Song Han, an assistant professor at MIT, has used automated search to design models with fewer weights, for language understanding and scene recognition, where picking out looming obstacles quickly is acutely important in driving applications.  In a paper at ECCV, Han and his colleagues propose a model architecture for three-dimensional scene recognition that can spot safety-critical details like road signs, pedestrians, and cyclists with relatively less computation. 
They used an evolutionary-search algorithm to evaluate 1,000 architectures before settling on a model they say is three times faster and uses eight times less computation than the next-best method.  In another recent paper, they use evolutionary search within an augmented design space to find the most efficient architectures for machine translation on a specific device, be it a GPU, smartphone, or tiny Raspberry Pi. Separating the search and training process leads to huge reductions in computation, they say. In a third approach, researchers are probing the essence of deep nets to see if it might be possible to train a small part of even hyper-efficient networks like those above. Under their lottery ticket hypothesis, PhD student Jonathan Frankle and MIT Professor Michael Carbin proposed that within each model lies a tiny subnetwork that could have been trained in isolation with as few as one-tenth as many weights — what they call a “winning ticket.”  They showed that an algorithm could retroactively find these winning subnetworks in small image-classification models. Now, in a paper at the International Conference on Machine Learning (ICML), they show that the algorithm finds winning tickets in large models, too; the models just need to be rewound to an early, critical point in training when the order of the training data no longer influences the training outcome.  In less than two years, the lottery ticket idea has been cited more than 400 times, including by Facebook researcher Ari Morcos, who has shown that winning tickets can be transferred from one vision task to another, and that winning tickets exist in language and reinforcement learning models, too.  “The standard explanation for why we need such large networks is that overparameterization aids the learning process,” says Morcos. “The lottery ticket hypothesis disproves that — it’s all about finding an appropriate starting point.
https://news.mit.edu/2020/shrinking-deep-learning-carbon-footprint-0807 Through innovation in software and hardware, researchers move to reduce the financial and environmental costs of modern artificial intelligence. Fri, 07 Aug 2020 17:00:00 -0400 https://news.mit.edu/2020/shrinking-deep-learning-carbon-footprint-0807 Kim Martineau | MIT Quest for Intelligence In June, OpenAI unveiled the largest language model in the world, a text-generating tool called GPT-3 that can write creative fiction, translate legalese into plain English, and answer obscure trivia questions. It’s the latest feat of intelligence achieved by deep learning, a machine learning method patterned after the way neurons in the brain process and store information. But it came at a hefty price: at least $4.6 million and 355 years in computing time, assuming the model was trained on a standard neural network chip, or GPU. 
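Those headline figures can be roughly reproduced with back-of-the-envelope arithmetic. The inputs below are outside estimates, not numbers from the article: a commonly cited training budget of about 3.14×10²³ floating-point operations for GPT-3, a sustained V100 throughput near 28 teraflops, and a typical cloud rate of about $1.50 per GPU-hour.

```python
# Rough reproduction of the "355 GPU-years / $4.6M" estimate for GPT-3.
# All three inputs are outside estimates, not figures from the article.
TRAIN_FLOPS = 3.14e23        # est. total floating-point ops to train GPT-3
V100_FLOPS_PER_S = 28e12     # est. sustained throughput of one V100 GPU
USD_PER_GPU_HOUR = 1.5       # est. cloud price per GPU-hour

SECONDS_PER_YEAR = 365 * 24 * 3600

gpu_seconds = TRAIN_FLOPS / V100_FLOPS_PER_S
gpu_years = gpu_seconds / SECONDS_PER_YEAR
cost_usd = (gpu_seconds / 3600) * USD_PER_GPU_HOUR

print(f"{gpu_years:.0f} GPU-years, ${cost_usd / 1e6:.1f}M")  # → 356 GPU-years, $4.7M
```

The point of the exercise: at this scale, even the choice of hardware generation or cloud pricing shifts the answer by millions of dollars.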
The model’s colossal size — 1,000 times larger than a typical language model — is the main factor in its high cost. “You have to throw a lot more computation at something to get a little improvement in performance,” says Neil Thompson, an MIT researcher who has tracked deep learning’s unquenchable thirst for computing. “It’s unsustainable. We have to find more efficient ways to scale deep learning or develop other technologies.” Some of the excitement over AI’s recent progress has shifted to alarm. In a study last year, researchers at the University of Massachusetts at Amherst estimated that training a large deep-learning model produces 626,000 pounds of planet-warming carbon dioxide, equal to the lifetime emissions of five cars. As models grow bigger, their demand for computing is outpacing improvements in hardware efficiency. Chips specialized for neural-network processing, like GPUs (graphics processing units) and TPUs (tensor processing units), have offset the demand for more computing, but not by enough.  “We need to rethink the entire stack — from software to hardware,” says Aude Oliva, MIT director of the MIT-IBM Watson AI Lab and co-director of the MIT Quest for Intelligence. “Deep learning has made the recent AI revolution possible, but its growing cost in energy and carbon emissions is untenable.” Computational limits have dogged neural networks from their earliest incarnation — the perceptron — in the 1950s. As computing power exploded, and the internet unleashed a tsunami of data, they evolved into powerful engines for pattern recognition and prediction. But each new milestone brought an explosion in cost, as data-hungry models demanded increased computation. GPT-3, for example, trained on half a trillion words and ballooned to 175 billion parameters — the mathematical operations, or weights, that tie the model together — making it 100 times bigger than its predecessor, itself just a year old. 
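Thompson's "a lot more computation for a little improvement" has a simple arithmetic consequence. If benchmark error falls as a power law in training compute, error ∝ C^(−α), then each halving of error costs a multiplicative blow-up in compute. The exponent below is illustrative only, not a number from his study.

```python
# If error ~ compute^(-alpha), halving the error requires multiplying
# compute by 2^(1/alpha). The exponent here is illustrative only.
alpha = 0.05
compute_multiplier = 2 ** (1 / alpha)   # compute needed to halve the error
print(f"Halving error needs {compute_multiplier:.0f}x more compute")  # → 1048576x
```

With a small exponent, a modest accuracy gain implies a million-fold compute increase, which is the unsustainable scaling Thompson describes.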
In work posted on the pre-print server arXiv, Thompson and his colleagues show that the ability of deep learning models to surpass key benchmarks tracks their nearly exponential rise in computing power use. (Like others seeking to track AI’s carbon footprint, the team had to guess at many models’ energy consumption due to a lack of reporting requirements.) At this rate, the researchers argue, deep nets will survive only if they, and the hardware they run on, become radically more efficient. Toward leaner, greener algorithms The human perceptual system is extremely efficient at using data. Researchers have borrowed this idea for recognizing actions in video and in real life to make models more compact. In a paper at the European Conference on Computer Vision (ECCV) in August, researchers at the MIT-IBM Watson AI Lab describe a method for unpacking a scene from a few glances, as humans do, by cherry-picking the most relevant data. Take a video clip of someone making a sandwich. Under the method outlined in the paper, a policy network strategically picks frames of the knife slicing through roast beef, and meat being stacked on a slice of bread, to represent at high resolution. Less-relevant frames are skipped over or represented at lower resolution. A second model then uses the abbreviated CliffsNotes version of the movie to label it “making a sandwich.” The approach leads to faster video classification at half the computational cost of the next-best model, the researchers say. “Humans don’t pay attention to every last detail — why should our models?” says the study’s senior author, Rogerio Feris, research manager at the MIT-IBM Watson AI Lab. “We can use machine learning to adaptively select the right data, at the right level of detail, to make deep learning models more efficient.” In a complementary approach, researchers are using deep learning itself to design more economical models through an automated process known as neural architecture search. 
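The frame-selection idea described above can be sketched in a few lines: a lightweight policy scores each frame, only the top-scoring frames are kept, and the classifier sees just that abbreviated summary. The relevance score and classifier below are stand-ins, not the paper's trained models.

```python
import numpy as np

# Sketch of policy-based frame selection: score frames, keep the top-k in
# temporal order, classify the short summary. Both the scoring function and
# the classifier here are stand-ins for the paper's learned networks.
rng = np.random.default_rng(0)

def classify_video(frames, policy_score, classify, k=8):
    scores = np.array([policy_score(f) for f in frames])
    keep = np.argsort(scores)[-k:]               # indices of most relevant frames
    summary = [frames[i] for i in sorted(keep)]  # preserve temporal order
    return classify(summary)

frames = [rng.standard_normal((64, 64)) for _ in range(128)]
label = classify_video(
    frames,
    policy_score=lambda f: float(np.abs(f).mean()),  # stand-in relevance score
    classify=lambda s: "making a sandwich" if len(s) == 8 else "unknown",
)
print(label)  # the classifier only ever sees 8 of the 128 frames
```

The compute saving comes from the second, expensive model running on 8 frames instead of 128; the cheap policy pass is the only cost paid on every frame.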
Song Han, an assistant professor at MIT, has used automated search to design models with fewer weights, for language understanding and for scene recognition, where quickly picking out looming obstacles is acutely important in driving applications.  In a paper at ECCV, Han and his colleagues propose a model architecture for three-dimensional scene recognition that can spot safety-critical details like road signs, pedestrians, and cyclists with relatively less computation. They used an evolutionary-search algorithm to evaluate 1,000 architectures before settling on a model they say is three times faster and uses eight times less computation than the next-best method.  In another recent paper, they use evolutionary search within an augmented design space to find the most efficient architectures for machine translation on a specific device, be it a GPU, smartphone, or tiny Raspberry Pi. Separating the search and training process leads to huge reductions in computation, they say. In a third approach, researchers are probing the essence of deep nets to see if it might be possible to train a small part of even hyper-efficient networks like those above. In their lottery ticket hypothesis, PhD student Jonathan Frankle and MIT Professor Michael Carbin proposed that within each model lies a tiny subnetwork that could have been trained in isolation with as few as one-tenth as many weights — what they call a “winning ticket.”  They showed that an algorithm could retroactively find these winning subnetworks in small image-classification models. Now, in a paper at the International Conference on Machine Learning (ICML), they show that the algorithm finds winning tickets in large models, too; the models just need to be rewound to an early, critical point in training when the order of the training data no longer influences the training outcome.  
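The prune-and-rewind recipe behind these results can be sketched compactly: train, keep the largest-magnitude weights, then reset the survivors to their early-training values. The one-line "training" update below is a stand-in for a real optimizer, not the authors' procedure.

```python
import numpy as np

# Minimal sketch of lottery-ticket pruning with rewinding. "Training" here is
# a stand-in perturbation, not a real optimizer; the magnitude-pruning and
# rewind steps are the parts that mirror the described method.
rng = np.random.default_rng(1)

w_init = rng.standard_normal(1000)                    # weights near initialization
w_trained = w_init + 0.1 * rng.standard_normal(1000)  # stand-in for training

# Keep only the top 10% of weights by trained magnitude (the "winning ticket").
threshold = np.quantile(np.abs(w_trained), 0.9)
mask = np.abs(w_trained) >= threshold

# Rewind: surviving weights restart from their early values; the rest stay 0.
w_ticket = np.where(mask, w_init, 0.0)

print(mask.sum(), "of", mask.size, "weights survive")
```

Retraining only `w_ticket`'s surviving 10 percent of weights is where the claimed savings would come from; the catch, as the article notes, is that finding the mask required training the full network first.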
In less than two years, the lottery ticket idea has been cited more than 400 times, including by Facebook researcher Ari Morcos, who has shown that winning tickets can be transferred from one vision task to another, and that winning tickets exist in language and reinforcement learning models, too.  “The standard explanation for why we need such large networks is that overparameterization aids the learning process,” says Morcos. “The lottery ticket hypothesis disproves that — it’s all about finding an appropriate starting point. The big downside, of course, is that, currently, finding these ‘winning’ starting points requires training the full overparameterized network anyway.” Frankle says he’s hopeful that an efficient way to find winning tickets will be found. In the meantime, recycling those winning tickets, as Morcos suggests, could lead to big savings. Hardware designed for efficient deep net algorithms As deep nets push classical computers to the limit, researchers are pursuing alternatives, from optical computers that transmit and store data with photons instead of electrons, to quantum computers, which have the potential to increase computing power exponentially by representing data in multiple states at once. Until a new paradigm emerges, researchers have focused on adapting the modern chip to the demands of deep learning. The trend began with the discovery that video-game graphics chips, or GPUs, could turbocharge deep-net training with their ability to perform massively parallelized matrix computations. GPUs are now one of the workhorses of modern AI, and have spawned new ideas for boosting deep net efficiency through specialized hardware.  Much of this work hinges on finding ways to store and reuse data locally, across the chip’s processing cores, rather than waste time and energy shuttling data to and from a designated memory site. 
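The payoff of local reuse can be seen with a textbook traffic estimate for matrix multiplication, the core deep-net operation. The counting model below is the standard tiling analysis, not a description of any particular chip.

```python
# Back-of-the-envelope model of why on-chip data reuse matters: off-chip
# words moved for an N x N matrix multiply, untiled vs tiled into b x b
# blocks that fit in local memory. Standard textbook estimates, not
# measurements of real hardware.
def offchip_words(N, b=None):
    if b is None:                  # no reuse: re-fetch a row of A and a
        return 2 * N**3 + N**2     # column of B per output, plus C writes
    # tiled: each of the (N/b)^2 output blocks reads N/b block-pairs of A, B,
    # giving 2 * (N/b)^3 * b^2 = 2 * (N/b) * N^2 input words, plus C writes
    return 2 * (N // b) * N**2 + N**2

N = 1024
naive, tiled = offchip_words(N), offchip_words(N, b=32)
print(f"{naive / tiled:.0f}x less off-chip traffic with 32x32 tiles")  # → 32x
```

Because each off-chip word moved costs orders of magnitude more energy than an arithmetic operation, a roughly 30-fold traffic reduction translates directly into the energy savings that chips like Eyeriss 2 target.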
Processing data locally not only speeds up model training but improves inference, allowing deep learning applications to run more smoothly on smartphones and other mobile devices. Vivienne Sze, a professor at MIT, has literally written the book on efficient deep nets. In collaboration with book co-author Joel Emer, an MIT professor and researcher at NVIDIA, Sze has designed a chip that’s flexible enough to process the widely varying shapes of both large and small deep learning models. Called Eyeriss 2, the chip uses 10 times less energy than a mobile GPU. Its versatility lies in its on-chip network, called a hierarchical mesh, that adaptively reuses data and adjusts to the bandwidth requirements of different deep learning models. After reading from memory, it reuses the data across as many processing elements as possible to minimize data transportation costs and maintain high throughput.  “The goal is to translate small and sparse networks into energy savings and fast inference,” says Sze. “But the hardware should be flexible enough to also efficiently support large and dense deep neural networks.” Other hardware innovators are focused on reproducing the brain’s energy efficiency. Former Go world champion Lee Sedol may have lost his title to a computer, but his performance was fueled by a mere 20 watts of power. AlphaGo, by contrast, drew an estimated megawatt of power, or 50,000 times more. Inspired by the brain’s frugality, researchers are experimenting with replacing the binary, on-off switch of classical transistors with analog devices that mimic the way that synapses in the brain grow stronger and weaker during learning and forgetting. An electrochemical device, developed at MIT and recently published in Nature Communications, is modeled after the way resistance between two neurons grows or subsides as calcium, magnesium or potassium ions flow across the synaptic membrane dividing them. 
The device uses the flow of protons — the smallest and fastest ion in solid state — into and out of a crystalline lattice of tungsten trioxide to tune its resistance along a continuum, in an analog fashion. “Even though the device is not yet optimized, it gets to the order of energy consumption per unit area per unit change in conductance that’s close to that in the brain,” says the study’s senior author, Bilge Yildiz, a professor at MIT. Energy-efficient algorithms and hardware can shrink AI’s environmental impact. But there are other reasons to innovate, says Sze, listing them off: Efficiency will allow computing to move from data centers to edge devices like smartphones, making AI accessible to more people around the world; shifting computation from the cloud to personal devices reduces the flow, and potential leakage, of sensitive data; and processing data on the edge eliminates transmission costs, leading to faster inference with a shorter reaction time, which is key for interactive driving and augmented/virtual reality applications. “For all of these reasons, we need to embrace efficient AI,” she says. Deep learning has driven much of the recent progress in artificial intelligence, but as demand for computation and energy to train ever-larger models increases, many are raising concerns about the financial and environmental costs. To address the problem, researchers at MIT and the MIT-IBM Watson AI Lab are experimenting with ways to make software and hardware more energy efficient, and in some cases, more like the human brain. Image: Niki Hinkle/MIT Spectrum https://news.mit.edu/2020/jessica-varner-chemical-architecture-0806 PhD student Jessica Varner traces the way synthetic building materials have transformed our environment. 
Thu, 06 Aug 2020 00:00:00 -0400 https://news.mit.edu/2020/jessica-varner-chemical-architecture-0806 Sofia Tong | MIT News correspondent Just months before starting her PhD, Jessica Varner and her partner bought a small house built in 1798. Located on tidal wetlands along Connecticut’s Patchogue River, the former residence of an ironworker had endured over two centuries of history and neglect. As Varner began to slowly restore the house — discovering its nail-less construction and thin horsehair plaster walls, learning plumbing skills, and burning oyster shells to make lime wash — she discovered a deep connection between her work inside and outside academia. For her dissertation in MIT’s History, Theory and Criticism of Architecture and Art program, Varner had been investigating how the chemical industry wooed the building and construction industry with the promise of “invisible,” “new,” and “durable” synthetic materials at the turn of the 20th century. In the process, these companies helped transform modern architecture while also disregarding or actively obscuring the health and environmental risks posed by these materials. While researching the history of these dyes, additives, and foams, Varner was also considering the presence of similar synthetics in her own new home. Coming into closer contact with these types of materials as a builder herself gave Varner a new perspective on the widespread implications of her research. “I think with my hands … and both projects began to inform each other,” she says. “Making and writing at the same time, I’m amazed how much this house is a part of the work.” The reverse proved true as well. Next year Varner will launch the Black House Project, an interdisciplinary artist-in-residence space on the Connecticut property. Artists who participate will be asked to engage with a seasonal theme relating to the intersection of history, environment, and community. 
The inaugural theme will be “building from the ashes,” with a focus on burning and invasive species. A personal chemical history The chemical industry has a longer history for Varner than she initially understood: She comes from a long line of farming families in Nebraska, a state with a complex relationship with the agricultural-chemical industry. “That was just our way of life and we never questioned it,” she says of the way farm life became entwined with the chemical necessities and economic hardships of American industrial agriculture. She recalls spraying herbicide, without a mask, on thistles on the farm after her family received government letters threatening daily fines if they did not remove the plants. She also remembers how their farm, and much of the region, depended on seeds and other products from DeKalb. “Coming from a place that depends so much on the economy of an industry, there are nuances and deeper layers to the story” of modern agriculture, she says, noting that subsistence farming and industrial farming often go hand in hand. At MIT, Varner has continued to probe beneath the surface of how chemical products are promoted and adopted. For her thesis, with the help of a Fulbright scholarship, she began digging through the chemical companies’ corporate archives. Her research has revealed how these companies generated research strategies, advertising, and publicity to transform the materials of the “modern interior and exterior.” Underneath a veneer of technological innovation and promises of novelty, Varner argues, these companies carefully masked their supply chains, adjusted building codes, and created marketing teams known as “truth squads,” which monitored and reshaped conversations around these products and growing concerns about their environmental harms. 
The result, she writes in her dissertation, was “one of the most successful, and toxic, material transformations in modern history.” Bridging activism and academia Varner has a long-running interest in environmental activism, from the conservation and restoration efforts in her home state, to vegetarianism, to studying glaciers in Alaska, to her current conception of the Black House Project. “At every point I feel like my life has had environmental activism in it,” she says. Environmental concerns have always been an integral part of her studies as well. After her undergraduate education at the University of Nebraska, Varner went on to study architecture and environmental design at Yale University, where she studied the debates between climate scientists and architects in the 1970s. Then she headed to Los Angeles as a practicing architect and professor. Working as a designer with Michael Maltzan Architecture while teaching seminars and studios at the University of Southern California and Woodbury University, she realized her students had bigger historical questions, such as the origin of sustainability catchphrases like “passive cooling,” “circular economy,” and “net-zero.” “There were deeper questions behind what environmentalism was, how you can enact it, how you know what the rules of sustainability are, and I realized I didn’t have answers,” Varner says. “It was taken for granted.” Those questions brought her to MIT, where she says the cross-cutting nature of her work benefited from the Institute’s expertise in chemistry, engineering, and the history of technology. “The questions I was asking were interdisciplinary questions, so it was helpful to have those people around to bounce ideas off of,” she says. This fall, Varner will return to MIT as a lecturer while also working with the Environmental Data and Governance Initiative. 
At EDGI, she is the assistant curator for the EPA Interviewing Working Group, an ongoing oral history project chronicling the inner workings of the EPA and the way the organization has been affected by the current administration. “I’m excited to get back in the classroom,” she says, as well as finding a new way to take her academic interests into a more activist and policy-oriented sphere at EDGI. “I definitely think that’s what MIT brought to me in my education, other ways to carry your knowledge and your expertise to engage at different levels. It’s what I want to keep, going forward as a graduate.” MIT graduate student Jessica Varner has explored how the chemical industry wooed the building and construction industry with new synthetic materials at the turn of the 20th century. The result, she writes in her dissertation, was “one of the most successful, and toxic, material transformations in modern history.” Photo: Sarah Cal https://news.mit.edu/2020/sunlight-triggered-snowball-earths-ice-ages-0729 Findings also suggest exoplanets lying within habitable zones may be susceptible to ice ages. Wed, 29 Jul 2020 09:51:35 -0400 https://news.mit.edu/2020/sunlight-triggered-snowball-earths-ice-ages-0729 Jennifer Chu | MIT News Office At least twice in Earth’s history, nearly the entire planet was encased in a sheet of snow and ice. These dramatic “Snowball Earth” events occurred in quick succession, somewhere around 700 million years ago, and evidence suggests that the consecutive global ice ages set the stage for the subsequent explosion of complex, multicellular life on Earth. Scientists have considered multiple scenarios for what may have tipped the planet into each ice age. 
While no single driving process has been identified, it’s assumed that whatever triggered the temporary freeze-overs must have done so in a way that pushed the planet past a critical threshold, such as reducing incoming sunlight or atmospheric carbon dioxide to levels low enough to set off a global expansion of ice. But MIT scientists now say that Snowball Earths were likely the product of “rate-induced glaciations.” That is, they found the Earth can be tipped into a global ice age when the level of solar radiation it receives changes quickly over a geologically short period of time. The amount of solar radiation doesn’t have to drop to a particular threshold point; as long as the decrease in incoming sunlight occurs faster than a critical rate, a temporary glaciation, or Snowball Earth, will follow. These findings, published today in the Proceedings of the Royal Society A, suggest that whatever triggered the Earth’s ice ages most likely involved processes that quickly reduced the amount of solar radiation coming to the surface, such as widespread volcanic eruptions or biologically induced cloud formation that could have significantly blocked out the sun’s rays. The findings may also apply to the search for life on other planets. Researchers have been keen on finding exoplanets within the habitable zone — a distance from their star that would be within a temperature range that could support life. The new study suggests that these planets, like Earth, could also ice over temporarily if their climate changes abruptly. Even if they lie within a habitable zone, Earth-like planets may be more susceptible to global ice ages than previously thought. “You could have a planet that stays well within the classical habitable zone, but if incoming sunlight changes too fast, you could get a Snowball Earth,” says lead author Constantin Arnscheidt, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). 
“What this highlights is the notion that there’s so much more nuance in the concept of habitability.” Arnscheidt has co-authored the paper with Daniel Rothman, EAPS professor of geophysics, and co-founder and co-director of the Lorenz Center. A runaway snowball Regardless of the particular processes that triggered past glaciations, scientists generally agree that Snowball Earths arose from a “runaway” effect involving an ice-albedo feedback: As incoming sunlight is reduced, ice expands from the poles to the equator. As more ice covers the globe, the planet becomes more reflective, or higher in albedo, which further cools the surface for more ice to expand. Eventually, if the ice reaches a certain extent, this becomes a runaway process, resulting in a global glaciation. Global ice ages on Earth are temporary in nature, due to the planet’s carbon cycle. When the planet is not covered in ice, levels of carbon dioxide in the atmosphere are somewhat controlled by the weathering of rocks and minerals. When the planet is covered in ice, weathering is vastly reduced, so that carbon dioxide builds up in the atmosphere, creating a greenhouse effect that eventually thaws the planet out of its ice age. Scientists generally agree that the formation of Snowball Earths has something to do with the balance between incoming sunlight, the ice-albedo feedback, and the global carbon cycle. “There are lots of ideas for what caused these global glaciations, but they all really boil down to some implicit modification of solar radiation coming in,” Arnscheidt says. 
“But generally it’s been studied in the context of crossing a threshold.” He and Rothman had previously studied other periods in Earth’s history where the speed, or rate, at which certain changes in climate occurred had a role in triggering events, such as past mass extinctions. “In the course of this exercise, we realized there was an immediate way to make a serious point by applying such ideas of rate-induced tipping to Snowball Earth and habitability,” Rothman says. “Be wary of speed” The researchers developed a simple mathematical model of the Earth’s climate system that includes equations to represent relations between incoming and outgoing solar radiation, the surface temperature of the Earth, the concentration of carbon dioxide in the atmosphere, and the effects of weathering in taking up and storing atmospheric carbon dioxide. The researchers were able to tune each of these parameters to observe which conditions generated a Snowball Earth. Ultimately, they found that a planet was more likely to freeze over if incoming solar radiation decreased faster than a critical rate, rather than dropping below a critical threshold, or particular level of sunlight. There is some uncertainty in exactly what that critical rate would be, as the model is a simplified representation of the Earth’s climate. Nevertheless, Arnscheidt estimates that the Earth would have to experience about a 2 percent drop in incoming sunlight over a period of about 10,000 years to tip into a global ice age. “It’s reasonable to assume past glaciations were induced by geologically quick changes to solar radiation,” Arnscheidt says. The particular mechanisms that may have quickly darkened the skies over tens of thousands of years are still up for debate. One possibility is that widespread volcanoes may have spewed aerosols into the atmosphere, blocking incoming sunlight around the world. 
Another is that primitive algae may have evolved mechanisms that facilitated the formation of light-reflecting clouds. The results from this new study suggest scientists may consider processes such as these, that quickly reduce incoming solar radiation, as more likely triggers for Earth’s ice ages.“Even though humanity will not trigger a snowball glaciation on our current climate trajectory, the existence of such a ‘rate-induced tipping point’ at the global scale may still remain a cause for concern,” Arnscheidt points out. “For example, it teaches us that we should be wary of the speed at which we are modifying Earth’s climate, not just the magnitude of the change. There could be other such rate-induced tipping points that might be triggered by anthropogenic warming. Identifying these and constraining their critical rates is a worthwhile goal for further research.”This research was funded, in part, by the MIT Lorenz Center. The trigger for “Snowball Earth” global ice ages may have been drops in incoming sunlight that happened quickly, in geological terms, according to an MIT study. Image: Wikimedia, Oleg Kuznetsov https://news.mit.edu/2020/sunlight-triggered-snowball-earths-ice-ages-0729 Findings also suggest exoplanets lying within habitable zones may be susceptible to ice ages. Wed, 29 Jul 2020 09:51:35 -0400 https://news.mit.edu/2020/sunlight-triggered-snowball-earths-ice-ages-0729 Jennifer Chu | MIT News Office At least twice in Earth’s history, nearly the entire planet was encased in a sheet of snow and ice. These dramatic “Snowball Earth” events occurred in quick succession, somewhere around 700 million years ago, and evidence suggests that the consecutive global ice ages set the stage for the subsequent explosion of complex, multicellular life on Earth.Scientists have considered multiple scenarios for what may have tipped the planet into each ice age. 
While no single driving process has been identified, it’s assumed that whatever triggered the temporary freeze-overs must have done so in a way that pushed the planet past a critical threshold, such as reducing incoming sunlight or atmospheric carbon dioxide to levels low enough to set off a global expansion of ice.But MIT scientists now say that Snowball Earths were likely the product of “rate-induced glaciations.” That is, they found the Earth can be tipped into a global ice age when the level of solar radiation it receives changes quickly over a geologically short period of time. The amount of solar radiation doesn’t have to drop to a particular threshold point; as long as the decrease in incoming sunlight occurs faster than a critical rate, a temporary glaciation, or Snowball Earth, will follow.These findings, published today in the Proceedings of the Royal Society A, suggest that whatever triggered the Earth’s ice ages most likely involved processes that quickly reduced the amount of solar radiation coming to the surface, such as widespread volcanic eruptions or biologically induced cloud formation that could have significantly blocked out the sun’s rays. The findings may also apply to the search for life on other planets. Researchers have been keen on finding exoplanets within the habitable zone — a distance from their star that would be within a temperature range that could support life. The new study suggests that these planets, like Earth, could also ice over temporarily if their climate changes abruptly. Even if they lie within a habitable zone, Earth-like planets may be more susceptible to global ice ages than previously thought.“You could have a planet that stays well within the classical habitable zone, but if incoming sunlight changes too fast, you could get a Snowball Earth,” says lead author Constantin Arnscheidt, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). 
https://news.mit.edu/2020/sunlight-triggered-snowball-earths-ice-ages-0729 Findings also suggest exoplanets lying within habitable zones may be susceptible to ice ages. Wed, 29 Jul 2020 09:51:35 -0400 https://news.mit.edu/2020/sunlight-triggered-snowball-earths-ice-ages-0729 Jennifer Chu | MIT News Office

At least twice in Earth’s history, nearly the entire planet was encased in a sheet of snow and ice. These dramatic “Snowball Earth” events occurred in quick succession, somewhere around 700 million years ago, and evidence suggests that the consecutive global ice ages set the stage for the subsequent explosion of complex, multicellular life on Earth.

Scientists have considered multiple scenarios for what may have tipped the planet into each ice age.
While no single driving process has been identified, it’s assumed that whatever triggered the temporary freeze-overs must have done so in a way that pushed the planet past a critical threshold, such as reducing incoming sunlight or atmospheric carbon dioxide to levels low enough to set off a global expansion of ice.

But MIT scientists now say that Snowball Earths were likely the product of “rate-induced glaciations.” That is, they found the Earth can be tipped into a global ice age when the level of solar radiation it receives changes quickly over a geologically short period of time. The amount of solar radiation doesn’t have to drop to a particular threshold point; as long as the decrease in incoming sunlight occurs faster than a critical rate, a temporary glaciation, or Snowball Earth, will follow.

These findings, published today in the Proceedings of the Royal Society A, suggest that whatever triggered the Earth’s ice ages most likely involved processes that quickly reduced the amount of solar radiation coming to the surface, such as widespread volcanic eruptions or biologically induced cloud formation that could have significantly blocked out the sun’s rays.

The findings may also apply to the search for life on other planets. Researchers have been keen on finding exoplanets within the habitable zone — a distance from their star that would be within a temperature range that could support life. The new study suggests that these planets, like Earth, could also ice over temporarily if their climate changes abruptly. Even if they lie within a habitable zone, Earth-like planets may be more susceptible to global ice ages than previously thought.

“You could have a planet that stays well within the classical habitable zone, but if incoming sunlight changes too fast, you could get a Snowball Earth,” says lead author Constantin Arnscheidt, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).
“What this highlights is the notion that there’s so much more nuance in the concept of habitability.”

Arnscheidt co-authored the paper with Daniel Rothman, EAPS professor of geophysics and co-founder and co-director of the Lorenz Center.

A runaway snowball

Regardless of the particular processes that triggered past glaciations, scientists generally agree that Snowball Earths arose from a “runaway” effect involving an ice-albedo feedback: As incoming sunlight is reduced, ice expands from the poles to the equator. As more ice covers the globe, the planet becomes more reflective, or higher in albedo, which further cools the surface, allowing still more ice to expand. Eventually, if the ice reaches a certain extent, this becomes a runaway process, resulting in a global glaciation.

Global ice ages on Earth are temporary, due to the planet’s carbon cycle. When the planet is not covered in ice, levels of carbon dioxide in the atmosphere are somewhat controlled by the weathering of rocks and minerals. When the planet is covered in ice, weathering is vastly reduced, so that carbon dioxide builds up in the atmosphere, creating a greenhouse effect that eventually thaws the planet out of its ice age.

Scientists generally agree that the formation of Snowball Earths has something to do with the balance between incoming sunlight, the ice-albedo feedback, and the global carbon cycle.

“There are lots of ideas for what caused these global glaciations, but they all really boil down to some implicit modification of solar radiation coming in,” Arnscheidt says.
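The runaway described above can be sketched with a minimal zero-dimensional energy-balance model. This is a toy illustration, not the model from the study: the albedo curve, the linearized outgoing radiation, and all constants below are textbook-style assumptions chosen only to show the two stable states (warm and "snowball") that the ice-albedo feedback creates.

```python
# Toy zero-dimensional energy-balance model of the ice-albedo runaway.
# All parameter values are illustrative assumptions, not from the study.
import math

Q = 342.0          # incoming solar radiation, W/m^2 (global average)
A, B = 202.0, 1.9  # linearized outgoing longwave radiation: A + B*T (T in deg C)

def albedo(T):
    # Smoothly interpolate between a warm, ice-free albedo (0.3) and a
    # high, ice-covered albedo (0.6) as temperature falls past -10 C.
    return 0.3 + 0.15 * (1.0 - math.tanh((T + 10.0) / 10.0))

def equilibrate(T0, dt=0.1, steps=5000):
    # Forward-Euler relaxation of dT/dt = absorbed - emitted toward a
    # steady state (heat capacity folded into the time units).
    T = T0
    for _ in range(steps):
        net = Q * (1.0 - albedo(T)) - (A + B * T)
        T += dt * net
    return T

warm = equilibrate(15.0)    # starts ice-free -> settles in the warm state
cold = equilibrate(-20.0)   # starts icy -> albedo feedback runs away to a snowball
print(round(warm, 1), round(cold, 1))
```

The same insolation Q supports both end states; which one the planet lands in depends only on which side of the unstable transition zone it starts from, which is exactly the bistability that makes a runaway glaciation possible.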
“But generally it’s been studied in the context of crossing a threshold.”

He and Rothman had previously studied other periods in Earth’s history where the speed, or rate, at which certain changes in climate occurred had a role in triggering events, such as past mass extinctions.

“In the course of this exercise, we realized there was an immediate way to make a serious point by applying such ideas of rate-induced tipping to Snowball Earth and habitability,” Rothman says.

“Be wary of speed”

The researchers developed a simple mathematical model of the Earth’s climate system that includes equations to represent relations between incoming and outgoing solar radiation, the surface temperature of the Earth, the concentration of carbon dioxide in the atmosphere, and the effects of weathering in taking up and storing atmospheric carbon dioxide. The researchers were able to tune each of these parameters to observe which conditions generated a Snowball Earth.

Ultimately, they found that a planet was more likely to freeze over if incoming solar radiation decreased faster than a critical rate, rather than dropping below a critical threshold, or particular level of sunlight. There is some uncertainty in exactly what that critical rate would be, as the model is a simplified representation of the Earth’s climate. Nevertheless, Arnscheidt estimates that the Earth would have to experience about a 2 percent drop in incoming sunlight over a period of about 10,000 years to tip into a global ice age.

“It’s reasonable to assume past glaciations were induced by geologically quick changes to solar radiation,” Arnscheidt says.

The particular mechanisms that may have quickly darkened the skies over tens of thousands of years are still up for debate. One possibility is that widespread volcanoes may have spewed aerosols into the atmosphere, blocking incoming sunlight around the world.
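The rate-versus-threshold distinction can be illustrated with a toy extension of a zero-dimensional energy-balance model: add a slow, stabilizing greenhouse term that grows while the planet is colder than a reference state, standing in for the weathering/CO2 feedback the article describes. This is a sketch under stated assumptions, not the authors' model; every constant (including the size and duration of the dimming) is an illustrative toy value, deliberately not the study's 2-percent-over-10,000-years estimate.

```python
# Toy illustration of rate-induced glaciation: the same total drop in
# sunlight tips the climate into a snowball only when it happens fast,
# because the slow CO2/weathering feedback (forcing F) can compensate
# for a gradual dimming but not a rapid one. All numbers are illustrative.
import math

A, B = 202.0, 1.9       # linearized outgoing longwave radiation A + B*T (T in deg C)
Q0, Q1 = 342.0, 280.0   # initial and final incoming solar radiation, W/m^2
T_REF = 19.6            # temperature the weathering feedback restores, deg C
EPS = 0.02              # how slowly the CO2/weathering feedback responds

def albedo(T):
    # Warm, ice-free albedo 0.3; ice-covered albedo 0.6; transition near -10 C.
    return 0.3 + 0.15 * (1.0 - math.tanh((T + 10.0) / 10.0))

def run(ramp_time, t_total=2500.0, dt=0.05):
    # Integrate temperature T (fast) and greenhouse forcing F (slow)
    # while Q ramps linearly from Q0 down to Q1 over `ramp_time`.
    T, F = T_REF, 0.0
    coldest = T
    for i in range(int(t_total / dt)):
        t = i * dt
        Q = Q1 if t >= ramp_time else Q0 + (Q1 - Q0) * t / ramp_time
        net = Q * (1.0 - albedo(T)) - (A + B * T) + F   # energy imbalance
        T += dt * net                  # fast temperature response
        F += dt * EPS * (T_REF - T)    # CO2 slowly builds while planet is cold
        coldest = min(coldest, T)
    return T, coldest

_, coldest_slow = run(ramp_time=2000.0)  # gradual dimming: feedback keeps up
_, coldest_fast = run(ramp_time=5.0)     # rapid dimming: runaway snowball
print(round(coldest_slow, 1), round(coldest_fast, 1))
```

With the slow ramp, the temperature sags only slightly while the greenhouse term catches up, so no glaciation occurs; with the fast ramp, the warm state disappears before the feedback can respond and the planet plunges into the snowball branch (and, as in the article's carbon-cycle story, CO2 buildup eventually thaws it again later in the run).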
Another is that primitive algae may have evolved mechanisms that facilitated the formation of light-reflecting clouds. The results from this new study suggest scientists should consider processes such as these, which quickly reduce incoming solar radiation, as more likely triggers for Earth’s ice ages.

“Even though humanity will not trigger a snowball glaciation on our current climate trajectory, the existence of such a ‘rate-induced tipping point’ at the global scale may still remain a cause for concern,” Arnscheidt points out. “For example, it teaches us that we should be wary of the speed at which we are modifying Earth’s climate, not just the magnitude of the change. There could be other such rate-induced tipping points that might be triggered by anthropogenic warming. Identifying these and constraining their critical rates is a worthwhile goal for further research.”

This research was funded, in part, by the MIT Lorenz Center. The trigger for “Snowball Earth” global ice ages may have been drops in incoming sunlight that happened quickly, in geological terms, according to an MIT study. Image: Wikimedia, Oleg Kuznetsov

https://news.mit.edu/2020/gift-tackling-poverty-climate-change-0729 The King Climate Action Initiative at J-PAL will develop large-scale climate-response programs for some of the world’s most vulnerable populations. Wed, 29 Jul 2020 09:12:48 -0400 https://news.mit.edu/2020/gift-tackling-poverty-climate-change-0729 Peter Dizikes | MIT News Office

With a founding $25 million gift from King Philanthropies, MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL) is launching a new initiative to solve problems at the nexus of climate change and global poverty.

The new program, the King Climate Action Initiative (K-CAI), was announced today by King Philanthropies and J-PAL, and will start immediately.
K-CAI plans to rigorously study programs that reduce the effects of climate change on vulnerable populations, and then work with policymakers to scale up the most successful interventions.

“To protect our well-being and improve the lives of people living in poverty, we must be better stewards of our climate and our planet,” says Esther Duflo, director of J-PAL and the Abdul Latif Jameel Professor of Poverty Alleviation and Development Economics at MIT. “Through K-CAI, we will work to build a movement for evidence-informed policy at the nexus of climate change and poverty alleviation similar to the movement J-PAL helped build in global development. The moment is perhaps unique: The only silver lining of this global pandemic is that it reminds us that nature is sometimes stronger than us. It is a moment to act decisively to change behavior to stave off a much larger catastrophe in the future.”

K-CAI constitutes an ambitious effort: The initiative intends to help improve the lives of at least 25 million people over the next decade. K-CAI will announce a call for proposals this summer and select its first funded projects by the end of 2020.

“We are short on time to take action on climate change,” says Robert King, co-founder of King Philanthropies. “K-CAI reflects our commitment to confront this global crisis by focusing on solutions that benefit people in extreme poverty. They are already the hardest hit by climate change, and if we fail to act, their circumstances will become even more dire.”

There are currently an estimated 736 million people globally living in extreme poverty, on less than $1.90 per day. The World Bank estimates that climate change could push roughly another 100 million into extreme poverty by 2030.

As vast as its effects may be, climate change also presents a diverse set of problems to tackle.
Among other things, climate change, as well as fossil-fuel pollution, is expected to reduce crop yields, raise food prices, and generate more malnutrition; increase the prevalence of respiratory illness, heat stress, and numerous other diseases; and increase extreme weather events, wiping out homes, livelihoods, and communities.

With this in mind, the initiative will focus on specific projects within four areas: climate change mitigation, to reduce carbon emissions; pollution reduction; adaptation to ongoing climate change; and shifting toward cleaner, reliable, and more affordable sources of energy. In each area, K-CAI will study smaller-scale programs, evaluate their impact, and work with partners to scale up the projects with the most effective solutions.

Projects backed by J-PAL have already had an impact in these areas. In one recent study, J-PAL-affiliated researchers found that changing the emissions audit system in Gujarat, India, reduced industrial-plant pollution by 28 percent; the state then implemented the reforms. In another study in India, J-PAL-affiliated researchers found that farmers using a flood-resistant rice variety called Swarna-Sub1 increased their crop yields by 41 percent.

In Zambia, a study by researchers in the J-PAL network showed that lean-season loans for farmers increased agricultural output by 8 percent; in Uganda, J-PAL-affiliated researchers found that a payment system to landowners cut deforestation nearly in half and is a cost-effective way to lower carbon emissions.

Other J-PAL field experiments in progress include one providing cash payments that stop farmers in Punjab, India, from burning crops, which generates half the air pollution in Delhi; another implementing an emissions-trading plan in India; and a new program to harvest rainwater more effectively in Niger.
All told, J-PAL researchers have evaluated over 40 programs focused on climate, energy, and the environment.

By conducting these kinds of field experiments, and implementing some widely, K-CAI aims to apply the same approach J-PAL has directed toward multiple aspects of poverty alleviation, including food production, health care, education, and transparent governance.

A unique academic enterprise, J-PAL emphasizes randomized controlled trials to identify useful poverty-reduction programs, then works with governments and nongovernmental organizations to implement them. All told, programs evaluated by J-PAL-affiliated researchers and found to be effective have been scaled up to reach 400 million people worldwide since the lab’s founding in 2003.

“J-PAL has distinctive core competencies that equip it to achieve outsized impact over the long run,” says Kim Starkey, president and CEO of King Philanthropies. “Its researchers excel at conducting randomized evaluations to figure out what works, its leadership is tremendous, and J-PAL as an organization has a rare, demonstrated ability to partner with governments and other organizations to scale up proven interventions and programs.”

K-CAI aims to conduct an increasing number of field experiments over the initial five-year period and focus on implementing the highest-quality programs at scale over the subsequent five years. As Starkey observes, this approach may generate increasing interest from additional partners.

“There is an immense need for a larger body of evidence about what interventions work at this nexus of climate change and extreme poverty,” Starkey says. “The findings of the King Climate Action Initiative will inform policymakers and funders as they seek to prioritize opportunities with the highest impact.”

King Philanthropies was founded by Robert E. (Bob) King and Dorothy J. (Dottie) King in 2016.
The organization has a goal of making “a meaningful difference in the lives of the world’s poorest people” by developing and supporting a variety of antipoverty initiatives.

J-PAL was co-founded by Duflo; Abhijit Banerjee, the Ford International Professor of Economics at MIT; and Sendhil Mullainathan, now a professor at the University of Chicago’s Booth School of Business. It has over 200 affiliated researchers at more than 60 universities across the globe. J-PAL is housed in the Department of Economics in MIT’s School of Humanities, Arts, and Social Sciences.

Last fall, Duflo and Banerjee, along with long-time collaborator Michael Kremer of Harvard University, were awarded the Nobel Prize in economic sciences. The Nobel citation observed that their work has “dramatically improved our ability to fight poverty in practice” and provided a “new approach to obtaining reliable answers about the best ways to fight global poverty.”

K-CAI will be co-chaired by two professors, Michael Greenstone and Kelsey Jack, who have extensive research experience in environmental economics. Both are already affiliated researchers with J-PAL.

Greenstone is the Milton Friedman Distinguished Service Professor in Economics at the University of Chicago. He is also director of the Energy Policy Institute at the University of Chicago. Greenstone, who was a tenured faculty member in MIT’s Department of Economics from 2003 to 2014, has published high-profile work on energy access, the consequences of air pollution, and the effectiveness of policy measures, among other topics.

Jack is an associate professor in the Bren School of Environmental Science and Management at the University of California at Santa Barbara. She is an expert on environment-related programs in developing countries, with a focus on incentives that encourage the private-sector development of environmental goods.
Jack was previously a faculty member at Tufts University, and a postdoc at MIT in 2010-11, working on J-PAL’s Agricultural Technology Adoption Initiative. Over the next decade, the King Climate Action Initiative (K-CAI) intends to help improve the lives of at least 25 million people hard hit by poverty and climate change. Image: MIT News https://news.mit.edu/2020/letter-reif-grand-challenges-climate-change-0723 Thu, 23 Jul 2020 15:18:29 -0400 https://news.mit.edu/2020/letter-reif-grand-challenges-climate-change-0723 MIT News Office The following letter was sent to the MIT community today by President L. Rafael Reif. To the members of the MIT community, I am delighted to share an important step in MIT’s ongoing efforts to take action against climate change. Thanks to the thoughtful leadership of Vice President for Research Maria Zuber, Associate Provost Richard Lester and a committee of 26 faculty leaders representing all five schools and the college, today we are committing to an ambitious new research effort called Climate Grand Challenges. MIT’s Plan for Action on Climate Change stressed the need for breakthrough innovations and underscored MIT’s responsibility to lead. Since then, the escalating climate crisis and lagging global response have only intensified the need for action. With this letter, we invite all principal investigators (PIs) from across MIT to help us define a new agenda of transformative research. The threat of climate change demands a host of interlocking solutions; to shape a research program worthy of MIT, we seek bold faculty proposals that address the most difficult problems in the field, problems whose solutions would make the most decisive difference. The focus will be on those hard questions where progress depends on advancing and applying frontier knowledge in the physical, life and social sciences, or advancing and applying cutting-edge technologies, or both; solutions may require the wisdom of many disciplines. 
Equally important will be to advance the humanistic and scientific understanding of how best to inspire 9 billion humans to adopt the technologies and behaviors the crisis demands. We encourage interested PIs to submit a letter of interest. A group of MIT faculty and outside experts will choose the most compelling – the five or six ideas that offer the most effective levers for rapid, large-scale change. MIT will then focus intensely on securing the funds for the work to succeed. To meet this great rolling emergency for the species, we are seeking and expecting big ideas for sharpening our understanding, combating climate change itself and adapting constructively to its impacts. You can learn much more about the overall concept as well as specific deadlines and requirements here. This invitation is geared specifically for MIT PIs – but the climate problem deserves wholehearted attention from every one of us. Whatever your role, I encourage you to find ways to be part of the broad range of climate events, courses, research, and other work already under way at MIT.  For decades, MIT students, staff, postdocs, faculty and alumni have poured their energy, insight and ingenuity into countless aspects of the climate problem; in this new work, your efforts are our inspiration and our springboard.  We will share next steps in the Climate Grand Challenges process later in the fall semester. Sincerely, L. Rafael Reif https://news.mit.edu/2020/covid-19-solar-output-smog-0722 As the air cleared after lockdowns, solar installations in Delhi produced 8 percent more power, study shows. Wed, 22 Jul 2020 00:00:00 -0400 https://news.mit.edu/2020/covid-19-solar-output-smog-0722 David L. Chandler | MIT News Office As the Covid-19 shutdowns and stay-at-home orders brought much of the world’s travel and commerce to a standstill, people around the world started noticing clearer skies as a result of lower levels of air pollution.
Now, researchers have been able to demonstrate that those clearer skies had a measurable impact on the output from solar photovoltaic panels, leading to a more than 8 percent increase in the power output from installations in Delhi. While such an improved output was not unexpected, the researchers say this is the first study to demonstrate and quantify the impact of the reduced air pollution on solar output. The effect should apply to solar installations worldwide, but would normally be very difficult to measure against a background of natural variations in solar panel output caused by everything from clouds to dust on the panels. The extraordinary conditions triggered by the pandemic, with its sudden cessation of normal activities, combined with high-quality air-pollution data from one of the world’s smoggiest cities, afforded the opportunity to harness data from an unprecedented, unplanned natural experiment. The findings are reported today in the journal Joule, in a paper by MIT professor of mechanical engineering Tonio Buonassisi, research scientist Ian Marius Peters, and three others in Singapore and Germany. The study was an extension of previous research the team has been conducting in Delhi for several years. The impetus for the work came after an unusual weather pattern in 2013 swept a concentrated plume of smoke from forest fires in Indonesia across a vast swath of Indonesia, Malaysia, and Singapore, where Peters, who had just arrived in the region, found “it was so bad that you couldn’t see the buildings on the other side of the street.” Since he was already doing research on solar photovoltaics, Peters decided to investigate what effects the air pollution was having on solar panel output. The team had good long-term data on both solar panel output and solar insolation, gathered at the same time by monitoring stations set up adjacent to the solar installations. 
They saw that during the 18-day-long haze event, the performance of some types of solar panels decreased, while others stayed the same or increased slightly. That distinction proved useful in teasing apart the effects of pollution from other variables that could be at play, such as weather conditions. Peters later learned that a high-quality, years-long record of actual measurements of fine particulate air pollution (particles less than 2.5 micrometers in size) had been collected every hour, year after year, at the U.S. Embassy in Delhi. That provided the necessary baseline for determining the actual effects of pollution on solar panel output; the researchers compared the air pollution data from the embassy with meteorological data on cloudiness and the solar irradiation data from the sensors. They identified a roughly 10 percent overall reduction in output from the solar installations in Delhi because of pollution – enough to make a significant dent in the facilities’ financial projections. To see how the Covid-19 shutdowns had affected the situation, they were able to use the mathematical tools they had developed, along with the embassy’s ongoing data collection, to see the impact of reductions in travel and factory operations. They compared the data from before and after India went into mandatory lockdown on March 24, and also compared this with data from the previous three years. Pollution levels were down by about 50 percent after the shutdown, they found. As a result, the total output from the solar panels increased by 8.3 percent in late March, and by 5.9 percent in April, they calculated. “These deviations are much larger than the typical variations we have” within a year or from year to year, Peters says — three to four times greater. “So we can’t explain this with just fluctuations.” The amount of difference, he says, is roughly the difference between the expected performance of a solar panel in Houston versus one in Toronto.
An 8 percent increase in output might not sound like much, Buonassisi says, but “the margins of profit are very small for these businesses.” If a solar company was expecting to get a 2 percent profit margin out of their expected 100 percent panel output, and suddenly they are getting 108 percent output, that means their margin has increased fivefold, from 2 percent to 10 percent, he points out. The findings provide real data on what can happen in the future as emissions are reduced globally, he says. “This is the first real quantitative evaluation where you almost have a switch that you can turn on and off for air pollution, and you can see the effect,” he says. “You have an opportunity to baseline these models with and without air pollution.” By doing so, he says, “it gives a glimpse into a world with significantly less air pollution.” It also demonstrates that the very act of increasing the usage of solar electricity, and thus displacing fossil-fuel generation that produces air pollution, makes those panels more efficient all the time. Putting solar panels on one’s house, he says, “is helping not only yourself, not only putting money in your pocket, but it’s also helping everybody else out there who already has solar panels installed, as well as everyone else who will install them over the next 20 years.” In a way, a rising tide of solar panels raises all solar panels. Though the focus was on Delhi, because the effects there are so strong and easy to detect, this effect “is true anywhere where you have some kind of air pollution. If you reduce it, it will have beneficial consequences for solar panels,” Peters says. Even so, not every claim of such effects is necessarily real, he says, and the details do matter. For example, clearer skies were also noted across much of Europe as a result of the shutdowns, and some news reports described exceptional output levels from solar farms in Germany and in the U.K. 
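Buonassisi's fivefold-margin arithmetic rests on costs staying fixed while extra output flows straight to profit; a quick sketch of the calculation, using the article's illustrative percentages (not real company figures):

```python
# Back-of-the-envelope check of the profit-margin arithmetic quoted above.
expected_output = 100.0                   # percent of projected panel output
profit_margin = 2.0                       # percent margin at expected output
costs = expected_output - profit_margin   # fixed costs implied: 98.0
actual_output = 108.0                     # 8 percent more power from cleaner air
new_margin = actual_output - costs        # extra output goes straight to profit
print(new_margin, new_margin / profit_margin)   # 10.0 5.0 -> fivefold increase
```

The design point is simply operating leverage: because costs don't scale with the unexpected extra output, a small percentage gain in revenue produces a much larger percentage gain in margin.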
But the researchers say that just turned out to be a coincidence. “The air pollution levels in Germany and Great Britain are generally so low that most PV installations are not significantly affected by them,” Peters says. After checking the data, what contributed most to those high levels of solar output this spring, he says, turned out to be just “extremely nice weather,” which produced record numbers of sunlight hours. The research team included C. Brabec and J. Hauch at the Helmholtz-Institute Erlangen-Nuremberg for Renewable Energies, in Germany, where Peters also now works, and A. Nobre at Cleantech Solar in Singapore. The work was supported by the Bavarian State Government. Shutdowns in response to the Covid-19 pandemic have resulted in lowered air pollution levels around the world. Researchers at MIT and in Germany and Singapore have found that this resulted in a significant increase in the output from solar photovoltaic installations in Delhi, normally one of the world’s smoggiest cities. Image: Jose-Luis Olivares, MIT https://news.mit.edu/2020/building-more-sustainable-mit-at-home-0715 MIT’s Office of Sustainability puts lessons of resiliency into practice. Wed, 15 Jul 2020 13:55:01 -0400 https://news.mit.edu/2020/building-more-sustainable-mit-at-home-0715 Nicole Morell | MIT Office of Sustainability Like most offices across MIT, the Office of Sustainability (MITOS) has in recent months worked to pivot projects while seeking to understand and participate in the emergence of a new normal as the result of the Covid-19 pandemic.
Despite the team now working off campus, the MITOS methodology — one that warrants collective engagement, commitment to innovative problem solving, and robust data collection — has continued.

An expanded look at resiliency

When the MIT community transitioned off campus, many began to use the word “resilient” for good reason — it’s one way to describe a community of thousands that quickly learned how to study, research, work, and teach from afar in the face of a major disruption. In the field of sustainability, resiliency is frequently used when referring to how communities can not only continue to function, but thrive during and after flooding or extreme heat events as the result of climate change. In recent months, the term has taken on expanded meaning. “The challenges associated with Covid-19 and its impact on MIT and the greater community has provided a moment to explore what a sustainable, resilient campus and community looks like in practice,” says Director of Sustainability Julie Newman.

The MIT campus climate resiliency framework codified by MITOS — in response to a changing climate — has long been organized around the interdependencies of four core systems: community (academic, research, and student life), buildings, utilities, and landscape systems. This same framework is now being applied in part to the MIT response to Covid-19. “The MIT campus climate resiliency framework has enabled us to understand the vulnerabilities and capacities within each core system that inhibit or enable fulfillment of MIT’s mission,” explains Brian Goldberg, MITOS assistant director. “The pandemic’s disruption of the community layer provides us with a remarkable test in progress of this adaptive capacity.”

The campus response to the pandemic has, in fact, informed future modeling and demonstrated how the community can advance its important work even when displaced. “MIT has been able to offer countless virtual resources to maintain a connected community,” Goldberg explains.
“While a future major flood could physically displace segments of our community, we’ve now seen that the ability to quickly evacuate and regroup virtually demonstrates a remarkable adaptive capacity.”

Taking the hive home

Also resilient are the flowering plants growing in the Hive Garden — the Institute’s student-supported pollinator garden. The garden is maintained by MIT Grounds Services alongside students, and the closure of campus meant many would miss the first spring bloom in the new garden. To make up for this, a group of UA Sustainability Committee (UA Sustain) students began to brainstorm ways to bring sustainable gardening to the MIT community if they couldn’t come to campus. Working with MITOS, students hatched the idea for the Hive@Home — a project that empowers students and staff to try their hands (and green thumbs) at growing a jalapeno or two, while building community.

“The Hive@Home is designed to link students and staff through gardening — continuing to strengthen the relationships built between MIT Grounds and the community since the Hive Garden started,” says Susy Jones, senior project manager who is leading the effort for MITOS. With funding from UA Sustain and MindHandHeart, the Hive@Home pilot launched in April with more than four dozen community members receiving vegetable seeds and growing supplies. Now the community is sharing their sprouts and lessons learned on Slack, with guidance from MIT Grounds experts like Norm Magnusson and Mike Seaberg, who helped bring the campus garden to life, along with professor of ocean and mechanical engineering Alexandra Techet, who is also an experienced home gardener.

Lessons learned from Covid-19 response

The impacts of Covid-19 continue to provide insights into community behavior and views.
Seeing an opportunity to better understand these views, the Sustainability Leadership Committee, in collaboration with the Office of Sustainability, the Environmental Solutions Initiative, Terrascope, and the MIT Energy Initiative, hosted a community sustainability forum where more than 100 participants — including staff, students, and faculty — shared ideas on how they thought the response to Covid-19 could inform sustainability efforts at MIT and beyond. Common themes of human health and well-being, climate action, food security, consumption and waste, sustainability education, and bold leadership emerged from the forum. “The event gave us a view into how MIT can be a sustainability leader in a post-Covid-19 world, and how our community would like to see this accomplished,” says Newman.

Community members also shared a renewed focus on the impacts of consumption and single-use plastics, as well as the idea that remote work can decrease the carbon footprint of the Institute. The Sustainability Leadership Committee is now working to share these insights to drive action and launch new ideas with sustainability partners across campus. These actions are just the beginning, as plans for campus are updated and the MIT community learns and adapts to a new normal at MIT. “We are looking at these ideas as a starting place,” explains Newman. “As we look to a future return to campus, we know the sustainability challenges and opportunities faced will continue to shift thinking about our mobility choices, where we eat, what we buy, and more. We will continue to have these community conversations and work across campus to support a sustainable, safe MIT.” MIT’s campus response to the pandemic has informed future modeling and demonstrated how the community can advance its important work even when displaced. Photo: Christopher Harting https://news.mit.edu/2020/decarbonize-and-diversify-0715 How energy-intensive economies can survive and thrive as the globe ramps up climate action.
Wed, 15 Jul 2020 13:40:01 -0400 https://news.mit.edu/2020/decarbonize-and-diversify-0715 Mark Dwortzan | MIT Joint Program on the Science and Policy of Global Change Today, Russia’s economy depends heavily upon its abundant fossil fuel resources. Russia is one of the world’s largest exporters of fossil fuels, and a number of its key exporting industries — including metals, chemicals, and fertilizers — draw on fossil resources. The nation also consumes fossil fuels at a relatively high rate; it’s the world’s fourth-largest emitter of carbon dioxide. As the world shifts away from fossil fuel production and consumption and toward low-carbon development aligned with the near- and long-term goals of the Paris Agreement, how might countries like Russia reshape their energy-intensive economies to avoid financial peril and capitalize on this clean energy transition? In a new study in the journal Climate Policy, researchers at the MIT Joint Program on the Science and Policy of Global Change and Russia’s National Research University Higher School of Economics assess the impacts on the Russian economy of the efforts of the main importers of Russian fossil fuels to comply with the Paris Agreement. The researchers project that expected climate-related actions by importers of Russia’s fossil fuels will lower demand for these resources considerably, thereby reducing the country’s GDP growth rate by nearly 0.5 percent between 2035 and 2050. The study also finds that the Paris Agreement will heighten Russia’s risks of facing market barriers for its exports of energy-intensive goods, and of lagging behind in developing increasingly popular low-carbon energy technologies. 
Using the Joint Program’s Economic Projection and Policy Analysis model, a multi-region, multi-sector model of the world economy, the researchers evaluated the impact on Russian energy exports and GDP of scenarios representing global climate policy ambition ranging from non-implementation of national Paris pledges to collective action aligned with keeping global warming well below 2 degrees Celsius. The bottom line: Global climate policies will make it impossible for Russia to sustain its current path of fossil fuel export-based development. To maintain and enhance its economic well-being, the study’s co-authors recommend that Russia both decarbonize and diversify its economy in alignment with climate goals. In short, by taxing fossil fuels (e.g., through a production tax or carbon tax), the country could redistribute that revenue to the development of human capital to boost other economic sectors (primarily manufacturing, services, agriculture, and food production), thereby making up for energy-sector losses due to global climate policies. The study projects that the resulting GDP could be on the order of 1-4 percent higher than it would be without diversification. “Many energy-exporting countries have tried to diversify their economies, but with limited success,” says Sergey Paltsev, deputy director of the MIT Joint Program, senior research scientist at the MIT Energy Initiative (MITEI), and director of the MIT Joint Program/MITEI Energy-at-Scale Center. “Our study quantifies the dynamics of efforts to achieve economic diversification in which reallocation of funds leads to higher labor productivity and economic growth — all while enabling more aggressive emissions reduction targets.” The study was supported by the Basic Research Program of the National Research University Higher School of Economics and the MIT Skoltech Seed Fund Program. 
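The tax-and-redistribute mechanism the study describes can be illustrated with a deliberately simplified two-sector sketch. All function names and parameter values below are illustrative assumptions, not figures from the EPPA model:

```python
# Toy two-sector sketch of the tax-and-redistribute idea described above.
# All parameters are illustrative assumptions, not EPPA model values.

def gdp_path(years, energy0=1.0, other0=2.0,
             energy_decline=0.03, tax_rate=0.0, productivity_gain=0.5):
    """Project a stylized GDP path.

    energy_decline: annual fall in fossil-export output under global climate policy.
    tax_rate: share of energy output taxed and redistributed to human capital.
    productivity_gain: extra growth in non-energy sectors per unit of revenue.
    """
    energy, other = energy0, other0
    path = []
    for _ in range(years):
        revenue = tax_rate * energy
        energy *= (1 - energy_decline)
        # Redistributed revenue raises labor productivity in non-energy sectors.
        other *= (1 + 0.01 + productivity_gain * revenue / other)
        path.append(energy + other)
    return path

no_div = gdp_path(15)             # no diversification
div = gdp_path(15, tax_rate=0.1)  # tax fossil output, fund human capital
print(f"GDP after 15 years: {no_div[-1]:.2f} vs {div[-1]:.2f} with diversification")
```

Even in this toy version, the redistributed revenue compounds in the faster-growing sectors, so the diversified path ends higher despite the same fossil decline.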
Human capital development in Russia through increased per-student expenditure could lead to long-term benefits in manufacturing, services, agriculture, food production, and other sectors. Seen here: Russian students from Tyumen State University. Photo courtesy of the United Nations Development Program. https://news.mit.edu/2020/new-approach-to-carbon-capture-0709 Researchers design an effective treatment for both exhaust and ambient air. Thu, 09 Jul 2020 15:25:01 -0400 https://news.mit.edu/2020/new-approach-to-carbon-capture-0709 Nancy W. Stauffer | MIT Energy Initiative An essential component of any climate change mitigation plan is cutting carbon dioxide (CO2) emissions from human activities. Some power plants now have CO2 capture equipment that grabs CO2 out of their exhaust. But those systems are each the size of a chemical plant, cost hundreds of millions of dollars, require a lot of energy to run, and work only on exhaust streams that contain high concentrations of CO2. In short, they’re not a solution for airplanes, home heating systems, or automobiles. To make matters worse, capturing CO2 emissions from all anthropogenic sources may not solve the climate problem. “Even if all those emitters stopped tomorrow morning, we would still have to do something about the amount of CO2 in the air if we’re going to restore preindustrial atmospheric levels at a rate relevant to humanity,” says Sahag Voskian SM ’15, PhD ’19, co-founder and chief technology officer at Verdox, Inc. 
And developing a technology that can capture the CO2 in the air is a particularly hard problem, in part because the CO2 occurs in such low concentrations. The CO2 capture challenge A key problem with CO2 capture is finding a “sorbent” that will pick up CO2 in a stream of gas and then release it so the sorbent is clean and ready for reuse and the released CO2 stream can be utilized or sent to a sequestration site for long-term storage. Research has mainly focused on sorbent materials present as small particles whose surfaces contain “active sites” that capture CO2 — a process called adsorption. When the system temperature is lowered (or pressure increased), CO2 adheres to the particle surfaces. When the temperature is raised (or pressure reduced), the CO2 is released. But achieving those temperature or pressure “swings” takes considerable energy, in part because it requires treating the whole mixture, not just the CO2-bearing sorbent. In 2015, Voskian, then a PhD candidate in chemical engineering, and T. Alan Hatton, the Ralph Landau Professor of Chemical Engineering and co-director of the MIT Energy Initiative’s Low-Carbon Energy Center for Carbon Capture, Utilization, and Storage, began to take a closer look at the temperature- and pressure-swing approach. “We wondered if we could get by with using only a renewable resource — like renewably sourced electricity — rather than heat or pressure,” says Hatton. Using electricity to elicit the chemical reactions needed for CO2 capture and conversion had been studied for several decades, but Hatton and Voskian had a new idea about how to engineer a more efficient adsorption device. Their work focuses on a special class of molecules called quinones. When quinone molecules are forced to take on extra electrons — which means they’re negatively charged — they have a high chemical affinity for CO2 molecules and snag any that pass. 
When the extra electrons are removed from the quinone molecules, the quinone’s chemical affinity for CO2 instantly disappears, and the molecules release the captured CO2. Others have investigated the use of quinones and an electrolyte in a variety of electrochemical devices. In most cases, the devices involve two electrodes — a negative one where the dissolved quinone is activated for CO2 capture, and a positive one where it’s deactivated for CO2 release. But moving the solution from one electrode to the other requires complex flow and pumping systems that are large and take up considerable space, limiting where the devices can be used. As an alternative, Hatton and Voskian decided to use the quinone as a solid electrode and — by applying what Hatton calls “a small change in voltage” — vary the electrical charge of the electrode itself to activate and deactivate the quinone. In such a setup, there would be no need to pump fluids around or to raise and lower the temperature or pressure, and the CO2 would end up as an easy-to-separate attachment on the solid quinone electrode. They dubbed their concept “electro-swing adsorption.” The electro-swing cell To put their concept into practice, the researchers designed the electrochemical cell shown in the two diagrams in Figure 1 in the slideshow above. To maximize exposure, they put two quinone electrodes on the outside of the cell, thereby doubling its geometric capacity for CO2 capture. To switch the quinone on and off, they needed a component that would supply electrons and then take them back. For that job, they used a single ferrocene electrode, sandwiched between the two quinone electrodes but isolated from them by electrolyte membrane separators to prevent short circuits. They connected both quinone electrodes to the ferrocene electrode using the circuit of wires at the top, with a power source along the way. 
The power source creates a voltage that causes electrons to flow from the ferrocene to the quinone through the wires. The quinone is now negatively charged. When CO2-containing air or exhaust is blown past these electrodes, the quinone will capture the CO2 molecules until all the active sites on its surface are filled up. During the discharge cycle, the direction of the voltage on the cell is reversed, and electrons flow from the quinone back to the ferrocene. The quinone is no longer negatively charged, so it has no chemical affinity for CO2. The CO2 molecules are released and swept out of the system by a stream of purge gas for subsequent use or disposal. The quinone is now regenerated and ready to capture more CO2. Two additional components are key to successful operation. First is an electrolyte, in this case a liquid salt, that moistens the cell with positive and negative ions (electrically charged particles). Since electrons only flow through the external wires, those charged ions must travel within the cell from one electrode to the other to close the circuit for continued operation. The second special ingredient is carbon nanotubes. In the electrodes, the quinone and ferrocene are both present as coatings on the surfaces of carbon nanotubes. Nanotubes are both strong and highly conductive, so they provide good support and serve as an efficient conduit for electrons traveling into and out of the quinone and ferrocene. To fabricate a cell, researchers first synthesize a quinone- or ferrocene-based polymer, specifically, polyanthraquinone or polyvinylferrocene. They then make an “ink” by combining the polymer with carbon nanotubes in a solvent. The polymer immediately wraps around the nanotubes, connecting with them on a fundamental level. To make the electrode, they use a non-woven carbon fiber mat as a substrate. 
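The charge/discharge cycle just described reduces to a simple two-state swing: a negatively charged quinone electrode adsorbs CO2 until its active sites fill, and reversing the voltage releases everything. A minimal sketch of that logic follows; the class name, site counts, and gas-stream values are illustrative assumptions, not the researchers' implementation:

```python
# Minimal sketch of the electro-swing capture/release cycle described above.
# Site counts and gas-stream values are illustrative assumptions.

class ElectroSwingCell:
    def __init__(self, active_sites=100):
        self.active_sites = active_sites
        self.captured = 0
        self.charged = False  # a negatively charged quinone has CO2 affinity

    def apply_voltage(self, charge: bool) -> int:
        """Charging activates the quinone; discharging releases all captured CO2."""
        released = 0
        if self.charged and not charge:
            released, self.captured = self.captured, 0
        self.charged = charge
        return released

    def pass_gas(self, co2_molecules: int) -> int:
        """Blow CO2-laden gas past the electrode; return molecules captured."""
        if not self.charged:
            return 0
        grab = min(co2_molecules, self.active_sites - self.captured)
        self.captured += grab
        return grab

cell = ElectroSwingCell()
cell.apply_voltage(True)              # charge: quinone gains affinity for CO2
captured = cell.pass_gas(150)         # sites saturate at 100
released = cell.apply_voltage(False)  # reverse voltage: all CO2 released
print(captured, released)             # 100 100
```

The binary on/off affinity is what makes the model this simple: there is no partial desorption step to track, only the saturation limit of the active sites.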
They dip the mat into the ink, allow it to dry slowly, and then dip it again, repeating the procedure until they’ve built up a uniform coating of the composite on the substrate. The result of the process is a porous mesh that provides a large surface area of active sites and easy pathways for CO2 molecules to move in and out. Once the researchers have prepared the quinone and ferrocene electrodes, they assemble the electrochemical cell by laminating the pieces together in the correct order — the quinone electrode, the electrolyte separator, the ferrocene electrode, another separator, and the second quinone electrode. Finally, they moisten the assembled cell with their liquid salt electrolyte. Experimental results To test the behavior of their system, the researchers placed a single electrochemical cell inside a custom-made, sealed box and wired it for electricity input. They then cycled the voltage and measured the key responses and capabilities of the device. The simultaneous trends in charge density put into the cell and CO2 adsorption per mole of quinone showed that when the quinone electrode is negatively charged, the amount of CO2 adsorbed goes up. And when that charge is reversed, CO2 adsorption declines. For experiments under more realistic conditions, the researchers also fabricated full capture units — open-ended modules in which a few cells were lined up, one beside the other, with gaps between them where CO2-containing gases could travel, passing the quinone surfaces of adjacent cells. In both experimental systems, the researchers ran tests using inlet streams with CO2 concentrations ranging from 10 percent down to 0.6 percent. The former is typical of power plant exhaust, the latter closer to concentrations in ambient indoor air. Regardless of the concentration, the efficiency of capture was essentially constant at about 90 percent. 
(An efficiency of 100 percent would mean that one molecule of CO2 had been captured for every electron transferred — an outcome that Hatton calls “highly unlikely” because other parasitic processes could be going on simultaneously.) The system used about 1 gigajoule of energy per ton of CO2 captured. Other methods consume between 1 and 10 gigajoules per ton, depending on the CO2 concentration of the incoming gases. Finally, the system was exceptionally durable. Over more than 7,000 charge-discharge cycles, its CO2 capture capacity dropped by only 30 percent — a loss of capacity that can readily be overcome with further refinements in the electrode preparation, say the researchers.  The remarkable performance of their system stems from what Voskian calls the “binary nature of the affinity of quinone to CO2.” The quinone has either a high affinity or no affinity at all. “The result of that binary affinity is that our system should be equally effective at treating fossil fuel combustion flue gases and confined or ambient air,” he says.  Practical applications The experimental results confirm that the electro-swing device should be applicable in many situations. The device is compact and flexible; it operates at room temperature and normal air pressure; and it requires no large-scale, expensive ancillary equipment — only the direct current power source. Its simple design should enable “plug-and-play” installation in many processes, say the researchers. It could, for example, be retrofitted in sealed buildings to remove CO2. In most sealed buildings, ventilation systems bring in fresh outdoor air to dilute the CO2 concentration indoors. “But making frequent air exchanges with the outside requires a lot of energy to condition the incoming air,” says Hatton. “Removing the CO2 indoors would reduce the number of exchanges needed.” The result could be large energy savings. 
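The durability and energy figures quoted above can be restated in per-cycle and per-kilowatt-hour terms with some back-of-envelope arithmetic:

```python
# Back-of-envelope arithmetic on the figures quoted above.

# Capacity fell 30% over 7,000 charge-discharge cycles, i.e. 70% retained;
# the equivalent average per-cycle retention is the 7,000th root of 0.70.
per_cycle_retention = 0.70 ** (1 / 7000)
per_cycle_loss_pct = (1 - per_cycle_retention) * 100
print(f"average capacity loss per cycle: {per_cycle_loss_pct:.4f}%")  # ~0.005%

# Energy use: ~1 gigajoule per ton of CO2, vs 1-10 GJ/ton for other methods.
gj_per_ton = 1.0
kwh_per_ton = gj_per_ton * 1e9 / 3.6e6  # 1 kWh = 3.6 MJ
print(f"~{kwh_per_ton:.0f} kWh per ton of CO2 captured")  # ~278 kWh
```

In other words, the average fade is on the order of five hundredths of a percent per thousand cycles, and the energy cost is a few hundred kilowatt-hours per ton.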
Similarly, the system could be used in confined spaces where air exchange is impossible — for example, in submarines, spacecraft, and aircraft — to ensure that occupants aren’t breathing too much CO2. The electro-swing system could also be teamed up with renewable sources, such as solar and wind farms, and even rooftop solar panels. Such sources sometimes generate more electricity than is needed on the power grid. Instead of shutting them off, the excess electricity could be used to run a CO2 capture plant. The researchers have also developed a concept for using their system at power plants and other facilities that generate a continuous flow of exhaust containing CO2. At such sites, pairs of units would work in parallel. “One is emptying the pure CO2 that it captured, while the other is capturing more CO2,” explains Voskian. “And then you swap them.” A system of valves would switch the airflow to the freshly emptied unit, while a purge gas would flow through the full unit, carrying the CO2 out into a separate chamber. The captured CO2 could be chemically processed into fuels or simply compressed and sent underground for long-term disposal. If the purge gas were also CO2, the result would be a steady stream of pure CO2 that soft-drink makers could use for carbonating drinks and farmers could use for feeding plants in greenhouses. Indeed, rather than burning fossil fuels to get CO2, such users could employ an electro-swing unit to generate their own CO2 while simultaneously removing CO2 from the air.  Costs and scale-up The researchers haven’t yet published a full technoeconomic analysis, but they project capital plus operating costs at $50 to $100 per ton of CO2 captured. That range is in line with costs using other, less-flexible carbon capture systems. 
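The paired-unit scheme can be sketched as a simple alternating schedule: one unit captures while the other purges, and a valve swap exchanges their roles each half-cycle. The sketch below uses illustrative capacities, not real plant parameters:

```python
# Sketch of the paired-unit scheme described above: one unit captures while
# the other purges, and a valve swap exchanges their roles each half-cycle.
# Unit capacities are illustrative assumptions.

def run_paired_units(half_cycles: int, capacity: int = 100) -> int:
    """Alternate two capture units so exhaust treatment never pauses."""
    units = [{"held": 0}, {"held": 0}]
    capturing = 0  # index of the unit currently capturing
    total_pure_co2 = 0
    for _ in range(half_cycles):
        units[capturing]["held"] = capacity       # capturing unit fills with CO2
        purging = 1 - capturing
        total_pure_co2 += units[purging]["held"]  # purge gas sweeps CO2 out
        units[purging]["held"] = 0
        capturing = purging                       # valves swap the roles
    return total_pure_co2

# Over 10 half-cycles, 9 purge steps recover CO2
# (the very first purge finds a still-empty unit).
print(run_paired_units(10))  # 900
```

The point of the pairing is visible in the loop: at every step one unit is treating the exhaust stream, so capture is continuous even though each individual unit spends half its time regenerating.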
Methods for fabricating the electro-swing cells are also manufacturing-friendly: The electrodes can be made using standard chemical processing methods and assembled using a roll-to-roll process similar to a printing press.  And the system can be scaled up as needed. According to Voskian, it should scale linearly: “If you need 10 times more capture capacity, you just manufacture 10 times more electrodes.” Together, he and Hatton, along with Brian M. Baynes PhD ’04, have formed a company called Verdox, and they’re planning to demonstrate that ease of scale-up by developing a pilot plant within the next few years. This research was supported by an MIT Energy Initiative (MITEI) Seed Fund grant and by Eni S.p.A. through MITEI. Sahag Voskian was an Eni-MIT Energy Fellow in 2016-17 and 2017-18. This article appears in the Spring 2020 issue of Energy Futures, the magazine of the MIT Energy Initiative.  Sahag Voskian SM ’15, PhD ’19 (left) and Professor T. Alan Hatton have developed an electrochemical cell that can capture and release carbon dioxide with just a small change in voltage. Photo: Stuart Darsch https://news.mit.edu/2020/new-approach-to-carbon-capture-0709 Researchers design an effective treatment for both exhaust and ambient air. Thu, 09 Jul 2020 15:25:01 -0400 https://news.mit.edu/2020/new-approach-to-carbon-capture-0709 Nancy W. Stauffer | MIT Energy Initiative An essential component of any climate change mitigation plan is cutting carbon dioxide (CO2) emissions from human activities. Some power plants now have CO2 capture equipment that grabs CO2 out of their exhaust. But those systems are each the size of a chemical plant, cost hundreds of millions of dollars, require a lot of energy to run, and work only on exhaust streams that contain high concentrations of CO2. In short, they’re not a solution for airplanes, home heating systems, or automobiles. 
To make matters worse, capturing CO2 emissions from all anthropogenic sources may not solve the climate problem. “Even if all those emitters stopped tomorrow morning, we would still have to do something about the amount of CO2 in the air if we’re going to restore preindustrial atmospheric levels at a rate relevant to humanity,” says Sahag Voskian SM ’15, PhD ’19, co-founder and chief technology officer at Verdox, Inc. And developing a technology that can capture the CO2 in the air is a particularly hard problem, in part because the CO2 occurs in such low concentrations. The CO2 capture challenge A key problem with CO2 capture is finding a “sorbent” that will pick up CO2 in a stream of gas and then release it so the sorbent is clean and ready for reuse and the released CO2 stream can be utilized or sent to a sequestration site for long-term storage. Research has mainly focused on sorbent materials present as small particles whose surfaces contain “active sites” that capture CO2 — a process called adsorption. When the system temperature is lowered (or pressure increased), CO2 adheres to the particle surfaces. When the temperature is raised (or pressure reduced), the CO2 is released. But achieving those temperature or pressure “swings” takes considerable energy, in part because it requires treating the whole mixture, not just the CO2-bearing sorbent. In 2015, Voskian, then a PhD candidate in chemical engineering, and T. Alan Hatton, the Ralph Landau Professor of Chemical Engineering and co-director of the MIT Energy Initiative’s Low-Carbon Energy Center for Carbon Capture, Utilization, and Storage, began to take a closer look at the temperature- and pressure-swing approach. “We wondered if we could get by with using only a renewable resource — like renewably sourced electricity — rather than heat or pressure,” says Hatton. 
Using electricity to elicit the chemical reactions needed for CO2 capture and conversion had been studied for several decades, but Hatton and Voskian had a new idea about how to engineer a more efficient adsorption device. Their work focuses on a special class of molecules called quinones. When quinone molecules are forced to take on extra electrons — which means they’re negatively charged — they have a high chemical affinity for CO2 molecules and snag any that pass. When the extra electrons are removed from the quinone molecules, the quinone’s chemical affinity for CO2 instantly disappears, and the molecules release the captured CO2.  Others have investigated the use of quinones and an electrolyte in a variety of electrochemical devices. In most cases, the devices involve two electrodes — a negative one where the dissolved quinone is activated for CO2 capture, and a positive one where it’s deactivated for CO2 release. But moving the solution from one electrode to the other requires complex flow and pumping systems that are large and take up considerable space, limiting where the devices can be used.  As an alternative, Hatton and Voskian decided to use the quinone as a solid electrode and — by applying what Hatton calls “a small change in voltage” — vary the electrical charge of the electrode itself to activate and deactivate the quinone. In such a setup, there would be no need to pump fluids around or to raise and lower the temperature or pressure, and the CO2 would end up as an easy-to-separate attachment on the solid quinone electrode. They deemed their concept “electro-swing adsorption.” The electro-swing cell To put their concept into practice, the researchers designed the electrochemical cell shown in the two diagrams in Figure 1 in the slideshow above. To maximize exposure, they put two quinone electrodes on the outside of the cell, thereby doubling its geometric capacity for CO2 capture. 
To switch the quinone on and off, they needed a component that would supply electrons and then take them back. For that job, they used a single ferrocene electrode, sandwiched between the two quinone electrodes but isolated from them by electrolyte membrane separators to prevent short circuits. They connected both quinone electrodes to the ferrocene electrode using the circuit of wires at the top, with a power source along the way. The power source creates a voltage that causes electrons to flow from the ferrocene to the quinone through the wires. The quinone is now negatively charged. When CO2-containing air or exhaust is blown past these electrodes, the quinone will capture the CO2 molecules until all the active sites on its surface are filled up. During the discharge cycle, the direction of the voltage on the cell is reversed, and electrons flow from the quinone back to the ferrocene. The quinone is no longer negatively charged, so it has no chemical affinity for CO2. The CO2 molecules are released and swept out of the system by a stream of purge gas for subsequent use or disposal. The quinone is now regenerated and ready to capture more CO2. Two additional components are key to successful operation. First is an electrolyte, in this case a liquid salt, that moistens the cell with positive and negative ions (electrically charged particles). Since electrons only flow through the external wires, those charged ions must travel within the cell from one electrode to the other to close the circuit for continued operation. The second special ingredient is carbon nanotubes. In the electrodes, the quinone and ferrocene are both present as coatings on the surfaces of carbon nanotubes. Nanotubes are both strong and highly conductive, so they provide good support and serve as an efficient conduit for electrons traveling into and out of the quinone and ferrocene. 
To fabricate a cell, researchers first synthesize a quinone- or ferrocene-based polymer, specifically, polyanthraquinone or polyvinylferrocene. They then make an “ink” by combining the polymer with carbon nanotubes in a solvent. The polymer immediately wraps around the nanotubes, connecting with them on a fundamental level. To make the electrode, they use a non-woven carbon fiber mat as a substrate. They dip the mat into the ink, allow it to dry slowly, and then dip it again, repeating the procedure until they’ve built up a uniform coating of the composite on the substrate. The result of the process is a porous mesh that provides a large surface area of active sites and easy pathways for CO2 molecules to move in and out. Once the researchers have prepared the quinone and ferrocene electrodes, they assemble the electrochemical cell by laminating the pieces together in the correct order — the quinone electrode, the electrolyte separator, the ferrocene electrode, another separator, and the second quinone electrode. Finally, they moisten the assembled cell with their liquid salt electrolyte. Experimental results To test the behavior of their system, the researchers placed a single electrochemical cell inside a custom-made, sealed box and wired it for electricity input. They then cycled the voltage and measured the key responses and capabilities of the device. The simultaneous trends in charge density put into the cell and CO2 adsorption per mole of quinone showed that when the quinone electrode is negatively charged, the amount of CO2 adsorbed goes up. And when that charge is reversed, CO2 adsorption declines. For experiments under more realistic conditions, the researchers also fabricated full capture units — open-ended modules in which a few cells were lined up, one beside the other, with gaps between them where CO2-containing gases could travel, passing the quinone surfaces of adjacent cells. 
In both experimental systems, the researchers ran tests using inlet streams with CO2 concentrations ranging from 10 percent down to 0.6 percent. The former is typical of power plant exhaust, the latter closer to concentrations in ambient indoor air. Regardless of the concentration, the efficiency of capture was essentially constant at about 90 percent. (An efficiency of 100 percent would mean that one molecule of CO2 had been captured for every electron transferred — an outcome that Hatton calls “highly unlikely” because other parasitic processes could be going on simultaneously.) The system used about 1 gigajoule of energy per ton of CO2 captured. Other methods consume between 1 and 10 gigajoules per ton, depending on the CO2 concentration of the incoming gases. Finally, the system was exceptionally durable. Over more than 7,000 charge-discharge cycles, its CO2 capture capacity dropped by only 30 percent — a loss of capacity that can readily be overcome with further refinements in the electrode preparation, say the researchers.  The remarkable performance of their system stems from what Voskian calls the “binary nature of the affinity of quinone to CO2.” The quinone has either a high affinity or no affinity at all. “The result of that binary affinity is that our system should be equally effective at treating fossil fuel combustion flue gases and confined or ambient air,” he says.  Practical applications The experimental results confirm that the electro-swing device should be applicable in many situations. The device is compact and flexible; it operates at room temperature and normal air pressure; and it requires no large-scale, expensive ancillary equipment — only the direct current power source. Its simple design should enable “plug-and-play” installation in many processes, say the researchers. It could, for example, be retrofitted in sealed buildings to remove CO2. 
In most sealed buildings, ventilation systems bring in fresh outdoor air to dilute the CO2 concentration indoors. “But making frequent air exchanges with the outside requires a lot of energy to condition the incoming air,” says Hatton. “Removing the CO2 indoors would reduce the number of exchanges needed.” The result could be large energy savings. Similarly, the system could be used in confined spaces where air exchange is impossible — for example, in submarines, spacecraft, and aircraft — to ensure that occupants aren’t breathing too much CO2. The electro-swing system could also be teamed up with renewable sources, such as solar and wind farms, and even rooftop solar panels. Such sources sometimes generate more electricity than is needed on the power grid. Instead of shutting them off, the excess electricity could be used to run a CO2 capture plant. The researchers have also developed a concept for using their system at power plants and other facilities that generate a continuous flow of exhaust containing CO2. At such sites, pairs of units would work in parallel. “One is emptying the pure CO2 that it captured, while the other is capturing more CO2,” explains Voskian. “And then you swap them.” A system of valves would switch the airflow to the freshly emptied unit, while a purge gas would flow through the full unit, carrying the CO2 out into a separate chamber. The captured CO2 could be chemically processed into fuels or simply compressed and sent underground for long-term disposal. If the purge gas were also CO2, the result would be a steady stream of pure CO2 that soft-drink makers could use for carbonating drinks and farmers could use for feeding plants in greenhouses. Indeed, rather than burning fossil fuels to get CO2, such users could employ an electro-swing unit to generate their own CO2 while simultaneously removing CO2 from the air.  
Costs and scale-up
The researchers haven’t yet published a full technoeconomic analysis, but they project capital plus operating costs at $50 to $100 per ton of CO2 captured. That range is in line with costs using other, less-flexible carbon capture systems. Methods for fabricating the electro-swing cells are also manufacturing-friendly: The electrodes can be made using standard chemical processing methods and assembled using a roll-to-roll process similar to a printing press. And the system can be scaled up as needed. According to Voskian, it should scale linearly: “If you need 10 times more capture capacity, you just manufacture 10 times more electrodes.” Together, he and Hatton, along with Brian M. Baynes PhD ’04, have formed a company called Verdox, and they’re planning to demonstrate that ease of scale-up by developing a pilot plant within the next few years. This research was supported by an MIT Energy Initiative (MITEI) Seed Fund grant and by Eni S.p.A. through MITEI. Sahag Voskian was an Eni-MIT Energy Fellow in 2016-17 and 2017-18. This article appears in the Spring 2020 issue of Energy Futures, the magazine of the MIT Energy Initiative. Sahag Voskian SM ’15, PhD ’19 (left) and Professor T. Alan Hatton have developed an electrochemical cell that can capture and release carbon dioxide with just a small change in voltage. Photo: Stuart Darsch https://news.mit.edu/2020/innovations-environmental-training-mining-industry-0707 MIT Environmental Solutions Initiative and multinational mining company Vale bring sustainability education to young engineering professionals in Brazil. Tue, 07 Jul 2020 14:15:00 -0400 https://news.mit.edu/2020/innovations-environmental-training-mining-industry-0707 Aaron Krol | Environmental Solutions Initiative For the mining industry, efforts to achieve sustainability are moving from local to global.
In the past, mining companies focused sustainability initiatives more on their social license to operate — treating workers fairly and operating safe and healthy facilities. However, concerns over climate change have put mining operations and supply chains in the global spotlight, leading to various carbon-neutral promises by mining companies in recent months. Heading in this direction is Vale, a global mining company and the world’s largest iron ore and nickel producer. It is a publicly traded company headquartered in Brazil with operations in 30 countries. In the wake of two major tailings dam failures, as well as continued pressure to reduce carbon emissions, Vale has committed to spend $2 billion to cut both its direct and indirect carbon emissions 33 percent by 2030. To meet these ambitions, a broad cultural change is required — and MIT is one of the partners invited by Vale to help with the challenge. Stephen Potter, global strategy director for Vale, knows that local understanding of sustainability is fundamental to reaching its goals. “We need to attract the best and brightest young people to work in the Brazilian mining sector, and young people want to work for companies with a strong sustainability program,” Potter says. To that end, Vale created the Mining Innovation in a New Environment (MINE) program in 2019, in collaboration with the MIT Environmental Solutions Initiative (ESI); the Imperial College London Consultants; The Bakery, a start-up accelerator; and SENAI CIMATEC, a Brazilian technical institute. The program provides classes and sustainability training to young professionals with degrees relevant to mining engineering. Students in the MINE program get hands-on experience working with a real challenge the company is facing, while also expanding their personal leadership and technical skills. 
“Instilling young people with an entrepreneurial and innovative mindset is a core tenet of this program, whether they ultimately work at Vale or elsewhere,” says Potter. ESI’s role in the MINE program is to provide expert perspectives on sustainability that students wouldn’t receive in ordinary engineering training courses. “MIT offers a unique blend of scientific and engineering expertise, as well as entrepreneurial spirit, that can inspire young professionals in the Brazilian mining sector to work toward sustainable practices,” says ESI Director John Fernández. Drawing on a deep, multidisciplinary portfolio of MIT research on the extraction and processing of metals and minerals, MIT can support the deployment of innovative technologies and environmentally and socially conscious business strategies throughout a global supply chain. Since December 2019, the inaugural class of 30 MINE students has had a whirlwind of experiences. To kick off the program, MIT offered six weeks of online training, building up to an immersive training session in January 2020. Hosted by SENAI CIMATEC at their academic campus in Salvador, Brazil, the event featured in-person sessions with five MIT faculty: professors Jessika Trancik, Roberto Rigobon, Andrew Whittle, Rafi Segal, and Principal Research Scientist Randolph Kirchain. The two-week event was coordinated by Suzanne Greene, who leads the MINE program for ESI as part of her role with the MIT Sustainable Supply Chains program. “What I loved about this program,” Greene says, “was the breadth of topics MIT’s lecturers were able to offer students. Students could take a deep dive on clean energy technology one day and tailings dams the next.” The courses were designed to give the students a common grounding in sustainability concepts and management tools to prepare them for the next phase of the program, a hands-on research project within Vale.
Immersion projects in this next phase align with Vale’s core sustainability strategies around worker and infrastructure safety and the low-carbon energy transition. “This project is a great opportunity for Vale to reconfigure their supply chain and also improve the social and environmental performance,” says Marina Mattos, a postdoc working with ESI in the Metals, Minerals, and the Environment program. “As a Brazilian, I’m thrilled to be part of the MIT team helping to develop next-generation engineers with the values, attitudes, and skills necessary to understand and address challenges of the mining industry.” “We expect this program will lead to interest from other extractive companies, not only for education, but for research as well,” adds Greene. “This is just the beginning.” MINE Program students and other program participants at a hackathon in Salvador, Brazil, are pictured here before the Covid-19 pandemic interrupted such gatherings. https://news.mit.edu/2020/d-lab-moves-online-without-compromising-impact-0701 With the campus shut down by Covid-19, the spring D-Lab class Water, Climate Change, and Health had to adapt. Wed, 01 Jul 2020 12:05:01 -0400 https://news.mit.edu/2020/d-lab-moves-online-without-compromising-impact-0701 Jessie Hendricks | Environmental Solutions Initiative It’s not a typical sentence you’d find on a class schedule, but on April 2, the first action item for one MIT course read: “Check in on each other’s health and well-being.” The revised schedule was for Susan Murcott and Julie Simpson’s spring D-Lab class EC.719 / EC.789 (Water, Climate Change, and Health), just one of hundreds of classes at MIT that had to change course after the novel coronavirus sparked a campus-wide shutdown.
D-Lab at home
The dust had only begun to settle two weeks later, after a week of canceled classes followed by the established spring break, when students and professors reconvened in their new virtual classrooms.
In Murcott and Simpson’s three-hour, once-a-week D-Lab class, the 20 students had completed only half of the subject’s 12 classes before the campus shut down. Those who could attend the six remaining classes would do so remotely for the first time in the five-year history of the class. Typically, students would have gathered at D-Lab, an international design and development center next to the MIT Museum on Massachusetts Avenue in Cambridge, Massachusetts. Within the center, D-Lab provides project-based and hands-on learning for undergraduate and graduate students in collaboration with international non-governmental organizations, governments, and industry. Many of the projects involve design solutions in low-income countries around the world. Murcott, an MIT lecturer who has worked with low-income populations for over 30 years in 25 countries, including Nepal and Ghana, was a natural fit to teach the class. Murcott’s background is in civil and environmental engineering, wastewater management, and climate. Her co-teacher, Research Engineer Julie Simpson of the Sea Grant College Program, has a PhD in coastal and marine ecology and a strong climate background. “It’s typical to find courses in climate change and energy, climate change and policy, or maybe climate change and human behavior,” Murcott says. But when she first began planning her D-Lab subject, there were no classes one could find anywhere in the world that married climate change and water.  Murcott and Simpson refer to the class as transdisciplinary. “[Transdisciplinary] is about having as broad a sample of humanity as you can teaching and learning together on the topics that you care about,” Murcott says. But transdisciplinary also means attracting a wide range of students from various walks of life, studying a variety of subjects. 
This spring, Murcott and Simpson’s class had undergraduates, graduate students, and young professionals from MIT, Wellesley College, and Harvard University, studying architecture, chemistry, mechanical engineering, biochemistry, microbiology, computer science, math, food and agriculture, law, and public health, plus a Knight Science Journalism at MIT Fellow. After campus closed, these students scattered to locations across the country and the world, including France, Hong Kong, Rwanda, and South Korea. Student Sun Kim sent a five-page document with pictures to the class after returning to her home in South Korea, detailing her arrival in a Covid-19 world. Kim was tested in the airport after landing, given free room and board in a nearby hotel until she received her result (a “negative” result came back within eight hours), and quarantined in her parents’ house for two weeks, just in case she had picked up the virus during her travels. “I have been enjoying my Zoom classes during the wee hours of the night and sleeping during the day — ignoring the sunlight and pretending I am still in the U.S.,” Kim wrote.
Future generation climate action plans
Usually, the class has three or four field trips over the course of the semester, to places like the Blue Hill Meteorological Observatory, home of the longest climate record in the United States, and the Charles River Dam Infrastructure, which helps control flooding along Memorial Drive. With these physical trips closed off during the pandemic, Murcott and Simpson had to find new virtual spaces in which to convene. Four student teams took part in a climate change simulation using a program developed by Climate Interactive called En-ROADS, in which they were challenged to create scenarios that aimed for the limit of 1.5 degrees Celsius of global average temperature rise above pre-industrial levels set out in the 2015 Paris Agreement.
Each team developed unique scenarios and managed to reach that target by adjusting energy options, agricultural and land-use practices, economic levers, and policy options. The teams then used their En-ROADS scenario planning findings to evaluate the climate action plans of Cambridge, Boston, and Massachusetts, with virtual visits from experts on the plans. They also evaluated MIT’s climate plan, which was written in 2015 and which will be updated by the end of this year. Students found that MIT has one of the least-ambitious targets for reducing its greenhouse gas emissions compared to other institutions that the D-Lab class reviewed. Teams of students were then challenged to improve upon what MIT had done to date by coming up with their own future generation climate action plans. “I wanted them to find their voice,” says Murcott. Murcott, co-chair of MIT’s Water Sustainability Working Group, an official committee designated to come up with a water plan for MIT, and Simpson are now working with a subset of eight students from the class over the summer, together with the MIT Environmental Solutions Initiative, the MIT Office of Sustainability, and the Office of the Vice President for Research, to collaborate on a new water and climate action plan.
Final projects
The spring 2020 D-Lab final presentations were as diverse as the students’ fields of study. Over two Zoom sessions, teams and individual students presented a total of eight final projects. The first project aimed to lower the number of Covid-19 transmissions among Cambridge residents and update access to food programs in light of the pandemic. At the time of the presentation, Massachusetts had the third-highest reported number of cases of the new coronavirus.
Students reviewed what was already being done in Cambridge and expanded on that with recommendations such as an assistive phone line for sick residents, an N95 mask exchange program, increased transportation for medical care, and lodging options for positive cases to prevent household transmission. Another team working on the Covid-19 project presented their recommendations to update the city’s food policy. They suggested programs to increase awareness of the Supplemental Nutrition Assistance Program (SNAP) and the Women, Infants, and Children program (WIC) through municipal mailings, help vendors at farmers markets enroll in SNAP/EBT so that users could purchase local produce and goods, and promote local community gardens to help with future food security. Another project proposed an extensive rainwater harvesting project for the Memorial Drive dormitories, which also have a high photovoltaic potential, in which the nearby MIT recreational fields would benefit from self-sufficient rainwater irrigation driven by a solar-powered pump. Another student developed a machine learning method to count and detect river herring that migrate into Boston each year by training a computer program to identify the fish using existing cameras installed by fish ladders. Student Lowry Yankwich wrote a long-form science journalism piece about the effect of climate change on local fisheries, and a team of three students created a six-unit climate change course called “Surviving and Thriving in the 21st Century” for upper-high-school to first-year college students. Two global water projects were presented. In the first, student Ade Dapo-Famodu’s study compared a newly manufactured water test, the ECC Vial, to other leading global products that measure two major indicators of contaminated water: E. coli and coliforms. The second global water project was the Butaro Water Project team with Carene Umubyeyi and Naomi Lutz.
Their project is a collaboration between faculty and students at MIT, Tufts University, the University of Rwanda, and the University of Global Health Equity in Butaro, a small district in the northern part of Rwanda, where a number of villages lack access to safe drinking water.
The end is just the beginning
For many, the D-Lab projects aren’t just a semester-long endeavor. It’s typical for some D-Lab term projects to turn into either a January Independent Activities Period project or a summer research or field project. Of the 20 students in the class, 10 are continuing to work on their term projects over the summer. Umubyeyi is Rwandan. Having returned home after the MIT shutdown, she will be coordinating the team’s design and construction of the village water system over the summer, with technical support from her teammate, Lutz, remotely from Illinois. The Future Generations Climate Action Planning process resulted in five students eager to take the D-Lab class work forward. They will be working with Jim Gomes, senior advisor in the Office of the Vice President, who is responsible for coordinating MIT’s 2020 Climate Action Plan, together with one other student intern, Grace Moore. The six-unit online course for teens, Surviving and Thriving in the 21st Century, is being taught by Clara Gervaise-Volaire and Gabby Cazares and will be live through July 3. Policy work on Covid-19 will continue with contacts in the Cambridge City Council. Finally, Yankwich will be sending out his full-length article for publication and starting his next piece. “Students have done so well in the face of the MIT shutdown and coronavirus pandemic challenge,” says Murcott. “Scattered around the country and around the world, they have come together through this online D-Lab class to embrace MIT’s mission of ‘creating a better world.’ In the process, they have deepened themselves and are actively serving others.
What could be better in these hard times?” Lecturer Susan Murcott met many members of her EC.719 / EC.789 (Water, Climate Change, and Health) D-Lab class for the first time at the Boston climate strike on Sept. 20, 2019. Photo: Susan Murcott https://news.mit.edu/2020/ideastream-showcases-breakthrough-technologies-across-mit-0624 From machine learning to devices to Covid-19 testing, Deshpande Center projects aim to make a positive impact on the world. Wed, 24 Jun 2020 14:40:01 -0400 https://news.mit.edu/2020/ideastream-showcases-breakthrough-technologies-across-mit-0624 Deshpande Center for Technological Innovation MIT’s Deshpande Center for Technological Innovation hosted IdeaStream, an annual showcase of technologies being developed across MIT, online for the first time in the event’s 18-year history. Last month, more than 500 people worldwide tuned in each day to view the breakthrough research and to chat with the researchers. Speakers from 19 MIT teams that received Deshpande grants presented their work, from learned control of manufacturing processes by Professor Brian Anthony, to what Hyunwoo Yuk of the Xuanhe Zhao Lab colloquially calls “surgical duct tape,” to artificial axons as a myelination assay for drug screening in neurological diseases by Anna Jagielska, a postdoc in the Krystyn Van Vliet Laboratory for Material Chemomechanics. “Innovation at MIT never stops,” said Deshpande Center Faculty Director Timothy Swager in a welcome address. He underscored how essential it was to keep innovation going, saying the innovation at the heart of the Deshpande Center’s current and future spinout companies will become part of essential businesses to aid in the pandemic. “These will be key for us … to emerge as a stronger ecosystem, both locally and globally, with different types of innovation,” he said. 
A virtual format includes far-flung attendees
IdeaStream is known not only as an exhibition of MIT projects, but as a bridge between academic research and the business community where researchers, faculty, investors, and industry leaders meet and build connections. This year physical distancing prevented in-person meetings, but the conference’s virtual format facilitated thoughtful discussion and introductions and extended IdeaStream’s reach. Attendees from Greater Boston and as far as Ireland, India, Cyprus, Australia, and Brazil engaged with IdeaStream speakers in Zoom breakout sessions following the presentations. Jérôme Michon, who presented the Juejun Hu Research Group’s work on on-chip Raman spectroscopic sensors for chemical and biological sensing, conducted his breakout session from France, where he returned when MIT closed its campus.
Technology for a better world
Many of the presenters said their projects would have a positive impact on the environment or society. Svetlana Boriskina of the Department of Mechanical Engineering said clothing production is one of the world’s biggest polluters, requiring large amounts of energy and water and emitting greenhouse gases. Her SmartPE fabrics use far less water in production and are made from sustainable materials. The polyethylene fabrics are also antimicrobial, stain-resistant, and wick away body moisture. Postdoc Francesco Benedetti likewise pointed to the massive energy used for gas separation and purification in the chemical industry, and pitched his team’s project as a cleaner alternative. In a collaboration of the Zachary Smith Lab and the Swager Group, they are creating membranes from polymers with a flexible backbone connected to tunable, rigid, pore-generating side chains. They require no heat or toxic solvents for separation, and could replace distillation, saving significant amounts of energy. Another team has adapted its project to aid in the coronavirus response.
Postdoc Eric Miller said that prior to the pandemic, the Hadley D. Sikes Lab had developed immunoassays using engineered binding proteins that successfully identified markers for malaria, tuberculosis, and dengue. Now they have applied that technology to develop a rapid Covid-19 diagnostic test. The paper-based tests would be easily administered by anyone, with results expected within 10 minutes. Some of the projects addressed food and water challenges and were sponsored by MIT’s Abdul Latif Jameel Water and Food Systems Lab (J-WAFS). Maxwell Robinson, a postdoc in the Karen Gleason Lab, presented an early detection system for Huanglongbing, or citrus greening disease. The incurable disease has cost Florida’s citrus industry $3 billion, and now threatens California’s $3.3 billion industry. The system would identify affected trees using sensors attuned to volatile organic compounds emitted by citrus trees. These compounds change in concentration during early-stage infection, when trees show no visible symptoms. In another J-WAFS project, Professor Kripa Varanasi of the Department of Mechanical Engineering built on a statistic — only about 2 percent of applied pesticides adhere to their targets — and demonstrated how the drops he formulated help pesticides better adhere to leaves. The result is a drastically lower volume of agricultural spray needed for application, and an overall reduction in chemical runoff.
Taking a chance on early-stage research
Innovation does not come without risk, said Deshpande Center Executive Director Leon Sandler following the event. Much university research is not ready to spin out into companies, and investors won’t put money into it because it’s still too risky. “By supporting early-stage research with funding, connecting researchers with deep-domain experts who can help shape it, maturing it to a point where it starts to be an attractive investment, you give it a chance to spin out,” he said.
“You give it a chance to commercialize and make an impact on the world.” All the presentations may be viewed on the Deshpande Center’s website. IdeaStream 2020 featured presentations from 19 research projects across MIT, as well as Q&A sessions with speakers via Zoom. Image: Shirley Goh/Deshpande Center for Technological Innovation https://news.mit.edu/2020/researchers-find-solar-photovoltaics-benefits-outweigh-costs-0623 Over a seven-year period, decline in PV costs outpaced decline in value; by 2017, market, health, and climate benefits outweighed the cost of PV systems. Tue, 23 Jun 2020 14:15:01 -0400 https://news.mit.edu/2020/researchers-find-solar-photovoltaics-benefits-outweigh-costs-0623 Nancy Stauffer | MIT Energy Initiative Over the past decade, the cost of solar photovoltaic (PV) arrays has fallen rapidly. But at the same time, the value of PV power has declined in areas that have installed significant PV generating capacity. Operators of utility-scale PV systems have seen electricity prices drop as more PV generators come online. Over the same time period, many coal-fired power plants were required to install emissions-control systems, resulting in declines in air pollution nationally and regionally. The result has been improved public health — but also a decrease in the potential health benefits from offsetting coal generation with PV generation. Given those competing trends, do the benefits of PV generation outweigh the costs? Answering that question requires balancing the up-front capital costs against the lifetime benefits of a PV system. Determining the former is fairly straightforward. But assessing the latter is challenging because the benefits differ across time and place. “The differences aren’t just due to variation in the amount of sunlight a given location receives throughout the year,” says Patrick R. Brown PhD ’16, a postdoc at the MIT Energy Initiative.
“They’re also due to variability in electricity prices and pollutant emissions.” The drop in the price paid for utility-scale PV power stems in part from how electricity is bought and sold on wholesale electricity markets. On the “day-ahead” market, generators and customers submit bids specifying how much they’ll sell or buy at various price levels at a given hour on the following day. The lowest-cost generators are chosen first. Since the variable operating cost of PV systems is near zero, they’re almost always chosen, taking the place of the most expensive generator then in the lineup. The price paid to every selected generator is set by the highest-cost operator on the system, so as more PV power comes on, more high-cost generators come off, and the price drops for everyone. As a result, in the middle of the day, when solar is generating the most, prices paid to electricity generators are at their lowest. Brown notes that some generators may even bid negative prices. “They’re effectively paying consumers to take their power to ensure that they are dispatched,” he explains. For example, inflexible coal and nuclear plants may bid negative prices to avoid frequent shutdown and startup events that would result in extra fuel and maintenance costs. Renewable generators may also bid negative prices to obtain larger subsidies that are rewarded based on production.  Health benefits also differ over time and place. The health effects of deploying PV power are greater in a heavily populated area that relies on coal power than in a less-populated region that has access to plenty of clean hydropower or wind. And the local health benefits of PV power can be higher when there’s congestion on transmission lines that leaves a region stuck with whatever high-polluting sources are available nearby. The social costs of air pollution are largely “externalized,” that is, they are mostly unaccounted for in electricity markets. 
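The merit-order mechanism described above, in which the cheapest offers are accepted first and every accepted generator is paid the price of the marginal offer, can be sketched in a few lines. The bid prices and quantities below are invented for illustration only; the point is that adding zero-marginal-cost PV supply displaces the most expensive accepted generator and pulls down the uniform clearing price.

```python
def clear_market(offers, demand):
    """Uniform-price auction sketch: offers are (price, quantity) pairs.
    Accept the cheapest offers until demand is met; every accepted offer
    is paid the price of the last (marginal) offer accepted."""
    accepted, supplied = [], 0.0
    for price, quantity in sorted(offers):  # cheapest first; negative bids allowed
        if supplied >= demand:
            break
        take = min(quantity, demand - supplied)
        accepted.append((price, take))
        supplied += take
    clearing_price = accepted[-1][0] if accepted else None
    return clearing_price, accepted

# Hypothetical 80 MW demand hour: one negative bidder, a cheap plant,
# a mid-cost plant, and an expensive peaker.
offers = [(-5.0, 20.0), (0.0, 30.0), (40.0, 50.0), (90.0, 100.0)]
price_without_pv, _ = clear_market(offers, 80.0)               # $40 plant is marginal
price_with_pv, _ = clear_market(offers + [(0.0, 40.0)], 80.0)  # PV pushes it out
```

In this toy hour, 40 MW of zero-cost PV drops the clearing price from $40 to $0 per MWh, the midday price-suppression effect Brown describes.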
But they can be quantified using statistical methods, so health benefits resulting from reduced emissions can be incorporated when assessing the cost-competitiveness of PV generation. The contribution of fossil-fueled generators to climate change is another externality not accounted for by most electricity markets. Some U.S. markets, particularly in California and the Northeast, have implemented cap-and-trade programs, but the carbon dioxide (CO2) prices in those markets are much lower than estimates of the social cost of CO2, and other markets don’t price carbon at all. A full accounting of the benefits of PV power thus requires determining the CO2 emissions displaced by PV generation and then multiplying that value by a uniform carbon price representing the damage that those emissions would have caused.
Calculating PV costs and benefits
To examine the changing value of solar power, Brown and his colleague Francis M. O’Sullivan, the senior vice president of strategy at Ørsted Onshore North America and a senior lecturer at the MIT Sloan School of Management, developed a methodology to assess the costs and benefits of PV power across the U.S. power grid annually from 2010 to 2017. The researchers focused on six “independent system operators” (ISOs) in California, Texas, the Midwest, the Mid-Atlantic, New York, and New England. Each ISO sets electricity prices at hundreds of “pricing nodes” along the transmission network in their region. The researchers performed analyses at more than 10,000 of those pricing nodes. For each node, they simulated the operation of a utility-scale PV array that tilts to follow the sun throughout the day. They calculated how much electricity it would generate and the benefits that each kilowatt would provide, factoring in energy and “capacity” revenues as well as avoided health and climate change costs associated with the displacement of fossil fuel emissions.
(Capacity revenues are paid to generators for being available to deliver electricity at times of peak demand.) They focused on emissions of CO2, which contributes to climate change, and of nitrogen oxides (NOx), sulfur dioxide (SO2), and particulate matter called PM2.5 — fine particles that can cause serious health problems and can be emitted or formed in the atmosphere from NOx and SO2. The results of the analysis showed that the wholesale energy value of PV generation varied significantly from place to place, even within the region of a given ISO. For example, in New York City and Long Island, where population density is high and adding transmission lines is difficult, the market value of solar was at times 50 percent higher than across the state as a whole.  The public health benefits associated with SO2, NOx, and PM2.5 emissions reductions declined over the study period but were still substantial in 2017. Monetizing the health benefits of PV generation in 2017 would add almost 75 percent to energy revenues in the Midwest and New York and fully 100 percent in the Mid-Atlantic, thanks to the large amount of coal generation in the Midwest and Mid-Atlantic and the high population density on the Eastern Seaboard.  Based on the calculated energy and capacity revenues and health and climate benefits for 2017, the researchers asked: Given that combination of private and public benefits, what upfront PV system cost would be needed to make the PV installation “break even” over its lifetime, assuming that grid conditions in that year persist for the life of the installation? In other words, says Brown, “At what capital cost would an investment in a PV system be paid back in benefits over the lifetime of the array?”  Assuming 2017 values for energy and capacity market revenues alone, an unsubsidized PV investment at 2017 costs doesn’t break even. Add in the health benefit, and PV breaks even at 30 percent of the pricing nodes modeled. 
Assuming a carbon price of $50 per ton, the investment breaks even at about 70 percent of the nodes, and with a carbon price of $100 per ton (which is still less than the price estimated to be needed to limit global temperature rise to under 2 degrees Celsius), PV breaks even at all of the modeled nodes.  That wasn’t the case just two years earlier: At 2015 PV costs, PV would only have broken even in 2017 at about 65 percent of the nodes counting market revenues, health benefits, and a $100 per ton carbon price. “Since 2010, solar has gone from one of the most expensive sources of electricity to one of the cheapest, and it now breaks even across the majority of the U.S. when considering the full slate of values that it provides,” says Brown.  Based on their findings, the researchers conclude that the decline in PV costs over the studied period outpaced the decline in value, such that in 2017 the market, health, and climate benefits outweighed the cost of PV systems at the majority of locations modeled. “So the amount of solar that’s competitive is still increasing year by year,” says Brown.  The findings underscore the importance of considering health and climate benefits as well as market revenues. “If you’re going to add another megawatt of PV power, it’s best to put it where it’ll make the most difference, not only in terms of revenues but also health and CO2,” says Brown.  Unfortunately, today’s policies don’t reward that behavior. Some states do provide renewable energy subsidies for solar investments, but they reward generation equally everywhere. Yet in states such as New York, the public health benefits would have been far higher at some nodes than at others. State-level or regional reward mechanisms could be tailored to reflect such variation in node-to-node benefits of PV generation, providing incentives for installing PV systems where they’ll be most valuable. 
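The break-even question, at what capital cost an investment is exactly repaid in benefits over the array's lifetime, reduces to a standard present-value annuity calculation. The lifetime and discount rate below are illustrative placeholders, not figures from the study.

```python
def breakeven_capital_cost(annual_benefit, lifetime_years, discount_rate):
    """Upfront cost at which the discounted lifetime benefits (market revenues
    plus monetized health and climate benefits) exactly repay the investment,
    assuming a constant annual benefit over the array's life."""
    r = discount_rate
    annuity_factor = (1 - (1 + r) ** -lifetime_years) / r
    return annual_benefit * annuity_factor

# E.g., $100 per kW-year of combined benefits over an assumed 25-year life
# at an assumed 5 percent discount rate supports about $1,409/kW upfront.
cost = breakeven_capital_cost(100.0, 25, 0.05)
```

Raising the annual benefit, for instance by monetizing health effects or pricing carbon as in the study, raises the break-even capital cost proportionally, which is why more nodes break even as more benefits are counted.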
Providing time-varying price signals (including the cost of emissions) not only to utility-scale generators, but also to residential and commercial electricity generators and customers, would similarly guide PV investment to areas where it provides the most benefit.  Time-shifting PV output to maximize revenues  The analysis provides some guidance that might help would-be PV installers maximize their revenues. For example, it identifies certain “hot spots” where PV generation is especially valuable. At some high-electricity-demand nodes along the East Coast, for instance, persistent grid congestion has meant that the projected revenue of a PV generator has been high for more than a decade. The analysis also shows that the sunniest site may not always be the most profitable choice. A PV system in Texas would generate about 20 percent more power than one in the Northeast, yet energy revenues were greater at nodes in the Northeast than in Texas in some of the years analyzed.  To help potential PV owners maximize their future revenues, Brown and O’Sullivan performed a follow-on study focusing on ways to shift the output of PV arrays to align with times of higher prices on the wholesale market. For this analysis, they considered the value of solar on the day-ahead market and also on the “real-time market,” which dispatches generators to correct for discrepancies between supply and demand. They explored three options for shaping the output of PV generators, with a focus on the California real-time market in 2017, when high PV penetration led to a large reduction in midday prices compared to morning and evening prices. Curtailing output when prices are negative: During negative-price hours, a PV operator can simply turn off generation. In California in 2017, curtailment would have increased revenues by 9 percent on the real-time market compared to “must-run” operation. 
Changing the orientation of “fixed-tilt” (stationary) solar panels: The general rule of thumb in the Northern Hemisphere is to orient solar panels toward the south, maximizing production over the year. But peak production then occurs at about noon, when electricity prices in markets with high solar penetration are at their lowest. Pointing panels toward the west moves generation further into the afternoon. On the California real-time market in 2017, optimizing the orientation would have increased revenues by 13 percent, or 20 percent in conjunction with curtailment. Using 1-axis tracking: For larger utility-scale installations, solar panels are frequently installed on automatic solar trackers, rotating throughout the day from east in the morning to west in the evening. Using such 1-axis tracking on the California system in 2017 would have increased revenues by 32 percent over a fixed-tilt installation, and using tracking plus curtailment would have increased revenues by 42 percent. The researchers were surprised to see how much the optimal orientation changed in California over the period of their study. “In 2010, the best orientation for a fixed array was about 10 degrees west of south,” says Brown. “In 2017, it’s about 55 degrees west of south.” That adjustment is due to changes in market prices that accompany significant growth in PV generation — changes that will occur in other regions as they start to ramp up their solar generation. The researchers stress that conditions are constantly changing on power grids and electricity markets. With that in mind, they made their database and computer code openly available so that others can readily use them to calculate updated estimates of the net benefits of PV power and other distributed energy resources. They also emphasize the importance of getting time-varying prices to all market participants and of adapting installation and dispatch strategies to changing power system conditions. 
A law set to take effect in California in 2020 will require all new homes to have solar panels. Installing the usual south-facing panels with uncurtailable output could further saturate the electricity market at times when other PV installations are already generating. “If new rooftop arrays instead use west-facing panels that can be switched off during negative price times, it’s better for the whole system,” says Brown. “Rather than just adding more solar at times when the price is already low and the electricity mix is already clean, the new PV installations would displace expensive and dirty gas generators in the evening. Enabling that outcome is a win all around.” Patrick Brown and this research were supported by a U.S. Department of Energy Office of Energy Efficiency and Renewable Energy (EERE) Postdoctoral Research Award through the EERE Solar Energy Technologies Office. The computer code and data repositories are available here and here. This article appears in the Spring 2020 issue of Energy Futures, the magazine of the MIT Energy Initiative.  Utility-scale photovoltaic arrays are an economic investment across most of the United States when health and climate benefits are taken into account, concludes an analysis by MITEI postdoc Patrick Brown and Senior Lecturer Francis O’Sullivan. Their results show the importance of providing accurate price signals to generators and consumers and of adopting policies that reward installation of solar arrays where they will bring the most benefit. Photo courtesy of SunEnergy1. https://news.mit.edu/2020/researchers-find-solar-photovoltaics-benefits-outweigh-costs-0623 Over a seven-year period, decline in PV costs outpaced decline in value; by 2017, market, health, and climate benefits outweighed the cost of PV systems. 
Tue, 23 Jun 2020 14:15:01 -0400 https://news.mit.edu/2020/researchers-find-solar-photovoltaics-benefits-outweigh-costs-0623 Nancy Stauffer | MIT Energy Initiative Over the past decade, the cost of solar photovoltaic (PV) arrays has fallen rapidly. But at the same time, the value of PV power has declined in areas that have installed significant PV generating capacity. Operators of utility-scale PV systems have seen electricity prices drop as more PV generators come online. Over the same time period, many coal-fired power plants were required to install emissions-control systems, resulting in declines in air pollution nationally and regionally. The result has been improved public health — but also a decrease in the potential health benefits from offsetting coal generation with PV generation. Given those competing trends, do the benefits of PV generation outweigh the costs? Answering that question requires balancing the up-front capital costs against the lifetime benefits of a PV system. Determining the former is fairly straightforward. But assessing the latter is challenging because the benefits differ across time and place. “The differences aren’t just due to variation in the amount of sunlight a given location receives throughout the year,” says Patrick R. Brown PhD ’16, a postdoc at the MIT Energy Initiative. “They’re also due to variability in electricity prices and pollutant emissions.” The drop in the price paid for utility-scale PV power stems in part from how electricity is bought and sold on wholesale electricity markets. On the “day-ahead” market, generators and customers submit bids specifying how much they’ll sell or buy at various price levels at a given hour on the following day. The lowest-cost generators are chosen first. Since the variable operating cost of PV systems is near zero, they’re almost always chosen, taking the place of the most expensive generator then in the lineup. 
The price paid to every selected generator is set by the highest-cost operator on the system, so as more PV power comes on, more high-cost generators come off, and the price drops for everyone. As a result, in the middle of the day, when solar is generating the most, prices paid to electricity generators are at their lowest. Brown notes that some generators may even bid negative prices. “They’re effectively paying consumers to take their power to ensure that they are dispatched,” he explains. For example, inflexible coal and nuclear plants may bid negative prices to avoid frequent shutdown and startup events that would result in extra fuel and maintenance costs. Renewable generators may also bid negative prices to obtain larger subsidies that are rewarded based on production.  Health benefits also differ over time and place. The health effects of deploying PV power are greater in a heavily populated area that relies on coal power than in a less-populated region that has access to plenty of clean hydropower or wind. And the local health benefits of PV power can be higher when there’s congestion on transmission lines that leaves a region stuck with whatever high-polluting sources are available nearby. The social costs of air pollution are largely “externalized,” that is, they are mostly unaccounted for in electricity markets. But they can be quantified using statistical methods, so health benefits resulting from reduced emissions can be incorporated when assessing the cost-competitiveness of PV generation. The contribution of fossil-fueled generators to climate change is another externality not accounted for by most electricity markets. Some U.S. markets, particularly in California and the Northeast, have implemented cap-and-trade programs, but the carbon dioxide (CO2) prices in those markets are much lower than estimates of the social cost of CO2, and other markets don’t price carbon at all. 
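The merit-order mechanism described above — zero-marginal-cost PV entering the stack, displacing the most expensive dispatched generator, and lowering the uniform clearing price for everyone — can be sketched in a few lines. This is an illustrative toy model, not the study's methodology; all generator names, capacities, and bid prices below are hypothetical.

```python
# Illustrative sketch of uniform-price merit-order clearing on a
# day-ahead market. All offers and the demand level are hypothetical.

def clear_market(offers, demand_mw):
    """Dispatch the cheapest offers first; the uniform clearing price is
    the offer price of the most expensive (marginal) unit dispatched."""
    dispatched, supplied, price = [], 0.0, None
    for name, capacity_mw, offer_price in sorted(offers, key=lambda o: o[2]):
        if supplied >= demand_mw:
            break
        take = min(capacity_mw, demand_mw - supplied)
        dispatched.append((name, take))
        supplied += take
        price = offer_price  # the marginal unit sets the price for all
    return price, dispatched

offers = [
    ("nuclear",  500, -5.0),  # inflexible plants may bid negative to stay on
    ("coal",     400, 25.0),
    ("gas_ccgt", 300, 35.0),
    ("gas_peak", 200, 80.0),
]
price_no_pv, _ = clear_market(offers, 1300)
price_with_pv, _ = clear_market(offers + [("solar", 300, 0.0)], 1300)
print(price_no_pv)    # 80.0 — the peaker is marginal
print(price_with_pv)  # 35.0 — PV pushes the peaker out of the stack
```

Adding 300 MW of zero-cost solar shifts every other offer one step down the stack, so the marginal unit changes from the $80/MWh peaker to the $35/MWh combined-cycle plant — the midday price-depression effect the article describes.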
A full accounting of the benefits of PV power thus requires determining the CO2 emissions displaced by PV generation and then multiplying that value by a uniform carbon price representing the damage that those emissions would have caused. Calculating PV costs and benefits To examine the changing value of solar power, Brown and his colleague Francis M. O’Sullivan, the senior vice president of strategy at Ørsted Onshore North America and a senior lecturer at the MIT Sloan School of Management, developed a methodology to assess the costs and benefits of PV power across the U.S. power grid annually from 2010 to 2017.  The researchers focused on six “independent system operators” (ISOs) in California, Texas, the Midwest, the Mid-Atlantic, New York, and New England. Each ISO sets electricity prices at hundreds of “pricing nodes” along the transmission network in their region. The researchers performed analyses at more than 10,000 of those pricing nodes. For each node, they simulated the operation of a utility-scale PV array that tilts to follow the sun throughout the day. They calculated how much electricity it would generate and the benefits that each kilowatt would provide, factoring in energy and “capacity” revenues as well as avoided health and climate change costs associated with the displacement of fossil fuel emissions. (Capacity revenues are paid to generators for being available to deliver electricity at times of peak demand.) They focused on emissions of CO2, which contributes to climate change, and of nitrogen oxides (NOx), sulfur dioxide (SO2), and particulate matter called PM2.5 — fine particles that can cause serious health problems and can be emitted or formed in the atmosphere from NOx and SO2. The results of the analysis showed that the wholesale energy value of PV generation varied significantly from place to place, even within the region of a given ISO. 
For example, in New York City and Long Island, where population density is high and adding transmission lines is difficult, the market value of solar was at times 50 percent higher than across the state as a whole.  The public health benefits associated with SO2, NOx, and PM2.5 emissions reductions declined over the study period but were still substantial in 2017. Monetizing the health benefits of PV generation in 2017 would add almost 75 percent to energy revenues in the Midwest and New York and fully 100 percent in the Mid-Atlantic, thanks to the large amount of coal generation in the Midwest and Mid-Atlantic and the high population density on the Eastern Seaboard.  Based on the calculated energy and capacity revenues and health and climate benefits for 2017, the researchers asked: Given that combination of private and public benefits, what upfront PV system cost would be needed to make the PV installation “break even” over its lifetime, assuming that grid conditions in that year persist for the life of the installation? In other words, says Brown, “At what capital cost would an investment in a PV system be paid back in benefits over the lifetime of the array?”  Assuming 2017 values for energy and capacity market revenues alone, an unsubsidized PV investment at 2017 costs doesn’t break even. Add in the health benefit, and PV breaks even at 30 percent of the pricing nodes modeled. Assuming a carbon price of $50 per ton, the investment breaks even at about 70 percent of the nodes, and with a carbon price of $100 per ton (which is still less than the price estimated to be needed to limit global temperature rise to under 2 degrees Celsius), PV breaks even at all of the modeled nodes.  That wasn’t the case just two years earlier: At 2015 PV costs, PV would only have broken even in 2017 at about 65 percent of the nodes counting market revenues, health benefits, and a $100 per ton carbon price. 
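The break-even question above is, at bottom, a present-value calculation: the break-even capital cost is the discounted sum of the annual benefit stream (energy and capacity revenues plus monetized health and climate benefits) over the array's lifetime. The sketch below uses entirely hypothetical per-kilowatt figures — the study's actual values vary node by node — and an assumed lifetime and discount rate.

```python
# Illustrative break-even sketch with hypothetical numbers (not the
# study's node-level data): the break-even capital cost equals the
# present value of all annual benefits over the array's lifetime.

def breakeven_capex(energy_rev, capacity_rev, health_benefit,
                    co2_tons_displaced, carbon_price,
                    lifetime_years=25, discount_rate=0.05):
    """Inputs are annual $/kW values, except co2_tons_displaced
    (tons per kW-year) and carbon_price ($/ton)."""
    annual = (energy_rev + capacity_rev + health_benefit
              + co2_tons_displaced * carbon_price)
    # Present value of a constant annual benefit stream
    return sum(annual / (1 + discount_rate) ** t
               for t in range(1, lifetime_years + 1))

# Hypothetical node: $60/kW-yr energy revenue, $20/kW-yr capacity revenue,
# $30/kW-yr health benefit, 1.2 tons of CO2 displaced per kW-year.
print(round(breakeven_capex(60, 20, 30, 1.2, 50)))   # carbon at $50/ton
print(round(breakeven_capex(60, 20, 30, 1.2, 100)))  # carbon at $100/ton
```

With these made-up inputs, an installed cost below roughly $2,400/kW breaks even at a $50/ton carbon price, and below roughly $3,240/kW at $100/ton — which is why a higher carbon price expands the set of nodes where PV pays back.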
“Since 2010, solar has gone from one of the most expensive sources of electricity to one of the cheapest, and it now breaks even across the majority of the U.S. when considering the full slate of values that it provides,” says Brown.  Based on their findings, the researchers conclude that the decline in PV costs over the studied period outpaced the decline in value, such that in 2017 the market, health, and climate benefits outweighed the cost of PV systems at the majority of locations modeled. “So the amount of solar that’s competitive is still increasing year by year,” says Brown.  The findings underscore the importance of considering health and climate benefits as well as market revenues. “If you’re going to add another megawatt of PV power, it’s best to put it where it’ll make the most difference, not only in terms of revenues but also health and CO2,” says Brown.  Unfortunately, today’s policies don’t reward that behavior. Some states do provide renewable energy subsidies for solar investments, but they reward generation equally everywhere. Yet in states such as New York, the public health benefits would have been far higher at some nodes than at others. State-level or regional reward mechanisms could be tailored to reflect such variation in node-to-node benefits of PV generation, providing incentives for installing PV systems where they’ll be most valuable. Providing time-varying price signals (including the cost of emissions) not only to utility-scale generators, but also to residential and commercial electricity generators and customers, would similarly guide PV investment to areas where it provides the most benefit.  Time-shifting PV output to maximize revenues  The analysis provides some guidance that might help would-be PV installers maximize their revenues. For example, it identifies certain “hot spots” where PV generation is especially valuable. 
At some high-electricity-demand nodes along the East Coast, for instance, persistent grid congestion has meant that the projected revenue of a PV generator has been high for more than a decade. The analysis also shows that the sunniest site may not always be the most profitable choice. A PV system in Texas would generate about 20 percent more power than one in the Northeast, yet energy revenues were greater at nodes in the Northeast than in Texas in some of the years analyzed.  To help potential PV owners maximize their future revenues, Brown and O’Sullivan performed a follow-on study focusing on ways to shift the output of PV arrays to align with times of higher prices on the wholesale market. For this analysis, they considered the value of solar on the day-ahead market and also on the “real-time market,” which dispatches generators to correct for discrepancies between supply and demand. They explored three options for shaping the output of PV generators, with a focus on the California real-time market in 2017, when high PV penetration led to a large reduction in midday prices compared to morning and evening prices. Curtailing output when prices are negative: During negative-price hours, a PV operator can simply turn off generation. In California in 2017, curtailment would have increased revenues by 9 percent on the real-time market compared to “must-run” operation. Changing the orientation of “fixed-tilt” (stationary) solar panels: The general rule of thumb in the Northern Hemisphere is to orient solar panels toward the south, maximizing production over the year. But peak production then occurs at about noon, when electricity prices in markets with high solar penetration are at their lowest. Pointing panels toward the west moves generation further into the afternoon. On the California real-time market in 2017, optimizing the orientation would have increased revenues by 13 percent, or 20 percent in conjunction with curtailment. 
Using 1-axis tracking: For larger utility-scale installations, solar panels are frequently installed on automatic solar trackers, rotating throughout the day from east in the morning to west in the evening. Using such 1-axis tracking on the California system in 2017 would have increased revenues by 32 percent over a fixed-tilt installation, and using tracking plus curtailment would have increased revenues by 42 percent. The researchers were surprised to see how much the optimal orientation changed in California over the period of their study. “In 2010, the best orientation for a fixed array was about 10 degrees west of south,” says Brown. “In 2017, it’s about 55 degrees west of south.” That adjustment is due to changes in market prices that accompany significant growth in PV generation — changes that will occur in other regions as they start to ramp up their solar generation. The researchers stress that conditions are constantly changing on power grids and electricity markets. With that in mind, they made their database and computer code openly available so that others can readily use them to calculate updated estimates of the net benefits of PV power and other distributed energy resources. They also emphasize the importance of getting time-varying prices to all market participants and of adapting installation and dispatch strategies to changing power system conditions. A law set to take effect in California in 2020 will require all new homes to have solar panels. Installing the usual south-facing panels with uncurtailable output could further saturate the electricity market at times when other PV installations are already generating. “If new rooftop arrays instead use west-facing panels that can be switched off during negative price times, it’s better for the whole system,” says Brown. 
“Rather than just adding more solar at times when the price is already low and the electricity mix is already clean, the new PV installations would displace expensive and dirty gas generators in the evening. Enabling that outcome is a win all around.” Patrick Brown and this research were supported by a U.S. Department of Energy Office of Energy Efficiency and Renewable Energy (EERE) Postdoctoral Research Award through the EERE Solar Energy Technologies Office. The computer code and data repositories are available here and here. This article appears in the Spring 2020 issue of Energy Futures, the magazine of the MIT Energy Initiative.  Utility-scale photovoltaic arrays are an economic investment across most of the United States when health and climate benefits are taken into account, concludes an analysis by MITEI postdoc Patrick Brown and Senior Lecturer Francis O’Sullivan. Their results show the importance of providing accurate price signals to generators and consumers and of adopting policies that reward installation of solar arrays where they will bring the most benefit. Photo courtesy of SunEnergy1. https://news.mit.edu/2020/ice-ice-maybe-meghana-ranganathan-0617 EAPS graduate student Meghana Ranganathan zooms into the microstructure of ice streams to better understand the impacts of climate change. Wed, 17 Jun 2020 15:50:01 -0400 https://news.mit.edu/2020/ice-ice-maybe-meghana-ranganathan-0617 Laura Carter | School of Science From above, Antarctica appears as a massive sheet of white. But if you were to zoom in, you would find that an ice sheet is a complex and dynamic system. In the Department of Earth, Atmospheric and Planetary Sciences (EAPS), graduate student Meghana Ranganathan studies what controls the speed of ice streams — narrow, fast-flowing sections of the glacier that funnel into the ocean. When they meet the ocean, losing ground support, they calve and break off into icebergs. This is the fastest route of ice mass loss in a changing climate. 
Looking at the microstructure, there are many components that can affect the speed with which the ice flows, Ranganathan explains, including its interaction with the land the ice sits on, the crystalline structure of the ice, and the orientation and size of the grains of ice. And, unfortunately, many models do not take these minute factors into consideration, which can impact their predictions. That is what she hopes to improve, modifying the mathematics and building models that eliminate assumptions by fleshing out the details of exactly what is happening down to a microscopic level. Ranganathan is equipped to handle such a topic, holding a bachelor’s degree in mathematics from Swarthmore College, where she generated food chain models to investigate extinction levels. She left her undergraduate studies with a “desire to save the world” and knew she wanted to apply her knowledge to climate science for her graduate degree. “We’re one of the first generations that grew up hearing about the climate crisis, and I think that made quite an impact on me,” she says. It’s also a “sweet spot,” she claims, in terms of being both a scientifically invigorating problem — with a lot of mathematical complexities — and a societal issue: “My desire to use math to discover things about the world, and my desire to help the world intersect in climate science.” A climate of opportunity EAPS allowed Ranganathan the flexibility to choose her field of focus within the wide range of climate science. “EAPS is a great department in diversity of fields,” she says. “It’s rare for one department to encompass so many aspects of earth and planetary sciences.” She lists faculty addressing everything from hurricanes to climate variability to biological oceanography and even exoplanetary studies. “Even now that I’ve found a research focus, I get to learn about other fields and stay in touch with current research being done across the earth sciences,” she adds. 
Flexibility is something she also attributes to her fellowship. Currently, Ranganathan is sponsored by the Sven Treitel Fellowship, and it’s this support that has allowed her the opportunity to develop and grow her independence, transitioning from student to researcher. “Graduate school is arguably not necessarily to learn a field, but rather to learn how to build on your own ideas,” she explains. Without having her time consumed by writing grant proposals or working on other people’s funded projects, she can divert her full attention to the topic she chooses. “This fellowship has really enabled me to focus on what I’m here to do: learn to be a scientist.” The Sven Treitel Graduate Student Support Fund was established in 2016 by EAPS alumnus Arthur Cheng ScD ’78 to honor Sven Treitel ’53, SM ’55, PhD ’58. “Sven Treitel was a visiting professor at MIT when I was a graduate student, and he was a great role model for me,” says Cheng. Treitel’s contributions to making seismograms more accurate are considered instrumental to bringing about the “digital revolution” of seismology. Years of change Currently in her third year, Ranganathan has passed her qualifying exam and is now fully devoted to her project. That includes facing some challenges in her research, like producing new models or, at least, new additions to preexisting models to make them suitable for ice streams. She also worries about what she calls a dearth of data needed to provide her model some benchmarks. Her excitement isn’t deterred, though, and she’s invigorated by the prospect of self-directing how she tackles these technical obstacles with input from her advisor, Cecil and Ida Green Career Development Professor Brent Minchew. During the Covid-19 crisis, Ranganathan appreciates the EAPS department and her advisor for ensuring that events and check-ins remain a regular occurrence in addition to prioritizing mental health. 
Although she has adjusted her hours and workflow, Ranganathan believes she has been relatively lucky while access to the MIT campus has been limited. “My work is quite easy to take remote, since it is entirely computer-based work. So, my days haven’t changed too much, with the exception of my physical location,” she notes. “The biggest trick I’ve learned is to be OK with everything not being exactly the same as it would have been if we were working in person.” Ranganathan still meets with her office mate every morning for coffee, albeit virtually, and continues to find encouragement in her fellow lab group-mates, whom she describes as smart, driven, and diverse, and brought together by a love for ice and glaciers. She considers the EAPS students in general a warming part of being at MIT. “They’re passionate and friendly. I love how active our students are in science communication, outreach, and climate activism,” she comments. Ice sheets of paper The co-president of WiXII (the Women in Course 12 group), Ranganathan is well-versed in communication and outreach herself. She enjoys writing — fiction as well as journalism — and has previously contributed articles to Scientific American. She uses her writing as a means to elevate awareness of climate issues and generally focuses on the interplay between climate and society. Her 2019 TEDx talk focused on human relationships with ice — how the last two decades of scientific study have completely changed how society understands ice sheets. Amazingly, all of Ranganathan’s knowledge of earth science, climate science, and glaciology has been acquired since she joined MIT in 2017. “I never realized how much you learn so quickly in graduate school.” She hopes to continue down a similar track in her future career, addressing important aspects of glaciology that still need answers. She might want to try field work someday. When asked what’s left to accomplish, she joked, “Do the thesis! 
Write the thesis!”  EAPS graduate student Meghana Ranganathan studies glaciers to better calibrate climate models. Photo courtesy of Meghana Ranganathan. https://news.mit.edu/2020/why-mediterranean-climate-change-hotspot-0617 MIT analysis uncovers the basis of the severe rainfall declines predicted by many models. Wed, 17 Jun 2020 09:55:48 -0400 https://news.mit.edu/2020/why-mediterranean-climate-change-hotspot-0617 David L. Chandler | MIT News Office Although global climate models vary in many ways, they agree on this: The Mediterranean region will be significantly drier in coming decades, potentially seeing 40 percent less precipitation during the winter rainy season. An analysis by researchers at MIT has now found the underlying mechanisms that explain the anomalous effects in this region, especially in the Middle East and in northwest Africa. The analysis could help refine the models and add certainty to their projections, which have significant implications for the management of water resources and agriculture in the region. The study, published last week in the Journal of Climate, was carried out by MIT graduate student Alexandre Tuel and professor of civil and environmental engineering Elfatih Eltahir. The different global circulation models of the Earth’s changing climate agree that temperatures virtually everywhere will increase, and in most places so will rainfall, in part because warmer air can carry more water vapor. However, “There is one major exception, and that is the Mediterranean area,” Eltahir says, which shows the greatest decline of projected rainfall of any landmass on Earth. “With all their differences, the models all seem to agree that this is going to happen,” he says, although they differ on the amount of the decline, ranging from 10 percent to 60 percent. 
But nobody had previously been able to explain why. Tuel and Eltahir found that this projected drying of the Mediterranean region is a result of the confluence of two different effects of a warming climate: a change in the dynamics of upper-atmosphere circulation and a reduction in the temperature difference between land and sea. Neither factor by itself would be sufficient to account for the anomalous reduction in rainfall, but in combination the two phenomena can fully account for the unique drying trend seen in the models. The first effect is a large-scale phenomenon, related to powerful high-altitude winds called the midlatitude jet stream, which drive a strong, steady west-to-east weather pattern across Europe, Asia, and North America. Tuel says the models show that “one of the robust things that happens with climate change is that as you increase the global temperature, you’re going to increase the strength of these midlatitude jets.” But in the Northern Hemisphere, those winds run into obstacles, with mountain ranges including the Rockies, Alps, and Himalayas, and these collectively impart a kind of wave pattern onto this steady circulation, resulting in alternating zones of higher and lower air pressure. High pressure is associated with clear, dry air, and low pressure with wetter air and storm systems. But as the air gets warmer, this wave pattern gets altered. “It just happened that the geography of where the Mediterranean is, and where the mountains are, impacts the pattern of air flow high in the atmosphere in a way that creates a high pressure area over the Mediterranean,” Tuel explains. That high-pressure area creates a dry zone with little precipitation. However, that effect alone can’t account for the projected Mediterranean drying. That requires the addition of a second mechanism, the reduction of the temperature difference between land and sea. 
That difference, which helps to drive winds, will also be greatly reduced by climate change, because the land is warming up much faster than the seas. “What’s really different about the Mediterranean compared to other regions is the geography,” Tuel says. “Basically, you have a big sea enclosed by continents, which doesn’t really occur anywhere else in the world.” While models show the surrounding landmasses warming by 3 to 4 degrees Celsius over the coming century, the sea itself will only warm by about 2 degrees or so. “Basically, the difference between the water and the land becomes smaller with time,” he says. That, in turn, amplifies the pressure differential, adding to the high-pressure area that drives a clockwise circulation pattern of winds surrounding the Mediterranean basin. And because of the specifics of local topography, projections show the two areas hardest hit by the drying trend will be northwest Africa, including Morocco, and the eastern Mediterranean region, including Turkey and the Levant. That trend is not just a projection, but has already become apparent in recent climate trends across the Middle East and western North Africa, the researchers say. “These are areas where we already detect declines in precipitation,” Eltahir says. It’s possible that these rainfall declines in an already parched region may even have contributed to the political unrest in the region, he says. “We document from the observed record of precipitation that this eastern part has already experienced a significant decline of precipitation,” Eltahir says. The fact that the underlying physical processes are now understood will help to ensure that these projections are taken seriously by planners in the region, he says. 
It will provide much greater confidence, he says, by enabling them “to understand the exact mechanisms by which that change is going to happen.” Eltahir has been working with government agencies in Morocco to help them translate this information into concrete planning. “We are trying to take these projections and see what would be the impacts on availability of water,” he says. “That potentially will have a lot of impact on how Morocco plans its water resources, and also how they could develop technologies that could help them alleviate those impacts through better management of water at the field scale, or maybe through precision agriculture using higher technology.” The work was supported by the collaborative research program between Université Mohamed VI Polytechnique in Morocco and MIT. Global climate models agree that the Mediterranean area will be significantly drier, potentially seeing 40 percent less precipitation during the winter rainy season in the already parched regions of the Middle East and North Africa. https://news.mit.edu/2020/professor-jinhua-zhao-0616 Associate Professor Jinhua Zhao, who will direct the new MIT Mobility Initiative, brings behavioral science to urban transportation. Mon, 15 Jun 2020 23:59:59 -0400 https://news.mit.edu/2020/professor-jinhua-zhao-0616 Peter Dizikes | MIT News Office It’s easy to think of urban mobility strictly in terms of infrastructure: Does an area have the right rail lines, bus lanes, or bike paths? How much parking is available? How well might autonomous vehicles work? MIT Associate Professor Jinhua Zhao views matters a bit differently, however. To understand urban movement, Zhao believes, we also need to understand people. How do people choose to use transport? Why do they move around, and when?
How does their self-image influence their choices? “The main part of my own thinking is the recognition that transportation systems are half physical infrastructure, and half human beings,” Zhao says. Now, after two decades as a student and professor at MIT, he has built up an impressive body of research flowing from this approach. A bit like the best mobility systems, Zhao’s work is multimodal. He divides his scholarship into three main themes. The first covers the behavioral foundations of urban mobility: the attitudinal and emotional aspects of transportation, such as the pride people take in vehicle ownership, the experience of time spent in transit, and the decision making that results in large-scale mobility patterns within urban regions. Zhao’s second area of scholarship applies these kinds of insights to design work, exploring how to structure mobility systems with behavioral concepts in mind. What are people’s risk preferences concerning autonomous vehicles? Will people use them in concert with existing transit? How do people’s individual characteristics affect their willingness to take ride-sharing opportunities? Zhao’s third theme is policy-oriented: Do mobility systems provide access and fairness? Are they met with acceptance? Here Zhao’s work ranges across countries, including China, Singapore, the U.K., and the U.S., examining topics like access to rail, compliance with laws, and the public perception of transportation systems. Within these themes, a tour of Zhao’s research reveals specific results across a wide swath of transportation issues. He has studied how multimodal smartcards affect passenger behavior (they distinctly help commuters); examined the effects of off-peak discounts on subway ridership (they reduce crowding); quantified “car pride,” the sense in which car ownership stems from social status concerns (it’s prevalent in developing countries, plus the U.S.).
He has also observed how a legacy of rail transit relates to car-ownership rates even after rail lines vanish, and discovered how potential discriminatory attitudes with respect to class and race influence preferences toward ridesharing. “People make decisions in all sorts of different ways,” Zhao says. “The notion that people wake up and calculate the utility of taking the car versus taking the bus — or walking, or cycling — and find the one that maximizes their utility doesn’t speak to reality.” Zhao also wants to make sure that decision makers recognize the importance of these personal factors in the overall success of their mobility systems. “I study policy from the individual subject’s point of view,” says Zhao. “I’m a citizen. How do I think about it? Do I think this is fair? Do I understand it enough? Do I comply with the policy? It is more of a behavioral approach to policy studies.” To be sure, Zhao is more than a researcher; he is an active mentor of MIT students, having been director of the JTL Urban Mobility Lab and the MIT Transit Lab, and chair of the PhD program in the Department of Urban Studies and Planning (DUSP). And at the MIT Energy Initiative (MITEI), Zhao is also co-director of the MITEI Mobility System Center.
For his research and teaching, Zhao was awarded tenure last year at MIT. This May, Zhao added another important role to his brief: He was named director of the new MIT Mobility Initiative, an Institute-wide effort designed to cultivate a dynamic intellectual community on mobility and transportation, redefine the interdisciplinary education program, and effect fundamental changes in the long-term trajectory of mobility development in the world. “We are at the dawn of the most profound changes in transportation: an unprecedented combination of new technologies, such as autonomy, electrification, computation and AI, and new objectives, including decarbonization, public health, economic vibrancy, data security and privacy, and social justice,” says Zhao. “The timeframe for these changes — decarbonization in particular — is short in a system with massive amounts of fixed, long-life assets and entrenched behavior and culture. It’s this combination of new technologies, new purposes, and urgent timeframes that makes an MIT-led Mobility Initiative critical at this moment.” How much can preferences be shaped? Zhao says the current time is an “exhilarating” age for transportation scholarship. And questions surrounding the shape of mobility systems will likely only grow due to the uncertainties introduced by the ongoing Covid-19 pandemic. “If in the 1980s you asked people what the [mobility] system would look like 20 years in the future, they would say it would probably be the same,” Zhao says. “Now, really nobody knows what it will look like.” Zhao grew up in China and attended Tongji University in Shanghai, graduating with a bachelor’s degree in planning in 2001.
He then came to MIT for his graduate studies, emerging with three degrees from DUSP: a master’s in city planning and a master’s in transportation, in 2004, and a PhD in 2009. For his doctoral dissertation, working with Joseph Ferreira of DUSP and Nigel Wilson of the Department of Civil and Environmental Engineering, Zhao examined what he calls “preference-accommodating versus preference-shaping” approaches to urban mobility. The preference-accommodating approach, Zhao says, assumes that “people know what they want, and no one else has any right to say” what those tastes should be. But the preference-shaping approach asks, “To the degree preferences can be shaped, should they?” Tastes that we think of as almost instinctual, like the love of cars in the U.S., are much more the result of commercial influence than we usually recognize, he believes. While that distinction was already important to Zhao when he was a student, the acceleration of climate change has made it a more urgent issue now: Can people be nudged toward a lifestyle that centers more around sustainable modes of transportation? “People like cars today,” Zhao says. “But the auto industry spends hundreds of millions of dollars annually to construct those preferences. If every one of the 7.7 billion human beings strives to have a car as part of a successful life, no technical solutions exist today to satisfy this desire without destroying our planet.” For Zhao, this is not an abstract discussion. A few years ago, Zhao and his colleagues Fred Salvucci, John Attanucci, and Julie Newman helped work on reforms to MIT’s own acclaimed transportation policy.
Those changes fully subsidized mass transit for employees and altered campus parking fees, resulting in fewer single-occupant vehicles commuting to the Institute, reduced parking demand, and greater employee satisfaction. Pursuing “joyful” time in the classroom For all his research productivity, Zhao considers teaching to be at the core of his MIT responsibilities; he has received the “Committed to Caring” award from MIT’s Office of Graduate Education and considers classroom discussions to be the most energizing part of his job. “That’s really the most joyful time I have here,” Zhao says. Indeed, Zhao emphasizes, students are an essential fuel powering MIT’s notably interdisciplinary activities. “I find that students are often the intermediaries that connect faculty,” Zhao says. “Most of my PhD students construct a dissertation committee that, beyond me as a supervisor, has faculty from other departments. That student will get input from economists, computer scientists, business professors. And that student brings three to four faculty together that would otherwise rarely talk to each other. I explicitly encourage students to do that, and they really enjoy it.” His own research will always be a work in progress, Zhao says. Cities are complex, mobility systems are intricate, and the needs of people are ever-changing. So there will always be new problems for planners to study — and perhaps answer. “Urban mobility is not something that a few brilliant researchers can work on for a year and solve,” Zhao concludes. “We have to have some degree of humility to accept its complexity.” “If in the 1980s you asked people what the [mobility] system would look like 20 years in the future, they would say it would probably be the same,” Associate Professor Jinhua Zhao says. “Now, really nobody knows what it will look like.” Image: Illustration by Jose-Luis Olivares, MIT. Based on a photo by Martin Dee.
https://news.mit.edu/2020/swift-solar-startup-mit-roots-develops-lightweight-solar-panels-0615 “The inventions and technical advancements of Swift Solar have the opportunity to revolutionize the format of solar photovoltaic technology.” Mon, 15 Jun 2020 14:10:01 -0400 https://news.mit.edu/2020/swift-solar-startup-mit-roots-develops-lightweight-solar-panels-0615 Kathryn M. O’Neill | MIT Energy Initiative Joel Jean PhD ’17 spent two years working on The Future of Solar Energy, a report published by the MIT Energy Initiative (MITEI) in 2015. Today, he is striving to create that future as CEO of Swift Solar, a startup that is developing lightweight solar panels based on perovskite semiconductors. It hasn’t been a straight path, but Jean says his motivation — one he shares with his five co-founders — is the drive to address climate change. “The whole world is finally starting to see the threat of climate change and that there are many benefits to clean energy. That’s why we see such huge potential for new energy technologies,” he says. Max Hoerantner, co-founder and Swift Solar’s vice president of engineering, agrees. “It’s highly motivating to have the opportunity to put a dent into the climate change crisis with the technology that we’ve developed during our PhDs and postdocs.” The company’s international team of founders — from the Netherlands, Austria, Australia, the United Kingdom, and the United States — has developed a product with the potential to greatly increase the use of solar power: a very lightweight, super-efficient, inexpensive, and scalable solar cell. Jean and Hoerantner also have experience building a solar research team, gained working at GridEdge Solar, an interdisciplinary MIT research program that works toward scalable solar and is funded by the Tata Trusts and run out of MITEI’s Tata Center for Technology and Design. 
“The inventions and technical advancements of Swift Solar have the opportunity to revolutionize the format of solar photovoltaic technology,” says Vladimir Bulović, the Fariborz Maseeh (1990) Professor of Emerging Technology in MIT’s Department of Electrical Engineering and Computer Science, director of MIT.nano, and a science advisor for Swift Solar. Tandem photovoltaics The product begins with perovskites — a class of materials that are cheap, abundant, and great at absorbing and emitting light, making them good semiconductors for solar energy conversion. Using perovskites for solar generation took off about 10 years ago because the materials can be much more efficient at converting sunlight to electricity than the crystalline silicon typically used in solar panels today. They are also lightweight and flexible, whereas crystalline silicon is so brittle it needs to be protected by rigid glass, making most solar panels today about as large and heavy as a patio door. Many researchers and entrepreneurs have rushed to capitalize on those advantages, but Swift Solar has two core technologies that its founders see as their competitive edge. First, they are using two layers of perovskites in tandem to boost efficiency. “We’re putting two perovskite solar cells stacked on top of each other, each absorbing different parts of the spectrum,” Hoerantner says. Second, Swift Solar employs a proprietary scalable deposition process to create its perovskite films, which drives down manufacturing costs. “We’re the only company focusing on high-efficiency all-perovskite tandems. They’re hard to make, but we believe that’s where the market is ultimately going to go,” Jean says. 
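The tandem idea Hoerantner describes, two stacked cells splitting the spectrum, can be illustrated with a toy energy balance. Nothing below is Swift Solar data: the band gaps, the three-bin "spectrum," and the assumption that each absorbed photon yields exactly the band-gap energy are all hypothetical simplifications (real tandems also have to current-match the two sub-cells, which this sketch ignores).

```python
# Why stacking helps: a photon can deliver at most the absorbing cell's
# band-gap energy, so a single low-gap cell wastes the excess energy of
# blue photons, while a high-gap cell misses red photons entirely.
# Hypothetical round numbers throughout.

HIGH_GAP_EV = 1.8   # top cell band gap (absorbs photons >= 1.8 eV)
LOW_GAP_EV = 1.2    # bottom cell band gap (absorbs photons >= 1.2 eV)

# toy three-bin spectrum: (photon energy in eV, relative photon count)
spectrum = [(2.5, 30), (1.5, 40), (1.0, 30)]

def harvested(gap_ev, photons):
    """Energy a single cell extracts: gap_ev per photon it can absorb."""
    return sum(gap_ev * n for energy, n in photons if energy >= gap_ev)

# a lone low-gap cell sees the whole spectrum
single = harvested(LOW_GAP_EV, spectrum)

# tandem: the top cell absorbs the high-energy photons first,
# and the bottom cell absorbs what passes through
top_light = [(e, n) for e, n in spectrum if e >= HIGH_GAP_EV]
bottom_light = [(e, n) for e, n in spectrum if e < HIGH_GAP_EV]
tandem = harvested(HIGH_GAP_EV, top_light) + harvested(LOW_GAP_EV, bottom_light)

incident = sum(e * n for e, n in spectrum)
print(f"single junction: {100 * single / incident:.0f}% of incident energy")
print(f"tandem stack:    {100 * tandem / incident:.0f}% of incident energy")
```

With these made-up bins the tandem recovers noticeably more of the incident energy than the single junction, which is the qualitative advantage the founders are betting on.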
“Our technologies enable much cheaper and more ubiquitous solar power through cheaper production, reduced installation costs, and more power per unit area,” says Sam Stranks, co-founder and lead scientific advisor for Swift Solar as well as an assistant professor in the Department of Chemical Engineering and Biotechnology at the University of Cambridge in the United Kingdom. “Other commercial solar photovoltaic technologies can do one or the other [providing either high power or light weight and flexibility], but not both.” Bulović says technology isn’t the only reason he expects the company to make a positive impact on the energy sector. “The success of a startup is initiated by the quality of the first technical ideas, but is sustained by the quality of the team that builds and grows the technology,” he says. “Swift Solar’s team is extraordinary.” Indeed, Swift Solar’s six co-founders together have six PhDs, four Forbes 30 Under 30 fellowships, and more than 80,000 citations. Four of them — Tomas Leijtens, Giles Eperon, Hoerantner, and Stranks — earned their doctorates at Oxford University in the United Kingdom, working with one of the pioneers of perovskite photovoltaics, Professor Henry Snaith. Stranks then came to MIT to work with Bulović, who is also widely recognized as a leader in next-generation photovoltaics and an experienced entrepreneur. (Bulović is a co-inventor of some of the patents the business is licensing from MIT.) Stranks met Jean at MIT, where Hoerantner later completed a postdoc working at GridEdge Solar. And the sixth co-founder, Kevin Bush, completed his PhD at Stanford University, where Leijtens did a postdoc with Professor Michael McGehee, another leading perovskite researcher and advisor to Swift. What ultimately drew them all together was the desire to address climate change. 
“We were all independently thinking about how we could have an impact on climate change using solar technology, and a startup seemed like the only real direction that could have an impact at the scale the climate demands,” Jean says. The team first met in a Google Hangouts session spanning three time zones in early 2016. Swift Solar was officially launched in November 2017. MITEI study Interestingly, Jean says it was his work on The Future of Solar Energy — rather than his work in the lab — that most contributed to his role in the founding of Swift Solar. The study team of more than 30 experts, including Jean and Bulović, investigated the potential for expanding solar generating capacity to the multi-terawatt scale by mid-century. They determined that the main goal of U.S. solar policy should be to build the foundation for a massive scale-up of solar generation over the next few decades. “I worked on quantum dot and organic solar cells for most of my PhD, but I also spent a lot of time looking at energy policy and economics, talking to entrepreneurs, and thinking about what it would take to succeed in tomorrow’s solar market. That made me less wedded to a single technology,” Jean says. Jean’s work on the study led to a much cited publication, “Pathways for Solar Photovoltaics” in Energy & Environmental Science (2015), and to his founding leadership role with GridEdge Solar. “Technical advancements and insights gained in this program helped launch Swift Solar as a hub for novel lightweight solar technology,” Bulović says. Swift Solar has also benefited from MIT’s entrepreneurial ecosystem, Jean says, noting that he took 15.366 (MIT Energy Ventures), a class on founding startups, and got assistance from the Venture Mentoring Service. “There were a lot of experiences like that that have really informed where we’re going as a company,” he says. Stranks adds, “MIT provided a thriving environment for exploring commercialization ideas in parallel to our tech development. 
Very few places could combine both so dynamically.” Swift Solar raised its first seed round of funding in 2018 and moved to the Bay Area of California last summer after incubating for a year at the U.S. Department of Energy’s National Renewable Energy Laboratory in Golden, Colorado. The team is now working to develop its manufacturing processes so that it can scale its technology up from the lab to the marketplace. The founders say their first goal is to develop specialized high-performance products for applications that require high efficiency and light weight, such as unmanned aerial vehicles and other mobile applications. “Wherever there is a need for solar energy and lightweight panels that can be deployed in a flexible way, our products will find a good use,” Hoerantner says. Scaling up will take time, but team members say the high stakes associated with climate change make all the effort worthwhile. “My vision is that we will be able to grow quickly and efficiently to realize our first products within the next two years, and to supply panels for rooftop and utility-scale solar applications in the longer term, helping the world rapidly transform to an electrified, low-carbon future,” Stranks says. This article appears in the Spring 2020 issue of Energy Futures, the magazine of the MIT Energy Initiative. Joel Jean PhD ’17, co-founder of Swift Solar, stands in front of the company’s sign at its permanent location in San Carlos, California. Photo courtesy of Joel Jean.
https://news.mit.edu/2020/sand-grains-massive-glacial-surges-0612 New model answers longstanding question of how these sudden flows happen; may expand understanding of Antarctic ice sheets. Fri, 12 Jun 2020 15:17:33 -0400 https://news.mit.edu/2020/sand-grains-massive-glacial-surges-0612 Jennifer Chu | MIT News Office About 10 percent of the Earth’s land mass is covered in glaciers, most of which slip slowly across the land over years, carving fjords and trailing rivers in their wake. But about 1 percent of glaciers can suddenly surge, spilling over the land at 10 to 100 times their normal speed. When this happens, a glacial surge can set off avalanches, flood rivers and lakes, and overwhelm downstream settlements. What triggers the surges themselves has been a longstanding question in the field of glaciology. Now scientists at MIT and Dartmouth College have developed a model that pins down the conditions that would trigger a glacier to surge.
Through their model, the researchers find that glacial surge is driven by the conditions of the underlying sediment, and specifically by the tiny grains of sediment that lie beneath a towering glacier. “There’s a huge separation of scales: Glaciers are these massive things, and it turns out that their flow, this incredible amount of momentum, is somehow driven by grains of millimeter-scale sediment,” says Brent Minchew, the Cecil and Ida Green Assistant Professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “That’s a hard thing to get your head around. And it’s exciting to open up this whole new line of inquiry that nobody had really considered before.” The new model of glacial surge may also help scientists better understand the behavior of larger masses of moving ice. “We think of glacial surges as natural laboratories,” Minchew says. “Because they’re this extreme, transient event, glacial surges give us this window into how other systems work, such as the fast-flowing streams in Antarctica, which are the things that matter for sea-level rise.” Minchew and his co-author Colin Meyer of Dartmouth have published their results this month in the journal Proceedings of the Royal Society A. A glacier breaks loose While he was still a PhD student, Minchew was reading through “The Physics of Glaciers,” the standard textbook in the field of glaciology, when he came across a rather bleak passage on the prospect of modeling a glacial surge. The passage outlined the basic requirements of such a model and closed with a pessimistic outlook, noting that “such a model has not been established, and none is in view.” Rather than be discouraged, Minchew took this statement as a challenge, and as part of his thesis began to lay out the framework for a model to describe the triggering events for a glacial surge.
As he quickly realized, the handful of models that existed at the time were based on the assumption that most surge-type glaciers lay atop bedrock — rough and impermeable surfaces that the models assumed remained unchanged as glaciers flowed across. But scientists have since observed that glacial surges often occur not over solid rock, but instead across shifting sediment. Minchew’s model simulates a glacier’s movement over a permeable layer of sediment, made up of individual grains, the size of which he can adjust in the model to study both the interactions of the grains within the sediment, and ultimately, the glacier’s movement in response. The new model shows that as a glacier moves at a normal rate across a sediment bed, the grains at the top of the sediment layer, in direct contact with the glacier, are dragged along with the glacier at the same speed, while the grains toward the middle move slower, and those at the bottom stay put. This layered shifting of grains creates a shearing effect within the sediment layer. At the microscale, the model shows that this shearing occurs in the form of individual sediment grains that roll up and over each other. As grains roll up, over, and away with the glacier, they open up spaces within the water-saturated sediment layer that expand, providing pockets for the water to seep into. This creates a decrease in water pressure, which acts to strengthen the sedimentary material as a whole, creating a sort of resistance against the sediment’s grains and making it harder for them to roll along with the moving glacier. However, as a glacier accumulates snowfall, it thickens and its surface steepens, which increases the shear forces acting on the sediment. As the sediment weakens, the glacier starts flowing faster and faster. “The faster it flows, the more the glacier thins, and as you start to thin, you’re decreasing the load to the sediment, because you’re decreasing the weight of the ice.
So you’re bringing the weight of the ice closer to the sediment’s water pressure. And that ends up weakening the sediment,” Minchew explains. “Once that happens, everything starts to break loose, and you get a surge.” Antarctic shearingAs a test of their model, the researchers compared predictions of their model to observations of two glaciers that recently experienced surges, and found that the model was able to reproduce the flow rates of both glaciers with reasonable precision. In order to predict which glaciers will surge and when, the researchers say scientists will have to know something about the strength of the underlying sediment, and in particular, the size distribution of the sediment’s grains. If these measurements can be made of a particular glacier’s environment, the new model can be used to predict when and by how much that glacier will surge. Beyond glacial surges, Minchew hopes the new model will help to illuminate the mechanics of ice flow in other systems, such as the ice sheets in West Antarctica. “It’s within the realm of possibility that we could get 1 to 3 meters of sea-level rise from West Antarctica within our lifetimes,” Minchew says. This type of shearing mechanism in glacial surges could play a major role in determining the rates of sea-level rise you’d get from West Antarctica.”This research was funded, in part, by the U.S. National Science Foundation and NASA. A surging glacier in the St. Elias Mountains, Canada. Credit: Gwenn Flowers https://news.mit.edu/2020/sand-grains-massive-glacial-surges-0612 New model answers longstanding question of how these sudden flows happen; may expand understanding of Antarctic ice sheets. Fri, 12 Jun 2020 15:17:33 -0400 https://news.mit.edu/2020/sand-grains-massive-glacial-surges-0612 Jennifer Chu | MIT News Office About 10 percent of the Earth’s land mass is covered in glaciers, most of which slip slowly across the land over years, carving fjords and trailing rivers in their wake. 
But about 1 percent of glaciers can suddenly surge, spilling over the land at 10 to 100 times their normal speed. When this happens, a glacial surge can set off avalanches, flood rivers and lakes, and overwhelm downstream settlements. What triggers the surges themselves has been a longstanding question in the field of glaciology. Now scientists at MIT and Dartmouth College have developed a model that pins down the conditions that would trigger a glacier to surge. Through their model, the researchers find that glacial surge is driven by the conditions of the underlying sediment, and specifically by the tiny grains of sediment that lie beneath a towering glacier. “There’s a huge separation of scales: Glaciers are these massive things, and it turns out that their flow, this incredible amount of momentum, is somehow driven by grains of millimeter-scale sediment,” says Brent Minchew, the Cecil and Ida Green Assistant Professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “That’s a hard thing to get your head around. And it’s exciting to open up this whole new line of inquiry that nobody had really considered before.” The new model of glacial surge may also help scientists better understand the behavior of larger masses of moving ice. “We think of glacial surges as natural laboratories,” Minchew says. “Because they’re this extreme, transient event, glacial surges give us this window into how other systems work, such as the fast-flowing streams in Antarctica, which are the things that matter for sea-level rise.” Minchew and his co-author Colin Meyer of Dartmouth have published their results this month in the journal Proceedings of the Royal Society A.

A glacier breaks loose

While he was still a PhD student, Minchew was reading through “The Physics of Glaciers,” the standard textbook in the field of glaciology, when he came across a rather bleak passage on the prospect of modeling a glacial surge.
The passage outlined the basic requirements of such a model and closed with a pessimistic outlook, noting that “such a model has not been established, and none is in view.” Rather than be discouraged, Minchew took this statement as a challenge, and as part of his thesis began to lay out the framework for a model to describe the triggering events for a glacial surge. As he quickly realized, the handful of models that existed at the time were based on the assumption that most surge-type glaciers lay atop bedrock — rough and impermeable surfaces that the models assumed remained unchanged as glaciers flowed across. But scientists have since observed that glacial surges often occur not over solid rock, but instead across shifting sediment. Minchew’s model simulates a glacier’s movement over a permeable layer of sediment, made up of individual grains, whose size he can adjust in the model to study both the interactions of the grains within the sediment and, ultimately, the glacier’s movement in response. The new model shows that as a glacier moves at a normal rate across a sediment bed, the grains at the top of the sediment layer, in direct contact with the glacier, are dragged along with the glacier at the same speed, while the grains toward the middle move more slowly, and those at the bottom stay put. This layered shifting of grains creates a shearing effect within the sediment layer. At the microscale, the model shows that this shearing occurs in the form of individual sediment grains that roll up and over each other. As grains roll up, over, and away with the glacier, they open up spaces within the water-saturated sediment layer, providing pockets for the water to seep into. This expansion decreases the water pressure, which acts to strengthen the sedimentary material as a whole, creating a sort of resistance against the sediment’s grains and making it harder for them to roll along with the moving glacier.
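The feedback described here, in which shearing grains dilate, pore water pressure falls, and the bed strengthens, can be sketched as a simple Coulomb friction balance. The snippet below is an illustrative toy only, not the authors' published model; the friction coefficient, surface slope, density, and pore pressures are all assumed round values:

```python
# Illustrative toy model of a surge threshold -- NOT the published model.
# All numbers are assumed round values for illustration only.

RHO_ICE = 917.0      # ice density, kg/m^3
G = 9.8              # gravitational acceleration, m/s^2
MU = 0.5             # assumed friction coefficient of the sediment
SLOPE = 0.05         # assumed glacier surface slope, radians (small angle)

def driving_stress(thickness_m):
    """Gravitational shear stress the glacier exerts on its bed (Pa)."""
    return RHO_ICE * G * thickness_m * SLOPE

def sediment_strength(thickness_m, pore_pressure_pa):
    """Coulomb strength of water-saturated bed sediment (Pa).

    Strength scales with effective pressure: ice overburden minus pore
    water pressure. Dilation of shearing grains lowers pore pressure and
    strengthens the bed; a thinning glacier lowers the overburden toward
    the pore pressure and weakens it.
    """
    overburden = RHO_ICE * G * thickness_m
    effective_pressure = max(overburden - pore_pressure_pa, 0.0)
    return MU * effective_pressure

def surges(thickness_m, pore_pressure_pa):
    """True once driving stress exceeds what the bed can resist."""
    return driving_stress(thickness_m) > sediment_strength(thickness_m, pore_pressure_pa)
```

With these assumed numbers, the bed fails only once pore water pressure climbs to within about 10 percent of the ice overburden, mirroring Minchew's description of the weight of the ice approaching the sediment's water pressure.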
However, as a glacier accumulates snowfall, it thickens and its surface steepens, which increases the shear forces acting on the sediment. As the sediment weakens, the glacier starts flowing faster and faster. “The faster it flows, the more the glacier thins, and as you start to thin, you’re decreasing the load to the sediment, because you’re decreasing the weight of the ice. So you’re bringing the weight of the ice closer to the sediment’s water pressure. And that ends up weakening the sediment,” Minchew explains. “Once that happens, everything starts to break loose, and you get a surge.”

Antarctic shearing

As a test of their model, the researchers compared their model’s predictions to observations of two glaciers that recently experienced surges, and found that the model was able to reproduce the flow rates of both glaciers with reasonable precision. In order to predict which glaciers will surge and when, the researchers say scientists will have to know something about the strength of the underlying sediment, and in particular, the size distribution of the sediment’s grains. If these measurements can be made for a particular glacier’s environment, the new model can be used to predict when and by how much that glacier will surge. Beyond glacial surges, Minchew hopes the new model will help to illuminate the mechanics of ice flow in other systems, such as the ice sheets in West Antarctica. “It’s within the realm of possibility that we could get 1 to 3 meters of sea-level rise from West Antarctica within our lifetimes,” Minchew says. “This type of shearing mechanism in glacial surges could play a major role in determining the rates of sea-level rise you’d get from West Antarctica.” This research was funded, in part, by the U.S. National Science Foundation and NASA. A surging glacier in the St. Elias Mountains, Canada.
Credit: Gwenn Flowers https://news.mit.edu/2020/secrets-of-plastic-eating-enzyme-petase-improve-recycling-linda-zhong-0608 Graduate student Linda Zhong and professor of biology Anthony Sinskey are studying the plastic-devouring enzyme PETase as a way to improve recycling. Mon, 08 Jun 2020 14:15:01 -0400 https://news.mit.edu/2020/secrets-of-plastic-eating-enzyme-petase-improve-recycling-linda-zhong-0608 Fernanda Ferreira | School of Science It was during a cruise in Alaska that Linda Zhong realized that the world didn’t have to be full of plastic. “I grew up in cities, so you’re very used to seeing all kinds of trash everywhere,” says the graduate student in microbiology. Zhong, who is Canadian and lived in Ottawa growing up and in Toronto during college, routinely saw trash in the waters of the Ottawa River and on the beaches around Lake Ontario. “You never see it as anything other than normal.” Alaska changed that. Seeing the pristine, plastic-free landscape, Zhong decided to find a way to get rid of plastic waste. “I’m a biologist, so I approached it from a biological standpoint,” she says. Plastic pollution is a global problem. According to the United Nations Environment Program, an estimated 8.3 billion tons of plastic have been produced since the 1950s. More than 60 percent of that has ended up in landfills and the environment. A major type of plastic is polyethylene terephthalate, or PET, which most water bottles are made of. Even though PET is easier to recycle than other types of plastic, in reality little of it actually gets recycled. “There are two ways to recycle PET: one is mechanical, and the other’s chemical,” says Zhong. Chemical recycling, which converts PET back to its original raw materials, is theoretically a closed loop in terms of material flow, but not so in practice. “For the most part, no one uses it right now because it’s so costly,” explains Zhong. Mechanical recycling involves melting PET into small pellets that can be used to make new products.
It’s a much cheaper process but, as Zhong says, it can’t be done infinitely. “Companies can recycle a bottle into another a handful of times before the material is too degraded to make bottles,” she says. When this happens, the degraded material is thrown out, ending up in landfills or in the ocean. Zhong’s ultimate goal is to reduce that massive material loss. Before arriving at MIT, Zhong began looking for organisms that could degrade plastic and learned that a group in Japan had published a paper on Ideonella sakaiensis. “It’s this weird environmental microbe that really likes digesting weird compounds,” says Zhong. One of those weird compounds is PET. With the organism found, Zhong set her sights on the enzyme produced by Ideonella sakaiensis that digests plastic: PETase. When Zhong got into MIT, she brought the project with her and proposed it to her advisor, professor of biology Anthony Sinskey. As Zhong delved into the project, her aims changed. “At the beginning, I really wanted to do a screen and rapidly evolve this enzyme to make it better,” she says. That is still ongoing, but as Zhong learned more about PETase, she realized that there was a huge gap in the field’s understanding of how it works. “I keep finding myself stumbling over what the literature says and what my results show,” says Zhong. This led Zhong to shift her experiments to more fundamental questions. “I started developing methods to study this enzyme in more detail,” says Zhong. Previous assays that looked at PETase would measure the breakdown of plastic 24 hours after the enzyme was added. “My method allows me to start taking measurements within 30 minutes, and it shows me a lot more about what the enzyme does over time.” Zhong explains that understanding how PETase truly works is essential before engineering it to digest plastic more efficiently. 
“So, getting that fundamental picture of the enzyme and establishing good methods to study it are what I’m focusing on.” Right now, Zhong is working from home due to the Covid-19 health crisis, balancing her time between reading papers and cooking. “It’s sort of my replacement for experiments, since it’s something I do with my hands at a bench,” says Zhong. But cooking isn’t a perfect substitute and she still can’t wait to get back to the lab. “I really want to find the answers to the questions I’ve just started exploring,” Zhong says.  Zhong and Sinskey received a grant from the Ally of Nature Fund to help fund the PETase project. The fund was established in 2007 by MIT alumni Audrey Buyrn ’58, SM ’63, PhD ’66 and her late husband Alan Phillips ’57, PhD ’61 to provide support for projects whose purpose is to prevent, reduce, and repair humanity’s impact on the natural environment. In 2019, according to Zhong, the fund was a boon. “Because it was a new project in the lab, we had no funding,” she says. The Ally of Nature grant also has no spending restrictions, which is ideal for a project that has moved beyond bioengineering to encompass biochemistry and fundamental biology. “I didn’t have a budget, because I didn’t know what I needed,” says Zhong. “But now I can buy what I need when I need it.” Graduate student Linda Zhong works in Professor Anthony Sinskey’s biology lab on an answer for plastic pollution. Photo courtesy of Linda Zhong. https://news.mit.edu/2020/peatland-drainage-southeast-asia-climate-change-0604 Study reveals drainage, deforestation of the region’s peatlands, which leads to fires, greenhouse emissions, land subsidence. Thu, 04 Jun 2020 11:00:00 -0400 https://news.mit.edu/2020/peatland-drainage-southeast-asia-climate-change-0604 David L. Chandler | MIT News Office In less than three decades, most of Southeast Asia’s peatlands have been wholly or partially deforested, drained, and dried out. 
This has released carbon that accumulated over thousands of years from dead plant matter, and has led to rampant wildfires that spew air pollution and greenhouse gases into the atmosphere. The startling prevalence of such rapid destruction of the peatlands, and their resulting subsidence, is revealed in a new satellite-based study conducted by researchers at MIT and in Singapore and Oregon. The research was published today in the journal Nature Geoscience, in a paper by Alison Hoyt PhD ’17, a postdoc at the Max Planck Institute for Biogeochemistry; MIT professor of civil and environmental engineering Charles Harvey; and two others. Video courtesy of Colin Harvey. Tropical peatlands are permanently flooded forest lands, where the debris of fallen leaves and branches is preserved by the wet environment and continues to accumulate for centuries, rather than continually decomposing as it does in dryland forests. When drained and dried, either to create plantations or to build roads or canals to extract the timber, the peat becomes highly flammable. Even when unburned it rapidly decomposes, releasing its accumulated store of carbon. This loss of stored carbon leads to subsidence, the sinking of the ground surface, in vulnerable coastal areas. Until now, measuring the progression of this draining and drying process has required arduous treks through dense forests and wet land, and help from local people who know their way through the remote trackless swampland. There, poles are dug into the ground to provide a reference to measure the subsidence of the land over time as the peat desiccates.
The process is arduous and time-consuming, and thus limited in the areas it can cover. Now, Hoyt explains, the team was able to use precise satellite elevation data gathered over a three-year period to get detailed measurements of the degree of subsidence over an area of 2.7 million hectares mostly in Malaysia and Indonesia — more than 10 percent of the total area covered by peatlands in the Southeast Asia region. Over 90 percent of the peatland area they studied was subsiding, at an average of almost an inch a year (over 1 foot every 15 years). This subsidence poses a threat to these ecosystems, as most coastal peatlands are at or just above sea level. “Peatlands are really unique and carbon rich environments and wetland ecosystems,” Hoyt says.
So measuring rates of subsidence is basically equivalent to measuring emissions of carbon dioxide,” says Harvey, who is also a principal investigator at the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore. Some analysts had previously thought that the draining of peatland forests to make way for palm oil plantations was the major cause of peatland loss, but the new study shows that subsidence is widespread across peatlands under a diverse set of land uses. This subsidence is driven by the drainage of tropical peatlands, mostly for the expansion of agriculture, but also by other activities, such as creating canals for floating timber out of the forests, and digging drainage ditches alongside roads, which can drain large surrounding areas. All of these factors, it turns out, have contributed significantly to the extreme loss of peatlands in the region. One longstanding controversy that this new research could help to address is how long the peatland subsidence continues after the lands are drained. Plantation owners have said that this is temporary and the land quickly stabilizes, while some conservation advocates say the process continues, leaving large regions highly vulnerable to flooding as sea levels rise, since most of these lands are only slightly above sea level. The new data suggest that subsidence continues over time, though the rate does slow down. The satellite measurements used for this study were gathered between 2007 and 2011 using a method called Interferometric Synthetic Aperture Radar (InSAR), which can detect changes in surface elevation with an accuracy of centimeters or even millimeters.
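Harvey's equivalence between subsidence and carbon dioxide emissions can be turned into a rough order-of-magnitude estimate. In the sketch below, the area and subsidence rate come from the article, but the carbon density of peat is an assumed round value, so the result is illustrative only:

```python
# Order-of-magnitude sketch of the "subsidence ~ CO2 emissions" equivalence.
# The peat carbon density is an assumed round value, not from the study.

HA_TO_M2 = 10_000.0
CO2_PER_C = 44.0 / 12.0            # mass ratio of CO2 to carbon

area_m2 = 2.7e6 * HA_TO_M2         # 2.7 million hectares, from the article
rate_m_per_yr = 0.025              # "almost an inch a year" ~ 2.5 cm
carbon_kg_per_m3 = 60.0            # assumed carbon density of peat

peat_lost_m3 = area_m2 * rate_m_per_yr        # peat volume lost per year
carbon_kg = peat_lost_m3 * carbon_kg_per_m3   # carbon released per year
co2_megatonnes = carbon_kg * CO2_PER_C / 1e9  # Mt of CO2 per year

print(f"roughly {co2_megatonnes:.0f} Mt CO2 per year")
```

Under these assumptions the studied area alone would release on the order of 150 megatonnes of CO2 per year, which is why the satellite subsidence maps double as emissions estimates.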
Though the satellites that produced these data sets are no longer in operation, newer Japanese satellites are now gathering similar data, and the team hopes to do follow-up studies using some of the newer data. “This is definitely a proof of concept on how satellite data can help us understand environmental changes happening across the whole region,” Hoyt says. That could help in monitoring regional greenhouse gas output, but could also help in implementing and monitoring local regulations on land use. “This has really exciting management implications, because it could allow us to verify management practices and track hotspots of subsidence,” she says. While there has been little regional interest in curbing peatland drainage to reduce greenhouse gas emissions, the serious risk of uncontrollable fires in these dried peatlands provides a strong motivation to try to preserve and restore these ecosystems, Harvey says. “These plumes of smoke that engulf the region are a problem that everyone there recognizes.” “This new approach … allows peat subsidence to be easily monitored over very large spatial extents and a diversity of settings that would be impossible using other approaches,” says David Wardle, the Smithsonian Professor of Forest Ecology at Nanyang Technological University in Singapore, who was not associated with this research. “It is in my opinion an important breakthrough that moves forward our understanding of the serious environmental problems that have emerged from peat forest clearing and its conversion and degradation, and, alarmingly, highlights that the problems are worse than we thought they were.” The research team also included Estelle Chaussard of the University of Oregon and Sandra Seppalainen ’16.
The work was supported by the National Research Foundation, Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) program, the Singapore-MIT Alliance for Research and Technology (SMART), the National Science Foundation, and MIT’s Environmental Solutions Initiative. In this photo, Alison Hoyt stands on top of a log during a research trip in a peat swamp forest in Borneo. Tropical peatlands are permanently flooded forest lands, where the debris of fallen leaves and branches is preserved by the wet environment and continues to accumulate for centuries, rather than continually decomposing as it does in dryland forests. Courtesy of Alison Hoyt https://news.mit.edu/2020/reflecting-sunlight-cool-planet-storm-0602 Solar geoengineering proposals will weaken extratropical storm tracks in both hemispheres, scientists find. Tue, 02 Jun 2020 09:10:35 -0400 https://news.mit.edu/2020/reflecting-sunlight-cool-planet-storm-0602 Jennifer Chu | MIT News Office How can the world combat the continued rise in global temperatures? How about shading the Earth from a portion of the sun’s heat by injecting the stratosphere with reflective aerosols? After all, volcanoes do essentially the same thing, albeit in short, dramatic bursts: When a Vesuvius erupts, it blasts fine ash into the atmosphere, where the particles can linger as a kind of cloud cover, reflecting solar radiation back into space and temporarily cooling the planet. Some researchers are exploring proposals to engineer similar effects, for example by launching reflective aerosols into the stratosphere — via planes, balloons, and even blimps — in order to block the sun’s heat and counteract global warming. But such solar geoengineering schemes, as they are known, could have other long-lasting effects on the climate. Now scientists at MIT have found that solar geoengineering would significantly change extratropical storm tracks — the zones in the middle and high latitudes where storms form year-round and are steered by the jet stream across the oceans and land. Extratropical storm tracks give rise to extratropical cyclones, and not their tropical cousins, hurricanes. The strength of extratropical storm tracks determines the severity and frequency of storms such as nor’easters in the United States. The team considered an idealized scenario in which solar radiation was reflected enough to offset the warming that would occur if carbon dioxide were to quadruple in concentration.
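The size of that idealized offset can be checked with a textbook radiative balance. The simplified CO2 forcing formula and the albedo used below are standard back-of-envelope approximations, not numbers taken from the study:

```python
import math

# Back-of-envelope sizing of the idealized scenario: how much sunlight must
# be blocked to offset quadrupled CO2? Uses the common simplified forcing
# formula F = 5.35 * ln(C/C0) W/m^2 and a round planetary albedo of 0.3 --
# textbook approximations, not values from the study itself.

SOLAR_CONSTANT = 1361.0   # W/m^2 at the top of the atmosphere
ALBEDO = 0.3              # fraction of sunlight reflected anyway

co2_forcing = 5.35 * math.log(4.0)   # ~7.4 W/m^2 for 4x CO2

# Absorbed sunlight averaged over the sphere is S * (1 - albedo) / 4, so the
# required fractional cut in incoming sunlight is:
fraction_blocked = co2_forcing / (SOLAR_CONSTANT * (1 - ALBEDO) / 4)

print(f"block ~{fraction_blocked:.1%} of incoming sunlight")
```

Under these assumptions, offsetting quadrupled CO2 means reflecting roughly 3 percent of incoming sunlight, which conveys the scale of the aerosol injection such schemes imagine.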
In a number of global climate models under this scenario, the strength of storm tracks in both the northern and southern hemispheres weakened significantly in response. Weakened storm tracks would mean less powerful winter storms, but the team cautions that weaker storm tracks also lead to stagnant conditions, particularly in summer, and less wind to clear away air pollution. Changes in winds could also affect the circulation of ocean waters and, in turn, the stability of ice sheets. “About half the world’s population lives in the extratropical regions where storm tracks dominate weather,” says Charles Gertler, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “Our results show that solar geoengineering will not simply reverse climate change. Instead, it has the potential itself to induce novel changes in climate.” Gertler and his colleagues have published their results this week in the journal Geophysical Research Letters. Co-authors include EAPS Professor Paul O’Gorman, along with Ben Kravitz of Indiana University, John Moore of Beijing Normal University, Steven Phipps of the University of Tasmania, and Shingo Watanabe of the Japan Agency for Marine-Earth Science and Technology.

A not-so-sunny picture

Scientists have previously modeled what Earth’s climate might look like if solar geoengineering scenarios were to play out on a global scale, with mixed results. On the one hand, spraying aerosols into the stratosphere would reduce incoming solar heat and, to a degree, counteract the warming caused by carbon dioxide emissions.
On the other hand, such cooling of the planet would not prevent other greenhouse gas-induced effects such as regional reductions in rainfall and ocean acidification.

There have also been signs that intentionally reducing solar radiation would shrink the temperature difference between the Earth’s equator and poles or, in climate parlance, weaken the planet’s meridional temperature gradient, cooling the equator while the poles continue to warm. This last consequence was especially intriguing to Gertler and O’Gorman.

“Storm tracks feed off of meridional temperature gradients, and storm tracks are interesting because they help us to understand weather extremes,” Gertler says. “So we were interested in how geoengineering affects storm tracks.”

The team looked at how extratropical storm tracks might change under a scenario of solar geoengineering known to climate scientists as experiment G1 of the Geoengineering Model Intercomparison Project (GeoMIP), a project that provides various geoengineering scenarios for scientists to run on climate models to assess their climate effects.

The G1 experiment assumes an idealized scenario in which a solar geoengineering scheme blocks enough solar radiation to counterbalance the warming that would occur if carbon dioxide concentrations were to quadruple.

The researchers used results from various climate models run forward in time under the conditions of the G1 experiment. They also used results from a more sophisticated geoengineering scenario, with a doubling of carbon dioxide concentrations and aerosols injected into the stratosphere at more than one latitude. In each model, they recorded the day-to-day change in sea level pressure at various locations along the storm tracks. These changes reflect the passage of storms and measure a storm track’s energy.

“If we look at the variance in sea level pressure, we have a sense of how often and how strongly cyclones pass over each area,” Gertler explains.
“We then average the variance across the whole extratropical region, to get an average value of storm track strength for the northern and southern hemispheres.”

An imperfect counterbalance

Their results, across climate models, showed that solar geoengineering would weaken storm tracks in both the Northern and Southern hemispheres. Depending on the scenario they considered, the storm track in the Northern Hemisphere would be 5 to 17 percent weaker than it is today.

“A weakened storm track, in both hemispheres, would mean weaker winter storms but also lead to more stagnant weather, which could affect heat waves,” Gertler says. “Across all seasons, this could affect ventilation of air pollution. It also may contribute to a weakening of the hydrological cycle, with regional reductions in rainfall. These are not good changes, compared to a baseline climate that we are used to.”

The researchers were curious to see how the same storm tracks would respond to global warming alone, without the addition of solar geoengineering, so they ran the climate models again under several warming-only scenarios. Surprisingly, they found that, in the Northern Hemisphere, global warming would also weaken storm tracks, by the same magnitude as with the addition of solar geoengineering. This suggests that solar geoengineering, and efforts to cool the Earth by reducing incoming heat, would not do much to alter global warming’s effects, at least on storm tracks — a puzzling outcome that the researchers are unsure how to explain.

In the Southern Hemisphere, there is a slightly different story. They found that global warming alone would strengthen storm tracks there, whereas the addition of solar geoengineering would prevent this strengthening, and even further, would weaken the storm tracks there.

“In the Southern Hemisphere, winds drive ocean circulation, which in turn could affect uptake of carbon dioxide, and the stability of the Antarctic ice sheet,” O’Gorman adds.
“So how storm tracks change over the Southern Hemisphere is quite important.”

The team also observed that the weakening of storm tracks was strongly correlated with changes in temperature and humidity. Specifically, the climate models showed that in response to reduced incoming solar radiation, the equator cooled significantly as the poles continued to warm. This reduced temperature gradient appears to be sufficient to explain the weakening storm tracks — a result that the group is the first to demonstrate.

“This work highlights that solar geoengineering is not reversing climate change, but is substituting one unprecedented climate state for another,” Gertler says. “Reflecting sunlight isn’t a perfect counterbalance to the greenhouse effect.”

Adds O’Gorman: “There are multiple reasons to avoid doing this, and instead to favor reducing emissions of CO2 and other greenhouse gases.”

This research was funded, in part, by the National Science Foundation, NASA, and the Industry and Foundation sponsors of the MIT Joint Program on the Science and Policy of Global Change.

MIT researchers find that extratropical storm tracks — the blue regions of storminess in the Earth’s middle latitudes — would change significantly with solar geoengineering efforts. Image: Courtesy of the researchers
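The storm-track strength metric the researchers describe — the variance of day-to-day sea level pressure changes, averaged over the extratropical band — can be sketched roughly as follows. This is an illustration on synthetic data: the grid, latitude band, and pressure field are invented, not taken from the study.

```python
# Illustrative sketch (not the study's code): storm-track strength as the
# variance of 24-hour sea level pressure (SLP) changes, averaged over an
# extratropical latitude band. The SLP field here is synthetic noise.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily SLP anomalies (hPa): time x latitude x longitude
n_days, n_lat, n_lon = 365, 90, 180
slp = rng.normal(0.0, 5.0, size=(n_days, n_lat, n_lon))
lats = np.linspace(-89.5, 89.5, n_lat)

def storm_track_strength(slp, lats, lat_min=30.0, lat_max=60.0):
    """Variance of day-to-day SLP changes, averaged over a latitude band."""
    dslp = np.diff(slp, axis=0)        # day-to-day pressure change
    var = dslp.var(axis=0)             # temporal variance at each grid point
    band = (np.abs(lats) >= lat_min) & (np.abs(lats) <= lat_max)
    return var[band, :].mean()         # band-averaged storm-track strength

baseline = storm_track_strength(slp, lats)
# A uniformly damped pressure field registers as a weaker storm track:
weakened = storm_track_strength(0.9 * slp, lats)
print(weakened / baseline)             # ~0.81: variance scales with amplitude squared
```

Because variance scales with the square of the amplitude, even modest damping of pressure swings shows up clearly in this kind of metric.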
https://news.mit.edu/2020/machine-learning-map-ocean-0529 An MIT-developed technique could aid in tracking the ocean’s health and productivity. Fri, 29 May 2020 14:00:00 -0400 https://news.mit.edu/2020/machine-learning-map-ocean-0529 Jennifer Chu | MIT News Office

On land, it’s fairly obvious where one ecological region ends and another begins, for instance at the boundary between a desert and a savanna. In the ocean, much of life is microscopic and far more mobile, making it challenging for scientists to map the boundaries between ecologically distinct marine regions.

One way scientists delineate marine communities is through satellite images of chlorophyll, the green pigment produced by phytoplankton. Chlorophyll concentrations can indicate how rich or productive the underlying ecosystem might be in one region versus another. But chlorophyll maps can only give an idea of the total amount of life that might be present in a given region.
Two regions with the same concentration of chlorophyll may in fact host very different combinations of plant and animal life.

“It’s like if you were to look at all the regions on land that don’t have a lot of biomass, that would include Antarctica and the Sahara, even though they have completely different ecological assemblages,” says Maike Sonnewald, a former postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences.

Now Sonnewald and her colleagues at MIT have developed an unsupervised machine-learning technique that automatically combs through a highly complicated set of global ocean data to find commonalities between marine locations, based on the ratios of, and interactions between, multiple phytoplankton species. With their technique, the researchers found that the ocean can be split into more than 100 types of “provinces” that are distinct in their ecological makeup. Any given location in the ocean would conceivably fit into one of these 100 ecological provinces.

The researchers then looked for similarities between these 100 provinces, ultimately grouping them into 12 more general categories. From these “megaprovinces,” they were able to see that, while some had the same total amount of life within a region, they had very different community structures, or balances of animal and plant species. Sonnewald says capturing these ecological subtleties is essential to tracking the ocean’s health and productivity.

“Ecosystems are changing with climate change, and the community structure needs to be monitored to understand knock-on effects on fisheries and the ocean’s capacity to draw down carbon dioxide,” Sonnewald says. “We can’t fully understand these vital dynamics with conventional methods, that to date don’t include the ecology that’s there.
But our method, combined with satellite data and other tools, could offer important progress.”

Sonnewald, who is now an associate research scholar at Princeton University and a visitor at the University of Washington, has reported the results today in the journal Science Advances. Her coauthors at MIT are Senior Research Scientist Stephanie Dutkiewicz, Principal Research Engineer Christopher Hill, and Research Scientist Gael Forget.

Rolling out a data ball

The team’s new machine-learning technique, which they’ve named SAGE, for the Systematic AGgregated Eco-province method, is designed to take large, complicated datasets and probabilistically project that data down to a simpler, lower-dimensional dataset.

“It’s like making cookies,” Sonnewald says. “You take this horrifically complicated ball of data and roll it out to reveal its elements.”

In particular, the researchers used a clustering algorithm that Sonnewald says is designed to “crawl along a dataset” and home in on regions with a large density of points — a sign that these points share something in common. Sonnewald and her colleagues set this algorithm loose on ocean data from MIT’s Darwin Project, a three-dimensional model of the global ocean that combines a model of the ocean’s climate, including wind, current, and temperature patterns, with an ocean ecology model.
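The article doesn’t name the specific clustering algorithm, so the sketch below stands in with scikit-learn’s DBSCAN, a standard density-based method that finds groups of points in dense neighborhoods, exactly the "crawl along a dataset" behavior described. The 51-dimensional "community" vectors here are synthetic, not Darwin Project output.

```python
# Hedged sketch: density-based clustering of plankton-community vectors.
# The study's actual algorithm and data pipeline are not given in the article;
# DBSCAN and all numbers below are stand-ins for illustration.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(42)

# Synthetic stand-in for model output: each row is one ocean grid point,
# each of the 51 columns one phytoplankton type's (arbitrary) abundance.
centers = rng.random((3, 51))        # three distinct "provinces"
points = np.vstack([c + rng.normal(0.0, 0.01, (200, 51)) for c in centers])

# Points within eps of each other, in groups of at least min_samples,
# coalesce into clusters; sparse points would be labeled -1 (noise).
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(points)
print(sorted(set(labels)))           # [0, 1, 2]: three dense clusters recovered
```

In the real pipeline each cluster would correspond to one ecological province, and every ocean grid point would inherit its cluster's label.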
That model includes 51 species of phytoplankton and the ways in which each species grows and interacts with the others, as well as with the surrounding climate and available nutrients.

If one were to try to look through this very complicated, 51-layered space of data, for every available point in the ocean, to see which points share common traits, Sonnewald says the task would be “humanly intractable.” With the team’s unsupervised machine-learning algorithm, such commonalities “begin to crystallize out a bit.”

This first “data cleaning” step in the team’s SAGE method was able to parse the global ocean into about 100 different ecological provinces, each with a distinct balance of species.

The researchers assigned each available location in the ocean model to one of the 100 provinces, and assigned a color to each province. They then generated a map of the global ocean, colorized by province type.

“In the Southern Ocean around Antarctica, there’s burgundy and orange colors that are shaped how we expect them, in these zonal streaks that encircle Antarctica,” Sonnewald says. “Together with other features, this gives us a lot of confidence that our method works and makes sense, at least in the model.”

Ecologies unified

The team then looked for ways to further simplify the more than 100 provinces they identified, to see whether they could pick out commonalities even among these ecologically distinct regions.

“We started thinking about things like, how are groups of people distinguished from each other? How do we see how connected to each other we are? And we used this type of intuition to see if we could quantify how ecologically similar different provinces are,” Sonnewald says.

To do this, the team applied techniques from graph theory to represent all 100 provinces in a single graph, according to biomass — a measure that’s analogous to the amount of chlorophyll produced in a region.
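The article doesn’t spell out the graph construction, so here is one minimal graph-theoretic grouping on toy data: link provinces whose biomass is similar, then read connected components off as "megaprovince"-style groups. The use of networkx, the biomass values, and the similarity threshold are all my assumptions, not the study’s method.

```python
# Illustration only: grouping toy "provinces" with a similarity graph.
# The study's actual graph construction is not described in the article.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
# Ten toy provinces: five low-biomass and five high-biomass (arbitrary units)
biomass = np.concatenate([rng.normal(1.0, 0.05, 5),
                          rng.normal(5.0, 0.05, 5)])

G = nx.Graph()
G.add_nodes_from(range(len(biomass)))
for i in range(len(biomass)):
    for j in range(i + 1, len(biomass)):
        if abs(biomass[i] - biomass[j]) < 1.0:   # "similar biomass" link
            G.add_edge(i, j)

# Connected components play the role of megaprovinces in this toy version
megaprovinces = list(nx.connected_components(G))
print(len(megaprovinces))   # 2: the low- and high-biomass groups
```

The real analysis compared full community structure, not just a single biomass number, which is exactly how provinces with equal biomass can still land in different megaprovinces.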
They chose to group the 100 provinces into 12 general categories, or “megaprovinces.” When they compared these megaprovinces, they found that those with a similar biomass were composed of very different biological species.

“For instance, provinces D and K have almost the same amount of biomass, but when we look deeper, K has diatoms and hardly any prokaryotes, while D has hardly any diatoms, and a lot of prokaryotes. But from a satellite, they could look the same,” Sonnewald says. “So our method could start the process of adding the ecological information to bulk chlorophyll measures, and ultimately aid observations.”

The team has developed an online widget that researchers can use to find other similarities among the 100 provinces. In their paper, Sonnewald’s colleagues chose to group the provinces into 12 categories. But others may want to divide the provinces into more groups, and drill down into the data to see what traits are shared among these groups.

Sonnewald is sharing the tool with oceanographers who want to identify precisely where regions of a particular ecological makeup are located, so they could, for example, send ships to sample in those regions, and not in others where the balance of species might be slightly different.

“Instead of guiding sampling with tools based on bulk chlorophyll and guessing where the interesting ecology could be found, with this method you can surgically go in and say, ‘this is what the model says you might find here,’” Sonnewald says. “Knowing what species assemblages are where, for things like ocean science and global fisheries, is really powerful.”

This research was funded, in part, by NASA and the Jet Propulsion Laboratory.

A machine-learning technique developed at MIT combs through global ocean data to find commonalities between marine locations, based on interactions between phytoplankton species.
Using this approach, researchers have determined that the ocean can be split into over 100 types of “provinces,” and 12 “megaprovinces,” that are distinct in their ecological makeup. Image: Courtesy of the researchers, edited by MIT News.
With their technique, the researchers found that the ocean can be split into over 100 types of “provinces” that are distinct in their ecological makeup. Any given location in the ocean would conceivably fit into one of these 100 ecological provinces.The researchers then looked for similarities between these 100 provinces, ultimately grouping them into 12 more general categories. From these “megaprovinces,” they were able to see that, while some had the same total amount of life within a region, they had very different community structures, or balances of animal and plant species. Sonnewald says capturing these ecological subtleties is essential to tracking the ocean’s health and productivity.“Ecosystems are changing with climate change, and the community structure needs to be monitored to understand knock-on effects on fisheries and the ocean’s capacity to draw down carbon dioxide,” Sonnewald says. “We can’t fully understand these vital dynamics with conventional methods, which to date don’t include the ecology that’s there. But our method, combined with satellite data and other tools, could offer important progress.” Sonnewald, who is now an associate research scholar at Princeton University and a visitor at the University of Washington, has reported the results today in the journal Science Advances. Her coauthors at MIT are Senior Research Scientist Stephanie Dutkiewicz, Principal Research Engineer Christopher Hill, and Research Scientist Gael Forget.Rolling out a data ballThe team’s new machine-learning technique, which they’ve named SAGE, for the Systematic AGgregated Eco-province method, is designed to take large, complicated datasets, and probabilistically project that data down to a simpler, lower-dimensional dataset.“It’s like making cookies,” Sonnewald says.
“You take this horrifically complicated ball of data and roll it out to reveal its elements.”In particular, the researchers used a clustering algorithm that Sonnewald says is designed to “crawl along a dataset” and home in on regions with a large density of points — a sign that these points share something in common. Sonnewald and her colleagues set this algorithm loose on ocean data from MIT’s Darwin Project, a three-dimensional model of the global ocean that combines a model of the ocean’s climate, including wind, current, and temperature patterns, with an ocean ecology model. That model includes 51 species of phytoplankton and the ways in which each species grows and interacts with the others as well as with the surrounding climate and available nutrients.If one were to try to look through this very complicated, 51-layered space of data for every available point in the ocean, to see which points share common traits, Sonnewald says the task would be “humanly intractable.” With the team’s unsupervised machine learning algorithm, such commonalities “begin to crystallize out a bit.”This first “data cleaning” step in the team’s SAGE method was able to parse the global ocean into about 100 different ecological provinces, each with a distinct balance of species.The researchers assigned each available location in the ocean model to one of the 100 provinces, and assigned a color to each province. They then generated a map of the global ocean, colorized by province type. “In the Southern Ocean around Antarctica, there’s burgundy and orange colors that are shaped how we expect them, in these zonal streaks that encircle Antarctica,” Sonnewald says.
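The density-based clustering idea described here — crawling along the data and concentrating on dense regions, with sparse points treated as unrepresentative — can be illustrated with a minimal DBSCAN-style sketch. This is not the SAGE implementation: the two-dimensional points, `eps`, and `min_pts` below are stand-ins for the model's reduced ecosystem vectors and whatever parameters the team actually used.

```python
from math import dist

def dbscan(points, eps, min_pts):
    """Minimal density-based clustering (DBSCAN-style).

    Points in dense neighborhoods are grouped into clusters;
    isolated points are labeled -1 (noise)."""
    labels = [None] * len(points)  # None = unvisited, -1 = noise

    def neighbors(i):
        return [j for j, q in enumerate(points) if dist(points[i], q) <= eps]

    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1             # too sparse: mark as noise for now
            continue
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster    # noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            more = neighbors(j)
            if len(more) >= min_pts:   # core point: keep expanding
                queue.extend(more)
        cluster += 1
    return labels

# Two dense blobs of hypothetical low-dimensional ecosystem vectors,
# plus one isolated outlier.
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (0.05, 0.05),
       (3.0, 3.0), (3.1, 3.0), (3.0, 3.1), (3.1, 3.1), (3.05, 3.05),
       (10.0, 10.0)]
labels = dbscan(pts, eps=0.2, min_pts=4)
```

The two blobs come out as separate clusters and the outlier is flagged as noise; in the SAGE setting, each recovered cluster would correspond to one candidate ecological province.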
“Together with other features, this gives us a lot of confidence that our method works and makes sense, at least in the model.”Ecologies unifiedThe team then looked for ways to further simplify the more than 100 provinces they identified, to see whether they could pick out commonalities even among these ecologically distinct regions.“We started thinking about things like, how are groups of people distinguished from each other? How do we see how connected to each other we are? And we used this type of intuition to see if we could quantify how ecologically similar different provinces are,” Sonnewald says.To do this, the team applied techniques from graph theory to represent all 100 provinces in a single graph, according to biomass — a measure that’s analogous to the amount of chlorophyll produced in a region. They chose to group the 100 provinces into 12 general categories, or “megaprovinces.” When they compared these megaprovinces, they found that those that had a similar biomass were composed of very different biological species.“For instance, provinces D and K have almost the same amount of biomass, but when we look deeper, K has diatoms and hardly any prokaryotes, while D has hardly any diatoms, and a lot of prokaryotes. But from a satellite, they could look the same,” Sonnewald says. “So our method could start the process of adding the ecological information to bulk chlorophyll measures, and ultimately aid observations.”The team has developed an online widget that researchers can use to find other similarities among the 100 provinces. In their paper, Sonnewald’s colleagues chose to group the provinces into 12 categories. 
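The second step — collapsing the ~100 provinces into a smaller number of megaprovinces — can be approximated with a simple agglomerative sketch. The paper's actual method is graph-based; here, as an illustrative assumption, each province is reduced to a single biomass value and the closest clusters are repeatedly merged until `k` groups remain.

```python
def agglomerate(values, k):
    """Greedy single-linkage agglomeration: repeatedly merge the two
    clusters whose members are most similar until k clusters remain."""
    clusters = [[i] for i in range(len(values))]

    def gap(a, b):  # single-linkage distance between two clusters
        return min(abs(values[i] - values[j]) for i in a for j in b)

    while len(clusters) > k:
        a, b = min(
            ((x, y) for x in range(len(clusters))
                    for y in range(x + 1, len(clusters))),
            key=lambda p: gap(clusters[p[0]], clusters[p[1]]))
        clusters[a] += clusters.pop(b)  # absorb the closer cluster
    return clusters

# Hypothetical per-province biomass summaries (arbitrary units):
# three low-biomass, two mid-biomass, and two high-biomass provinces.
biomass = [0.9, 1.0, 1.1, 5.0, 5.2, 9.7, 10.0]
groups = agglomerate(biomass, k=3)
```

Because `k` is a free parameter, the same routine supports the widget-style exploration described next: rerunning with a larger `k` splits the provinces into finer groups.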
But others may want to divide the provinces into more groups, and drill down into the data to see what traits are shared among these groups.Sonnewald is sharing the tool with oceanographers who want to identify precisely where regions of a particular ecological makeup are located, so they could, for example, send ships to sample in those regions, and not in others where the balance of species might be slightly different.“Instead of guiding sampling with tools based on bulk chlorophyll, and guessing where the interesting ecology could be found, with this method you can surgically go in and say, ‘this is what the model says you might find here,’” Sonnewald says. “Knowing what species assemblages are where, for things like ocean science and global fisheries, is really powerful.”This research was funded, in part, by NASA and the Jet Propulsion Laboratory. A machine-learning technique developed at MIT combs through global ocean data to find commonalities between marine locations, based on interactions between phytoplankton species. Using this approach, researchers have determined that the ocean can be split into over 100 types of “provinces,” and 12 “megaprovinces,” that are distinct in their ecological makeup. Image: Courtesy of the researchers, edited by MIT News. https://news.mit.edu/2020/making-nuclear-energy-cost-competitive-0527 Three MIT teams to explore novel ways to reduce operations and maintenance costs of advanced nuclear reactors. Wed, 27 May 2020 17:00:01 -0400 https://news.mit.edu/2020/making-nuclear-energy-cost-competitive-0527 Department of Nuclear Science and Engineering Nuclear energy is a low-carbon energy source that is vital to decreasing carbon emissions. A critical factor in its continued viability as a future energy source is finding novel and innovative ways to reduce operations and maintenance (O&M) costs in the next generation of advanced reactors. The U.S.
Department of Energy’s Advanced Research Projects Agency-Energy (ARPA-E) established the Generating Electricity Managed by Intelligent Nuclear Assets (GEMINA) program to do exactly this. Through $27 million in funding, GEMINA is accelerating research, discovery, and development of new digital technologies that would produce effective and sustainable reductions in O&M costs. Three MIT research teams have received ARPA-E GEMINA awards to generate critical data and strategies to reduce O&M costs for the next generation of nuclear power plants to make them more economical, flexible, and efficient. The MIT teams include researchers from the Department of Nuclear Science and Engineering (NSE), the Department of Civil and Environmental Engineering, and the MIT Nuclear Reactor Laboratory. By leveraging the state of the art in high-fidelity simulations and unique MIT research reactor capabilities, the MIT-led teams will collaborate with leading industry partners with practical experience in O&M and automation to support the development of digital twins. Digital twins are virtual replicas of physical systems that are programmed to have the same properties, specifications, and behavioral characteristics as actual systems. The goal is to apply artificial intelligence, advanced control systems, predictive maintenance, and model-based fault detection within the digital twins to inform the design of O&M frameworks for advanced nuclear power plants. In a project focused on developing high-fidelity digital twins for the critical systems in advanced nuclear reactors, NSE professors Emilio Baglietto and Koroush Shirvan will collaborate with researchers from GE Research and GE Hitachi. The GE Hitachi BWRX-300, a small modular reactor designed to provide flexible energy generation, will serve as a reference design. The BWRX-300 is a promising small modular reactor concept that aims to be competitive with natural gas to realize market penetration in the United States.
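Model-based fault detection, one of the digital-twin functions mentioned above, boils down to comparing live sensor readings against the twin's prediction and flagging sustained deviations. A minimal sketch follows; the signal (a coolant-temperature trace), the threshold, and the persistence rule are all hypothetical, not from any of the funded projects.

```python
def detect_faults(measured, predicted, threshold, patience=3):
    """Flag a fault when the residual |measured - predicted| exceeds
    `threshold` for `patience` consecutive samples (ignoring brief blips)."""
    streak = 0
    alarms = []
    for t, (m, p) in enumerate(zip(measured, predicted)):
        streak = streak + 1 if abs(m - p) > threshold else 0
        if streak == patience:
            alarms.append(t - patience + 1)  # index where the excursion began
    return alarms

# Hypothetical trace: the twin predicts a steady 300 K, but the sensor
# drifts upward from sample 4 onward.
predicted = [300.0] * 10
measured = [300.1, 299.8, 300.2, 300.0, 303.5,
            304.0, 304.8, 305.5, 306.0, 306.2]
alarms = detect_faults(measured, predicted, threshold=2.0)
```

In a real twin the `predicted` series would come from a physics or AI surrogate model rather than a constant, and the persistence rule would be tuned to the component's dynamics.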
The team will assemble, validate, and exercise high-fidelity digital twins of the BWRX-300 systems. The digital twins will address mechanical and thermal fatigue failure modes that drive O&M activities; these failure modes extend well beyond the selected BWRX-300 components to all advanced reactors where a flowing fluid is present. The role of high-fidelity resolution is central to the approach, as it addresses the unique challenges of the nuclear industry. NSE will leverage the tremendous advancements it has achieved in recent years to accelerate the transition of the nuclear industry toward high-fidelity simulations in the form of computational fluid dynamics. The high spatial and temporal resolution of the simulations, combined with the AI-enabled digital twins, offers the opportunity to deliver predictive maintenance approaches that can greatly reduce the operating cost of nuclear stations. GE Research represents an ideal partner, given its tremendous experience in developing digital twins and its close links to GE Hitachi and the BWRX-300 design team. This team is particularly well positioned to tackle the regulatory challenges of applying digital twins to safety-grade components through explicit characterization of uncertainties. This three-year MIT-led project is supported by an award of $1,787,065. MIT Principal Research Engineer and Interim Director of the Nuclear Reactor Lab Gordon Kohse will lead a collaboration with MPR Associates to generate critical irradiation data to be used in digital twinning of molten-salt reactors (MSRs). MSRs produce radioactive materials when nuclear fuel is dissolved in a molten salt at high temperature and undergoes fission as it flows through the reactor core. Understanding the behavior of these radioactive materials is important for MSR design and for predicting and reducing O&M costs — a vital step in bringing safe, clean, next-generation nuclear power to market.
The MIT-led team will use the MIT nuclear research reactor’s unique capability to provide data to determine how radioactive materials are generated and transported in MSR components. Digital twins of MSRs will require this critical data, which is currently unavailable. The MIT team will monitor radioactivity during and after irradiation of molten salts containing fuel in materials that will be used in MSR construction. Along with Kohse, the MIT research team includes David Carpenter and Kaichao Sun from the MIT Nuclear Reactor Laboratory, and Charles Forsberg and Professor Mingda Li from NSE. Storm Kauffman and the MPR Associates team bring a wealth of nuclear industry experience to the project and will ensure that the data generated aligns with the needs of reactor developers. This two-year project is supported by an award of $899,825. In addition to these two MIT-led projects, a third MIT team will work closely with the Electric Power Research Institute (EPRI) on a new paradigm for reducing advanced reactor O&M. This is a proof-of-concept study that will explore how to move away from the traditional maintenance and repair approach. The EPRI-led project will examine a “replace and refurbish” model in which components are intentionally designed and tested for shorter and more predictable lifetimes with the potential for game-changing O&M cost savings. This approach is similar to that adopted by the commercial airline industry, in which multiple refurbishments — including engine replacement — can keep a jet aircraft flying economically over many decades. The study will evaluate several advanced reactor designs with respect to cost savings and other important economic benefits, such as increased sustainability for suppliers. The MIT team brings together Jeremy Gregory from the Department of Civil and Environmental Engineering, Lance Snead from the Nuclear Reactor Laboratory, and professors Jacopo Buongiorno and Koroush Shirvan from NSE.  
“This collaborative project will take a fresh look at reducing the operation and maintenance cost by allowing nuclear technology to better adapt to the ever-changing energy market conditions. MIT’s role is to identify cost-reducing pathways that would be applicable across a range of promising advanced reactor technologies. Particularly, we need to incorporate latest advancements in material science and engineering along with civil structures in our strategies,” says MIT project lead Shirvan. The advances by these three MIT teams, along with the six other awardees in the GEMINA program, will provide a framework for more streamlined O&M costs for next-generation advanced nuclear reactors — a critical factor to being competitive with alternative energy sources. MIT teams in the GEMINA program will provide a framework for more streamlined operations and maintenance costs for next-generation advanced nuclear reactors. Photo: Yakov Ostrovsky https://news.mit.edu/2020/solve-mit-builds-partnerships-tackle-complex-challenges-during-covid-19-crisis-0527 Event convened attendees from around the world to discuss impacts of the pandemic and advance solutions to pressing global problems. Wed, 27 May 2020 16:50:01 -0400 https://news.mit.edu/2020/solve-mit-builds-partnerships-tackle-complex-challenges-during-covid-19-crisis-0527 Andrea Snyder | MIT Solve In response to the Covid-19 pandemic, MIT Solve, a marketplace for social impact innovation, transformed its annual flagship event, Solve at MIT, into an interactive virtual gathering to convene the Solve and MIT communities. 
On May 12, nearly 800 people tuned in from all around the world to take part in Virtual Solve at MIT.The event connected innovators and social impact leaders through breakout sessions to discuss Solve’s Global Challenges and other timely topics, brain trusts to advise 2019 Solver teams, and plenary sessions to feature partnership stories and world-class speakers such as Mary Barra, chairman and CEO of General Motors, in conversation with MIT President L. Rafael Reif; Yo-Yo Ma, world-renowned cellist; and Cady Coleman ’83, former NASA astronaut. Throughout the day, many conversations touched on the broad impacts of Covid-19 — and the importance of building partnerships to scale solutions to pressing global problems. Here are some highlights. “We’re all one crew” The opening plenary kicked off with former NASA astronaut Cady Coleman, as she reflected on her time living in space. Looking down at the Earth, she pondered societal divisions and thought about how “we’re all one crew.”  In the face of Covid-19, Coleman reminded us that despite tragedy, people are finding each other, working together, and getting things done. “There’s a chain to help you go and find those people — you just have to be open to finding [them],” she said. “The solutions are bigger and better together.” Solve partnership stories The theme of partnership resurfaced many times throughout the day, as Solver teams and Solve members shared stories about their work together. “The way that Solve measures its success is in the value and number of partnerships we are able to broker among our community members,” said Hala Hanna, Solve’s managing director of community. For example, Merck for Mothers, which works to end preventable maternal death, alongside other partners, has committed $5 million toward The MOMs Initiative, which supports promising health innovators in sub-Saharan Africa and South Asia. 
The first centerpiece of the initiative is LifeBank, a Frontlines of Health Solver that combines data, smart logistics, and technology to deliver life-saving medical products. LifeBank’s founder and CEO Temie Giwa-Tubosun and Merck for Mothers’ Executive Director Mary-Ann Etiebet first met in 2018 at Solve Challenge Finals.Etiebet was impressed by the simplicity — and effectiveness — of LifeBank’s solution. “So often, when we think about the private sector and global health, we think about big global corporations,” she said. “But really, we should be thinking about local, private providers — people who live and work in these communities — and how we can invest in them to drive progress.”Later in the day, Abhilasha Purwar, founder and CEO of Blue Sky Analytics, and Melinda Marble, executive director of the Patrick J. McGovern Foundation, spoke about their own partnership. In 2019, the McGovern Foundation awarded the $200,000 AI Innovations Prize to four Solver teams. Blue Sky Analytics, a Healthy Cities Solver whose AI-powered platform provides key air quality data and source emissions parameters, was one recipient. “The AI Innovations Prize is the first seed that an entrepreneur needs to get out there and start to accomplish their dream,” said Purwar. Blue Sky Analytics was selected to receive $300,000 in follow-on funding for the prize as well. Purwar was thrilled to receive this additional funding, which will enable Blue Sky Analytics to hire critical engineering talent to further develop its platform. A stretch break with an Olympian To provide a break from the discussions — and from sitting at home desks — Pita Taufatofua, two-time Tongan Olympian, led the audience in a brief stretching session. “There’s been so much talk about what the world’s going to look like after Covid-19,” he said.
“But one thing that hasn’t changed is that as human beings, we have to look after each other, and we also have to look after ourselves — physically and mentally.” His simple routine reminded attendees to take a moment for themselves. Leadership during Covid-19 Mary Barra, chair and CEO of General Motors (GM), and L. Rafael Reif, president of MIT, then discussed leadership and action in the face of Covid-19. Barra shared the story of how GM partnered with Ventec Life Systems to produce masks and critical care ventilators; Ventec had been making 200-300 ventilators a month, but GM wanted to scale production to well over 10,000 a month. To make this happen, GM brought its whole team together. “It was inspiring to see how people volunteered, working 24 hours a day to get the work done,” Barra said. “Things that in a normal corporate time speed would have taken months seemed to be getting done in days and weeks,” said Barra. “What can we do to support people so they can move that quickly [all the time]?” Inspiring innovation World-renowned cellist Yo-Yo Ma closed out the event with a moving performance, and described the parallels between music and innovation: developing meaning, managing transitions, and building trust. Ultimately, “the whole point … is service,” he said. “You use your technique to transcend it in order to serve.” You can watch many of these conversations on Solve’s YouTube channel. Cellist Yo-Yo Ma performed and spoke at Virtual Solve at MIT. Photo courtesy of MIT Solve. https://news.mit.edu/2020/solar-energy-farms-electric-vehicle-batteries-life-0522 Modeling study shows battery reuse systems could be profitable for both electric vehicle companies and grid-scale solar operations. Fri, 22 May 2020 00:00:01 -0400 https://news.mit.edu/2020/solar-energy-farms-electric-vehicle-batteries-life-0522 David L. 
Chandler | MIT News Office As electric vehicles rapidly grow in popularity worldwide, there will soon be a wave of used batteries whose performance is no longer sufficient for vehicles that need reliable acceleration and range. But a new study shows that these batteries could still have a useful and profitable second life as backup storage for grid-scale solar photovoltaic installations, where they could perform for more than a decade in this less demanding role.The study, published in the journal Applied Energy, was carried out by six current and former MIT researchers, including postdoc Ian Mathews and professor of mechanical engineering Tonio Buonassisi, who is head of the Photovoltaics Research Laboratory.As a test case, the researchers examined in detail a hypothetical grid-scale solar farm in California. They studied the economics of several scenarios: building a 2.5-megawatt solar farm alone; building the same array along with a new lithium-ion battery storage system; and building it with a battery array made of repurposed EV batteries that had declined to 80 percent of their original capacity, the point at which they would be considered too weak for continued vehicle use.They found that the new battery installation would not provide a reasonable net return on investment, but that a properly managed system of used EV batteries could be a good, profitable investment as long as the batteries cost less than 60 percent of their original price. Not so easyThe process might sound straightforward, and it has occasionally been implemented in smaller-scale projects, but expanding that to grid scale is not simple, Mathews explains. “There are many issues on a technical level. How do you screen batteries when you take them out of the car to make sure they’re good enough to reuse? 
How do you pack together batteries from different cars in a way that you know that they’ll work well together, and you won’t have one battery that’s much poorer than the others and will drag the performance of the system down?”On the economic side, he says, there are also questions: “Are we sure that there’s enough value left in these batteries to justify the cost of taking them from cars, collecting them, checking them over, and repackaging them into a new application?” For the modeled case under California’s local conditions, the answer seems to be a solid yes, the team found.The study used a semiempirical model of battery degradation, trained using measured data, to predict capacity fade in these lithium-ion batteries under different operating conditions, and found that the batteries could achieve maximum lifetimes and value by operating under relatively gentle charging and discharging cycles — never going above 65 percent of full charge or below 15 percent. This finding challenges some earlier assumptions that running the batteries at maximum capacity initially would provide the most value.“I’ve talked to people who’ve said the best thing to do is just work your battery really hard, and front load all your revenue,” Mathews says. “When we looked at that, it just didn’t make sense at all.” It was clear from the analysis that maximizing the lifetime of the batteries would provide the best returns.How long will they last?One unknown factor is just how long the batteries can continue to operate usefully in this second application. The study made a conservative assumption, that the batteries would be retired from their solar-farm backup service after they had declined down to 70 percent of their rated capacity, from their initial 80 percent (the point when they were retired from EV use). But it may well be, Mathews says, that continuing to operate down to 60 percent of capacity or even lower might prove to be safe and worthwhile. 
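The intuition behind the gentle-cycling result can be illustrated with a toy capacity-fade model. The study used a semiempirical model trained on measured data; the functional form and coefficients below are illustrative assumptions only, chosen so that deeper cycling degrades capacity superlinearly (a common feature of such models).

```python
def cycles_until_retirement(dod, start=0.80, retire=0.70,
                            k=2e-5, stress_exp=1.6):
    """Count charge/discharge cycles until capacity fades from `start`
    to `retire` of original capacity. Per-cycle fade grows superlinearly
    with depth of discharge (dod); k and stress_exp are made-up values,
    not the paper's fitted parameters."""
    fade = k * dod ** stress_exp  # fraction of original capacity lost per cycle
    capacity, cycles = start, 0
    while capacity > retire:
        capacity -= fade
        cycles += 1
    return cycles

gentle = cycles_until_retirement(dod=0.50)  # e.g., cycling between 15% and 65% SOC
harsh = cycles_until_retirement(dod=0.90)   # near-full cycles
```

Under any fade law of this shape, the shallow 15-65 percent window survives far more cycles before hitting the 70 percent retirement point, which is the direction of the paper's finding, though the actual economics depend on the fitted model.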
Longer-term pilot studies will be required to determine that, he says. Many electric vehicle manufacturers are already beginning to do such pilot studies.“That’s a whole area of research in itself,” he says, “because the typical battery has multiple degradation pathways. Trying to figure out what happens when you move into this more rapid degradation phase, it’s an active area of research.” In part, the degradation is determined by the way the batteries are controlled. “So, you might actually adapt your control algorithms over the lifetime of the project, to just really push that out as far as possible,” he says. This is one direction the team will pursue in their ongoing research, he says. “We think this could be a great application for machine-learning methods, trying to figure out the kind of intelligent methods and predictive analytics that adjust those control policies over the life of the project.”The actual economics of such a project could vary widely depending on the local regulatory and rate-setting structures, he explains. For example, some local rules allow the cost of storage systems to be included in the overall cost of a new renewable energy supply, for rate-setting purposes, and others do not. The economics of such systems will be very site-specific, but the California case study is intended to be an illustrative U.S. example.“A lot of states are really starting to see the benefit that storage can provide,” Mathews says. “And this just shows that they should have an allowance that somehow incorporates second-life batteries in those regulations. That could be favorable for them.”A recent report from McKinsey & Company shows that as demand for backup storage for renewable energy projects grows between now and 2030, second-use EV batteries could potentially meet half of that demand, Mathews says.
Some EV companies, he says, including Rivian, founded by an MIT alumnus, are already designing their battery packs specifically to make this end-of-life repurposing as easy as possible.Mathews says that “the point that I made in the paper was that technically, economically, … this could work.” For the next step, he says, “There’s a lot of stakeholders who would need to be involved in this: You need to have your EV manufacturer, your lithium-ion battery manufacturer, your solar project developer, the power electronics guys.” The intent, he says, “was to say, ‘Hey, you guys should actually sit down and really look at this, because we think it could really work.’”The study team included postdocs Bolun Xu and Wei He, MBA student Vanessa Barreto, and research scientist Ian Marius Peters. The work was supported by the European Union’s Horizon 2020 research program, the DOE-NSF Engineering Research Center for Quantum Energy and Sustainable Solar Technologies (QESST), and the Singapore National Research Foundation through the Singapore-MIT Alliance for Research and Technology (SMART). An MIT study shows that electric vehicle batteries could have a useful and profitable second life as backup storage for grid-scale solar photovoltaic installations, where they could perform for more than a decade in this less demanding role. This image shows a ‘cut-away’ view of a lithium-ion battery over a background of cars and solar panels. Image: MIT News https://news.mit.edu/2020/transportation-policymaking-chinese-cities-0521 A new framework for learning from each other. Thu, 21 May 2020 14:50:01 -0400 https://news.mit.edu/2020/transportation-policymaking-chinese-cities-0521 Nancy W. Stauffer | MIT Energy Initiative In recent decades, urban populations in China’s cities have grown substantially, and rising incomes have led to a rapid expansion of car ownership. Indeed, China is now the world’s largest market for automobiles.
The combination of urbanization and motorization has led to an urgent need for transportation policies to address urban problems such as congestion, air pollution, and greenhouse gas emissions. For the past three years, an MIT team led by Joanna Moody, research program manager of the MIT Energy Initiative’s Mobility Systems Center, and Jinhua Zhao, the Edward H. and Joyce Linde Associate Professor in the Department of Urban Studies and Planning (DUSP) and director of MIT’s JTL Urban Mobility Lab, has been examining transportation policy and policymaking in China. “It’s often assumed that transportation policy in China is dictated by the national government,” says Zhao. “But we’ve seen that the national government sets targets and then allows individual cities to decide what policies to implement to meet those targets.” Many studies have investigated transportation policymaking in China’s megacities like Beijing and Shanghai, but few have focused on the hundreds of small- and medium-sized cities located throughout the country. So Moody, Zhao, and their team wanted to consider the process in these overlooked cities. In particular, they asked: how do municipal leaders decide what transportation policies to implement, and can they be better enabled to learn from one another’s experiences? The answers to those questions might provide guidance to municipal decision-makers trying to address the different transportation-related challenges faced by their cities. The answers could also help fill a gap in the research literature. The number and diversity of cities across China has made performing a systematic study of urban transportation policy challenging, yet that topic is of increasing importance. 
In response to local air pollution and traffic congestion, some Chinese cities are now enacting policies to restrict car ownership and use, and those local policies may ultimately determine whether the unprecedented growth in nationwide private vehicle sales will persist in the coming decades. Policy learning Transportation policymakers worldwide benefit from a practice called policy-learning: Decision-makers in one city look to other cities to see what policies have and haven’t been effective. In China, Beijing and Shanghai are usually viewed as trendsetters in innovative transportation policymaking, and municipal leaders in other Chinese cities turn to those megacities as role models. But is that an effective approach for them? After all, their urban settings and transportation challenges are almost certainly quite different. Wouldn’t it be better if they looked to “peer” cities with which they have more in common? Moody, Zhao, and their DUSP colleagues — postdoc Shenhao Wang and graduate students Jungwoo Chun and Xuenan Ni, all in the JTL Urban Mobility Lab — hypothesized an alternative framework for policy-learning in which cities that share common urbanization and motorization histories would share their policy knowledge. Similar development of city spaces and travel patterns could lead to the same transportation challenges, and therefore to similar needs for transportation policies. To test their hypothesis, the researchers needed to address two questions. To start, they needed to know whether Chinese cities have a limited number of common urbanization and motorization histories. If they grouped the 287 cities in China based on those histories, would they end up with a moderately small number of meaningful groups of peer cities? And second, would the cities in each group have similar transportation policies and priorities? 
Grouping the cities Cities in China are often grouped into three “tiers” based on political administration, or the types of jurisdictional roles the cities play. Tier 1 includes Beijing, Shanghai, and two other cities that have the same political powers as provinces. Tier 2 includes about 20 provincial capitals. The remaining cities — some 260 of them — all fall into Tier 3. These groupings are not necessarily relevant to the cities’ local urban and transportation conditions. Moody, Zhao, and their colleagues instead wanted to sort the 287 cities based on their urbanization and motorization histories. Fortunately, they had relatively easy access to the data they needed. Every year, the Chinese government requires each city to report well-defined statistics on a variety of measures and to make them public. Among those measures, the researchers chose four indicators of urbanization — gross domestic product per capita, total urban population, urban population density, and road area per capita — and four indicators of motorization — the number of automobiles, taxis, buses, and subway lines per capita. They compiled those data from 2001 to 2014 for each of the 287 cities. The next step was to sort the cities into groups based on those historical datasets — a task they accomplished using a clustering algorithm. For the algorithm to work well, they needed to select parameters that would summarize trends in the time series data for each indicator in each city. They found that they could summarize the 14-year change in each indicator using the mean value and two additional variables: the slope of change over time and the rate at which the slope changes (the acceleration). Based on those data, the clustering algorithm examined different possible numbers of groupings, and four gave the best outcome in terms of the cities’ urbanization and motorization histories. 
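The trend-summary step described above — collapsing each 14-year indicator series into a mean, a slope, and an acceleration — can be sketched as follows. The sample series is hypothetical, and the simple difference-based slope and acceleration estimates are an illustrative stand-in for whatever estimator the team actually used.

```python
def summarize(series):
    """Collapse a yearly indicator series into (mean, slope, acceleration).

    Slope is the average year-over-year change, and acceleration is the
    average change in that slope -- three features per indicator that a
    clustering algorithm can then work with."""
    mean = sum(series) / len(series)
    diffs = [b - a for a, b in zip(series, series[1:])]
    slope = sum(diffs) / len(diffs)
    accel = [b - a for a, b in zip(diffs, diffs[1:])]
    acceleration = sum(accel) / len(accel)
    return mean, slope, acceleration

# Hypothetical 2001-2014 car-ownership indicator for one city
# (vehicles per 1,000 residents): steady linear growth, so the
# acceleration comes out to zero.
linear = [10 + 5 * t for t in range(14)]
m, s, a = summarize(linear)
```

Each city then contributes 8 indicators × 3 features = 24 numbers, and the clustering algorithm groups the 287 resulting feature vectors.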
“With four groups, the cities were most similar within each cluster and most different across the clusters,” says Moody. “Adding more groups gave no additional benefit.” The four groups of similar cities are as follows. Cluster 1: 23 large, dense, wealthy megacities that have urban rail systems and high overall mobility levels over all modes, including buses, taxis, and private cars. This cluster encompasses most of the government’s Tier 1 and Tier 2 cities, while the Tier 3 cities are distributed among Clusters 2, 3, and 4. Cluster 2: 41 wealthy cities that don’t have urban rail and therefore are more sprawling, have lower population density, and have auto-oriented travel patterns. Cluster 3: 134 medium-wealth cities that have a low-density urban form and moderate mobility fairly spread across different modes, with limited but emerging car use. Cluster 4: 89 low-income cities that have generally lower levels of mobility, with some public transit buses but not many roads. Because people usually walk, these cities are concentrated in terms of density and development. City clusters and policy priorities The researchers’ next task was to determine whether the cities within a given cluster have transportation policy priorities that are similar to each other — and also different from those of cities in the other clusters. With no quantitative data to analyze, the researchers needed to look for such patterns using a different approach. First, they selected 44 cities at random (with the stipulation that at least 10 percent of the cities in each cluster had to be represented). They then downloaded the 2017 mayoral report from each of the 44 cities. Those reports highlight the main policy initiatives and directions of the city in the past year, so they include all types of policymaking. To identify the transportation-oriented sections of the reports, the researchers performed keyword searches on terms such as transportation, road, car, bus, and public transit. 
They extracted any sections highlighting transportation initiatives and manually labeled each of the text segments with one of 21 policy types. They then created a spreadsheet organizing the cities into the four clusters. Finally, they examined the outcome to see whether there were clear patterns within and across clusters in terms of the types of policies they prioritize. “We found strikingly clear patterns in the types of transportation policies adopted within city clusters and clear differences across clusters,” says Moody. “That reinforced our hypothesis that different motorization and urbanization trajectories would be reflected in very different policy priorities.” Here are some highlights of the policy priorities within the clusters. The cities in Cluster 1 have urban rail systems and are starting to consider policies around them. For example, how can they better connect their rail systems with other transportation modes — for instance, by taking steps to integrate them with buses or with walking infrastructure? How can they plan their land use and urban development to be more transit-oriented, such as by providing mixed-use development around the existing rail network? Cluster 2 cities are building urban rail systems, but they’re generally not yet thinking about other policies that can come with rail development. They could learn from Cluster 1 cities about other factors to take into account at the outset. For example, they could develop their urban rail with issues of multi-modality and of transit-oriented development in mind. In Cluster 3 cities, policies tend to emphasize electrifying buses and providing improved and expanded bus service. In these cities with no rail networks, the focus is on making buses work better. Cluster 4 cities are still focused on road development, even within their urban areas. 
Policy priorities often emphasize connecting the urban core to rural areas and to adjacent cities — steps that will give their populations access to the region as a whole, expanding the opportunities available to them. Benefits of a “mixed method” approach Results of the researchers’ analysis thus support their initial hypothesis. “Different urbanization and motorization trends that we captured in the clustering analysis are reflective of very different transportation priorities,” says Moody. “That match means we can use this approach for further policymaking analysis.” At the outset, she viewed their study as a “proof of concept” for performing transportation policy studies using a mixed-method approach. Mixed-method research involves a blending of quantitative and qualitative approaches. In their case, the former was the mathematical analysis of time series data, and the latter was the in-depth review of city government reports to identify transportation policy priorities. “Mixed-method research is a growing area of interest, and it’s a powerful and valuable tool,” says Moody. She did, however, find the experience of combining the quantitative and qualitative work challenging. “There weren’t many examples of people doing something similar, and that meant that we had to make sure that our quantitative work was defensible, that our qualitative work was defensible, and that the combination of them was defensible and meaningful,” she says. The results of their work confirm that their novel analytical framework could be used in other large, rapidly developing countries with heterogeneous urban areas. “It’s probable that if you were to do this type of analysis for cities in, say, India, you might get a different number of city types, and those city types could be very different from what we got in China,” says Moody. 
https://news.mit.edu/2020/transportation-policymaking-chinese-cities-0521 A new framework for learning from each other. Thu, 21 May 2020 14:50:01 -0400 https://news.mit.edu/2020/transportation-policymaking-chinese-cities-0521 Nancy W. Stauffer | MIT Energy Initiative In recent decades, urban populations in China’s cities have grown substantially, and rising incomes have led to a rapid expansion of car ownership. Indeed, China is now the world’s largest market for automobiles. The combination of urbanization and motorization has led to an urgent need for transportation policies to address urban problems such as congestion, air pollution, and greenhouse gas emissions. For the past three years, an MIT team led by Joanna Moody, research program manager of the MIT Energy Initiative’s Mobility Systems Center, and Jinhua Zhao, the Edward H. and Joyce Linde Associate Professor in the Department of Urban Studies and Planning (DUSP) and director of MIT’s JTL Urban Mobility Lab, has been examining transportation policy and policymaking in China. “It’s often assumed that transportation policy in China is dictated by the national government,” says Zhao.
“But we’ve seen that the national government sets targets and then allows individual cities to decide what policies to implement to meet those targets.” Many studies have investigated transportation policymaking in China’s megacities like Beijing and Shanghai, but few have focused on the hundreds of small- and medium-sized cities located throughout the country. So Moody, Zhao, and their team wanted to consider the process in these overlooked cities. In particular, they asked: How do municipal leaders decide what transportation policies to implement, and can they be better enabled to learn from one another’s experiences? The answers to those questions might provide guidance to municipal decision-makers trying to address the different transportation-related challenges faced by their cities. The answers could also help fill a gap in the research literature. The number and diversity of cities across China have made performing a systematic study of urban transportation policy challenging, yet that topic is of increasing importance. In response to local air pollution and traffic congestion, some Chinese cities are now enacting policies to restrict car ownership and use, and those local policies may ultimately determine whether the unprecedented growth in nationwide private vehicle sales will persist in the coming decades. Policy learning Transportation policymakers worldwide benefit from a practice called policy-learning: Decision-makers in one city look to other cities to see what policies have and haven’t been effective. In China, Beijing and Shanghai are usually viewed as trendsetters in innovative transportation policymaking, and municipal leaders in other Chinese cities turn to those megacities as role models. But is that an effective approach for them? After all, their urban settings and transportation challenges are almost certainly quite different. Wouldn’t it be better if they looked to “peer” cities with which they have more in common?
Moody, Zhao, and their DUSP colleagues — postdoc Shenhao Wang and graduate students Jungwoo Chun and Xuenan Ni, all in the JTL Urban Mobility Lab — hypothesized an alternative framework for policy-learning in which cities that share common urbanization and motorization histories would share their policy knowledge. Similar development of city spaces and travel patterns could lead to the same transportation challenges, and therefore to similar needs for transportation policies. To test their hypothesis, the researchers needed to address two questions. To start, they needed to know whether Chinese cities have a limited number of common urbanization and motorization histories. If they grouped the 287 cities in China based on those histories, would they end up with a moderately small number of meaningful groups of peer cities? And second, would the cities in each group have similar transportation policies and priorities? Grouping the cities Cities in China are often grouped into three “tiers” based on political administration, or the types of jurisdictional roles the cities play. Tier 1 includes Beijing, Shanghai, and two other cities that have the same political powers as provinces. Tier 2 includes about 20 provincial capitals. The remaining cities — some 260 of them — all fall into Tier 3. These groupings are not necessarily relevant to the cities’ local urban and transportation conditions. Moody, Zhao, and their colleagues instead wanted to sort the 287 cities based on their urbanization and motorization histories. Fortunately, they had relatively easy access to the data they needed. Every year, the Chinese government requires each city to report well-defined statistics on a variety of measures and to make them public. 
Among those measures, the researchers chose four indicators of urbanization — gross domestic product per capita, total urban population, urban population density, and road area per capita — and four indicators of motorization — the number of automobiles, taxis, buses, and subway lines per capita. They compiled those data from 2001 to 2014 for each of the 287 cities. The next step was to sort the cities into groups based on those historical datasets — a task they accomplished using a clustering algorithm. For the algorithm to work well, they needed to select parameters that would summarize trends in the time series data for each indicator in each city. They found that they could summarize the 14-year change in each indicator using the mean value and two additional variables: the slope of change over time and the rate at which the slope changes (the acceleration). Based on those data, the clustering algorithm examined different possible numbers of groupings, and four gave the best outcome in terms of the cities’ urbanization and motorization histories. “With four groups, the cities were most similar within each cluster and most different across the clusters,” says Moody. “Adding more groups gave no additional benefit.” The four groups of similar cities are as follows. Cluster 1: 23 large, dense, wealthy megacities that have urban rail systems and high overall mobility levels over all modes, including buses, taxis, and private cars. This cluster encompasses most of the government’s Tier 1 and Tier 2 cities, while the Tier 3 cities are distributed among Clusters 2, 3, and 4. Cluster 2: 41 wealthy cities that don’t have urban rail and therefore are more sprawling, have lower population density, and have auto-oriented travel patterns. Cluster 3: 134 medium-wealth cities that have a low-density urban form and moderate mobility fairly spread across different modes, with limited but emerging car use. 
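The summary step described above can be sketched in a few lines. The helper below is an illustration, not the team’s actual code: it condenses one city’s 14-year indicator series into the three clustering features named in the article (mean, slope, and acceleration), here estimated with least-squares polynomial fits, which is an assumption about how those quantities were computed.

```python
import numpy as np

def summarize_indicator(series):
    """Condense one city's 14-year indicator series (e.g., GDP per capita,
    2001-2014) into three clustering features: the mean level, the linear
    trend (slope), and the change in that trend (acceleration)."""
    t = np.arange(len(series), dtype=float)
    mean = float(np.mean(series))
    slope = float(np.polyfit(t, series, 1)[0])      # linear trend over time
    accel = float(2 * np.polyfit(t, series, 2)[0])  # second derivative of a quadratic fit
    return mean, slope, accel

# A city whose indicator grows linearly: the mean reflects its level,
# the slope its growth rate, and the acceleration is near zero.
years = np.arange(14)
gdp = 5.0 + 2.0 * years
print(summarize_indicator(gdp))
```

Stacking these three features for each of the eight indicators gives a fixed-length vector per city, which a standard clustering algorithm (k-means, for example) can then group; trying several cluster counts and comparing within-cluster similarity against between-cluster separation is how a four-cluster solution would be selected.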
Cluster 4: 89 low-income cities that have generally lower levels of mobility, with some public transit buses but not many roads. Because most people travel on foot, these cities tend to be compact, with concentrated density and development. City clusters and policy priorities The researchers’ next task was to determine whether the cities within a given cluster have transportation policy priorities that are similar to each other — and also different from those of cities in the other clusters. With no quantitative data to analyze, the researchers needed to look for such patterns using a different approach. First, they selected 44 cities at random (with the stipulation that at least 10 percent of the cities in each cluster had to be represented). They then downloaded the 2017 mayoral report from each of the 44 cities. Those reports highlight the main policy initiatives and directions of the city in the past year, so they include all types of policymaking. To identify the transportation-oriented sections of the reports, the researchers performed keyword searches on terms such as transportation, road, car, bus, and public transit. They extracted any sections highlighting transportation initiatives and manually labeled each of the text segments with one of 21 policy types. They then created a spreadsheet organizing the cities into the four clusters. Finally, they examined the outcome to see whether there were clear patterns within and across clusters in terms of the types of policies they prioritize. “We found strikingly clear patterns in the types of transportation policies adopted within city clusters and clear differences across clusters,” says Moody. “That reinforced our hypothesis that different motorization and urbanization trajectories would be reflected in very different policy priorities.” Here are some highlights of the policy priorities within the clusters. The cities in Cluster 1 have urban rail systems and are starting to consider policies around them.
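The report-screening step lends itself to a simple sketch. The snippet below is a minimal illustration (the keyword list and report text are invented, and the actual analysis was performed on Chinese-language documents): it pulls out only the passages of a mayoral report that mention transportation-related terms, leaving them for manual labeling into policy types.

```python
# Hypothetical keyword screen for transportation-related passages.
KEYWORDS = {"transportation", "road", "car", "bus", "public transit", "subway"}

def transport_sections(report_text):
    """Return the sentences of a report that contain at least one
    transportation keyword (case-insensitive substring match)."""
    sentences = [s.strip() for s in report_text.split(".") if s.strip()]
    return [s for s in sentences if any(k in s.lower() for k in KEYWORDS)]

report = ("The city expanded vocational training. "
          "Twelve new bus routes were added and road maintenance increased. "
          "A riverfront park opened in June.")
print(transport_sections(report))
```

The surviving sentences would then be hand-labeled with one of the 21 policy types and tallied by cluster, which is the qualitative half of the mixed-method design described later in the article.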
For example, how can they better connect their rail systems with other transportation modes — for instance, by taking steps to integrate them with buses or with walking infrastructure? How can they plan their land use and urban development to be more transit-oriented, such as by providing mixed-use development around the existing rail network? Cluster 2 cities are building urban rail systems, but they’re generally not yet thinking about other policies that can come with rail development. They could learn from Cluster 1 cities about other factors to take into account at the outset. For example, they could develop their urban rail with issues of multi-modality and of transit-oriented development in mind. In Cluster 3 cities, policies tend to emphasize electrifying buses and providing improved and expanded bus service. In these cities with no rail networks, the focus is on making buses work better. Cluster 4 cities are still focused on road development, even within their urban areas. Policy priorities often emphasize connecting the urban core to rural areas and to adjacent cities — steps that will give their populations access to the region as a whole, expanding the opportunities available to them. Benefits of a “mixed method” approach Results of the researchers’ analysis thus support their initial hypothesis. “Different urbanization and motorization trends that we captured in the clustering analysis are reflective of very different transportation priorities,” says Moody. “That match means we can use this approach for further policymaking analysis.” At the outset, she viewed their study as a “proof of concept” for performing transportation policy studies using a mixed-method approach. Mixed-method research involves a blending of quantitative and qualitative approaches. In their case, the former was the mathematical analysis of time series data, and the latter was the in-depth review of city government reports to identify transportation policy priorities. 
“Mixed-method research is a growing area of interest, and it’s a powerful and valuable tool,” says Moody. She did, however, find the experience of combining the quantitative and qualitative work challenging. “There weren’t many examples of people doing something similar, and that meant that we had to make sure that our quantitative work was defensible, that our qualitative work was defensible, and that the combination of them was defensible and meaningful,” she says. The results of their work confirm that their novel analytical framework could be used in other large, rapidly developing countries with heterogeneous urban areas. “It’s probable that if you were to do this type of analysis for cities in, say, India, you might get a different number of city types, and those city types could be very different from what we got in China,” says Moody. Regardless of the setting, the capabilities provided by this kind of mixed-method framework should prove increasingly important as more and more cities around the world begin innovating and learning from one another how to shape sustainable urban transportation systems. This research was supported by the MIT Energy Initiative’s Mobility of the Future study. Information about the study, its participants and supporters, and its publications is available at energy.mit.edu/research/mobilityofthefuture. Using a novel methodology, MITEI researcher Joanna Moody and Associate Professor Jinhua Zhao uncovered patterns in the development trends and transportation policies of China’s 287 cities — including Fengcheng, shown here — that may help decision-makers learn from one another. Photo: blake.thornberry/Flickr https://news.mit.edu/2020/quest-practical-fusion-energy-sources-erica-salazar-0521 Graduate student Erica Salazar tackles a magnetic engineering challenge.
Thu, 21 May 2020 14:35:01 -0400 https://news.mit.edu/2020/quest-practical-fusion-energy-sources-erica-salazar-0521 Peter Dunn | Department of Nuclear Science and Engineering The promise of fusion energy has grown substantially in recent years, in large part because of novel high-temperature superconducting (HTS) materials that can shrink the size and boost the performance of the extremely powerful magnets needed in fusion reactors. Realizing that potential is a complex engineering challenge, which nuclear science and engineering student Erica Salazar is taking up in her doctoral studies. Salazar works at MIT’s Plasma Science and Fusion Center (PSFC) on the SPARC project, an ambitious fast-track program being conducted in collaboration with MIT spinout Commonwealth Fusion Systems (CFS). The goal is development of a fusion energy experiment to demonstrate net energy gain at unprecedentedly small size and to validate the new magnet technology in a high-field fusion device. Success would be a major accomplishment in the effort to make safe, carbon-free fusion power ready for the world’s electrical grid by the 2030s, as part of the broader push to control climate change. A fundamental challenge is that fusion of nuclei takes place only at extreme temperatures, like those found in the cores of stars. No physical vessel can contain such conditions, so one approach to harnessing fusion involves creating a “bottle” of magnetic fields within a reactor chamber. To succeed, this magnetic-confinement approach must be capable of containing and controlling a super-heated plasma for extended periods, and that in turn requires steady, stable, predictable operation from the magnets involved, even as they deliver unprecedented levels of performance. In pursuit of that goal, Salazar is drawing on knowledge gained during a five-year stint at General Atomics, where she worked on magnet manufacturing for the ITER international fusion reactor project. 
Like SPARC, ITER uses a magnetic-confinement approach, and Salazar commissioned and managed the reaction heat treatment process for ITER’s 120-ton superconducting modules and helped design and operate a cryogenic full-current test station. “That experience is very helpful,” she notes. “Even though the ITER magnets utilize low-temperature superconductors and SPARC is using HTS, there are a lot of similarities in manufacturing, and it gives a sense of which questions to ask. It’s a situation where you know enough to understand what you don’t know, and that’s really exciting. It definitely gives me motivation to work hard, go deep, and expand my efforts.” A central focus of Salazar’s work is a phenomenon called quench. It’s a common abnormality that occurs when part of a magnet’s coil shifts out of a superconducting state, where it has almost no electrical resistance, and into a normal resistive state. The resistance causes the massive current flowing through the coil, and the energy stored in the magnet, to quickly convert to heat in the affected region. That can result in the entire magnet dropping out of its superconducting state and also cause significant physical damage. Many factors can cause quench, and it is seen as unavoidable, so real-time management is essential in a practical fusion reactor. “My PhD thesis work is on understanding quench dynamics, especially in new HTS magnet designs,” explains Salazar, who is advised by Department of Nuclear Science and Engineering Professor Zach Hartwig and started engaging with the CFS team before the company’s 2018 formation. “Those new materials are so good, and they have more temperature margin, but that makes it harder to detect when there’s a localized loss of superconductivity — so it’s a good position for me as a grad student. “I hope to answer questions like, what does the quench look like? How does it propagate, and how fast? How large of a disturbance will cause a thermal runaway?
With more knowledge of what a quench looks like, I can then use that information to help design novel quench-detection systems.” Addressing this type of issue is part of the SPARC program’s strategic transition away from “big plasma physics problems,” says Salazar, and toward a greater focus on the engineering challenges involved in practical implementation. While there is more to be learned from a scientific perspective, a broad consensus has emerged in the U.S. fusion community that construction of a pilot fusion power plant should be a national priority. To this end, the SPARC program takes a systemic approach to ensure broad coordination. As Salazar notes, “to devise an effective detection system, you need to be aware of the implications within the overall systems engineering approach of the project. I really like the way the project teams are designed to be fluid. Everyone knows who’s working on what, and you can sit in on meetings if you want to. We all have a limited amount of time, but the resources are there.” Salazar has helped the process by starting a popular email list that bridges the CFS and MIT social worlds, linking people who would not otherwise be connected and creating opportunities for off-hours activities together. “Working is easy; sometimes the hard part is making sure you have time for personal stuff,” she observes. She’s also active in developing and encouraging a more-inclusive MIT community culture, via involvement with a women’s group at PSFC and the launch of an Institute-wide organization, Hermanas Unidas, for Latina-identifying women students, staff, faculty, and postdocs. “It’s important to find a community with others that share or value similar cultural backgrounds. But it’s also important to see how those with similar backgrounds have done amazing things professionally or academically. 
Hermanas Unidas is a great community of people from all walks of life at MIT who provide mutual support and encouragement as we navigate our careers at MIT and beyond,” explains Salazar. “It’s wonderful to learn from other Latina faculty and staff at MIT — about the hardships they faced when they were in my position as a student or how, as staff members, they work to support students and connect us with other great initiatives. On the flip side, I can share with undergraduates my work experience and my decision to go to graduate school.” Looking ahead, Salazar is encouraged by the growing momentum toward fusion energy. “I had the opportunity to go to the Congressional Fusion Day event in 2016, talk to House and Senate representatives about what fusion does for the economy and technologies, and meet researchers from outside of the ITER program,” she recalls. “I hadn’t realized how big and expansive the fusion community is, and it was interesting to hear how much was going on, and exciting to know that there’s private-sector interest in investing in fusion.” And because fusion energy has such game-changing potential for the world’s electrical grid, says Salazar, “it’s cool to talk to people about it and present it in a way that shows how it will impact them. Throughout my life, I’ve always enjoyed going deep and expending my efforts, and this is such a great area for that. There’s always something new, it’s very interdisciplinary, and it benefits society.” “I really like the way the project teams are designed to be fluid. Everyone knows who’s working on what, and you can sit in on meetings if you want to. We all have a limited amount of time, but the resources are there,” says Erica Salazar. Photo: Eric Younge https://news.mit.edu/2020/towable-sensor-vertical-ocean-conditions-0520 Instrument may help scientists assess the ocean’s response to climate change.
Wed, 20 May 2020 11:26:59 -0400 https://news.mit.edu/2020/towable-sensor-vertical-ocean-conditions-0520 Jennifer Chu | MIT News Office The motion of the ocean is often thought of in horizontal terms, for instance in the powerful currents that sweep around the planet, or the waves that ride in and out along a coastline. But there is also plenty of vertical motion, particularly in the open seas, where water from the deep can rise up, bringing nutrients to the upper ocean, while surface waters sink, sending dead organisms, along with oxygen and carbon, to the deep interior. Oceanographers use instruments to characterize the vertical mixing of the ocean’s waters and the biological communities that live there. But these tools are limited in their ability to capture small-scale features, such as the up- and down-welling of water and organisms over a small, kilometer-wide ocean region. Such features are essential for understanding the makeup of marine life that exists in a given volume of the ocean (such as in a fishery), as well as the amount of carbon that the ocean can absorb and sequester away. Now researchers at MIT and the Woods Hole Oceanographic Institution (WHOI) have engineered a lightweight instrument that measures both physical and biological features of the vertical ocean over small, kilometer-wide patches. The “ocean profiler,” named EcoCTD, is about the size of a waist-high model rocket and can be dropped off the back of a moving ship. As it free-falls through the water, its sensors measure physical features, such as temperature and salinity, as well as biological properties, such as the optical scattering of chlorophyll, the green pigment of phytoplankton. “With EcoCTD, we can see small-scale areas of fast vertical motion, where nutrients could be supplied to the surface, and where chlorophyll is carried downward, which tells you this could also be a carbon pathway.
That’s something you would otherwise miss with existing technology,” says Mara Freilich, a graduate student in MIT’s Department of Earth, Atmospheric, and Planetary Sciences and the MIT-WHOI Joint Program in Oceanography/Applied Ocean Sciences and Engineering. Freilich and her colleagues have published their results today in the Journal of Atmospheric and Oceanic Technology. The paper’s co-authors are J. Thomas Farrar, Benjamin Hodges, Tom Lanagan, and Amala Mahadevan of WHOI, and Andrew Baron of Dynamic System Analysis, in Nova Scotia. The lead author is Mathieu Dever of WHOI and RBR, a developer of ocean sensors based in Ottawa. Ocean synergy Oceanographers use a number of methods to measure the physical properties of the ocean. Some of the more powerful, high-resolution instruments used are known as CTDs, for their ability to measure the ocean’s conductivity, temperature, and depth. CTDs are typically bulky, as they contain multiple sensors as well as components that collect water and biological samples. Conventional CTDs require a ship to stop as scientists lower the instrument into the water, sometimes via a crane system. The ship has to stay put as the instrument collects measurements and water samples, and can only get back underway after the instrument is hauled back onboard. Physical oceanographers who do not study ocean biology, and therefore do not need to collect water samples, can sometimes use “UCTDs” — underway versions of CTDs, without the bulky water sampling components, that can be towed as a ship is underway.
These instruments can sample quickly since they do not require a crane or a ship to stop as they are dropped. Freilich and her team looked to design a version of a UCTD that could also incorporate biological sensors, all in a small, lightweight, towable package that would also keep the ship moving on course as it gathered its vertical measurements. “It seemed there could be straightforward synergy between these existing instruments, to design an instrument that captures physical and biological information, and could do this underway as well,” Freilich says. “Reaching the dark ocean” The core of the EcoCTD is the RBR Concerto Logger, a sensor that measures the temperature of the water, as well as the conductivity, which is a proxy for the ocean’s salinity. The profiler also includes a lead collar that provides enough weight to enable the instrument to free-fall through the water at about 3 meters per second — a rate that takes the instrument down to about 500 meters below the surface in about two minutes. “At 500 meters, we’re reaching the upper twilight zone,” Freilich says. “The euphotic zone is where there’s enough light in the ocean for photosynthesis, and that’s at about 100 to 200 meters in most places. So we’re reaching the dark ocean.” Another sensor, the EcoPuck, is not found on other UCTDs: it measures the ocean’s biological properties. Specifically, it is a small, puck-shaped bio-optical sensor that emits two wavelengths of light — red and blue. The sensor captures any change in these lights as they scatter back and as chlorophyll-containing phytoplankton fluoresce in response to the light. If the red light received resembles a certain wavelength characteristic of chlorophyll, scientists can deduce the presence of phytoplankton at a given depth.
Variations in red and blue light scattered back to the sensor can indicate other matter in the water, such as sediments or dead cells — a measure of the amount of carbon at various depths.

The EcoCTD includes another sensor not found on other UCTDs — the Rinko III Do, which measures the oxygen concentration in the water, giving scientists an estimate of how much oxygen is being taken up by any microbial communities living at a given depth and parcel of water.

Finally, the entire instrument is encased in a tube of aluminum and designed to attach via a long line to a winch at the back of a ship. As the ship is moving, a team can drop the instrument overboard and use the winch to pay the line out at a rate such that the instrument drops straight down, even as the ship moves away. After about two minutes, once it has reached a depth of about 500 meters, the team cranks the winch to pull the instrument back up, at a rate such that the instrument catches up to the ship within 12 minutes. The crew can then drop the instrument again, this time at some distance from their last dropoff point.

“The nice thing is, by the time we go to the next cast, we’re 500 meters away from where we were the first time, so we’re exactly where we want to sample next,” Freilich says.

They tested the EcoCTD on two cruises in 2018 and 2019, one to the Mediterranean and the other in the Atlantic, and in both cases were able to collect both physical and biological data at a higher resolution than existing CTDs. “The EcoCTD is capturing these ocean characteristics at a gold-standard quality with much more convenience and versatility,” Freilich says.

The team will further refine their design, and hopes that their high-resolution, easily deployable, and more efficient alternative may be adopted both by scientists to monitor the ocean’s small-scale responses to climate change and by fisheries that want to keep track of a certain region’s biological productivity. This research was funded in part by the U.S.
Office of Naval Research. Scientists prepare to deploy an underway CTD from the back deck of a research vessel. Image: Amala Mahadevan https://news.mit.edu/2020/mit-scientist-turns-to-entrepreneurship-pablo-ducru-0520 After delivering novel computational methods for nuclear problems, nuclear science and engineering PhD candidate Pablo Ducru plunges into startup life. Wed, 20 May 2020 00:00:01 -0400 https://news.mit.edu/2020/mit-scientist-turns-to-entrepreneurship-pablo-ducru-0520 Leda Zimmerman | Department of Nuclear Science and Engineering Like the atomic particles he studies, Pablo Ducru seems constantly on the move, vibrating with energy. But if he sometimes appears to be headed in an unexpected direction, Ducru, a doctoral candidate in nuclear science and computational engineering, knows exactly where he is going: “My goal is to address climate change as an innovator and creator, whether by pushing the boundaries of science” through research, says Ducru, or pursuing a zero-carbon future as an entrepreneur. It can be hard catching up with Ducru. In January, he returned to Cambridge, Massachusetts, from Beijing, where he was spending a year earning a master’s degree in global affairs as a Schwarzman Scholar at Tsinghua University. He flew out just days before a travel crackdown in response to Covid-19. “This year has been intense, juggling my PhD work and the master’s overseas,” he says. “But I needed to do it, to get a 360-degree understanding of the problem of climate change, which isn’t just a technological problem, but also one involving economics, trade, policy, and finance.” Schwarzman Scholars, an international cohort selected on the basis of academic excellence and leadership potential, among other criteria, focus on critical challenges of the 21st century. While all the students must learn the basics of international relations and China’s role in the world economy, they can tailor their studies according to their interests. 
Ducru is incorporating nuclear science into his master’s program. “It is at the core of many of the world’s key problems, from climate change to arms controls, and it also impacts artificial intelligence by advancing high-performance computing,” he says. A Franco-Mexican raised in Paris, Ducru arrived at nuclear science by way of France’s selective academic system. He excelled in math, history, and English during his high school years. “I realized technology is what drives history,” he says. “I thought that if I wanted to make history, I needed to make technology.” He graduated from Ecole Polytechnique specializing in physics and applied mathematics, and with a major in energies of the 21st century. Creating computational shortcuts Today, as a member of MIT’s Computational Reactor Physics Group (CRPG), Ducru is deploying his expertise in singular ways to help solve some of the toughest problems in nuclear science. Nuclear engineers, hoping to optimize efficiency and safety in current and next-generation reactor designs, are on a quest for high-fidelity nuclear simulations. At such fine-grained levels of modeling, the behavior of subatomic particles is sensitive to minute uncertainties in temperature change, or differences in reactor core geometry, for instance. To quantify such uncertainties, researchers currently need countless costly hours of supercomputer time to simulate the behaviors of billions of neutrons under varying conditions, estimating and then averaging outcomes. “But with some problems, more computing won’t make a difference,” notes Ducru. 
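The brute-force approach Ducru describes — simulating billions of random particle histories and averaging the outcomes — can be sketched in miniature. The toy model below is a generic Monte Carlo illustration, not the CRPG group's actual algorithms; it shows why the statistical uncertainty of such an average shrinks only as one over the square root of the number of histories, so that at some point "more computing won't make a difference":

```python
import math
import random

def monte_carlo_mean(sample_outcome, n_histories, seed=0):
    """Estimate the mean of a random outcome by brute-force averaging.

    Returns (estimate, standard_error). The standard error shrinks only
    as 1/sqrt(n_histories), which is why smarter formulations eventually
    beat simply throwing more compute at the simulation.
    """
    rng = random.Random(seed)
    total = 0.0
    total_sq = 0.0
    for _ in range(n_histories):
        x = sample_outcome(rng)
        total += x
        total_sq += x * x
    mean = total / n_histories
    variance = total_sq / n_histories - mean * mean
    return mean, math.sqrt(variance / n_histories)

# Toy "physics": an outcome whose true mean is 2.0, blurred by noise
# (a stand-in for uncertain temperatures, geometries, etc.).
estimate, std_err = monte_carlo_mean(
    lambda rng: 2.0 + rng.gauss(0.0, 0.5), n_histories=10_000
)
print(f"estimate = {estimate:.3f} ± {std_err:.3f}")
```

Halving the standard error here requires quadrupling the number of histories — the scaling wall that motivates the new formulations described below.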
“We have to help computers do the work in smarter ways.” To accomplish this task, he has developed new formulations for characterizing basic nuclear physics that make it much easier for a computer to solve problems: “I dig into the fundamental properties of physics to give nuclear engineers new mathematical algorithms that outperform thousands of times over the old ways of computing.” With his novel statistical methods and algorithms, developed with CRPG colleagues and during summer stints at Los Alamos and Oak Ridge National Laboratories, Ducru offers “new ways of looking at problems that allow us to infer trends from uncertain inputs, such as physics, geometries, or temperatures,” he says.   These innovative tools accommodate other kinds of problems that involve computing average behaviors from billions of individual occurrences, such as bubbles forming in a turbulent flow of reactor coolant. “My solutions are quite fundamental and problem-agnostic — applicable to the design of new reactors, to nuclear imaging systems for tumor detection, or to the plutonium battery of a Mars rover,” he says. “They will be useful anywhere scientists need to lower costs of high-fidelity nuclear simulations.” But Ducru won’t be among the scientists deploying these computational advances. “I think we’ve done a good job, and others will continue in this area of research,” he says. “After six years of delving deep into quantum physics and statistics, I felt my next step should be a startup.” Scaling up with shrimp As he pivots away from academia and nuclear science, Ducru remains constant to his mission of addressing the climate problem. The result is Torana, a company Ducru and a partner started in 2018 to develop the financial products and services aquaculture needs to sustainably feed the world. “I thought we could develop a scalable zero-carbon food,” he says. 
“The world needs high-nutrition proteins to feed growing populations in a climate-friendly way, especially in developing nations.”  Land-based protein sources such as livestock can take a heavy toll on the environment. Shrimp, on the other hand, are “very efficient machines, scavenging crud at the bottom of the ocean and converting it into high-quality protein,” notes Ducru, who received the 2018 MIT Water Innovation Prize and the 2019 Rabobank-MIT Food and Agribusiness Prize, and support from MIT Sandbox to help develop his aquaculture startup (then called Velaron). Torana is still in early stages, and Ducru hopes to apply his modeling expertise to build a global system of sustainable shrimp farming. His Schwarzman master’s thesis studies the role of aquaculture in our future global food system, with a focus on the shrimp supply chain. In response to the Covid-19 pandemic, Ducru relocated to the family farm in southern France, which he helps run while continuing to follow the Tsinghua master’s online and work on his MIT PhD. He is tweaking his business plans, and putting the final touches on his PhD research, including submitting several articles for publication. While it’s been challenging keeping all these balls in the air, he has supportive mentors — “Benoit Forget [CRPG director] has backed almost all my crazy ideas,” says Ducru. “People like him make MIT the best university on Earth.” Ducru is already mapping out his next decade or so: grow his startup, and perhaps create a green fund that could underwrite zero-carbon projects, including nuclear ones. “I don’t have Facebook and don’t watch online series or TV, because I prefer being an actor, creating things through my work,” he says. 
“I’m a scientific entrepreneur, and will continue to innovate across different realms.” “My goal is to address climate change as an innovator and creator, whether by pushing the boundaries of science” through research or pursuing a zero-carbon future as an entrepreneur, says MIT PhD candidate Pablo Ducru. Photo: Gretchen Ertl https://news.mit.edu/2020/3-questions-energy-studies-mit-next-generation-energy-leaders-0518 Abigail Ostriker ’16 and Addison Stark SM ’10, PhD ’15 share how their experiences with MIT’s energy programs connect them to the global energy community. Mon, 18 May 2020 14:20:01 -0400 https://news.mit.edu/2020/3-questions-energy-studies-mit-next-generation-energy-leaders-0518 Turner Jackson | MIT Energy Initiative Students who engage in energy studies at MIT develop an integrative understanding of energy as well as skills required of tomorrow’s energy professionals, leaders, and innovators in research, industry, policy, management, and governance. Two energy alumni recently shared their experiences as part of MIT’s energy community, and how their work connects to energy today. Abigail Ostriker ’16, who majored in applied mathematics, is now pursuing a PhD in economics at MIT, where she is conducting research into whether subsidized flood insurance causes overdevelopment. Prior to her graduate studies, she conducted two years of research into health economics with Amy Finkelstein, the John and Jennie S. MacDonald Professor of Economics at MIT. Addison Stark SM ’10, PhD ’15, whose degrees are in mechanical engineering and technology and policy, is the associate director for energy innovation at the Bipartisan Policy Center in Washington, which focuses on implementing effective policy on important topics for American citizens. He also serves as an adjunct professor at Georgetown University, where he teaches a course on clean energy innovation. Prior to these roles, he was a fellow and acting program director at the U.S. 
Department of Energy’s Advanced Research Projects Agency-Energy. Q: What experiences did you have that inspired you to pursue energy studies? Stark: I grew up on a farm in rural Iowa, surrounded by a growing biofuels industry and bearing witness to the potential impacts of climate change on agriculture. I then went to the University of Iowa as an undergrad. While there, I was lucky enough to serve as one of the student representatives on a committee that put together a large decarbonization plan for the university. I recognized at the time that the university not only needed to put together a policy, but also to think about what technologies they had to procure to implement their goals. That experience increased my awareness of the big challenges surrounding climate change. I was fortunate to have attended the University of Iowa because a large percentage of the students had an environmental outlook, and many faculty members were involved with the Intergovernmental Panel on Climate Change (IPCC) and engaged with climate and sustainability issues at a time when many other science and engineering schools hadn’t to the same degree. Q: How did your time at MIT inform your eventual work in the energy space? Ostriker: I took my first economics class in my freshman fall, but I didn’t really understand what economics could do until I took Energy Economics and Policy [14.44J/15.037] with Professor Christopher Knittel at the Sloan School the following year. That class turned the field from a collection of unrealistic maximizing equations into a framework that could make sense of real people’s decisions and predict how incentives affect outcomes. That experience led me to take a class on econometrics. The combination made me feel like economics was a powerful set of tools for understanding the world — and maybe tweaking it to get a slightly better outcome. 
Stark: Completing my master’s in the Technology and Policy Program (TPP) and in mechanical engineering at MIT was invaluable. The focus on systems thinking that was being employed in TPP and at the MIT Energy Initiative (MITEI) has been very important in shaping my thinking around the biggest challenges in climate and energy. While pursuing my master’s degree, I worked with Daniel Cohn, a research scientist at MITEI, and Ahmed Ghoniem, a professor of mechanical engineering, who later became my PhD advisor. We looked at a lot of big questions about how to integrate advanced biofuels into today’s transportation and distribution infrastructures: Can you ship it in a pipeline? Can you transport it? Are people able to put it into infrastructure that we’ve already spent billions of dollars building out? One of the critical lessons that I learned while at MITEI — and it’s led to a lot of my thinking today — is that in order for us to have an effective energy transition, there need to be ways that we can utilize current infrastructure. Being involved with and becoming a co-president of the MIT Energy Club in 2010 truly helped to shape my experience at MIT. When I came to MIT, one of the first things that I did was attend the MIT Energy Conference. In the early days of the club and of MITEI — in ’07 — there was a certain “energy” around energy at MIT that really got a lot of us thinking about careers in the field. Q: How does your current research connect to energy, and in what ways do the fields of economics and energy connect? Ostriker: Along with my classmate Anna Russo, I am currently studying whether subsidized flood insurance causes over-development. In the U.S., many flood maps are out of date and backward-looking: Flood risk is rising due to climate change, so in many locations, insurance premiums now cost less than expected damages. This creates an implicit subsidy for risky areas that distorts price signals and may cause a high number of homes to be built. 
We want to estimate the size of the subsidies and the effect they have on development. It’s a challenging question because it’s hard to find a way to compare areas that seem exactly the same except for their insurance premiums. We are hoping to get there by looking at boundaries in the flood insurance maps — areas where true flood risk is the same but premiums are different. We hope that by improving our understanding of how insurance prices affect land use, we can help governments to create more efficient policies for climate resilience. Many economists are studying issues related to both energy and the environment. One definition of economics is the study of trade-offs — how to best allocate scarce resources. In energy, there are questions such as: How should we design electricity markets so that they automatically meet demand with the lowest-cost mix of generation? As the generation mix moves from almost all fossil fuels to a higher penetration of renewables, will that market design still work, or will it need to be adapted so that renewable energy companies still find it attractive to participate? In addition to theoretical questions about how markets work, economists also study the way real people or companies respond to policies. For example, if retail electricity prices started to change by the hour or by the minute, how would people’s energy use respond to that? To answer this question convincingly, you need to find a situation in which everything is almost identical between two groups, except that one group faces different prices. You can’t always do a randomized experiment, so you must find something almost like an experiment in the real world. This kind of toolkit is also used a lot in environmental economics. For instance, we might study the effect of pollution on students’ test scores. In that setting, economists’ tools of causal inference make it possible to move beyond an observed correlation to a statement that pollution had a causal effect. 
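Ostriker's implicit-subsidy logic can be made concrete with a toy calculation. All numbers below are hypothetical, chosen purely for illustration and not drawn from her research: the subsidy to a parcel is the gap between its actuarially fair cost (flood probability times expected damage) and the premium actually charged, and comparing parcels that straddle a map boundary isolates the premium difference while holding true risk fixed:

```python
def implicit_subsidy(flood_prob, damage_if_flood, annual_premium):
    """Expected annual flood damages minus the premium actually paid.

    A positive value means the premium understates the true risk —
    an implicit subsidy to building in that location.
    """
    return flood_prob * damage_if_flood - annual_premium

# Two adjacent parcels straddling a flood-map boundary: the same true
# risk on both sides, but different premiums (hypothetical numbers).
flood_prob = 0.02            # 2% chance of flooding in a given year
damage_if_flood = 250_000    # expected loss if a flood occurs, in dollars

inside_zone = implicit_subsidy(flood_prob, damage_if_flood, annual_premium=4_000)
outside_zone = implicit_subsidy(flood_prob, damage_if_flood, annual_premium=1_500)

print(f"subsidy inside zone:  ${inside_zone:,.0f}/year")
print(f"subsidy outside zone: ${outside_zone:,.0f}/year")
```

Because true risk is identical across the boundary, any difference in development between the two sides can be attributed to the premium gap — the "almost like an experiment" situation described above.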
Q: How do you think we can make the shift toward a clean energy-based economy a more pressing issue for people across the political spectrum? Stark: If we are serious about addressing climate change as a country, we need to recognize that any policy has to be bipartisan; it will need to hit 60 votes in the Senate. Very quickly — within the next few years — we need to develop a set of robust bipartisan policies that can move us toward decarbonization by mid-century. If the IPCC recommendations are to be followed, our ultimate goal is to hit net-zero carbon emissions by 2050. What that means to me is that we need to frame up all of the benefits of a large clean energy program to address climate change. When we address climate change, one of the valuable things that’s going to happen is major investment in technology deployment and development, which involves creating jobs — which is a bipartisan issue. As we are looking to build out a decarbonized future, one thing that needs to happen is reinvesting in our national infrastructure, which is an issue that is recognized in a bipartisan sense. It’s going to require more nuance than just the pure Green New Deal approach. In order to get Republicans on board, we need to realize that investment can’t be based only on renewables. There are a lot of people whose economies depend on the continued and smart use of fossil resources. We have to think about how we develop and deploy carbon capture technologies, as these technologies are going to be integral in garnering more support from rural and conservative communities for the energy transition. The Republican Party is embracing the role of nuclear energy more than some Democrats are. The key thing is that today, nuclear is far and away the most prevalent source of zero-carbon electricity that we have. 
So, expanding nuclear power is a critically important piece of decarbonizing energy, and Republicans have identified that as a place where they would like to invest along with carbon capture, utilization, and storage — another technology with less enthusiasm on the environmental left. Finding ways to bridge party lines on these critical technologies is one of the biggest pieces that I think will be important in bringing about a low-carbon future. Addison Stark (left) and Abigail Ostriker Stark photo: Greg Gibson/Bipartisan Policy Center; Ostriker photo: Thomas Dattilo https://news.mit.edu/2020/melting-glaciers-cool-southern-ocean-0517 Research suggests glacial melting might explain the recent decadal cooling and sea ice expansion across Antarctica's Southern Ocean. Sun, 17 May 2020 00:00:00 -0400 https://news.mit.edu/2020/melting-glaciers-cool-southern-ocean-0517 Fernanda Ferreira | School of Science Tucked away at the very bottom of the globe surrounding Antarctica, the Southern Ocean has never been easy to study. Its challenging conditions have placed it out of reach to all but the most intrepid explorers. For climate modelers, however, the surface waters of the Southern Ocean provide a different kind of challenge: It doesn’t behave the way they predict it would. “It is colder and fresher than the models expected,” says Craig Rye, a postdoc in the group of Cecil and Ida Green Professor of Oceanography John Marshall within MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). 
In recent decades, as the world warms, the Southern Ocean’s surface temperature has cooled, allowing the amount of ice that crystallizes on the surface each winter to grow. This is not what climate models anticipated, and a recent study accepted in Geophysical Research Letters attempts to disentangle that discrepancy. “This paper is motivated by a disagreement between what should be happening according to simulations and what we observe,” says Rye, the lead author of the paper who is currently working remotely from NASA’s Goddard Institute for Space Studies, or GISS, in New York City. “This is a big conundrum in the climate community,” says Marshall, a co-author on the paper along with Maxwell Kelley, Gary Russell, Gavin A. Schmidt, and Larissa S. Nazarenko of GISS; James Hansen of Columbia University’s Earth Institute; and Yavor Kostov of the University of Exeter. There are 30 or so climate models used to foresee what the world might look like as the climate changes. According to Marshall, models don’t match the recent observations of surface temperature in the Southern Ocean, leaving scientists with a question that Rye, Marshall, and their colleagues intend to answer: how can the Southern Ocean cool when the rest of the Earth is warming? This isn’t the first time Marshall has investigated the Southern Ocean and its climate trends. In 2016, Marshall and Yavor Kostov PhD ’16 published a paper exploring two possible influences driving the observed ocean trends: greenhouse gas emissions, and westerly winds — strengthened by expansion of the Antarctic ozone hole — blowing cold water northward from the continent. Both explained some of the cooling in the Southern Ocean, but not all of it. “We ended that paper saying there must be something else,” says Marshall. That something else could be meltwater released from thawing glaciers. 
Rye has probed the influence of glacial melt in the Southern Ocean before, looking at its effect on sea surface height during his PhD at the University of Southampton in the UK. “Since then, I’ve been interested in the potential for glacial melt playing a role in Southern Ocean climate trends,” says Rye.

The group’s recent paper uses a series of “perturbation” experiments carried out with the GISS global climate model where they abruptly introduce a fixed increase in melt water around Antarctica and then record how the model responds. The researchers then apply the model’s response to a previous climate state to estimate how the climate should react to the observed forcing. The results are then compared to the observational record, to see if a factor is missing. This method is called hindcasting.

Marshall likens perturbation experiments to walking into a room and being confronted with an object you don’t recognize. “You might give it a gentle whack to see what it’s made of,” says Marshall. Perturbation experiments, he explains, are like whacking the model with inputs, such as glacial melt, greenhouse gas emissions, and wind, to uncover the relative importance of these factors on observed climate trends.

In their hindcasting, they estimate what would have happened to a pre-industrial Southern Ocean (before anthropogenic climate change) if up to 750 gigatons of meltwater were added each year. That quantity of 750 gigatons of meltwater is estimated from observations of both floating ice shelves and the ice sheet that lies over land above sea level. A single gigaton of water is very large — it can fill 400,000 Olympic swimming pools, meaning 750 gigatons of meltwater is equivalent to pouring water from 300 million Olympic swimming pools into the ocean every year.
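The swimming-pool arithmetic above can be checked directly. A minimal sketch; the pool volume assumes the nominal 50 m × 25 m × 2 m Olympic dimensions:

```python
# 1 gigaton of water occupies roughly 1 cubic kilometre = 1e9 m^3.
GIGATON_WATER_M3 = 1.0e9
# Nominal Olympic pool: 50 m x 25 m x 2 m = 2,500 m^3.
OLYMPIC_POOL_M3 = 2500.0

pools_per_gigaton = GIGATON_WATER_M3 / OLYMPIC_POOL_M3   # 400,000 pools
pools_per_year = 750 * pools_per_gigaton                 # 300 million pools
```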
When this increase in glacial melt was added to the model, it led to sea surface cooling, decreases in salinity, and expansion of sea ice coverage that are consistent with observed trends in the Southern Ocean during the last few decades. Their model results suggest that meltwater may account for the majority of previously misunderstood Southern Ocean cooling. The model shows that a warming climate may be driving, in a counterintuitive way, more sea ice by increasing the rate of melting of Antarctica’s glaciers.

According to Marshall, the paper may solve the disconnect between what was expected and what was observed in the Southern Ocean, and answers the conundrum he and Kostov pointed to in 2016. “The missing process could be glacial melt.”

Research like Rye’s and Marshall’s helps project the future state of Earth’s climate and guide society’s decisions on how to prepare for that future. By hindcasting the Southern Ocean’s climate trends, they and their colleagues have identified another process that must be incorporated into climate models. “What we’ve tried to do is ground this model in the historical record,” says Marshall. Now the group can probe the GISS model response with further “what if?” glacial melt scenarios to explore what might be in store for the Southern Ocean.

MIT scientists suggest sea ice extent in the Southern Ocean may increase with glacial melting in Antarctica. This image shows a view of the Earth on Sept. 21, 2005 with the full Antarctic region visible. Photo: NASA/Goddard Space Flight Center


    Two projects receive funding for technologies that avoid carbon emissions

    The Carbon Capture, Utilization, and Storage Center, one of the MIT Energy Initiative (MITEI)’s Low-Carbon Energy Centers, has awarded $900,000 in funding to two new research projects to advance technologies that avoid carbon dioxide (CO2) emissions into the atmosphere and help address climate change. The winning project receives $750,000; an additional project receives $150,000.
    The winning project, led by principal investigator Asegun Henry, the Robert N. Noyce Career Development Professor in the Department of Mechanical Engineering, and co-principal investigator Paul Barton, the Lammot du Pont Professor of Chemical Engineering, aims to produce hydrogen without CO2 emissions while creating a second revenue stream of solid carbon. The additional project, led by principal investigator Matěj Peč, the Victor P. Starr Career Development Chair in the Department of Earth, Atmospheric and Planetary Sciences, seeks to expand understanding of new processes for storing CO2 in basaltic rocks by converting it from an aqueous solution into carbonate minerals.
    Carbon capture, utilization, and storage (CCUS) technologies have the potential to play an important role in limiting or reducing the amount of CO2 in the atmosphere, as part of a suite of approaches to mitigating climate change that includes renewable energy and energy efficiency technologies, as well as policy measures. While some CCUS technologies are being deployed at the million-tons-of-CO2-per-year scale, there are substantial needs to improve the cost and performance of those technologies and to advance more nascent ones. MITEI’s CCUS center is working to meet these challenges with a cohort of industry members that are supporting promising MIT research, such as these newly funded projects.
    A new process for producing hydrogen without CO2 emissions
    Henry and Barton’s project, “Lower cost, CO2-free, H2 production from CH4 using liquid tin,” investigates the use of methane pyrolysis instead of steam methane reforming (SMR) for hydrogen production.
    Currently, hydrogen production accounts for approximately 1 percent of global CO2 emissions, and the predominant production method is SMR. The SMR process relies on the formation of CO2, so replacing it with another economically competitive approach to making hydrogen would avoid emissions. 
    “Hydrogen is essential to modern life, as it is primarily used to make ammonia for fertilizer, which plays an indispensable role in feeding the world’s 7.5 billion people,” says Henry. “But we need to be able to feed a growing population and take advantage of hydrogen’s potential as a carbon-free fuel source by eliminating CO2 emissions from hydrogen production. Our process results in a solid carbon byproduct, rather than CO2 gas. The sale of the solid carbon lowers the minimum price at which hydrogen can be sold to break even with the current, CO2 emissions-intensive process.”
    Henry and Barton’s work is a new take on an existing process, pyrolysis of methane. Like SMR, methane pyrolysis uses methane as the source of hydrogen, but follows a different pathway. SMR uses the oxygen in water to liberate the hydrogen by preferentially bonding oxygen to the carbon in methane, producing CO2 gas in the process. In methane pyrolysis, the methane is heated to such a high temperature that the molecule itself becomes unstable and decomposes into hydrogen gas and solid carbon — a much more valuable byproduct than CO2 gas. Although the idea of methane pyrolysis has existed for many years, it has been difficult to commercialize because of the formation of the solid byproduct, which can deposit on the walls of the reactor, eventually plugging it up. This issue makes the process impractical. Henry and Barton’s project uses a new approach in which the reaction is facilitated with inert molten tin, which prevents the plugging from occurring. The proposed approach is enabled by recent advances in Henry’s lab that enable the flow and containment of liquid metal at extreme temperatures without leakage or material degradation. 
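    The two routes can be compared stoichiometrically. A minimal sketch using the overall reactions named above; it ignores process energy, so real plants emit more than this stoichiometric minimum:

```python
# Molar masses in g/mol
M = {"CH4": 16.04, "H2O": 18.02, "CO2": 44.01, "H2": 2.016, "C": 12.011}

# Steam methane reforming (with water-gas shift), overall:
#   CH4 + 2 H2O -> CO2 + 4 H2
co2_per_kg_h2 = M["CO2"] / (4 * M["H2"])     # kg CO2 per kg H2, ~5.5

# Methane pyrolysis:
#   CH4 -> C(s) + 2 H2
carbon_per_kg_h2 = M["C"] / (2 * M["H2"])    # kg solid carbon per kg H2, ~3.0
```

Per kilogram of hydrogen, the reforming route must make about 5.5 kg of CO2 gas, while pyrolysis instead yields about 3 kg of saleable solid carbon.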
    Studying CO2 storage in basaltic reservoirs
    With his project, “High-fidelity monitoring for carbon sequestration: integrated geophysical and geochemical investigation of field and laboratory data,” Peč plans to conduct a comprehensive study to gain a holistic understanding of the coupled chemo-mechanical processes that accompany CO2 storage in basaltic reservoirs, with hopes of increasing adoption of this technology.
    The Intergovernmental Panel on Climate Change estimates that 100 to 1,000 gigatonnes of CO2 must be removed from the atmosphere by the end of the century. Such large volumes can only be stored below the Earth’s surface, and that storage must be accomplished safely and securely, without allowing any leakage back into the atmosphere.
    One promising storage strategy is CO2 mineralization — specifically by dissolving gaseous CO2 in water, which then reacts with reservoir rocks to form carbonate minerals. Of the technologies proposed for carbon sequestration, this approach is unique in that the sequestration is permanent: the CO2 becomes part of an inert solid, so it cannot escape back into the environment. Basaltic rocks, the most common volcanic rock on Earth, present good sites for CO2 injection due to their widespread occurrence and high concentrations of divalent cations such as calcium and magnesium that can form carbonate minerals. In one study, more than 95 percent of the CO2 injected into a pilot site in Iceland was precipitated as carbonate minerals in less than two years.
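    As a rough illustration of the masses involved, assuming the injected CO2 all ends up as calcite (CaCO3) — other carbonates such as magnesite give broadly similar ratios:

```python
M_CO2 = 44.01      # g/mol
M_CACO3 = 100.09   # g/mol, calcite

# Each tonne of CO2 locked up as calcite becomes ~2.3 tonnes of mineral,
# permanently bound in the rock rather than free to escape as gas.
tonnes_calcite_per_tonne_co2 = M_CACO3 / M_CO2
```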
    However, ensuring the subsurface integrity of geological formations during fluid injection and accurately evaluating the reaction rates in such reservoirs require targeted studies such as Peč’s.
    “The funding by MITEI’s Low-Carbon Energy Center for Carbon Capture, Utilization, and Storage allows me to start a new research direction, bringing together a group of experts from a range of disciplines to tackle climate change, perhaps the greatest scientific challenge our generation is facing,” says Peč.
    The two projects were selected from a call for proposals that resulted in 15 entries by MIT researchers. “The application process revealed a great deal of interest from MIT researchers in advancing carbon capture, utilization, and storage processes and technologies,” says Bradford Hager, the Cecil and Ida Green Professor of Earth Sciences, who co-directs the CCUS center with T. Alan Hatton, the Ralph Landau Professor of Chemical Engineering. “The two projects funded through the center will result in fundamental, higher-risk research exploring novel approaches that have the potential to have high impact in the longer term. Given the short-term focus of the industry, projects like this might not have otherwise been funded, so having support for this kind of early-stage fundamental research is crucial.”

    Topics: MIT Energy Initiative, Mechanical engineering, Chemical engineering, EAPS, School of Engineering, Carbon dioxide, Carbon Emissions, Carbon sequestration, Funding, Climate change, School of Science


    Shrinking deep learning’s carbon footprint

    In June, OpenAI unveiled the largest language model in the world, a text-generating tool called GPT-3 that can write creative fiction, translate legalese into plain English, and answer obscure trivia questions. It’s the latest feat of intelligence achieved by deep learning, a machine learning method patterned after the way neurons in the brain process and store information.
    But it came at a hefty price: at least $4.6 million and 355 years in computing time, assuming the model was trained on a standard neural network chip, or GPU. The model’s colossal size — 1,000 times larger than a typical language model — is the main factor in its high cost.
    “You have to throw a lot more computation at something to get a little improvement in performance,” says Neil Thompson, an MIT researcher who has tracked deep learning’s unquenchable thirst for computing. “It’s unsustainable. We have to find more efficient ways to scale deep learning or develop other technologies.”
    Some of the excitement over AI’s recent progress has shifted to alarm. In a study last year, researchers at the University of Massachusetts at Amherst estimated that training a large deep-learning model produces 626,000 pounds of planet-warming carbon dioxide, equal to the lifetime emissions of five cars. As models grow bigger, their demand for computing is outpacing improvements in hardware efficiency. Chips specialized for neural-network processing, like GPUs (graphics processing units) and TPUs (tensor processing units), have offset the demand for more computing, but not by enough. 
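    The figures quoted above imply some simple back-of-envelope numbers (all inputs come from the estimates in the article; the derived rates are only as good as those estimates):

```python
training_cost_usd = 4.6e6   # estimated GPT-3 training cost
gpu_years = 355             # estimated compute on a single standard GPU
co2_lbs = 626_000           # UMass Amherst large-model training estimate
cars = 5                    # "lifetime emissions of five cars"

gpu_hours = gpu_years * 365 * 24
cost_per_gpu_hour = training_cost_usd / gpu_hours   # implied ~ $1.50/hour
co2_per_car_lbs = co2_lbs / cars                    # ~125,000 lbs per car
```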
    “We need to rethink the entire stack — from software to hardware,” says Aude Oliva, MIT director of the MIT-IBM Watson AI Lab and co-director of the MIT Quest for Intelligence. “Deep learning has made the recent AI revolution possible, but its growing cost in energy and carbon emissions is untenable.”
    Computational limits have dogged neural networks from their earliest incarnation — the perceptron — in the 1950s. As computing power exploded, and the internet unleashed a tsunami of data, they evolved into powerful engines for pattern recognition and prediction. But each new milestone brought an explosion in cost, as data-hungry models demanded increased computation. GPT-3, for example, trained on half a trillion words and ballooned to 175 billion parameters — the adjustable weights that tie the model together — making it 100 times bigger than its predecessor, itself just a year old.
    In work posted on the pre-print server arXiv, Thompson and his colleagues show that the ability of deep learning models to surpass key benchmarks tracks their nearly exponential rise in computing power use. (Like others seeking to track AI’s carbon footprint, the team had to guess at many models’ energy consumption due to a lack of reporting requirements). At this rate, the researchers argue, deep nets will survive only if they, and the hardware they run on, become radically more efficient.
    Toward leaner, greener algorithms
    The human perceptual system is extremely efficient at using data. Researchers have borrowed this idea to make models for recognizing actions in video more compact. In a paper at the European Conference on Computer Vision (ECCV) in August, researchers at the MIT-IBM Watson AI Lab describe a method for unpacking a scene from a few glances, as humans do, by cherry-picking the most relevant data.
    Take a video clip of someone making a sandwich. Under the method outlined in the paper, a policy network strategically picks frames of the knife slicing through roast beef, and meat being stacked on a slice of bread, to represent at high resolution. Less-relevant frames are skipped over or represented at lower resolution. A second model then uses the abbreviated CliffsNotes version of the movie to label it “making a sandwich.” The approach leads to faster video classification at half the computational cost of the next-best model, the researchers say.
    “Humans don’t pay attention to every last detail — why should our models?” says the study’s senior author, Rogerio Feris, research manager at the MIT-IBM Watson AI Lab. “We can use machine learning to adaptively select the right data, at the right level of detail, to make deep learning models more efficient.”
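    The select-then-classify idea can be sketched schematically. The linear scorer and all names below are illustrative stand-ins; the policy network in the paper is a learned model, not this random projection:

```python
import numpy as np

def select_frames(frame_features, k=4):
    """Score each frame and keep the top-k for high-resolution processing;
    the rest would be skipped or handled at lower resolution.
    (Stand-in scorer: a fixed random linear 'policy' for illustration.)"""
    rng = np.random.default_rng(0)
    w = rng.standard_normal(frame_features.shape[1])  # stand-in policy weights
    scores = frame_features @ w                       # one relevance score per frame
    keep = np.argsort(scores)[-k:]                    # indices of top-k frames
    return np.sort(keep)

# 16 frames with 8 features each; only 4 reach the classifier at full resolution
frames = np.random.default_rng(1).standard_normal((16, 8))
selected = select_frames(frames, k=4)
```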
    In a complementary approach, researchers are using deep learning itself to design more economical models through an automated process known as neural architecture search. Song Han, an assistant professor at MIT, has used automated search to design models with fewer weights for language understanding and for scene recognition, where quickly picking out looming obstacles is acutely important in driving applications.
    In a paper at ECCV, Han and his colleagues propose a model architecture for three-dimensional scene recognition that can spot safety-critical details like road signs, pedestrians, and cyclists with relatively less computation. They used an evolutionary-search algorithm to evaluate 1,000 architectures before settling on a model they say is three times faster and uses eight times less computation than the next-best method. 
    In another recent paper, they use evolutionary search within an augmented designed space to find the most efficient architectures for machine translation on a specific device, be it a GPU, smartphone, or tiny Raspberry Pi. Separating the search and training process leads to huge reductions in computation, they say.
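    Evolutionary architecture search can be illustrated with a toy version. The (depth, width) genome and the FLOPs proxy below are invented for illustration; real systems like those above also evaluate accuracy and per-device latency:

```python
import random

def evolve(cost, init_pop, generations=30, seed=0):
    """Toy evolutionary search: keep the cheapest half of the population
    each generation, then refill it with mutated copies of the survivors."""
    rng = random.Random(seed)
    pop = list(init_pop)
    for _ in range(generations):
        pop.sort(key=cost)
        pop = pop[: len(pop) // 2]                 # selection: keep best half
        children = [(max(1, d + rng.choice([-1, 0, 1])),
                     max(8, w + rng.choice([-8, 0, 8])))
                    for d, w in pop]               # mutation of (depth, width)
        pop += children
    return min(pop, key=cost)

# Hypothetical cost proxy: FLOPs scale roughly like depth * width^2
flops = lambda arch: arch[0] * arch[1] ** 2
best = evolve(flops, [(4, 64)] * 8)
```

Because the survivors are kept alongside their mutated children, the best architecture found never gets worse from one generation to the next.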
    In a third approach, researchers are probing the essence of deep nets to see whether it might be possible to train just a small part of even hyper-efficient networks like those above. In their lottery ticket hypothesis, PhD student Jonathan Frankle and MIT Professor Michael Carbin proposed that within each model lies a tiny subnetwork that could have been trained in isolation with as few as one-tenth as many weights — what they call a “winning ticket.” 
    They showed that an algorithm could retroactively find these winning subnetworks in small image-classification models. Now, in a paper at the International Conference on Machine Learning (ICML), they show that the algorithm finds winning tickets in large models, too; the models just need to be rewound to an early, critical point in training when the order of the training data no longer influences the training outcome. 
    In less than two years, the lottery ticket idea has been cited more than 400 times, including by Facebook researcher Ari Morcos, who has shown that winning tickets can be transferred from one vision task to another, and that winning tickets exist in language and reinforcement learning models, too. 
    “The standard explanation for why we need such large networks is that overparameterization aids the learning process,” says Morcos. “The lottery ticket hypothesis disproves that — it’s all about finding an appropriate starting point. The big downside, of course, is that, currently, finding these ‘winning’ starting points requires training the full overparameterized network anyway.”
    Frankle says he’s hopeful that an efficient way to find winning tickets will be found. In the meantime, recycling those winning tickets, as Morcos suggests, could lead to big savings.
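    The prune-and-rewind recipe above can be sketched in a few lines. This is a schematic with stand-in arrays, not the authors' implementation: train a dense model, keep only the largest-magnitude weights, rewind the survivors to their early-training values, and retrain that sparse subnetwork.

```python
import numpy as np

def winning_ticket_mask(final_weights, prune_frac=0.9):
    """Magnitude pruning: boolean mask keeping the top (1 - prune_frac)
    fraction of weights by absolute value."""
    k = int(round(final_weights.size * (1.0 - prune_frac)))
    threshold = np.sort(np.abs(final_weights).ravel())[-k]
    return np.abs(final_weights) >= threshold

def rewind(early_weights, mask):
    """Reset surviving weights to an early checkpoint; zero out the rest."""
    return early_weights * mask

final = np.arange(100) - 50.5   # stand-in for fully trained weights
early = np.ones(100)            # stand-in for the early-training checkpoint
mask = winning_ticket_mask(final, prune_frac=0.9)
sparse_init = rewind(early, mask)
```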
    Hardware designed for efficient deep net algorithms
    As deep nets push classical computers to the limit, researchers are pursuing alternatives, from optical computers that transmit and store data with photons instead of electrons, to quantum computers, which have the potential to increase computing power exponentially by representing data in multiple states at once.
    Until a new paradigm emerges, researchers have focused on adapting the modern chip to the demands of deep learning. The trend began with the discovery that video-game graphical chips, or GPUs, could turbocharge deep-net training with their ability to perform massively parallelized matrix computations. GPUs are now one of the workhorses of modern AI, and have spawned new ideas for boosting deep net efficiency through specialized hardware. 
    Much of this work hinges on finding ways to store and reuse data locally, across the chip’s processing cores, rather than waste time and energy shuttling data to and from a designated memory site. Processing data locally not only speeds up model training but improves inference, allowing deep learning applications to run more smoothly on smartphones and other mobile devices.
    Vivienne Sze, a professor at MIT, has literally written the book on efficient deep nets. In collaboration with book co-author Joel Emer, an MIT professor and researcher at NVIDIA, Sze has designed a chip that’s flexible enough to process the widely varying shapes of both large and small deep learning models. Called Eyeriss 2, the chip uses 10 times less energy than a mobile GPU.
    Its versatility lies in its on-chip network, called a hierarchical mesh, that adaptively reuses data and adjusts to the bandwidth requirements of different deep learning models. After reading from memory, it reuses the data across as many processing elements as possible to minimize data transportation costs and maintain high throughput. 
    “The goal is to translate small and sparse networks into energy savings and fast inference,” says Sze. “But the hardware should be flexible enough to also efficiently support large and dense deep neural networks.”
    Other hardware innovators are focused on reproducing the brain’s energy efficiency. Former Go world champion Lee Sedol may have lost his title to a computer, but his performance was fueled by a mere 20 watts of power. AlphaGo, by contrast, drew an estimated megawatt of power, or 50,000 times more.
    Inspired by the brain’s frugality, researchers are experimenting with replacing the binary, on-off switch of classical transistors with analog devices that mimic the way that synapses in the brain grow stronger and weaker during learning and forgetting.
    An electrochemical device, developed at MIT and recently published in Nature Communications, is modeled after the way resistance between two neurons grows or subsides as calcium, magnesium or potassium ions flow across the synaptic membrane dividing them. The device uses the flow of protons — the smallest and fastest ion in solid state — into and out of a crystalline lattice of tungsten trioxide to tune its resistance along a continuum, in an analog fashion.
    “Even though it is not yet optimized, it gets to the order of energy consumption per unit area per unit change in conductance that’s close to that in the brain,” says the study’s senior author, Bilge Yildiz, a professor at MIT.
    Energy-efficient algorithms and hardware can shrink AI’s environmental impact. But there are other reasons to innovate, says Sze, listing them off: Efficiency will allow computing to move from data centers to edge devices like smartphones, making AI accessible to more people around the world; shifting computation from the cloud to personal devices reduces the flow, and potential leakage, of sensitive data; and processing data on the edge eliminates transmission costs, leading to faster inference with a shorter reaction time, which is key for interactive driving and augmented/virtual reality applications.
    “For all of these reasons, we need to embrace efficient AI,” she says.

    Topics: Quest for Intelligence, Machine learning, MIT-IBM Watson AI Lab, Electrical engineering and computer science (EECS), Computer Science and Artificial Intelligence Laboratory (CSAIL), School of Engineering, Algorithms, Artificial intelligence, Computer science and technology, Software, Computer vision, Efficiency, MIT Schwarzman College of Computing, Sustainability, Environment, Climate change


    Study: A plunge in incoming sunlight may have triggered “Snowball Earths”

    At least twice in Earth’s history, nearly the entire planet was encased in a sheet of snow and ice. These dramatic “Snowball Earth” events occurred in quick succession, somewhere around 700 million years ago, and evidence suggests that the consecutive global ice ages set the stage for the subsequent explosion of complex, multicellular life on Earth.
    Scientists have considered multiple scenarios for what may have tipped the planet into each ice age. While no single driving process has been identified, it’s assumed that whatever triggered the temporary freeze-overs must have done so in a way that pushed the planet past a critical threshold, such as reducing incoming sunlight or atmospheric carbon dioxide to levels low enough to set off a global expansion of ice.
    But MIT scientists now say that Snowball Earths were likely the product of “rate-induced glaciations.” That is, they found the Earth can be tipped into a global ice age when the level of solar radiation it receives changes quickly over a geologically short period of time. The amount of solar radiation doesn’t have to drop to a particular threshold point; as long as the decrease in incoming sunlight occurs faster than a critical rate, a temporary glaciation, or Snowball Earth, will follow.
    These findings, published today in the Proceedings of the Royal Society A, suggest that whatever triggered the Earth’s ice ages most likely involved processes that quickly reduced the amount of solar radiation coming to the surface, such as widespread volcanic eruptions or biologically induced cloud formation that could have significantly blocked out the sun’s rays. 
    The findings may also apply to the search for life on other planets. Researchers have been keen on finding exoplanets within the habitable zone — a distance from their star that would be within a temperature range that could support life. The new study suggests that these planets, like Earth, could also ice over temporarily if their climate changes abruptly. Even if they lie within a habitable zone, Earth-like planets may be more susceptible to global ice ages than previously thought.
    “You could have a planet that stays well within the classical habitable zone, but if incoming sunlight changes too fast, you could get a Snowball Earth,” says lead author Constantin Arnscheidt, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “What this highlights is the notion that there’s so much more nuance in the concept of habitability.”
    Arnscheidt has co-authored the paper with Daniel Rothman, EAPS professor of geophysics, and co-founder and co-director of the Lorenz Center.
    A runaway snowball
    Regardless of the particular processes that triggered past glaciations, scientists generally agree that Snowball Earths arose from a “runaway” effect involving an ice-albedo feedback: As incoming sunlight is reduced, ice expands from the poles to the equator. As more ice covers the globe, the planet becomes more reflective, or higher in albedo, which further cools the surface for more ice to expand. Eventually, if the ice reaches a certain extent, this becomes a runaway process, resulting in a global glaciation.
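    The runaway described above can be reproduced in a zero-dimensional energy-balance toy model. All parameter values below are illustrative round numbers, not taken from the study: absorbed sunlight falls as ice raises the planet's albedo, while emitted heat follows the Stefan-Boltzmann law, producing two stable climates — temperate and snowball — from the same physics.

```python
import math

SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
Q = 342.0           # globally averaged incoming sunlight, W m^-2
EMISSIVITY = 0.61   # crude stand-in for the greenhouse effect
C = 1.0e8           # heat capacity of a ~25 m ocean mixed layer, J m^-2 K^-1

def albedo(T):
    """Ice-albedo feedback: colder planet -> more ice -> more reflective."""
    return 0.3 + 0.3 / (1.0 + math.exp((T - 268.0) / 5.0))

def equilibrate(T0, dt=2.6e6, steps=20000):
    """March the global mean temperature T (kelvin) to a steady state."""
    T = T0
    for _ in range(steps):
        absorbed = Q * (1.0 - albedo(T))
        emitted = EMISSIVITY * SIGMA * T ** 4
        T += dt * (absorbed - emitted) / C
    return T

warm = equilibrate(290.0)   # settles in a temperate climate
cold = equilibrate(230.0)   # settles in a "snowball" state
```

Two starting temperatures on either side of the unstable middle state end up tens of kelvin apart, which is the bistability that makes a runaway glaciation possible.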

    Global ice ages on Earth are temporary in nature, due to the planet’s carbon cycle. When the planet is not covered in ice, levels of carbon dioxide in the atmosphere are somewhat controlled by the weathering of rocks and minerals. When the planet is covered in ice, weathering is vastly reduced, so that carbon dioxide builds up in the atmosphere, creating a greenhouse effect that eventually thaws the planet out of its ice age.
    Scientists generally agree that the formation of Snowball Earths has something to do with the balance between incoming sunlight, the ice-albedo feedback, and the global carbon cycle.
    “There are lots of ideas for what caused these global glaciations, but they all really boil down to some implicit modification of solar radiation coming in,” Arnscheidt says. “But generally it’s been studied in the context of crossing a threshold.”
    He and Rothman had previously studied other periods in Earth’s history in which the speed, or rate, at which certain changes in climate occurred played a role in triggering events, such as past mass extinctions.
    “In the course of this exercise, we realized there was an immediate way to make a serious point by applying such ideas of rate-induced tipping to Snowball Earth and habitability,” Rothman says.
    “Be wary of speed”
    The researchers developed a simple mathematical model of the Earth’s climate system that includes equations to represent relations between incoming and outgoing solar radiation, the surface temperature of the Earth, the concentration of carbon dioxide in the atmosphere, and the effects of weathering in taking up and storing atmospheric carbon dioxide. The researchers were able to tune each of these parameters to observe which conditions generated a Snowball Earth.
    Ultimately, they found that a planet was more likely to freeze over if incoming solar radiation decreased quickly, at a rate faster than a critical rate, rather than dropping below a critical threshold, or particular level of sunlight. There is some uncertainty in exactly what that critical rate would be, as the model is a simplified representation of the Earth’s climate. Nevertheless, Arnscheidt estimates that the Earth would have to experience about a 2 percent drop in incoming sunlight over a period of about 10,000 years to tip into a global ice age.
    “It’s reasonable to assume past glaciations were induced by geologically quick changes to solar radiation,” Arnscheidt says.
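    The threshold-versus-rate distinction can be illustrated with a standard toy system for rate-induced tipping, unrelated to the authors' climate model: dx/dt = (x + lam)^2 - 1 has a stable state x = -lam - 1 for every value of the forcing lam, yet the system still tips if lam is ramped faster than a critical rate.

```python
def tips(ramp_time, lam_max=3.0, dt=1e-3, t_end=40.0):
    """Ramp lam linearly from 0 to lam_max over ramp_time and integrate
    dx/dt = (x + lam)**2 - 1 with forward Euler.
    Returns True if x crosses the moving unstable branch and runs away."""
    x = -1.0                 # start on the stable equilibrium for lam = 0
    t = 0.0
    while t < t_end:
        lam = min(t / ramp_time, 1.0) * lam_max
        x += dt * ((x + lam) ** 2 - 1.0)
        if x > 10.0:         # runaway: the system has tipped
            return True
        t += dt
    return False
```

A slow ramp lets the state track the slowly moving equilibrium; the same total change applied quickly outpaces the system's ability to adjust and tips it — the analog of incoming sunlight falling faster than a critical rate rather than below a critical level.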
    The particular mechanisms that may have quickly darkened the skies over tens of thousands of years are still up for debate. One possibility is that widespread volcanoes may have spewed aerosols into the atmosphere, blocking incoming sunlight around the world. Another is that primitive algae may have evolved mechanisms that facilitated the formation of light-reflecting clouds. The results from this new study suggest scientists may consider processes such as these, which quickly reduce incoming solar radiation, as more likely triggers for Earth’s ice ages.
    “Even though humanity will not trigger a snowball glaciation on our current climate trajectory, the existence of such a ‘rate-induced tipping point’ at the global scale may still remain a cause for concern,” Arnscheidt points out. “For example, it teaches us that we should be wary of the speed at which we are modifying Earth’s climate, not just the magnitude of the change. There could be other such rate-induced tipping points that might be triggered by anthropogenic warming. Identifying these and constraining their critical rates is a worthwhile goal for further research.”
    This research was funded, in part, by the MIT Lorenz Center.

    Topics: Climate, Geology, Climate change, Exoplanets, EAPS, Earth and atmospheric sciences, Environment, Mathematics, Planetary science, Research, School of Science