More stories

  • MIT expands research collaboration with Commonwealth Fusion Systems to build net energy fusion machine, SPARC

    MIT’s Plasma Science and Fusion Center (PSFC) will substantially expand its fusion energy research and education activities under a new five-year agreement with Institute spinout Commonwealth Fusion Systems (CFS).

    “This expanded relationship puts MIT and PSFC in a prime position to be an even stronger academic leader that can help deliver the research and education needs of the burgeoning fusion energy industry, in part by utilizing the world’s first burning plasma and net energy fusion machine, SPARC,” says PSFC director Dennis Whyte. “CFS will build SPARC and develop a commercial fusion product, while MIT PSFC will focus on its core mission of cutting-edge research and education.”

    Commercial fusion energy has the potential to play a significant role in combating climate change, and there is a concurrent increase in interest from the energy sector, governments, and foundations. The new agreement, administered by the MIT Energy Initiative (MITEI), where CFS is a startup member, will help PSFC expand its fusion technology efforts with a wider variety of sponsors. The collaboration enables rapid execution at scale and technology transfer into the commercial sector as soon as possible.

    This new agreement doubles CFS’ financial commitment to PSFC, enabling greater recruitment and support of students, staff, and faculty. “We’ll significantly increase the number of graduate students and postdocs, and just as important they will be working on a more diverse set of fusion science and technology topics,” notes Whyte. It extends the collaboration between PSFC and CFS that resulted in numerous advances toward fusion power plants, including last fall’s demonstration of a high-temperature superconducting (HTS) fusion electromagnet with record-setting field strength of 20 tesla.

    The combined magnetic fusion efforts at PSFC will surpass those in place during the run of the pioneering Alcator C-Mod tokamak, which operated from 1993 to 2016. This increase in activity reflects a moment when multiple fusion energy technologies are seeing rapidly accelerating development worldwide, and when a new fusion energy industry that would require thousands of trained people is emerging.

    MITEI director Robert Armstrong adds, “Our goal from the beginning was to create a membership model that would allow startups who have specific research challenges to leverage the MITEI ecosystem, including MIT faculty, students, and other MITEI members. The team at the PSFC and MITEI have worked seamlessly to support CFS, and we are excited for this next phase of the relationship.”

    PSFC is supporting CFS’ efforts toward realizing the SPARC fusion platform, which facilitates rapid development and refinement of elements (including HTS magnets) needed to build ARC, a compact, modular, high-field fusion power plant that would set the stage for commercial fusion energy production. The concepts originated in Whyte’s nuclear science and engineering class 22.63 (Principles of Fusion Engineering) and have been carried forward by students and PSFC staff, many of whom helped found CFS; the new activity will expand research into advanced technologies for the envisioned pilot plant.

    “This has been an incredibly effective collaboration that has resulted in a major breakthrough for commercial fusion with the successful demonstration of revolutionary fusion magnet technology that will enable the world’s first commercially relevant net energy fusion device, SPARC, currently under construction,” says Bob Mumgaard SM ’15, PhD ’15, CEO of Commonwealth Fusion Systems. “We look forward to this next phase in the collaboration with MIT as we tackle the critical research challenges ahead for the next steps toward fusion power plant development.”

    In the push for commercial fusion energy, the next five years are critical, requiring intensive work on materials longevity, heat transfer, fuel recycling, maintenance, and other crucial aspects of power plant development. That work will demand innovation from almost every engineering discipline. “Having great teams working now, it will cut the time needed to move from SPARC to ARC, and really unleash the creativity. And the thing MIT does so well is cut across disciplines,” says Whyte.

    “To address the climate crisis, the world needs to deploy existing clean energy solutions as widely and as quickly as possible, while at the same time developing new technologies — and our goal is that those new technologies will include fusion power,” says Maria T. Zuber, MIT’s vice president for research. “To make new climate solutions a reality, we need focused, sustained collaborations like the one between MIT and Commonwealth Fusion Systems. Delivering fusion power onto the grid is a monumental challenge, and the combined capabilities of these two organizations are what the challenge demands.”

    On a strategic level, climate change and the imperative need for widely implementable carbon-free energy have helped orient the PSFC team toward scalability. “Building one or 10 fusion plants doesn’t make a difference — we have to build thousands,” says Whyte. “The design decisions we make will impact the ability to do that down the road. The real enemy here is time, and we want to remove as many impediments as possible and commit to funding a new generation of scientific leaders. Those are critically important in a field with as much interdisciplinary integration as fusion.”

  • Machine learning, harnessed to extreme computing, aids fusion energy development

    MIT research scientists Pablo Rodriguez-Fernandez and Nathan Howard have just completed one of the most demanding calculations in fusion science — predicting the temperature and density profiles of a magnetically confined plasma via first-principles simulation of plasma turbulence. Solving this problem by brute force is beyond the capabilities of even the most advanced supercomputers. Instead, the researchers used an optimization methodology developed for machine learning to dramatically reduce the CPU time required while maintaining the accuracy of the solution.

    Fusion energy

    Fusion offers the promise of unlimited, carbon-free energy through the same physical process that powers the sun and the stars. It requires heating the fuel to temperatures above 100 million degrees, well above the point where the electrons are stripped from their atoms, creating a form of matter called plasma. On Earth, researchers use strong magnetic fields to isolate and insulate the hot plasma from ordinary matter. The stronger the magnetic field, the better the quality of the insulation that it provides.

    Rodriguez-Fernandez and Howard have focused on predicting the performance expected in the SPARC device, a compact, high-magnetic-field fusion experiment, currently under construction by the MIT spin-out company Commonwealth Fusion Systems (CFS) and researchers from MIT’s Plasma Science and Fusion Center. While the calculation required an extraordinary amount of computer time, over 8 million CPU-hours, what was remarkable was not how much time was used, but how little, given the daunting computational challenge.

    The computational challenge of fusion energy

    Turbulence, which is the mechanism for most of the heat loss in a confined plasma, is one of the science’s grand challenges and the greatest problem remaining in classical physics. The equations that govern fusion plasmas are well known, but analytic solutions are not possible in the regimes of interest, where nonlinearities are important and solutions encompass an enormous range of spatial and temporal scales. Scientists resort to solving the equations by numerical simulation on computers. It is no accident that fusion researchers have been pioneers in computational physics for the last 50 years.

    One of the fundamental problems for researchers is reliably predicting plasma temperature and density given only the magnetic field configuration and the externally applied input power. In confinement devices like SPARC, the external power and the heat input from the fusion process are lost through turbulence in the plasma. The turbulence itself is driven by the difference in the extremely high temperature of the plasma core and the relatively cool temperatures of the plasma edge (merely a few million degrees). Predicting the performance of a self-heated fusion plasma therefore requires a calculation of the power balance between the fusion power input and the losses due to turbulence.
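
    In schematic terms, the steady-state condition being solved is a simple power balance (a simplification; radiation and other non-turbulent loss channels are lumped into the loss term here):

        \[ P_{\text{fusion}} + P_{\text{external}} = P_{\text{turbulent loss}} \]

    with the temperature and density profiles adjusting until the two sides balance throughout the plasma.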

    These calculations generally start by assuming plasma temperature and density profiles at a particular location, then computing the heat transported locally by turbulence. However, a useful prediction requires a self-consistent calculation of the profiles across the entire plasma, which includes both the heat input and turbulent losses. Directly solving this problem is beyond the capabilities of any existing computer, so researchers have developed an approach that stitches the profiles together from a series of demanding but tractable local calculations. This method works, but since the heat and particle fluxes depend on multiple parameters, the calculations can be very slow to converge.

    However, techniques emerging from the field of machine learning are well suited to optimizing just such a calculation. Starting with a set of computationally intensive local calculations run with the full-physics, first-principles CGYRO code (provided by a team from General Atomics led by Jeff Candy), Rodriguez-Fernandez and Howard fit a surrogate mathematical model, which was used to guide an efficient search of the parameter space. The results of the optimization were compared to the exact calculations at each optimum point, and the system was iterated to a desired level of accuracy. The researchers estimate that the technique reduced the number of runs of the CGYRO code by a factor of four.
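
    As a loose illustration of that strategy (not the authors' actual workflow, and not the CGYRO code; the expensive_flux_calculation stand-in, the parameter ranges, and the Gaussian-process surrogate below are invented for this sketch), the idea is to spend the expensive runs sparingly: fit a cheap statistical model to the runs completed so far, let it propose the most promising next point, verify that point with one full calculation, and repeat.

        # Sketch of surrogate-assisted optimization: fit a cheap model to a few
        # expensive evaluations, optimize the cheap model, verify, and repeat.
        import numpy as np
        from scipy.optimize import minimize
        from sklearn.gaussian_process import GaussianProcessRegressor

        def expensive_flux_calculation(x):
            # Hypothetical stand-in for a costly first-principles turbulence run;
            # returns a mismatch between turbulent heat flux and the target power balance.
            return (x[0] - 1.3) ** 2 + (x[1] + 0.4) ** 2 + 0.1 * np.sin(5 * x[0])

        rng = np.random.default_rng(0)
        X = rng.uniform(-2, 2, size=(8, 2))                   # a handful of initial expensive runs
        y = np.array([expensive_flux_calculation(x) for x in X])

        for _ in range(10):
            surrogate = GaussianProcessRegressor().fit(X, y)  # cheap model of the expensive code
            best_so_far = X[np.argmin(y)]
            # Ask the surrogate where the mismatch looks smallest, starting from the best run so far.
            proposal = minimize(lambda p: surrogate.predict(p.reshape(1, -1))[0],
                                best_so_far, bounds=[(-2, 2), (-2, 2)]).x
            # Verify the proposal with one expensive run and add it to the training data.
            X = np.vstack([X, proposal])
            y = np.append(y, expensive_flux_calculation(proposal))

        print("best parameters found:", X[np.argmin(y)], "mismatch:", y.min())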

    New approach increases confidence in predictions

    This work, described in a recent publication in the journal Nuclear Fusion, is the highest fidelity calculation ever made of the core of a fusion plasma. It refines and confirms predictions made with less demanding models. Professor Jonathan Citrin, of the Eindhoven University of Technology and leader of the fusion modeling group for DIFFER, the Dutch Institute for Fundamental Energy Research, commented: “The work significantly accelerates our capabilities in more routinely performing ultra-high-fidelity tokamak scenario prediction. This algorithm can help provide the ultimate validation test of machine design or scenario optimization carried out with faster, more reduced modeling, greatly increasing our confidence in the outcomes.”

    In addition to increasing confidence in the fusion performance of the SPARC experiment, this technique provides a roadmap to check and calibrate reduced physics models, which run with a small fraction of the computational power. Such models, cross-checked against the results generated from turbulence simulations, will provide a reliable prediction before each SPARC discharge, helping to guide experimental campaigns and improving the scientific exploitation of the device. It can also be used to tweak and improve even simple data-driven models, which run extremely quickly, allowing researchers to sift through enormous parameter ranges to narrow down possible experiments or possible future machines.

    The research was funded by CFS, with computational support from the National Energy Research Scientific Computing Center, a U.S. Department of Energy Office of Science User Facility.

  • What choices does the world need to make to keep global warming below 2 C?

    When the 2015 Paris Agreement set a long-term goal of keeping global warming “well below 2 degrees Celsius, compared to pre-industrial levels” to avoid the worst impacts of climate change, it did not specify how its nearly 200 signatory nations could collectively achieve that goal. Each nation was left to its own devices to reduce greenhouse gas emissions in alignment with the 2 C target. Now a new modeling strategy developed at the MIT Joint Program on the Science and Policy of Global Change, one that explores hundreds of potential future development pathways, provides new insights into the energy and technology choices needed for the world to meet that target.

    Described in a study appearing in the journal Earth’s Future, the new strategy combines two well-known computer modeling techniques to scope out the energy and technology choices needed over the coming decades to reduce emissions sufficiently to achieve the Paris goal.

    The first technique, Monte Carlo analysis, quantifies uncertainty levels for dozens of energy and economic indicators including fossil fuel availability, advanced energy technology costs, and population and economic growth; feeds that information into a multi-region, multi-economic-sector model of the world economy that captures the cross-sectoral impacts of energy transitions; and runs that model hundreds of times to estimate the likelihood of different outcomes. The MIT study focuses on projections through the year 2100 of economic growth and emissions for different sectors of the global economy, as well as energy and technology use.
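
    A minimal sketch of that Monte Carlo step follows; the input distributions, the toy_emissions_2050 relationship, and every number in it are invented for illustration and stand in for the far richer multi-sector model described above.

        # Toy Monte Carlo ensemble: sample uncertain drivers, run a simple model per draw.
        import numpy as np

        rng = np.random.default_rng(42)
        n_runs = 500

        # Hypothetical uncertain inputs (illustrative distributions only).
        gdp_growth   = rng.normal(0.025, 0.01, n_runs)    # annual GDP growth rate
        clean_cost   = rng.lognormal(0.0, 0.3, n_runs)    # relative cost of low-carbon technology
        fossil_avail = rng.uniform(0.5, 1.5, n_runs)      # fossil fuel availability index

        def toy_emissions_2050(growth, cost, fossil):
            # Invented relationship: emissions rise with growth and fossil availability
            # and fall as low-carbon technology gets cheaper. Units are arbitrary.
            return 40.0 * (1 + growth) ** 30 * fossil / (1 + 2.0 / cost)

        emissions_2050 = toy_emissions_2050(gdp_growth, clean_cost, fossil_avail)
        print("median 2050 emissions (arbitrary units):", round(float(np.median(emissions_2050)), 1))
        print("share of runs under an illustrative target:", float(np.mean(emissions_2050 < 35.0)))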

    The second technique, scenario discovery, uses machine learning tools to screen databases of model simulations in order to identify outcomes of interest and their conditions for occurring. The MIT study applies these tools in a unique way by combining them with the Monte Carlo analysis to explore how different outcomes are related to one another (e.g., do low-emission outcomes necessarily involve large shares of renewable electricity?). This approach can also identify individual scenarios, out of the hundreds explored, that result in specific combinations of outcomes of interest (e.g., scenarios with low emissions, high GDP growth, and limited impact on electricity prices), and also provide insight into the conditions needed for that combination of outcomes.
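
    Loosely, the scenario-discovery step then amounts to screening that ensemble for runs that satisfy a chosen combination of outcomes and asking what those runs have in common. The sketch below uses made-up column names, thresholds, and random data purely to show the mechanics.

        # Toy scenario discovery: screen an ensemble for a target combination of
        # outcomes, then summarize the conditions shared by the qualifying runs.
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(7)
        n = 500
        runs = pd.DataFrame({
            "renewable_share_2050": rng.uniform(0.1, 0.9, n),   # all columns are hypothetical
            "gdp_growth":           rng.normal(0.025, 0.01, n),
            "electricity_price":    rng.lognormal(0.0, 0.2, n),
            "emissions_2050":       rng.uniform(10, 60, n),
        })

        # Outcome combination of interest: low emissions, healthy growth, limited price impact.
        qualifying = runs[(runs.emissions_2050 < 25)
                          & (runs.gdp_growth > 0.02)
                          & (runs.electricity_price < 1.2)]

        print(f"{len(qualifying)} of {n} runs meet all three criteria")
        # One way to characterize them: the range of renewable shares among qualifying runs.
        print("renewable share in qualifying runs:",
              round(float(qualifying.renewable_share_2050.min()), 2), "to",
              round(float(qualifying.renewable_share_2050.max()), 2))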

    Using this unique approach, the MIT Joint Program researchers find several possible patterns of energy and technology development under a specified long-term climate target or economic outcome.

    “This approach shows that there are many pathways to a successful energy transition that can be a win-win for the environment and economy,” says Jennifer Morris, an MIT Joint Program research scientist and the study’s lead author. “Toward that end, it can be used to guide decision-makers in government and industry to make sound energy and technology choices and avoid biases in perceptions of what ’needs’ to happen to achieve certain outcomes.”

    For example, while achieving the 2 C goal, the global level of combined wind and solar electricity generation by 2050 could be less than three times or more than 12 times the current level (which is just over 2,000 terawatt hours). These are very different energy pathways, but both can be consistent with the 2 C goal. Similarly, there are many different energy mixes that can be consistent with maintaining high GDP growth in the United States while also achieving the 2 C goal, with different possible roles for renewables, natural gas, carbon capture and storage, and bioenergy. The study finds renewables to be the most robust electricity investment option, with sizable growth projected under each of the long-term temperature targets explored.

    The researchers also find that long-term climate targets have little impact on economic output for most economic sectors through 2050, but do require each sector to significantly accelerate reduction of its greenhouse gas emissions intensity (emissions per unit of economic output) so as to reach near-zero levels by midcentury.

    “Given the range of development pathways that can be consistent with meeting a 2 degrees C goal, policies that target only specific sectors or technologies can unnecessarily narrow the solution space, leading to higher costs,” says former MIT Joint Program Co-Director John Reilly, a co-author of the study. “Our findings suggest that policies designed to encourage a portfolio of technologies and sectoral actions can be a wise strategy that hedges against risks.”

    The research was supported by the U.S. Department of Energy Office of Science.

  • At Climate Grand Challenges showcase event, an exploration of how to accelerate breakthrough solutions

    On the eve of Earth Day, more than 300 faculty, researchers, students, government officials, and industry leaders gathered in the Samberg Conference Center, along with thousands more who tuned in online, to celebrate MIT’s first-ever Climate Grand Challenges and the five most promising concepts to emerge from the two-year competition.

    The event began with a climate policy conversation between MIT President L. Rafael Reif and Special Presidential Envoy for Climate John Kerry, followed by presentations from each of the winning flagship teams, and concluded with an expert panel that explored pathways for moving from ideas to impact at scale as quickly as possible.

    “In 2020, when we launched the Climate Grand Challenges, we wanted to focus the daring creativity and pioneering expertise of the MIT community on the urgent problem of climate change,” said President Reif in kicking off the event. “Together these flagship projects will define a transformative new research agenda at MIT, one that has the potential to make meaningful contributions to the global climate response.”

    Reif and Kerry discussed multiple aspects of the climate crisis, including mitigation, adaptation, and the policies and strategies that can help the world avert the worst consequences of climate change and make the United States a leader again in bringing technology into commercial use. Referring to the accelerated wartime research effort that helped turn the tide in World War II, which included work conducted at MIT, Kerry said, “We need about five Manhattan Projects, frankly.”

    “People are now sensing a much greater urgency to finding solutions — new technology — and taking to scale some of the old technologies,” Kerry said. “There are things that are happening that I think are exciting, but the problem is it’s not happening fast enough.”

    Strategies for taking technology from the lab to the marketplace were the basis for the final portion of the event. The panel was moderated by Alicia Barton, president and CEO of FirstLight Power, and included Manish Bapna, president and CEO of the Natural Resources Defense Council; Jack Little, CEO and co-founder of MathWorks; Arati Prabhakar, president of Actuate and former head of the Defense Advanced Research Projects Agency; and Katie Rae, president and managing director of The Engine. The discussion touched upon the importance of marshaling the necessary resources and building the cross-sector partnerships required to scale the technologies being developed by the flagship teams and to deliver them to the world in time to make a difference. 

    “MIT doesn’t sit on its hands ever, and innovation is central to its founding,” said Rae. “The students coming out of MIT at every level, along with the professors, have been committed to these challenges for a long time and therefore will have a big impact. These flagships have always been in process, but now we have an extraordinary moment to commercialize these projects.”

    The panelists weighed in on how to change the mindset around finance, policy, business, and community adoption to scale massive shifts in energy generation, transportation, and other major carbon-emitting industries. They stressed the importance of policies that address the economic, equity, and public health impacts of climate change and of reimagining supply chains and manufacturing to grow and distribute these technologies quickly and affordably. 

    “We are embarking on five adventures, but we do not know yet, cannot know yet, where these projects will take us,” said Maria Zuber, MIT’s vice president for research. “These are powerful and promising ideas. But each one will require focused effort, creative and interdisciplinary teamwork, and sustained commitment and support if they are to become part of the climate and energy revolution that the world urgently needs. This work begins now.” 

    Zuber called for investment from philanthropists and financiers, and urged companies, governments, and others to join this all-of-humanity effort. Associate Provost for International Activities Richard Lester echoed this message in closing the event. 

    “Every one of us needs to put our shoulder to the wheel at the points where our leverage is maximized — where we can do what we’re best at,” Lester said. “For MIT, Climate Grand Challenges is one of those maximum leverage points.”

  • Using plant biology to address climate change

    On April 11, MIT announced five multiyear flagship projects in the first-ever Climate Grand Challenges, a new initiative to tackle complex climate problems and deliver breakthrough solutions to the world as quickly as possible. This article is the fourth in a five-part series highlighting the most promising concepts to emerge from the competition and the interdisciplinary research teams behind them.

    The impact of our changing climate on agriculture and food security — and how contemporary agriculture contributes to climate change — is at the forefront of MIT’s multidisciplinary project “Revolutionizing agriculture with low-emissions, resilient crops.” The project is one of five flagship winners in the Climate Grand Challenges competition, and brings together researchers from the departments of Biology, Biological Engineering, Chemical Engineering, and Civil and Environmental Engineering.

    “Our team’s research seeks to address two connected challenges: first, the need to reduce the greenhouse gas emissions produced by agricultural fertilizer; second, the fact that the yields of many current agricultural crops will decrease, due to the effects of climate change on plant metabolism,” says the project’s faculty lead, Christopher Voigt, the Daniel I.C. Wang Professor in MIT’s Department of Biological Engineering. “We are pursuing six interdisciplinary projects that are each key to our overall goal of developing low-emissions methods for fertilizing plants that are bioengineered to be more resilient and productive in a changing climate.”

    Whitehead Institute members Mary Gehring and Jing-Ke Weng, plant biologists who are also associate professors in MIT’s Department of Biology, will lead two of those projects.

    Promoting crop resilience

    For most of human history, climate change occurred gradually, over hundreds or thousands of years. That pace allowed plants to adapt to variations in temperature, precipitation, and atmospheric composition. However, human-driven climate change has occurred much more quickly, and crop plants have suffered: Crop yields are down in many regions, as is seed protein content in cereal crops.

    “If we want to ensure an abundant supply of nutritious food for the world, we need to develop fundamental mechanisms for bioengineering a wide variety of crop plants that will be both hearty and nutritious in the face of our changing climate,” says Gehring. In her previous work, she has shown that many aspects of plant reproduction and seed development are controlled by epigenetics — that is, by information outside of the DNA sequence. She has been using that knowledge and the research methods she has developed to identify ways to create varieties of seed-producing plants that are more productive and resilient than current food crops.

    But plant biology is complex, and while it is possible to develop plants that integrate robustness-enhancing traits by combining dissimilar parental strains, scientists are still learning how to ensure that the new traits are carried forward from one generation to the next. “Plants that carry the robustness-enhancing traits have ‘hybrid vigor,’ and we believe that the perpetuation of those traits is controlled by epigenetics,” Gehring explains. “Right now, some food crops, like corn, can be engineered to benefit from hybrid vigor, but those traits are not inherited. That’s why farmers growing many of today’s most productive varieties of corn must purchase and plant new batches of seeds each year. Moreover, many important food crops have not yet realized the benefits of hybrid vigor.”

    The project Gehring leads, “Developing Clonal Seed Production to Fix Hybrid Vigor,” aims to enable food crop plants to create seeds that are both more robust and genetically identical to the parent — and thereby able to pass beneficial traits from generation to generation.

    The process of clonal (or asexual) production of seeds that are genetically identical to the maternal parent is called apomixis. Gehring says, “Because apomixis is present in 400 flowering plant species — about 1 percent of flowering plant species — it is probable that genes and signaling pathways necessary for apomixis are already present within crop plants. Our challenge is to tweak those genes and pathways so that the plant switches reproduction from sexual to asexual.”

    The project will leverage the fact that genes and pathways related to autonomous asexual development of the endosperm — a seed’s nutritive tissue — exist in the model plant Arabidopsis thaliana. In previous work on Arabidopsis, Gehring’s lab researched a specific gene that, when misregulated, drives development of an asexual endosperm-like material. “Normally, that seed would not be viable,” she notes. “But we believe that by epigenetic tuning of the expression of additional relevant genes, we will enable the plant to retain that material — and help achieve apomixis.”

    If Gehring and her colleagues succeed in creating a gene-expression “formula” for introducing endosperm apomixis into a wide range of crop plants, they will have made a fundamental and important achievement. Such a method could be applied throughout agriculture to create and perpetuate new crop breeds able to withstand their changing environments while requiring less fertilizer and fewer pesticides.

    Creating “self-fertilizing” crops

    Roughly a quarter of greenhouse gas (GHG) emissions in the United States are a product of agriculture. Fertilizer production and use accounts for one third of those emissions and includes nitrous oxide, which has a heat-trapping capacity 298 times that of carbon dioxide, according to a 2018 Frontiers in Plant Science study. Most artificial fertilizer production also consumes huge quantities of natural gas and uses minerals mined from nonrenewable resources. After all that, much of the nitrogen fertilizer becomes runoff that pollutes local waterways. For those reasons, this Climate Grand Challenges flagship project aims to greatly reduce use of human-made fertilizers.

    One tantalizing approach is to cultivate cereal crop plants — which account for about 75 percent of global food production — capable of drawing nitrogen from metabolic interactions with bacteria in the soil. Whitehead Institute’s Weng leads an effort to do just that: genetically bioengineer crops such as corn, rice, and wheat to, essentially, create their own fertilizer through a symbiotic relationship with nitrogen-fixing microbes.

    “Legumes such as bean and pea plants can form root nodules through which they receive nitrogen from rhizobia bacteria in exchange for carbon,” Weng explains. “This metabolic exchange means that legumes release far less greenhouse gas — and require far less investment of fossil energy — than do cereal crops, which use a huge portion of the artificially produced nitrogen fertilizers employed today.

    “Our goal is to develop methods for transferring legumes’ ‘self-fertilizing’ capacity to cereal crops,” Weng says. “If we can, we will revolutionize the sustainability of food production.”

    The project — formally entitled “Mimicking legume-rhizobia symbiosis for fertilizer production in cereals” — will be a multistage, five-year effort. It draws on Weng’s extensive studies of metabolic evolution in plants and his identification of molecules involved in formation of the root nodules that permit exchanges between legumes and nitrogen-fixing bacteria. It also leverages his expertise in reconstituting specific signaling and metabolic pathways in plants.

    Weng and his colleagues will begin by deciphering the full spectrum of small-molecule signaling processes that occur between legumes and rhizobium bacteria. Then they will genetically engineer an analogous system in nonlegume crop plants. Next, using state-of-the-art metabolomic methods, they will identify which small molecules excreted from legume roots prompt a nitrogen/carbon exchange from rhizobium bacteria. Finally, the researchers will genetically engineer the biosynthesis of those molecules in the roots of nonlegume plants and observe their effect on the rhizobium bacteria surrounding the roots.

    While the project is complex and technically challenging, its potential is staggering. “Focusing on corn alone, this could reduce the production and use of nitrogen fertilizer by 160,000 tons,” Weng notes. “And it could halve the related emissions of nitrous oxide gas.”

  • Empowering people to adapt on the frontlines of climate change

    On April 11, MIT announced five multiyear flagship projects in the first-ever Climate Grand Challenges, a new initiative to tackle complex climate problems and deliver breakthrough solutions to the world as quickly as possible. This article is the fifth in a five-part series highlighting the most promising concepts to emerge from the competition and the interdisciplinary research teams behind them.

    In the coastal south of Bangladesh, rice paddies that farmers could once harvest three times a year lie barren. Sea-level rise brings saltwater to the soil, ruining the staple crop. It’s one of many impacts, and inequities, of climate change. Despite producing less than 1 percent of global carbon emissions, Bangladesh is suffering more than most countries. Rising seas, heat waves, flooding, and cyclones threaten 90 million people.

    A platform being developed in a collaboration between MIT and BRAC, a Bangladesh-based global development organization, aims to inform and empower climate-threatened communities to proactively adapt to a changing future. Selected as one of five MIT Climate Grand Challenges flagship projects, the Climate Resilience Early Warning System (CREWSnet) will forecast the local impacts of climate change on people’s lives, homes, and livelihoods. These forecasts will guide BRAC’s development of climate-resiliency programs to help residents prepare for and adapt to life-altering conditions.

    “The communities that CREWSnet will focus on have done little to contribute to the problem of climate change in the first place. However, because of socioeconomic situations, they may be among the most vulnerable. We hope that by providing state-of-the-art projections and sharing them broadly with communities, and working through partners like BRAC, we can help improve the capacity of local communities to adapt to climate change, significantly,” says Elfatih Eltahir, the H.M. King Bhumibol Professor in the Department of Civil and Environmental Engineering.

    Eltahir leads the project with John Aldridge and Deborah Campbell in the Humanitarian Assistance and Disaster Relief Systems Group at Lincoln Laboratory. Additional partners across MIT include the Center for Global Change Science; the Department of Earth, Atmospheric and Planetary Sciences; the Joint Program on the Science and Policy of Global Change; and the Abdul Latif Jameel Poverty Action Lab. 

    Predicting local risks

    CREWSnet’s forecasts rely upon a sophisticated model, developed in Eltahir’s research group over the past 25 years, called the MIT Regional Climate Model. This model zooms in on climate processes at local scales, at a resolution as granular as 6 miles. In Bangladesh’s population-dense cities, a 6-mile area could encompass tens, or even hundreds, of thousands of people. The model takes into account the details of a region’s topography, land use, and coastline to predict changes in local conditions.

    When applying this model over Bangladesh, researchers found that heat waves will get more severe and more frequent over the next 30 years. In particular, wet-bulb temperatures, which indicate how effectively humans can cool down by sweating, will rise to dangerous levels rarely observed today, particularly in western, inland cities.

    Such hot spots exacerbate other challenges predicted to worsen near Bangladesh’s coast. Rising sea levels and powerful cyclones are eroding and flooding coastal communities, causing saltwater to surge into land and freshwater. This salinity intrusion is detrimental to human health, ruins drinking water supplies, and harms crops, livestock, and aquatic life that farmers and fishermen depend on for food and income.

    CREWSnet will fuse climate science with forecasting tools that predict the social and economic impacts to villages and cities. These forecasts — such as how often a crop season may fail, or how far floodwaters will reach — can steer decision-making.

    “What people need to know, whether they’re a governor or head of a household, is ‘What is going to happen in my area, and what decisions should I make for the people I’m responsible for?’ Our role is to integrate this science and technology together into a decision support system,” says Aldridge, whose group at Lincoln Laboratory specializes in this area. Most recently, they transitioned a hurricane-evacuation planning system to the U.S. government. “We know that making decisions based on climate change requires a deep level of trust. That’s why having a powerful partner like BRAC is so important,” he says.

    Testing interventions

    Established 50 years ago, just after Bangladesh’s independence, BRAC works in every district of the nation to provide social services that help people rise from extreme poverty. Today, it is one of the world’s largest nongovernmental organizations, serving 110 million people across 11 countries in Asia and Africa, but its success is cultivated locally.

    “BRAC is thrilled to partner with leading researchers at MIT to increase climate resilience in Bangladesh and provide a model that can be scaled around the globe,” says Donella Rapier, president and CEO of BRAC USA. “Locally led climate adaptation solutions that are developed in partnership with communities are urgently needed, particularly in the most vulnerable regions that are on the frontlines of climate change.”

    CREWSnet will help BRAC identify communities most vulnerable to forecasted impacts. In these areas, they will share knowledge and innovate or bolster programs to improve households’ capacity to adapt.

    Many climate initiatives are already underway. One program equips homes to filter and store rainwater, as salinity intrusion makes safe drinking water hard to access. Another program is building resilient housing, able to withstand 120-mile-per-hour winds, that can double as local shelters during cyclones and flooding. Other services are helping farmers switch to different livestock or crops better suited for wetter or saltier conditions (e.g., ducks instead of chickens, or salt-tolerant rice), providing interest-free loans to enable this change.

    But adapting in place will not always be possible, for example in areas predicted to be submerged or unbearably hot by midcentury. “Bangladesh is working on identifying and developing climate-resilient cities and towns across the country, as closer-by alternative destinations as compared to moving to Dhaka, the overcrowded capital of Bangladesh,” says Campbell. “CREWSnet can help identify regions better suited for migration, and climate-resilient adaptation strategies for those regions.” At the same time, BRAC’s Climate Bridge Fund is helping to prepare cities for climate-induced migration, building up infrastructure and financial services for people who have been displaced.

    Evaluating impact

    While CREWSnet’s goal is to enable action, it cannot on its own measure the impact of those actions. The Abdul Latif Jameel Poverty Action Lab (J-PAL), a development economics program in the MIT School of Humanities, Arts, and Social Sciences, will help evaluate the effectiveness of the climate-adaptation programs.

    “We conduct randomized controlled trials, similar to medical trials, that help us understand if a program improved people’s lives,” says Claire Walsh, the project director of the King Climate Action Initiative at J-PAL. “Once CREWSnet helps BRAC implement adaptation programs, we will generate scientific evidence on their impacts, so that BRAC and CREWSnet can make a case to funders and governments to expand effective programs.”

    The team aspires to bring CREWSnet to other nations disproportionately impacted by climate change. “Our vision is to have this be a globally extensible capability,” says Campbell. CREWSnet’s name evokes another early-warning decision-support system, FEWSnet, that helped organizations address famine in eastern Africa in the 1980s. Today it is a pillar of food-security planning around the world.

    CREWSnet hopes for a similar impact in climate change planning. Its selection as an MIT Climate Grand Challenges flagship project will inject the project with more funding and resources, momentum that will also help BRAC’s fundraising. The team plans to deploy CREWSnet to southwestern Bangladesh within five years.

    “The communities that we are aspiring to reach with CREWSnet are deeply aware that their lives are changing — they have been looking climate change in the eye for many years. They are incredibly resilient, creative, and talented,” says Ashley Toombs, the external affairs director for BRAC USA. “As a team, we are excited to bring this system to Bangladesh. And what we learn together, we will apply at potentially even larger scales.”

  • Looking forward to forecast the risks of a changing climate

    On April 11, MIT announced five multiyear flagship projects in the first-ever Climate Grand Challenges, a new initiative to tackle complex climate problems and deliver breakthrough solutions to the world as quickly as possible. This article is the third in a five-part series highlighting the most promising concepts to emerge from the competition, and the interdisciplinary research teams behind them.

    Extreme weather events that were once considered rare have become noticeably less so, from intensifying hurricane activity in the North Atlantic to wildfires generating massive clouds of ozone-damaging smoke. But current climate models are unprepared when it comes to estimating the risk that these increasingly extreme events pose — and without adequate modeling, governments are left unable to take necessary precautions to protect their communities.

    MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) Professor Paul O’Gorman researches this trend by studying how climate affects the atmosphere and incorporating what he learns into climate models to improve their accuracy. One particular focus for O’Gorman has been changes in extreme precipitation and midlatitude storms that hit areas like New England.

    “These extreme events are having a lot of impact, but they’re also difficult to model or study,” he says. Seeing the pressing need for better climate models that can be used to develop preparedness plans and climate change mitigation strategies, O’Gorman and collaborators Kerry Emanuel, the Cecil and Ida Green Professor of Atmospheric Science in EAPS, and Miho Mazereeuw, associate professor in MIT’s Department of Architecture, are leading an interdisciplinary group of scientists, engineers, and designers to tackle this problem with their MIT Climate Grand Challenges flagship project, “Preparing for a new world of weather and climate extremes.”

    “We know already from observations and from climate model predictions that weather and climate extremes are changing and will change more,” O’Gorman says. “The grand challenge is preparing for those changing extremes.”

    Their proposal is one of five flagship projects recently announced by the MIT Climate Grand Challenges initiative — an Institute-wide effort catalyzing novel research and engineering innovations to address the climate crisis. Selected from a field of almost 100 submissions, the team will receive additional funding and exposure to help accelerate and scale their project goals. Other MIT collaborators on the proposal include researchers from the School of Engineering, the School of Architecture and Planning, the Office of Sustainability, the Center for Global Change Science, and the Institute for Data, Systems and Society.

    Weather risk modeling

    Fifteen years ago, Kerry Emanuel developed a simple hurricane model. It was based on physics equations, rather than statistics, and could run in real time, making it useful for risk assessment. Emanuel wondered if similar models could be used for long-term risk assessment of other hazards, such as changes in extreme weather due to climate change.

    “I discovered, somewhat to my surprise and dismay, that almost all extant estimates of long-term weather risks in the United States are based not on physical models, but on historical statistics of the hazards,” says Emanuel. “The problem with relying on historical records is that they’re too short; while they can help estimate common events, they don’t contain enough information to make predictions for more rare events.”

    Another limitation of weather risk models that rely heavily on statistics: they have a built-in assumption that the climate is static.

    “Historical records rely on the climate at the time they were recorded; they can’t say anything about how hurricanes grow in a warmer climate,” says Emanuel. The models rely on fixed relationships between events; they assume that hurricane activity will stay the same, even while science is showing that warmer temperatures will most likely push typical hurricane activity beyond the tropics and into a much wider band of latitudes.

    As a flagship project, the goal is to eliminate this reliance on the historical record by emphasizing physical principles (e.g., the laws of thermodynamics and fluid mechanics) in next-generation models. The downside to this is that there are many variables that have to be included. Not only are there planetary-scale systems to consider, such as the global circulation of the atmosphere, but there are also small-scale, extremely localized events, like thunderstorms, that influence predictive outcomes.

    Trying to compute all of these at once is costly and time-consuming — and the results often can’t tell you the risk in a specific location. But there is a way to correct for this: “What’s done is to use a global model, and then use a method called downscaling, which tries to infer what would happen on very small scales that aren’t properly resolved by the global model,” explains O’Gorman. The team hopes to improve downscaling techniques so that they can be used to calculate the risk of very rare but impactful weather events.

    Global climate models, or general circulation models (GCMs), Emanuel explains, are constructed a bit like a jungle gym. Like the playground bars, the Earth is sectioned in an interconnected three-dimensional framework — only it’s divided into grid cells roughly 100 to 200 kilometers on a side. Each node comprises a set of computations for characteristics like wind, rainfall, atmospheric pressure, and temperature within its bounds; the outputs of each node are connected to its neighbor. This framework is useful for creating a big picture idea of Earth’s climate system, but if you tried to zoom in on a specific location — like, say, to see what’s happening in Miami or Mumbai — the connecting nodes are too far apart to make predictions on anything specific to those areas.

    Scientists work around this problem by using downscaling. They use the same blueprint of the jungle gym, but within the nodes they weave a mesh of smaller features, incorporating equations for things like topography and vegetation or regional meteorological models to fill in the blanks. By creating a finer mesh over smaller areas they can predict local effects without needing to run the entire global model.
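
    A bare-bones illustration of that idea follows; the coarse field, the refinement factor, and the elevation adjustment are all invented, and stand in for the far richer topography, vegetation, and regional-model information real downscaling uses.

        # Toy downscaling: interpolate a coarse temperature field onto a finer grid,
        # then add local detail the coarse model cannot resolve (here, elevation).
        import numpy as np
        from scipy.ndimage import zoom

        # Hypothetical coarse model output: a 4x4 grid of temperatures (deg C),
        # each cell standing in for a box roughly 150 km across.
        coarse_temperature = np.array([
            [30.0, 29.5, 28.0, 27.0],
            [29.0, 28.5, 27.5, 26.0],
            [27.5, 27.0, 26.0, 25.0],
            [26.0, 25.5, 24.5, 23.5],
        ])

        # Step 1: interpolate onto a grid 10x finer (order=1 is bilinear interpolation).
        fine_temperature = zoom(coarse_temperature, 10, order=1)

        # Step 2: apply a local correction, here an invented elevation map and the
        # standard ~6.5 deg C per km lapse rate.
        rng = np.random.default_rng(1)
        elevation_km = rng.uniform(0.0, 1.5, size=fine_temperature.shape)
        downscaled = fine_temperature - 6.5 * elevation_km

        print("coarse grid:", coarse_temperature.shape, "-> fine grid:", downscaled.shape)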

    Of course, even this finer-resolution solution has its trade-offs. While nesting models within models can give a clearer picture of what’s happening in a specific region, crunching all of that data at once is still a computing challenge. The trade-off is expense and time, or predictions limited to shorter windows of duration: where GCMs can be run considering decades or centuries, a particularly complex local model may be restricted to predictions on timescales of just a few years at a time.

    “I’m afraid that most of the downscaling at present is brute force, but I think there’s room to do it in better ways,” says Emanuel, who sees the problem of finding new and novel methods of achieving this goal as an intellectual challenge. “I hope that through the Grand Challenges project we might be able to get students, postdocs, and others interested in doing this in a very creative way.”

    Adapting to weather extremes for cities and renewable energy

    Improving climate modeling is more than a scientific exercise in creativity, however. There’s a very real application for models that can accurately forecast risk in localized regions.

    Another problem is that progress in climate modeling has not kept up with the need for climate mitigation plans, especially in some of the most vulnerable communities around the globe.

    “It is critical for stakeholders to have access to this data for their own decision-making process. Every community is composed of a diverse population with diverse needs, and each locality is affected by extreme weather events in unique ways,” says Mazereeuw, the director of the MIT Urban Risk Lab. 

    A key piece of the team’s project is building on partnerships the Urban Risk Lab has developed with several cities to test their models once they have a usable product up and running. The cities were selected based on their vulnerability to increasing extreme weather events, such as tropical cyclones in Broward County, Florida, and Toa Baja, Puerto Rico, and extratropical storms in Boston, Massachusetts, and Cape Town, South Africa.

    In their proposal, the team outlines a variety of deliverables that the cities can ultimately use in their climate change preparations, with ideas such as online interactive platforms and workshops with stakeholders — such as local governments, developers, nonprofits, and residents — to learn directly what specific tools they need for their local communities. By doing so, the cities can craft plans that address the scenarios most relevant to their region, such as sea-level rise or heat waves, and develop the adaptation strategies for infrastructure that will be most effective and efficient for them.

    “We are acutely aware of the inequity of resources both in mitigating impacts and recovering from disasters. Working with diverse communities through workshops allows us to engage a lot of people, listen, discuss, and collaboratively design solutions,” says Mazereeuw.

    By the end of five years, the team is hoping that they’ll have better risk assessment and preparedness tool kits, not just for the cities that they’re partnering with, but for others as well.

    “MIT is well-positioned to make progress in this area,” says O’Gorman, “and I think it’s an important problem where we can make a difference.”

  • Structures considered key to gene expression are surprisingly fleeting

    In human chromosomes, DNA is coated by proteins to form an exceedingly long beaded string. This “string” is folded into numerous loops, which are believed to help cells control gene expression and facilitate DNA repair, among other functions. A new study from MIT suggests that these loops are very dynamic and shorter-lived than previously thought.

    In the new study, the researchers were able to monitor the movement of one stretch of the genome in a living cell for about two hours. They saw that this stretch was fully looped for only 3 to 6 percent of the time, with the loop lasting for only about 10 to 30 minutes. The findings suggest that scientists’ current understanding of how loops influence gene expression may need to be revised, the researchers say.

    “Many models in the field have been these pictures of static loops regulating these processes. What our new paper shows is that this picture is not really correct,” says Anders Sejr Hansen, the Underwood-Prescott Career Development Assistant Professor of Biological Engineering at MIT. “We suggest that the functional state of these domains is much more dynamic.”

    Hansen is one of the senior authors of the new study, along with Leonid Mirny, a professor in MIT’s Institute for Medical Engineering and Science and the Department of Physics, and Christoph Zechner, a group leader at the Max Planck Institute of Molecular Cell Biology and Genetics in Dresden, Germany, and the Center for Systems Biology Dresden. MIT postdoc Michele Gabriele, recent Harvard University PhD recipient Hugo Brandão, and MIT graduate student Simon Grosse-Holz are the lead authors of the paper, which appears today in Science.

    Out of the loop

    Using computer simulations and experimental data, scientists including Mirny’s group at MIT have shown that loops in the genome are formed by a process called extrusion, in which a molecular motor promotes the growth of progressively larger loops. The motor stops each time it encounters a “stop sign” on DNA. The motor that extrudes such loops is a protein complex called cohesin, while the DNA-bound protein CTCF serves as the stop sign. These cohesin-mediated loops between CTCF sites were seen in previous experiments.

    However, those experiments only offered a snapshot of a moment in time, with no information on how the loops change over time. In their new study, the researchers developed techniques that allowed them to fluorescently label CTCF DNA sites so they could image the DNA loops over several hours. They also created a new computational method that can infer the looping events from the imaging data.

    “This method was crucial for us to distinguish signal from noise in our experimental data and quantify looping,” Zechner says. “We believe that such approaches will become increasingly important for biology as we continue to push the limits of detection with experiments.”
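
    The actual inference approach is more sophisticated than anything shown here, but as a toy illustration of the underlying task (every number and the simple smoothing-plus-threshold rule below are invented), calling looped versus unlooped states amounts to classifying a noisy distance trace between the two labeled sites, frame by frame.

        # Crude illustration of calling looped vs. unlooped states from a noisy
        # distance trajectory between two labeled genomic sites (toy data throughout).
        import numpy as np

        rng = np.random.default_rng(3)
        n_frames = 400                        # e.g., one frame every ~20 s over ~2 hours

        # Simulated ground truth: mostly unlooped, with two brief looping events (~6% of frames).
        state = np.zeros(n_frames, dtype=bool)
        state[150:165] = True
        state[300:310] = True

        # Simulated measured distance (microns): smaller when looped, plus measurement noise.
        distance = np.where(state, 0.15, 0.40) + rng.normal(0, 0.08, n_frames)

        # Naive inference: smooth the trace, then threshold it.
        window = 9
        smoothed = np.convolve(distance, np.ones(window) / window, mode="same")
        called_looped = smoothed < 0.27

        print("true looped fraction:    ", round(float(state.mean()), 3))
        print("inferred looped fraction:", round(float(called_looped.mean()), 3))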

    The researchers used their method to image a stretch of the genome in mouse embryonic stem cells. “If we put our data in the context of one cell division cycle, which lasts about 12 hours, the fully formed loop only actually exists for about 20 to 45 minutes, or about 3 to 6 percent of the time,” Grosse-Holz says.

    “If the loop is only present for such a tiny period of the cell cycle and very short-lived, we shouldn’t think of this fully looped state as being the primary regulator of gene expression,” Hansen says. “We think we need new models for how the 3D structure of the genome regulates gene expression, DNA repair, and other functional downstream processes.”

    While fully formed loops were rare, the researchers found that partially extruded loops were present about 92 percent of the time. These smaller loops have been difficult to observe with the previous methods of detecting loops in the genome.

    “In this study, by integrating our experimental data with polymer simulations, we have now been able to quantify the relative extents of the unlooped, partially extruded, and fully looped states,” Brandão says.

    “Since these interactions are very short, but very frequent, the previous methodologies were not able to fully capture their dynamics,” Gabriele adds. “With our new technique, we can start to resolve transitions between fully looped and unlooped states.”

    The researchers hypothesize that these partial loops may play more important roles in gene regulation than fully formed loops. Strands of DNA run along each other as loops begin to form and then fall apart, and these interactions may help regulatory elements such as enhancers and gene promoters find each other.

    “More than 90 percent of the time, there are some transient loops, and presumably what’s important is having those loops that are being perpetually extruded,” Mirny says. “The process of extrusion itself may be more important than the fully looped state that only occurs for a short period of time.”

    More loops to study

    Since most of the other loops in the genome are weaker than the one the researchers studied in this paper, they suspect that many other loops will also prove to be highly transient. They now plan to use their new technique to study some of those other loops, in a variety of cell types.

    “There are about 10,000 of these loops, and we’ve looked at one,” Hansen says. “We have a lot of indirect evidence to suggest that the results would be generalizable, but we haven’t demonstrated that. Using the technology platform we’ve set up, which combines new experimental and computational methods, we can begin to approach other loops in the genome.”

    The researchers also plan to investigate the role of specific loops in disease. Many diseases, including a neurodevelopmental disorder called FOXG1 syndrome, could be linked to faulty loop dynamics. The researchers are now studying how both the normal and mutated form of the FOXG1 gene, as well as the cancer-causing gene MYC, are affected by genome loop formation.

    The research was funded by the National Institutes of Health, the National Science Foundation, the Mathers Foundation, a Pew-Stewart Cancer Research Scholar grant, the Chaires d’excellence Internationale Blaise Pascal, an American-Italian Cancer Foundation research scholarship, and the Max Planck Institute of Molecular Cell Biology and Genetics.