More stories

  • How can we reduce the carbon footprint of global computing?

    The voracious appetite for energy from the world’s computers and communications technology presents a clear threat to the globe’s warming climate. That was the blunt assessment from presenters at the intensive two-day Climate Implications of Computing and Communications workshop held on March 3 and 4, hosted by MIT’s Climate and Sustainability Consortium (MCSC), MIT-IBM Watson AI Lab, and the Schwarzman College of Computing.

    The virtual event featured rich discussions and highlighted opportunities for collaboration among an interdisciplinary group of MIT faculty and researchers and industry leaders across multiple sectors — underscoring the power of academia and industry coming together.

    “If we continue with the existing trajectory of compute energy, by 2040, we are supposed to hit the world’s energy production capacity. The increase in compute energy and demand has been increasing at a much faster rate than the world energy production capacity increase,” said Bilge Yildiz, the Breene M. Kerr Professor in the MIT departments of Nuclear Science and Engineering and Materials Science and Engineering, one of the workshop’s 18 presenters. This computing energy projection draws from the Semiconductor Research Corporation’s decadal report.

    To cite just one example: Information and communications technology already accounts for more than 2 percent of global energy demand, on a par with the aviation industry’s emissions from fuel.

    “We are at the very beginning of this data-driven world. We really need to start thinking about this and act now,” said presenter Evgeni Gousev, senior director at Qualcomm.

    Innovative energy-efficiency options

    To that end, the workshop presentations explored a host of energy-efficiency options, including specialized chip design, data center architecture, better algorithms, hardware modifications, and changes in consumer behavior. Industry leaders from AMD, Ericsson, Google, IBM, iRobot, NVIDIA, Qualcomm, Tertill, Texas Instruments, and Verizon outlined their companies’ energy-saving programs, while experts from across MIT provided insight into current research that could yield more efficient computing.

    Panel topics ranged from “Custom hardware for efficient computing” to “Hardware for new architectures” to “Algorithms for efficient computing,” among others.

    Visual representation of the conversation during the workshop session entitled “Energy Efficient Systems.”

    Image: Haley McDevitt

    The goal, said Yildiz, is to improve the energy efficiency associated with computing by more than a million-fold.

    “I think part of the answer of how we make computing much more sustainable has to do with specialized architectures that have very high level of utilization,” said Darío Gil, IBM senior vice president and director of research, who stressed that solutions should be as “elegant” as possible.

    For example, Gil illustrated an innovative chip design that uses vertical stacking to reduce the distance data has to travel, and thus reduces energy consumption. Surprisingly, more effective use of tape — a traditional medium for primary data storage — combined with specialized hard drives (HDDs), can yield dramatic savings in carbon dioxide emissions.

    Gil and presenters Bill Dally, chief scientist and senior vice president of research at NVIDIA; Ahmad Bahai, CTO of Texas Instruments; and others zeroed in on storage. Gil compared data to a floating iceberg: we can have fast access to the “hot data” of the smaller visible part, while the “cold data,” the large underwater mass, represents data that tolerates higher latency. Think about digital photo storage, Gil said. “Honestly, are you really retrieving all of those photographs on a continuous basis?” Storage systems should provide an optimized mix of HDDs for hot data and tape for cold data, based on data access patterns.

    Bahai stressed the significant energy savings gained from segmenting standby and full processing. “We need to learn how to do nothing better,” he said. Dally spoke of mimicking the way our brain wakes up from a deep sleep: “We can wake [computers] up much faster, so we don’t need to keep them running in full speed.”

    Several workshop presenters spoke of a focus on “sparsity,” a matrix in which most of the elements are zero, as a way to improve efficiency in neural networks. Or as Dally said, “Never put off till tomorrow, where you could put off forever,” explaining that efficiency is not “getting the most information with the fewest bits. It’s doing the most with the least energy.”

    Holistic and multidisciplinary approaches

    “We need both efficient algorithms and efficient hardware, and sometimes we need to co-design both the algorithm and the hardware for efficient computing,” said Song Han, a panel moderator and assistant professor in the Department of Electrical Engineering and Computer Science (EECS) at MIT.

    Some presenters were optimistic about innovations already underway. According to Ericsson’s research, as much as 15 percent of global carbon emissions can be reduced through the use of existing solutions, noted Mats Pellbäck Scharp, head of sustainability at Ericsson. For example, GPUs are more efficient than CPUs for AI, and the progression from 3G to 5G networks boosts energy savings.

    “5G is the most energy-efficient standard ever,” said Scharp. “We can build 5G without increasing energy consumption.”

    Companies such as Google are optimizing energy use at their data centers through improved design, technology, and renewable energy. “Five of our data centers around the globe are operating near or above 90 percent carbon-free energy,” said Jeff Dean, Google’s senior fellow and senior vice president of Google Research.

    Yet, pointing to the possible slowdown in the doubling of transistors in an integrated circuit — or Moore’s Law — “We need new approaches to meet this compute demand,” said Sam Naffziger, AMD senior vice president, corporate fellow, and product technology architect. Naffziger spoke of addressing performance “overkill.” For example, “we’re finding in the gaming and machine learning space we can make use of lower-precision math to deliver an image that looks just as good with 16-bit computations as with 32-bit computations, and instead of legacy 32b math to train AI networks, we can use lower-energy 8b or 16b computations.”
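
    To make the precision trade-off concrete, here is a minimal NumPy sketch (our illustration, not code from the workshop) comparing the same matrix product in 32-bit and 16-bit floating point; the half-precision version stores each value in two bytes instead of four, and on hardware with native FP16 support it also costs less energy per operation.

        import numpy as np

        # Toy comparison of 32-bit vs. 16-bit floating point (illustrative only).
        rng = np.random.default_rng(0)
        a32 = rng.standard_normal((256, 256), dtype=np.float32)
        b32 = rng.standard_normal((256, 256), dtype=np.float32)
        a16, b16 = a32.astype(np.float16), b32.astype(np.float16)

        c32 = a32 @ b32                        # full-precision reference
        c16 = (a16 @ b16).astype(np.float32)   # reduced-precision result

        print("bytes per matrix:", a32.nbytes, "vs", a16.nbytes)
        err = np.max(np.abs(c32 - c16)) / np.max(np.abs(c32))
        print(f"normalized max error: {err:.4f}")

    For many workloads in graphics and neural-network training, an error on this order is invisible in the final image or model, which is why the reduced formats can be used at a fraction of the energy.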

    Visual representation of the conversation during the workshop session entitled “Wireless, networked, and distributed systems.”

    Image: Haley McDevitt

    Other presenters singled out compute at the edge as a prime energy hog.

    “We also have to change the devices that are put in our customers’ hands,” said Heidi Hemmer, senior vice president of engineering at Verizon. As we think about how we use energy, it is common to jump to data centers — but it really starts at the device itself, and the energy that the devices use. Then, we can think about home web routers, distributed networks, the data centers, and the hubs. “The devices are actually the least energy-efficient out of that,” concluded Hemmer.

    Some presenters had different perspectives. Several called for developing dedicated silicon chipsets for efficiency. However, panel moderator Muriel Medard, the Cecil H. Green Professor in EECS, described research at MIT, Boston University, and Maynooth University on the GRAND (Guessing Random Additive Noise Decoding) chip, saying, “Rather than having obsolescence of chips as the new codes come in and in different standards, you can use one chip for all codes.”

    Whatever the chip or new algorithm, Helen Greiner, CEO of Tertill (a weeding robot) and co-founder of iRobot, emphasized that to get products to market, “We have to learn to go away from wanting to get the absolute latest and greatest, the most advanced processor that usually is more expensive.” She added, “I like to say robot demos are a dime a dozen, but robot products are very infrequent.”

    Greiner emphasized that consumers can play a role in pushing for more energy-efficient products — just as drivers began to demand electric cars.

    Dean also sees an environmental role for the end user. “We have enabled our cloud customers to select which cloud region they want to run their computation in, and they can decide how important it is that they have a low carbon footprint,” he said, also citing other interfaces that might allow consumers to decide which air flights are more efficient or what impact installing a solar panel on their home would have.

    However, Scharp said, “Prolonging the life of your smartphone or tablet is really the best climate action you can do if you want to reduce your digital carbon footprint.”

    Facing increasing demands

    Despite their optimism, the presenters acknowledged that the world faces increasing compute demand from machine learning, AI, gaming, and, especially, blockchain. Panel moderator Vivienne Sze, associate professor in EECS, noted the conundrum.

    “We can do a great job in making computing and communication really efficient. But there is this tendency that once things are very efficient, people use more of it, and this might result in an overall increase in the usage of these technologies, which will then increase our overall carbon footprint,” Sze said.

    Presenters saw great potential in academic/industry partnerships, particularly from research efforts on the academic side. “By combining these two forces together, you can really amplify the impact,” concluded Gousev.

    Presenters at the Climate Implications of Computing and Communications workshop also included: Joel Emer, professor of the practice in EECS at MIT; David Perreault, the Joseph F. and Nancy P. Keithley Professor of EECS at MIT; Jesús del Alamo, MIT Donner Professor and professor of electrical engineering in EECS at MIT; Heike Riel, IBM Fellow and head of science and technology at IBM; and Takashi Ando, principal research staff member at IBM Research. The recorded workshop sessions are available on YouTube.

  • Given what we know, how do we live now?

    To truly engage the climate crisis, as so many at MIT are doing, can be daunting and draining. But it need not be lonely. Building collective insight and companionship for this undertaking is the aim of the Council on the Uncertain Human Future (CUHF), an international network launched at Clark University in 2014 and active at MIT since 2020.

    Gathering together in council circles of 8-12 people, MIT community members make space to examine — and even to transform — their questions and concerns about climate change. Through a practice of intentional conversation in small groups, the council calls participants to reflect on our human interdependence with each other and the natural world, and on where we are in both social and planetary terms. It urges exploration of how we got here and what that means, and culminates by asking: Given what we know, how do we live now?

    Origins

    CUHF developed gradually in conversations between co-founders Sarah Buie and Diana Chapman Walsh, who met when they were, respectively, the director of Clark’s Higgins School of Humanities and the president of Wellesley College. Buie asked Walsh to keynote a Ford-funded Difficult Dialogues initiative in 2006. In the years and conversations that followed, they concluded that the most difficult dialogue wasn’t happening: an honest engagement with the realities and implications of a rapidly heating planet Earth.

    With social scientist Susi Moser, they chose the practice of council, a blend of both modern and traditional dialogic forms, and began with a cohort of 12 environmental leaders willing to examine the gravest implications of climate change in a supportive setting — what Walsh calls “a kind of container for a deep dive into dark waters.” That original circle met in three long weekends over 2014 and continues today as the original CUHF Steady Council.

    Taking root at MIT

    Since then, the Council on the Uncertain Human Future has grown into an international network, with circles at universities, research centers, and other communities across the United States and in Scotland and Kathmandu. The practice took root at MIT (where Walsh is a life member emerita of the MIT Corporation) in 2020.

    Leadership and communications teams in the MIT School of Humanities, Arts and Social Sciences (SHASS) Office of the Dean and the Environmental Solutions Initiative (ESI) recognized the need the council could meet on a campus buzzing with research and initiatives aimed at improving the health of the planet. Joining forces with the council leadership, the two MIT groups collaborated to launch the program at MIT, inviting participants from across the Institute and sharing information on the MIT Climate Portal.

    Intentional conversations

    “The council gives the MIT community the kind of deep discourse that is so necessary to face climate change and a rapidly changing world,” says ESI director and professor of architecture John Fernández. “These conversations open an opportunity to create a new kind of breakthrough of mindsets. It’s a rare chance to pause and ask: Are we doing the things we should be doing, given MIT’s mission to the nation and the world, and given the challenges facing us?”

    As the CUHF practice spreads, agendas expand to acknowledge changing times; the group produces films and collections of readings, curates an online resource site, and convenes international Zoom events for members on a range of topics, many of which interact with climate, including racism and Covid-19. But its core activity remains the same: an intentional, probing conversation over time. There are no preconceived objectives, only a few simple guidelines: speak briefly, authentically, and spontaneously, moving around the circle; listen with attention and receptivity; observe confidentiality. “Through this process of honest speaking and listening, insight arises and trustworthy community is built,” says Buie.

    While these meetings were held in person before 2020, the full council experience pivoted to Zoom at the start of the pandemic, with two-hour discussions forming an arc over a period of five weeks. Sessions begin with a call for participants to slow down and breathe, grounding themselves for the conversation. The convener offers a series of questions that elicit spontaneous responses, concerns, and observations; later, they invite visioning of new possibilities.

    Inviting emergent possibility

    While the process may yield tangible outcomes — for example, a curriculum initiative at Clark called A New Earth Conversation — its greatest value, according to Buie, “is the collective listening, acknowledgment, and emergent possibility it invites. Given the profound cultural misunderstandings and misalignments behind it, climate breakdown defies normative approaches to ‘problem-solving.’ The Council enables us to live into the uncertainty with more awareness, humility, curiosity, and compassion. Participants feel the change; they return to their work and lives differently, and less alone.”

    Roughly 60 faculty and staff from across MIT, all engaged in climate-related work, have participated so far in council circles. The 2021 edition of the Institute’s Climate Action Plan provides for the expansion of councils at MIT to deepen humanistic understanding of the climate crisis. The conversations are also a space for engaging with how the climate crisis is related to what the plan calls “the imperative of justice” and “the intertwined problems of equity and economic transition.”

    Reflecting on the growth of the council’s humanistic practice at MIT, Agustín Rayo, professor of philosophy and the Kenan Sahin Dean of MIT SHASS, says: “The council conversations about the future of our species and the planet are an invaluable contribution to MIT’s ‘whole-campus’ focus on the climate crisis.”

    Growing the council at MIT means broadening participation. Postdocs will join a new circle this fall, with opportunities for student involvement soon to follow. More than a third of MIT’s prior council participants have continued with monthly Steady Council meetings, which sometimes reference recent events while deepening the council practice at MIT. The session in December 2021, for example, began with reports from MIT community members who had attended the COP26 UN climate change conference in Glasgow, then broke into council circles to engage the questions raised.

    Cognitive leaps

    The MIT Steady Council is organized by Curt Newton, director of MIT OpenCourseWare and an early contributor to the online platform that became the Institute’s Climate Portal. Newton sees a productive tension between MIT’s culture of problem-solving and the council’s call for participants to slow down and question the paradigms in which they operate. “It can feel wrong, or at least unfamiliar, to put ourselves in a mode where we’re not trying to create an agenda and an action plan,” he says. “To get us to step back from that and think together about the biggest picture before we allow ourselves to be pulled into that solution mindset  — it’s a necessary experiment for places like MIT.”

    Over the past decade, Newton says, he has searched for ways to direct his energies toward environmental issues “with one foot firmly planted at MIT and one foot out in the world.” The silo-busting personal connections he’s made with colleagues through the council have empowered him “to show up with my full climate self at work.”

    Walsh finds it especially promising to see CUHF taking root at MIT, “a place of intensity, collaboration, and high ideals, where the most stunning breakthroughs occur when someone takes a step back, stops the action, changes the trajectory for a time and begins asking new questions that challenge received wisdom.” She sees council as a communal practice that encourages those cognitive leaps. “If ever there were a moment in history that cried out for a paradigm shift,” she says, “surely this is it.”

    Funding for the Council on the Uncertain Human Future comes from the Christopher Reynolds Foundation and the Kaiser Family Foundation.

    Prepared by MIT SHASS Communications
    Editorial team: Nicole Estvanik Taylor and Emily Hiestand

  • Machine learning, harnessed to extreme computing, aids fusion energy development

    MIT research scientists Pablo Rodriguez-Fernandez and Nathan Howard have just completed one of the most demanding calculations in fusion science — predicting the temperature and density profiles of a magnetically confined plasma via first-principles simulation of plasma turbulence. Solving this problem by brute force is beyond the capabilities of even the most advanced supercomputers. Instead, the researchers used an optimization methodology developed for machine learning to dramatically reduce the CPU time required while maintaining the accuracy of the solution.

    Fusion energy

    Fusion offers the promise of unlimited, carbon-free energy through the same physical process that powers the sun and the stars. It requires heating the fuel to temperatures above 100 million degrees, well above the point where the electrons are stripped from their atoms, creating a form of matter called plasma. On Earth, researchers use strong magnetic fields to isolate and insulate the hot plasma from ordinary matter. The stronger the magnetic field, the better the quality of the insulation that it provides.

    Rodriguez-Fernandez and Howard have focused on predicting the performance expected in the SPARC device, a compact, high-magnetic-field fusion experiment, currently under construction by the MIT spin-out company Commonwealth Fusion Systems (CFS) and researchers from MIT’s Plasma Science and Fusion Center. While the calculation required an extraordinary amount of computer time, over 8 million CPU-hours, what was remarkable was not how much time was used, but how little, given the daunting computational challenge.

    The computational challenge of fusion energy

    Turbulence, which is the mechanism for most of the heat loss in a confined plasma, is one of the science’s grand challenges and the greatest problem remaining in classical physics. The equations that govern fusion plasmas are well known, but analytic solutions are not possible in the regimes of interest, where nonlinearities are important and solutions encompass an enormous range of spatial and temporal scales. Scientists resort to solving the equations by numerical simulation on computers. It is no accident that fusion researchers have been pioneers in computational physics for the last 50 years.

    One of the fundamental problems for researchers is reliably predicting plasma temperature and density given only the magnetic field configuration and the externally applied input power. In confinement devices like SPARC, the external power and the heat input from the fusion process are lost through turbulence in the plasma. The turbulence itself is driven by the difference in the extremely high temperature of the plasma core and the relatively cool temperatures of the plasma edge (merely a few million degrees). Predicting the performance of a self-heated fusion plasma therefore requires a calculation of the power balance between the fusion power input and the losses due to turbulence.

    These calculations generally start by assuming plasma temperature and density profiles at a particular location, then computing the heat transported locally by turbulence. However, a useful prediction requires a self-consistent calculation of the profiles across the entire plasma, which includes both the heat input and turbulent losses. Directly solving this problem is beyond the capabilities of any existing computer, so researchers have developed an approach that stitches the profiles together from a series of demanding but tractable local calculations. This method works, but since the heat and particle fluxes depend on multiple parameters, the calculations can be very slow to converge.

    However, techniques emerging from the field of machine learning are well suited to optimizing just such a calculation. Starting with a set of computationally intensive local calculations run with the full-physics, first-principles CGYRO code (provided by a team from General Atomics led by Jeff Candy), Rodriguez-Fernandez and Howard fit a surrogate mathematical model, which was used to explore and optimize the search within the parameter space. The results of the optimization were compared to the exact calculations at each optimum point, and the system was iterated to a desired level of accuracy. The researchers estimate that the technique reduced the number of runs of the CGYRO code by a factor of four.
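
    The loop below is a deliberately simplified sketch of that surrogate-assisted workflow (a toy one-dimensional function stands in for the expensive CGYRO flux calculations, and the polynomial surrogate is our own placeholder, not the model used in the paper): run a few costly evaluations, fit a cheap surrogate, optimize the surrogate, verify the proposed optimum with the expensive calculation, and repeat until the answer stops improving.

        import numpy as np

        # Toy stand-in for an expensive first-principles calculation.
        def expensive_mismatch(x):
            return (x - 1.7) ** 2 + 0.05 * x ** 3

        xs = list(np.linspace(0.0, 3.0, 4))      # a handful of costly runs
        ys = [expensive_mismatch(x) for x in xs]

        for _ in range(5):
            coeffs = np.polyfit(xs, ys, deg=2)   # fit a cheap surrogate
            grid = np.linspace(0.0, 3.0, 1001)
            x_new = grid[np.argmin(np.polyval(coeffs, grid))]  # optimize the surrogate
            y_new = expensive_mismatch(x_new)    # verify with the "exact" code
            improvement = min(ys) - y_new
            xs.append(x_new)
            ys.append(y_new)
            if improvement < 1e-3:               # iterate to desired accuracy
                break

        best_y, best_x = min(zip(ys, xs))
        print(f"surrogate-guided optimum: x = {best_x:.3f}, mismatch = {best_y:.4f}")

    In the real calculation the surrogate is fit over many plasma parameters at once, which is where the reported factor-of-four reduction in CGYRO runs comes from.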

    New approach increases confidence in predictions

    This work, described in a recent publication in the journal Nuclear Fusion, is the highest-fidelity calculation ever made of the core of a fusion plasma. It refines and confirms predictions made with less demanding models. Professor Jonathan Citrin, of the Eindhoven University of Technology and leader of the fusion modeling group for DIFFER, the Dutch Institute for Fundamental Energy Research, commented: “The work significantly accelerates our capabilities in more routinely performing ultra-high-fidelity tokamak scenario prediction. This algorithm can help provide the ultimate validation test of machine design or scenario optimization carried out with faster, more reduced modeling, greatly increasing our confidence in the outcomes.”

    In addition to increasing confidence in the fusion performance of the SPARC experiment, this technique provides a roadmap to check and calibrate reduced physics models, which run with a small fraction of the computational power. Such models, cross-checked against the results generated from turbulence simulations, will provide a reliable prediction before each SPARC discharge, helping to guide experimental campaigns and improving the scientific exploitation of the device. It can also be used to tweak and improve even simple data-driven models, which run extremely quickly, allowing researchers to sift through enormous parameter ranges to narrow down possible experiments or possible future machines.

    The research was funded by CFS, with computational support from the National Energy Research Scientific Computing Center, a U.S. Department of Energy Office of Science User Facility.

  • Using excess heat to improve electrolyzers and fuel cells

    Reducing the use of fossil fuels will have unintended consequences for the power-generation industry and beyond. For example, many industrial chemical processes use fossil-fuel byproducts as precursors to things like asphalt, glycerine, and other important chemicals. One solution to reduce the impact of the loss of fossil fuels on industrial chemical processes is to store and use the heat that nuclear fission produces. New MIT research has dramatically improved a way to put that heat toward generating chemicals through a process called electrolysis. 

    Electrolyzers are devices that use electricity to split water (H2O) and generate molecules of hydrogen (H2) and oxygen (O2). Hydrogen is used in fuel cells to generate electricity and drive electric cars or drones, or in industrial operations like the production of steel, ammonia, and polymers. Electrolyzers can also take in water and carbon dioxide (CO2) and produce oxygen and ethylene (C2H4), a chemical used in polymers and elsewhere.

    There are three main types of electrolyzers. One type works at room temperature but has downsides: it is inefficient and requires rare metals, such as platinum. A second type is more efficient but runs at high temperatures, above 700 degrees Celsius. But metals corrode at that temperature, and the devices need expensive sealing and insulation. The third type would be a Goldilocks solution for nuclear heat if it were perfected, running at 300-600 degrees Celsius and requiring mostly cheap materials like stainless steel. These cells have never been operated as efficiently as theory says they should be. The new work, published this month in Nature, both illuminates the problem and offers a solution.

    A sandwich mystery

    The intermediate-temperature devices use what are called protonic ceramic electrochemical cells. Each cell is a sandwich, with a dense electrolyte layered between two porous electrodes. Water vapor is pumped into the top electrode. A wire on the side connects the two electrodes, and externally generated electricity runs from the top to the bottom. The voltage pulls electrons out of the water, which splits the molecule, releasing oxygen. A hydrogen atom without an electron is just a proton. The protons get pulled through the electrolyte to rejoin with the electrons at the bottom electrode and form H2 molecules, which are then collected.
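
    For a rough sense of scale, Faraday’s law ties the hydrogen output directly to the current through the cell: every H2 molecule requires two electrons. The short calculation below uses illustrative operating numbers of our own choosing (a hypothetical 1 A/cm2 across 100 cm2 of cell area), not values from the paper.

        # Back-of-the-envelope Faraday's-law estimate with hypothetical numbers.
        FARADAY = 96485.0          # coulombs per mole of electrons
        ELECTRONS_PER_H2 = 2       # two electrons per hydrogen molecule

        current_density = 1.0      # A/cm^2, assumed operating point
        cell_area = 100.0          # cm^2, assumed single-cell area

        current = current_density * cell_area              # amperes (C/s)
        mol_h2_per_s = current / (ELECTRONS_PER_H2 * FARADAY)
        grams_per_hour = mol_h2_per_s * 2.016 * 3600       # H2 is ~2.016 g/mol

        print(f"ideal H2 output: {grams_per_hour:.2f} g/hour at 100% Faraday efficiency")

    The energy cost of driving that current is where real cells fall short of the ideal, which is why the proton-conduction losses described next matter so much.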

    On its own, the electrolyte in the middle, made mainly of barium, cerium, and zirconium, conducts protons very well. “But when we put the same material into this three-layer device, the proton conductivity of the full cell is pretty bad,” says Yanhao Dong, a postdoc in MIT’s Department of Nuclear Science and Engineering and a paper co-author. “Its conductivity is only about 50 percent of the bulk form’s. We wondered why there’s an inconsistency here.”

    A couple of clues pointed them in the right direction. First, if they don’t prepare the cell very carefully, the top layer, only about 20 microns (0.02 millimeters) thick, doesn’t stay attached. “Sometimes if you use just Scotch tape, it will peel off,” Dong says. Second, when they looked at a cross section of a device using a scanning electron microscope, they saw that the top surface of the electrolyte layer was flat, whereas the bottom surface of the porous electrode sitting on it was bumpy, and the two came into contact in only a few places. They didn’t bond well. That precarious interface leads to both structural delamination and poor proton passage from the electrode to the electrolyte.

    Acidic solution

    The solution turned out to be simple: researchers roughed up the top of the electrolyte. Specifically, they applied acid for 10 minutes, which etched grooves into the surface. Ju Li, the Battelle Energy Alliance Professor in Nuclear Engineering and professor of materials science and engineering at MIT, and a paper co-author, likens it to sandblasting a surface before applying paint to increase adhesion. Their acid-treated cells produced about 200 percent more hydrogen per area at 1.5 volts at 600 C than did any previous cell of its type, and worked well down to 350 C with very little performance decay over extended operation. 

    “The authors reported a surprisingly simple yet highly effective surface treatment to dramatically improve the interface,” says Liangbing Hu, the director of the Center for Materials Innovation at the Maryland Energy Innovation Institute, who was not involved in the work. He calls the cell performance “exceptional.”

    “We are excited and surprised” by the results, Dong says. “The engineering solution seems quite simple. And that’s actually good, because it makes it very applicable to real applications.” In a practical product, many such cells would be stacked together to form a module. MIT’s partner in the project, Idaho National Laboratory, is very strong in engineering and prototyping, so Li expects to see electrolyzers based on this technology at scale before too long. “At the materials level, this is a breakthrough that shows that at a real-device scale you can work at this sweet spot of temperature of 350 to 600 degrees Celsius for nuclear fission and fusion reactors,” he says.

    “Reduced operating temperature enables cheaper materials for the large-scale assembly, including the stack,” says Idaho National Laboratory researcher and paper co-author Dong Ding. “The technology operates within the same temperature range as several important, current industrial processes, including ammonia production and CO2 reduction. Matching these temperatures will expedite the technology’s adoption within the existing industry.”

    “This is very significant for both Idaho National Lab and us,” Li adds, “because it bridges nuclear energy and renewable electricity.” He notes that the technology could also help fuel cells, which are basically electrolyzers run in reverse, using green hydrogen or hydrocarbons to generate electricity. According to Wei Wu, a materials scientist at Idaho National Laboratory and a paper co-author, “this technique is quite universal and compatible with other solid electrochemical devices.”

    Dong says it’s rare for a paper to advance both science and engineering to such a degree. “We are happy to combine those together and get both very good scientific understanding and also very good real-world performance.”

    This work, done in collaboration with Idaho National Laboratory, New Mexico State University, and the University of Nebraska–Lincoln, was funded, in part, by the U.S. Department of Energy.

  • Using plant biology to address climate change

    On April 11, MIT announced five multiyear flagship projects in the first-ever Climate Grand Challenges, a new initiative to tackle complex climate problems and deliver breakthrough solutions to the world as quickly as possible. This article is the fourth in a five-part series highlighting the most promising concepts to emerge from the competition and the interdisciplinary research teams behind them.

    The impact of our changing climate on agriculture and food security — and how contemporary agriculture contributes to climate change — is at the forefront of MIT’s multidisciplinary project “Revolutionizing agriculture with low-emissions, resilient crops.” The project is one of five flagship winners in the Climate Grand Challenges competition, and brings together researchers from the departments of Biology, Biological Engineering, Chemical Engineering, and Civil and Environmental Engineering.

    “Our team’s research seeks to address two connected challenges: first, the need to reduce the greenhouse gas emissions produced by agricultural fertilizer; second, the fact that the yields of many current agricultural crops will decrease, due to the effects of climate change on plant metabolism,” says the project’s faculty lead, Christopher Voigt, the Daniel I.C. Wang Professor in MIT’s Department of Biological Engineering. “We are pursuing six interdisciplinary projects that are each key to our overall goal of developing low-emissions methods for fertilizing plants that are bioengineered to be more resilient and productive in a changing climate.”

    Whitehead Institute members Mary Gehring and Jing-Ke Weng, plant biologists who are also associate professors in MIT’s Department of Biology, will lead two of those projects.

    Promoting crop resilience

    For most of human history, climate change occurred gradually, over hundreds or thousands of years. That pace allowed plants to adapt to variations in temperature, precipitation, and atmospheric composition. However, human-driven climate change has occurred much more quickly, and crop plants have suffered: Crop yields are down in many regions, as is seed protein content in cereal crops.

    “If we want to ensure an abundant supply of nutritious food for the world, we need to develop fundamental mechanisms for bioengineering a wide variety of crop plants that will be both hearty and nutritious in the face of our changing climate,” says Gehring. In her previous work, she has shown that many aspects of plant reproduction and seed development are controlled by epigenetics — that is, by information outside of the DNA sequence. She has been using that knowledge and the research methods she has developed to identify ways to create varieties of seed-producing plants that are more productive and resilient than current food crops.

    But plant biology is complex, and while it is possible to develop plants that integrate robustness-enhancing traits by combining dissimilar parental strains, scientists are still learning how to ensure that the new traits are carried forward from one generation to the next. “Plants that carry the robustness-enhancing traits have ‘hybrid vigor,’ and we believe that the perpetuation of those traits is controlled by epigenetics,” Gehring explains. “Right now, some food crops, like corn, can be engineered to benefit from hybrid vigor, but those traits are not inherited. That’s why farmers growing many of today’s most productive varieties of corn must purchase and plant new batches of seeds each year. Moreover, many important food crops have not yet realized the benefits of hybrid vigor.”

    The project Gehring leads, “Developing Clonal Seed Production to Fix Hybrid Vigor,” aims to enable food crop plants to create seeds that are both more robust and genetically identical to the parent — and thereby able to pass beneficial traits from generation to generation.

    The process of clonal (or asexual) production of seeds that are genetically identical to the maternal parent is called apomixis. Gehring says, “Because apomixis is present in 400 flowering plant species — about 1 percent of flowering plant species — it is probable that genes and signaling pathways necessary for apomixis are already present within crop plants. Our challenge is to tweak those genes and pathways so that the plant switches reproduction from sexual to asexual.”

    The project will leverage the fact that genes and pathways related to autonomous asexual development of the endosperm — a seed’s nutritive tissue — exist in the model plant Arabidopsis thaliana. In previous work on Arabidopsis, Gehring’s lab researched a specific gene that, when misregulated, drives development of an asexual endosperm-like material. “Normally, that seed would not be viable,” she notes. “But we believe that by epigenetic tuning of the expression of additional relevant genes, we will enable the plant to retain that material — and help achieve apomixis.”

    If Gehring and her colleagues succeed in creating a gene-expression “formula” for introducing endosperm apomixis into a wide range of crop plants, they will have made a fundamental and important achievement. Such a method could be applied throughout agriculture to create and perpetuate new crop breeds able to withstand their changing environments while requiring less fertilizer and fewer pesticides.

    Creating “self-fertilizing” crops

    Roughly a quarter of greenhouse gas (GHG) emissions in the United States are a product of agriculture. Fertilizer production and use accounts for one third of those emissions and includes nitrous oxide, which has a heat-trapping capacity 298 times that of carbon dioxide, according to a 2018 Frontiers in Plant Science study. Most artificial fertilizer production also consumes huge quantities of natural gas and uses minerals mined from nonrenewable resources. After all that, much of the nitrogen fertilizer becomes runoff that pollutes local waterways. For those reasons, this Climate Grand Challenges flagship project aims to greatly reduce the use of human-made fertilizers.

    One tantalizing approach is to cultivate cereal crop plants — which account for about 75 percent of global food production — capable of drawing nitrogen from metabolic interactions with bacteria in the soil. Whitehead Institute’s Weng leads an effort to do just that: genetically bioengineer crops such as corn, rice, and wheat to, essentially, create their own fertilizer through a symbiotic relationship with nitrogen-fixing microbes.

    “Legumes such as bean and pea plants can form root nodules through which they receive nitrogen from rhizobia bacteria in exchange for carbon,” Weng explains. “This metabolic exchange means that legumes release far less greenhouse gas — and require far less investment of fossil energy — than do cereal crops, which use a huge portion of the artificially produced nitrogen fertilizers employed today.

    “Our goal is to develop methods for transferring legumes’ ‘self-fertilizing’ capacity to cereal crops,” Weng says. “If we can, we will revolutionize the sustainability of food production.”

    The project — formally entitled “Mimicking legume-rhizobia symbiosis for fertilizer production in cereals” — will be a multistage, five-year effort. It draws on Weng’s extensive studies of metabolic evolution in plants and his identification of molecules involved in formation of the root nodules that permit exchanges between legumes and nitrogen-fixing bacteria. It also leverages his expertise in reconstituting specific signaling and metabolic pathways in plants.

    Weng and his colleagues will begin by deciphering the full spectrum of small-molecule signaling processes that occur between legumes and rhizobium bacteria. Then they will genetically engineer an analogous system in nonlegume crop plants. Next, using state-of-the-art metabolomic methods, they will identify which small molecules excreted from legume roots prompt a nitrogen/carbon exchange from rhizobium bacteria. Finally, the researchers will genetically engineer the biosynthesis of those molecules in the roots of nonlegume plants and observe their effect on the rhizobium bacteria surrounding the roots.

    While the project is complex and technically challenging, its potential is staggering. “Focusing on corn alone, this could reduce the production and use of nitrogen fertilizer by 160,000 tons,” Weng notes. “And it could halve the related emissions of nitrous oxide gas.”

  • Empowering people to adapt on the frontlines of climate change

    On April 11, MIT announced five multiyear flagship projects in the first-ever Climate Grand Challenges, a new initiative to tackle complex climate problems and deliver breakthrough solutions to the world as quickly as possible. This article is the fifth in a five-part series highlighting the most promising concepts to emerge from the competition and the interdisciplinary research teams behind them.

    In the coastal south of Bangladesh, rice paddies that farmers could once harvest three times a year lie barren. Sea-level rise brings saltwater to the soil, ruining the staple crop. It’s one of many impacts, and inequities, of climate change. Despite producing less than 1 percent of global carbon emissions, Bangladesh is suffering more than most countries. Rising seas, heat waves, flooding, and cyclones threaten 90 million people.

    A platform being developed in a collaboration between MIT and BRAC, a Bangladesh-based global development organization, aims to inform and empower climate-threatened communities to proactively adapt to a changing future. Selected as one of five MIT Climate Grand Challenges flagship projects, the Climate Resilience Early Warning System (CREWSnet) will forecast the local impacts of climate change on people’s lives, homes, and livelihoods. These forecasts will guide BRAC’s development of climate-resiliency programs to help residents prepare for and adapt to life-altering conditions.

    “The communities that CREWSnet will focus on have done little to contribute to the problem of climate change in the first place. However, because of socioeconomic situations, they may be among the most vulnerable. We hope that by providing state-of-the-art projections and sharing them broadly with communities, and working through partners like BRAC, we can help improve the capacity of local communities to adapt to climate change, significantly,” says Elfatih Eltahir, the H.M. King Bhumibol Professor in the Department of Civil and Environmental Engineering.

    Eltahir leads the project with John Aldridge and Deborah Campbell in the Humanitarian Assistance and Disaster Relief Systems Group at Lincoln Laboratory. Additional partners across MIT include the Center for Global Change Science; the Department of Earth, Atmospheric and Planetary Sciences; the Joint Program on the Science and Policy of Global Change; and the Abdul Latif Jameel Poverty Action Lab. 

    Predicting local risks

    CREWSnet’s forecasts rely upon a sophisticated model, developed in Eltahir’s research group over the past 25 years, called the MIT Regional Climate Model. This model zooms in on climate processes at local scales, at a resolution as granular as 6 miles. In Bangladesh’s population-dense cities, a 6-mile area could encompass tens, or even hundreds, of thousands of people. The model takes into account the details of a region’s topography, land use, and coastline to predict changes in local conditions.

    When applying this model over Bangladesh, researchers found that heat waves will get more severe and more frequent over the next 30 years. In particular, wet-bulb temperatures, which indicate how well humans can cool down by sweating, will rise to dangerous levels rarely observed today, particularly in western, inland cities.
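
    Wet-bulb temperature combines heat and humidity into a single measure of heat stress. As a rough illustration (our own sketch using Stull’s 2011 empirical approximation, not a CREWSnet calculation, and with made-up sample conditions), the same afternoon air temperature becomes far more dangerous as humidity climbs:

        import math

        def wet_bulb_stull(temp_c, rh_percent):
            """Approximate wet-bulb temperature (deg C) from air temperature (deg C)
            and relative humidity (%), using Stull's 2011 empirical fit, valid
            roughly for RH above ~5% at standard sea-level pressure."""
            T, RH = temp_c, rh_percent
            return (T * math.atan(0.151977 * math.sqrt(RH + 8.313659))
                    + math.atan(T + RH) - math.atan(RH - 1.676331)
                    + 0.00391838 * RH ** 1.5 * math.atan(0.023101 * RH)
                    - 4.686035)

        # Illustrative conditions only, not model output.
        for rh in (30, 50, 70):
            print(f"36 C air at {rh}% humidity -> wet-bulb ~ {wet_bulb_stull(36, rh):.1f} C")

    Sustained wet-bulb temperatures in the low-to-mid 30s Celsius approach the limit of the body’s ability to shed heat, which is why projections of this variable are central to the heat-risk forecasts described above.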

    Such hot spots exacerbate other challenges predicted to worsen near Bangladesh’s coast. Rising sea levels and powerful cyclones are eroding and flooding coastal communities, causing saltwater to surge into land and freshwater. This salinity intrusion is detrimental to human health, ruins drinking water supplies, and harms crops, livestock, and aquatic life that farmers and fishermen depend on for food and income.

    CREWSnet will fuse climate science with forecasting tools that predict the social and economic impacts to villages and cities. These forecasts — such as how often a crop season may fail, or how far floodwaters will reach — can steer decision-making.

    “What people need to know, whether they’re a governor or head of a household, is ‘What is going to happen in my area, and what decisions should I make for the people I’m responsible for?’ Our role is to integrate this science and technology together into a decision support system,” says Aldridge, whose group at Lincoln Laboratory specializes in this area. Most recently, they transitioned a hurricane-evacuation planning system to the U.S. government. “We know that making decisions based on climate change requires a deep level of trust. That’s why having a powerful partner like BRAC is so important,” he says.

    Testing interventions

    Established 50 years ago, just after Bangladesh’s independence, BRAC works in every district of the nation to provide social services that help people rise from extreme poverty. Today, it is one of the world’s largest nongovernmental organizations, serving 110 million people across 11 countries in Asia and Africa, but its success is cultivated locally.

    “BRAC is thrilled to partner with leading researchers at MIT to increase climate resilience in Bangladesh and provide a model that can be scaled around the globe,” says Donella Rapier, president and CEO of BRAC USA. “Locally led climate adaptation solutions that are developed in partnership with communities are urgently needed, particularly in the most vulnerable regions that are on the frontlines of climate change.”

    CREWSnet will help BRAC identify communities most vulnerable to forecasted impacts. In these areas, they will share knowledge and innovate or bolster programs to improve households’ capacity to adapt.

    Many climate initiatives are already underway. One program equips homes to filter and store rainwater, as salinity intrusion makes safe drinking water hard to access. Another program is building resilient housing, able to withstand 120-mile-per-hour winds, that can double as local shelters during cyclones and flooding. Other services are helping farmers switch to different livestock or crops better suited for wetter or saltier conditions (e.g., ducks instead of chickens, or salt-tolerant rice), providing interest-free loans to enable this change.

    But adapting in place will not always be possible, for example in areas predicted to be submerged or unbearably hot by midcentury. “Bangladesh is working on identifying and developing climate-resilient cities and towns across the country, as closer-by alternative destinations as compared to moving to Dhaka, the overcrowded capital of Bangladesh,” says Campbell. “CREWSnet can help identify regions better suited for migration, and climate-resilient adaptation strategies for those regions.” At the same time, BRAC’s Climate Bridge Fund is helping to prepare cities for climate-induced migration, building up infrastructure and financial services for people who have been displaced.

    Evaluating impact

    While CREWSnet’s goal is to enable action, it cannot by itself measure the impact of those actions. The Abdul Latif Jameel Poverty Action Lab (J-PAL), a development economics program in the MIT School of Humanities, Arts, and Social Sciences, will help evaluate the effectiveness of the climate-adaptation programs.

    “We conduct randomized controlled trials, similar to medical trials, that help us understand if a program improved people’s lives,” says Claire Walsh, the project director of the King Climate Action Initiative at J-PAL. “Once CREWSnet helps BRAC implement adaptation programs, we will generate scientific evidence on their impacts, so that BRAC and CREWSnet can make a case to funders and governments to expand effective programs.”

    The team aspires to bring CREWSnet to other nations disproportionately impacted by climate change. “Our vision is to have this be a globally extensible capability,” says Campbell. CREWSnet’s name evokes another early-warning decision-support system, FEWSnet, that helped organizations address famine in eastern Africa in the 1980s. Today it is a pillar of food-security planning around the world.

    CREWSnet hopes for a similar impact in climate change planning. Its selection as an MIT Climate Grand Challenges flagship project will inject the project with more funding and resources, momentum that will also help BRAC’s fundraising. The team plans to deploy CREWSnet to southwestern Bangladesh within five years.

    “The communities that we are aspiring to reach with CREWSnet are deeply aware that their lives are changing — they have been looking climate change in the eye for many years. They are incredibly resilient, creative, and talented,” says Ashley Toombs, the external affairs director for BRAC USA. “As a team, we are excited to bring this system to Bangladesh. And what we learn together, we will apply at potentially even larger scales.”

  • Developing electricity-powered, low-emissions alternatives to carbon-intensive industrial processes

    On April 11, 2022, MIT announced five multiyear flagship projects in the first-ever Climate Grand Challenges, a new initiative to tackle complex climate problems and deliver breakthrough solutions to the world as quickly as possible. This is the second article in a five-part series highlighting the most promising concepts to emerge from the competition, and the interdisciplinary research teams behind them.

    One of the biggest leaps that humankind could take to drastically lower greenhouse gas emissions globally would be the complete decarbonization of industry. But without finding low-cost, environmentally friendly substitutes for industrial materials, the traditional production of steel, cement, ammonia, and ethylene will continue pumping out billions of tons of carbon annually; these sectors alone are responsible for at least one third of society’s global greenhouse gas emissions. 

    A major problem is that industrial manufacturers, whose success depends on reliable, cost-efficient, and large-scale production methods, are too heavily invested in processes that have historically been powered by fossil fuels to quickly switch to new alternatives. It’s a machine that kicked on more than 100 years ago, and which MIT electrochemical engineer Yet-Ming Chiang says we can’t shut off without major disruptions to the world’s massive supply chain of these materials. What’s needed, Chiang says, is a broader, collaborative clean energy effort that takes “targeted fundamental research, all the way through to pilot demonstrations that greatly lowers the risk for adoption of new technology by industry.”

    This would be a new approach to decarbonization of industrial materials production that relies on largely unexplored but cleaner electrochemical processes. New production methods could be optimized and integrated into the industrial machine to make it run on low-cost, renewable electricity in place of fossil fuels. 

    Recognizing this, Chiang, the Kyocera Professor in the Department of Materials Science and Engineering, teamed with research collaborator Bilge Yildiz, the Breene M. Kerr Professor of Nuclear Science and Engineering and professor of materials science and engineering, with key input from Karthish Manthiram, visiting professor in the Department of Chemical Engineering, to submit a project proposal to the MIT Climate Grand Challenges. Their plan: to create an innovation hub on campus that would bring together MIT researchers individually investigating decarbonization of steel, cement, ammonia, and ethylene under one roof, combining research equipment and directly collaborating on new methods to produce these four key materials.

    Many researchers across MIT have already signed on to join the effort, including Antoine Allanore, associate professor of metallurgy, who specializes in the development of sustainable materials and manufacturing processes, and Elsa Olivetti, the Esther and Harold E. Edgerton Associate Professor in the Department of Materials Science and Engineering, who is an expert in materials economics and sustainability. Other MIT faculty currently involved include Fikile Brushett, Betar Gallant, Ahmed Ghoniem, William Green, Jeffrey Grossman, Ju Li, Yuriy Román-Leshkov, Yang Shao-Horn, Robert Stoner, Yogesh Surendranath, Timothy Swager, and Kripa Varanasi.

    “The team we brought together has the expertise needed to tackle these challenges, including electrochemistry — using electricity to decarbonize these chemical processes — and materials science and engineering, process design and scale-up, technoeconomic analysis, and system integration, which is all needed for this to go out from our labs to the field,” says Yildiz.

    Selected from a field of more than 100 proposals, their Center for Electrification and Decarbonization of Industry (CEDI) will be the first such institute worldwide dedicated to testing and scaling the most innovative and promising technologies in sustainable chemicals and materials. CEDI will work to facilitate rapid translation of lab discoveries into affordable, scalable industry solutions, with potential to offset as much as 15 percent of greenhouse gas emissions. The team estimates that some CEDI projects already underway could be commercialized within three years.

    “The real timeline is as soon as possible,” says Chiang.

    To achieve CEDI’s ambitious goals, a physical location is key, staffed with permanent faculty, as well as undergraduates, graduate students, and postdocs. Yildiz says the center’s success will depend on engaging student researchers to carry forward with research addressing the biggest ongoing challenges to decarbonization of industry.

    “We are training young scientists, students, on the learned urgency of the problem,” says Yildiz. “We empower them with the skills needed, and even if an individual project does not find the implementation in the field right away, at least, we would have trained the next generation that will continue to go after them in the field.”

    Chiang’s background in electrochemistry showed him how the efficiency of cement production could benefit from adopting clean electricity sources, and Yildiz’s work on ethylene, the source of plastic and one of industry’s most valued chemicals, has revealed overlooked cost benefits to switching to electrochemical processes with less expensive starting materials. With industry partners, they hope to continue these lines of fundamental research along with Allanore, who is focused on electrifying steel production, and Manthiram, who is developing new processes for ammonia. Olivetti will focus on understanding risks and barriers to implementation. This multilateral approach aims to speed up the timeline to industry adoption of new technologies at the scale needed for global impact.

    “One of the points of emphasis in this whole center is going to be applying technoeconomic analysis of what it takes to be successful at a technical and economic level, as early in the process as possible,” says Chiang.

    The impact of large-scale industry adoption of clean energy sources in these four key areas that CEDI plans to target first would be profound, as these sectors are currently responsible for 7.5 billion tons of emissions annually. There is the potential for even greater impact on emissions as new knowledge is applied to other industrial products beyond the initial four targets of steel, cement, ammonia, and ethylene. Meanwhile, the center will stand as a hub to attract new industry, government stakeholders, and research partners to collaborate on urgently needed solutions, both newly arising and long overdue.

    When Chiang and Yildiz first met to discuss ideas for MIT Climate Grand Challenges, they decided they wanted to build a climate research center that functioned unlike any other to help pivot large industry toward decarbonization. Beyond considering how new solutions will impact industry’s bottom line, CEDI will also investigate unique synergies that could arise from the electrification of industry, like processes that would create new byproducts that could be the feedstock to other industry processes, reducing waste and increasing efficiencies in the larger system. And because industry is so good at scaling, those added benefits would be widespread, finally replacing century-old technologies with critical updates designed to improve production and markedly reduce industry’s carbon footprint sooner rather than later.

    “Everything we do, we’re going to try to do with urgency,” Chiang says. “The fundamental research will be done with urgency, and the transition to commercialization, we’re going to do with urgency.”

  • Computing our climate future

    On Monday, MIT announced five multiyear flagship projects in the first-ever Climate Grand Challenges, a new initiative to tackle complex climate problems and deliver breakthrough solutions to the world as quickly as possible. This article is the first in a five-part series highlighting the most promising concepts to emerge from the competition, and the interdisciplinary research teams behind them.

    With improvements to computer processing power and an increased understanding of the physical equations governing the Earth’s climate, scientists are continually working to refine climate models and improve their predictive power. But the tools they’re refining were originally conceived decades ago with only scientists in mind. When it comes to developing tangible climate action plans, these models remain inscrutable to the policymakers, public safety officials, civil engineers, and community organizers who need their predictive insight most.

    “What you end up having is a gap between what’s typically used in practice, and the real cutting-edge science,” says Noelle Selin, a professor in the Institute for Data, Systems, and Society and the Department of Earth, Atmospheric and Planetary Sciences (EAPS), and co-lead with Professor Raffaele Ferrari on the MIT Climate Grand Challenges flagship project “Bringing Computation to the Climate Crisis.” “How can we use new computational techniques, new understandings, new ways of thinking about modeling, to really bridge that gap between state-of-the-art scientific advances and modeling, and people who are actually needing to use these models?”

    Using this as a driving question, the team isn’t just trying to refine current climate models; they’re building a new one from the ground up.

    This kind of game-changing advancement is exactly what the MIT Climate Grand Challenges initiative is looking for, which is why the proposal has been named one of the five flagship projects in the ambitious Institute-wide program aimed at tackling the climate crisis. The proposal, which was selected from 100 submissions and was among 27 finalists, will receive additional funding and support to further the team’s goal of reimagining the climate modeling system. It also brings together contributors from across the Institute, including the MIT Schwarzman College of Computing, the School of Engineering, and the Sloan School of Management.

    When it comes to pursuing high-impact climate solutions that communities around the world can use, “it’s great to do it at MIT,” says Ferrari, EAPS Cecil and Ida Green Professor of Oceanography. “You’re not going to find many places in the world where you have the cutting-edge climate science, the cutting-edge computer science, and the cutting-edge policy science experts that we need to work together.”

    The climate model of the future

    The proposal builds on work that Ferrari began three years ago as part of a joint project with Caltech, the Naval Postgraduate School, and NASA’s Jet Propulsion Lab. Called the Climate Modeling Alliance (CliMA), the consortium of scientists, engineers, and applied mathematicians is constructing a climate model capable of more accurately projecting future changes in critical variables, such as clouds in the atmosphere and turbulence in the ocean, with uncertainties at least half the size of those in existing models.

    To do this, however, requires a new approach. For one thing, current models are too coarse in resolution — at the 100-to-200-kilometer scale — to resolve small-scale processes like cloud cover, rainfall, and sea ice extent. But also, explains Ferrari, part of this limitation in resolution is due to the fundamental architecture of the models themselves. The languages most global climate models are coded in were first created back in the 1960s and ’70s, largely by scientists for scientists. Since then, advances in computing driven by the corporate world and computer gaming have given rise to dynamic new computer languages, powerful graphics processing units, and machine learning.

    For climate models to take full advantage of these advancements, there’s only one option: starting over with a modern, more flexible language. Written in Julia, part of the Julia Lab’s Scientific Machine Learning technology, and spearheaded by Alan Edelman, a professor of applied mathematics in MIT’s Department of Mathematics, CliMA will be able to harness far more data than current models can handle.

    “It’s been real fun finally working with people in computer science here at MIT,” Ferrari says. “Before it was impossible, because traditional climate models are in a language their students can’t even read.”

    The result is what’s being called the “Earth digital twin,” a climate model that can simulate global conditions on a large scale. This on its own is an impressive feat, but the team wants to take this a step further with their proposal.

    “We want to take this large-scale model and create what we call an ‘emulator’ that is only predicting a set of variables of interest, but it’s been trained on the large-scale model,” Ferrari explains. Emulators are not new technology, but what is new is that these emulators, being referred to as the “Earth digital cousins,” will take advantage of machine learning.

    “Now we know how to train a model if we have enough data to train them on,” says Ferrari. Machine learning for projects like this has only become possible in recent years as more observational data become available, along with improved computer processing power. The goal is to create smaller, more localized models by training them using the Earth digital twin. Doing so will save time and money, which is key if the digital cousins are going to be usable for stakeholders, like local governments and private-sector developers.
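
    To make the emulator idea concrete, here is a minimal sketch of the general workflow in miniature: run an expensive model over a set of scenarios, train a cheap statistical surrogate on those runs, and then query the surrogate almost instantly. It is illustrative only; the stand-in “expensive” model, the forcing variables, and the scikit-learn regressor are hypothetical choices for demonstration, not CliMA code or the team’s actual methods.

        # Minimal emulator sketch (illustrative; not CliMA). A small machine-learning
        # model is trained on input/output pairs from an expensive simulation and can
        # then answer queries for a variable of interest almost instantly.
        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.metrics import mean_absolute_error
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        def expensive_climate_model(co2_ppm, aerosol_index, sst_anomaly):
            """Stand-in for a costly large-scale simulation: returns a regional
            rainfall anomaly (mm) as a nonlinear function of a few forcings,
            plus noise representing internal variability."""
            signal = (0.04 * (co2_ppm - 400)
                      - 12.0 * aerosol_index
                      + 25.0 * np.tanh(sst_anomaly))
            return signal + rng.normal(scale=2.0, size=np.shape(co2_ppm))

        # 1. Run the expensive model over a modest ensemble of forcing scenarios.
        n_runs = 500
        X = np.column_stack([
            rng.uniform(380, 800, n_runs),   # CO2 concentration (ppm)
            rng.uniform(0.0, 1.0, n_runs),   # aerosol loading index
            rng.uniform(-2.0, 3.0, n_runs),  # sea-surface temperature anomaly (K)
        ])
        y = expensive_climate_model(X[:, 0], X[:, 1], X[:, 2])

        # 2. Train a cheap emulator on those runs and check it on held-out runs.
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        emulator = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
        print("held-out MAE:", mean_absolute_error(y_test, emulator.predict(X_test)))

        # 3. The trained emulator answers new "what if" queries almost instantly.
        print("rainfall anomaly at 550 ppm:", emulator.predict([[550, 0.3, 1.0]])[0])

    The quality of such a surrogate depends entirely on the fidelity of the training runs, which is why a trusted large-scale model like the Earth digital twin is needed to generate them in the first place.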

    Adaptable predictions for average stakeholders

    When it comes to setting climate-informed policy, stakeholders need to understand the probability of an outcome within their own regions — in the same way that you would prepare for a hike differently if there’s a 10 percent chance of rain versus a 90 percent chance. The smaller Earth digital cousin models will be able to do things the larger model can’t do, like simulate local regions in real time and provide a wider range of probabilistic scenarios.
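
    Turning a fast emulator into probabilistic, region-specific guidance follows a simple recipe: sample the uncertain inputs many times, evaluate the cheap model for each sample, and report the fraction of outcomes that cross a planning threshold. The short sketch below is again hypothetical, with a one-line function standing in for an Earth digital cousin.

        # Illustrative Monte Carlo sketch (hypothetical emulator, not project code):
        # convert a cheap regional emulator into a probability a planner can act on.
        import numpy as np

        rng = np.random.default_rng(42)

        def regional_rainfall_emulator(co2_ppm, sst_anomaly):
            """Hypothetical emulator: seasonal rainfall anomaly (mm) for one region."""
            return 0.05 * (co2_ppm - 400) + 20.0 * np.tanh(sst_anomaly)

        # Uncertain 2050 inputs expressed as distributions rather than single values;
        # the emulator is cheap enough to evaluate tens of thousands of times.
        n_samples = 50_000
        co2 = rng.normal(loc=550, scale=30, size=n_samples)   # ppm
        sst = rng.normal(loc=1.2, scale=0.5, size=n_samples)  # K

        rainfall = regional_rainfall_emulator(co2, sst)

        # Probability that the anomaly exceeds a 25 mm planning threshold: a single
        # number a decision-maker can weigh, much like a rain forecast before a hike.
        threshold = 25.0
        print("P(anomaly > 25 mm):", float(np.mean(rainfall > threshold)))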

    “Right now, if you wanted to use output from a global climate model, you usually would have to use output that’s designed for general use,” says Selin, who is also the director of the MIT Technology and Policy Program. With the project, the team can take end-user needs into account from the very beginning while also incorporating their feedback and suggestions into the models, helping to “democratize the idea of running these climate models,” as she puts it. Doing so means building an interactive interface that eventually will give users the ability to change input values and run the new simulations in real time. The team hopes that, eventually, the Earth digital cousins could run on something as ubiquitous as a smartphone, although developments like that are currently beyond the scope of the project.

    The next step for the team is building connections with stakeholders. Through the participation of other MIT groups, such as the Joint Program on the Science and Policy of Global Change and the Climate and Sustainability Consortium, they hope to work closely with policymakers, public safety officials, and urban planners to give them predictive tools tailored to their needs that provide actionable outputs for planning. Faced with rising sea levels, for example, coastal cities could better visualize the threat and make informed decisions about infrastructure development and disaster preparedness; communities in drought-prone regions could develop long-term civil planning with an emphasis on water conservation and wildfire resistance.

    “We want to make the modeling and analysis process faster so people can get more direct and useful feedback for near-term decisions,” says Selin.

    The final piece of the challenge is to engage students now so that they can join the project and make a difference. Ferrari has already had luck garnering student interest after co-teaching a class with Edelman and seeing the enthusiasm students have for computer science and climate solutions.

    “We’re intending in this project to build a climate model of the future,” says Selin. “So it seems really appropriate that we would also train the builders of that climate model.”