More stories

  • Two projects receive funding for technologies that avoid carbon emissions

    The Carbon Capture, Utilization, and Storage Center, one of the MIT Energy Initiative (MITEI)’s Low-Carbon Energy Centers, has awarded $900,000 in funding to two new research projects to advance technologies that avoid carbon dioxide (CO2) emissions into the atmosphere and help address climate change. The winning project receives $750,000; an additional project receives $150,000.
    The winning project, led by principal investigator Asegun Henry, the Robert N. Noyce Career Development Professor in the Department of Mechanical Engineering, and co-principal investigator Paul Barton, the Lammot du Pont Professor of Chemical Engineering, aims to produce hydrogen without CO2 emissions while creating a second revenue stream of solid carbon. The additional project, led by principal investigator Matěj Peč, the Victor P. Starr Career Development Chair in the Department of Earth, Atmospheric and Planetary Sciences, seeks to expand understanding of new processes for storing CO2 in basaltic rocks by converting it from an aqueous solution into carbonate minerals.
    Carbon capture, utilization, and storage (CCUS) technologies have the potential to play an important role in limiting or reducing the amount of CO2 in the atmosphere, as part of a suite of approaches to mitigating climate change that includes renewable energy and energy efficiency technologies, as well as policy measures. While some CCUS technologies are being deployed at the scale of a million tons of CO2 per year, there is substantial need to improve the cost and performance of those technologies and to advance more nascent ones. MITEI’s CCUS center is working to meet these challenges with a cohort of industry members that are supporting promising MIT research, such as these newly funded projects.
    A new process for producing hydrogen without CO2 emissions
    Henry and Barton’s project, “Lower cost, CO2-free, H2 production from CH4 using liquid tin,” investigates the use of methane pyrolysis instead of steam methane reforming (SMR) for hydrogen production.
    Currently, hydrogen production accounts for approximately 1 percent of global CO2 emissions, and the predominant production method is SMR. The SMR process relies on the formation of CO2, so replacing it with another economically competitive approach to making hydrogen would avoid emissions. 
    “Hydrogen is essential to modern life, as it is primarily used to make ammonia for fertilizer, which plays an indispensable role in feeding the world’s 7.5 billion people,” says Henry. “But we need to be able to feed a growing population and take advantage of hydrogen’s potential as a carbon-free fuel source by eliminating CO2 emissions from hydrogen production. Our process results in a solid carbon byproduct, rather than CO2 gas. The sale of the solid carbon lowers the minimum price at which hydrogen can be sold to break even with the current, CO2 emissions-intensive process.”
    Henry and Barton’s work is a new take on an existing process, pyrolysis of methane. Like SMR, methane pyrolysis uses methane as the source of hydrogen, but follows a different pathway. SMR uses the oxygen in water to liberate the hydrogen by preferentially bonding oxygen to the carbon in methane, producing CO2 gas in the process. In methane pyrolysis, the methane is heated to such a high temperature that the molecule itself becomes unstable and decomposes into hydrogen gas and solid carbon — a much more valuable byproduct than CO2 gas. Although the idea of methane pyrolysis has existed for many years, it has been difficult to commercialize because the solid byproduct can deposit on the walls of the reactor, eventually plugging it up and rendering the process impractical. Henry and Barton’s project takes a new approach in which the reaction is facilitated with inert molten tin, which prevents the plugging from occurring. The approach is made possible by recent advances in Henry’s lab that enable the flow and containment of liquid metal at extreme temperatures without leakage or material degradation.
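    For a rough sense of the economics, the stoichiometry of CH4 → C + 2H2 fixes how much solid carbon comes with each kilogram of hydrogen, and a back-of-the-envelope sketch shows how carbon sales lower the break-even hydrogen price. The dollar figures below are illustrative assumptions, not numbers from the project:

    ```python
    # Back-of-the-envelope stoichiometry for methane pyrolysis: CH4 -> C(s) + 2 H2.
    # The prices are illustrative assumptions, not figures from the project.

    M_CH4, M_C, M_H2 = 16.04, 12.01, 2.016   # molar masses, g/mol

    kg_ch4 = 1.0                              # basis: 1 kg of methane feedstock
    kg_h2 = kg_ch4 * (2 * M_H2) / M_CH4       # ~0.25 kg hydrogen
    kg_c = kg_ch4 * M_C / M_CH4               # ~0.75 kg solid carbon

    price_c = 0.50    # assumed sale price of solid carbon, $/kg (hypothetical)
    cost_h2 = 3.00    # assumed all-in production cost, $/kg H2 (hypothetical)

    # Selling the carbon byproduct lowers the hydrogen price needed to break even.
    breakeven_h2 = cost_h2 - (kg_c / kg_h2) * price_c
    print(f"{kg_h2:.2f} kg H2 and {kg_c:.2f} kg C per kg CH4")
    print(f"break-even H2 price: ${breakeven_h2:.2f}/kg, vs ${cost_h2:.2f} without carbon sales")
    ```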
    Studying CO2 storage in basaltic reservoirs
    With his project, “High-fidelity monitoring for carbon sequestration: integrated geophysical and geochemical investigation of field and laboratory data,” Peč plans to conduct a comprehensive study to gain a holistic understanding of the coupled chemo-mechanical processes that accompany CO2 storage in basaltic reservoirs, with hopes of increasing adoption of this technology.
    The Intergovernmental Panel on Climate Change estimates that 100 to 1,000 gigatonnes of CO2 must be removed from the atmosphere by the end of the century. Such large volumes can only be stored below the Earth’s surface, and that storage must be accomplished safely and securely, without allowing any leakage back into the atmosphere.
    One promising storage strategy is CO2 mineralization — specifically, dissolving gaseous CO2 in water, which then reacts with reservoir rocks to form carbonate minerals. Of the technologies proposed for carbon sequestration, this approach is unique in that the sequestration is permanent: the CO2 becomes part of an inert solid, so it cannot escape back into the environment. Basalt, the most common volcanic rock on Earth, presents good sites for CO2 injection due to its widespread occurrence and high concentrations of divalent cations such as calcium and magnesium that can form carbonate minerals. In one study, more than 95 percent of the CO2 injected into a pilot site in Iceland was precipitated as carbonate minerals in less than two years.
    However, ensuring the subsurface integrity of geological formations during fluid injection and accurately evaluating the reaction rates in such reservoirs require targeted studies such as Peč’s.
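    For a rough sense of scale for the mineralization chemistry, each mole of divalent cation can lock away one mole of CO2 as carbonate. The sketch below estimates how much basalt that implies per ton of CO2; the CaO and MgO fractions are typical-looking assumptions, not data from any site:

    ```python
    # Rough stoichiometry of CO2 mineralization: each mole of divalent cation
    # (Ca2+ or Mg2+) can bind one mole of CO2 as carbonate (e.g., CaCO3, MgCO3).
    # The basalt composition below is an illustrative assumption, not site data.

    M_CO2, M_CaO, M_MgO = 44.01, 56.08, 40.30   # molar masses, g/mol

    wt_cao, wt_mgo = 0.10, 0.07   # assumed mass fractions of CaO and MgO in basalt

    # Moles of divalent cations per kg of basalt, if every cation reacts.
    mol_cation_per_kg = 1000 * (wt_cao / M_CaO + wt_mgo / M_MgO)
    kg_co2_per_kg_basalt = mol_cation_per_kg * M_CO2 / 1000

    print(f"~{kg_co2_per_kg_basalt:.2f} kg CO2 mineralized per kg basalt (upper bound)")
    print(f"~{1 / kg_co2_per_kg_basalt:.1f} t basalt needed per t CO2 at full conversion")
    ```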
    “The funding by MITEI’s Low-Carbon Energy Center for Carbon Capture, Utilization, and Storage allows me to start a new research direction, bringing together a group of experts from a range of disciplines to tackle climate change, perhaps the greatest scientific challenge our generation is facing,” says Peč.
    The two projects were selected from a call for proposals that resulted in 15 entries by MIT researchers. “The application process revealed a great deal of interest from MIT researchers in advancing carbon capture, utilization, and storage processes and technologies,” says Bradford Hager, the Cecil and Ida Green Professor of Earth Sciences, who co-directs the CCUS center with T. Alan Hatton, the Ralph Landau Professor of Chemical Engineering. “The two projects funded through the center will result in fundamental, higher-risk research exploring novel approaches that have the potential to have high impact in the longer term. Given the short-term focus of the industry, projects like this might not have otherwise been funded, so having support for this kind of early-stage fundamental research is crucial.”

    Topics: MIT Energy Initiative, Mechanical engineering, Chemical engineering, EAPS, School of Engineering, Carbon dioxide, Carbon Emissions, Carbon sequestration, Funding, Climate change, School of Science

  • Shrinking deep learning’s carbon footprint

    In June, OpenAI unveiled the largest language model in the world, a text-generating tool called GPT-3 that can write creative fiction, translate legalese into plain English, and answer obscure trivia questions. It’s the latest feat of intelligence achieved by deep learning, a machine learning method patterned after the way neurons in the brain process and store information.
    But it came at a hefty price: at least $4.6 million and 355 years of computing time, assuming the model was trained on a standard neural network chip, or GPU. The model’s colossal size — 1,000 times larger than a typical language model — is the main factor in its high cost.
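    The arithmetic behind that estimate is easy to reconstruct. The sketch below assumes an on-demand cloud GPU priced at about $1.50 per GPU-hour, which is an outside analyst’s assumption rather than an OpenAI figure:

    ```python
    # Rough reconstruction of the training-cost estimate, assuming a single
    # cloud V100 GPU at about $1.50 per GPU-hour (an outside assumption,
    # not an OpenAI figure).

    gpu_years = 355
    hourly_rate = 1.50                 # assumed $/GPU-hour
    hours = gpu_years * 365 * 24

    print(f"{hours:,.0f} GPU-hours -> ${hours * hourly_rate / 1e6:.1f} million")
    # ~3.1 million GPU-hours -> ~$4.7 million, in line with the figure above
    ```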
    “You have to throw a lot more computation at something to get a little improvement in performance,” says Neil Thompson, an MIT researcher who has tracked deep learning’s unquenchable thirst for computing. “It’s unsustainable. We have to find more efficient ways to scale deep learning or develop other technologies.”
    Some of the excitement over AI’s recent progress has shifted to alarm. In a study last year, researchers at the University of Massachusetts at Amherst estimated that training a large deep-learning model produces 626,000 pounds of planet-warming carbon dioxide, equal to the lifetime emissions of five cars. As models grow bigger, their demand for computing is outpacing improvements in hardware efficiency. Chips specialized for neural-network processing, like GPUs (graphics processing units) and TPUs (tensor processing units), have offset the demand for more computing, but not by enough. 
    “We need to rethink the entire stack — from software to hardware,” says Aude Oliva, MIT director of the MIT-IBM Watson AI Lab and co-director of the MIT Quest for Intelligence. “Deep learning has made the recent AI revolution possible, but its growing cost in energy and carbon emissions is untenable.”
    Computational limits have dogged neural networks from their earliest incarnation — the perceptron — in the 1950s. As computing power exploded, and the internet unleashed a tsunami of data, they evolved into powerful engines for pattern recognition and prediction. But each new milestone brought an explosion in cost, as data-hungry models demanded increased computation. GPT-3, for example, trained on half a trillion words and ballooned to 175 billion parameters — the numerical values, or weights, that tie the model together — making it 100 times bigger than its predecessor, itself just a year old.
    In work posted on the pre-print server arXiv, Thompson and his colleagues show that the ability of deep learning models to surpass key benchmarks tracks their nearly exponential rise in computing power use. (Like others seeking to track AI’s carbon footprint, the team had to guess at many models’ energy consumption due to a lack of reporting requirements.) At this rate, the researchers argue, deep nets will survive only if they, and the hardware they run on, become radically more efficient.
    Toward leaner, greener algorithms
    The human perceptual system is extremely efficient at using data. Researchers have borrowed this idea to make models for recognizing actions in video more compact. In a paper at the European Conference on Computer Vision (ECCV) in August, researchers at the MIT-IBM Watson AI Lab describe a method for unpacking a scene from a few glances, as humans do, by cherry-picking the most relevant data.
    Take a video clip of someone making a sandwich. Under the method outlined in the paper, a policy network strategically picks frames of the knife slicing through roast beef, and meat being stacked on a slice of bread, to represent at high resolution. Less-relevant frames are skipped over or represented at lower resolution. A second model then uses the abbreviated CliffsNotes version of the movie to label it “making a sandwich.” The approach leads to faster video classification at half the computational cost of the next-best model, the researchers say.
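    The select-then-classify idea can be sketched schematically; in the sketch below, the policy and classifier are simple stand-ins for the learned networks described in the paper:

    ```python
    import numpy as np

    # Schematic sketch of select-then-classify: a "policy" scores frames, the
    # most relevant ones are kept at high resolution, the rest are downsampled,
    # and a second model would label the abbreviated clip. In the actual paper
    # both models are learned neural networks; here they are stand-ins.

    rng = np.random.default_rng(0)
    video = rng.random((32, 224, 224, 3))       # 32 frames of a toy "video"

    def policy_score(frames):
        # Stand-in for the learned policy network: one relevance score per frame.
        return frames.mean(axis=(1, 2, 3))

    def downsample(frame, factor=4):
        return frame[::factor, ::factor]         # cheap low-resolution version

    scores = policy_score(video)
    keep = set(np.argsort(scores)[-8:])          # top 8 frames at full resolution

    summary = [video[i] if i in keep else downsample(video[i])
               for i in range(len(video))]

    # A classifier that sees mostly low-res frames does far less arithmetic
    # than one that processes every frame in full.
    full_cost = video.size
    reduced_cost = sum(f.size for f in summary)
    print(f"compute reduced to {reduced_cost / full_cost:.0%} of processing all frames")
    ```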
    “Humans don’t pay attention to every last detail — why should our models?” says the study’s senior author, Rogerio Feris, research manager at the MIT-IBM Watson AI Lab. “We can use machine learning to adaptively select the right data, at the right level of detail, to make deep learning models more efficient.”
    In a complementary approach, researchers are using deep learning itself to design more economical models through an automated process known as neural architecture search. Song Han, an assistant professor at MIT, has used automated search to design models with fewer weights for language understanding and for scene recognition, where quickly picking out looming obstacles is acutely important in driving applications.
    In a paper at ECCV, Han and his colleagues propose a model architecture for three-dimensional scene recognition that can spot safety-critical details like road signs, pedestrians, and cyclists with relatively less computation. They used an evolutionary-search algorithm to evaluate 1,000 architectures before settling on a model they say is three times faster and uses eight times less computation than the next-best method. 
    In another recent paper, they use evolutionary search within an augmented designed space to find the most efficient architectures for machine translation on a specific device, be it a GPU, smartphone, or tiny Raspberry Pi. Separating the search and training process leads to huge reductions in computation, they say.
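    The overall shape of such an evolutionary search can be sketched in a few lines; the fitness function below is a made-up stand-in for the trained-and-measured accuracy and compute of real candidate architectures:

    ```python
    import random

    # Minimal sketch of evolutionary architecture search: architectures are
    # encoded as lists of layer widths, mutated over generations, and scored
    # by a proxy fitness that trades accuracy off against compute. The fitness
    # here is a made-up stand-in; real searches train or estimate each candidate.

    def fitness(arch):
        accuracy_proxy = sum(min(w, 64) for w in arch)   # diminishing returns
        compute = sum(w * w for w in arch)               # rough FLOPs proxy
        return accuracy_proxy - 1e-4 * compute

    def mutate(arch):
        child = arch[:]
        i = random.randrange(len(child))
        child[i] = max(8, child[i] + random.choice([-16, 16]))
        return child

    random.seed(1)
    population = [[random.choice([32, 64, 128]) for _ in range(4)] for _ in range(20)]

    for generation in range(50):
        population.sort(key=fitness, reverse=True)
        parents = population[:5]                         # keep the fittest
        population = parents + [mutate(random.choice(parents)) for _ in range(15)]

    print("best architecture (layer widths):", max(population, key=fitness))
    ```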
    In a third approach, researchers are probing the essence of deep nets to see if it might be possible to train just a small part of even hyper-efficient networks like those above. With their lottery ticket hypothesis, PhD student Jonathan Frankle and MIT Professor Michael Carbin proposed that within each model lies a tiny subnetwork that could have been trained in isolation with as few as one-tenth as many weights — what they call a “winning ticket.”
    They showed that an algorithm could retroactively find these winning subnetworks in small image-classification models. Now, in a paper at the International Conference on Machine Learning (ICML), they show that the algorithm finds winning tickets in large models, too; the models just need to be rewound to an early, critical point in training when the order of the training data no longer influences the training outcome. 
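    In outline, the procedure resembles the toy sketch below, where random perturbations stand in for real gradient-based training and the bookkeeping (prune by magnitude, rewind the survivors, retrain) is the point:

    ```python
    import numpy as np

    # Toy sketch of the lottery-ticket procedure: train, prune the smallest-
    # magnitude weights, then "rewind" the survivors to their values from an
    # early point in training and retrain only that subnetwork. The "training"
    # here is random perturbation, purely to illustrate the bookkeeping.

    rng = np.random.default_rng(0)
    w_init = rng.normal(size=1000)            # weights at initialization

    def fake_train(w, steps):
        # Stand-in for SGD: nudge the weights; a real run would follow gradients.
        return w + 0.1 * steps * rng.normal(size=w.shape)

    w_early = fake_train(w_init, steps=1)     # early "rewind point"
    w_final = fake_train(w_early, steps=9)    # end of training

    # Keep only the top 10% of weights by final magnitude: the "winning ticket".
    threshold = np.quantile(np.abs(w_final), 0.90)
    mask = np.abs(w_final) >= threshold

    # Rewind the surviving weights to their early values, retrain in isolation.
    ticket = np.where(mask, w_early, 0.0)
    retrained = np.where(mask, fake_train(ticket, steps=9), 0.0)

    print(f"ticket keeps {mask.mean():.0%} of weights")
    ```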
    In less than two years, the lottery ticket idea has been cited more than 400 times, including by Facebook researcher Ari Morcos, who has shown that winning tickets can be transferred from one vision task to another, and that winning tickets exist in language and reinforcement learning models, too.
    “The standard explanation for why we need such large networks is that overparameterization aids the learning process,” says Morcos. “The lottery ticket hypothesis disproves that — it’s all about finding an appropriate starting point. The big downside, of course, is that, currently, finding these ‘winning’ starting points requires training the full overparameterized network anyway.”
    Frankle says he’s hopeful that an efficient way to find winning tickets will be found. In the meantime, recycling those winning tickets, as Morcos suggests, could lead to big savings.
    Hardware designed for efficient deep net algorithms
    As deep nets push classical computers to the limit, researchers are pursuing alternatives, from optical computers that transmit and store data with photons instead of electrons, to quantum computers, which have the potential to increase computing power exponentially by representing data in multiple states at once.
    Until a new paradigm emerges, researchers have focused on adapting the modern chip to the demands of deep learning. The trend began with the discovery that video-game graphics chips, or GPUs, could turbocharge deep-net training with their ability to perform massively parallelized matrix computations. GPUs are now one of the workhorses of modern AI, and have spawned new ideas for boosting deep net efficiency through specialized hardware.
    Much of this work hinges on finding ways to store and reuse data locally, across the chip’s processing cores, rather than waste time and energy shuttling data to and from a designated memory site. Processing data locally not only speeds up model training but improves inference, allowing deep learning applications to run more smoothly on smartphones and other mobile devices.
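    Ballpark numbers make the point: a single off-chip DRAM access is commonly cited as costing one to two orders of magnitude more energy than an access to a small on-chip buffer, so even modest reuse pays off. The figures below are illustrative, not measurements of any particular chip:

    ```python
    # Rough illustration of why local reuse matters. The per-access energies
    # are ballpark assumptions for illustration, not measured values.

    ENERGY_DRAM = 200.0    # assumed picojoules per off-chip access
    ENERGY_LOCAL = 2.0     # assumed picojoules per on-chip buffer access

    accesses = 1_000_000   # times a weight/activation value is needed
    reuse = 100            # each DRAM fetch is reused 100 times on-chip

    naive = accesses * ENERGY_DRAM
    with_reuse = (accesses / reuse) * ENERGY_DRAM + accesses * ENERGY_LOCAL

    print(f"energy saved by local reuse: {1 - with_reuse / naive:.0%}")
    ```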
    Vivienne Sze, a professor at MIT, has literally written the book on efficient deep nets. In collaboration with book co-author Joel Emer, an MIT professor and researcher at NVIDIA, Sze has designed a chip that’s flexible enough to process the widely varying shapes of both large and small deep learning models. Called Eyeriss 2, the chip uses 10 times less energy than a mobile GPU.
    Its versatility lies in its on-chip network, called a hierarchical mesh, that adaptively reuses data and adjusts to the bandwidth requirements of different deep learning models. After reading from memory, it reuses the data across as many processing elements as possible to minimize data transportation costs and maintain high throughput. 
    “The goal is to translate small and sparse networks into energy savings and fast inference,” says Sze. “But the hardware should be flexible enough to also efficiently support large and dense deep neural networks.”
    Other hardware innovators are focused on reproducing the brain’s energy efficiency. Former Go world champion Lee Sedol may have lost a match to a computer, but his performance was fueled by a mere 20 watts of power. AlphaGo, by contrast, burned an estimated megawatt of power, or 50,000 times more.
    Inspired by the brain’s frugality, researchers are experimenting with replacing the binary, on-off switch of classical transistors with analog devices that mimic the way that synapses in the brain grow stronger and weaker during learning and forgetting.
    An electrochemical device, developed at MIT and recently described in Nature Communications, is modeled after the way resistance between two neurons grows or subsides as calcium, magnesium, or potassium ions flow across the synaptic membrane dividing them. The device uses the flow of protons — the smallest and fastest ion in the solid state — into and out of a crystalline lattice of tungsten trioxide to tune its resistance along a continuum, in an analog fashion.
    “Even though it is not yet optimized, it gets to the order of energy consumption per unit area per unit change in conductance that’s close to that in the brain,” says the study’s senior author, Bilge Yildiz, a professor at MIT.
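    The appeal of such analog devices is that an array of them can compute a neural network’s weighted sums directly in physics: with weights stored as conductances and inputs applied as voltages, Ohm’s law and Kirchhoff’s current law perform the multiply-accumulate as summed current. A minimal sketch of the principle, with arbitrary values:

    ```python
    import numpy as np

    # Analog in-memory multiply-accumulate: store each weight as a conductance
    # G, apply inputs as voltages V, and the column current I_j = sum_i V_i * G_ij
    # is the weighted sum. Values below are arbitrary illustrations.

    rng = np.random.default_rng(0)
    G = rng.uniform(0.0, 1.0, size=(4, 3))   # conductances: one per synapse
    V = rng.uniform(0.0, 1.0, size=4)        # input voltages: one per input line

    I = V @ G                                # column currents = weighted sums
    print("output currents (weighted sums):", np.round(I, 3))

    # "Learning" nudges each conductance along a continuum, the way the protonic
    # device tunes its resistance, instead of flipping a binary switch.
    G += 0.01 * rng.normal(size=G.shape)
    ```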
    Energy-efficient algorithms and hardware can shrink AI’s environmental impact. But there are other reasons to innovate, says Sze, listing them off: Efficiency will allow computing to move from data centers to edge devices like smartphones, making AI accessible to more people around the world; shifting computation from the cloud to personal devices reduces the flow, and potential leakage, of sensitive data; and processing data on the edge eliminates transmission costs, leading to faster inference with a shorter reaction time, which is key for interactive driving and augmented/virtual reality applications.
    “For all of these reasons, we need to embrace efficient AI,” she says.

    Topics: Quest for Intelligence, Machine learning, MIT-IBM Watson AI Lab, Electrical engineering and computer science (EECS), Computer Science and Artificial Intelligence Laboratory (CSAIL), School of Engineering, Algorithms, Artificial intelligence, Computer science and technology, Software, Computer vision, Efficiency, MIT Schwarzman College of Computing, Sustainability, Environment, Climate change

  • Study: A plunge in incoming sunlight may have triggered “Snowball Earths”

    At least twice in Earth’s history, nearly the entire planet was encased in a sheet of snow and ice. These dramatic “Snowball Earth” events occurred in quick succession, somewhere around 700 million years ago, and evidence suggests that the consecutive global ice ages set the stage for the subsequent explosion of complex, multicellular life on Earth.
    Scientists have considered multiple scenarios for what may have tipped the planet into each ice age. While no single driving process has been identified, it’s assumed that whatever triggered the temporary freeze-overs must have done so in a way that pushed the planet past a critical threshold, such as reducing incoming sunlight or atmospheric carbon dioxide to levels low enough to set off a global expansion of ice.
    But MIT scientists now say that Snowball Earths were likely the product of “rate-induced glaciations.” That is, they found the Earth can be tipped into a global ice age when the level of solar radiation it receives changes quickly over a geologically short period of time. The amount of solar radiation doesn’t have to drop to a particular threshold point; as long as the decrease in incoming sunlight occurs faster than a critical rate, a temporary glaciation, or Snowball Earth, will follow.
    These findings, published today in the Proceedings of the Royal Society A, suggest that whatever triggered the Earth’s ice ages most likely involved processes that quickly reduced the amount of solar radiation coming to the surface, such as widespread volcanic eruptions or biologically induced cloud formation that could have significantly blocked out the sun’s rays. 
    The findings may also apply to the search for life on other planets. Researchers have been keen on finding exoplanets within the habitable zone — a distance from their star that would be within a temperature range that could support life. The new study suggests that these planets, like Earth, could also ice over temporarily if their climate changes abruptly. Even if they lie within a habitable zone, Earth-like planets may be more susceptible to global ice ages than previously thought.
    “You could have a planet that stays well within the classical habitable zone, but if incoming sunlight changes too fast, you could get a Snowball Earth,” says lead author Constantin Arnscheidt, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “What this highlights is the notion that there’s so much more nuance in the concept of habitability.”
    Arnscheidt co-authored the paper with Daniel Rothman, EAPS professor of geophysics and co-founder and co-director of the Lorenz Center.
    A runaway snowball
    Regardless of the particular processes that triggered past glaciations, scientists generally agree that Snowball Earths arose from a “runaway” effect involving an ice-albedo feedback: As incoming sunlight is reduced, ice expands from the poles to the equator. As more ice covers the globe, the planet becomes more reflective, or higher in albedo, which further cools the surface for more ice to expand. Eventually, if the ice reaches a certain extent, this becomes a runaway process, resulting in a global glaciation.

    Global ice ages on Earth are temporary in nature, due to the planet’s carbon cycle. When the planet is not covered in ice, levels of carbon dioxide in the atmosphere are somewhat controlled by the weathering of rocks and minerals. When the planet is covered in ice, weathering is vastly reduced, so that carbon dioxide builds up in the atmosphere, creating a greenhouse effect that eventually thaws the planet out of its ice age.
    Scientists generally agree that the formation of Snowball Earths has something to do with the balance between incoming sunlight, the ice-albedo feedback, and the global carbon cycle.
    “There are lots of ideas for what caused these global glaciations, but they all really boil down to some implicit modification of solar radiation coming in,” Arnscheidt says. “But generally it’s been studied in the context of crossing a threshold.”
    He and Rothman had previously studied other periods in Earth’s history when the speed, or rate, at which certain changes in climate occurred played a role in triggering events, such as past mass extinctions.
    “In the course of this exercise, we realized there was an immediate way to make a serious point by applying such ideas of rate-induced tipping to Snowball Earth and habitability,” Rothman says.
    “Be wary of speed”
    The researchers developed a simple mathematical model of the Earth’s climate system that includes equations to represent relations among incoming solar radiation, outgoing radiation, the surface temperature of the Earth, the concentration of carbon dioxide in the atmosphere, and the effects of weathering in taking up and storing atmospheric carbon dioxide. The researchers were able to tune each of these parameters to observe which conditions generated a Snowball Earth.
    Ultimately, they found that a planet was more likely to freeze over if incoming solar radiation decreased quickly, at a rate that was faster than a critical rate, rather than to a critical threshold, or particular level of sunlight. There is some uncertainty in exactly what that critical rate would be, as the model is a simplified representation of the Earth’s climate. Nevertheless, Arnscheidt estimates that the Earth would have to experience about a 2 percent drop in incoming sunlight over a period of about 10,000 years to tip into a global ice age.
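    The qualitative mechanism can be illustrated with the canonical minimal example of rate-induced tipping, a system whose stable state drifts as a parameter ramps. The sketch below is an illustration of the concept, not the authors’ climate model; it shows the same total change tipping the system when applied quickly but not when applied slowly:

    ```python
    # Canonical toy example of rate-induced tipping: the system
    # dx/dt = (x + lam)^2 - 1 tracks its stable state only if lam changes
    # slower than a critical rate. The same total change in lam, applied
    # faster, pushes the state out of its basin and it runs away. This is
    # an illustration of the concept, not the authors' climate model.

    def ramp_experiment(total_change=3.0, duration=1.0, dt=1e-3, t_end=50.0):
        x, t = -1.0, 0.0                                 # start on the stable branch
        while t < t_end:
            lam = total_change * min(t / duration, 1.0)  # linear ramp, then hold
            x += dt * ((x + lam) ** 2 - 1.0)
            t += dt
            if x + lam > 10.0:                           # left the basin: runaway
                return "tipped"
        return "tracked"

    print("fast ramp (same change over t=1): ", ramp_experiment(duration=1.0))
    print("slow ramp (same change over t=30):", ramp_experiment(duration=30.0))
    ```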
    “It’s reasonable to assume past glaciations were induced by geologically quick changes to solar radiation,” Arnscheidt says.
    The particular mechanisms that may have quickly darkened the skies over tens of thousands of years are still up for debate. One possibility is that widespread volcanoes may have spewed aerosols into the atmosphere, blocking incoming sunlight around the world. Another is that primitive algae may have evolved mechanisms that facilitated the formation of light-reflecting clouds. The results from this new study suggest scientists should consider processes such as these, which quickly reduce incoming solar radiation, as more likely triggers for Earth’s ice ages.
    “Even though humanity will not trigger a snowball glaciation on our current climate trajectory, the existence of such a ‘rate-induced tipping point’ at the global scale may still remain a cause for concern,” Arnscheidt points out. “For example, it teaches us that we should be wary of the speed at which we are modifying Earth’s climate, not just the magnitude of the change. There could be other such rate-induced tipping points that might be triggered by anthropogenic warming. Identifying these and constraining their critical rates is a worthwhile goal for further research.”
    This research was funded, in part, by the MIT Lorenz Center.

    Topics: Climate, Geology, Climate change, Exoplanets, EAPS, Earth and atmospheric sciences, Environment, Mathematics, Planetary science, Research, School of Science

  • $25 million gift launches ambitious new effort tackling poverty and climate change

    With a founding $25 million gift from King Philanthropies, MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL) is launching a new initiative to solve problems at the nexus of climate change and global poverty.
    The new program, the King Climate Action Initiative (K-CAI), was announced today by King Philanthropies and J-PAL, and will start immediately. K-CAI plans to rigorously study programs reducing the effects of climate change on vulnerable populations, and then work with policymakers to scale up the most successful interventions.
    “To protect our well-being and improve the lives of people living in poverty, we must be better stewards of our climate and our planet,” says Esther Duflo, director of J-PAL and the Abdul Latif Jameel Professor of Poverty Alleviation and Development Economics at MIT. “Through K-CAI, we will work to build a movement for evidence-informed policy at the nexus of climate change and poverty alleviation similar to the movement J-PAL helped build in global development. The moment is perhaps unique: The only silver lining of this global pandemic is that it reminds us that nature is sometimes stronger than us. It is a moment to act decisively to change behavior to stave off a much larger catastrophe in the future.”
    K-CAI constitutes an ambitious effort: The initiative intends to help improve the lives of at least 25 million people over the next decade. K-CAI will announce a call for proposals this summer and select its first funded projects by the end of 2020.
    “We are short on time to take action on climate change,” says Robert King, co-founder of King Philanthropies. “K-CAI reflects our commitment to confront this global crisis by focusing on solutions that benefit people in extreme poverty. They are already the hardest hit by climate change, and if we fail to act, their circumstances will become even more dire.”
    There are currently an estimated 736 million people globally living in extreme poverty, on $1.90 per day or less. The World Bank estimates that climate change could push roughly another 100 million people into extreme poverty by 2030.
    As vast as its effects may be, climate change also presents a diverse set of problems to tackle. Among other things, climate change, as well as fossil-fuel pollution, is expected to reduce crop yields, raise food prices, and generate more malnutrition; increase the prevalence of respiratory illness, heat stress, and numerous other diseases; and increase extreme weather events, wiping out homes, livelihoods, and communities.
    With this in mind, the initiative will focus on specific projects within four areas: climate change mitigation, to reduce carbon emissions; pollution reduction; adaptation to ongoing climate change; and shifting toward cleaner, reliable, and more affordable sources of energy. In each area, K-CAI will study smaller-scale programs, evaluate their impact, and work with partners to scale up the projects with the most effective solutions.
    Projects backed by J-PAL have already had an impact in these areas. In one recent study, J-PAL-affiliated researchers found that changing the emissions audit system in Gujarat, India, reduced industrial-plant pollution by 28 percent; the state then implemented the reforms. In another study in India, J-PAL affiliated researchers found that farmers using a flood-resistant rice variety called Swarna-Sub1 increased their crop yields by 41 percent.
    In Zambia, a study by researchers in the J-PAL network showed that lean-season loans for farmers increased agricultural output by 8 percent; in Uganda, J-PAL affiliated researchers found that a payment system to landowners reduced deforestation by 5 percent and is a cost-effective way to lower carbon emissions.
    Other J-PAL field experiments in progress include one providing cash payments that stop farmers in Punjab, India, from burning crop residue, a practice that generates half the air pollution in Delhi; another implementing an emissions-trading plan in India; and a new program to harvest rainwater more effectively in Niger. All told, J-PAL researchers have evaluated over 40 programs focused on climate, energy, and the environment.
    By conducting these kinds of field experiments, and implementing some widely, K-CAI aims to apply the same approach J-PAL has directed toward multiple aspects of poverty alleviation, including food production, health care, education, and transparent governance.
    A unique academic enterprise, J-PAL emphasizes randomized controlled trials to identify useful poverty-reduction programs, then works with governments and nongovernmental organizations to implement them. All told, programs evaluated by J-PAL affiliated researchers and found to be effective have been scaled up to reach 400 million people worldwide since the lab’s founding in 2003.
    “J-PAL has distinctive core competencies that equip it to achieve outsized impact over the long run,” says Kim Starkey, president and CEO of King Philanthropies. “Its researchers excel at conducting randomized evaluations to figure out what works, its leadership is tremendous, and J-PAL as an organization has a rare, demonstrated ability to partner with governments and other organizations to scale up proven interventions and programs.”
    K-CAI aims to conduct an increasing number of field experiments over the initial five-year period and focus on implementing the highest-quality programs at scale over the subsequent five years. As Starkey observes, this approach may generate increasing interest from additional partners.
    “There is an immense need for a larger body of evidence about what interventions work at this nexus of climate change and extreme poverty,” Starkey says. “The findings of the King Climate Action Initiative will inform policymakers and funders as they seek to prioritize opportunities with the highest impact.”
    King Philanthropies was founded by Robert E. (Bob) King and Dorothy J. (Dottie) King in 2016. The organization has a goal of making “a meaningful difference in the lives of the world’s poorest people” by developing and supporting a variety of antipoverty initiatives.
    J-PAL was co-founded by Duflo; Abhijit Banerjee, the Ford International Professor of Economics at MIT; and Sendhil Mullainathan, now a professor at the University of Chicago’s Booth School of Business. It has over 200 affiliated researchers at more than 60 universities across the globe. J-PAL is housed in the Department of Economics in MIT’s School of Humanities, Arts, and Social Sciences.
    Last fall, Duflo and Banerjee, along with long-time collaborator Michael Kremer of Harvard University, were awarded the Nobel Prize in economic sciences. The Nobel citation observed that their work has “dramatically improved our ability to fight poverty in practice” and provided a “new approach to obtaining reliable answers about the best ways to fight global poverty.”
    K-CAI will be co-chaired by two professors, Michael Greenstone and Kelsey Jack, who have extensive research experience in environmental economics. Both are already affiliated researchers with J-PAL.
    Greenstone is the Milton Friedman Distinguished Service Professor in Economics at the University of Chicago. He is also director of the Energy Policy Institute at the University of Chicago. Greenstone, who was a tenured faculty member in MIT’s Department of Economics from 2003 to 2014, has published high-profile work on energy access, the consequences of air pollution, and the effectiveness of policy measures, among other topics.
    Jack is an associate professor in the Bren School of Environmental Science and Management at the University of California at Santa Barbara. She is an expert on environment-related programs in developing countries, with a focus on incentives that encourage the private-sector development of environmental goods. Jack was previously a faculty member at Tufts University, and a postdoc at MIT in 2010-11, working on J-PAL’s Agricultural Technology Adoption Initiative.

  • Engineering superpowered organisms for a more sustainable world

    Making corn salt-tolerant by engineering its microbiome. Increasing nut productivity with fungal symbiosis. Cleaning up toxic metals in the water supply with algae. Capturing soil nutrient runoff with bacterial biofilms. These were the bio-sustainability innovations designed and presented by students in the Department of Biological Engineering (BE) last May. With the sun shining brightly on an empty Killian Court, the students gathered for the final class presentations over Zoom, physically distanced due to the Covid-19-related closing of MIT’s campus this spring.
    For decades, the sustainable technologies dominating public discourse have tended toward the mechanical: wind power, solar power, saltwater distillation, etc. But in recent years, biological solutions have increasingly taken the forefront. For recent BE graduate Adrianna Amaro ’20, being able to make use of “existing organisms in the natural world and improve their capabilities, instead of building whole new machines, is the most exciting aspect of biological engineering approaches to sustainability problems.”
    Each semester, the BE capstone class (20.380: Biological Engineering Design) challenges students to design, in teams, biological engineering solutions to problems focused on a theme selected by the instructors. Teams are tasked with presenting their solutions in two distinct ways: as a written academic grant proposal and as a startup pitch. For Professor Christopher Voigt, one of the lead instructors, the goal of the class is to “create the climate where a half-baked concept emerges and gets transformed into a project that is both achievable and could have a real-world impact.”
    A glance at the research portfolio on the MIT biological engineering homepage reveals a particular focus on human biology. But over the years, students and faculty alike have started pushing to apply the cutting-edge technology they were developing to a greater diversity of challenges. Indeed, “sustainability has been one of the top areas that students raise when asked what they want to address with biological engineering,” says Sean Clarke PhD ’13, another instructor for the class.
    In response to student input, the instructors chose food and water security as the theme for the spring 2020 semester. (Sustainability, broadly, was the theme the previous semester.) The topic was well-received by the 20.380 students. Recent BE graduate Cecilia Padilla ’20 appreciated how wide-reaching and impactful the issues were, while teammate Abby McGee ’20 was thrilled because she had always been interested in environmental issues — and is “not into pharma.”
    Since this is the biological engineering capstone, students had to incorporate engineering principles in their biology-based solutions. This meant developing computational models of their proposed biological systems to predict the output of a system from a defined set of inputs. Team SuperSoil, for example, designed a genetic circuit that, when inserted into B. subtilis, a common soil bacterium, would allow it to change behavior based on water and nutrient levels. During heavy rain, the bacteria would respond by producing a phosphate-binding protein biofilm. This would theoretically reduce phosphate runoff, thus preserving soil nutrients and reducing the pollution of waterways. By modeling natural processes such as protein production, bacterial activation, and phosphate diffusion in the soil using differential equations, they were able to predict the degree of phosphate capture and show that significant impact could be achieved with a realistic amount of engineered bacterial input.
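    A stripped-down model in that spirit is sketched below; the rate constants are placeholders for illustration, not the team’s fitted values:

    ```python
    from scipy.integrate import solve_ivp

    # Toy version of the kind of model the teams built: coupled ODEs for a
    # rain-activated bacterial population producing a phosphate-binding biofilm.
    # All rate constants are made-up placeholders, not the team's fitted values.

    def soil(t, y, raining):
        bacteria, biofilm, phosphate = y
        activation = 0.5 if raining else 0.05   # rain switches the circuit on
        d_bacteria = activation * bacteria * (1 - bacteria)       # logistic growth
        d_biofilm = 0.8 * activation * bacteria - 0.1 * biofilm   # produced, decays
        d_phosphate = -0.6 * biofilm * phosphate                  # bound by biofilm
        return [d_bacteria, d_biofilm, d_phosphate]

    y0 = [0.1, 0.0, 1.0]   # initial bacteria, biofilm, free phosphate (normalized)
    storm = solve_ivp(soil, (0, 24), y0, args=(True,))

    captured = 1.0 - storm.y[2, -1]
    print(f"phosphate bound during a 24-hour storm: {captured:.0%}")
    ```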
    Biological engineering Professor Forest White co-leads the class every spring with Voigt. White also teaches the prerequisite, where students learn how to construct computational models of biological systems. He points out how the models helped students develop their capstone projects: “In a couple of cases the model revealed true design challenges, where the feasibility of the project requires optimal engineering of particular aspects of the design.”
    Models aside, simply thinking about the mathematical reality of proposed solutions helped teams early on in the idea selection process. Team Nutlettes initially considered using methane-consuming bacteria to capture methane gas from landfills, but back-of-the-envelope calculations revealed unfavorable kinetics. Additionally, further reading brought to light a possible toxic byproduct of bacterial methane metabolism: formaldehyde. Instead, they chose to develop an intervention for water-intensive nut producers: engineer the tree’s fungal symbionts to provide a boost of hormones that would promote flower production, which in turn increases nut yields.
    Team Halo saw water filtration as the starting point for ideation, deeming it the most impactful issue to tackle. For inspiration, they looked to mangrove trees, which naturally take up salt from the water that they grow in. They applied this concept to their design of corn-associated, salt-tolerant bacteria that could enhance their plant host’s ability to grow in high salinity conditions — an increasingly common consequence of drought and industrial agricultural irrigation. Additional inspiration came from research in the Department of Civil and Environmental Engineering: In their design, the team incorporated a silk-based seed coating developed by Professor Benedetto Marelli’s group.
    Many of the capstone students found themselves exploring unfamiliar fields of research. During their foray into plant-fungal symbiosis, Team Nutlettes was often frustrated by the prevalence of outdated and contradictory findings, and by the lack of quantitative results that they could use in their models. Still, Vaibhavi Shah, one of the few juniors in the class, says she found a lot of value in “diving into something you’ve no experience in.”
    In addition to biological design, teams were encouraged to think about the financial feasibility of their proposed solutions. This posed a challenge for Team H2Woah and their algal-based solution for sequestering heavy metals from wastewater. Unlike traditional remediation methods, which produce toxic sludge, their system allows for the recycling of metals from the wastewater for manufacturing, and the opportunity to harvest the algae for biofuels. However, as they developed their concept, they realized that breaking into the existing market would be difficult due to the cost of all the new infrastructure that would be required.
    Students read broadly over the course of the semester, which helped them enhance their understanding of food and water insecurity beyond their specific projects. Before the class, Kayla Vodehnal ’20 of Team Nutlettes had only been exposed to policy-driven solutions. Amaro, meanwhile, came to realize how close to home the issues they were researching are: all Americans may soon have to confront inadequate access to clean water due to, among other factors, pollution, climate change, and overuse.
    In any other semester, the capstone students would have done their final presentations in a seminar room before peers, instructors, a panel of judges, and the indispensable pastry-laden brunch table. This semester, however, the presentations took place, like everything else this spring, on Zoom. Instructors beamed in front of digital congratulatory messages, while some students coordinated background images to present as a single cohesive team. Despite the loss of in-person engagement, the Zoom presentations did come with benefits. This year’s class had a larger group of audience members compared to past years, including at least two dozen faculty, younger students, and alumni who joined virtually to show their support.
    Coordinating a group project remotely was challenging for all the teams, but Team Nutlettes found a silver lining: Because having spontaneous conversations over Zoom is harder than in person, they found that their meetings became a lot more productive.
    One attendee was Renee Robins ’83, executive director of the Abdul Latif Jameel Water and Food Systems Lab, who had previously interacted with the class as a guest speaker. “Many of the students’ innovative concepts for research and commercialization,” she says, “were of the caliber we see from MIT faculty submitting proposals to J-WAFS’ various grant programs.”
    Now that they have graduated, the seniors in the class are all going their separate ways, and some have sustainability careers in mind. Joseph S. Faraguna ’20 of Team Halo will be joining Ginkgo Bioworks in the fall, where he hopes to work on a bioremediation or agricultural project. His teammate, McGee, will be doing therapeutic CRISPR research at the Broad Institute of MIT and Harvard, but says that environment-focused research is definitely her end goal.
    Between Covid-19 and post-graduation plans, the capstone projects will likely end with the class. Still, this experience will continue to have an influence on the student participants. Team H2Woah is open to continuing their project in the future in some way, Amaro says, since it was their “first real bioengineering experience, and will always have a special place in our hearts.”
    Their instructors certainly hope that the class will prove a lasting inspiration. “Even in the face of the Covid-19 pandemic,” White says, “the problems with global warming and food and water security are still the most pressing problems we face as a species. These problems need lots of smart, motivated people thinking of different solutions. If our class ends up motivating even a couple of these students to engage on these problems in the future, then we will have been very successful.”

    Topics: Biological engineering, School of Engineering, Civil and environmental engineering, Broad Institute, J-WAFS, Classes and programs, Sustainability, Water, Undergraduates, Students, Alumni/ae

  • Building a more sustainable MIT — from home

    Like most offices across MIT, the Office of Sustainability (MITOS) has in recent months worked to pivot projects while seeking to understand and participate in the emergence of a new normal as the result of the Covid-19 pandemic. Despite now working off campus, the MITOS team has continued its methodology — one built on collective engagement, commitment to innovative problem solving, and robust data collection.
    An expanded look at resiliency
    When the MIT community transitioned off campus, many began to use the word “resilient” for good reason — it’s one way to describe a community of thousands that quickly learned how to study, research, work, and teach from afar in the face of a major disruption. In the field of sustainability, resiliency is frequently used when referring to how communities can not only continue to function, but thrive during and after flooding or extreme heat events as the result of climate change.
    In recent months, the term has taken on expanded meaning. “The challenges associated with Covid-19 and its impact on MIT and the greater community have provided a moment to explore what a sustainable, resilient campus and community looks like in practice,” says Director of Sustainability Julie Newman.
    The MIT campus climate resiliency framework, codified by MITOS in response to a changing climate, has long been organized around the interdependencies of four core systems: community (academic, research, and student life), buildings, utilities, and landscape systems. This same framework is now being applied in part to the MIT response to Covid-19. “The MIT campus climate resiliency framework has enabled us to understand the vulnerabilities and capacities within each core system that inhibit or enable fulfillment of MIT’s mission,” explains Brian Goldberg, MITOS assistant director. “The pandemic’s disruption of the community layer provides us with a remarkable test in progress of this adaptive capacity.”
    The campus response to the pandemic has, in fact, informed future modeling and demonstrated how the community can advance its important work even when displaced. “MIT has been able to offer countless virtual resources to maintain a connected community,” Goldberg explains. “While a future major flood could physically displace segments of our community, we’ve now seen that the ability to quickly evacuate and regroup virtually demonstrates a remarkable adaptive capacity.”
    Taking the hive home
    Also resilient are the flowering plants growing in the Hive Garden — the Institute’s student-supported pollinator garden, maintained by MIT Grounds Services alongside students. The closure of campus meant many would miss the first spring bloom in the new garden. To make up for this, a group of UA Sustainability Committee (UA Sustain) students began to brainstorm ways to bring sustainable gardening to the MIT community if its members couldn’t come to campus. Working with MITOS, students hatched the idea for the Hive@Home — a project that empowers students and staff to try their hands (and green thumbs) at growing a jalapeño or two, while building community.
    “The Hive@Home is designed to link students and staff through gardening — continuing to strengthen the relationships built between MIT Grounds and the community since the Hive Garden started,” says Susy Jones, senior project manager who is leading the effort for MITOS. With funding from UA Sustain and MindHandHeart, the Hive@Home pilot launched in April with more than four dozen community members receiving vegetable seeds and growing supplies. Now the community is sharing their sprouts and lessons learned on Slack with guidance from MIT Grounds experts like Norm Magnusson and Mike Seaberg, who helped bring the campus garden to life, along with professor of ocean and mechanical engineering Alexandra Techet, who is also an experienced home gardener.
    Lessons learned from Covid-19 response 
    The impacts of Covid-19 continue to provide insights into community behavior and views. Seeing an opportunity to better understand these views, the Sustainability Leadership Committee, in collaboration with the Office of Sustainability, the Environmental Solutions Initiative, Terrascope, and the MIT Energy Initiative, hosted a community sustainability forum where more than 100 participants — including staff, students, and faculty — shared ideas on how they thought the response to Covid-19 could inform sustainability efforts at MIT and beyond. Common themes of human health and well-being, climate action, food security, consumption and waste, sustainability education, and bold leadership emerged from the forum. “The event gave us a view into how MIT can be a sustainability leader in a post-Covid-19 world, and how our community would like to see this accomplished,” says Newman.
    Community members also shared a renewed focus on the impacts of consumption and single-use plastics, as well as the idea that remote work can decrease the carbon footprint of the Institute. The Sustainability Leadership Committee is now working to share these insights to drive action and launch new ideas with sustainability partners across campus. 
    These actions are just the beginning, as plans for campus are updated and the MIT community learns and adapts to a new normal at MIT. “We are looking at these ideas as a starting place,” explains Newman. “As we look to a future return to campus, we know the sustainability challenges and opportunities faced will continue to shift thinking about our mobility choices, where we eat, what we buy, and more. We will continue to have these community conversations and work across campus to support a sustainable, safe MIT.”