More stories

  •

    Study: Weaker ocean circulation could enhance CO2 buildup in the atmosphere

    As climate change advances, the ocean’s overturning circulation is predicted to weaken substantially. With such a slowdown, scientists estimate the ocean will pull down less carbon dioxide from the atmosphere. However, a slower circulation should also dredge up less carbon from the deep ocean that would otherwise be released back into the atmosphere. On balance, the ocean should maintain its role in drawing down carbon dioxide from the atmosphere, if at a slower pace.

    However, a new study by an MIT researcher finds that scientists may have to rethink the relationship between the ocean’s circulation and its long-term capacity to store carbon. As the circulation weakens, the ocean could instead release more carbon from the deep into the atmosphere.

    The reason has to do with a previously uncharacterized feedback between the ocean’s available iron, upwelling carbon and nutrients, surface microorganisms, and a little-known class of molecules known generally as “ligands.” When the ocean circulates more slowly, all these players interact in a self-perpetuating cycle that ultimately increases the amount of carbon that the ocean outgases back to the atmosphere.

    “By isolating the impact of this feedback, we see a fundamentally different relationship between ocean circulation and atmospheric carbon levels, with implications for the climate,” says study author Jonathan Lauderdale, a research scientist in MIT’s Department of Earth, Atmospheric, and Planetary Sciences. “What we thought is going on in the ocean is completely overturned.”

    Lauderdale says the findings show that “we can’t count on the ocean to store carbon in the deep ocean in response to future changes in circulation.
We must be proactive in cutting emissions now, rather than relying on these natural processes to buy us time to mitigate climate change.”

    His study appears today in the journal Nature Communications.

    Box flow

    In 2020, Lauderdale led a study that explored ocean nutrients, marine organisms, and iron, and how their interactions influence the growth of phytoplankton around the world. Phytoplankton are microscopic, plant-like organisms that live on the ocean surface and consume a diet of carbon and nutrients that upwell from the deep ocean, along with iron that drifts in from desert dust.

    The more phytoplankton that can grow, the more carbon dioxide they can absorb from the atmosphere via photosynthesis, and this plays a large role in the ocean’s ability to sequester carbon.

    For the 2020 study, the team developed a simple “box” model, representing conditions in different parts of the ocean as general boxes, each with a different balance of nutrients, iron, and ligands — organic molecules that are thought to be byproducts of phytoplankton. The team modeled a general flow between the boxes to represent the ocean’s larger circulation — the way seawater sinks, then is buoyed back up to the surface in different parts of the world.

    This modeling revealed that, even if scientists were to “seed” the oceans with extra iron, that iron wouldn’t have much of an effect on global phytoplankton growth, because of a limit set by ligands. Left on its own, iron is insoluble in the ocean and therefore unavailable to phytoplankton. Iron becomes soluble at “useful” levels only when bound to ligands, which keep it in a form that plankton can consume. Lauderdale found that adding iron to one ocean region to consume additional nutrients robs other regions of the nutrients that phytoplankton there need to grow.
    This lowers the production of ligands and the supply of iron back to the original ocean region, limiting the amount of extra carbon that would be taken up from the atmosphere.

    Unexpected switch

    Once the team published their study, Lauderdale reworked the box model into a publicly accessible form, adding ocean-atmosphere carbon exchange and extending the boxes to represent more diverse environments, such as conditions similar to the Pacific, the North Atlantic, and the Southern Ocean. In the process, he tested other interactions within the model, including the effect of varying ocean circulation.

    He ran the model with different circulation strengths, expecting to see less atmospheric carbon dioxide with weaker ocean overturning — a relationship that previous studies have supported, dating back to the 1980s. But he found instead a clear and opposite trend: The weaker the ocean’s circulation, the more CO2 built up in the atmosphere.

    “I thought there was some mistake,” Lauderdale recalls. “Why were atmospheric carbon levels trending the wrong way?”

    When he checked the model, he found that the parameter describing ocean ligands had been left “on” as a variable. In other words, the model was treating ligand concentrations as varying from one ocean region to another.

    On a hunch, Lauderdale turned this parameter “off,” setting ligand concentrations constant in every modeled ocean environment, an assumption that many ocean models typically make. That one change reversed the trend, back to the assumed relationship: A weaker circulation led to reduced atmospheric carbon dioxide. But which trend was closer to the truth?

    Lauderdale looked to the scant available data on ocean ligands to see whether their concentrations were more constant or variable in the actual ocean.
    He found his answer in GEOTRACES, an international program that coordinates measurements of trace elements and isotopes across the world’s oceans, which scientists can use to compare concentrations from region to region. Indeed, the molecules’ concentrations varied. If ligand concentrations do change from one region to another, then his surprising new result was likely representative of the real ocean: A weaker circulation leads to more carbon dioxide in the atmosphere.

    “It’s this one weird trick that changed everything,” Lauderdale says. “The ligand switch has revealed this completely different relationship between ocean circulation and atmospheric CO2 that we thought we understood pretty well.”

    Slow cycle

    To see what might explain the overturned trend, Lauderdale analyzed biological activity and carbon, nutrient, iron, and ligand concentrations from the ocean model under different circulation strengths, comparing scenarios in which ligands were variable or constant across the boxes.

    This revealed a new feedback: The weaker the ocean’s circulation, the less carbon and nutrients the ocean pulls up from the deep. Any phytoplankton at the surface would then have fewer resources to grow and would produce fewer byproducts (including ligands) as a result. With fewer ligands available, less iron at the surface would be usable, further reducing the phytoplankton population. There would then be fewer phytoplankton available to absorb carbon dioxide from the atmosphere and consume upwelled carbon from the deep ocean.

    “My work shows that we need to look more carefully at how ocean biology can affect the climate,” Lauderdale points out. “Some climate models predict a 30 percent slowdown in the ocean circulation due to melting ice sheets, particularly around Antarctica.
    This huge slowdown in overturning circulation could actually be a big problem: In addition to a host of other climate issues, not only would the ocean take up less anthropogenic CO2 from the atmosphere, but that could be amplified by a net outgassing of deep-ocean carbon, leading to an unanticipated increase in atmospheric CO2 and unexpected further climate warming.”
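    The self-reinforcing loop Lauderdale describes (weaker circulation brings up fewer nutrients, so phytoplankton grow less, make fewer ligands, and can use less iron, which suppresses growth further) can be caricatured as a fixed-point calculation. The sketch below is purely illustrative: the functional forms and every parameter value are assumptions chosen for demonstration, not values from Lauderdale’s box model.

```python
# Illustrative toy model of the ligand-iron-productivity feedback.
# All parameter values and functional forms are made-up assumptions,
# not numbers from Lauderdale's published box model.

def productivity(circulation, variable_ligands, n_iter=200):
    """Fixed-point phytoplankton productivity for a given overturning strength."""
    nutrient_limit = circulation / (circulation + 0.3)  # upwelled nutrients scale with circulation
    p = 0.5                                             # initial guess for productivity
    for _ in range(n_iter):
        # Ligands are either a byproduct of the biology itself, or held constant.
        ligand = 0.05 + 2.0 * p if variable_ligands else 1.0
        iron_limit = ligand / (ligand + 1.0)            # only ligand-bound iron is usable
        p = nutrient_limit * iron_limit                 # growth needs both nutrients and iron
    return p

for label, variable in [("variable ligands", True), ("constant ligands", False)]:
    ratio = productivity(0.5, variable) / productivity(1.0, variable)
    print(f"{label}: halving circulation keeps {ratio:.0%} of productivity")
```

    With these made-up numbers, halving the overturning keeps about 81 percent of productivity when ligands are held constant, but only about 60 percent when ligands vary with growth: the feedback amplifies the decline, and with it the loss of biological carbon uptake.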

  •

    Pioneering the future of materials extraction

    The next time you cook pasta, imagine that you are cooking spaghetti, rigatoni, and seven other varieties all together, and they need to be separated onto 10 different plates before serving. A colander can remove the water — but you still have a mound of unsorted noodles. Now imagine that this had to be done for thousands of tons of pasta a day.

    That gives you an idea of the scale of the problem facing Brendan Smith PhD ’18, co-founder and CEO of SiTration, a startup formed out of MIT’s Department of Materials Science and Engineering (DMSE) in 2020. SiTration, which raised $11.8 million in seed capital led by venture capital firm 2150 earlier this month, is revolutionizing the extraction and refining of copper, cobalt, nickel, lithium, precious metals, and other materials critical to manufacturing clean-energy technologies such as electric motors, wind turbines, and batteries. Its initial target applications are recovering these materials from complex mining feed streams, from spent lithium-ion batteries from electric vehicles, and from various metals-refining processes.

    The company’s breakthrough lies in a new silicon membrane technology that can be adjusted to efficiently recover disparate materials, providing a more sustainable and economically viable alternative to conventional, chemically intensive processes. Think of a colander with adjustable pores to strain different types of pasta.

    SiTration’s technology has garnered interest from industry players, including mining giant Rio Tinto. Some observers may question whether targeting such different industries could cause the company to lose focus.
    “But when you dig into these markets, you discover there is actually a significant overlap in how all of these materials are recovered, making it possible for a single solution to have impact across verticals,” Smith says.

    Powering up materials recovery

    Conventional methods of extracting critical materials in mining, refining, and recycling lithium-ion batteries involve heavy use of chemicals and heat, which harm the environment. Typically, raw ore from mines or spent batteries is ground into fine particles before being dissolved in acid or incinerated in a furnace. Afterward, the materials undergo intensive chemical processing to separate and purify the valuable components. “It requires as much as 10 tons of chemical input to produce one ton of critical material recovered from the mining or battery recycling feedstock,” says Smith.

    Operators can then sell the recaptured materials back into the supply chain, but they suffer from wide swings in profitability due to uncertain market prices. Lithium prices have been the most volatile, having surged more than 400 percent before tumbling back to near-original levels over the past two years. Despite their poor economics and negative environmental impact, these processes remain the state of the art today.

    By contrast, SiTration is electrifying the critical-materials recovery process, improving efficiency and reducing the use of chemicals and heat, which in turn means less chemical waste. What’s more, the company’s processing technology is built to be highly adaptable, so it can handle many kinds of materials.

    The core technology is based on work done at MIT to develop a novel type of membrane made from silicon, which is durable enough to withstand harsh chemicals and high temperatures while also conducting electricity. It’s also highly tunable, meaning it can be modified or adjusted to suit different conditions or target specific materials.
    SiTration’s technology also incorporates electro-extraction, a technique that uses electrochemistry to further isolate and extract specific target materials. Combining the two methods in a single system makes it more efficient and effective at isolating and recovering valuable materials, Smith says: Depending on what needs to be separated or extracted, the filtration and electro-extraction processes are adjusted accordingly.

    “We can produce membranes with pore sizes from the molecular scale up to the size of a human hair in diameter, and everything in between. Combined with the ability to electrify the membrane and separate based on a material’s electrochemical properties, this tunability allows us to target a vast array of different operations and separation applications across industrial fields,” says Smith.

    Efficient access to materials like lithium, cobalt, and copper — and precious metals like platinum, gold, silver, palladium, and rare-earth elements — is key to unlocking innovation in business and sustainability as the world moves toward electrification and away from fossil fuels.

    “This is an era when new materials are critical,” says Professor Jeffrey Grossman, co-founder and chief scientist of SiTration and the Morton and Claire Goulder and Family Professor in Environmental Systems at DMSE. “For so many technologies, they’re both the bottleneck and the opportunity, offering tremendous potential for non-incremental advances. And the role they’re having in commercialization and in entrepreneurship cannot be overstated.”

    SiTration’s commercial frontier

    Smith became interested in separation technology in 2013 as a PhD student in Grossman’s DMSE research group, which has focused on the design of new membrane materials for a range of applications. The two shared a curiosity about separation of critical materials and a hunger to advance the technology.
    After years of study under Grossman’s mentorship, and with support from several MIT incubators and foundations, including the Abdul Latif Jameel Water and Food Systems Lab’s Solutions Program, the Deshpande Center for Technological Innovation, the Kavanaugh Fellowship, MIT Sandbox, and the Venture Mentoring Service, Smith was ready to officially form SiTration in 2020. Grossman has a seat on the board and plays an active role as a strategic and technical advisor.

    Grossman is involved in several MIT spinoffs and embraces the different imperatives of research versus commercialization. “At SiTration, we’re driving this technology to work at scale. There’s something super exciting about that goal,” he says. “The challenges that come with scaling are very different than the challenges that come in a university lab.” At the same time, although not every research breakthrough becomes a commercial product, open-ended, curiosity-driven knowledge pursuit holds its own crucial value, he adds.

    It has been rewarding for Grossman to see his technically gifted student and colleague develop the host of other skills the role of CEO demands. Getting out to the market and talking about the technology with potential partners, putting together a dynamic team, discovering the challenges facing industry, drumming up support — early on, those became the most pressing activities on Smith’s agenda.

    “What’s most fun to me about being a CEO of an early-stage startup is that there are 100 different factors, most of them people-oriented, that you have to navigate every day. Each stakeholder has different motivations and objectives. And you basically try to fit that all together, to create value for our partners and customers, the company, and for society,” says Smith.
    “You start with just an idea, and you have to keep leveraging that to form a more and more tangible product, to multiply and progress commercial relationships, and to do it all at an ever-expanding scale.”

    MIT DNA runs deep in the nine-person company, with DMSE grad and former Grossman student Jatin Patil as director of product; Ahmed Helal, from MIT’s Department of Mechanical Engineering, as vice president of research and development; Daniel Bregante, from the Department of Chemistry, as VP of technology; and Sarah Melvin, from the departments of Physics and Political Science, as VP of strategy and operations. Melvin is the first hire devoted to business development. Smith plans to continue expanding the team following the closing of the company’s seed round.

    Strategic alliances

    Being a good communicator was important when it came to securing funding, Smith says. SiTration received $2.35 million in pre-seed funding in 2022, in a round led by Azolla Ventures, which reserves its $239 million in investment capital for startups that would not otherwise easily obtain funding. “We invest only in solution areas that can achieve gigaton-scale climate impact by 2050,” says Matthew Nordan, a general partner at Azolla and now a SiTration board member. The MIT-affiliated E14 Fund also contributed to the pre-seed round; both Azolla and E14 participated in the recent seed round.

    “Brendan demonstrated an extraordinary ability to go from being a thoughtful scientist to a business leader and thinker who has punched way above his weight in engaging with customers, recruiting a well-balanced team, and navigating tricky markets,” says Nordan.

    One of SiTration’s first partnerships is with Rio Tinto, one of the largest mining companies in the world. As SiTration evaluated various use cases in its early days, identifying critical materials as its target market, Rio Tinto was looking for partners to recover valuable metals such as cobalt and copper from the wastewater generated at mines.
    These metals were typically trapped in the water, creating harmful waste and lost revenue.

    “We thought this was a great innovation challenge and posted it on our website to scout for companies to partner with who can help us solve this water challenge,” said Nick Gurieff, principal advisor for mine closure, in a 2023 interview with MIT’s Industrial Liaison Program.

    Mining was not yet a market focus for SiTration, but Smith couldn’t help noticing that Rio Tinto’s needs aligned with what his young company offered. SiTration submitted its proposal in August 2022. Gurieff said SiTration’s tunable membrane set it apart. The companies formed a business partnership in June 2023, with SiTration adjusting its membrane to handle mine wastewater and incorporating Rio Tinto’s feedback to refine the technology. After running tests with water from mine sites, SiTration will begin building a small-scale critical-materials recovery unit, followed by larger-scale systems processing up to 100 cubic meters of water an hour.

    SiTration’s focused technology development with Rio Tinto puts it in a good position for future market growth, Smith says. “Every ounce of effort and resource we put into developing our product is geared toward creating real-world value. Having an industry-leading partner constantly validating our progress is a tremendous advantage.”

    It has been a long journey from the days when Smith began tinkering with tiny holes in silicon in Grossman’s DMSE lab. Now the two work together as business partners, scaling up technology to meet a global need. Their joint passion for applying materials innovation to tough problems has served them well.

    “Materials science and engineering is an engine for a lot of the innovation that is happening today,” Grossman says. “When you look at all of the challenges we face to make the transition to a more sustainable planet, you realize how many of these are materials challenges.”
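    The “colander with adjustable pores” idea, combined with electro-extraction, amounts to selecting species by size and by electrochemical behavior at the same time. The sketch below is a hypothetical illustration of that selection logic; the species properties and thresholds are invented numbers, not SiTration data.

```python
# Hypothetical feed stream: species -> (effective size in nm, ionic charge).
# All values are invented for illustration.
FEED = {
    "lithium ion": (0.4, 1),
    "cobalt ion": (0.5, 2),
    "copper ion": (0.4, 2),
    "organic sludge": (50.0, 0),
    "silica grit": (500.0, 0),
}

def separate(feed, pore_nm, min_charge):
    """Species that both fit through the membrane pores and respond to the applied potential."""
    return sorted(name for name, (size, charge) in feed.items()
                  if size <= pore_nm and charge >= min_charge)

# The same stage, "tuned" two different ways:
print(separate(FEED, pore_nm=1.0, min_charge=2))  # ['cobalt ion', 'copper ion']
print(separate(FEED, pore_nm=1.0, min_charge=1))  # ['cobalt ion', 'copper ion', 'lithium ion']
```

    A real membrane stage is continuous and far more subtle, but the point stands: adjusting two independent knobs, pore size and electrochemical selectivity, yields different product streams from one piece of hardware.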

  •

    Making climate models relevant for local decision-makers

    Climate models are a key technology in predicting the impacts of climate change. By running simulations of the Earth’s climate, scientists and policymakers can estimate conditions like sea level rise, flooding, and rising temperatures, and make decisions about how to respond appropriately. But current climate models struggle to provide this information quickly or affordably enough to be useful on smaller scales, such as the size of a city.

    Now, the authors of a new open-access paper published in the Journal of Advances in Modeling Earth Systems have found a method that leverages machine learning to retain the benefits of current climate models while reducing the computational cost of running them. “It turns the traditional wisdom on its head,” says Sai Ravela, a principal research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS) who wrote the paper with EAPS postdoc Anamitra Saha.

    Traditional wisdom

    In climate modeling, downscaling is the process of using a global climate model with coarse resolution to generate finer details over smaller regions. Imagine a digital picture: A global model is a large picture of the world with a low number of pixels. To downscale, you zoom in on just the section of the photo you want to look at — for example, Boston. But because the original picture was low resolution, the new version is blurry; it doesn’t give enough detail to be particularly useful.

    “If you go from coarse resolution to fine resolution, you have to add information somehow,” explains Saha. Downscaling attempts to add that information back in by filling in the missing pixels. “That addition of information can happen two ways: Either it can come from theory, or it can come from data.”

    Conventional downscaling often involves using models built on physics (such as the process of air rising, cooling, and condensing, or the landscape of the area), supplemented with statistical data taken from historical observations.
    But this method is computationally taxing: It takes a lot of time and computing power to run, and it is expensive.

    A little bit of both

    In their new paper, Saha and Ravela have figured out a way to add the information another way. They employ a machine learning technique called adversarial learning, which uses two machines: One generates data to go into the photo, while the other judges the sample by comparing it to actual data. If the second machine thinks the image is fake, the first machine has to try again until it convinces the second. The end goal of the process is to create super-resolution data.

    Using machine learning techniques like adversarial learning is not a new idea in climate modeling; where it currently struggles is in its inability to handle large amounts of basic physics, like conservation laws. The researchers discovered that simplifying the physics going in and supplementing it with statistics from historical data was enough to generate the results they needed.

    “If you augment machine learning with some information from the statistics and simplified physics both, then suddenly, it’s magical,” says Ravela. He and Saha started by estimating extreme rainfall amounts, removing the more complex physics equations and focusing on water vapor and land topography. They then generated general rainfall patterns for mountainous Denver and flat Chicago alike, applying historical accounts to correct the output. “It’s giving us extremes, like the physics does, at a much lower cost. And it’s giving us similar speeds to statistics, but at much higher resolution.”

    Another unexpected benefit of the results was how little training data was needed. “The fact that only a little bit of physics and a little bit of statistics was enough to improve the performance of the ML [machine learning] model … was actually not obvious from the beginning,” says Saha.
    The model takes only a few hours to train and can produce results in minutes, an improvement over the months other models take to run.

    Quantifying risk quickly

    Being able to run the models quickly and often is a key requirement for stakeholders such as insurance companies and local policymakers. Ravela gives the example of Bangladesh: By seeing how extreme weather events will impact the country, decisions about which crops should be grown, or where populations should migrate, can be made considering a very broad range of conditions and uncertainties as soon as possible.

    “We can’t wait months or years to be able to quantify this risk,” he says. “You need to look out way into the future and at a large number of uncertainties to be able to say what might be a good decision.”

    While the current model only looks at extreme precipitation, training it to examine other critical events, such as tropical storms, winds, and temperature, is the next step of the project. With a more robust model, Ravela hopes to apply it to other places like Boston and Puerto Rico as part of a Climate Grand Challenges project.

    “We’re very excited both by the methodology that we put together, as well as the potential applications that it could lead to,” he says.
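    The “add information back in” step that Saha describes can be shown with the simplest possible statistical downscaling, far simpler than the adversarial method in the paper: coarsening a field destroys small-scale variance, and a correction factor estimated from historical data can restore it. The synthetic field, block size, and inflation step below are all illustrative assumptions.

```python
import math

# A synthetic "true" fine-resolution field: a smooth large-scale signal
# plus small-scale detail that a coarse model cannot represent.
fine_true = [math.sin(i / 5) + 0.4 * math.sin(1.9 * i) for i in range(128)]

# What a coarse global model effectively provides: block averages (8 cells per block).
coarse = [sum(fine_true[i:i + 8]) / 8 for i in range(0, 128, 8)]

# Naive downscaling: repeat each coarse value -- the "blurry photo".
naive = [c for c in coarse for _ in range(8)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Coarsening destroys small-scale variance, so inflate anomalies by a factor
# estimated from "historical" data (here, the true field itself).
r = math.sqrt(variance(fine_true) / variance(naive))
m = sum(naive) / len(naive)
corrected = [m + r * (x - m) for x in naive]

print(variance(naive) < variance(fine_true))                  # True: the naive version is too smooth
print(abs(variance(corrected) - variance(fine_true)) < 1e-9)  # True: the statistic is restored
```

    The adversarial setup in the paper plays an analogous role far more powerfully: instead of matching a single statistic, the discriminator pushes the generator to reproduce the full small-scale structure seen in the data, with simplified physics keeping the output consistent.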

  •

    Students research pathways for MIT to reach decarbonization goals

    A number of emerging technologies hold promise for helping organizations move away from fossil fuels and achieve deep decarbonization. The challenge is deciding which technologies to adopt, and when.

    MIT, which has a goal of eliminating direct campus emissions by 2050, must make such decisions sooner than most to achieve its mission. That challenge was at the heart of the recently concluded class 4.s42 (Building Technology — Carbon Reduction Pathways for the MIT Campus).

    The class brought together undergraduate and graduate students from across the Institute to learn about different technologies and decide on the best path forward. It concluded with a final report as well as student presentations to members of MIT’s Climate Nucleus on May 9.

    “The mission of the class is to put together a cohesive document outlining how MIT can reach its goal of decarbonization by 2050,” says Morgan Johnson Quamina, an undergraduate in the Department of Civil and Environmental Engineering. “We’re evaluating how MIT can reach these goals on time, what sorts of technologies can help, and how quickly and aggressively we’ll have to move. The final report details a ton of scenarios for partial and full implementation of different technologies, outlines timelines for everything, and features recommendations.”

    The class was taught by professor of architecture Christoph Reinhart but included presentations by other faculty about low- and zero-carbon technology areas in their fields, including advanced nuclear reactors, deep geothermal energy, carbon capture, and more.

    The students’ work served as an extension of MIT’s Campus Decarbonization Working Group, which Reinhart co-chairs with Director of Sustainability Julie Newman.
    The group is charged with developing a technology roadmap for the campus to reach its goal of decarbonizing its energy systems. Reinhart says the class was a way to leverage the energy and creativity of students to accelerate the group’s work.

    “It’s very much focused on establishing a vision for what could happen at MIT,” Reinhart says. “We are trying to bring these technologies together so that we see how this [decarbonization process] would actually look on our campus.”

    A class with impact

    Throughout the semester, every Thursday from 9 a.m. to 12 p.m., around 20 students gathered to explore different decarbonization technology pathways. They also discussed energy policies, methods for evaluating risk, and future electric grid supply changes in New England.

    “I love that this work can have a real-world impact,” says Emile Germonpre, a master’s student in the Department of Nuclear Science and Engineering. “You can tell people aren’t thinking about grades or workload — I think people would’ve loved it even if the workload was doubled. Everyone is just intrinsically motivated to help solve this problem.”

    Classes typically began with an introduction to one of 10 technologies, covering technical maturity, ease of implementation, costs, and how to model the technology’s impact on campus emissions. Students were then split into teams to evaluate each technology’s feasibility.

    “I’ve learned a lot about decarbonization and climate change,” says Johnson Quamina. “As an undergrad, I haven’t had many focused classes like this. But it was really beneficial to learn about some of these technologies I hadn’t even heard of before.
    It’s awesome to be contributing to the community like this.”

    As part of the class, students also developed a model that visualizes each intervention’s effect on emissions, allowing users to select individual interventions or combinations of them to see how they shape emissions trajectories.

    “We have a physics-based model that takes into account every building,” says Reinhart. “You can look at variants where we retrofit buildings, where we add rooftop photovoltaics, nuclear, or carbon capture, and where we adopt different types of underground district heating systems. The point is you can start to see how fast we could do something like this and what the real game-changers are.”

    The class also designed and conducted a preliminary survey, to be expanded in the fall, that captures the MIT community’s attitudes toward the different technologies. Preliminary results were shared with the Climate Nucleus during the students’ May 9 presentations.

    “I think it’s this unique and wonderful intersection of the forward-looking and innovative nature of academia with the real-world impact and specificity that you’d typically only find in industry,” Germonpre says. “It lets you work on a tangible project, the MIT campus, while exploring technologies that companies today find too risky to be the first mover on.”

    From MIT’s campus to the world

    The students recommended MIT form a building energy team to audit and retrofit all campus buildings. They also suggested MIT commission a comprehensive geological feasibility survey to support planning of shallow and deep borehole fields for harvesting underground heat. A third recommendation was to communicate with the MIT community, as well as with regulators and policymakers in the area, about the deployment of nuclear batteries and deep geothermal boreholes on campus.

    The students’ modeling tool can also help members of the working group explore various decarbonization pathways.
    For instance, installing rooftop photovoltaics now would effectively reduce emissions, but installing them a few decades from now, when the regional electricity grid is expected to have reduced its reliance on fossil fuels anyway, would have a much smaller impact.

    “When you have students working together, the recommendations are a little less filtered, which I think is a good thing,” Reinhart says. “I think there’s a real sense of urgency in the class. For certain choices, we have to basically act now.”

    Reinhart plans to do more activities related to the working group and the class’s recommendations in the fall, and he says he is currently engaged with the Massachusetts Governor’s Office to explore doing something similar for the state.

    Students say they plan to keep working on the survey this summer and to continue studying their technology areas. In the longer term, they believe the experience will help them in their careers.

    “Decarbonization is really important, and understanding how we can implement new technologies on campuses or in buildings provides me with a more well-rounded vision for what I could design in my career,” says Johnson Quamina, who wants to work as a structural or environmental engineer but says the class has also inspired her to consider careers in energy.

    The students’ findings also have implications beyond the MIT campus. In accordance with MIT’s 2015 climate plan, which committed to using the campus community as a “test bed for change,” the students’ recommendations also hold value for organizations around the world.

    “The mission is definitely broader than just MIT,” Germonpre says. “We don’t just want to solve MIT’s problem. We’ve dismissed technologies that were too specific to MIT. The goal is for MIT to lead by example and help certain technologies mature so that we can accelerate their impact.”
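    The rooftop-photovoltaics timing argument is easy to make quantitative with a toy calculation. The grid-intensity trajectory and the PV output below are illustrative assumptions, not numbers from the students’ model:

```python
# Toy timing calculation: the carbon value of on-site solar shrinks as the
# regional grid decarbonizes. All numbers are illustrative assumptions.

def grid_intensity(year):
    """Assumed grid carbon intensity in tCO2/MWh, falling linearly until 2050."""
    return max(0.05, 0.35 - 0.30 * (year - 2020) / 30)

def avoided_emissions(install_year, annual_mwh=1000, end_year=2050):
    """Cumulative grid emissions displaced by a PV array from installation to end_year."""
    return sum(grid_intensity(y) * annual_mwh for y in range(install_year, end_year))

print(f"install in 2025: {avoided_emissions(2025):,.0f} tCO2 avoided")
print(f"install in 2045: {avoided_emissions(2045):,.0f} tCO2 avoided")
```

    Under these assumptions the same array avoids 4,500 tons of CO2 if installed in 2025 but only 400 tons if installed in 2045, which is why the sequencing of interventions, not just the choice of technologies, dominates the pathway discussion.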

  •

    Reducing carbon emissions from long-haul trucks

    People around the world rely on trucks to deliver the goods they need, and so-called long-haul trucks play a critical role in those supply chains. In the United States, long-haul trucks moved 71 percent of all freight in 2022. But those long-haul trucks are heavy polluters, especially of the carbon emissions that threaten the global climate. According to U.S. Environmental Protection Agency estimates, in 2022 more than 3 percent of all carbon dioxide (CO2) emissions came from long-haul trucks.

    The problem is that long-haul trucks run almost exclusively on diesel fuel, and burning diesel releases high levels of CO2 and other carbon emissions. Global demand for freight transport is projected to as much as double by 2050, so it’s critical to find another source of energy that will meet the needs of long-haul trucks while also reducing their carbon emissions. And conversion to the new fuel must not be costly. “Trucks are an indispensable part of the modern supply chain, and any increase in the cost of trucking will be felt universally,” notes William H. Green, the Hoyt Hottel Professor in Chemical Engineering and director of the MIT Energy Initiative.

    For the past year, Green and his research team have been seeking a low-cost, cleaner alternative to diesel. Finding a replacement is difficult because diesel meets the needs of the trucking industry so well. For one thing, diesel has a high energy density — that is, energy content per pound of fuel. There’s a legal limit on the total weight of a truck and its contents, so using a lighter energy source allows the truck to carry more payload — an important consideration, given the low profit margin of the freight industry. In addition, diesel fuel is readily available at retail refueling stations across the country — a critical resource for drivers, who may travel 600 miles in a day and sleep in their truck rather than returning to their home depot.
    Finally, diesel fuel is a liquid, so it’s easy to distribute to refueling stations and then pump into trucks.

    Past studies have examined numerous alternative technology options for powering long-haul trucks, but no clear winner has emerged. Now, Green and his team have evaluated the available options based on consistent and realistic assumptions about the technologies involved and the typical operation of a long-haul truck, and assuming no subsidies to tip the cost balance. Their in-depth analysis of converting long-haul trucks to battery electric — summarized below — found a high cost and negligible emissions gains in the near term. Studies of methanol and other liquid fuels from biomass are ongoing, but already a major concern is whether the world can plant and harvest enough biomass for biofuels without destroying the ecosystem. An analysis of hydrogen — also summarized below — highlights specific challenges with using that clean-burning fuel, which is a gas at normal temperatures.

    Finally, the team identified an approach that could make hydrogen a promising, low-cost option for long-haul trucks. And, says Green, “it’s an option that most people are probably unaware of.” It involves a novel way of using materials that can pick up hydrogen, store it, and then release it when and where it’s needed to serve as a clean-burning fuel.

    Defining the challenge: A realistic drive cycle, plus diesel values to beat

    The MIT researchers believe that the lack of consensus on the best way to clean up long-haul trucking may have a simple explanation: Different analyses are based on different assumptions about the driving behavior of long-haul trucks. Indeed, some of them don’t accurately represent actual long-haul operations. So the first task for the MIT team was to define a representative — and realistic — “drive cycle” for actual long-haul truck operations in the United States.
    Then the MIT researchers — and researchers elsewhere — can assess potential replacement fuels and engines based on a consistent set of assumptions in modeling and simulation analyses.

    To define the drive cycle for long-haul operations, the MIT team used a systematic approach to analyze many hours of real-world driving data covering 58,000 miles. They examined 10 features and identified three — daily range, vehicle speed, and road grade — that have the greatest impact on energy demand and thus on fuel consumption and carbon emissions.

    The representative drive cycle that emerged covers a distance of 600 miles, an average vehicle speed of 55 miles per hour, and a road grade ranging from negative 6 percent to positive 6 percent.

    The next step was to generate key values for the performance of the conventional diesel “powertrain,” that is, all the components involved in creating power in the engine and delivering it to the wheels on the ground. Based on their defined drive cycle, the researchers simulated the performance of a conventional diesel truck, generating “benchmarks” for fuel consumption, CO2 emissions, cost, and other performance parameters.

    Now they could perform parallel simulations — based on the same drive-cycle assumptions — of possible replacement fuels and powertrains to see how the cost, carbon emissions, and other performance parameters would compare to the diesel benchmarks.

    The battery electric option

    When considering how to decarbonize long-haul trucks, a natural first thought is battery power. After all, battery electric cars and pickup trucks are proving highly successful. Why not switch to battery electric long-haul trucks?
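    The scale of these powertrain comparisons can be illustrated with a minimal road-load sketch. The vehicle parameters and drivetrain efficiency below are generic assumptions for illustration, not the MIT team’s actual simulation inputs:

```python
import math

# Illustrative road-load model for a fully loaded truck. All parameters
# are assumptions for illustration, not the study's simulation inputs.
MASS_KG = 36_000        # ~80,000 lb gross vehicle weight
G = 9.81                # gravity, m/s^2
C_RR = 0.0055           # rolling-resistance coefficient (assumed)
RHO = 1.2               # air density, kg/m^3
CDA = 5.5               # drag area Cd*A in m^2 (assumed)
ETA_DRIVE = 0.85        # battery-to-wheel efficiency (assumed)

def road_load_kw(speed_mps, grade):
    """Tractive power (kW) needed to hold a steady speed on a grade."""
    theta = math.atan(grade)
    force = (MASS_KG * G * (C_RR * math.cos(theta) + math.sin(theta))
             + 0.5 * RHO * CDA * speed_mps ** 2)
    return force * speed_mps / 1000.0

v = 55 * 0.44704                 # 55 mph in m/s
hours = 600 / 55                 # the 600-mile drive cycle at average speed
flat_kw = road_load_kw(v, 0.0)
battery_mwh = flat_kw * hours / ETA_DRIVE / 1000.0

print(round(flat_kw), round(battery_mwh, 2))
```

    On flat ground this crude estimate comes out near 1.2 MWh of battery energy for the 600-mile cycle; grades, accessory loads, and reserve margins push a real design substantially higher, which is consistent in order of magnitude with the roughly 2 MWh pack size discussed in the battery-electric analysis.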
    “Again, the literature is very divided, with some studies saying that this is the best idea ever, and other studies saying that this makes no sense,” says Sayandeep Biswas, a graduate student in chemical engineering.

    To assess the battery electric option, the MIT researchers used a physics-based vehicle model plus well-documented estimates for the efficiencies of key components such as the battery pack, generators, motor, and so on. Assuming the previously described drive cycle, they determined operating parameters, including how much power the battery-electric system needs. From there they could calculate the size and weight of the battery required to satisfy the power needs of the battery electric truck.

    The outcome was disheartening. Providing enough energy to travel 600 miles without recharging would require a 2 megawatt-hour battery. “That’s a lot,” notes Kariana Moreno Sader, a graduate student in chemical engineering. “It’s the same as what two U.S. households consume per month on average.” And the weight of such a battery would significantly reduce the amount of payload that could be carried. An empty diesel truck typically weighs 20,000 pounds. With a legal limit of 80,000 pounds, there’s room for 60,000 pounds of payload. The 2 MWh battery would weigh roughly 27,000 pounds — significantly reducing the allowable capacity for carrying payload.

    Accounting for that “payload penalty,” the researchers calculated that roughly four electric trucks would be required to replace every three of today’s diesel-powered trucks. Furthermore, each added truck would require an additional driver. The impact on operating expenses would be significant.

    Analyzing the emissions reductions that might result from shifting to battery electric long-haul trucks also brought disappointing results. One might assume that using electricity would eliminate CO2 emissions.
    But when the researchers included emissions associated with making that electricity, that wasn’t true.

    “Battery electric trucks are only as clean as the electricity used to charge them,” notes Moreno Sader. Most of the time, drivers of long-haul trucks will be charging from national grids rather than dedicated renewable energy plants. According to U.S. Energy Information Administration statistics, fossil fuels make up more than 60 percent of the current U.S. power grid, so electric trucks would still be responsible for significant levels of carbon emissions. Manufacturing batteries for the trucks would generate additional CO2 emissions.

    Building the charging infrastructure would require massive upfront capital investment, as would upgrading the existing grid to reliably meet additional energy demand from the long-haul sector. Accomplishing those changes would be costly and time-consuming, which raises further concern about electrification as a means of decarbonizing long-haul freight.

    In short, switching today’s long-haul diesel trucks to battery electric power would bring major increases in costs for the freight industry and negligible carbon emissions benefits in the near term. Analyses assuming various types of batteries as well as other drive cycles produced comparable results.

    However, the researchers are optimistic about where the grid is going in the future. “In the long term, say by around 2050, emissions from the grid are projected to be less than half what they are now,” says Moreno Sader.
    “When we do our calculations based on that prediction, we find that emissions from battery electric trucks would be around 40 percent lower than our calculated emissions based on today’s grid.”

    For Moreno Sader, the goal of the MIT research is to help “guide the sector on what would be the best option.” With that goal in mind, she and her colleagues are now examining the battery electric option under different scenarios — for example, assuming battery swapping (a depleted battery isn’t recharged but replaced by a fully charged one), short-haul trucking, and other applications that might produce a more cost-competitive outcome, even for the near term.

    A promising option: hydrogen

    As the world looks to move away from fossil fuels for all uses, much attention is focusing on hydrogen. Could hydrogen be a good alternative for today’s diesel-burning long-haul trucks?

    To find out, the MIT team performed a detailed analysis of the hydrogen option. “We thought that hydrogen would solve a lot of the problems we had with battery electric,” says Biswas. It doesn’t have associated CO2 emissions. Its energy density is far higher, so it doesn’t create the weight problem posed by heavy batteries. In addition, existing compression technology can get enough hydrogen fuel into a regular-sized tank to cover the needed distance and range. “You can actually give drivers the range they want,” he says. “There’s no issue with ‘range anxiety.’”

    But while using hydrogen for long-haul trucking would reduce carbon emissions, it would cost far more than diesel. Based on their detailed analysis of hydrogen, the researchers concluded that the main source of added cost is transporting it. Hydrogen can be made in a chemical facility, but then it needs to be distributed to refueling stations across the country. Conventionally, there have been two main ways of transporting hydrogen: as a compressed gas and as a cryogenic liquid.
    As Biswas notes, the former is “super high pressure,” and the latter is “super cold.” The researchers’ calculations show that as much as 80 percent of the cost of delivered hydrogen is due to transportation and refueling, plus there’s the need to build dedicated refueling stations that can meet new environmental and safety standards for handling hydrogen as a compressed gas or a cryogenic liquid.

    Having dismissed the conventional options for shipping hydrogen, they turned to a less-common approach: transporting hydrogen using “liquid organic hydrogen carriers” (LOHCs), special organic (carbon-containing) chemical compounds that can under certain conditions absorb hydrogen atoms and under other conditions release them.

    LOHCs are in use today to deliver small amounts of hydrogen for commercial use. Here’s how the process works: In a chemical plant, the carrier compound is brought into contact with hydrogen in the presence of a catalyst under elevated temperature and pressure, and the compound picks up the hydrogen. The “hydrogen-loaded” compound — still a liquid — is then transported under atmospheric conditions. When the hydrogen is needed, the compound is again exposed to a temperature increase and a different catalyst, and the hydrogen is released.

    LOHCs thus appear to be ideal hydrogen carriers for long-haul trucking. They’re liquid, so they can easily be delivered to existing refueling stations, where the hydrogen would be released; and they contain at least as much energy per gallon as hydrogen in a cryogenic liquid or compressed gas form. However, a detailed analysis of using hydrogen carriers showed that the approach would decrease emissions but at a considerable cost.

    The problem begins with the “dehydrogenation” step at the retail station. Releasing the hydrogen from the chemical carrier requires heat, which is generated by burning some of the hydrogen being carried by the LOHC.
    The researchers calculate that getting the needed heat takes 36 percent of that hydrogen. (In theory, the process would take only 27 percent — but in reality, that efficiency won’t be achieved.) So out of every 100 units of starting hydrogen, 36 units are now gone.

    But that’s not all. The hydrogen that comes out is at near-ambient pressure. So the facility dispensing the hydrogen will need to compress it — a process that the team calculates will use up 20-30 percent of the starting hydrogen.

    Because of the needed heat and compression, there’s now less than half of the starting hydrogen left to be delivered to the truck — and as a result, the hydrogen fuel becomes twice as expensive. The bottom line is that the technology works, but “when it comes to really beating diesel, the economics don’t work. It’s quite a bit more expensive,” says Biswas. In addition, the refueling stations would require expensive compressors and auxiliary units such as cooling systems. The capital investment and the operating and maintenance costs together imply that the market penetration of hydrogen refueling stations will be slow.

    A better strategy: onboard release of hydrogen from LOHCs

    Given the potential benefits of LOHCs, the researchers focused on how to deal with both the heat needed to release the hydrogen and the energy needed to compress it. “That’s when we had the idea,” says Biswas. “Instead of doing the dehydrogenation [hydrogen release] at the refueling station and then loading the truck with hydrogen, why don’t we just take the LOHC and load that onto the truck?” Like diesel, LOHC is a liquid, so it’s easily transported and pumped into trucks at existing refueling stations.
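    Tallied directly, the station-side losses quoted above explain why delivered hydrogen gets so expensive. A minimal sketch using the article’s figures, with the compression loss assumed at the midpoint of the quoted 20-30 percent range:

```python
# Trace 100 units of hydrogen through station-side dehydrogenation,
# using the loss figures quoted in the article.
start = 100.0
heat_loss = 0.36 * start           # burned to heat the hydrogen-release step
compression_loss = 0.25 * start    # assumed midpoint of the 20-30% range
delivered = start - heat_loss - compression_loss

print(delivered, delivered < start / 2)
```

    Only 39 of the original 100 units reach the truck in this tally — less than half, which is what roughly doubles the cost of the delivered fuel.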
    “We’ll then make hydrogen as it’s needed based on the power demands of the truck — and we can capture waste heat from the engine exhaust and use it to power the dehydrogenation process,” says Biswas.

    In their proposed plan, hydrogen-loaded LOHC is created at a chemical “hydrogenation” plant and then delivered to a retail refueling station, where it’s pumped into a long-haul truck. Onboard the truck, the loaded LOHC goes into the fuel-storage tank. From there it moves to the “dehydrogenation unit” — the reactor where heat and a catalyst together promote chemical reactions that separate the hydrogen from the LOHC. The hydrogen is sent to the powertrain, where it burns, producing energy that propels the truck forward.

    Hot exhaust from the powertrain goes to a “heat-integration unit,” where its waste heat energy is captured and returned to the reactor to help encourage the reaction that releases hydrogen from the loaded LOHC. The unloaded LOHC is pumped back into the fuel-storage tank, where it’s kept in a separate compartment to keep it from mixing with the loaded LOHC. From there, it’s pumped back into the retail refueling station and then transported back to the hydrogenation plant to be loaded with more hydrogen.

    Switching to onboard dehydrogenation brings down costs by eliminating the need for extra hydrogen compression and by using waste heat in the engine exhaust to drive the hydrogen-release process. So how does their proposed strategy look compared to diesel? Based on a detailed analysis, the researchers determined that using their strategy would be 18 percent more expensive than using diesel, and emissions would drop by 71 percent.

    But those results need some clarification. The 18 percent cost premium of using LOHC with onboard hydrogen release is based on the price of diesel fuel in 2020. In spring of 2023 the price was about 30 percent higher.
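    That price movement is easy to check with normalized figures (relative values rather than dollar prices; the 18 percent premium and the roughly 30 percent diesel price rise are the article’s numbers):

```python
# Per-mile costs normalized to the 2020 diesel price.
diesel_2020 = 1.00
lohc_onboard = 1.18 * diesel_2020   # 18% premium over 2020 diesel
diesel_2023 = 1.30 * diesel_2020    # spring-2023 diesel ~30% higher

print(lohc_onboard < diesel_2023)
```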
    Assuming the 2023 diesel price, the LOHC option is actually cheaper than using diesel.

    Both the cost and emissions outcomes are affected by another assumption: the use of “blue hydrogen,” which is hydrogen produced from natural gas with carbon capture and storage. Another option is to assume the use of “green hydrogen,” which is hydrogen produced using electricity generated from renewable sources, such as wind and solar. Green hydrogen is much more expensive than blue hydrogen, so assuming green hydrogen would increase the costs dramatically.

    If in the future the price of green hydrogen drops, the researchers’ proposed plan would shift to green hydrogen — and then the decline in emissions would no longer be 71 percent but rather close to 100 percent. There would be almost no emissions associated with the researchers’ proposed plan for using LOHCs with onboard hydrogen release.

    Comparing the options on cost and emissions

    To compare the options, Moreno Sader prepared bar charts showing the per-mile cost of shipping by truck in the United States and the CO2 emissions that result using each of the fuels and approaches discussed above: diesel fuel, battery electric, hydrogen as a cryogenic liquid or compressed gas, and LOHC with onboard hydrogen release. The LOHC strategy with onboard dehydrogenation looked promising on both the cost and the emissions charts. In addition to such quantitative measures, the researchers believe that their strategy addresses two other, less-obvious challenges in finding a less-polluting fuel for long-haul trucks.

    First, the introduction of the new fuel and trucks to use it must not disrupt the current freight-delivery setup. “You have to keep the old trucks running while you’re introducing the new ones,” notes Green. “You cannot have even a day when the trucks aren’t running because it’d be like the end of the economy.
    Your supermarket shelves would all be empty; your factories wouldn’t be able to run.” The researchers’ plan would be completely compatible with the existing diesel supply infrastructure and would require relatively minor retrofits to today’s long-haul trucks, so the current supply chains would continue to operate while the new fuel and retrofitted trucks are introduced.

    Second, the strategy has the potential to be adopted globally. Long-haul trucking is important in other parts of the world, and Moreno Sader thinks that “making this approach a reality is going to have a lot of impact, not only in the United States but also in other countries,” including her own country of origin, Colombia. “This is something I think about all the time.” The approach is compatible with the current diesel infrastructure, so the only requirement for adoption is to build the chemical hydrogenation plant. “And I think the capital expenditure related to that will be less than the cost of building a new fuel-supply infrastructure throughout the country,” says Moreno Sader.

    Testing in the lab

    “We’ve done a lot of simulations and calculations to show that this is a great idea,” notes Biswas. “But there’s only so far that math can go to convince people.” The next step is to demonstrate their concept in the lab.

    To that end, the researchers are now assembling all the core components of the onboard hydrogen-release reactor as well as the heat-integration unit that’s key to transferring heat from the engine exhaust to the hydrogen-release reactor. They estimate that this spring they’ll be ready to demonstrate their ability to release hydrogen and confirm the rate at which it’s formed.
    And — guided by their modeling work — they’ll be able to fine-tune critical components for maximum efficiency and best performance.

    The next step will be to add an appropriate engine, specially equipped with sensors to provide the critical readings they need to optimize the performance of all their core components together. By the end of 2024, the researchers hope to achieve their goal: the first experimental demonstration of a power-dense, robust onboard hydrogen-release system with highly efficient heat integration.

    In the meantime, they believe that results from their work to date should help spread the word, bringing their novel approach to the attention of other researchers and experts in the trucking industry who are now searching for ways to decarbonize long-haul trucking.

    Financial support for development of the representative drive cycle and the diesel benchmarks as well as the analysis of the battery electric option was provided by the MIT Mobility Systems Center of the MIT Energy Initiative. Analysis of LOHC-powered trucks with onboard dehydrogenation was supported by the MIT Climate and Sustainability Consortium. Sayandeep Biswas is supported by a fellowship from the Martin Family Society of Fellows for Sustainability, and Kariana Moreno Sader received fellowship funding from MathWorks through the MIT School of Science.

  • in

    Microscopic defects in ice influence how massive glaciers flow, study shows

    As they seep and calve into the sea, melting glaciers and ice sheets are raising global water levels at unprecedented rates. To predict and prepare for future sea-level rise, scientists need a better understanding of how fast glaciers melt and what influences their flow.

    Now, a study by MIT scientists offers a new picture of glacier flow, based on microscopic deformation in the ice. The results show that a glacier’s flow depends strongly on how microscopic defects move through the ice.

    The researchers found they could estimate a glacier’s flow based on whether the ice is prone to microscopic defects of one kind versus another. They used this relationship between micro- and macro-scale deformation to develop a new model for how glaciers flow. With the new model, they mapped the flow of ice in locations across the Antarctic Ice Sheet.

    Contrary to conventional wisdom, they found, the ice sheet is not a monolith but instead is more varied in where and how it flows in response to warming-driven stresses. The study “dramatically alters the climate conditions under which marine ice sheets may become unstable and drive rapid rates of sea-level rise,” the researchers write in their paper.

    “This study really shows the effect of microscale processes on macroscale behavior,” says Meghana Ranganathan PhD ’22, who led the study as a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS) and is now a postdoc at Georgia Tech. “These mechanisms happen at the scale of water molecules and ultimately can affect the stability of the West Antarctic Ice Sheet.”

    “Broadly speaking, glaciers are accelerating, and there are a lot of variants around that,” adds co-author and EAPS Associate Professor Brent Minchew. “This is the first study that takes a step from the laboratory to the ice sheets and starts evaluating what the stability of ice is in the natural environment.
    That will ultimately feed into our understanding of the probability of catastrophic sea-level rise.”

    Ranganathan and Minchew’s study appears this week in the Proceedings of the National Academy of Sciences.

    Micro flow

    Glacier flow describes the movement of ice from the peak of a glacier, or the center of an ice sheet, down to the edges, where the ice then breaks off and melts into the ocean — a normally slow process that contributes over time to raising the world’s average sea level.

    In recent years, the oceans have risen at unprecedented rates, driven by global warming and the accelerated melting of glaciers and ice sheets. While the loss of polar ice is known to be a major contributor to sea-level rise, it is also the biggest uncertainty when it comes to making predictions.

    “Part of it’s a scaling problem,” Ranganathan explains. “A lot of the fundamental mechanisms that cause ice to flow happen at a really small scale that we can’t see. We wanted to pin down exactly what these microphysical processes are that govern ice flow, which hasn’t been represented in models of sea-level change.”

    The team’s new study builds on previous experiments from the early 2000s by geologists at the University of Minnesota, who studied how small chips of ice deform when physically stressed and compressed. Their work revealed two microscopic mechanisms by which ice can flow: “dislocation creep,” where molecule-sized cracks migrate through the ice, and “grain boundary sliding,” where individual ice crystals slide against each other, causing the boundary between them to move through the ice.

    The geologists found that ice’s sensitivity to stress, or how likely it is to flow, depends on which of the two mechanisms is dominant.
    Specifically, ice is more sensitive to stress when microscopic defects occur via dislocation creep rather than grain boundary sliding.

    Ranganathan and Minchew realized that those findings at the microscopic level could redefine how ice flows at much larger, glacial scales.

    “Current models for sea-level rise assume a single value for the sensitivity of ice to stress and hold this value constant across an entire ice sheet,” Ranganathan explains. “What these experiments showed was that actually, there’s quite a bit of variability in ice sensitivity, due to which of these mechanisms is at play.”

    A mapping match

    For their new study, the MIT team took insights from the previous experiments and developed a model to estimate an icy region’s sensitivity to stress, which directly relates to how likely that ice is to flow. The model takes in information such as the ambient temperature, the average size of ice crystals, and the estimated mass of ice in the region, and calculates how much the ice is deforming by dislocation creep versus grain boundary sliding. Depending on which of the two mechanisms is dominant, the model then estimates the region’s sensitivity to stress.

    The scientists fed into the model actual observations from various locations across the Antarctic Ice Sheet, where others had previously recorded data such as the local height of ice, the size of ice crystals, and the ambient temperature. Based on the model’s estimates, the team generated a map of ice sensitivity to stress across the Antarctic Ice Sheet. When they compared this map to satellite and field measurements taken of the ice sheet over time, they observed a close match, suggesting that the model could be used to accurately predict how glaciers and ice sheets will flow in the future.

    “As climate change starts to thin glaciers, that could affect the sensitivity of ice to stress,” Ranganathan says.
    “The instabilities that we expect in Antarctica could be very different, and we can now capture those differences, using this model.”
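    The study’s central idea — that whichever microscale mechanism deforms the ice fastest sets the ice’s sensitivity to stress — can be sketched in a few lines. The stress exponents below are typical laboratory flow-law values (dislocation creep is far more stress-sensitive than grain boundary sliding); they are illustrative assumptions rather than the values used in the paper, and the sketch omits the temperature and grain-size dependence a real model includes:

```python
# Pick the effective stress exponent from whichever deformation mechanism
# produces the faster strain rate at a given stress. Prefactors and
# exponents are placeholders, not the study's fitted values.
N_DISLOCATION = 4.0   # dislocation creep: highly stress-sensitive
N_GBS = 1.8           # grain boundary sliding: less stress-sensitive

def strain_rate(stress, prefactor, n):
    """Power-law creep: strain rate ~ prefactor * stress**n."""
    return prefactor * stress ** n

def effective_exponent(stress, a_disl=1.0, a_gbs=1.0):
    """Stress sensitivity set by the dominant (faster) mechanism."""
    disl = strain_rate(stress, a_disl, N_DISLOCATION)
    gbs = strain_rate(stress, a_gbs, N_GBS)
    return N_DISLOCATION if disl > gbs else N_GBS

# With equal prefactors, grain boundary sliding dominates at low stress
# and dislocation creep takes over at high stress.
print(effective_exponent(0.05), effective_exponent(2.0))
```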

  • in

    Getting to systemic sustainability

    Add up the commitments from the Paris Agreement, the Glasgow Climate Pact, and various commitments made by cities, countries, and businesses, and the world would be able to hold the global average temperature increase to 1.9 degrees Celsius above preindustrial levels, says Ani Dasgupta, the president and chief executive officer of the World Resources Institute (WRI).

    While that is well above the 1.5 C threshold that many scientists agree would limit the most severe impacts of climate change, it is below the 2.0 C threshold beyond which even more catastrophic impacts could occur, such as the collapse of ice sheets and a 30-foot rise in sea levels.

    However, Dasgupta notes, actions have so far not matched up with commitments.

    “There’s a huge gap between commitment and outcomes,” Dasgupta said during his talk, “Energizing the global transition,” at the 2024 Earth Day Colloquium co-hosted by the MIT Energy Initiative and MIT Department of Earth, Atmospheric and Planetary Sciences, and sponsored by the Climate Nucleus.

    Dasgupta noted that oil companies did $6 trillion worth of business across the world last year — $1 trillion more than they were planning. About 7 percent of the world’s remaining tropical forests were destroyed during that same time, he added, and global inequality grew even worse than before.

    “None of these things were illegal, because the system we have today produces these outcomes,” he said. “My point is that it’s not one thing that needs to change. The whole system needs to change.”

    People, climate, and nature

    Dasgupta, who previously held positions in nonprofits in India and at the World Bank, is a recognized leader in sustainable cities, poverty alleviation, and building cultures of inclusion.
    Under his leadership, WRI, a global research nonprofit that studies sustainable practices with the goal of fundamentally transforming the world’s food, land and water, energy, and cities, adopted a new five-year strategy called “Getting the Transition Right for People, Nature, and Climate 2023-2027.” It focuses on creating new economic opportunities to meet people’s essential needs, restore nature, and rapidly lower emissions, while building resilient communities. In fact, during his talk, Dasgupta said that his organization has moved away from talking about initiatives in terms of their impact on greenhouse gas emissions — instead taking a more holistic view of sustainability.

    “There is no net zero without nature,” Dasgupta said. He showed a slide with a graphic illustrating potential progress toward net-zero goals. “If nature gets diminished, that chart becomes even steeper. It’s very steep right now, but natural systems absorb carbon dioxide. So, if the natural systems keep getting destroyed, that curve becomes harder and harder.”

    A focus on people is necessary, Dasgupta said, in part because of the unequal climate impacts that the rich and the poor are likely to face in the coming years. “If you made it to this room, you will not be impacted by climate change,” he said. “You have resources to figure out what to do about it. The people who get impacted are people who don’t have resources. It is immensely unfair. Our belief is, if we don’t do climate policy that helps people directly, we won’t be able to make progress.”

    Where to start?

    Although Dasgupta stressed that systemic change is needed to bring carbon emissions in line with long-term climate goals, he made the case that it is unrealistic to implement this change around the globe all at once. “This transition will not happen in 196 countries at the same time,” he said. “The question is, how do we get to the tipping point so that it happens at scale?
    We’ve worked the past few years to ask the question, what is it you need to do to create this tipping point for change?”

    Analysts at WRI looked for countries that are large producers of carbon, those with substantial tropical forest cover, and those with large quantities of people living in poverty. “We basically tried to draw a map of, where are the biggest challenges for climate change?” Dasgupta said.

    That map features a relative handful of countries, including the United States, Mexico, China, Brazil, South Africa, India, and Indonesia. Dasgupta said, “Our argument is that, if we could figure out and focus all our efforts to help these countries transition, that will create a ripple effect — of understanding technology, understanding the market, understanding capacity, and understanding the politics of change that will unleash how the rest of these regions will bring change.”

    Spotlight on the subcontinent

    Dasgupta used one of these countries, his native India, to illustrate the nuanced challenges and opportunities presented by various markets around the globe. In India, he noted, there are around 3 million projected jobs tied to the country’s transition to renewable energy. However, that number is dwarfed by the 10 to 12 million jobs per year the Indian economy needs to create simply to keep up with population growth.

    “Every developing country faces this question — how to keep growing in a way that reduces their carbon footprint,” Dasgupta said.

    Five states in India worked with WRI to pool their buying power and procure 5,000 electric buses, saving 60 percent of the cost as a result. Over the next two decades, Dasgupta said, the fleet of electric buses in those five states is expected to increase to 800,000.

    In the Indian state of Rajasthan, Dasgupta said, 59 percent of power already comes from solar energy. At times, Rajasthan produces more solar than it can use, and officials are exploring ways to either store the excess energy or sell it to other states.
    But in another state, Jharkhand, where much of the country’s coal is sourced, only 5 percent of power comes from solar. Officials in Jharkhand have reached out to WRI to discuss how to transition their energy economy, as they recognize that coal will fall out of favor in the future, Dasgupta said.

    “The complexities of the transition are enormous in a country this big,” Dasgupta said. “This is true in most large countries.”

    The road ahead

    Despite the challenges ahead, the colloquium was also marked by notes of optimism. In his opening remarks, Robert Stoner, the founding director of the MIT Tata Center for Technology and Design, pointed out how much progress has been made on environmental cleanup since the first Earth Day in 1970. “The world was a very different, much dirtier, place in many ways,” Stoner said. “Our air was a mess, our waterways were a mess, and it was beginning to be noticeable. Since then, Earth Day has become an important part of the fabric of American and global society.”

    While Dasgupta said that the world presently lacks the “orchestration” among various stakeholders needed to bring climate change under control, he expressed hope that collaboration in key countries could accelerate progress.

    “I strongly believe that what we need is a very different way of collaborating radically — across organizations like yours, organizations like ours, businesses, and governments,” Dasgupta said. “Otherwise, this transition will not happen at the scale and speed we need.”

  • in

    School of Engineering welcomes new faculty

The School of Engineering welcomes 15 new faculty members across six of its academic departments. This new cohort of faculty members, who have either recently started their roles at MIT or will start within the next year, conducts research across a diverse range of disciplines.

Many of these new faculty specialize in research that intersects with multiple fields. In addition to positions in the School of Engineering, a number of these faculty have positions at other units across MIT. Faculty with appointments in the Department of Electrical Engineering and Computer Science (EECS) report to both the School of Engineering and the MIT Stephen A. Schwarzman College of Computing. This year, new faculty also have joint appointments between the School of Engineering and the School of Humanities, Arts, and Social Sciences and the School of Science.

“I am delighted to welcome this cohort of talented new faculty to the School of Engineering,” says Anantha Chandrakasan, chief innovation and strategy officer, dean of engineering, and Vannevar Bush Professor of Electrical Engineering and Computer Science. “I am particularly struck by the interdisciplinary approach many of these new faculty take in their research. They are working in areas that are poised to have tremendous impact. I look forward to seeing them grow as researchers and educators.”

The new engineering faculty include:

Stephen Bates joined the Department of Electrical Engineering and Computer Science as an assistant professor in September 2023. He is also a member of the Laboratory for Information and Decision Systems (LIDS). Bates uses data and AI for reliable decision-making in the presence of uncertainty. In particular, he develops tools for statistical inference with AI models, data impacted by strategic behavior, and settings with distribution shift. Bates also works on applications in life sciences and sustainability. He previously worked as a postdoc in the Statistics and EECS departments at the University of California at Berkeley (UC Berkeley). Bates received a BS in statistics and mathematics from Harvard University and a PhD from Stanford University.

Abigail Bodner joined the Department of EECS and Department of Earth, Atmospheric and Planetary Sciences as an assistant professor in January. She is also a member of LIDS. Bodner’s research interests span climate, physical oceanography, geophysical fluid dynamics, and turbulence. Previously, she worked as a Simons Junior Fellow at the Courant Institute of Mathematical Sciences at New York University. Bodner received her BS in geophysics and mathematics and MS in geophysics from Tel Aviv University, and her SM in applied mathematics and PhD from Brown University.

Andreea Bobu ’17 will join the Department of Aeronautics and Astronautics as an assistant professor in July. Her research sits at the intersection of robotics, mathematical human modeling, and deep learning. Previously, she was a research scientist at the Boston Dynamics AI Institute, focusing on how robots and humans can efficiently arrive at shared representations of their tasks for more seamless and reliable interactions. Bobu earned a BS in computer science and engineering from MIT and a PhD in electrical engineering and computer science from UC Berkeley.

Suraj Cheema will join the Department of Materials Science and Engineering, with a joint appointment in the Department of EECS, as an assistant professor in July. His research explores atomic-scale engineering of electronic materials to tackle challenges related to energy consumption, storage, and generation, aiming for more sustainable microelectronics. This spans computing and energy technologies via integrated ferroelectric devices. He previously worked as a postdoc at UC Berkeley. Cheema earned a BS in applied physics and applied mathematics from Columbia University and a PhD in materials science and engineering from UC Berkeley.

Samantha Coday will join the Department of EECS as an assistant professor in July. She will also be a member of the MIT Research Laboratory of Electronics. Her research interests include ultra-dense power converters enabling renewable energy integration, hybrid electric aircraft, and future space exploration. To enable high-performance converters for these critical applications, her research focuses on the optimization, design, and control of hybrid switched-capacitor converters. Coday earned a BS in electrical engineering and mathematics from Southern Methodist University and an MS and a PhD in electrical engineering and computer science from UC Berkeley.

Mitchell Gordon will join the Department of EECS as an assistant professor in July. He will also be a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). In his research, Gordon designs interactive systems and evaluation approaches that bridge principles of human-computer interaction with the realities of machine learning. He currently works as a postdoc at the University of Washington. Gordon received a BS from the University of Rochester, and an MS and PhD from Stanford University, all in computer science.

Kaiming He joined the Department of EECS as an associate professor in February. He will also be a member of CSAIL. His research interests cover a wide range of topics in computer vision and deep learning. He is currently focused on building computer models that can learn representations and develop intelligence from and for the complex world. Long term, he hopes to augment human intelligence with improved artificial intelligence. Before joining MIT, he was a research scientist at Facebook AI. He earned a BS from Tsinghua University and a PhD from the Chinese University of Hong Kong.

Anna Huang SM ’08 will join the departments of EECS and Music and Theater Arts as an assistant professor in September. She will help develop graduate programming focused on music technology. Previously, she spent eight years with Magenta at Google Brain and DeepMind, spearheading efforts in generative modeling, reinforcement learning, and human-computer interaction to support human-AI partnerships in music-making. She is the creator of Music Transformer and Coconet (which powered the Bach Google Doodle). She was a judge and organizer for the AI Song Contest. Huang holds a Canada CIFAR AI Chair at Mila, a BM in music composition and a BS in computer science from the University of Southern California, an MS from the MIT Media Lab, and a PhD from Harvard University.

Yael Kalai PhD ’06 will join the Department of EECS as a professor in September. She is also a member of CSAIL. Her research interests include cryptography, the theory of computation, and security and privacy. Kalai currently focuses on both the theoretical and real-world applications of cryptography, including work on succinct and easily verifiable non-interactive proofs. She received her bachelor’s degree from the Hebrew University of Jerusalem, a master’s degree from the Weizmann Institute of Science, and a PhD from MIT.

Sendhil Mullainathan will join the departments of EECS and Economics as a professor in July. His research uses machine learning to understand complex problems in human behavior, social policy, and medicine. Previously, Mullainathan spent five years at MIT before joining the faculty at Harvard in 2004, and then the University of Chicago in 2018. He received his BA in computer science, mathematics, and economics from Cornell University and his PhD from Harvard University.

Alex Rives will join the Department of EECS as an assistant professor in September, with a core membership in the Broad Institute of MIT and Harvard. In his research, Rives is focused on AI for scientific understanding, discovery, and design for biology. Rives worked with Meta as a New York University graduate student, where he founded and led the Evolutionary Scale Modeling team that developed large language models for proteins. Rives received his BS in philosophy and biology from Yale University and is completing his PhD in computer science at NYU.

Sungho Shin will join the Department of Chemical Engineering as an assistant professor in July. His research interests include control theory, optimization algorithms, high-performance computing, and their applications to decision-making in complex systems, such as energy infrastructures. Shin is a postdoc in the Mathematics and Computer Science Division at Argonne National Laboratory. He received a BS in mathematics and chemical engineering from Seoul National University and a PhD in chemical engineering from the University of Wisconsin-Madison.

Jessica Stark joined the Department of Biological Engineering as an assistant professor in January. In her research, Stark is developing technologies to realize the largely untapped potential of cell-surface sugars, called glycans, for immunological discovery and immunotherapy. Previously, Stark was an American Cancer Society postdoc at Stanford University. She earned a BS in chemical and biomolecular engineering from Cornell University and a PhD in chemical and biological engineering from Northwestern University.

Thomas John “T.J.” Wallin joined the Department of Materials Science and Engineering as an assistant professor in January. As a researcher, Wallin’s interests lie in advanced manufacturing of functional soft matter, with an emphasis on soft wearable technologies and their applications in human-computer interfaces. Previously, he was a research scientist at Meta’s Reality Labs Research, working on their haptic interaction team. Wallin earned a BS in physics and chemistry from the College of William and Mary, and an MS and PhD in materials science and engineering from Cornell University.

Gioele Zardini joined the Department of Civil and Environmental Engineering as an assistant professor in September. He will also join LIDS and the Institute for Data, Systems, and Society. Driven by societal challenges, Zardini’s research interests include the co-design of sociotechnical systems, compositionality in engineering, applied category theory, decision and control, optimization, and game theory, with society-critical applications to intelligent transportation systems, autonomy, and complex networks and infrastructures. He received his BS, MS, and PhD in mechanical engineering with a focus on robotics, systems, and control from ETH Zurich, and spent time at MIT, Stanford University, and Motional.