More stories

  • MIT gears up to transform manufacturing

    “Manufacturing is the engine of society, and it is the backbone of robust, resilient economies,” says John Hart, head of MIT’s Department of Mechanical Engineering (MechE) and faculty co-director of the MIT Initiative for New Manufacturing (INM). “With manufacturing a lively topic in today’s news, there’s a renewed appreciation and understanding of the importance of manufacturing to innovation, to economic and national security, and to daily lives.”

    Launched this May, INM will “help create a transformation of manufacturing through new technology, through development of talent, and through an understanding of how to scale manufacturing in a way that imparts higher productivity and resilience, drives adoption of new technologies, and creates good jobs,” Hart says.

    INM is one of MIT’s strategic initiatives and builds on the successful three-year-old Manufacturing@MIT program. “It’s a recognition by MIT that manufacturing is an Institute-wide theme and an Institute-wide priority, and that manufacturing connects faculty and students across campus,” says Hart. Alongside Hart, INM’s faculty co-directors are Institute Professor Suzanne Berger and Chris Love, professor of chemical engineering.

    The initiative is pursuing four main themes: reimagining manufacturing technologies and systems, elevating the productivity and human experience of manufacturing, scaling up new manufacturing, and transforming the manufacturing base.

    Breaking manufacturing barriers for corporations

    Amgen, Autodesk, Flex, GE Vernova, PTC, Sanofi, and Siemens are founding members of INM’s industry consortium. These industry partners will work closely with MIT faculty, researchers, and students across many aspects of manufacturing-related research, both in broad-scale initiatives and in particular areas of shared interest.
    Membership requires a minimum three-year commitment of $500,000 a year to manufacturing-related activities at MIT, including the INM membership fee of $275,000 per year, which supports several core activities that engage the industry members.

    One major thrust for INM industry collaboration is the deployment and adoption of AI and automation in manufacturing. This effort will include seed research projects at MIT, collaborative case studies, and shared strategy development.

    INM also offers companies participation in the MIT-wide New Manufacturing Research effort, which is studying the trajectories of specific manufacturing industries and examining cross-cutting themes such as technology and financing.

    Additionally, INM will concentrate on education for all professions in manufacturing, with alliances bringing together corporations, community colleges, government agencies, and other partners. “We’ll scale our curriculum to broader audiences, from aspiring manufacturing workers and aspiring production line supervisors all the way up to engineers and executives,” says Hart.

    In workforce training, INM will collaborate with companies broadly to help understand the challenges and frame its overall workforce agenda, and with individual firms on specific challenges, such as acquiring suitably prepared employees for a new factory.

    Importantly, industry partners will also engage directly with students. Founding member Flex, for instance, hosted MIT researchers and students at the Flex Institute of Technology in Sorocaba, Brazil, developing new solutions for electronics manufacturing.

    “History shows that you need to innovate in manufacturing alongside the innovation in products,” Hart comments. “At MIT, as more students take classes in manufacturing, they’ll think more about key manufacturing issues as they decide what research problems they want to solve, or what choices they make as they prototype their devices.
    The same is true for industry — companies that operate at the frontier of manufacturing, whether through internal capabilities or their supply chains, are positioned to be on the frontier of product innovation and overall growth.”

    “We’ll have an opportunity to bring manufacturing upstream to the early stage of research, designing new processes and new devices with scalability in mind,” he says.

    Additionally, MIT expects to open new manufacturing-related labs and to further broaden cooperation with industry at existing shared facilities, such as MIT.nano. Hart says that facilities will also invite tighter collaborations with corporations — not just providing advanced equipment, but working jointly on, say, new technologies for weaving textiles, or speeding up battery manufacturing.

    Homing in on the United States

    INM is a global project that brings a particular focus on the United States, which remains the world’s second-largest manufacturing economy, but has suffered a significant decline in manufacturing employment and innovation.

    One key to reversing this trend and reinvigorating the U.S. manufacturing base is advocacy for manufacturing’s critical role in society and the career opportunities it offers.

    “No one really disputes the importance of manufacturing,” Hart says. “But we need to elevate interest in manufacturing as a rewarding career, from production workers to manufacturing engineers and leaders, through advocacy, education programs, and buy-in from industry, government, and academia.”

    MIT is in a unique position to convene industry, academic, and government stakeholders in manufacturing to work together on this vital issue, he points out.

    Moreover, in times of radical and rapid changes in manufacturing, “we need to focus on deploying new technologies into factories and supply chains,” Hart says. “Technology is not all of the solution, but for the U.S.
    to expand our manufacturing base, we need to do it with technology as a key enabler, embracing companies of all sizes, including small and medium enterprises.”

    “As AI becomes more capable, and automation becomes more flexible and more available, these are key building blocks upon which you can address manufacturing challenges,” he says. “AI and automation offer new accelerated ways to develop, deploy, and monitor production processes, which present a huge opportunity and, in some cases, a necessity.”

    “While manufacturing is always a combination of old technology, new technology, established practice, and new ways of thinking, digital technology gives manufacturers an opportunity to leapfrog competitors,” Hart says. “That’s very, very powerful for the U.S. and any company, or country, that aims to create differentiated capabilities.”

    Fortunately, in recent years, investors have increasingly bought into new manufacturing in the United States. “They see the opportunity to re-industrialize, to build the factories and production systems of the future,” Hart says.

    “That said, building new manufacturing is capital-intensive, and takes time,” he adds. “So that’s another area where it’s important to convene stakeholders and to think about how startups and growth-stage companies build their capital portfolios, how large industry can support an ecosystem of small businesses and young companies, and how to develop talent to support those growing companies.”

    All these concerns and opportunities in the manufacturing ecosystem play to MIT’s strengths. “MIT’s DNA of cross-disciplinary collaboration and working with industry can let us create a lot of impact,” Hart emphasizes. “We can understand the practical challenges. We can also explore breakthrough ideas in research and cultivate successful outcomes, all the way to new companies and partnerships. Sometimes those are seen as disparate approaches, but we like to bring them together.”

  • Would you like that coffee with iron?

    Around the world, about 2 billion people suffer from iron deficiency, which can lead to anemia, impaired brain development in children, and increased infant mortality.

    To combat that problem, MIT researchers have come up with a new way to fortify foods and beverages with iron, using small crystalline particles. These particles, known as metal-organic frameworks, could be sprinkled on food, added to staple foods such as bread, or incorporated into drinks like coffee and tea.

    “We’re creating a solution that can be seamlessly added to staple foods across different regions,” says Ana Jaklenec, a principal investigator at MIT’s Koch Institute for Integrative Cancer Research. “What’s considered a staple in Senegal isn’t the same as in India or the U.S., so our goal was to develop something that doesn’t react with the food itself. That way, we don’t have to reformulate for every context — it can be incorporated into a wide range of foods and beverages without compromise.”

    The particles designed in this study can also carry iodine, another critical nutrient, and could be adapted to carry other important minerals such as zinc, calcium, or magnesium.

    “We are very excited about this new approach and what we believe is a novel application of metal-organic frameworks to potentially advance nutrition, particularly in the developing world,” says Robert Langer, the David H. Koch Institute Professor at MIT and a member of the Koch Institute.

    Jaklenec and Langer are the senior authors of the study, which appears today in the journal Matter. MIT postdoc Xin Yang and Linzixuan (Rhoda) Zhang PhD ’24 are the lead authors of the paper.

    Iron stabilization

    Food fortification can be a successful way to combat nutrient deficiencies, but this approach is often challenging because many nutrients are fragile and break down during storage or cooking.
    When iron is added to foods, it can react with other molecules in the food, giving the food a metallic taste.

    In previous work, Jaklenec’s lab has shown that encapsulating nutrients in polymers can protect them from breaking down or reacting with other molecules. In a small clinical trial, the researchers found that women who ate bread fortified with encapsulated iron were able to absorb the iron from the food.

    However, one drawback to this approach is that the polymer adds a lot of bulk to the material, limiting the amount of iron or other nutrients that end up in the food.

    “Encapsulating iron in polymers significantly improves its stability and reactivity, making it easier to add to food,” Jaklenec says. “But to be effective, it requires a substantial amount of polymer. That limits how much iron you can deliver in a typical serving, making it difficult to meet daily nutritional targets through fortified foods alone.”

    To overcome that challenge, Yang came up with a new idea: Instead of encapsulating iron in a polymer, they could use iron itself as a building block for a crystalline particle known as a metal-organic framework, or MOF (pronounced “moff”).

    MOFs consist of metal atoms joined by organic molecules called ligands to create a rigid, cage-like structure. Depending on the combination of metals and ligands chosen, they can be used for a wide variety of applications.

    “We thought maybe we could synthesize a metal-organic framework with food-grade ligands and food-grade micronutrients,” Yang says. “Metal-organic frameworks have very high porosity, so they can load a lot of cargo.
    That’s why we thought we could leverage this platform to make a new metal-organic framework that could be used in the food industry.”

    In this case, the researchers designed a MOF consisting of iron bound to a ligand called fumaric acid, which is often used as a food additive to enhance flavor or help preserve food.

    This structure prevents iron from reacting with polyphenols — compounds commonly found in foods such as whole grains and nuts, as well as coffee and tea. When iron does react with those compounds, it forms a metal-polyphenol complex that cannot be absorbed by the body.

    The MOFs’ structure also allows them to remain stable until they reach an acidic environment, such as the stomach, where they break down and release their iron payload.

    Double-fortified salts

    The researchers also decided to include iodine in their MOF particle, which they call NuMOF. Iodized salt has been very successful at preventing iodine deficiency, and many efforts are now underway to create “double-fortified salts” that would also contain iron.

    Delivering these nutrients together has proven difficult because iron and iodine can react with each other, making each one less likely to be absorbed by the body. In this study, the MIT team showed that once they formed their iron-containing MOF particles, they could load them with iodine in a way that the iron and iodine do not react with each other.

    In tests of the particles’ stability, the researchers found that the NuMOFs could withstand long-term storage, high heat and humidity, and boiling water. Throughout these tests, the particles maintained their structure. When the researchers then fed the particles to mice, they found that both iron and iodine became available in the bloodstream within several hours of NuMOF consumption.

    The researchers are now working on launching a company that is developing coffee and other beverages fortified with iron and iodine.
    They also hope to continue working toward a double-fortified salt that could be consumed on its own or incorporated into staple food products.

    The research was partially supported by J-WAFS Fellowships for Water and Food Solutions. Other authors of the paper include Fangzheng Chen, Wenhao Gao, Zhiling Zheng, Tian Wang, Erika Yan Wang, Behnaz Eshaghi, and Sydney MacDonald.

  • Confronting the AI/energy conundrum

    The explosive growth of AI-powered computing centers is creating an unprecedented surge in electricity demand that threatens to overwhelm power grids and derail climate goals. At the same time, artificial intelligence technologies could revolutionize energy systems, accelerating the transition to clean power.

    “We’re at a cusp of potentially gigantic change throughout the economy,” said William H. Green, director of the MIT Energy Initiative (MITEI) and Hoyt C. Hottel Professor in the MIT Department of Chemical Engineering, at MITEI’s Spring Symposium, “AI and energy: Peril and promise,” held on May 13. The event brought together experts from industry, academia, and government to explore solutions to what Green described as both “local problems with electric supply and meeting our clean energy targets” while seeking to “reap the benefits of AI without some of the harms.” The challenge of data center energy demand and the potential benefits of AI to the energy transition are research priorities for MITEI.

    AI’s startling energy demands

    From the start, the symposium highlighted sobering statistics about AI’s appetite for electricity. After decades of flat electricity demand in the United States, computing centers now consume approximately 4 percent of the nation’s electricity. Although there is great uncertainty, some projections suggest this demand could rise to 12-15 percent by 2030, largely driven by artificial intelligence applications.

    Vijay Gadepally, senior scientist at MIT’s Lincoln Laboratory, emphasized the scale of AI’s consumption. “The power required for sustaining some of these large models is doubling almost every three months,” he noted.
    “A single ChatGPT conversation uses as much electricity as charging your phone, and generating an image consumes about a bottle of water for cooling.”

    Facilities requiring 50 to 100 megawatts of power are emerging rapidly across the United States and globally, driven by both casual and institutional research needs relying on large language models such as ChatGPT and Gemini. Gadepally cited congressional testimony by Sam Altman, CEO of OpenAI, highlighting how fundamental this relationship has become: “The cost of intelligence, the cost of AI, will converge to the cost of energy.”

    “The energy demands of AI are a significant challenge, but we also have an opportunity to harness these vast computational capabilities to contribute to climate change solutions,” said Evelyn Wang, MIT vice president for energy and climate and former director of the Advanced Research Projects Agency-Energy (ARPA-E) at the U.S. Department of Energy.

    Wang also noted that innovations developed for AI and data centers — such as efficiency, cooling technologies, and clean-power solutions — could have broad applications beyond computing facilities themselves.

    Strategies for clean energy solutions

    The symposium explored multiple pathways to address the AI-energy challenge. Some panelists presented models suggesting that while artificial intelligence may increase emissions in the short term, its optimization capabilities could enable substantial emissions reductions after 2030 through more efficient power systems and accelerated clean technology development.

    Research shows regional variations in the cost of powering computing centers with clean electricity, according to Emre Gençer, co-founder and CEO of Sesame Sustainability and former MITEI principal research scientist. Gençer’s analysis revealed that the central United States offers considerably lower costs due to complementary solar and wind resources.
    However, achieving zero-emission power would require massive battery deployments — five to 10 times more than moderate carbon scenarios — driving costs two to three times higher.

    “If we want to do zero emissions with reliable power, we need technologies other than renewables and batteries, which will be too expensive,” Gençer said. He pointed to “long-duration storage technologies, small modular reactors, geothermal, or hybrid approaches” as necessary complements.

    Because of data center energy demand, there is renewed interest in nuclear power, noted Kathryn Biegel, manager of R&D and corporate strategy at Constellation Energy, adding that her company is restarting the reactor at the former Three Mile Island site, now called the “Crane Clean Energy Center,” to meet this demand. “The data center space has become a major, major priority for Constellation,” she said, emphasizing how data centers’ needs for both reliability and carbon-free electricity are reshaping the power industry.

    Can AI accelerate the energy transition?

    Artificial intelligence could dramatically improve power systems, according to Priya Donti, assistant professor and the Silverman Family Career Development Professor in MIT’s Department of Electrical Engineering and Computer Science and the Laboratory for Information and Decision Systems. She showcased how AI can accelerate power grid optimization by embedding physics-based constraints into neural networks, potentially solving complex power flow problems at “10 times, or even greater, speed compared to your traditional models.”

    AI is already reducing carbon emissions, according to examples shared by Antonia Gawel, global director of sustainability and partnerships at Google. Google Maps’ fuel-efficient routing feature has “helped to prevent more than 2.9 million metric tons of GHG [greenhouse gas] emissions since launch, which is the equivalent of taking 650,000 fuel-based cars off the road for a year,” she said.
    Another Google research project uses artificial intelligence to help pilots avoid creating contrails, which represent about 1 percent of global warming impact.

    AI’s potential to speed materials discovery for power applications was highlighted by Rafael Gómez-Bombarelli, the Paul M. Cook Career Development Associate Professor in the MIT Department of Materials Science and Engineering. “AI-supervised models can be trained to go from structure to property,” he noted, enabling the development of materials crucial for both computing and efficiency.

    Securing growth with sustainability

    Throughout the symposium, participants grappled with balancing rapid AI deployment against environmental impacts. While AI training receives most attention, Dustin Demetriou, senior technical staff member in sustainability and data center innovation at IBM, quoted a World Economic Forum article suggesting that “80 percent of the environmental footprint is estimated to be due to inferencing.” Demetriou emphasized the need for efficiency across all artificial intelligence applications.

    Jevons’ paradox, where “efficiency gains tend to increase overall resource consumption rather than decrease it,” is another factor to consider, cautioned Emma Strubell, the Raj Reddy Assistant Professor in the Language Technologies Institute in the School of Computer Science at Carnegie Mellon University. Strubell advocated for viewing computing center electricity as a limited resource requiring thoughtful allocation across different applications.

    Several presenters discussed novel approaches for integrating renewable sources with existing grid infrastructure, including potential hybrid solutions that combine clean installations with existing natural gas plants that already have valuable grid connections in place.
    These approaches could provide substantial clean capacity across the United States at reasonable costs while minimizing reliability impacts.

    Navigating the AI-energy paradox

    The symposium highlighted MIT’s central role in developing solutions to the AI-electricity challenge. Green spoke of a new MITEI program on computing centers, power, and computation that will operate alongside the comprehensive spread of MIT Climate Project research. “We’re going to try to tackle a very complicated problem all the way from the power sources through the actual algorithms that deliver value to the customers — in a way that’s going to be acceptable to all the stakeholders and really meet all the needs,” Green said.

    Participants in the symposium were polled about priorities for MIT’s research by Randall Field, MITEI director of research. The real-time results ranked “data center and grid integration issues” as the top priority, followed by “AI for accelerated discovery of advanced materials for energy.”

    In addition, attendees revealed that most view AI’s potential regarding power as a “promise” rather than a “peril,” although a considerable portion remain uncertain about the ultimate impact. When asked about priorities in power supply for computing facilities, half of the respondents selected carbon intensity as their top concern, with reliability and cost following.

  • “Each of us holds a piece of the solution”

    MIT has an unparalleled history of bringing together interdisciplinary teams to solve pressing problems — think of the development of radar during World War II, or leading the international coalition that cracked the code of the human genome — but the challenge of climate change could demand a scale of collaboration unlike any that’s come before at MIT.

    “Solving climate change is not just about new technologies or better models. It’s about forging new partnerships across campus and beyond — between scientists and economists, between architects and data scientists, between policymakers and physicists, between anthropologists and engineers, and more,” MIT Vice President for Energy and Climate Evelyn Wang told an energetic crowd of faculty, students, and staff on May 6. “Each of us holds a piece of the solution — but only together can we see the whole.”

    Undeterred by heavy rain, approximately 300 campus community members filled the atrium in the Tina and Hamid Moghadam Building (Building 55) for a spring gathering hosted by Wang and the Climate Project at MIT. The initiative seeks to direct the full strength of MIT to address climate change, which Wang described as one of the defining challenges of this moment in history — and one of its greatest opportunities.

    “It calls on us to rethink how we power our world, how we build, how we live — and how we work together,” Wang said. “And there is no better place than MIT to lead this kind of bold, integrated effort. Our culture of curiosity, rigor, and relentless experimentation makes us uniquely suited to cross boundaries — to break down silos and build something new.”

    The Climate Project is organized around six missions, thematic areas in which MIT aims to make significant impact, ranging from decarbonizing industry to new policy approaches to designing resilient cities.
    The faculty leaders of these missions posed challenges to the crowd before circulating among attendees to share their perspectives and discuss community questions and ideas.

    Wang and the Climate Project team were joined by a number of research groups, startups, and MIT offices conducting relevant work today on issues related to energy and climate. For example, the MIT Office of Sustainability showcased efforts to use the MIT campus as a living laboratory; MIT spinouts such as Forma Systems, which is developing high-performance, low-carbon building systems, and Addis Energy, which envisions using the earth as a reactor to produce clean ammonia, presented their technologies; and visitors learned about current projects in MIT labs, including DebunkBot, an artificial intelligence-powered chatbot that can persuade people to shift their attitudes about conspiracies, developed by David Rand, the Erwin H. Schell Professor at the MIT Sloan School of Management.

    Benedetto Marelli, an associate professor in the Department of Civil and Environmental Engineering who leads the Wild Cards Mission, said the energy and enthusiasm that filled the room was inspiring — but that the individual conversations were equally valuable.

    “I was especially pleased to see so many students come out. I also spoke with other faculty, talked to staff from across the Institute, and met representatives of external companies interested in collaborating with MIT,” Marelli said. “You could see connections being made all around the room, which is exactly what we need as we build momentum for the Climate Project.”

  • Universal nanosensor unlocks the secrets to plant growth

    Researchers from the Disruptive and Sustainable Technologies for Agricultural Precision (DiSTAP) interdisciplinary research group within the Singapore-MIT Alliance for Research and Technology (SMART) have developed the world’s first near-infrared fluorescent nanosensor capable of real-time, nondestructive, and species-agnostic detection of indole-3-acetic acid (IAA) — the primary bioactive auxin hormone that controls the way plants develop, grow, and respond to stress.

    Auxins, particularly IAA, play a central role in regulating key plant processes such as cell division, elongation, root and shoot development, and response to environmental cues like light, heat, and drought. External factors like light affect how auxin moves within the plant, temperature influences how much is produced, and a lack of water can disrupt hormone balance. When plants cannot effectively regulate auxins, they may not grow well, adapt to changing conditions, or produce as much food.

    Existing IAA detection methods, such as liquid chromatography, require taking samples from the plant — which harms or removes part of it. Conventional methods also measure the effects of IAA rather than detecting it directly, and cannot be used universally across different plant types. In addition, since IAA is a small molecule that cannot be easily tracked in real time, biosensors that contain fluorescent proteins need to be inserted into the plant’s genome to measure auxin, making it emit a fluorescent signal for live imaging.

    SMART’s newly developed nanosensor enables direct, real-time tracking of auxin levels in living plants with high precision. The sensor uses near-infrared imaging to monitor IAA fluctuations noninvasively across tissues like leaves, roots, and cotyledons, and it is capable of bypassing chlorophyll interference to ensure highly reliable readings even in densely pigmented tissues.
    The technology does not require genetic modification and can be integrated with existing agricultural systems — offering a scalable precision tool to advance both crop optimization and fundamental plant physiology research.

    By providing real-time, precise measurements of auxin, the sensor empowers farmers with earlier and more accurate insights into plant health. With these insights and comprehensive data, farmers can make smarter, data-driven decisions on irrigation, nutrient delivery, and pruning, tailored to the plant’s actual needs — ultimately improving crop growth, boosting stress resilience, and increasing yields.

    “We need new technologies to address the problems of food insecurity and climate change worldwide. Auxin is a central growth signal within living plants, and this work gives us a way to tap it to give new information to farmers and researchers,” says Michael Strano, co-lead principal investigator at DiSTAP, Carbon P. Dubbs Professor of Chemical Engineering at MIT, and co-corresponding author of the paper. “The applications are many, including early detection of plant stress, allowing for timely interventions to safeguard crops. For urban and indoor farms, where light, water, and nutrients are already tightly controlled, this sensor can be a valuable tool in fine-tuning growth conditions with even greater precision to optimize yield and sustainability.”

    The research team documented the nanosensor’s development in a paper titled “A Near-Infrared Fluorescent Nanosensor for Direct and Real-Time Measurement of Indole-3-Acetic Acid in Plants,” published in the journal ACS Nano. The sensor comprises single-walled carbon nanotubes wrapped in a specially designed polymer, which enables it to detect IAA through changes in near-infrared fluorescence intensity.
    Successfully tested across multiple species, including Arabidopsis, Nicotiana benthamiana, choy sum, and spinach, the nanosensor can map IAA responses under various environmental conditions such as shade, low light, and heat stress.

    “This sensor builds on DiSTAP’s ongoing work in nanotechnology and the CoPhMoRe technique, which has already been used to develop other sensors that can detect important plant compounds such as gibberellins and hydrogen peroxide. By adapting this approach for IAA, we’re adding to our inventory of novel, precise, and nondestructive tools for monitoring plant health. Eventually, these sensors can be multiplexed, or combined, to monitor a spectrum of plant growth markers for more complete insights into plant physiology,” says Duc Thinh Khong, research scientist at DiSTAP and co-first author of the paper.

    “This small but mighty nanosensor tackles a long-standing challenge in agriculture: the need for a universal, real-time, and noninvasive tool to monitor plant health across various species. Our collaborative achievement not only empowers researchers and farmers to optimize growth conditions and improve crop yield and resilience, but also advances our scientific understanding of hormone pathways and plant-environment interactions,” says In-Cheol Jang, senior principal investigator at TLL, principal investigator at DiSTAP, and co-corresponding author of the paper.

    Looking ahead, the research team plans to combine multiple sensing platforms to simultaneously detect IAA and its related metabolites, creating a comprehensive hormone signaling profile that offers deeper insights into plant stress responses and enhances precision agriculture. The team is also working on using microneedles for highly localized, tissue-specific sensing, and collaborating with industrial urban farming partners to translate the technology into practical, field-ready solutions.
    The research was carried out by SMART, and supported by the National Research Foundation of Singapore under its Campus for Research Excellence And Technological Enterprise program.

  • A new approach could fractionate crude oil using much less energy

    Separating crude oil into products such as gasoline, diesel, and heating oil is an energy-intensive process that accounts for about 6 percent of the world’s CO2 emissions. Most of that energy goes into the heat needed to separate the components by their boiling point.

    In an advance that could dramatically reduce the amount of energy needed for crude oil fractionation, MIT engineers have developed a membrane that filters the components of crude oil by their molecular size.

    “This is a whole new way of envisioning a separation process. Instead of boiling mixtures to purify them, why not separate components based on shape and size? The key innovation is that the filters we developed can separate very small molecules at an atomistic length scale,” says Zachary P. Smith, an associate professor of chemical engineering at MIT and the senior author of the new study.

    The new filtration membrane can efficiently separate heavy and light components from oil, and it is resistant to the swelling that tends to occur with other types of oil separation membranes. The membrane is a thin film that can be manufactured using a technique that is already widely used in industrial processes, potentially allowing it to be scaled up for widespread use.

    Taehoon Lee, a former MIT postdoc who is now an assistant professor at Sungkyunkwan University in South Korea, is the lead author of the paper, which appears today in Science.

    Oil fractionation

    Conventional heat-driven processes for fractionating crude oil make up about 1 percent of global energy use, and it has been estimated that using membranes for crude oil separation could reduce the amount of energy needed by about 90 percent. For this to succeed, a separation membrane needs to allow hydrocarbons to pass through quickly, and to selectively filter compounds of different sizes.

    Until now, most efforts to develop a filtration membrane for hydrocarbons have focused on polymers of intrinsic microporosity (PIMs), including one known as PIM-1.
Although this porous material allows the fast transport of hydrocarbons, it tends to excessively absorb some of the organic compounds as they pass through the membrane, leading the film to swell, which impairs its size-sieving ability.

    To come up with a better alternative, the MIT team decided to try modifying polymers that are used for reverse osmosis water desalination. Since their adoption in the 1970s, reverse osmosis membranes have reduced the energy consumption of desalination by about 90 percent — a remarkable industrial success story.

    The most commonly used membrane for water desalination is a polyamide that is manufactured using a method known as interfacial polymerization. During this process, a thin polymer film forms at the interface between water and an organic solvent such as hexane. Water and hexane do not normally mix, but at the interface between them, a small amount of the compounds dissolved in them can react with each other.

    In this case, a hydrophilic monomer called MPD, which is dissolved in water, reacts with a hydrophobic monomer called TMC, which is dissolved in hexane. The two monomers are joined together by a connection known as an amide bond, forming a polyamide thin film (named MPD-TMC) at the water-hexane interface.

    While highly effective for water desalination, MPD-TMC doesn’t have the right pore sizes and swelling resistance that would allow it to separate hydrocarbons.

    To adapt the material to separate the hydrocarbons found in crude oil, the researchers first modified the film by changing the bond that connects the monomers from an amide bond to an imine bond. This bond is more rigid and hydrophobic, which allows hydrocarbons to quickly move through the membrane without causing noticeable swelling of the film compared to the polyamide counterpart.

    “The polyimine material has porosity that forms at the interface, and because of the cross-linking chemistry that we have added in, you now have something that doesn’t swell,” Smith says.
“You make it in the oil phase, react it at the water interface, and with the crosslinks, it’s now immobilized. And so those pores, even when they’re exposed to hydrocarbons, no longer swell like other materials.”

    The researchers also introduced a monomer called triptycene. This shape-persistent, molecularly selective molecule further helps the resultant polyimines to form pores that are the right size for hydrocarbons to fit through.

    This approach represents “an important step toward reducing industrial energy consumption,” says Andrew Livingston, a professor of chemical engineering at Queen Mary University of London, who was not involved in the study.

    “This work takes the workhorse technology of the membrane desalination industry, interfacial polymerization, and creates a new way to apply it to organic systems such as hydrocarbon feedstocks, which currently consume large chunks of global energy,” Livingston says. “The imaginative approach using an interfacial catalyst coupled to hydrophobic monomers leads to membranes with high permeance and excellent selectivity, and the work shows how these can be used in relevant separations.”

    Efficient separation

    When the researchers used the new membrane to filter a mixture of toluene and triisopropylbenzene (TIPB) as a benchmark for evaluating separation performance, it was able to achieve a concentration of toluene 20 times greater than its concentration in the original mixture. They also tested the membrane with an industrially relevant mixture consisting of naphtha, kerosene, and diesel, and found that it could efficiently separate the heavier and lighter compounds by their molecular size.

    If adapted for industrial use, a series of these filters could be used to generate a higher concentration of the desired products at each step, the researchers say.

    “You can imagine that with a membrane like this, you could have an initial stage that replaces a crude oil fractionation column.
You could partition heavy and light molecules and then you could use different membranes in a cascade to purify complex mixtures to isolate the chemicals that you need,” Smith says.

    Interfacial polymerization is already widely used to create membranes for water desalination, and the researchers believe it should be possible to adapt those processes to mass produce the films they designed in this study.

    “The main advantage of interfacial polymerization is it’s already a well-established method to prepare membranes for water purification, so you can imagine just adopting these chemistries into existing scale of manufacturing lines,” Lee says.

    The research was funded, in part, by ExxonMobil through the MIT Energy Initiative.
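    The membrane-cascade idea Smith describes can be illustrated with a back-of-the-envelope calculation. In the sketch below, each stage is assumed to multiply the ratio of light to heavy components by a constant enrichment factor; the 20-fold figure echoes the toluene/TIPB benchmark, but the feed composition, stage count, and the constant-factor assumption itself are purely illustrative, not taken from the paper.

```python
def cascade_purity(feed_light_frac, enrichment_factor, stages):
    """Light-component mole fraction leaving a series of membrane stages,
    assuming each stage multiplies the light/heavy ratio by the same
    constant factor (a deliberate simplification of real membrane behavior).
    """
    ratio = feed_light_frac / (1.0 - feed_light_frac)
    for _ in range(stages):
        ratio *= enrichment_factor
    return ratio / (1.0 + ratio)

# Hypothetical feed: 10% toluene in TIPB, per-stage enrichment factor of 20
one_stage = cascade_purity(0.10, 20, 1)   # light/heavy ratio 1/9 -> 20/9, ~69% toluene
two_stages = cascade_purity(0.10, 20, 2)  # ~98% toluene
```

The point of the toy model is the article's own: staging modest per-pass enrichments compounds quickly, which is why a cascade of membranes could stand in for a fractionation column.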

    A day in the life of MIT MBA student David Brown

    “MIT Sloan was my first and only choice,” says MIT graduate student David Brown. After receiving his BS in chemical engineering at the U.S. Military Academy at West Point, Brown spent eight years as a helicopter pilot in the U.S. Army, serving as a platoon leader and troop commander.

    Now in the final year of his MBA, Brown has co-founded a climate tech company — Helix Carbon — with Ariel Furst, an MIT assistant professor in the Department of Chemical Engineering, and Evan Haas MBA ’24, SM ’24. Their goal: erase the carbon footprint of tough-to-decarbonize industries like ironmaking, polyurethanes, and olefins by generating competitively priced, carbon-neutral fuels directly from waste carbon dioxide (CO2). It’s an ambitious project; they’re looking to scale the company large enough to have a gigaton-per-year impact on CO2 emissions. They have lab space off campus, and after graduation, Brown will take a full-time job as chief operating officer.

    “What I loved about the Army was that I felt every day that the work I was doing was important or impactful in some way. I wanted that to continue, and felt the best way to have the greatest possible positive impact was to use my operational skills learned from the military to help close the gap between the lab and impact in the market.”

    The following photo essay provides a snapshot of what a typical day for Brown has been like as an MIT student.

    8:30 a.m. — “The first thing on my schedule today is meeting with the Helix Carbon team. Today, we’re talking about the results from the latest lab runs, and what they mean for planned experiments the rest of the week. We are also discussing our fundraising plans ahead of the investor meetings we have scheduled for later this week.”

    10:00 a.m. — “I spend a lot of time at the Martin Trust Center for MIT Entrepreneurship. It’s the hub of entrepreneurship at MIT. My pre-MBA internship, and my first work experience after leaving the Army, was as the program manager for delta v, the premier startup accelerator at MIT. That was also my introduction to the entrepreneurship ecosystem at MIT, and how I met Ariel. With zero hyperbole I can say that was a life-changing experience, and really defined the direction of my life out of the military.”

    10:30 a.m. — “In addition to working to fund and scale Helix Carbon, I have a lot of work to do to finish up the semester. Something I think is unique about MIT is that classes give a real-world perspective from people who are active participants at the cutting edge of what’s happening in that realm. For example, I’m taking Climate and Energy in the Global Economy, and the professor, Catherine Wolfram, has incredible experience both on the ground and in policy with both climate and energy.”

    11:00 a.m. — “When I arrived at MIT Sloan, I was grouped into my cohort team. We navigated the first semester core classes together and built a strong bond. We still meet up for coffee and have team dinners even a year-and-a-half later. I always find myself inspired by how much they’ve accomplished, and I consider myself incredibly lucky for their support and to call them my friends.”

    12 p.m. — “Next, I have a meeting with Bill Aulet, the managing director of the Trust Center, to prepare for an entrepreneurship accelerator called Third Derivative that accepted Helix Carbon. Sustainability startups from across the U.S. and around the world come together with one another and with mentors to share progress and best practices, and to develop plans for moving forward.”

    12:30 p.m. — “Throughout the day, I run into friends, colleagues, and mentors. Even though MIT Sloan is pitched as a community experience, I didn’t expect how much of a community it really is. My classmates have been the absolute highlight of my time here, and I have learned so much from their experiences and from the way they carry themselves.”

    1 p.m. — “My only class today is Applied Behavioral Economics. I’m taking it almost entirely for pleasure — it’s such a fascinating topic. And the professor — Drazen Prelec — is one of the world’s foremost experts. It’s a class that challenges assumptions and gets me thinking. I really enjoy it.”

    2:30 p.m. — “I have a little bit of time before my next event. When I need a place that isn’t too crowded to think, I like to hang out on the couch on the sky bridge between the Tang Center and the Morris and Sophie Chang Building. When the weather is nice, I’ll head out to one of the open green spaces in Kendall Square, or to Urban Park across the street.”

    3:30 p.m. — “When I was the program manager for delta v, this was where I sat, and it’s still where I like to spend time when I’m at the Trust Center. Because it looks like a welcome desk, a lot of people come up to ask questions or talk about their startups. Since I used to work there, I’m able to help them out pretty well!”

    5:00 p.m. — “For my last event of the day, I’m attending a seminar at the Priscilla King Gray Public Service Center (PKG Center) as part of their IDEAS Social Innovation Challenge, MIT’s 20-plus-year-old social impact incubator. The program works with MIT student-led teams addressing social and environmental challenges in our communities. The program has helped teach us critical frameworks and tools around setting goals for and measuring our social impact. We actually placed first in the Harvard Social Enterprise Conference Pitch competition thanks to the lessons we learned here!”

    7:00 p.m. — “Time to head home. A few days a week after work and class, my wife and I play in a combat archery league. It’s like dodgeball, but instead of dodgeballs everyone has a bow and you shoot arrows that have pillow tips. It’s incredible. Tons of fun. I have tried to recruit many of my classmates — marginal success rate!”


    Using liquid air for grid-scale energy storage

    As the world moves to reduce carbon emissions, solar and wind power will play an increasing role on electricity grids. But those renewable sources only generate electricity when it’s sunny or windy. So to ensure a reliable power grid — one that can deliver electricity 24/7 — it’s crucial to have a means of storing electricity when supplies are abundant and delivering it later, when they’re not. And sometimes large amounts of electricity will need to be stored not just for hours, but for days, or even longer.

    Some methods of achieving “long-duration energy storage” are promising. For example, with pumped hydro energy storage, water is pumped from a lower lake to a higher one when there’s extra electricity and released back down through power-generating turbines when more electricity is needed. But that approach is limited by geography, and most potential sites in the United States have already been used. Lithium-ion batteries could provide grid-scale storage, but only for about four hours. Longer than that and battery systems get prohibitively expensive.

    A team of researchers from MIT and the Norwegian University of Science and Technology (NTNU) has been investigating a less-familiar option based on an unlikely-sounding concept: liquid air, or air that is drawn in from the surroundings, cleaned and dried, and then cooled to the point that it liquefies. “Liquid air energy storage” (LAES) systems have been built, so the technology is technically feasible. Moreover, LAES systems are totally clean and can be sited nearly anywhere, storing vast amounts of electricity for days or longer and delivering it when it’s needed. But there haven’t been conclusive studies of its economic viability. Would the income over time warrant the initial investment and ongoing costs?
With funding from the MIT Energy Initiative’s Future Energy Systems Center, the researchers developed a model that takes detailed information on LAES systems and calculates when and where those systems would be economically viable, assuming future scenarios in line with selected decarbonization targets as well as other conditions that may prevail on future energy grids.

    They found that under some of the scenarios they modeled, LAES could be economically viable in certain locations. Sensitivity analyses showed that policies providing a subsidy on capital expenses could make LAES systems economically viable in many locations. Further calculations showed that the cost of storing a given amount of electricity with LAES would be lower than with more familiar systems such as pumped hydro and lithium-ion batteries. They conclude that LAES holds promise as a means of providing critically needed long-duration storage when future power grids are decarbonized and dominated by intermittent renewable sources of electricity.

    The researchers — Shaylin A. Cetegen, a PhD candidate in the MIT Department of Chemical Engineering (ChemE); Professor Emeritus Truls Gundersen of the NTNU Department of Energy and Process Engineering; and MIT Professor Emeritus Paul I. Barton of ChemE — describe their model and their findings in a new paper published in the journal Energy.

    The LAES technology and its benefits

    An LAES system operates in three steps: charging, storing, and discharging. When supply on the grid exceeds demand and prices are low, the LAES system is charged: air is drawn in and liquefied, a step that consumes a large amount of electricity. The liquid air is then sent to highly insulated storage tanks, where it’s held at a very low temperature and atmospheric pressure. When the power grid needs added electricity to meet demand, the liquid air is first pumped to a higher pressure and then heated, and it turns back into a gas.
This high-pressure, high-temperature, vapor-phase air expands in a turbine that generates electricity to be sent back to the grid.

    According to Cetegen, a primary advantage of LAES is that it’s clean. “There are no contaminants involved,” she says. “It takes in and releases only ambient air and electricity, so it’s as clean as the electricity that’s used to run it.” In addition, a LAES system can be built largely from commercially available components and does not rely on expensive or rare materials. And the system can be sited almost anywhere, including near other industrial processes that produce waste heat or cold that can be used by the LAES system to increase its energy efficiency.

    Economic viability

    In considering the potential role of LAES on future power grids, the first question is: Will LAES systems be attractive to investors? Answering that question requires calculating the technology’s net present value (NPV), which represents the sum of all discounted cash flows — including revenues, capital expenditures, operating costs, and other financial factors — over the project’s lifetime. (The study assumed a cash flow discount rate of 7 percent.)

    To calculate the NPV, the researchers needed to determine how LAES systems will perform in future energy markets. In those markets, various sources of electricity are brought online to meet the current demand, typically following a process called “economic dispatch”: the lowest-cost source that’s available is always deployed next.
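    The discounted cash-flow arithmetic behind an NPV figure can be sketched in a few lines. Only the 7 percent discount rate below comes from the study; the capital cost, annual revenue, and project lifetime are invented for illustration.

```python
def npv(cash_flows, rate=0.07):
    """Net present value: the sum of cash flows discounted back to year 0.

    cash_flows[0] is the year-0 flow (typically the negative capital
    expenditure); later entries are annual net cash flows.
    """
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical 100 MW storage project: $300M upfront, then $25M/yr net
# revenue from buying low and selling high, over a 30-year life.
flows = [-300e6] + [25e6] * 30
project_npv = npv(flows)  # slightly positive (~$10M) under these made-up numbers
```

A positive NPV signals an investable project; in the study, whether that threshold is crossed depends on the market prices and dispatch patterns described next.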
Determining the NPV of liquid air storage therefore requires predicting how that technology will fare in future markets competing with other sources of electricity when demand exceeds supply — and also accounting for prices when supply exceeds demand, so excess electricity is available to recharge the LAES systems.

    For their study, the MIT and NTNU researchers designed a model that starts with a description of an LAES system, including details such as the sizes of the units where the air is liquefied and the power is recovered, and also capital expenses based on estimates reported in the literature. The model then draws on state-of-the-art pricing data that’s released every year by the National Renewable Energy Laboratory (NREL) and is widely used by energy modelers worldwide. The NREL dataset forecasts prices, construction and retirement of specific types of electricity generation and storage facilities, and more, assuming eight decarbonization scenarios for 18 regions of the United States out to 2050.

    The new model then tracks buying and selling in energy markets for every hour of every day in a year, repeating the same schedule for five-year intervals. Based on the NREL dataset and details of the LAES system — plus constraints such as the system’s physical storage capacity and how often it can switch between charging and discharging — the model calculates how much money LAES operators would make selling power to the grid when it’s needed and how much they would spend buying electricity when it’s available to recharge their LAES system. In line with the NREL dataset, the model generates results for 18 U.S.
regions and eight decarbonization scenarios, including 100 percent decarbonization by 2035 and 95 percent decarbonization by 2050, and other assumptions about future energy grids, including high-demand growth plus high and low costs for renewable energy and for natural gas.

    Cetegen describes some of their results: “Assuming a 100-megawatt (MW) system — a standard sort of size — we saw economic viability pop up under the decarbonization scenario calling for 100 percent decarbonization by 2035.” So, positive NPVs (indicating economic viability) occurred only under the most aggressive — therefore the least realistic — scenario, and they occurred in only a few southern states, including Texas and Florida, likely because of how those energy markets are structured and operate.

    The researchers also tested the sensitivity of NPVs to different storage capacities, that is, how long the system could continuously deliver power to the grid. They calculated the NPVs of a 100 MW system that could provide electricity supply for one day, one week, and one month. “That analysis showed that under aggressive decarbonization, weekly storage is more economically viable than monthly storage, because [in the latter case] we’re paying for more storage capacity than we need,” explains Cetegen.

    Improving the NPV of the LAES system

    The researchers next analyzed two possible ways to improve the NPV of liquid air storage: by increasing the system’s energy efficiency and by providing financial incentives. Their analyses showed that increasing the energy efficiency, even up to the theoretical limit of the process, would not change the economic viability of LAES under the most realistic decarbonization scenarios. On the other hand, a major improvement resulted when they assumed policies providing subsidies on capital expenditures on new installations.
Indeed, assuming subsidies of between 40 percent and 60 percent made the NPVs for a 100 MW system become positive under all the realistic scenarios.

    Thus, their analysis showed that financial incentives could be far more effective than technical improvements in making LAES economically viable. While engineers may find that outcome disappointing, Cetegen notes that from a broader perspective, it’s good news. “You could spend your whole life trying to optimize the efficiency of this process, and it wouldn’t translate to securing the investment needed to scale the technology,” she says. “Policies can take a long time to implement as well. But theoretically you could do it overnight. So if storage is needed [on a future decarbonized grid], then this is one way to encourage adoption of LAES right away.”

    Cost comparison with other energy storage technologies

    Calculating the economic viability of a storage technology is highly dependent on the assumptions used. As a result, a different measure — the “levelized cost of storage” (LCOS) — is typically used to compare the costs of different storage technologies. In simple terms, the LCOS is the cost of storing each unit of energy over the lifetime of a project, not accounting for any income that results.

    On that measure, the LAES technology excels. The researchers’ model yielded an LCOS for liquid air storage of about $60 per megawatt-hour, regardless of the decarbonization scenario. That LCOS is about a third that of lithium-ion battery storage and half that of pumped hydro. Cetegen cites another interesting finding: the LCOS of their assumed LAES system varied depending on where it’s being used. The standard practice of reporting a single LCOS for a given energy storage technology may not provide the full picture.

    Cetegen has adapted the model and is now calculating the NPV and LCOS for energy storage using lithium-ion batteries. But she’s already encouraged by the LCOS of liquid air storage.
    “While LAES systems may not be economically viable from an investment perspective today, that doesn’t mean they won’t be implemented in the future,” she concludes. “With limited options for grid-scale storage expansion and the growing need for storage technologies to ensure energy security, if we can’t find economically viable alternatives, we’ll likely have to turn to least-cost solutions to meet storage needs. This is why the story of liquid air storage is far from over. We believe our findings justify the continued exploration of LAES as a key energy storage solution for the future.”
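    The LCOS metric the article closes with is straightforward to sketch: discounted lifetime costs divided by discounted lifetime energy delivered. The inputs below are hypothetical, chosen only so the result lands near the roughly $60 per megawatt-hour the researchers' model reported; they are not the paper's actual cost assumptions.

```python
def lcos_usd_per_mwh(capex, annual_opex, annual_mwh, years, rate=0.07):
    """Levelized cost of storage in $/MWh: discounted lifetime costs
    divided by discounted lifetime energy delivered. Income is
    deliberately excluded, which is what distinguishes LCOS from NPV.
    """
    discount = [(1.0 + rate) ** -t for t in range(1, years + 1)]
    total_cost = capex + annual_opex * sum(discount)
    total_energy = annual_mwh * sum(discount)
    return total_cost / total_energy

# Hypothetical 100 MW plant: $200M capex, $5M/yr operating cost,
# ~3,500 discharge hours per year (350,000 MWh), 30-year life.
estimate = lcos_usd_per_mwh(200e6, 5e6, 350_000, 30)  # roughly $60/MWh
```

Because LCOS ignores revenue, it isolates the cost side of the comparison, which is why the article can quote a single figure per technology even though the NPV results vary widely across scenarios and regions.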