More stories

  • Waging a two-pronged campaign against climate change

    If nuclear energy is to play a pivotal role in securing a low-carbon future, researchers must not only develop a new generation of powerful and cost-efficient nuclear power plants, but provide stakeholders with the tools for making smart investment choices among these advanced reactors. W. Robb “Robbie” Stewart, a doctoral candidate in the MIT Department of Nuclear Science and Engineering (NSE), is working on both these problems.

    “Capital construction and operational costs are limiting the nuclear industry’s ability to expand at this critical moment, and if we can’t reduce these costs then nuclear doesn’t have a chance of being a big player in decarbonizing the economy,” Stewart says. “So I decided to focus my thesis research on an estimating tool that quantifies the costs of building a nuclear power plant, and which could be useful for assessing different reactor designs.”

    This precision cost-modeling method helps inform an ambitious project that Stewart has been pursuing alongside his dissertation work: designing and building a modular, integrated, gas high-temperature nuclear reactor, called MIGHTR, along with Enrique Velez-Lopez SM ’20. “Our entire thesis … is that we have to simplify the civil construction elements of the project,” says Stewart.

    Costly infrastructure

    Both Stewart’s doctoral research and his own reactor development are motivated to a large degree by a central concern: “Managing the construction of massive nuclear plants is extremely difficult, and too likely to result in cost overruns,” he says. “That’s because we don’t do enough of this kind of construction to be good at it.” In the United States, the key challenge to launching new commercial plants is not regulatory delay or public resistance, but inefficient construction practices, he believes.

    Stewart views overcoming nuclear’s daunting building costs as paramount in the drive to bring more plants online in the near future. His modeling tool will make this more likely through precise estimations of construction risks and associated expenses — all based on actual U.S. Department of Energy data on the costs of thousands of items required in commercial reactors, from pressure vessels and fuel to containment buildings and instrumentation.
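
    To make that concrete, here is a minimal sketch (in Python) of the kind of bottom-up, itemized cost rollup such a tool performs: sum component costs, each with an uncertainty range, to get a distribution of total plant cost. The line items, ranges, and distributions below are illustrative placeholders, not figures from Stewart’s model or the DOE data.

        # A minimal Monte Carlo cost rollup. All numbers are illustrative
        # assumptions, not DOE figures.
        import numpy as np

        rng = np.random.default_rng(42)

        # (low, high) cost ranges, in millions of dollars, for a few notional items
        line_items = {
            "pressure_vessel": (80, 140),
            "fuel_load": (30, 60),
            "containment_building": (200, 400),
            "instrumentation": (40, 90),
        }

        n_samples = 100_000
        total = np.zeros(n_samples)
        for low, high in line_items.values():
            # Triangular distributions are a common choice for expert cost ranges.
            total += rng.triangular(low, (low + high) / 2, high, n_samples)

        print(f"median total: ${np.median(total):.0f}M")
        print(f"90th percentile (overrun risk): ${np.percentile(total, 90):.0f}M")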

    This rigorous method of quantifying costs is aimed at smoothing the way to the next generation of nuclear reactors, such as small modular reactors (SMRs). This type of advanced nuclear reactor can be fabricated in an economically desirable assembly-line fashion, and can fit into sites where larger facilities would not. Some SMRs, like MIGHTR, will also be able to operate at higher temperatures, an attribute that makes them uniquely suited for powering industrial processes currently served by greenhouse-gas-emitting fossil fuel plants.

    Commercial (typically light-water) nuclear reactors supply nearly one-third of the world’s carbon-free electricity. But they must operate at temperatures that do not generally exceed 300 degrees Celsius, which means they cannot generate the heat required for petrochemical manufacturing and other power-hungry industrial needs. In contrast, next-generation reactors such as MIGHTR could turn the temperature dial up to 700 degrees Celsius and beyond. “Industrial process heat accounts for 10 percent of greenhouse gas emissions, so an important criterion for selecting an advanced reactor would be whether it can meet the need of decarbonizing industries,” says Stewart.

    His modeling tool could help determine which advanced nuclear design offers the best investment bet. For instance, some SMRs might require 30 million work-hours to build, and others 8 million. Some facilities might involve technological uncertainties that make them too much of a gamble, no matter how much electricity or heat they purport to deliver. Investors, utilities, and policymakers must feel confident that their decision strikes the optimal balance between a reactor’s desired attributes and applications and its risk and price tag. “Not all SMRs are equally cost-competitive, and assessment can help distribute resources much more effectively,” he says.

    Modeling new technologies

    Stewart, who grew up in Dallas, Texas, gravitated early toward cutting-edge technologies with the capacity to serve society. “I knew I wanted to be an engineer from a young age, and loved reading pop culture science trying to understand what the next generation of cars or jet engines might be,” he recalls.

    Although tempted by aerospace studies, he found his groove in mechanical engineering as an undergraduate and then master’s student at the University of Texas at Austin. His master’s thesis on heat transfer in gas turbines led directly to work with GE Global Research. After four years spent on ventures to improve cooling efficiencies inside gas turbines, and then to model and predict the life of commercial jet engines, he grew restless.

    Over the years he’d felt a mounting concern about the dangers of climate change, and a growing desire to train his engineering expertise on the challenge. “I wanted to be at the forefront of a new technology, and I wanted to be able to look back at the point of retirement and say I dedicated my engineering time and knowledge to this big problem,” says Stewart. So he decided to leave his mechanical engineering career and learn a new discipline at MIT. He quickly found a mentor in Koroush Shirvan, the John Clark Hardwick (1986) Career Development Professor in NSE. “He seemed to be solving problems the nuclear industry was facing, from operational and capital costs, to new fuel and enhanced safety designs,” says Stewart. “That resonated with me.”

    MIGHTR drew from the kind of multidisciplinary perspective championed by Shirvan and other members of the department. Other designs for high-temperature gas reactors envision housing components in a structure 60 meters tall, and building height brings greater complexity and higher construction costs. Stewart and his partner thought it might be simpler to lay the entire structure flat, including the reactor core and steam generator. The flat design leverages cost-efficient building techniques that are new to nuclear, such as precast concrete panels.

    “We took our idea to a faculty meeting, where they threw stones at it because they wanted proof we could make the building five times smaller than other HTRs without affecting safety,” Stewart recalls. “That was the birth of MIGHTR.”

    Stewart and Velez-Lopez have since launched a startup, Boston Atomics, to bring MIGHTR to life. The team filed a patent application for the design last October and received a $5 million grant in December from the U.S. Department of Energy’s Advanced Reactor Demonstration Program. MIT is helping drive this venture forward, with Shirvan overseeing the project, which includes partners from other universities.

    Stewart’s creation of the nuclear plant cost modeling tool, sponsored by the Finnish energy company Fortum, and co-invention of the MIGHTR design have already won recognition: His research is headed for publication in several journals, and last year he received NSE’s 2020 Manson Benedict Award for Academic Excellence and Professional Promise.

    Today, even as he presses forward on both MIGHTR and his cost-modeling research, Stewart has broadened his portfolio. He is assisting associate provost and Japan Steel Industry Professor Richard Lester with the MIT Climate Grand Challenges Program. “The goal is to identify a handful of powerful research ideas that can be big movers in solving the climate change problem, not just through carbon mitigation but by promoting the adaptation and resilience of cities and reducing impacts on people in zones experiencing extreme weather-related conditions, such as fires and hurricanes,” says Stewart.

    After picking up his doctorate next year, Stewart plans to dedicate himself to Boston Atomics and MIGHTR. He also hopes that his modeling tool, which is free to the public, will help direct research and development dollars into nuclear technologies with a high potential for reducing cost, and “get people excited by new reactor designs,” he says.

  • 3 Questions: Secretary Kathleen Theoharides on climate and energy in Massachusetts

    Massachusetts is poised to be a national and global leader in the fight against climate change. This spring, Kathleen Theoharides, secretary of the Executive Office of Energy and Environmental Affairs of the Commonwealth of Massachusetts, spoke with MIT Energy Initiative Director Robert Armstrong at a seminar focused on Massachusetts’ emissions-reduction plans. Here, Theoharides discusses the state’s initiatives to address the decarbonization of key sectors to help the state achieve these goals.

    Q: In March, Massachusetts Governor Charlie Baker signed new legislation addressing climate change. What is the scope and mission of this bill? And how does it work with preexisting programs to address key climate concerns for the state?

    A: Governor Baker has offered long-term support to make Massachusetts a model of climate action. He further strengthened this commitment to achieving net-zero by 2050 when he signed this climate change legislation, which now gives Massachusetts the most ambitious emissions-reduction goals in the country. So, what does this legislation do? There are a number of really critical pieces in it, some of which we have been working very hard on at the executive branch level already. First and foremost, it codifies into law the state’s net-zero target. The bill also includes provisions to make our appliances more energy efficient and to allow municipalities to opt into highly efficient codes for new construction, along with important nation-leading provisions that will help us protect our environmental justice communities, significantly push development in offshore wind, and much, much more.

    We recently released a 2050 Decarbonization Roadmap, which has set the table for much of the work that we will be doing in the next 10 years to get us on track to hit our 30-year target. The report is the product of two years of science-based analysis, using models and analytical tools to explore in great detail what steps the Commonwealth and the region need to take to achieve this goal while maintaining a healthy, thriving, and equitable economy.

    The long-range analysis of the 2050 Decarbonization Roadmap has helped inform our Clean Energy and Climate Plan for 2030, which aims to cut emissions 50 percent by the end of the decade. Based on the report, we determined a number of really ambitious goals that we need to meet by 2030. For the heating sector, this includes retrofitting about 1 million homes, making sure that all new construction is highly efficient, and helping people adopt clean heating solutions. In the transportation sector, we need around 750,000 electric vehicles on the road and a 15 percent reduction in vehicle miles traveled. We also need to build and interconnect 6,000 megawatts (MW) of clean energy and modernize our electric grid to support the development of these clean energy resources. This plan is really our map of how to make these changes over the next decade, and a lot hinges on the work we do with our federal partners and with other states.

    Here are some specific programs we’re working on to help us achieve our 2030 plan.

    First, we’re working on wholesale market reform by modernizing our electric grid to support the development of clean energy in the Commonwealth and across New England.
    Second, we’re convening a first-in-the-nation Commission on Clean Heat, which will bring together many different stakeholders to provide the governor with recommendations on the heating sector.
    Further, we are updating our Energy Efficiency Plan. Massachusetts is a national leader in energy efficiency, and we hope to further align energy efficiency with the state’s climate goals and to improve program equity by increasing participation from groups that have traditionally been excluded from this process.
    Energy storage has been a large component of our work in this space, especially since the governor took office and we launched the Energy Storage Initiative in 2015. One notable success is that by including energy storage incentives directly in our solar program, we have approved nearly 1,600 megawatt-hours of energy storage, exceeding our initial 2025 target of 1,000 megawatt-hours.
    Finally, we have been working hard on our Transportation and Climate Initiative program, which is a cap-and-invest program that’s been in the works for the past five years. We anticipate that this will drive pollution in the sector down 26 percent by 2030. We’ve been working with nine other states and expect many more to come into the program — this has been a critical opportunity to reduce emissions in the sector, deliver cleaner energy, and reinvest the proceeds in paving the way for a new future of transportation.

    Q: What are some of the most exciting and recent developments for the state in terms of climate and energy?

    A: On May 10, the federal-level Bureau of Ocean Energy Management approved the development of Vineyard Wind — an 800 MW offshore wind project located off the southern Massachusetts coast — making it the largest approved offshore wind project in the United States to date. This key, long-awaited milestone was supposed to happen in my first couple of months on the job as secretary in June 2019. It was close to being final, and then it got pulled back in the federal permitting process as more projects came on. This recent approval has given us a lot of momentum, and a lot of hope for the future as these projects move forward and start delivering the clean energy, jobs, and environmental benefits that are so needed.

    On March 11, we extended that momentum. Our Department of Energy Resources filed a request for proposals (RFP) for the third round of our 83C Offshore Wind Energy Solicitation. That RFP is now open for bids, and there are several key changes we’ve made in the solicitation that are worth highlighting. First, we’ve baked in a little bit more time for the federal permitting and review process. Second, we’re proposing to allow bids from 200 MW all the way up to 1,600 MW, which would be double the size of any project approved to date. The allowance for larger-sized bids is intended to capture potential efficiencies related to transmission cabling, as well as the use of onshore transmission interconnection points. Additionally, this RFP is really a result of extensive stakeholder engagement, which has led to some important changes that will allow us to build on the Commonwealth’s commitment to environmental justice and to diversity, equity, and inclusion (DEI) in the workforce. For the first time, the RFP will require bidders to submit DEI plans that include a workforce diversity plan, a supplier diversity program plan, and more. Finally, the RFP includes both an environmental and socioeconomic impact evaluation. This will ask bidders to detail any potential impacts — both positive and negative — including assessments of cumulative environmental impacts on environmental justice populations and host communities. Overall, we are really excited about these developments in the offshore wind space and think they help move the entire industry in the right direction.

    Q: In what way do you see Massachusetts being able to work with federal, private, and public partners moving forward? Are there any areas where you see room for growth and collaboration?

    A: Our administration and the legislature have had a long-standing, bipartisan record of partnership, particularly around energy and climate issues, which has helped us to make Massachusetts a leader in the field. I think the state’s bipartisanship really could serve as a model for how those at the federal level could go about passing important climate change and environmental laws. One of the things I’ve spent a lot of time on in this role and in my prior role as the state’s undersecretary of climate change was trying to highlight bipartisanship and consensus around the need for climate change solutions. We as a nation have the opportunity to build strong economies, to create a clean energy workforce, and to really be leaders among other nations on these issues. Thanks to the new legislation and other activities being undertaken within the Commonwealth, we once again added to our record of national leadership on climate change and have taken a significant step to reduce emissions and to really turn up the action on climate change in this next critical decade, while also protecting vulnerable communities in the pursuit of achieving this goal.

    It is critical that we continue to work with other states and regions in addition to fostering federal partnerships. Working to upgrade transmission capacity with our neighbors both in New England and Canada in order to ensure the connection and distribution of new renewable sources, from hydropower in Québec to onshore wind in places like Maine, is one critical component. Additionally, our six-state regional transmission organization, ISO New England, doesn’t currently reflect the policy goals around climate change that most of the states have. Moving forward, there needs to be more input from participating states’ leadership into ISO’s governance, and we all need to engage in scenario-based, forward-looking, long-term transition planning to understand how to meet the energy needs of the future. Finally, we all need to accommodate greater proactive participation from environmental justice communities so that we’re building this new, regional energy system in a way that is inclusive and avoids conflict.

    We are looking forward to finding new ways to partner with educational institutions and initiatives such as the MIT Energy Initiative and others at MIT. We have a great richness of resources here in the Commonwealth, especially in terms of our educational opportunities. There are tremendous areas of overlap, and I am excited to see how we can all work together toward this major decarbonization goal we have as a state, and now as a nation.

  • Asegun Henry has a big idea for tackling climate change: Store up the sun

    Asegun Henry has a bold idea to save the world. He believes the key to reducing carbon emissions, and mitigating further climate change, lies in our ability to box up the sun.  

    Today, much of the renewable energy captured from the wind and sun is delivered on a use-it-or-lose-it basis. By storing that energy, Henry envisions a completely sustainable, zero-carbon grid with the potential to supply all our electrical needs, even on overcast and windless days. And he has a blueprint for how to get there.

    Imagine, alongside solar plants and wind turbines, a heavily insulated, warehouse-sized container filled with white-hot liquid metal. Any excess energy captured during low-use times would be diverted into this container, where it would be converted into heat. When energy demand goes up, the liquid metal could be pumped through a converter to turn heat back into electricity.
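
    The arithmetic behind such a system is simple to sketch. The energy stored as sensible heat scales with the mass of the metal, its specific heat, and the temperature swing between charge and discharge; a heat-to-electricity converter then recovers a fraction of it. Every number below is an assumption for illustration, not a specification of Henry’s design.

        # Back-of-envelope sizing of a "sun-in-a-box" thermal battery.
        # All values are illustrative assumptions.
        metal_mass_kg = 2.0e6               # assumed tank inventory of liquid metal
        specific_heat_j_per_kg_k = 1000.0   # assumed, order of magnitude for liquid metals
        delta_t_k = 500.0                   # assumed charge/discharge temperature swing
        converter_efficiency = 0.4          # assumed heat-to-electricity efficiency

        stored_heat_j = metal_mass_kg * specific_heat_j_per_kg_k * delta_t_k
        deliverable_mwh = stored_heat_j * converter_efficiency / 3.6e9  # joules -> MWh

        print(f"stored heat: {stored_heat_j / 3.6e9:.0f} MWh (thermal)")
        print(f"deliverable electricity: {deliverable_mwh:.0f} MWh")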

    Henry says this “sun-in-a-box” system would serve as a rechargeable battery, albeit one that takes up half a football field. He has shown that key parts of the system work, and is bringing those parts together to demonstrate a lab-scale system. If that proves successful, he will push ahead to versions with increasing storage capacity, and ultimately, to a commercial-scale, grid-integrated system.

    It’s an ambitious road, and one that has not been without hurdles, much like Henry’s own path to MIT. In 2020, he was granted tenure in MIT’s Department of Mechanical Engineering, and he is now funneling much of his energy into storing up the sun.

    “In my view, the choice to go to MIT was a path toward saving the human race,” Henry says. “I believe in this technology, and that this is the key, the linchpin that will set a lot of things in the right direction.”

    Thinking big

    Henry grew up in Sarasota, Florida, then in Tallahassee, where he became a drummer. His parents, both professors at Florida A&M University, made every effort to instill in him an appreciation of his family’s West African roots. When he was 10, his mother brought him to an African dance class at the university.

    “I had been taught to revere African culture, but never had really seen or heard it, and I totally fell in love with the drums that day,” Henry says.

    He spent the next six years with a professional touring ensemble, devoting himself to African drumming. He was 16 when he decided to forgo a drumming career, after seeing his teachers’ financial hardships. Around that time, he took part in a rites-of-passage program for young Black men, where he met mentor Makola Abdullah, a professor of civil engineering at Florida A&M. Abdullah hired Henry as a young assistant in his lab, where the team was studying the structure of the Egyptian pyramids.

    “That really turned things on for me,” Henry recalls. “I was getting paid to do technical work for the first time, which was exciting at that age.”

    As an undergraduate at Florida A&M, Henry continued working in Abdullah’s lab, on a study of earthquake-induced vibrations. He also became interested in vibrations at the atomic scale, and the motion of atoms in the context of heat, which led him to apply to graduate school at MIT.

    Henry’s experience at MIT was academically intense, at times financially uncertain, and overall, socially isolating, he says, noting that at one time he was the only Black engineering graduate student in the department.

    “That [isolation] motivated me, and I was on a mission to get my degree,” he says.

    Henry pushed on, working with his advisor to develop molecular dynamics simulations of heat conduction, and received his master’s and PhD in mechanical engineering in 2009. He then accepted a faculty position at Georgia Tech, but before settling on campus, he took up three consecutive postdocs, at Oak Ridge National Laboratory, Northwestern University, and the Department of Energy’s Advanced Research Projects Agency-Energy, or ARPA-E.

    Each postdoc helped to crystallize his own research goals. At Oak Ridge, he learned to do electronic structure calculations. At Northwestern, he got into renewable energy, simulating promising solar-thermochemical materials. And at ARPA-E, a division that was designed to support high-risk, high-reward projects, he learned to think big.

    “I visited Jurassic-sized machine shops where they build the largest turbines in the world, and also toured concentrated solar plants,” Henry says. “That was a transformative experience, and I started getting interested in systems-level design.”

    “A step change”

    He returned to Georgia Tech with a risky idea for a new kind of concentrated solar power (CSP). Most CSP designs store heat in molten salt, moving the liquid through metal piping and pumps to convert the heat into electricity. But there is a limit to how hot the salts can get, and temperatures beyond that limit would corrode metal pipes and pumps too quickly.

    “I was interested in pushing this to the extreme, to see how to get a step change in performance,” Henry says.

    He proposed making pipes and pumps out of more heat-resistant ceramics, and storing heat not in molten salt, but in glowing, white-hot liquid metal.

    “It was a radical idea, and based on the physics, it’s sound,” Henry says.

    He and his students worked for years to demonstrate a key component of the system, a high-temperature ceramic pump, at first with little progress.

    “I used to have to give these coach-in-the-locker-room speeches to keep everyone motivated,” Henry remembers.

    In 2017, their efforts paid off with a pump that could circulate liquid at up to 1,400 degrees Celsius. The demonstration earned them a publication in Nature, and a Guinness World Record for the “highest operating temperature liquid pump.”

    “That escalated things,” says Henry, who at the time had received an invitation to interview for a faculty position at MIT. When he was offered the job, he wasn’t sure he could take it. While his work was moving forward, he was in the middle of a complicated divorce.

    “I was at a difficult crossroads,” he says. “Do I stay, and possibly get custody of my kids, or do I double down on my career and go to MIT, where I think I have the best chance of pursuing this idea?”

    In the end, Henry accepted the position and moved back to MIT in 2018. The divorce pushed him into bankruptcy, even as he was starting up a new lab and managing teaching demands on campus. It was a tumultuous year, but he eventually moved to Boston with his sons, and just before the pandemic set in, Henry was also awarded tenure.

    “It’s a dramatic relief for me,” Henry says. “After risking it all to come here, you want that security that things will work out.”

    He is forging ahead to improve the sun-in-a-box system, and has since bested his record with an even higher-temperature pump. He’s also continuing to simulate the motion of atoms in different materials and is converting those motions into sound — a project that was partly inspired by his early experience in music.

    Of the new balance he has found in work and life, he says: “It’s very grounding. And I’m thankful.”

  • Infrared cameras and artificial intelligence provide insight into boiling

    Boiling is not just for heating up dinner. It’s also for cooling things down. Turning liquid into gas removes energy from hot surfaces, and keeps everything from nuclear power plants to powerful computer chips from overheating. But when surfaces grow too hot, they might experience what’s called a boiling crisis.

    In a boiling crisis, bubbles form quickly, and before they detach from the heated surface, they cling together, establishing a vapor layer that insulates the surface from the cooling fluid above. Temperatures rise even faster and can cause catastrophe. Operators would like to predict such failures, and new research offers insight into the phenomenon using high-speed infrared cameras and machine learning.

    Matteo Bucci, the Norman C. Rasmussen Assistant Professor of Nuclear Science and Engineering at MIT, led the new work, published June 23 in Applied Physics Letters. In previous research, his team spent almost five years developing a technique in which machine learning could streamline relevant image processing. In the experimental setup for both projects, a transparent heater 2 centimeters across sits below a bath of water. An infrared camera sits below the heater, pointed up and recording at 2,500 frames per second with a resolution of about 0.1 millimeter. Previously, people studying the videos would have to manually count the bubbles and measure their characteristics, but Bucci trained a neural network to do the chore, cutting a three-week process to about five seconds. “Then we said, ‘Let’s see if other than just processing the data we can actually learn something from an artificial intelligence,’” Bucci says.
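
    As an illustration of that image-processing step, bubbles can be counted in each frame by thresholding the infrared image and labeling connected hot regions. This is a minimal sketch, not the team’s actual pipeline; the threshold, array shapes, and placeholder data are invented.

        # Count distinct bubble footprints in each infrared frame by
        # thresholding and connected-component labeling.
        import numpy as np
        from scipy import ndimage

        def count_bubbles(ir_frame: np.ndarray, threshold: float) -> int:
            """Count connected regions hotter than the wetted surface."""
            mask = ir_frame > threshold
            _, num_features = ndimage.label(mask)  # label contiguous hot pixels
            return num_features

        # A 2,500 fps recording becomes a stack of frames; counting is per frame.
        rng = np.random.default_rng(0)
        video = rng.random((100, 200, 200))        # placeholder for real IR frames
        counts = [count_bubbles(frame, threshold=0.95) for frame in video]
        print(f"mean bubbles per frame: {np.mean(counts):.1f}")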

    The goal was to estimate how close the water was to a boiling crisis. The system looked at 17 factors provided by the image-processing AI: the “nucleation site density” (the number of sites per unit area where bubbles regularly grow on the heated surface), as well as, for each video frame, the mean infrared radiation at those sites and 15 other statistics about the distribution of radiation around those sites, including how they’re changing over time. Manually finding a formula that correctly weighs all those factors would present a daunting challenge. But “artificial intelligence is not limited by the speed or data-handling capacity of our brain,” Bucci says. Further, “machine learning is not biased” by our preconceived hypotheses about boiling.

    To collect data, they boiled water on a surface of indium tin oxide, by itself or with one of three coatings: copper oxide nanoleaves, zinc oxide nanowires, or layers of silicon dioxide nanoparticles. They trained a neural network on 85 percent of the data from the first three surfaces, then tested it on the remaining 15 percent of that data plus the data from the fourth surface, to see how well it could generalize to new conditions. According to one metric, it was 96 percent accurate, even though it hadn’t been trained on all the surfaces. “Our model was not just memorizing features,” Bucci says. “That’s a typical issue in machine learning. We’re capable of extrapolating predictions to a different surface.”
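
    That protocol can be mimicked in a few lines. In this hedged sketch, the features, targets, and network are random stand-ins, not the paper’s data or architecture; only the 85/15 split and the held-out fourth surface mirror the study.

        # Train on 85 percent of the data from three known surfaces, then test
        # on the held-out 15 percent and on a fourth, never-seen surface.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X_known = rng.random((3000, 17))   # 17 image-derived factors, 3 surfaces
        y_known = rng.random(3000)         # proximity to boiling crisis (placeholder)
        X_new = rng.random((1000, 17))     # fourth, unseen surface
        y_new = rng.random(1000)

        X_tr, X_te, y_tr, y_te = train_test_split(
            X_known, y_known, test_size=0.15, random_state=0)
        model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(X_tr, y_tr)

        print("score on held-out known surfaces:", model.score(X_te, y_te))
        print("score on unseen surface:", model.score(X_new, y_new))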

    The team also found that all 17 factors contributed significantly to prediction accuracy (though some more than others). Further, instead of treating the model as a black box that used 17 factors in unknown ways, they identified three intermediate factors that explained the phenomenon: nucleation site density, bubble size (which was calculated from eight of the 17 factors), and the product of growth time and bubble departure frequency (which was calculated from 12 of the 17 factors). Bucci says models in the literature often use only one factor, but this work shows that we need to consider many, and their interactions. “This is a big deal.”

    “This is great,” says Rishi Raj, an associate professor at the Indian Institute of Technology at Patna, who was not involved in the work. “Boiling has such complicated physics.” It involves at least two phases of matter, and many factors contributing to a chaotic system. “It’s been almost impossible, despite at least 50 years of extensive research on this topic, to develop a predictive model,” Raj says. “It makes a lot of sense to use the new tools of machine learning.”

    Researchers have debated the mechanisms behind the boiling crisis. Does it result solely from phenomena at the heating surface, or also from distant fluid dynamics? This work suggests surface phenomena are enough to forecast the event.

    Predicting proximity to the boiling crisis doesn’t only increase safety. It also improves efficiency. By monitoring conditions in real time, a system could push chips or reactors to their limits without throttling them or building unnecessary cooling hardware. It’s like a Ferrari on a track, Bucci says: “You want to unleash the power of the engine.”

    In the meantime, Bucci hopes to integrate his diagnostic system into a feedback loop that can control heat transfer, thus automating future experiments, allowing the system to test hypotheses and collect new data. “The idea is really to push the button and come back to the lab once the experiment is finished.” Is he worried about losing his job to a machine? “We’ll just spend more time thinking, not doing operations that can be automated,” he says. In any case: “It’s about raising the bar. It’s not about losing the job.”

  • Engineered yeast could expand biofuels’ reach

    Boosting production of biofuels such as ethanol could be an important step toward reducing global consumption of fossil fuels. However, ethanol production is limited in large part by its reliance on corn, which isn’t grown in large enough quantities to make up a significant portion of U.S. fuel needs.

    To expand biofuels’ potential impact, a team of MIT engineers has now found a way to use a wider range of nonfood feedstocks to produce such fuels. At the moment, feedstocks such as straw and woody plants are difficult to use for biofuel production because they first need to be broken down into fermentable sugars, a process that releases numerous byproducts that are toxic to yeast, the microbes most commonly used to produce biofuels.

    The MIT researchers developed a way to circumvent that toxicity, making it feasible to use those sources, which are much more plentiful, to produce biofuels. They also showed that this tolerance can be engineered into strains of yeast used to manufacture other chemicals, potentially making it possible to use “cellulosic” woody plant material as a source to make biodiesel or bioplastics.

    “What we really want to do is open cellulose feedstocks to almost any product and take advantage of the sheer abundance that cellulose offers,” says Felix Lam, an MIT research associate and the lead author of the new study.

    Gregory Stephanopoulos, the Willard Henry Dow Professor in Chemical Engineering, and Gerald Fink, the Margaret and Herman Sokol Professor at the Whitehead Institute for Biomedical Research and the American Cancer Society Professor of Genetics in MIT’s Department of Biology, are the senior authors of the paper, which appears today in Science Advances.

    Boosting tolerance

    Currently, around 40 percent of the U.S. corn harvest goes into ethanol. Corn is primarily a food crop that requires a great deal of water and fertilizer, so plant material known as cellulosic biomass is considered an attractive, noncompeting source for renewable fuels and chemicals. This biomass, which includes many types of straw, and parts of the corn plant that typically go unused, could amount to more than 1 billion tons of material per year, according to a U.S. Department of Energy study — enough to substitute for 30 to 50 percent of the petroleum used for transportation.

    However, two major obstacles to using cellulosic biomass are that cellulose first needs to be liberated from the woody lignin, and the cellulose then needs to be further broken down into simple sugars that yeast can use. The particularly aggressive preprocessing needed generates compounds called aldehydes, which are very reactive and can kill yeast cells.

    To overcome this, the MIT team built on a technique they had developed several years ago to improve yeast cells’ tolerance to a wide range of alcohols, which are also toxic to yeast in large quantities. In that study, they showed that spiking the bioreactor with specific compounds that strengthen the membrane of the yeast helped yeast to survive much longer in high concentrations of ethanol. Using this approach, they were able to improve the traditional fuel ethanol yield of a high-performing strain of yeast by about 80 percent.

    In their new study, the researchers engineered yeast so that they could convert the cellulosic byproduct aldehydes into alcohols, allowing them to take advantage of the alcohol tolerance strategy they had already developed. They tested several naturally occurring enzymes that perform this reaction, from several species of yeast, and identified one that worked the best. Then, they used directed evolution to further improve it.

    “This enzyme converts aldehydes into alcohols, and we have shown that yeast can be made a lot more tolerant of alcohols as a class than it is of aldehydes, using the other methods we have developed,” Stephanopoulos says.

    Yeast are generally not very efficient at producing ethanol from toxic cellulosic feedstocks; however, when the researchers expressed this top-performing enzyme and spiked the reactor with the membrane-strengthening additives, the strain more than tripled its cellulosic ethanol production, to levels matching traditional corn ethanol.

    Abundant feedstocks

    The researchers demonstrated that they could achieve high yields of ethanol with five different types of cellulosic feedstocks, including switchgrass, wheat straw, and corn stover (the leaves, stalks, and husks left behind after the corn is harvested).

    “With our engineered strain, you can essentially get maximum cellulosic fermentation from all these feedstocks that are usually very toxic,” Lam says. “The great thing about this is it doesn’t matter if maybe one season your corn residues aren’t that great. You can switch to energy straws, or if you don’t have high availability of straws, you can switch to some sort of pulpy, woody residue.” 

    The researchers also engineered their aldehyde-to-ethanol enzyme into a strain of yeast that has been engineered to produce lactic acid, a precursor to bioplastics. As it did with ethanol, this strain was able to produce the same yield of lactic acid from cellulosic materials as it does from corn.

    This demonstration suggests that it could be feasible to engineer aldehyde tolerance into strains of yeast that generate other products such as diesel. Biodiesels could potentially have a big impact on industries such as heavy trucking, shipping, or aviation, which lack an emission-free alternative like electrification and require huge amounts of fossil fuel.

    “Now we have a tolerance module that you can bolt on to almost any sort of production pathway,” Stephanopoulos says. “Our goal is to extend this technology to other organisms that are better suited for the production of these heavy fuels, like oils, diesel, and jet fuel.”

    The research was funded by the U.S. Department of Energy and the National Institutes of Health.

  • 3Q: Why “nuclear batteries” offer a new approach to carbon-free energy

    We may be on the brink of a new paradigm for nuclear power, a group of nuclear specialists suggested recently in The Bridge, the journal of the National Academy of Engineering. Much as large, expensive, and centralized computers gave way to the widely distributed PCs of today, a new generation of relatively tiny and inexpensive factory-built reactors, designed for autonomous plug-and-play operation similar to plugging in an oversized battery, is on the horizon, they say.

    These proposed systems could provide heat for industrial processes or electricity for a military base or a neighborhood, run unattended for five to 10 years, and then be trucked back to the factory for refueling and refurbishment. The authors — Jacopo Buongiorno, MIT’s TEPCO Professor of Nuclear Science and Engineering; Robert Frida, a founder of GenH; Steven Aumeier of the Idaho National Laboratory; and Kevin Chilton, retired commander of the U.S. Strategic Command — have dubbed these small power plants “nuclear batteries.” Because of their simplicity of operation, they could play a significant role in decarbonizing the world’s electricity systems to avert catastrophic climate change, the researchers say. MIT News asked Prof. Buongiorno to describe his group’s proposal.

    Q: The idea of smaller, modular nuclear reactors has been discussed for several years. What makes this proposal for nuclear batteries different?

    A: The units we describe take that concept of factory fabrication and modularity to an extreme. Earlier proposals have looked at reactors in the range of 100 to 300 megawatts of electric output, which are a factor of 10 smaller than the traditional big nuclear reactors at the gigawatt scale. These could be assembled from factory-built components, but they still require some assembly at the site and a lot of site preparation work. So, it’s an improvement over the traditional plants, but it’s not a game changer.

    This nuclear battery concept is really a different thing because of the physical scale and power output of these machines — about 10 megawatts. It’s so small that the whole power plant is actually built in a factory and fits within a standard container.

    This provides several benefits from an economic point of view. Deploying these nuclear batteries does not entail managing a large construction site, which has been the primary source of schedule delays and cost overruns for nuclear projects over the past 20 years.

    The nuclear battery is deployed quickly, say in a few weeks, and it becomes a sort of energy-on-demand service. Nuclear energy can be viewed as a product, not a mega-project.

    Q: You talk about potentially having such units widely distributed, including even in residential areas to power whole neighborhoods. How confident can people be as to the safety of these plants?

    A: The nuclear battery designs that are being developed are exceptionally robust; that’s actually one of the selling points for this technology. The small physical size helps with safety in various ways. First, the amount of residual heat that has to be removed when the reactor is shut down is small. Second, the reactor core has a high surface-to-volume ratio, which also makes it easier to keep the nuclear fuel cool under all circumstances without any external intervention. The system essentially takes care of itself.

    Third, the reactor also has a very compact and strong steel containment structure surrounding it to protect against a release of radioactivity into the biosphere. To enhance security, we envision that at most sites these nuclear batteries would be located below grade, to provide an additional level of protection from an attacking force.
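
    A rough scaling sketch makes the surface-to-volume point above concrete. Idealizing the core as a sphere of radius \(r\),

        \[
          \frac{S}{V} \;=\; \frac{4\pi r^{2}}{\tfrac{4}{3}\pi r^{3}} \;=\; \frac{3}{r},
        \]

    so a core 10 times smaller in radius has 10 times more cooling surface per unit of fuel volume, while the decay heat to be removed (a small fraction of the rated power) shrinks along with the power rating. Both effects favor passive cooling.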

    Q: How do we know that these new kinds of reactors will work, and what would need to happen for such units to become widely available?

    A: NASA and Los Alamos National Laboratory demonstrated a microreactor for space applications in three years (2015-2018) from the start of design to fabrication and testing. And it cost them $20 million, leveraging the available Department of Energy nuclear technology infrastructure. This cost and schedule are orders of magnitude smaller than for traditional large nuclear plants that easily cost billions and take between five years and a decade to build.

    There are half a dozen companies now developing their own designs. For example, Westinghouse is working on a nuclear battery that uses heat pipe technology for cooling, and plans to run a demonstration unit in three years. This would be a pilot plant at one of the national laboratories, for example the Idaho National Laboratory, which has a number of facilities that are being modified to accommodate these small reactors and to perform intense testing on them.

    For example, the reactor can be subjected to more extreme conditions than would ever be encountered in normal operation, and in doing so show by direct testing that failure limits are not exceeded. That provides confidence for the subsequent phase of widespread commercial installation.

    These nuclear batteries are ideally suited to create resilience in every sector of the economy, by providing a steady, dependable source of carbon-free electricity and heat that can be sited just where its output is needed, thus reducing the need for expensive and delicate energy transmission and storage infrastructure. If these become as widespread as we envision, they could make a significant contribution to reducing the world’s greenhouse gas emissions.

  • Revisiting a quantum past for a fusion future

    “I’m going back. It’s almost like a cycle in your life,” muses physicist Abhay Ram.

    Ram, a principal research scientist at the Plasma Science and Fusion Center (PSFC) at MIT, is returning to a field he first embraced as a graduate student at the Institute 50 years ago: quantum mechanics. Funded by the U.S. Department of Energy, he is exploring different pathways for using the power and speed of quantum computing — when it is eventually available — to study electromagnetic waves in plasmas, the so-called “fourth state of matter” that fuels fusion in stars and in energy experiments on earth.

    Ram’s early education put him on a classical science trajectory. Intrigued with mathematics since high school days in India and Kenya, he chose the subject as his major during undergraduate studies at the University of Nairobi. His minor in applied mathematics directed him toward exploring physics-based problems. By the time he arrived at MIT, however, he was ready to move from classical mathematics and physics to quantum field theory, joining the Department of Physics to pursue his PhD.

    “Those were exciting times,” Ram recalls. “Jerome Friedman and Henry Kendall were professors in the department, and they had done pioneering experiments exploring the internal structure of protons and neutrons. Professor Samuel Ting was doing state-of-the-art experiments which eventually led to the discovery of the subatomic J particle. Steven Weinberg, the famous theoretical physicist, was also at MIT at that time. I took a three-semester course taught by him on quantum mechanics and quantum field theory that really stimulated and motivated me to start working in the field.”

    Not until he was a postdoc in the Department of Physics did he begin to feel drawn back to classical physics and practical applications. He was hearing a lot about fusion and its potential to create a virtually endless source of carbon-free energy on earth, but he was uncertain about pursuing research in this field.

    “To me, coming from the physics department, fusion was an engineering problem, not a physics problem. But when I saw the connection between fusion and plasma physics, I really started to think there were many interesting possibilities.”

    He credits an opportune postdoctoral position with electrical engineering and computer science professor Abraham Bers for providing him a path into the fusion community. Although he had no background in the topic other than his own reading, Ram began to feel comfortable in the new field and to understand different ways he could contribute, focusing his research on electromagnetic waves in plasmas and their interaction with charged particles.

    Electromagnetic waves are key to the study of plasma. In plasmas created for fusion research, electromagnetic waves of different frequencies are used for heating and generating current. Plasmas also emit electromagnetic radiation that can be used as a diagnostic tool, providing information about plasma conditions.

    “For diagnostics, we can either look at waves emitted naturally by the plasma or send waves into the plasma and observe their reflection,” Ram explains. “The latter would be similar to the way that a radar operates — it sends out a wave and then looks at the reflected wave to determine the location of an aircraft. When we use launched waves for diagnostics, the waves are of very small amplitude so that they do not modify the plasma, but tickle it just enough to get some information.”
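
    A small worked example of the reflection idea, using the standard electron plasma-frequency formula (this is an editorial illustration, not a calculation from Ram’s research): a launched ordinary-mode wave reflects where the local plasma frequency matches the wave frequency, so the reflection point marks where the plasma reaches the corresponding “cutoff” density.

        # A wave of frequency f reflects where the electron plasma frequency
        # equals f:  f_pe [Hz] ~= 8.98e3 * sqrt(n_e [cm^-3]).
        def cutoff_density_cm3(wave_freq_hz: float) -> float:
            """Electron density at which an O-mode wave of this frequency reflects."""
            return (wave_freq_hz / 8.98e3) ** 2

        # Example: a 60 GHz probe wave reflects near n_e ~ 4.5e13 per cubic centimeter.
        print(f"{cutoff_density_cm3(60e9):.2e}")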

    To fully understand their observations, researchers need the help of computers. Using quantum mechanics to broaden the scope and speed of computations would be a boon to research in plasma physics, and would help optimize the performance of a fusion device.

    With his recent funding, Ram hopes that returning to his quantum roots will ultimately accelerate understanding of classical electromagnetic wave phenomena in laboratory and space plasmas, which he has studied and taught for decades.

    But can a classical physics question be handled on a quantum computer?

    “What is interesting is that we take a classical problem — electromagnetic wave propagation in plasmas — and then try to mold or frame it so that it can be tackled on a quantum computer using algorithms that can be tested on present-day traditional computers. That means we’re taking a classical problem and converting it into a quantum problem.”
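
    One textbook illustration of such a reframing, for fields in vacuum (a plasma adds coupling to particle currents, which is where the real difficulty lies, and this example is editorial rather than Ram’s specific formulation): combining the electric and magnetic fields into the Riemann-Silberstein vector \(\mathbf{F} = \mathbf{E} + ic\mathbf{B}\) turns Maxwell’s equations into a Schrödinger-like form,

        \[
          i\,\frac{\partial \mathbf{F}}{\partial t} \;=\; c\,\nabla \times \mathbf{F},
        \]

    with \(c\,\nabla\times\) playing the role of the Hamiltonian. This is exactly the kind of evolution equation that quantum algorithms are designed to simulate.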

    The perceived advantage of quantum computers is that, for a certain set of problems, their computational power grows exponentially with the number of qubits. In traditional computers, power grows only linearly with the number of processors.

    Ram notes that the ongoing research on wave propagation using traditional computers will provide a broad base for testing and verifying algorithms that will be implemented on quantum computers.

    “Beyond that, due to gain in computational power, we will be able to rapidly solve complex problems that would be a challenge for classical computers,” says Ram. “Consequently, we could discover something new as we explore a broader range of physical phenomena.”

    The real innovation will be in learning to use quantum computing to solve a classical equation, assuring that, once such computers are available, physicists will be ready with questions framed effectively for quantum calculations.

    Ram feels fortunate to be aligning his early experience in quantum mechanics with the expertise he has developed in plasma physics at MIT. He credits support from his MIT mentors and colleagues for his professional growth and for providing a “free-flowing intellectual environment.”

    “I was fortunate that I could change fields and lead a fruitful career in plasma physics and fusion energy research,” he says. “You know, you start studying classical physics, then move to quantum mechanics, then have a career in classical physics. And then you come back to quantum mechanics again! It’s like a full circle — and a half.”

  • Diving into the global problem of technology waste

    While green energy solutions often rely on new technology, MIT students who took class STS.032 (Energy, Environment, and Society) in fall 2020 discovered that even many promising innovations share a downside — electronics waste (e-waste).

    “We’ve been using energy technologies that work well for our needs now, but we don’t think about what happens 30 years in the future,” says Jemma Schroder, a first-year student in the class who learned that waste from solar panels, for example, is on the rise. The International Renewable Energy Agency has projected that, given the current rate of accumulation, the world will have amassed 78 million metric tons of such waste by 2050.

    “We’re trying to dig ourselves out of the pit, but we’re just digging ourselves another pit,” Schroder says. “If you’re really aiming for sustainability, you have to think about all aspects of the problem.”

    Providing context for energy and sustainability issues is the major goal of STS.032, an elective for the Energy Studies minor. “I understand the imperative that we need energy, we need electronic goods, but the environment is an afterthought. That’s a big mistake,” says Professor Clapperton Chakanetsa Mavhunga of the Program in Science, Technology, and Society, who teaches the class.

    “We can no longer just focus on happy stories about technology,” says Mavhunga, who serves on the Energy Minor Oversight Committee, a subcommittee of the Energy Education Task Force of the MIT Energy Initiative. “What I try to do is place energy in everyday life and to show issues everyday people are grappling with.”

    To that end, every year Mavhunga identifies a specific energy challenge and asks students in STS.032 to tackle it. “It’s very much a problem-centered approach to the energy curriculum,” he says.

    Global perspective

    During the fall 2020 term, Mavhunga’s students spent eight weeks exploring the global landscape of energy and electronics waste, including not only cast-off cell phones and computers but also retired solar panel components. Topics covered ranged from the interplay of energy, race, inequality, poverty, and pollution in the United States to the dumping and innovative recycling of e-waste in Africa.

    “We take a world tour, looking at how things are made, how they travel illegally around the world,” Mavhunga says, noting that many cast-off electronics — and their associated pollutants — end up in the Global South. “There is this planned obsolescence at the level of design,” he adds. “And the question of what to do with the waste has not been really discussed.”

    Students in STS.032 say they were shocked to learn that many solar panels are already becoming obsolete and that designers did not plan well for end-of-life reuse or recycling. “Solar panels only last 20 or 30 years, so what happens to them after they stop working is a problem,” Schroder says. “Many can’t be recycled, or they can be but it’s too expensive to do so. So, people end up illegally shipping them off to sit in a waste dump.”

    “It never really occurred to me that electronics waste, especially solar waste, was such a big issue,” says senior Julian Dubransky, who is majoring in humanities and engineering. “I’d argue it’s one of the most important things I learned at MIT.”

    Waste hazards

    STS.032 requires two individual papers and culminates in a final group research paper, which this term focused on characterizing the problems associated with solar and electronics waste and proposing solutions.

    In their final paper, the students noted some of the hazards of electronics waste, including harmful chemicals such as lead, cadmium, and other known carcinogens, which can leach into the soil and contaminate water supplies. “In East African waste dumps, acids and chemicals from solar panels, lead-acid batteries, and lithium batteries are commonly drained directly into the ground to allow the metal components to be melted down and resold,” the students wrote.

    It’s also common to burn the plastic off wires to recover valuable copper, even though the process generates toxic fumes, Schroder says. “It’s not a priority for people to deal with these pollutants, though they are getting into land and water and deteriorating the health of everyone,” she says, because the waste is being processed in areas where subsistence is the higher priority.

    The students concluded that addressing the problem of electronics waste will require more public awareness of the environmental and human health consequences of improperly discarded waste. “Tech waste is a big form of waste that we don’t really talk about or see,” Schroder says.

    “You have to expose these problems and make people aware of them,” Dubransky says, adding that the challenge of addressing electronics waste is more about the will than the way. “There isn’t any true waste product if you can figure out how to reuse it or recycle it.”

    Innovative recycling

    Underscoring that point, STS.032 provided students with several examples of innovative recycling efforts, ranging from simply using water bottles filled with dirt as building blocks to creating new electronics out of the old. “I don’t know what I would do if someone gave me a pile of old electronics pieces, but they’ve created all these amazing machines, even 3D printers, from recycled tech,” Schroder says, referring to entrepreneurs across the continent who have built businesses from electronics waste dumped in Africa (WoeLab in Togo is one example). “It’s really inspiring.”

    Investigating what different communities do with waste is important, because it gives students the chance to see the problem from a new perspective, Mavhunga explains. “Different places in the world are connected, dealing with the same issues in different ways,” he says. “Knowledge doesn’t just come from universities and books. Knowledge can also come from people on the ground.”

    The students in STS.032 were able to identify some big-picture challenges to addressing electronics waste — notably the worldwide problem of inconsistent regulation — but they also had personal takeaways from the class.

    Schroder, for example, says she won’t be upgrading her phone anytime soon. That’s because now that she understands the problem of electronics waste, she wants to do something about it.

    “If you see a coal factory or a coal burner, you see the fumes rising up,” she notes. “What you don’t see is the phone you break and just throw out — you don’t see what happens to that. The lack of awareness of what happens to these devices is a really big problem.”

    The students hope awareness will drive demand for solutions, such as products that are designed for reuse and recycling. “Lack of awareness is probably the biggest issue we have in regard to the e-waste problem. If we’re aware it’s a problem, solutions can start flowing in,” Dubransky says.

    Mavhunga says he hopes STS.032 can help MIT students drive such solutions. “Places like MIT should be where this is done precisely because this is where we’ve got the engineers,” he says. “We need more people at the table who design from an ethical, environmental, and social perspective.”