More stories

  • Moving water and earth

    As a river cuts through a landscape, it can operate like a conveyor belt, moving truckloads of sediment over time. Knowing how quickly or slowly this sediment flows can help engineers plan for the downstream impact of restoring a river or removing a dam. But the models currently used to estimate sediment flow can be off by a wide margin.

    An MIT team has come up with a better formula to calculate how much sediment a fluid can push across a granular bed — a process known as bed load transport. The key to the new formula comes down to the shape of the sediment grains.

    It may seem intuitive: A smooth, round stone should skip across a river bed faster than an angular pebble. But flowing water also pushes harder on the angular pebble, which could erase the round stone’s advantage. Which effect wins? Existing sediment transport models surprisingly don’t offer an answer, mainly because the problem of measuring grain shape is too unwieldy: How do you quantify a pebble’s contours?

    The MIT researchers found that instead of considering a grain’s exact shape, they could boil the concept of shape down to two related properties: friction and drag. A grain’s drag, or resistance to fluid flow, relative to its internal friction, the resistance to sliding past other grains, can provide an easy way to gauge the effects of a grain’s shape.

    When they incorporated this new mathematical measure of grain shape into a standard model for bed load transport, the new formula made predictions that matched experiments that the team performed in the lab.

    “Sediment transport is a part of life on Earth’s surface, from the impact of storms on beaches to the gravel nests in mountain streams where salmon lay their eggs,” the team writes of their new study, appearing today in Nature. “Damming and sea level rise have already impacted many such terrains and pose ongoing threats. A good understanding of bed load transport is crucial to our ability to maintain these landscapes or restore them to their natural states.”

    The study’s authors are Eric Deal, Santiago Benavides, Qiong Zhang, Ken Kamrin, and Taylor Perron of MIT, and Jeremy Venditti and Ryan Bradley of Simon Fraser University in Canada.

    Figuring flow

    Video of glass spheres (top) and natural river gravel (bottom) undergoing bed load transport in a laboratory flume, slowed down 17x relative to real time. Average grain diameter is about 5 mm. This video shows how rolling and tumbling natural grains interact with one another in a way that is not possible for spheres. What can’t be seen so easily is that natural grains also experience higher drag forces from the flowing water than spheres do.

    Credit: Courtesy of the researchers


    Bed load transport is the process by which a fluid such as air or water drags grains across a bed of sediment, causing the grains to hop, skip, and roll along the surface as the fluid flows over it. This movement of sediment in a current is what drives rocks to migrate down a river and sand grains to skip across a desert.

    Being able to estimate bed load transport can help scientists prepare for situations such as urban flooding and coastal erosion. Since the 1930s, one formula has been the go-to model for calculating bed load transport; it’s based on a quantity known as the Shields parameter, after the American engineer who originally derived it. This formula sets a relationship between the force of a fluid pushing on a bed of sediment, and how fast the sediment moves in response. Albert Shields incorporated certain variables into this formula, including the average size and density of a sediment’s grains — but not their shape.

    “People may have backed away from accounting for shape because it’s one of these very scary degrees of freedom,” says Kamrin, a professor of mechanical engineering at MIT. “Shape is not a single number.”

    And yet, the existing model has been known to be off by a factor of 10 in its predictions of sediment flow. The team wondered whether grain shape could be a missing ingredient, and if so, how the nebulous property could be mathematically represented.

    “The trick was to focus on characterizing the effect that shape has on sediment transport dynamics, rather than on characterizing the shape itself,” says Deal.

    “It took some thinking to figure that out,” says Perron, a professor of geology in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “But we went back to derive the Shields parameter, and when you do the math, this ratio of drag to friction falls out.”
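
    For reference, the classical Shields parameter compares the fluid stress on the bed to the submerged weight of the grains; the team's modification is shown only schematically here (the exact functional form is in the paper), with the drag-to-friction ratio folded in:

    ```latex
    % tau : bed shear stress      rho_s, rho_f : grain and fluid densities
    % D   : grain diameter        C_D          : grain drag coefficient
    % mu  : internal friction coefficient; f is left unspecified here
    \tau_* = \frac{\tau}{(\rho_s - \rho_f)\, g\, D}
    \qquad \longrightarrow \qquad
    \tau_*^{\mathrm{shape}} \sim \tau_* \, f\!\left(\frac{C_D}{\mu}\right)
    ```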

    Drag and drop

    Their work showed that the Shields parameter — which predicts how much sediment is transported — can be modified to include not just size and density, but also grain shape, and furthermore, that a grain’s shape can be simply represented by a measure of the grain’s drag and its internal friction. The math seemed to make sense. But could the new formula predict how sediment actually flows?

    To answer this, the researchers ran a series of flume experiments, in which they pumped a current of water through an inclined tank with a floor covered in sediment. They ran tests with sediment of various grain shapes, including beds of round glass beads, smooth glass chips, rectangular prisms, and natural gravel. They measured the amount of sediment that was transported through the tank in a fixed amount of time. They then determined the effect of each sediment type’s grain shape by measuring the grains’ drag and friction.

    For drag, the researchers simply dropped individual grains down through a tank of water and gathered statistics for the time it took the grains of each sediment type to reach the bottom. For instance, a flatter grain type takes a longer time on average, and therefore has greater drag, than a round grain type of the same size and density.

    To measure friction, the team poured grains through a funnel and onto a circular tray, then measured the resulting pile’s angle, or slope — an indication of the grains’ friction, or ability to grip onto each other.
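
    A minimal sketch of how those two bench measurements reduce to the numbers the formula needs, using the standard settling force balance and angle-of-repose relation (the example values are illustrative, not the study's data):

    ```python
    # Reduce the two lab measurements to a drag and a friction coefficient.
    import math

    def drag_coefficient(settling_v, d, rho_grain, rho_fluid=1000.0, g=9.81):
        """C_D from terminal settling: drag balances the submerged weight,
        using the equivalent sphere's frontal area and volume."""
        return (4.0 / 3.0) * g * d * (rho_grain / rho_fluid - 1.0) / settling_v**2

    def friction_coefficient(pile_angle_deg):
        """Internal friction from the angle of repose of a poured pile."""
        return math.tan(math.radians(pile_angle_deg))

    # Example: 5 mm gravel settling at 0.3 m/s, piling up at a 35-degree slope
    cd = drag_coefficient(settling_v=0.3, d=0.005, rho_grain=2650.0)
    mu = friction_coefficient(35.0)
    print(f"C_D ~ {cd:.2f}, mu ~ {mu:.2f}, drag-to-friction ratio ~ {cd / mu:.2f}")
    ```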

    For each sediment type, they then worked the measured drag and friction into the new formula and found that it could indeed predict the bed load transport, or amount of moving sediment, that the researchers had measured in their experiments.

    The team says the new model more accurately represents sediment flow. Going forward, scientists and engineers can use the model to better gauge how a river bed will respond to scenarios such as sudden flooding from severe weather or the removal of a dam.

    “If you were trying to make a prediction of how fast all that sediment will get evacuated after taking a dam out, and you’re wrong by a factor of three or five, that’s pretty bad,” Perron says. “Now we can do a lot better.”

    This research was supported, in part, by the U.S. Army Research Laboratory.

  • New MIT internships expand research opportunities in Africa

    With new support from the Office of the Associate Provost for International Activities, MIT International Science and Technology Initiatives (MISTI) and the MIT-Africa program are expanding internship opportunities for MIT students at universities and leading academic research centers in Africa. This past summer, MISTI supported 10 MIT student interns at African universities, significantly more than in any previous year.

    “These internships are an opportunity to better merge the research ecosystem of MIT with academia-based research systems in Africa,” says Evan Lieberman, the Total Professor of Political Science and Contemporary Africa and faculty director for MISTI.

    For decades, MISTI has helped MIT students to learn and explore through international experiential learning opportunities and internships in industries like health care, education, agriculture, and energy. MISTI’s MIT-Africa Seed Fund supports collaborative research between MIT faculty and Africa-based researchers, and the new student research internship opportunities are part of a broader vision for deeper engagement between MIT and research institutions across the African continent.

    While Africa is home to 12.5 percent of the world’s population, it generates less than 1 percent of scientific research output in the form of academic journal publications, according to the African Academy of Sciences. Research internships are one way that MIT can build mutually beneficial partnerships across Africa’s research ecosystem, to advance knowledge and spawn innovation in fields important to MIT and its African counterparts, including health care, biotechnology, urban planning, sustainable energy, and education.

    Ari Jacobovits, managing director of MIT-Africa, notes that the new internships provide additional funding to the lab hosting the MIT intern, enabling them to hire a counterpart student research intern from the local university. This support can make the internships more financially feasible for host institutions and helps to grow the research pipeline.

    With MIT’s support, State University of Zanzibar (SUZA) lecturers Raya Ahmada and Abubakar Bakar were able to hire local students to work alongside MIT graduate students Mel Isidor and Rajan Hoyle. Together, the students spent the summer collaborating on a mapping project designed to help plan for and protect Zanzibar’s coastal economy.

    “It’s been really exciting to work with research peers in a setting where we can all learn alongside one another and develop this project together,” says Hoyle.

    Using low-cost drone technology, the students and their local counterparts created detailed maps of Zanzibar to support community planning around resilience projects designed to combat coastal flooding and deforestation, and to assess climate-related impacts on seaweed farming.

    “I really appreciated learning about how engagement happens in this particular context and how community members understand local environmental challenges and conditions based on research and lived experience,” says Isidor. “This is beneficial for us whether we’re working in an international context or in the United States.”

    For biology major Shaida Nishat, an internship at the University of Cape Town offered the chance to work in a vital sphere of public health on a diverse, international team headed by Associate Professor Salome Maswime, head of the global surgery division. Maswime is a widely renowned expert in global surgery, a multidisciplinary field within global health focused on improved and equitable surgical outcomes.

    “It broadened my perspective as to how an effort like global surgery ties so many nations together through a common goal that would benefit them all,” says Nishat, who plans to pursue a career in public health.

    For computer science sophomore Antonio L. Ortiz Bigio, the MISTI research internship in Africa was an incomparable experience, culturally and professionally. Bigio interned at the Robotics Autonomous Intelligence and Learning Laboratory at the University of the Witwatersrand in Johannesburg, led by Professor Benjamin Rosman, where he developed software to enable a robot to play chess. The experience has inspired Bigio to continue pursuing robotics and machine learning.

    Participating faculty at the host institutions welcomed their MIT interns and were impressed by their capabilities. Both Rosman and Maswime described the interns as hard-working, valued team members who had helped to advance the labs’ research.

    Building strong global partnerships, whether through faculty research, student internships, or other initiatives, takes time and cultivation, explains Jacobovits. Each successful collaboration helps to seed future exchanges and builds interest at MIT and peer institutions in creative partnerships. As MIT continues to deepen its connections to institutions and researchers across Africa, says Jacobovits, “students like Shaida, Rajan, Mel, and Antonio are really effective ambassadors in building those networks.”

  • Strengthening electron-triggered light emission

    The way electrons interact with photons of light is a key part of many modern technologies, from lasers to solar panels to LEDs. But the interaction is inherently a weak one because of a major mismatch in scale: A wavelength of visible light is about 1,000 times larger than an electron, so the way the two things affect each other is limited by that disparity.

    Now, researchers at MIT and elsewhere have come up with an innovative way to make much stronger interactions between photons and electrons possible, in the process producing a hundredfold increase in the emission of light from a phenomenon called Smith-Purcell radiation. The finding has potential implications for both commercial applications and fundamental scientific research, although it will require more years of research to make it practical.

    The findings are reported today in the journal Nature, in a paper by MIT postdocs Yi Yang (now an assistant professor at the University of Hong Kong) and Charles Roques-Carmes, MIT professors Marin Soljačić and John Joannopoulos, and five others at MIT, Harvard University, and Technion-Israel Institute of Technology.

    In a combination of computer simulations and laboratory experiments, the team found that by pairing a beam of electrons with a specially designed photonic crystal — a slab of silicon on an insulator, etched with an array of nanometer-scale holes — they could theoretically predict emission many orders of magnitude stronger than is possible with conventional Smith-Purcell radiation. They also experimentally recorded a hundredfold increase in radiation in their proof-of-concept measurements.

    Unlike other approaches to producing sources of light or other electromagnetic radiation, the free-electron-based method is fully tunable — it can produce emissions of any desired wavelength, simply by adjusting the size of the photonic structure and the speed of the electrons. This may make it especially valuable for making sources of emission at wavelengths that are difficult to produce efficiently, including terahertz waves, ultraviolet light, and X-rays.
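
    That tunability follows from the textbook Smith-Purcell relation: a structure of period L traversed by electrons at speed beta times c emits order-n light at wavelength (L/n)(1/beta − cos theta). A quick sketch, with example numbers of our own choosing:

    ```python
    # Textbook Smith-Purcell dispersion relation (conventional form,
    # not the flatband enhancement itself).
    import math

    def sp_wavelength_nm(period_nm, beta, theta_deg, n=1):
        """Emission wavelength in nm at angle theta from the beam direction."""
        return (period_nm / n) * (1.0 / beta - math.cos(math.radians(theta_deg)))

    # A 300 nm period structure with electrons at half the speed of light,
    # observed perpendicular to the beam: 600 nm, i.e., visible orange-red.
    print(f"{sp_wavelength_nm(300, beta=0.5, theta_deg=90):.0f} nm")
    ```

    Shrinking the period pushes the emission toward shorter wavelengths, and changing the electron speed shifts it continuously, which is the tunability described above.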

    The team has so far demonstrated the hundredfold enhancement in emission using a repurposed electron microscope to function as an electron beam source. But they say that the basic principle involved could potentially enable far greater enhancements using devices specifically adapted for this function.

    The approach is based on a concept called flatbands, which have been widely explored in recent years in condensed matter physics and photonics but had never been applied to the basic interaction of photons and free electrons. The underlying principle involves the transfer of momentum from the electron to a group of photons, or vice versa. Whereas conventional light-electron interactions rely on producing light at a single angle, the photonic crystal is tuned so that it enables light production over a whole range of angles.

    The same process could also be used in the opposite direction, using resonant light waves to propel electrons, increasing their velocity in a way that could potentially be harnessed to build miniaturized particle accelerators on a chip. These might ultimately be able to perform some functions that currently require giant underground tunnels, such as the 27-kilometer-long Large Hadron Collider in Switzerland.

    “If you could actually build electron accelerators on a chip,” Soljačić says, “you could make much more compact accelerators for some of the applications of interest, which would still produce very energetic electrons. That obviously would be huge. For many applications, you wouldn’t have to build these huge facilities.”

    The new system could also potentially provide a highly controllable X-ray beam for radiotherapy purposes, Roques-Carmes says.

    And the system could be used to generate multiple entangled photons, a quantum effect that could be useful in the creation of quantum-based computational and communications systems, the researchers say. “You can use electrons to couple many photons together, which is a considerably hard problem if using a purely optical approach,” says Yang. “That is one of the most exciting future directions of our work.”

    Much work remains to translate these new findings into practical devices, Soljačić cautions. It may take some years to develop the necessary interfaces between the optical and electronic components, to work out how to connect them on a single chip, and to develop the needed on-chip electron source producing a continuous wavefront, among other challenges.

    “The reason this is exciting,” Roques-Carmes adds, “is because this is quite a different type of source.” Most technologies for generating light are restricted to very specific ranges of color or wavelength, and “it’s usually difficult to move that emission frequency. Here it’s completely tunable. Simply by changing the velocity of the electrons, you can change the emission frequency. … That excites us about the potential of these sources. Because they’re different, they offer new types of opportunities.”

    But, Soljačić concludes, “in order for them to become truly competitive with other types of sources, I think it will require some more years of research. I would say that with some serious effort, in two to five years they might start competing in at least some areas of radiation.”

    The research team also included Steven Kooi at MIT’s Institute for Soldier Nanotechnologies, Haoning Tang and Eric Mazur at Harvard University, Justin Beroz at MIT, and Ido Kaminer at Technion-Israel Institute of Technology. The work was supported by the U.S. Army Research Office through the Institute for Soldier Nanotechnologies, the U.S. Air Force Office of Scientific Research, and the U.S. Office of Naval Research.

  • MIT scientists contribute to National Ignition Facility fusion milestone

    On Monday, Dec. 5, at around 1 a.m., a tiny sphere of deuterium-tritium fuel surrounded by a cylindrical can of gold called a hohlraum was targeted by 192 lasers at the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory (LLNL) in California. Over the course of billionths of a second, the lasers fired, generating X-rays inside the gold can, and imploding the sphere of fuel.

    On that morning, for the first time ever, the lasers delivered 2.1 megajoules of energy and yielded 3.15 megajoules in return, achieving a historic fusion energy gain well above 1 — a result verified by diagnostic tools developed by the MIT Plasma Science and Fusion Center (PSFC). The importance of these tools was highlighted by Arthur Pak, an LLNL staff scientist who spoke at a U.S. Department of Energy press event on Dec. 13 announcing the NIF’s success.
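
    In energy-gain terms, the shot’s own numbers give

    ```latex
    Q \;=\; \frac{E_{\mathrm{fusion}}}{E_{\mathrm{laser}}}
      \;=\; \frac{3.15\ \mathrm{MJ}}{2.1\ \mathrm{MJ}} \;\approx\; 1.5 \;>\; 1
    ```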

    Johan Frenje, head of the PSFC High-Energy-Density Physics division, notes that this milestone “will have profound implications for laboratory fusion research in general.”

    Since the late 1950s, researchers worldwide have pursued fusion ignition and energy gain in a laboratory, considering it one of the grand challenges of the 21st century. Ignition can only be reached when the internal fusion heating power is high enough to overcome the physical processes that cool the fusion plasma, creating a positive thermodynamic feedback loop that very rapidly increases the plasma temperature. In the case of inertial confinement fusion, the method used at the NIF, ignition can initiate a “fuel burn propagation” into the surrounding dense and cold fuel, and when done correctly, enable fusion-energy gain.

    Frenje and his PSFC division initially designed dozens of diagnostic systems that were implemented at the NIF, including the vitally important magnetic recoil neutron spectrometer (MRS), which measures the neutron energy spectrum, from which the fusion yield, plasma ion temperature, and spherical fuel pellet compression (“fuel areal density”) can be determined. Overseen by PSFC Research Scientist Maria Gatu Johnson since 2013, the MRS is one of two systems at the NIF relied upon to measure the absolute neutron yield from the Dec. 5 experiment, because of its unique ability to accurately interpret an implosion’s neutron signals.

    “Before the announcement of this historic achievement could be made, the LLNL team wanted to wait until Maria had analyzed the MRS data to an adequate level for a fusion yield to be determined,” says Frenje.

    Response around MIT to NIF’s announcement has been enthusiastic and hopeful. “This is the kind of breakthrough that ignites the imagination,” says Vice President for Research Maria Zuber, “reminding us of the wonder of discovery and the possibilities of human ingenuity. Although we have a long, hard path ahead of us before fusion can deliver clean energy to the electrical grid, we should find much reason for optimism in today’s announcement. Innovation in science and technology holds great power and promise to address some of the world’s biggest challenges, including climate change.”

    Frenje also credits the rest of the team at the PSFC’s High-Energy-Density Physics division, the Laboratory for Laser Energetics at the University of Rochester, LLNL, and other collaborators for their support and involvement in this research, as well as the National Nuclear Security Administration of the Department of Energy, which has funded much of their work since the early 1990s. He is also proud of the number of MIT PhDs that have been generated by the High-Energy-Density Physics Division and subsequently hired by LLNL, including the experimental lead for this experiment, Alex Zylstra PhD ’15.

    “This is really a team effort,” says Frenje. “Without the scientific dialogue and the extensive know-how at the HEDP Division, the critical contributions made by the MRS system would not have happened.”

  • Microparticles could help prevent vitamin A deficiency

    Vitamin A deficiency is the world’s leading cause of childhood blindness, and in severe cases, it can be fatal. About one-third of the world’s preschool-aged children suffer from this deficiency, which is most prevalent in sub-Saharan Africa and South Asia.

    MIT researchers have now developed a new way to fortify foods with vitamin A, which they hope could help to improve the health of millions of people around the world. In a new study, they showed that encapsulating vitamin A in a protective polymer prevents the nutrient from being broken down during cooking or storage.

    “Vitamin A is a very important micronutrient, but it’s an unstable molecule,” says Ana Jaklenec, a research scientist at MIT’s Koch Institute for Integrative Cancer Research. “We wanted to see if our encapsulated vitamin A could fortify a food vehicle like bouillon cubes or flour, throughout storage and cooking, and whether the vitamin A could remain biologically active and be absorbed.”

    In a small clinical trial, the researchers showed that when people ate bread fortified with encapsulated vitamin A, the bioavailability of the nutrient was similar to when they consumed vitamin A on its own. The technology has been licensed to two companies that hope to develop it for use in food products.

    “This is a study that our team is really excited about because it shows that everything we did in test tubes and animals works safely and effectively in humans,” says Robert Langer, the David H. Koch Institute Professor at MIT and a member of the Koch Institute. “We hope this opens the door for someday helping millions, if not billions, of people in the developing world.”

    Jaklenec and Langer are the senior authors of the new study, which appears this week in the Proceedings of the National Academy of Sciences. The paper’s lead author is former MIT postdoc Wen Tang, who is now an associate professor at South China University of Technology.

    Nutrient stability

    Vitamin A is critical not only for vision but also for the functioning of the immune system and organs such as the heart and lungs. Efforts to add vitamin A to bread or other foods such as bouillon cubes, which are commonly consumed in West African countries, have been largely unsuccessful because the vitamin breaks down during storage or cooking.

    In a 2019 study, the MIT team showed that they could use a polymer called BMC to encapsulate nutrients, including iron, vitamin A, and several others. They showed that this protective coating improved the shelf life of the nutrients, and that people who consumed bread fortified with encapsulated iron were able to absorb the iron.

    BMC is classified by the FDA as “generally recognized as safe,” and is already used in coatings for drugs and dietary supplements. In the new study, the researchers focused on using this polymer to encapsulate vitamin A, a nutrient that is very sensitive to temperature and ultraviolet light.

    Using an industrial technique known as a spinning disc process, the researchers mixed vitamin A with the polymer to form particles 100 to 200 microns in diameter. They also coated the particles with starch, which prevents them from sticking to each other.

    The researchers found that vitamin A encapsulated in the polymer particles was more resistant to degradation by intense light, high temperatures, or boiling water. Under those conditions, much more vitamin A remained active than when the vitamin A was free or delivered in a form called VitA 250, currently the most stable form of vitamin A used for food fortification.

    The researchers also showed that the encapsulated particles could be easily incorporated into flour or bouillon cubes. To test how well they would survive long-term storage, the researchers exposed the cubes to harsh conditions, as recommended by the World Health Organization: 40 degrees Celsius (104 degrees Fahrenheit) and 75 percent humidity. Under those conditions, the encapsulated vitamin A was much more stable than other forms of vitamin A. 

    “The enhanced stability of vitamin A with our technology can ensure that the vitamin A-fortified food does provide the recommended daily uptake of vitamin A, even after long-term storage in a hot humidified environment, and cooking processes such as boiling or baking,” Tang says. “People who are suffering from vitamin A deficiency and want to get vitamin A through fortified food will benefit, without changing their daily routines, and without wondering how much vitamin A is still in the food.”

    Vitamin absorption

    When the researchers cooked their encapsulated particles and then fed them to animals, they found that 30 percent of the vitamin A was absorbed, the same as for free, uncooked vitamin A, compared with about 3 percent for free vitamin A that had been cooked.

    Working with Biofortis, a company that does dietary clinical testing, the researchers then evaluated how well vitamin A was absorbed in people who ate foods fortified with the particles. For this study, the researchers incorporated the particles into bread, then measured vitamin A levels in the blood over a 24-hour period after the bread was consumed. They found that when vitamin A was encapsulated in the BMC polymer, it was absorbed from the food at levels comparable to free vitamin A, indicating that it is readily released in bioactive form.

    Two companies have licensed the technology and are focusing on developing products fortified with vitamin A and other nutrients. A benefit corporation called Particles for Humanity, funded by the Bill and Melinda Gates Foundation, is working with partners in Africa to incorporate this technology into existing fortification efforts. Another company called VitaKey, founded by Jaklenec, Langer, and others, is working on using this approach to add nutrients to a variety of foods and beverages.

    The research was funded by the Bill and Melinda Gates Foundation. Other authors of the paper include Jia Zhuang, Aaron Anselmo, Xian Xu, Aranda Duan, Ruojie Zhang, James Sugarman, Yingying Zeng, Evan Rosenberg, Tyler Graf, Kevin McHugh, Stephany Tzeng, Adam Behrens, Lisa Freed, Lihong Jing, Surangi Jayawardena, Shelley Weinstock, Xiao Le, Christopher Sears, James Oxley, John Daristotle, and Joe Collins.

  • Pursuing a practical approach to research

    Koroush Shirvan, the John Clark Hardwick Career Development Professor in the Department of Nuclear Science and Engineering (NSE), knows that the nuclear industry has traditionally been wary of innovations until they are shown to have proven utility. As a result, he has relentlessly focused on practical applications in his research, work that has netted him the 2022 Reactor Technology Award from the American Nuclear Society. “The award has usually recognized practical contributions to the field of reactor design and has not often gone to academia,” Shirvan says.

    One of these “practical contributions” is in the field of accident-tolerant fuels, a program launched by the U.S. Department of Energy in the wake of the 2011 Fukushima Daiichi accident. The goal within this program, says Shirvan, is to develop new forms of nuclear fuel that can better withstand the extreme heat of accident conditions. His team, with students from over 16 countries, is working on numerous possibilities that range in composition and method of production.

    Another aspect of Shirvan’s research focuses on how radiation affects heat transfer mechanisms in the reactor; the team found fuel corrosion to be the driving factor. “[The research] informs how nuclear fuels perform in the reactor, from a practical point of view,” Shirvan says.

    Optimizing nuclear reactor design

    A summer internship when Shirvan was an undergraduate at the University of Florida at Gainesville seeded his drive to focus on practical applications in his studies. A nearby nuclear utility was losing millions because of crud accumulating on fuel rods. The company had been sidestepping the problem by loading more fuel before it had extracted all the life from earlier batches.

    Placement of fuel rods in nuclear reactors is a complex problem in which many factors, such as the life of the fuel and the location of hot spots, affect the outcome. A core holds roughly 200 to 800 assemblies, and reactors reshuffle their fuel configuration every 18 to 24 months while satisfying some 15 to 20 constraints. The mind-boggling nature of the problem means that plants have to rely on experienced engineers.

    During his internship, Shirvan optimized the program used to place fuel rods in the reactor. He found that certain rods in assemblies were more prone to the crud deposits, and reworked their configurations, optimizing for these rods’ performance instead of adding assemblies.

    In recent years, Shirvan has applied a branch of artificial intelligence — reinforcement learning — to the configuration problem and created a software program used by the largest U.S. nuclear utility. “This program gives even a layperson the ability to reconfigure the fuels and the reactor without having expert knowledge,” Shirvan says.
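
    The article does not describe the software’s internals, but a purely hypothetical toy version of the technique named here can be sketched: treat each refueling as a sequential placement problem and train a policy with a REINFORCE update to minimize a power-peaking proxy. The six-position “core,” the reactivity and position weights, and the reward are all made up for illustration.

    ```python
    # Toy REINFORCE policy gradient for a loading-pattern puzzle.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 6
    reactivity = np.linspace(0.5, 1.5, N)                        # fresher fuel runs hotter
    position_weight = np.array([1.4, 1.2, 1.0, 1.0, 0.8, 0.6])   # center positions run hot
    theta = np.zeros((N, N))                                     # logits: position x assembly

    def sample_loading():
        """Place one assembly per position, masking those already used."""
        available = np.ones(N, dtype=bool)
        choices, steps = [], []
        for pos in range(N):
            logits = np.where(available, theta[pos], -np.inf)
            p = np.exp(logits - logits.max())
            p /= p.sum()
            a = int(rng.choice(N, p=p))
            choices.append(a)
            steps.append((pos, a, p))
            available[a] = False
        return choices, steps

    def peaking_factor(choices):
        power = reactivity[choices] * position_weight
        return power.max() / power.mean()                        # lower is better

    lr, baseline = 0.2, None
    for _ in range(2000):
        choices, steps = sample_loading()
        reward = -peaking_factor(choices)
        baseline = reward if baseline is None else 0.95 * baseline + 0.05 * reward
        for pos, a, p in steps:                                  # REINFORCE update
            grad = -p                                            # d log pi / d theta
            grad[a] += 1.0
            theta[pos] += lr * (reward - baseline) * grad

    choices, _ = sample_loading()
    print("loading:", choices, "peaking factor:", round(peaking_factor(choices), 3))
    ```

    The real problem, with hundreds of assemblies and 15 to 20 constraints, requires far more sophisticated machinery, but the structure — a sequenced placement policy rewarded for flatter power — is the same kind of formulation.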

    From advanced math to counting jelly beans

    Shirvan’s own expertise in nuclear science and engineering developed quite organically. He grew up in Tehran, Iran, and when he was 14 the family moved to Gainesville, where Shirvan’s aunt and family live. He remembers an awkward couple of years at the new high school where he was grouped in with newly arrived international students, and placed in entry-level classes. “I went from doing advanced mathematics in Iran to counting jelly beans,” he laughs.

    Shirvan applied to the University of Florida for his undergraduate studies because it made economic sense: the school gave full scholarships to Floridian students who received a certain minimum SAT score, and Shirvan qualified. His uncle, then a professor in the nuclear engineering department, encouraged Shirvan to take classes there. Under his uncle’s mentorship, the courses Shirvan took, along with his internship, cemented his love of the interdisciplinary approach the field demands.

    Having always known that he wanted to teach — he remembers finishing his math tests early in Tehran so he could earn the reward of being class monitor — Shirvan knew graduate school was next. His uncle encouraged him to apply to MIT and to the University of Michigan, home to reputable programs in the field. Shirvan chose MIT because “only at MIT was there a program on nuclear design. There were faculty dedicated to designing new reactors, looking at multiple disciplines, and putting all of that together.” He went on to pursue his master’s and doctoral studies at NSE under the supervision of Professor Mujid Kazimi, focusing on compact pressurized and boiling water reactor designs. When Kazimi passed away suddenly in 2015, Shirvan, then a research scientist, switched to the tenure track to guide the professor’s team.

    Another project Shirvan took on in 2015 was leadership of MIT’s course on nuclear reactor technology for utility executives. Offered only by the Institute, the program is an introduction to nuclear engineering and safety for personnel who might not have much background in the area. “It’s a great course because you get to see what the real problems are in the energy sector … like grid stability,” Shirvan says.

    A multipronged approach to savings

    Another very real problem nuclear utilities face is cost. Contrary to what one hears on the news, one of the biggest stumbling blocks to building new nuclear facilities in the United States is cost, which today can be up to three times that of renewables, Shirvan says. While many approaches such as advanced manufacturing have been tried, Shirvan believes that the solution to decrease expenditures lies in designing more compact reactors.

    His team has developed an open-source advanced nuclear cost tool and has focused on two designs: a small water reactor using compact steam technology and a horizontal gas reactor. Compactness also means making fuels more efficient, as Shirvan’s work does, and improving the heat-exchange devices. It all comes back to basics and to bringing “commercially viable arguments in with your research,” Shirvan explains.

    Shirvan is excited about the future of the U.S. nuclear industry, and about the fact that the 2022 Inflation Reduction Act grants the same subsidies to nuclear as it does to renewables. Even on this newly level playing field, advanced nuclear still has a long way to go on affordability, he admits. “It’s time to push forward with cost-effective design,” Shirvan says. “I look forward to supporting this by continuing to guide these efforts with research from my team.”

  • A healthy wind

    Nearly 10 percent of today’s electricity in the United States comes from wind power. The renewable energy source benefits climate, air quality, and public health by displacing emissions of greenhouse gases and air pollutants that would otherwise be produced by fossil-fuel-based power plants.

    A new MIT study finds that the health benefits associated with wind power could more than quadruple if operators prioritized turning down output from the most polluting fossil-fuel-based power plants when energy from wind is available.

    In the study, published today in Science Advances, researchers analyzed the hourly activity of wind turbines, as well as the reported emissions from every fossil-fuel-based power plant in the country, between the years 2011 and 2017. They traced emissions across the country and mapped the pollutants to affected demographic populations. They then calculated the regional air quality and associated health costs to each community.

    The researchers found that in 2014, wind power that was associated with state-level policies improved air quality overall, resulting in $2 billion in health benefits across the country. However, only roughly 30 percent of these health benefits reached disadvantaged communities.

    The team further found that if the electricity industry were to reduce the output of the most polluting fossil-fuel-based power plants, rather than the most cost-saving plants, in times of wind-generated power, the overall health benefits could quadruple to $8.4 billion nationwide. However, the results would have a similar demographic breakdown.

    “We found that prioritizing health is a great way to maximize benefits in a widespread way across the U.S., which is a very positive thing. But it suggests it’s not going to address disparities,” says study co-author Noelle Selin, a professor in the Institute for Data, Systems, and Society and the Department of Earth, Atmospheric and Planetary Sciences at MIT. “In order to address air pollution disparities, you can’t just focus on the electricity sector or renewables and count on the overall air pollution benefits addressing these real and persistent racial and ethnic disparities. You’ll need to look at other air pollution sources, as well as the underlying systemic factors that determine where plants are sited and where people live.”

    Selin’s co-authors are lead author and former MIT graduate student Minghao Qiu PhD ’21, now at Stanford University, and Corwin Zigler at the University of Texas at Austin.

    Turn-down service

    In their new study, the team looked for patterns between periods of wind power generation and the activity of fossil-fuel-based power plants, to see how regional electricity markets adjusted the output of power plants in response to influxes of renewable energy.

    “One of the technical challenges, and the contribution of this work, is trying to identify which are the power plants that respond to this increasing wind power,” Qiu notes.

    To do so, the researchers compared two historical datasets spanning 2011 to 2017: an hour-by-hour record of energy output from wind turbines across the country, and a detailed record of emissions measurements from every fossil-fuel-based power plant in the U.S. The datasets covered seven major regional electricity markets, each providing energy to one or more states.

    “California and New York are each their own market, whereas the New England market covers around seven states, and the Midwest covers more,” Qiu explains. “We also cover about 95 percent of all the wind power in the U.S.”

    In general, they observed that, in times when wind power was available, markets adjusted by essentially scaling back the power output of natural gas and sub-bituminous coal-fired power plants. They noted that the plants that were turned down were likely chosen for cost-saving reasons, as certain plants were less costly to turn down than others.

    The team then used a sophisticated atmospheric chemistry model to simulate the wind patterns and chemical transport of emissions across the country, and determined where and at what concentrations the emissions generated fine particulates and ozone — two pollutants that are known to damage air quality and human health. Finally, the researchers mapped the general demographic populations across the country, based on U.S. census data, and applied a standard epidemiological approach to calculate a population’s health cost as a result of their pollution exposure.
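
    That last step is standard in air-quality health assessments. A minimal sketch of the calculation, using a log-linear concentration-response function and monetization; all numbers below are illustrative placeholders, not the study’s inputs:

    ```python
    # Standard epidemiological damage calculation, in miniature.
    import math

    def excess_deaths(pop, baseline_rate, delta_pm25, beta=0.006):
        """Deaths attributable to a PM2.5 increment (log-linear CRF)."""
        return pop * baseline_rate * (1.0 - math.exp(-beta * delta_pm25))

    def health_benefit(pop, baseline_rate, delta_pm25, vsl=9e6):
        """Monetize avoided deaths with a value of statistical life (VSL)."""
        return excess_deaths(pop, baseline_rate, delta_pm25) * vsl

    # Example: 1 million people, 0.8% annual baseline mortality, and a
    # 0.1 microgram-per-cubic-meter reduction in PM2.5 from wind power
    print(f"${health_benefit(1e6, 0.008, 0.1):,.0f} per year")
    ```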

    This analysis revealed that, in the year 2014, a general cost-saving approach to displacing fossil-fuel-based energy in times of wind energy resulted in $2 billion in health benefits, or savings, across the country. A smaller share of these benefits went to disadvantaged populations, such as communities of color and low-income communities, though this disparity varied by state.

    “It’s a more complex story than we initially thought,” Qiu says. “Certain population groups are exposed to a higher level of air pollution, and those would be low-income people and racial minority groups. What we see is, developing wind power could reduce this gap in certain states but further increase it in other states, depending on which fossil-fuel plants are displaced.”

    Tweaking power

    The researchers then examined how the pattern of emissions and the associated health benefits would change if they prioritized turning down different fossil-fuel-based plants in times of wind-generated power. They tweaked the emissions data to reflect several alternative scenarios: one in which the most health-damaging, polluting power plants are turned down first; and two other scenarios in which plants producing the most sulfur dioxide and carbon dioxide respectively, are first to reduce their output.

    They found that while each scenario increased health benefits overall, and the first scenario in particular could quadruple health benefits, the original disparity persisted: Communities of color and low-income communities still experienced smaller health benefits than more well-off communities.

    “We got to the end of the road and said, there’s no way we can address this disparity by being smarter in deciding which plants to displace,” Selin says.

    Nevertheless, the study can help identify ways to improve the health of the general population, says Julian Marshall, a professor of environmental engineering at the University of Washington.

    “The detailed information provided by the scenarios in this paper can offer a roadmap to electricity-grid operators and to state air-quality regulators regarding which power plants are highly damaging to human health and also are likely to noticeably reduce emissions if wind-generated electricity increases,” says Marshall, who was not involved in the study.

    “One of the things that makes me optimistic about this area is, there’s a lot more attention to environmental justice and equity issues,” Selin concludes. “Our role is to figure out the strategies that are most impactful in addressing those challenges.”

    This work was supported, in part, by the U.S. Environmental Protection Agency, and by the National Institutes of Health.

  • Reversing the charge

    Owners of electric vehicles (EVs) are accustomed to plugging into charging stations at home and at work and filling up their batteries with electricity from the power grid. But someday soon, when these drivers plug in, their cars will also have the capacity to reverse the flow and send electrons back to the grid. As the number of EVs climbs, the fleet’s batteries could serve as a cost-effective, large-scale energy source, with potentially dramatic impacts on the energy transition, according to a new paper published by an MIT team in the journal Energy Advances.

    “At scale, vehicle-to-grid (V2G) can boost renewable energy growth, displacing the need for stationary energy storage and decreasing reliance on firm [always-on] generators, such as natural gas, that are traditionally used to balance wind and solar intermittency,” says Jim Owens, lead author and a doctoral student in the MIT Department of Chemical Engineering. Additional authors include Emre Gençer, a principal research scientist at the MIT Energy Initiative (MITEI), and Ian Miller, a research specialist for MITEI at the time of the study.

    The group’s work is the first comprehensive, systems-based analysis of V2G’s value to future power systems, drawing on a novel mix of computational models that integrate such factors as carbon emission goals, variable renewable energy (VRE) generation, and the costs of building energy storage, production, and transmission infrastructure.

    “We explored not just how EVs could provide service back to the grid — thinking of these vehicles almost like energy storage on wheels — but also the value of V2G applications to the entire energy system and if EVs could reduce the cost of decarbonizing the power system,” says Gençer. “The results were surprising; I personally didn’t believe we’d have so much potential here.”

    Displacing new infrastructure

    As the United States and other nations pursue stringent goals to limit carbon emissions, electrification of transportation has taken off, with the rate of EV adoption rapidly accelerating. (Some projections show EVs supplanting internal combustion vehicles over the next 30 years.) With the rise of emission-free driving, though, there will be increased demand for energy. “The challenge is ensuring both that there’s enough electricity to charge the vehicles and that this electricity is coming from renewable sources,” says Gençer.

    But solar and wind energy are intermittent. Without adequate backup for these sources, such as stationary energy storage facilities using lithium-ion batteries, or large-scale, natural gas- or hydrogen-fueled power plants, achieving clean energy goals will prove elusive. More vexing, the costs of building the necessary new energy infrastructure run to the hundreds of billions of dollars.

    This is precisely where V2G can play a critical, and welcome, role, the researchers reported. In their case study of a theoretical New England power system meeting strict carbon constraints, for instance, the team found that participation from just 13.9 percent of the region’s 8 million light-duty (passenger) EVs displaced 14.7 gigawatts of stationary energy storage. This added up to $700 million in savings — the anticipated costs of building new storage capacity.
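
    A quick back-of-envelope check of those case-study figures (the arithmetic and rounding are ours) suggests the per-vehicle contribution is modest:

    ```python
    # Rough per-vehicle check on the New England case-study numbers.
    evs = 0.139 * 8e6        # participating light-duty EVs (~1.1 million)
    storage_kw = 14.7e6      # displaced stationary storage: 14.7 GW, in kW
    print(f"{storage_kw / evs:.1f} kW per participating vehicle")  # ~13 kW
    ```

    That works out to roughly 13 kilowatts per car, on the order of a higher-power Level 2 charger, which is what makes the displacement physically plausible.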

    Their paper also described the role EV batteries could play at times of peak demand, such as hot summer days. “V2G technology has the ability to inject electricity back into the system to cover these episodes, so we don’t need to install or invest in additional natural gas turbines,” says Owens. “The way that EVs and V2G can influence the future of our power systems is one of the most exciting and novel aspects of our study.”

    Modeling power

    To investigate the impacts of V2G on their hypothetical New England power system, the researchers integrated their EV travel and V2G service models with two of MITEI’s existing modeling tools: the Sustainable Energy System Analysis Modeling Environment (SESAME) to project vehicle fleet and electricity demand growth, and GenX, which models the investment and operation costs of electricity generation, storage, and transmission systems. They incorporated such inputs as different EV participation rates, costs of generation for conventional and renewable power suppliers, charging infrastructure upgrades, travel demand for vehicles, changes in electricity demand, and EV battery costs.

    Their analysis found benefits from V2G applications in power systems (in terms of displacing energy storage and firm generation) at all levels of carbon emission restrictions, including one with no emissions caps at all. However, their models suggest that V2G delivers the greatest value to the power system when carbon constraints are most aggressive — at 10 grams of carbon dioxide per kilowatt hour load. Total system savings from V2G ranged from $183 million to $1,326 million, reflecting EV participation rates between 5 percent and 80 percent.

    “Our study has begun to uncover the inherent value V2G has for a future power system, demonstrating that there is a lot of money we can save that would otherwise be spent on storage and firm generation,” says Owens.

    Harnessing V2G

    For scientists seeking ways to decarbonize the economy, the vision of millions of EVs parked in garages or in office spaces and plugged into the grid for 90 percent of their operating lives proves an irresistible provocation. “There is all this storage sitting right there, a huge available capacity that will only grow, and it is wasted unless we take full advantage of it,” says Gençer.

    This is not a distant prospect. Startup companies are currently testing software that would allow two-way communication between EVs and grid operators or other entities. With the right algorithms, EVs would charge from and dispatch energy to the grid according to profiles tailored to each car owner’s needs, without ever depleting the battery or endangering a commute.

    “We don’t assume all vehicles will be available to send energy back to the grid at the same time, at 6 p.m. for instance, when most commuters return home in the early evening,” says Gençer. He believes that the vastly varied schedules of EV drivers will make enough battery power available to cover spikes in electricity use over an average 24-hour period. And there are other potential sources of battery power down the road, such as electric school buses that are employed only for short stints during the day and then sit idle.

    The MIT team acknowledges the challenges of V2G consumer buy-in. While EV owners relish a clean, green drive, they may not be as enthusiastic about handing over access to their car’s battery to a utility or an aggregator working with power system operators. Policies and incentives would help.

    “Since you’re providing a service to the grid, much as solar panel users do, you could be paid for your participation, and paid at a premium when electricity prices are very high,” says Gençer.

    “People may not be willing to participate ’round the clock, but if we have blackout scenarios like in Texas last year, or hot-day congestion on transmission lines, maybe we can turn on these vehicles for 24 to 48 hours, sending energy back to the system,” adds Owens. “If there’s a power outage and people wave a bunch of money at you, you might be willing to talk.”

    “Basically, I think this comes back to all of us being in this together, right?” says Gençer. “As you contribute to society by giving this service to the grid, you will get the full benefit of reducing system costs, and also help to decarbonize the system faster and to a greater extent.”

    Actionable insights

    Owens, who is building his dissertation on V2G research, is now investigating the potential impact of heavy-duty electric vehicles in decarbonizing the power system. “The last-mile delivery trucks of companies like Amazon and FedEx are likely to be the earliest adopters of EVs,” Owens says. “They are appealing because they have regularly scheduled routes during the day and go back to the depot at night, which makes them very useful for providing electricity and balancing services in the power system.”

    Owens is committed to “providing insights that are actionable by system planners, operators, and to a certain extent, investors,” he says. His work might come into play in determining what kind of charging infrastructure should be built, and where.

    “Our analysis is really timely because the EV market has not yet been developed,” says Gençer. “This means we can share our insights with vehicle manufacturers and system operators — potentially influencing them to invest in V2G technologies, avoiding the costs of building utility-scale storage, and enabling the transition to a cleaner future. It’s a huge win, within our grasp.”

    The research for this study was funded by MITEI’s Future Energy Systems Center.