More stories

  • Engineers use artificial intelligence to capture the complexity of breaking waves

    Waves break once they swell to a critical height, before cresting and crashing into a spray of droplets and bubbles. These waves can be as large as a surfer’s point break and as small as a gentle ripple rolling to shore. For decades, the dynamics of how and when a wave breaks have been too complex to predict.

    Now, MIT engineers have found a new way to model how waves break. The team used machine learning along with data from wave-tank experiments to tweak equations that have traditionally been used to predict wave behavior. Engineers typically rely on such equations to help them design resilient offshore platforms and structures. But until now, the equations have not been able to capture the complexity of breaking waves.

    The updated model made more accurate predictions of how and when waves break, the researchers found. For instance, the model estimated a wave’s steepness just before breaking, and its energy and frequency after breaking, more accurately than the conventional wave equations.

    Their results, published today in the journal Nature Communications, will help scientists understand how a breaking wave affects the water around it. Knowing precisely how these waves interact can help hone the design of offshore structures. It can also improve predictions for how the ocean interacts with the atmosphere. Having better estimates of how waves break can help scientists predict, for instance, how much carbon dioxide and other atmospheric gases the ocean can absorb.

    “Wave breaking is what puts air into the ocean,” says study author Themis Sapsis, an associate professor of mechanical and ocean engineering and an affiliate of the Institute for Data, Systems, and Society at MIT. “It may sound like a detail, but if you multiply its effect over the area of the entire ocean, wave breaking starts becoming fundamentally important to climate prediction.”

    The study’s co-authors include lead author and MIT postdoc Debbie Eeltink, Hubert Branger and Christopher Luneau of Aix-Marseille University, Amin Chabchoub of Kyoto University, Jerome Kasparian of the University of Geneva, and T.S. van den Bremer of Delft University of Technology.

    Learning tank

    To predict the dynamics of a breaking wave, scientists typically take one of two approaches: They either attempt to precisely simulate the wave at the scale of individual molecules of water and air, or they run experiments to try to characterize waves with actual measurements. The first approach is computationally expensive and difficult to simulate even over a small area; the second requires a huge amount of time to run enough experiments to yield statistically significant results.

    The MIT team instead borrowed pieces from both approaches to develop a more efficient and accurate model using machine learning. The researchers started with a set of equations that is considered the standard description of wave behavior. They aimed to improve the model by “training” it on breaking-wave data from actual experiments.

    “We had a simple model that doesn’t capture wave breaking, and then we had the truth, meaning experiments that involve wave breaking,” Eeltink explains. “Then we wanted to use machine learning to learn the difference between the two.”

    The researchers obtained wave breaking data by running experiments in a 40-meter-long tank. The tank was fitted at one end with a paddle which the team used to initiate each wave. The team set the paddle to produce a breaking wave in the middle of the tank. Gauges along the length of the tank measured the water’s height as waves propagated down the tank.

    “It takes a lot of time to run these experiments,” Eeltink says. “Between each experiment you have to wait for the water to completely calm down before you launch the next experiment, otherwise they influence each other.”

    Safe harbor

    In all, the team ran about 250 experiments, the data from which they used to train a type of machine-learning algorithm known as a neural network. Specifically, the algorithm is trained to compare the real waves in experiments with the predicted waves in the simple model, and based on any differences between the two, the algorithm tunes the model to fit reality.
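    To make the general idea concrete, here is a minimal, hypothetical sketch of that kind of residual learning: a conventional wave model is left untouched, and a small neural network is trained on the gap between its predictions and tank measurements. All names, numbers, and the toy “simple model” below are placeholders, not the team’s published code.

```python
# Minimal sketch (not the authors' code): learn a data-driven correction to a
# simple wave model from wave-tank measurements. Names and numbers are toy
# placeholders for illustration only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def simple_model(wave_features):
    """Stand-in for the conventional (non-breaking) wave equations:
    predicts, e.g., a post-breaking quantity from pre-breaking features."""
    return 0.8 * wave_features[:, 0]  # crude linear estimate

# Hypothetical experimental data: rows = experiments,
# columns = pre-breaking features (steepness, frequency, ...).
X_experiments = rng.uniform(0.1, 0.5, size=(250, 3))
y_measured = (0.8 * X_experiments[:, 0] - 0.3 * X_experiments[:, 0] ** 2
              + 0.02 * rng.standard_normal(250))   # "truth" from the tank

# Train a neural network on the discrepancy between model and measurement.
residual = y_measured - simple_model(X_experiments)
correction_net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                              random_state=0)
correction_net.fit(X_experiments, residual)

def corrected_model(wave_features):
    """Simple model plus the learned breaking-wave correction."""
    return simple_model(wave_features) + correction_net.predict(wave_features)

# The corrected model can then be evaluated on data from independent wave tanks.
X_new = rng.uniform(0.1, 0.5, size=(5, 3))
print(corrected_model(X_new))
```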

    After training the algorithm on their experimental data, the team introduced the model to entirely new data — in this case, measurements from two independent experiments, each run at separate wave tanks with different dimensions. In these tests, they found the updated model made more accurate predictions than the simple, untrained model, for instance making better estimates of a breaking wave’s steepness.

    The new model also captured an essential property of breaking waves known as the “downshift,” in which the frequency of a wave is shifted to a lower value. The speed of a wave depends on its frequency. For ocean waves, lower frequencies move faster than higher frequencies. Therefore, after the downshift, the wave will move faster. The new model predicts the change in frequency, before and after each breaking wave, which could be especially relevant in preparing for coastal storms.
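    The frequency–speed link the researchers exploit follows from the textbook deep-water dispersion relation, illustrated in the short calculation below (a generic physics example, not taken from the study); the example frequencies are made up.

```python
# Deep-water wave speed from frequency (textbook dispersion relation, not from
# the paper): phase speed c = g / (2*pi*f), group speed c_g = c / 2.
import math

g = 9.81  # gravitational acceleration, m/s^2

def phase_speed(frequency_hz):
    return g / (2 * math.pi * frequency_hz)

# Example: a downshift from 0.12 Hz to 0.10 Hz after breaking
for f in (0.12, 0.10):
    c = phase_speed(f)
    print(f"f = {f:.2f} Hz -> phase speed {c:.1f} m/s, group speed {c/2:.1f} m/s")
# Lower frequency gives a higher speed, so a downshifted swell arrives sooner.
```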

    “When you want to forecast when high waves of a swell would reach a harbor, and you want to leave the harbor before those waves arrive, then if you get the wave frequency wrong, then the speed at which the waves are approaching is wrong,” Eeltink says.

    The team’s updated wave model is in the form of an open-source code that others could potentially use, for instance in climate simulations of the ocean’s potential to absorb carbon dioxide and other atmospheric gases. The code can also be worked into simulated tests of offshore platforms and coastal structures.

    “The number one purpose of this model is to predict what a wave will do,” Sapsis says. “If you don’t model wave breaking right, it would have tremendous implications for how structures behave. With this, you could simulate waves to help design structures better, more efficiently, and without huge safety factors.”

    This research is supported, in part, by the Swiss National Science Foundation, and by the U.S. Office of Naval Research.

  • Five MIT PhD students awarded 2022 J-WAFS fellowships for water and food solutions

    The Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) recently announced the selection of its 2022-23 cohort of graduate fellows. Two students were named Rasikbhai L. Meswani Fellows for Water Solutions and three students were named J-WAFS Graduate Student Fellows. All five fellows will receive full tuition and a stipend for one semester, and J-WAFS will support the students throughout the 2022-23 academic year by providing networking, mentorship, and opportunities to showcase their research.

    New this year, fellowship nominations were open not only to students pursuing water research, but food-related research as well. The five students selected were chosen for their commitment to solutions-based research that aims to alleviate problems such as water supply or purification, food security, or agriculture. Their projects exemplify the wide range of research that J-WAFS supports, from enhancing nutrition through improved methods to deliver micronutrients to developing high-performance drip irrigation technology. The strong applicant pool reflects the passion MIT students have to address the water and food crises currently facing the planet.

    “This year’s fellows are drawn from a dynamic and engaged community across the Institute whose creativity and ingenuity are pushing forward transformational water and food solutions,” says J-WAFS executive director Renee J. Robins. “We congratulate these students as we recognize their outstanding achievements and their promise as up-and-coming leaders in global water and food sectors.”

    2022-23 Rasikbhai L. Meswani Fellows for Water Solutions

    The Rasikbhai L. Meswani Fellowship for Water Solutions supports students pursuing water-related research at MIT. It was made possible by a generous gift from Elina and Nikhil Meswani and family.

    Aditya Ghodgaonkar is a PhD candidate in the Department of Mechanical Engineering at MIT, where he works in the Global Engineering and Research (GEAR) Lab under Professor Amos Winter. Ghodgaonkar received a bachelor’s degree in mechanical engineering from the RV College of Engineering in India. He then moved to the United States and received a master’s degree in mechanical engineering from Purdue University.

    Ghodgaonkar is currently designing hydraulic components for drip irrigation that could support the development of water-efficient irrigation systems that are off-grid, inexpensive, and low-maintenance. He has focused on designing drip irrigation emitters that are resistant to clogging, seeking inspiration about flow regulation from marine fauna such as manta rays, as well as turbomachinery concepts. Ghodgaonkar notes that clogging is currently an expensive technical challenge to diagnose, mitigate, and resolve. With an eye on hundreds of millions of farms in developing countries, he aims to bring the benefits of irrigation technology to even the poorest farmers.

    Outside of his research, Ghodgaonkar is a mentor in MIT Makerworks and has been a teaching assistant for classes such as 2.007 (Design and Manufacturing I). He also helped organize the annual MIT Water Summit last fall.

    Devashish Gokhale is a PhD candidate advised by Professor Patrick Doyle in the Department of Chemical Engineering. He received a bachelor’s degree in chemical engineering from the Indian Institute of Technology Madras, where he researched fluid flow in energy-efficient pumps. Gokhale’s commitment to global water security stemmed from his experience growing up in India, where water sources are threatened by population growth, industrialization, and climate change.

    As a researcher in the Doyle group, Devashish is developing sustainable and reusable materials for water treatment, with a focus on the elimination of emerging contaminants and other micropollutants from water through cost-effective processes. Many of these contaminants are carcinogens or endocrine disruptors, posing significant threats to both humans and animals. His advisor notes that Devashish was the first researcher in the Doyle group to work on water purification, bringing his passion for the topic to the lab.

    Gokhale’s research won an award for potential scalability in last year’s J-WAFS World Water Day competition. He also serves as the lecture series chair in the MIT Water Club.

    2022-23 J-WAFS Graduate Student Fellows

    The J-WAFS Fellowship for Water and Food Solutions is funded by the J-WAFS Research Affiliate Program, which offers companies the opportunity to collaborate with MIT on water and food research. A portion of each research affiliate’s fees supports this fellowship. The program is central to J-WAFS’ efforts to engage across sector and disciplinary boundaries in solving real-world problems. Currently, there are two J-WAFS Research Affiliates: Xylem, Inc., a water technology company, and GoAigua, a company leading the digital transformation of the water industry.

    James Zhang is a PhD candidate in the Department of Mechanical Engineering at MIT, where he has worked in the NanoEngineering Laboratory with Professor Gang Chen since 2019. As an undergraduate at Carnegie Mellon University, he double majored in mechanical engineering and engineering public policy. He then received a master’s degree in mechanical engineering from MIT. In addition to working in the NanoEngineering Laboratory, James has also worked in the Zhao Laboratory and in the Boriskina Research Group at MIT.

    Zhang is developing a technology that uses light-induced evaporation to clean water. He is currently investigating the fundamental properties of how light interacts with brackish water surfaces. With strong theoretical as well as experimental components, his research could lead to innovations in desalinating brackish water at high energy efficiencies.

    Outside of his research, Zhang has served as a student moderator for the MIT International Colloquia on Thermal Innovations.

    Katharina Fransen is a PhD candidate advised by Professor Bradley Olsen in the Department of Chemical Engineering at MIT. She received a bachelor’s degree in chemical engineering from the University of Minnesota, where she was involved in the Society of Women Engineers. Fransen is motivated by the challenge of protecting the most vulnerable global communities from the large quantities of plastic waste associated with traditional food packaging materials.

    As a researcher in the Olsen Lab, Fransen is developing new plastics that are biologically-based and biodegradable, so they can degrade in the environment instead of polluting communities with plastic waste. These polymers are also optimized for food packaging applications to keep food fresher for longer, preventing food waste.

    Outside of her research, Fransen is involved in Diversity in Chemical Engineering as the coordinator for the graduate application mentorship program for underrepresented groups. She is also an active member of Graduate Womxn in ChemE and mentors an Undergraduate Research Opportunities Program student.

    Linzixuan (Rhoda) Zhang is a PhD candidate advised by Professor Robert Langer and Ana Jaklenec in the Department of Chemical Engineering at MIT. She received a bachelor’s degree in chemical engineering from the University of Illinois at Urbana-Champaign, where she researched how to genetically engineer microorganisms for the efficient production of advanced biofuels and chemicals.

    Zhang is currently developing a micronutrient delivery platform that fortifies foods with essential vitamins and nutrients. She has helped develop a group of biodegradable polymers that can stabilize micronutrients under harsh conditions, enabling local food companies to fortify food with essential vitamins. This work aims to tackle a hidden crisis in low- and middle-income countries, where a chronic lack of essential micronutrients affects an estimated 2 billion people.

    Zhang is also working on the development of self-boosting vaccines to promote more widespread vaccine access and serves as a research mentor in the Langer Lab.

  • A community approach to improving the health of the planet

    Earlier this month, MIT’s Department of Mechanical Engineering (MechE) hosted a Health of the Planet Showcase. The event was the culmination of a four-year-long community initiative focused on what the mechanical engineering community at MIT can do to solve some of the biggest challenges the planet faces on a local and global scale. Structured like an informal poster session, the event marked the first time that administrative staff joined students, researchers, and postdocs in sharing their own research.

    When Evelyn Wang started her tenure as mechanical engineering department head in July 2018, she and associate department heads Pierre Lermusiaux and Rohit Karnik made the health of the planet a top priority for the department. Their goal was to bring students, faculty, and staff together to develop solutions that address the many problems related to the health of the planet.

    “As a field, mechanical engineering is unique in its diversity,” says Wang, the Ford Professor of Engineering. “We have researchers who are world-leading experts on desalination, ocean engineering, energy storage, and photovoltaics, just to name a few. One of our driving motivations has been getting those experts to collaborate and work on new health of the planet research projects together.”

    Wang also saw an opportunity to tap into the passions of the department’s students and staff, many of whom devote their extracurricular and personal time to environmental causes. She enlisted the help of a team of faculty and staff to launch what has become known as the MechE Health of the Planet Initiative.

    The initiative, which capitalizes on the diverse range of research fields in mechanical engineering, encouraged both grand research ideas that could have impact on a global scale, and smaller personal habits that could help on a smaller scale.

    “We wanted to encourage everyone in our community to think about their daily routine and make small changes that really add up over time,” says Dorothy Hanna, program administrator at MIT and one of the staff members leading the initiative.

    The Health of the Planet team started small. They hosted an office supply swap day to encourage recycling and reuse of everyday office products. This idea expanded to include the launch of “Lab Reuse Days.” Members of the Rohsenow Kendall Lab, including members of the research groups of professors Gang Chen, John Lienhard, and Evelyn Wang, gathered extra materials for reuse. Researchers from other labs picked up Arduino kits, tubing, and electrical wiring to use for their own projects.

    While individuals were encouraged to adopt small habits at home and at work to help the health of the planet, research teams were encouraged to work together on solutions on a larger scale.

    Seed funding for collaborative research

    In early 2020, the MIT Department of Mechanical Engineering launched a new collaborative seed research program based on funding from MathWorks, the computing software company that developed MATLAB. The first seed funding supported health of the planet research projects led by two or more mechanical engineering faculty members.

    “One of the driving goals of MechE has been fostering collaborations and supporting interdisciplinary research on the grand challenges our world faces,” says Pierre Lermusiaux, the Nam P. Suh Professor and associate department head for operations. “The seed funding from MathWorks was a great opportunity to build upon the diverse expertise and creativity our researchers have to address health of the planet related issues.” 

    The research projects supported by the seed funding ranged from lithium-ion batteries for electric vehicles to high-performance household energy products for low- and middle-income countries. Each project differs in scope and application, and draws upon the expertise of at least two different research groups at MIT.

    Throughout the past two years, faculty presented about these research projects in several community seminars. They also participated in a full-day faculty research retreat focused on health of the planet research that included presentations from local Cambridge and Boston city leaders, as well as experts from other MIT departments and Harvard University.

    These projects have helped break down barriers and increased collaboration among research groups that focus on different areas. The third round of seed funding for collaborative research projects was recently announced and new projects will be chosen in the coming weeks.

    A community showcase

    Upon returning to the campus last fall, the Health of the Planet team began planning an event to bring the community together and celebrate the department’s research efforts. The Health of the Planet Showcase, which took place on April 4, featured 26 presenters from across the mechanical engineering community at MIT.

    Projects included a marine coastal monitoring robot, solar hydrogen production with thermochemical cycles, and a portable atmospheric water extractor for dry climates. Among the presenters were administrative assistant Tony Pulsone, who presented on how honeybees navigate their surroundings, and program manager Theresa Werth and program administrator Dorothy Hanna, who presented on reducing bottled water use and practical strategies developed by staff to overcome functional barriers on campus.

    The event concluded with the announcement of the Fay and Alfred D. Chandler Jr. Research Fellowship, awarded to a MechE student-led effort to propose a new paradigm to improve the health of our planet. Graduate student Charlene Xia won for her work developing a real-time opto-fluidics system for monitoring the soil microbiome.

    “The soil microbiome governs the biogeochemical cycling of macronutrients, micronutrients, and other elements vital for the growth of plants and animal life,” Xia said. “Understanding and predicting the impact of climate change on soil microbiomes and the ecosystem services they provide present a grand challenge and major opportunity.”

    The Chandler Fellowship will continue during the 2022-23 academic year, when another student-led project will be chosen. The department also hopes to make the Health of the Planet Showcase an annual gathering.

    “The showcase was such a vibrant event,” adds Wang. “It really energized the department and renewed our commitment to growing community efforts and continuing to advance research to help improve and protect the health of our planet.”

  • MIT engineers introduce the Oreometer

    When you twist open an Oreo cookie to get to the creamy center, you’re mimicking a standard test in rheology — the study of how a non-Newtonian material flows when twisted, pressed, or otherwise stressed. MIT engineers have now subjected the sandwich cookie to rigorous materials tests to get to the center of a tantalizing question: Why does the cookie’s cream stick to just one wafer when twisted apart?

    “There’s the fascinating problem of trying to get the cream to distribute evenly between the two wafers, which turns out to be really hard,” says Max Fan, an undergraduate in MIT’s Department of Mechanical Engineering.

    In pursuit of an answer, the team subjected cookies to standard rheology tests in the lab and found that no matter the flavor or amount of stuffing, the cream at the center of an Oreo almost always sticks to one wafer when twisted open. Only for older boxes of cookies does the cream sometimes separate more evenly between both wafers.

    The researchers also measured the torque required to twist open an Oreo, and found it to be similar to the torque required to turn a doorknob and about 1/10th what’s needed to twist open a bottlecap. The cream’s failure stress — i.e. the force per area required to get the cream to flow, or deform — is twice that of cream cheese and peanut butter, and about the same magnitude as mozzarella cheese. Judging from the cream’s response to stress, the team classifies its texture as “mushy,” rather than brittle, tough, or rubbery.

    So, why does the cookie’s cream glom to one side rather than splitting evenly between both? The manufacturing process may be to blame.

    “Videos of the manufacturing process show that they put the first wafer down, then dispense a ball of cream onto that wafer before putting the second wafer on top,” says Crystal Owens, an MIT mechanical engineering PhD candidate who studies the properties of complex fluids. “Apparently that little time delay may make the cream stick better to the first wafer.”

    The team’s study isn’t simply a sweet diversion from bread-and-butter research; it’s also an opportunity to make the science of rheology accessible to others. To that end, the researchers have designed a 3D-printable “Oreometer” — a simple device that firmly grasps an Oreo cookie and uses pennies and rubber bands to control the twisting force that progressively twists the cookie open. Instructions for building the tabletop device are freely available online.

    The new study, “On Oreology, the fracture and flow of ‘milk’s favorite cookie,’” appears today in Kitchen Flows, a special issue of the journal Physics of Fluids. It was conceived of early in the Covid-19 pandemic, when many scientists’ labs were closed or difficult to access. In addition to Owens and Fan, co-authors are mechanical engineering professors Gareth McKinley and A. John Hart.

    Confection connection

    A standard test in rheology places a fluid, slurry, or other flowable material onto the base of an instrument known as a rheometer. A parallel plate above the base can be lowered onto the test material. The plate is then twisted as sensors track the applied rotation and torque.

    Owens, who regularly uses a laboratory rheometer to test fluid materials such as 3D-printable inks, couldn’t help noting a similarity with sandwich cookies. As she writes in the new study:

    “Scientifically, sandwich cookies present a paradigmatic model of parallel plate rheometry in which a fluid sample, the cream, is held between two parallel plates, the wafers. When the wafers are counter-rotated, the cream deforms, flows, and ultimately fractures, leading to separation of the cookie into two pieces.”
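    For readers curious how such a twist test becomes numbers, the snippet below applies the standard parallel-plate relations that convert a measured torque and rotation rate into a shear stress and shear rate at the plate rim. It is a generic, Newtonian-fluid textbook approximation with made-up cookie dimensions and readings, not the analysis used in the paper.

```python
# Generic parallel-plate rheometry relations (textbook Newtonian approximation;
# not the paper's exact analysis). Torque M and rotation rate Omega measured by
# the rheometer are converted to stress and shear rate at the plate rim.
import math

R = 0.0225   # plate (cookie) radius in meters -- illustrative value
h = 0.003    # gap (cream thickness) in meters -- illustrative value

def rim_shear_rate(omega_rad_s):
    """Shear rate at the rim: gamma_dot = Omega * R / h."""
    return omega_rad_s * R / h

def rim_shear_stress(torque_Nm):
    """Shear stress at the rim for a Newtonian sample: tau = 2*M / (pi*R^3)."""
    return 2 * torque_Nm / (math.pi * R ** 3)

# Example readings (illustrative): 0.1 N*m of torque at 0.5 rad/s
print(f"shear rate  ~ {rim_shear_rate(0.5):.1f} 1/s")
print(f"shear stress ~ {rim_shear_stress(0.1):.0f} Pa")
```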

    While Oreo cream may not appear to possess fluid-like properties, it is considered a “yield stress fluid” — a soft solid when unperturbed that can start to flow under enough stress, the way toothpaste, frosting, certain cosmetics, and concrete do.

    Curious as to whether others had explored the connection between Oreos and rheology, Owens found mention of a 2016 Princeton University study in which physicists first reported that indeed, when twisting Oreos by hand, the cream almost always came off on one wafer.

    “We wanted to build on this to see what actually causes this effect and if we could control it if we mounted the Oreos carefully onto our rheometer,” she says.

    Cookie twist

    In an experiment that they would repeat for multiple cookies of various fillings and flavors, the researchers glued an Oreo to both the top and bottom plates of a rheometer and applied varying degrees of torque and angular rotation, noting the values that successfully twisted each cookie apart. They plugged the measurements into equations to calculate the cream’s viscoelasticity, or flowability. For each experiment, they also noted the cream’s “post-mortem distribution,” or where the cream ended up after twisting open.

    In all, the team went through about 20 boxes of Oreos, including regular, Double Stuf, and Mega Stuf levels of filling, and regular, dark chocolate, and “golden” wafer flavors. Surprisingly, they found that no matter the amount of cream filling or flavor, the cream almost always separated onto one wafer.

    “We had expected an effect based on size,” Owens says. “If there was more cream between layers, it should be easier to deform. But that’s not actually the case.”

    Curiously, when they mapped each cookie’s result to its original position in the box, they noticed the cream tended to stick to the inward-facing wafer: Cookies on the left side of the box twisted such that the cream ended up on the right wafer, whereas cookies on the right side separated with cream mostly on the left wafer. They suspect this box distribution may be a result of post-manufacturing environmental effects, such as heating or jostling that may cause cream to peel slightly away from the outer wafers, even before twisting.

    The understanding gained from the properties of Oreo cream could potentially be applied to the design of other complex fluid materials.

    “My 3D printing fluids are in the same class of materials as Oreo cream,” Owens says. “So, this new understanding can help me better design ink when I’m trying to print flexible electronics from a slurry of carbon nanotubes, because they deform in almost exactly the same way.”

    As for the cookie itself, she suggests that if the inside of Oreo wafers were more textured, the cream might grip better onto both sides and split more evenly when twisted.

    “As they are now, we found there’s no trick to twisting that would split the cream evenly,” Owens concludes.

    This research was supported, in part, by the MIT UROP program and by the National Defense Science and Engineering Graduate Fellowship Program.

  • A new heat engine with no moving parts is as efficient as a steam turbine

    Engineers at MIT and the National Renewable Energy Laboratory (NREL) have designed a heat engine with no moving parts. Their new demonstrations show that it converts heat to electricity with over 40 percent efficiency — a performance better than that of traditional steam turbines.

    The heat engine is a thermophotovoltaic (TPV) cell, similar to a solar panel’s photovoltaic cells, that passively captures high-energy photons from a white-hot heat source and converts them into electricity. The team’s design can generate electricity from a heat source of between 1,900 and 2,400 degrees Celsius, or up to about 4,300 degrees Fahrenheit.

    The researchers plan to incorporate the TPV cell into a grid-scale thermal battery. The system would absorb excess energy from renewable sources such as the sun and store that energy in heavily insulated banks of hot graphite. When the energy is needed, such as on overcast days, TPV cells would convert the heat into electricity, and dispatch the energy to a power grid.

    With the new TPV cell, the team has now successfully demonstrated the main parts of the system in separate, small-scale experiments. They are working to integrate the parts to demonstrate a fully operational system. From there, they hope to scale up the system to replace fossil-fuel-driven power plants and enable a fully decarbonized power grid, supplied entirely by renewable energy.

    “Thermophotovoltaic cells were the last key step toward demonstrating that thermal batteries are a viable concept,” says Asegun Henry, the Robert N. Noyce Career Development Professor in MIT’s Department of Mechanical Engineering. “This is an absolutely critical step on the path to proliferate renewable energy and get to a fully decarbonized grid.”

    Henry and his collaborators have published their results today in the journal Nature. Co-authors at MIT include Alina LaPotin, Kevin Schulte, Kyle Buznitsky, Colin Kelsall, Andrew Rohskopf, and Evelyn Wang, the Ford Professor of Engineering and head of the Department of Mechanical Engineering, along with collaborators at NREL in Golden, Colorado.

    Jumping the gap

    More than 90 percent of the world’s electricity comes from sources of heat such as coal, natural gas, nuclear energy, and concentrated solar energy. For a century, steam turbines have been the industrial standard for converting such heat sources into electricity.

    On average, steam turbines reliably convert about 35 percent of a heat source into electricity, with about 60 percent representing the highest efficiency of any heat engine to date. But the machinery depends on moving parts that are temperature-limited. Heat sources higher than 2,000 degrees Celsius, such as Henry’s proposed thermal battery system, would be too hot for turbines.

    In recent years, scientists have looked into solid-state alternatives — heat engines with no moving parts that could potentially work efficiently at higher temperatures.

    “One of the advantages of solid-state energy converters are that they can operate at higher temperatures with lower maintenance costs because they have no moving parts,” Henry says. “They just sit there and reliably generate electricity.”

    Thermophotovoltaic cells offered one exploratory route toward solid-state heat engines. Much like solar cells, TPV cells could be made from semiconducting materials with a particular bandgap — the gap between a material’s valence band and its conduction band. If a photon with a high enough energy is absorbed by the material, it can kick an electron across the bandgap, where the electron can then conduct, and thereby generate electricity — doing so without moving rotors or blades.
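    A rough back-of-envelope calculation using Wien’s displacement law shows why hotter sources pair with higher-bandgap cells; the temperatures below match the article, but the calculation itself is illustrative and not drawn from the paper.

```python
# Back-of-envelope illustration (not from the paper): estimate the photon
# energy at a hot emitter's spectral peak via Wien's law.
WIEN_B_NM_K = 2.898e6      # Wien displacement constant, nm*K
HC_EV_NM = 1239.84         # h*c in eV*nm

def peak_photon_energy_eV(temperature_K):
    """Photon energy at the blackbody spectral peak."""
    peak_wavelength_nm = WIEN_B_NM_K / temperature_K
    return HC_EV_NM / peak_wavelength_nm

for t_celsius in (1900, 2400):
    e_peak = peak_photon_energy_eV(t_celsius + 273.15)
    print(f"{t_celsius} C source: peak photon energy ~ {e_peak:.2f} eV")
# Hotter sources shift the emitted spectrum toward higher photon energies,
# which is what motivates the higher-bandgap, multi-junction design described
# in the following paragraphs.
```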

    To date, most TPV cells have only reached efficiencies of around 20 percent, with the record at 32 percent, as they have been made of relatively low-bandgap materials that convert lower-temperature, low-energy photons, and therefore convert energy less efficiently.

    Catching light

    In their new TPV design, Henry and his colleagues looked to capture higher-energy photons from a higher-temperature heat source, thereby converting energy more efficiently. The team’s new cell does so with higher-bandgap materials and multiple junctions, or material layers, compared with existing TPV designs.

    The cell is fabricated from three main regions: a high-bandgap alloy, which sits over a slightly lower-bandgap alloy, underneath which is a mirror-like layer of gold. The first layer captures a heat source’s highest-energy photons and converts them into electricity, while lower-energy photons that pass through the first layer are captured by the second and converted to add to the generated voltage. Any photons that pass through this second layer are then reflected by the mirror, back to the heat source, rather than being absorbed as wasted heat.

    The team tested the cell’s efficiency by placing it over a heat flux sensor — a device that directly measures the heat absorbed from the cell. They exposed the cell to a high-temperature lamp and concentrated the light onto the cell. They then varied the bulb’s intensity, or temperature, and observed how the cell’s power efficiency — the amount of power it produced, compared with the heat it absorbed — changed with temperature. Over a range of 1,900 to 2,400 degrees Celsius, the new TPV cell maintained an efficiency of around 40 percent.
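    The figure of merit itself is a simple ratio, sketched below with purely illustrative numbers rather than the study’s measured values.

```python
# Sketch of the efficiency figure of merit described above (illustrative
# numbers only): efficiency = electrical power out / heat absorbed by the cell,
# with the absorbed heat inferred from the flux sensor.
def tpv_efficiency(electrical_power_W, absorbed_heat_flux_W_cm2, cell_area_cm2):
    absorbed_heat_W = absorbed_heat_flux_W_cm2 * cell_area_cm2
    return electrical_power_W / absorbed_heat_W

# e.g., a ~1 cm^2 cell producing 1.2 W while absorbing ~3 W/cm^2
print(f"{tpv_efficiency(1.2, 3.0, 1.0):.0%}")
```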

    “We can get a high efficiency over a broad range of temperatures relevant for thermal batteries,” Henry says.

    The cell in the experiments is about a square centimeter. For a grid-scale thermal battery system, Henry envisions the TPV cells would have to scale up to about 10,000 square feet (about a quarter of a football field), and would operate in climate-controlled warehouses to draw power from huge banks of stored solar energy. He points out that an infrastructure exists for making large-scale photovoltaic cells, which could also be adapted to manufacture TPVs.

    “There’s definitely a huge net positive here in terms of sustainability,” Henry says. “The technology is safe, environmentally benign in its life cycle, and can have a tremendous impact on abating carbon dioxide emissions from electricity production.”

    This research was supported, in part, by the U.S. Department of Energy.

  • Engineers enlist AI to help scale up advanced solar cell manufacturing

    Perovskites are a family of materials that are currently the leading contender to potentially replace today’s silicon-based solar photovoltaics. They hold the promise of panels that are far thinner and lighter, that could be made with ultra-high throughput at room temperature instead of at hundreds of degrees, and that are cheaper and easier to transport and install. But bringing these materials from controlled laboratory experiments into a product that can be manufactured competitively has been a long struggle.

    Manufacturing perovskite-based solar cells involves optimizing at least a dozen or so variables at once, even within one particular manufacturing approach among many possibilities. But a new system based on a novel approach to machine learning could speed up the development of optimized production methods and help make the next generation of solar power a reality.

    The system, developed by researchers at MIT and Stanford University over the last few years, makes it possible to integrate data from prior experiments, and information based on personal observations by experienced workers, into the machine learning process. This makes the outcomes more accurate and has already led to the manufacturing of perovskite cells with an energy conversion efficiency of 18.5 percent, a competitive level for today’s market.

    The research is reported today in the journal Joule, in a paper by MIT professor of mechanical engineering Tonio Buonassisi, Stanford professor of materials science and engineering Reinhold Dauskardt, recent MIT research assistant Zhe Liu, Stanford doctoral graduate Nicholas Rolston, and three others.

    Perovskites are a group of layered crystalline compounds defined by the configuration of the atoms in their crystal lattice. There are thousands of such possible compounds and many different ways of making them. While most lab-scale development of perovskite materials uses a spin-coating technique, that’s not practical for larger-scale manufacturing, so companies and labs around the world have been searching for ways of translating these lab materials into a practical, manufacturable product.

    “There’s always a big challenge when you’re trying to take a lab-scale process and then transfer it to something like a startup or a manufacturing line,” says Rolston, who is now an assistant professor at Arizona State University. The team looked at a process that they felt had the greatest potential, a method called rapid spray plasma processing, or RSPP.

    The manufacturing process would involve a moving roll-to-roll surface, or series of sheets, on which the precursor solutions for the perovskite compound would be sprayed or ink-jetted as the sheet rolled by. The material would then move on to a curing stage, providing a rapid and continuous output “with throughputs that are higher than for any other photovoltaic technology,” Rolston says.

    “The real breakthrough with this platform is that it would allow us to scale in a way that no other material has allowed us to do,” he adds. “Even materials like silicon require a much longer timeframe because of the processing that’s done. Whereas you can think of [this approach as more] like spray painting.”

    Within that process, at least a dozen variables may affect the outcome, some of them more controllable than others. These include the composition of the starting materials, the temperature, the humidity, the speed of the processing path, the distance of the nozzle used to spray the material onto a substrate, and the methods of curing the material. Many of these factors can interact with each other, and if the process is in open air, then humidity, for example, may be uncontrolled. Evaluating all possible combinations of these variables through experimentation is impossible, so machine learning was needed to help guide the experimental process.

    But while most machine-learning systems use raw data such as measurements of the electrical and other properties of test samples, they don’t typically incorporate human experience such as qualitative observations made by the experimenters of the visual and other properties of the test samples, or information from other experiments reported by other researchers. So, the team found a way to incorporate such outside information into the machine learning model, using a probability factor based on a mathematical technique called Bayesian Optimization.
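    As a rough illustration of that workflow (not the team’s released code), the sketch below seeds a Bayesian-optimization loop, here using the open-source scikit-optimize package, with prior experimental points before proposing new process settings. The process variables, bounds, prior points, and toy objective are all hypothetical.

```python
# Minimal Bayesian-optimization sketch (scikit-optimize), illustrating the
# general workflow described above -- NOT the team's released code. Process
# variables, bounds, prior data, and the objective below are placeholders.
from skopt import gp_minimize
from skopt.space import Real

# Hypothetical process knobs for a spray-based deposition step
space = [
    Real(20.0, 80.0, name="substrate_temp_C"),
    Real(5.0, 30.0, name="nozzle_distance_mm"),
    Real(0.1, 2.0, name="path_speed_cm_s"),
]

def objective(params):
    """Placeholder: run (or look up) an experiment at these settings and
    return the negative cell efficiency, so minimizing improves efficiency."""
    temp, dist, speed = params
    fake_efficiency = (18.0 - 0.002 * (temp - 55) ** 2
                       - 0.01 * (dist - 15) ** 2
                       - 1.5 * (speed - 0.8) ** 2)   # made-up response surface
    return -fake_efficiency

# Prior experiments (e.g., from earlier campaigns or the literature) can seed
# the surrogate model via x0/y0 instead of starting from scratch.
x0 = [[40.0, 10.0, 0.5], [60.0, 20.0, 1.0]]
y0 = [objective(x) for x in x0]

result = gp_minimize(objective, space, x0=x0, y0=y0,
                     n_calls=25, random_state=0)
print("best settings:", result.x, "predicted efficiency:", -result.fun)
```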

    Using the system, he says, “having a model that comes from experimental data, we can find out trends that we weren’t able to see before.” For example, they initially had trouble adjusting for uncontrolled variations in humidity in their ambient setting. But the model showed them “that we could overcome our humidity challenges by changing the temperature, for instance, and by changing some of the other knobs.”

    The system now allows experimenters to much more rapidly guide their process in order to optimize it for a given set of conditions or required outcomes. In their experiments, the team focused on optimizing the power output, but the system could also be used to simultaneously incorporate other criteria, such as cost and durability — something members of the team are continuing to work on, Buonassisi says.

    The researchers were encouraged by the Department of Energy, which sponsored the work, to commercialize the technology, and they’re currently focusing on tech transfer to existing perovskite manufacturers. “We are reaching out to companies now,” Buonassisi says, and the code they developed has been made freely available through an open-source server. “It’s now on GitHub, anyone can download it, anyone can run it,” he says. “We’re happy to help companies get started in using our code.”

    Already, several companies are gearing up to produce perovskite-based solar panels, even though they are still working out the details of how to produce them, says Liu, who is now at the Northwestern Polytechnical University in Xi’an, China. He says companies there are not yet doing large-scale manufacturing, but instead starting with smaller, high-value applications such as building-integrated solar tiles where appearance is important. Three of these companies “are on track or are being pushed by investors to manufacture 1 meter by 2-meter rectangular modules [comparable to today’s most common solar panels], within two years,” he says.

    “The problem is, they don’t have a consensus on what manufacturing technology to use,” Liu says. The RSPP method, developed at Stanford, “still has a good chance” to be competitive, he says. And the machine learning system the team developed could prove to be important in guiding the optimization of whatever process ends up being used.

    “The primary goal was to accelerate the process, so it required less time, less experiments, and less human hours to develop something that is usable right away, for free, for industry,” he says.

    “Existing work on machine-learning-driven perovskite PV fabrication largely focuses on spin-coating, a lab-scale technique,” says Ted Sargent, University Professor at the University of Toronto, who was not associated with this work, which he says demonstrates “a workflow that is readily adapted to the deposition techniques that dominate the thin-film industry. Only a handful of groups have the simultaneous expertise in engineering and computation to drive such advances.” Sargent adds that this approach “could be an exciting advance for the manufacture of a broader family of materials” including LEDs, other PV technologies, and graphene, “in short, any industry that uses some form of vapor or vacuum deposition.” 

    The team also included Austin Flick and Thomas Colburn at Stanford and Zekun Ren at the Singapore-MIT Alliance for Science and Technology (SMART). In addition to the Department of Energy, the work was supported by a fellowship from the MIT Energy Initiative, the Graduate Research Fellowship Program from the National Science Foundation, and the SMART program.

  • Q&A: Latifah Hamzah ’12 on creating sustainable solutions in Malaysia and beyond

    Latifah Hamzah ’12 graduated from MIT with a BS in mechanical engineering and minors in energy studies and music. During their time at MIT, Latifah participated in various student organizations, including the MIT Symphony Orchestra, Alpha Phi Omega, and the MIT Design/Build/Fly team. They also participated in the MIT Energy Initiative’s Undergraduate Research Opportunities Program (UROP) in the lab of former professor of mechanical engineering Alexander Mitsos, examining solar-powered thermal and electrical co-generation systems.

    After graduating from MIT, Latifah worked as a subsea engineer at Shell Global Solutions and co-founded Engineers Without Borders – Malaysia, a nonprofit organization dedicated to finding sustainable and empowering solutions that impact disadvantaged populations in Malaysia. More recently, Latifah received a master of science in mechanical engineering from Stanford University, where they are currently pursuing a PhD in environmental engineering with a focus on water and sanitation in developing contexts.

    Q: What inspired you to pursue energy studies as an undergraduate student at MIT?

    A: I grew up in Malaysia, where I was at once aware of both the extent to which the oil and gas industry is a cornerstone of the economy and the need to transition to a lower-carbon future. The Energy Studies minor was therefore enticing because it gave me a broader view of the energy space, including technical, policy, economic, and other viewpoints. This was my first exposure to how things worked in the real world — in that many different fields and perspectives had to be considered cohesively in order to have a successful, positive, and sustained impact. Although the minor was predominantly grounded in classroom learning, what I learned drove me to want to discover for myself how the forces of technology, society, and policy interacted in the field in my subsequent endeavors.

    In addition to the breadth that the minor added to my education, it also provided a structure and focus for me to build on my technical fundamentals. This included taking graduate-level classes and participating in UROPs that had specific energy foci. These were my first forays into questions that, while still predominantly technical, were more open-ended and with as-yet-unknown answers that would be substantially shaped by the framing of the question. This shift in mindset required from typical undergraduate classes and problem sets took a bit of adjusting to, but ultimately gave me the confidence and belief that I could succeed in a more challenging environment.

    Q: How did these experiences with energy help shape your path forward, particularly in regard to your work with Engineers Without Borders – Malaysia and now at Stanford?

    A: When I returned home after graduation, I was keen to harness my engineering education and explore in practice what the Energy Studies minor curriculum had taught by theory and case studies: to consider context, nuance, and interdisciplinary and myriad perspectives to craft successful, sustainable solutions. Recognizing that there were many underserved communities in Malaysia, I co-founded Engineers Without Borders – Malaysia with some friends with the aim of working with these communities to bring simple and sustainable engineering solutions. Many of these projects did have an energy focus. For example, we designed, sized, and installed micro-hydro or solar-power systems for various indigenous communities, allowing them to continue living on their ancestral lands while reducing energy poverty. Many other projects incorporated other aspects of engineering, such as hydrotherapy pools for folks with special needs, and water and sanitation systems for stateless maritime communities.

    Through my work with Engineers Without Borders – Malaysia, I found a passion for the broader aspects of sustainability, development, and equity. By spending time with communities in the field and sharing in their experiences, I recognized gaps in my skill set that I could work on to be more effective in advocating for social and environmental justice. In particular, I wanted to better understand communities and their perspectives while being mindful of my positionality. In addition, I wanted to address the more systemic aspects of the problems they faced, which I felt in many cases would only be possible through a combination of research, evidence, and policy. To this end, I embarked on a PhD in environmental engineering with a minor in anthropology and pursued a Community-Based Research Fellowship with Stanford’s Haas Center for Public Service. I have also participated in the Rising Environmental Leaders Program (RELP), which helps graduate students “hone their leadership and communications skills to maximize the impact of their research.” RELP afforded me the opportunity to interact with representatives from government, NGOs [nongovernmental organizations], think tanks, and industry, from which I gained a better understanding of the policy and adjacent ecosystems at both the federal and state levels.

    Q: What are you currently studying, and how does it relate to your past work and educational experiences?

    A: My dissertation investigates waste management and monitoring for improved planetary health in three distinct projects. Suboptimal waste management can lead to poor outcomes, including environmental contamination, overuse of resources, and lost economic and environmental opportunities in resource recovery. My first project showed that three combinations of factors resulted in ruminant feces contaminating the stored drinking water supplies of households in rural Kenya, and the results were published in the International Journal of Environmental Research and Public Health. Consequently, water and sanitation interventions must also consider animal waste for communities to have safe drinking water.

    My second project seeks to establish a circular economy in the chocolate industry with indigenous Malaysian farmers and the Chocolate Concierge, a tree-to-bar social enterprise. Having designed and optimized apparatuses and processes to create biochar from cacao husk waste, we are now examining its impact on the growth of cacao saplings and their root systems. The hope is that biochar will increase the resilience of saplings for when they are transplanted from the nursery to the farm. As biochar can improve soil health and yield while reducing fertilizer inputs and sequestering carbon, farmers can accrue substantial economic and environmental benefits, especially if they produce, use, and sell it themselves.

    My third project investigates the gap in sanitation coverage worldwide and potential ways of reducing it. Globally, 46 percent of the population lacks access to safely managed sanitation, while the majority of the 54 percent who do have access use on-site sanitation facilities such as septic tanks and latrines. Given that on-site, decentralized systems typically have a lower space and resource footprint, are cheaper to build and maintain, and can be designed to suit various contexts, they could represent the best chance of reaching the sanitation Sustainable Development Goal. To this end, I am part of a team of researchers at the Criddle Group at Stanford working to develop a household-scale system as part of the Gates Reinvent the Toilet Challenge, an initiative aimed at developing new sanitation and toilet technologies for developing contexts.

    The thread connecting these projects is a commitment to investigating both the technical and socio-anthropological dimensions of an issue to develop sustainable, reliable, and environmentally sensitive solutions, especially in low- and middle-income countries (LMICs). I believe that an interdisciplinary approach can provide a better understanding of the problem space, which will hopefully lead to effective potential solutions that can have a greater community impact.

    Q: What do you plan to do once you obtain your PhD?

    A: I hope to continue working in the spheres of water and sanitation and/or sustainability post-PhD. It is a fascinating moment to be in this space as a person of color from an LMIC, especially as ideas such as community-based research and decolonizing fields and institutions are becoming more widespread and acknowledged. Even during my time at Stanford, I have noticed some shifts in the discourse, although we still have a long way to go to achieve substantive and lasting change. Folks like me are underrepresented in forums where the priorities, policies, and financing of aid and development are discussed at the international or global scale. I hope I’ll be able to use my qualifications, experience, and background to advocate for more just outcomes.

    This article appears in the Autumn 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Q&A: Climate Grand Challenges finalists on accelerating reductions in global greenhouse gas emissions

    This is the second article in a four-part interview series highlighting the work of the 27 MIT Climate Grand Challenges finalists, which received a total of $2.7 million in startup funding to advance their projects. In April, the Institute will name a subset of the finalists as multiyear flagship projects.

    Last month, the Intergovernmental Panel on Climate Change (IPCC), an expert body of the United Nations representing 195 governments, released its latest scientific report on the growing threats posed by climate change, and called for drastic reductions in greenhouse gas emissions to avert the most catastrophic outcomes for humanity and natural ecosystems.

    Bringing the global economy to net-zero carbon dioxide emissions by midcentury is complex and demands new ideas and novel approaches. The first-ever MIT Climate Grand Challenges competition focuses on four problem areas including removing greenhouse gases from the atmosphere and identifying effective, economic solutions for managing and storing these gases. The other Climate Grand Challenges research themes address using data and science to forecast climate-related risk, decarbonizing complex industries and processes, and building equity and fairness into climate solutions.

    In the following conversations prepared for MIT News, faculty from three of the teams working to solve “Removing, managing, and storing greenhouse gases” explain how they are drawing upon geological, biological, chemical, and oceanic processes to develop game-changing techniques for carbon removal, management, and storage. Their responses have been edited for length and clarity.

    Directed evolution of biological carbon fixation

    Agricultural demand is estimated to increase by 50 percent in the coming decades, while climate change is simultaneously projected to drastically reduce crop yield and predictability, requiring a dramatic acceleration of land clearing. Without immediate intervention, this will have dire impacts on wild habitat, rob the livelihoods of hundreds of millions of subsistence farmers, and create hundreds of gigatons of new emissions. Matthew Shoulders, associate professor in the Department of Chemistry, talks about the working group he is leading in partnership with Ed Boyden, the Y. Eva Tan professor of neurotechnology and Howard Hughes Medical Institute investigator at the McGovern Institute for Brain Research, that aims to massively reduce carbon emissions from agriculture by relieving core biochemical bottlenecks in the photosynthetic process using the most sophisticated synthetic biology available to science.

    Q: Describe the two pathways you have identified for improving agricultural productivity and climate resiliency.

    A: First, cyanobacteria grow millions of times faster than plants and dozens of times faster than microalgae. Engineering these cyanobacteria as a source of key food products using synthetic biology will enable food production using less land, in a fundamentally more climate-resilient manner. Second, carbon fixation, or the process by which carbon dioxide is incorporated into organic compounds, is the rate-limiting step of photosynthesis and becomes even less efficient under rising temperatures. Enhancements to Rubisco, the enzyme mediating this central process, will both improve crop yields and provide climate resilience to crops needed by 2050. Our team, led by Robbie Wilson and Max Schubert, has created new directed evolution methods tailored for both strategies, and we have already uncovered promising early results. Applying directed evolution to photosynthesis, carbon fixation, and food production has the potential to usher in a second green revolution.

    Q: What partners will you need to accelerate the development of your solutions?

    A: We have already partnered with leading agriculture institutes with deep experience in plant transformation and field trial capacity, enabling the integration of our improved carbon-dioxide-fixing enzymes into a wide range of crop plants. At the deployment stage, we will be positioned to partner with multiple industry groups to achieve improved agriculture at scale. Partnerships with major seed companies around the world will be key to leverage distribution channels in manufacturing supply chains and networks of farmers, agronomists, and licensed retailers. Support from local governments will also be critical where subsidies for seeds are necessary for farmers to earn a living, such as smallholder and subsistence farming communities. Additionally, our research provides an accessible platform that is capable of enabling and enhancing carbon dioxide sequestration in diverse organisms, extending our sphere of partnership to a wide range of companies interested in industrial microbial applications, including algal and cyanobacterial, and in carbon capture and storage.

    Strategies to reduce atmospheric methane

    One of the most potent greenhouse gases, methane is emitted by a range of human activities and natural processes that include agriculture and waste management, fossil fuel production, and changing land use practices — with no single dominant source. Together with a diverse group of faculty and researchers from the schools of Humanities, Arts, and Social Sciences; Architecture and Planning; Engineering; and Science; plus the MIT Schwarzman College of Computing, Desiree Plata, associate professor in the Department of Civil and Environmental Engineering, is spearheading the MIT Methane Network, an integrated approach to formulating scalable new technologies, business models, and policy solutions for driving down levels of atmospheric methane.

    Q: What is the problem you are trying to solve and why is it a “grand challenge”?

    A: Removing methane from the atmosphere, or stopping it from getting there in the first place, could change the rates of global warming in our lifetimes, saving as much as half a degree of warming by 2050. Methane sources are distributed in space and time and tend to be very dilute, making the removal of methane a challenge that pushes the boundaries of contemporary science and engineering capabilities. Because the primary sources of atmospheric methane are linked to our economy and culture — from clearing wetlands for cultivation to natural gas extraction and dairy and meat production — the social and economic implications of a fundamentally changed methane management system are far-reaching. Nevertheless, these problems are tractable and could significantly reduce the effects of climate change in the near term.

    Q: What is known about the rapid rise in atmospheric methane and what questions remain unanswered?

    A: Tracking atmospheric methane is a challenge in and of itself, but it has become clear that emissions are large, accelerated by human activity, and cause damage right away. While some progress has been made in satellite-based measurements of methane emissions, there is a need to translate that data into actionable solutions. Several key questions remain around improving sensor accuracy and sensor network design to optimize placement, improve response time, and stop leaks with autonomous controls on the ground. Additional questions involve deploying low-level methane oxidation systems and novel catalytic materials at coal mines, dairy barns, and other enriched sources; evaluating the policy strategies and the socioeconomic impacts of new technologies with an eye toward decarbonization pathways; and scaling technology with viable business models that stimulate the economy while reducing greenhouse gas emissions.

    Deploying versatile carbon capture technologies and storage at scale

    There is growing consensus that simply capturing current carbon dioxide emissions is no longer sufficient — it is equally important to target distributed sources such as the oceans and air where carbon dioxide has accumulated from past emissions. Betar Gallant, the American Bureau of Shipping Career Development Associate Professor of Mechanical Engineering, discusses her work with Bradford Hager, the Cecil and Ida Green Professor of Earth Sciences in the Department of Earth, Atmospheric and Planetary Sciences, and T. Alan Hatton, the Ralph Landau Professor of Chemical Engineering and director of the School of Chemical Engineering Practice, to dramatically advance the portfolio of technologies available for carbon capture and permanent storage at scale. (A team led by Assistant Professor Matěj Peč of EAPS is also addressing carbon capture and storage.)

    Q: Carbon capture and storage processes have been around for several decades. What advances are you seeking to make through this project?

    A: Today’s capture paradigms are costly, inefficient, and complex. We seek to address this challenge by developing a new generation of capture technologies that operate using renewable energy inputs, are sufficiently versatile to accommodate emerging industrial demands, are adaptive and responsive to varied societal needs, and can be readily deployed to a wider landscape.

    New approaches will require the redesign of the entire capture process, necessitating basic science and engineering efforts that are broadly interdisciplinary in nature. At the same time, incumbent technologies have been optimized largely for integration with coal- or natural gas-burning power plants. Future applications must shift away from legacy emitters in the power sector towards hard-to-mitigate sectors such as cement, iron and steel, chemical, and hydrogen production. It will become equally important to develop and optimize systems targeted for much lower concentrations of carbon dioxide, such as in oceans or air. Our effort will expand basic science studies as well as human impacts of storage, including how public engagement and education can alter attitudes toward greater acceptance of carbon dioxide geologic storage.

    Q: What are the expected impacts of your proposed solution, both positive and negative?

    A: Renewable energy cannot be deployed rapidly enough everywhere, nor can it supplant all emissions sources, nor can it account for past emissions. Carbon capture and storage (CCS) provides a demonstrated method to address emissions that will undoubtedly occur before the transition to low-carbon energy is completed. CCS can succeed even if other strategies fail. It also allows for developing nations, which may need to adopt renewables over longer timescales, to see equitable economic development while avoiding the most harmful climate impacts. And, CCS enables the future viability of many core industries and transportation modes, many of which do not have clear alternatives before 2050, let alone 2040 or 2030.

    The perceived risks of potential leakage and earthquakes associated with geologic storage can be minimized by choosing suitable geologic formations for storage. Despite CCS providing a well-understood pathway for removing enough of the carbon dioxide already emitted into the atmosphere, some environmentalists vigorously oppose it, fearing that CCS rewards oil companies and disincentivizes the transition away from fossil fuels. We believe that it is more important to keep in mind the necessity of meeting key climate targets for the sake of the planet, and welcome those who can help. More